The Resume of Randall Degges, A Pythonista
===

Hi there, I'm [Randall](http://rdegges.com/). This is my resume. If you'd like to see the **technical** details, you'll need to check out my [resume project](https://github.com/rdegges/resume) on GitHub.

Note: My resume is currently *under development*. I'll remove this notice once it reaches a stable release ;)

In a nutshell,

* I'm a Python / Django / telephony developer.
* I live in Los Angeles with my awesome wife Sami and dog Scribbles.
* I love geek humor, loud music, tech books, writing, and open source software.

Table of Contents
---

### About Me

I'm your typical hacker: I love learning new things, playing around with the latest tech, and figuring out how things work. My passion is building software that makes things simple. There's something about the process of solving problems in an elegant way that excites me.

#### Philosophy

My philosophy on software and life is the same: *be agile*. I strongly believe in learning from mistakes, iterating quickly, and building strength over time. I understand that big change is made from consistent small efforts, and I strive to continuously improve myself every day.

When I work on things, I give them 100% of my effort. Regardless of whether I'm folding laundry or writing unit tests, I thoroughly enjoy myself and constantly push myself to new levels of understanding and efficiency.

#### Purpose

My main purpose is to make the world an awesome place. I want to solve cool problems, and help make Earth the [best planet in the universe](http://en.wikipedia.org/wiki/Extrasolar_planet) :)

To this end, my daily goal is to push myself out of my comfort zone a little bit every day, and continuously improve. I want to build and create software that *rocks*.

#### Personality

I think the most accurate way to describe myself is a "laid-back programmer":

* I enjoy wearing shorts, sandals, and t-shirts from [ThinkGeek](http://www.thinkgeek.com/tshirts-apparel/).
* I *hate* Windows.
* When I'm not programming I'm often attending programming events, or doing fun outdoor activities like [CrossFit](http://www.teamcrossfit.com/), biking, hiking, and exploring.
* I constantly build and contribute to open source projects.
* I read every day.
* I like writing on [my blog](http://rdegges.com/).
* I've got an uncanny ability for finding relevant gif images during IRC conversations.

I'm friendly, fun, and nerdy.

### Skills

I've been programming since I was just a young kid, playing [Wolfenstein 3D](http://en.wikipedia.org/wiki/Wolfenstein_3D). Over the years, I've worked with numerous technologies and learned various design patterns. My greatest strength is my insane drive to learn new things and improve every day. Along with this, I've internalized the [three virtues of a programmer](http://en.wikipedia.org/wiki/Larry_Wall#Virtues_of_a_programmer), and constantly battle with myself to build software that satisfies my laziness, impatience, and hubris >:)

#### Python

Python is by far my strongest programming language. I've worked with it extensively for ~5 years, and am very familiar with the standard library and the open source ecosystem. I've used Python to build:

* Small command line utilities.
* Professional software for large telephony companies.
* Numerous websites and web services.
* Open source libraries and tools.

I'm active in the Python community, frequently attend Python events, and work with other Python developers.

##### Django

I've been using [Django](https://www.djangoproject.com/) (both professionally and personally) for the past 2 years. I've used it to build web applications for my company, as well as myself, and I actively contribute to the Django ecosystem.

What I love about Django is its flexibility, best-practices driven design, and awesome documentation. I've personally adopted a lot of Django's development traits, and the framework has inspired me to become a better developer. Furthermore, the Django ecosystem is enormous, and it's really nice to use so many great applications written by other programmers.

##### Flask

I started using [Flask](http://flask.pocoo.org/) for small projects when it was first released. To date, I've built 3 web applications with it (all for my company) that power small internal websites. What I like about Flask is the simple project skeleton, and the extremely in-depth documentation.

##### Celery

In the past year I've come to love [celery](http://celeryproject.org/), the distributed task queue. I've used it in my workplace to help scale our applications to support thousands of users and improve server response times, and I've used it personally to build beautifully scalable asynchronous applications. I'm extremely familiar with the setup and usage of celery, and have used both celery and celerybeat to build high-performance applications.

#### puppet

After discovering [puppet](http://www.puppetlabs.com/) earlier this year, my life completely changed. I now use puppet for everything, personally and professionally. I've used puppet to:

* Build and manage several large production, staging, and development environments.
* Manage complex software environments that support thousands of users.
* Automate sysadmin work and remove thousands of hours' worth of manual labor creating, configuring, and managing servers.
* Automatically scale my work's cloud infrastructure to actively add and remove servers based on user demand.

#### monit

[monit](http://mmonit.com/monit/) is an application monitoring and healing program that I've used to build fault-tolerant software stacks at work, and for fun. I'm familiar with the basic monit scripting language, and have written numerous scripts to monitor applications, ensure they're running properly (via numerous statistics such as load, memory usage, response time, etc.), and attempt to both auto-correct issues and send notifications when problems arise.

#### memcached

In order to help scale large web applications, I've made extensive use of [memcached](http://memcached.org/) to improve application response time and scale computationally expensive software. I've set up and maintained memcached server clusters, and have experience working with the Python memcached libraries (specifically, with Django's caching backend) to rapidly scale Python web applications.

#### Rackspace

As the lead developer at a tech startup, I've personally helped build a scalable cloud infrastructure using [Rackspace Cloud Servers](http://www.rackspace.com/cloud/cloud_hosting_products/servers/). I'm familiar with their Python APIs and CLI tools, and know how to easily automate server provisioning.

#### Asterisk

Over the past 3 years I've worked intimately with the popular open source PBX system, [Asterisk](http://www.asterisk.org/). In addition to designing, implementing, and maintaining complex telephony systems for various companies, I've also made several performance patches and bug fixes, and built helper libraries to make interacting with Asterisk easier for developers.

#### Heroku

If I'm not currently held captive by terrorists who insist I deploy code to their own servers, then I'll unquestionably be using [Heroku](http://www.heroku.com/) to host my web projects. To date, I've migrated multiple large sites from other cloud providers (like Rackspace) to Heroku, for the betterment of all society.

In all seriousness though: I love Heroku, and I'm an expert at running sites on it. I'm familiar with their deployment model, best practices, and most of their addons.
Welcome to SpeechCorpusTools's documentation!
===

Contents:

Introduction
---

### General Background

*Speech Corpus Tools* is an application for interacting with large-scale datasets. It uses PolyglotDB as the underlying data storage, which allows for consistent queries across a wide range of possible input formats. Speech Corpus Tools is written in Python and exposes a Python API, so advanced users can create their own queries in Python rather than in SQL or Cypher (the underlying database languages). In addition, Speech Corpus Tools provides a graphical user interface for easily displaying annotations and speech in the database, as well as the results of queries.

Tutorial
---

Speech Corpus Tools is a system for going from a raw speech corpus to a data file (CSV) ready for further analysis (e.g. in R). Conceptually, it consists of a pipeline of four steps:

1. **Import** the corpus into SCT
   * Result: a structured database of linguistic objects (words, phones, discourses).
2. **Enrich** the database
   * Result: further linguistic objects (utterances, syllables), and information about objects (e.g. speech rate, word frequencies).
3. **Query** the database
   * Result: a set of linguistic objects of interest (e.g. utterance-final words ending with a stop).
4. **Export** the results
   * Result: a CSV file containing information about the set of objects of interest.

Ideally, the import and enrichment steps are only performed once for a given corpus. The typical use case of SCT is performing a query and export corresponding to some linguistic question(s) of interest.

This tutorial is structured as follows:

* [Installation](#installation-tutorial): Install the necessary software: [Neo4j](#neo4j-install) and [SCT](#sct-install).
* [Librispeech database](#librispeech): Obtain a database for the LibriSpeech corpus where the *import* and *enrichment* steps have been completed, either by using a [premade](#premade) version, or by doing the import and enrichment steps [yourself](#buildownlibrispeech).
* [Examples](#vignettemain):
  + Two worked examples ([1](#example1), [2](#example2)) illustrating the *Query* and *Export* steps, as well as (optional) basic analysis of the resulting data files (CSVs) in R.
  + One additional example ([3](#example3)) left as an exercise.

### Installation

#### Installing Neo4j

SCT currently requires that [Neo4j](https://neo4j.com/) version 3.0 be installed locally and running. To install Neo4j, please use the following links:

* [Mac version](http://info.neotechnology.com/download-thanks.html?edition=community&release=3.0.3&flavour=dmg)
* [Windows version](http://info.neotechnology.com/download-thanks.html?edition=community&release=3.0.3&flavour=winstall64)

Once downloaded, just run the installer and it'll install the database software that SCT uses locally.
SCT currently requires you to change the configuration for Neo4j, by doing the following **once**, before running SCT:

* Open the Neo4j application/executable (Mac/Windows)
* Click on `Options ...`
* Click on the `Edit...` button for `Database configuration`
* Replace the text in the window that comes up with the following, then save the file:

```
#***************************************************************
# Server configuration
#***************************************************************

# This setting constrains all `LOAD CSV` import files to be under the `import` directory.
# Remove or comment it out to allow files to be loaded from anywhere in the filesystem;
# this introduces possible security problems. See the `LOAD CSV` section of the manual for details.
#dbms.directories.import=import

# Require (or disable the requirement of) auth to access Neo4j
dbms.security.auth_enabled=false

#
# Bolt connector
#
dbms.connector.bolt.type=BOLT
dbms.connector.bolt.enabled=true
dbms.connector.bolt.tls_level=OPTIONAL
# To have Bolt accept non-local connections, uncomment this line:
# dbms.connector.bolt.address=0.0.0.0:7687

#
# HTTP Connector
#
dbms.connector.http.type=HTTP
dbms.connector.http.enabled=true
#dbms.connector.http.encryption=NONE
# To have HTTP accept non-local connections, uncomment this line:
#dbms.connector.http.address=0.0.0.0:7474

#
# HTTPS Connector
#
# To enable HTTPS, uncomment these lines:
#dbms.connector.https.type=HTTP
#dbms.connector.https.enabled=true
#dbms.connector.https.encryption=TLS
#dbms.connector.https.address=localhost:7476

# Certificates directory
#
dbms.directories.certificates=certificates

#*****************************************************************
# Administration client configuration
#*****************************************************************

# Comma separated list of JAX-RS packages containing JAX-RS resources, one
# package name for each mountpoint. The listed package names will be loaded
# under the mountpoints specified. Uncomment this line to mount the
# org.neo4j.examples.server.unmanaged.HelloWorldResource.java from
# neo4j-examples under /examples/unmanaged, resulting in a final URL of
# http://localhost:${default.http.port}/examples/unmanaged/helloworld/{nodeId}
#dbms.unmanaged_extension_classes=org.neo4j.examples.server.unmanaged=/examples/unmanaged

#*****************************************************************
# HTTP logging configuration
#*****************************************************************

# HTTP logging is disabled. HTTP logging can be enabled by setting this
# property to 'true'.
dbms.logs.http.enabled=false

# Logging policy file that governs how HTTP log output is presented and
# archived. Note: changing the rollover and retention policy is sensible, but
# changing the output format is less so, since it is configured to use the
# ubiquitous common log format
#org.neo4j.server.http.log.config=neo4j-http-logging.xml

# Enable this to be able to upgrade a store from an older version.
#dbms.allow_format_migration=true

# The amount of memory to use for mapping the store files, in bytes (or
# kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g').
# If Neo4j is running on a dedicated server, then it is generally recommended
# to leave about 2-4 gigabytes for the operating system, give the JVM enough
# heap to hold all your transaction state and query context, and then leave the
# rest for the page cache.
# The default page cache memory assumes the machine is dedicated to running
# Neo4j, and is heuristically set to 50% of RAM minus the max Java heap size.
#dbms.memory.pagecache.size=10g

#*****************************************************************
# Miscellaneous configuration
#*****************************************************************

# Enable this to specify a parser other than the default one.
#cypher.default_language_version=3.0

# Determines if Cypher will allow using file URLs when loading data using
# `LOAD CSV`. Setting this value to `false` will cause Neo4j to fail `LOAD CSV`
# clauses that load data from the file system.
dbms.security.allow_csv_import_from_file_urls=true

# Retention policy for transaction logs needed to perform recovery and backups.
dbms.tx_log.rotation.retention_policy=false

# Enable a remote shell server which Neo4j Shell clients can log in to.
#dbms.shell.enabled=true
# The network interface IP the shell will listen on (use 0.0.0.0 for all interfaces).
#dbms.shell.host=127.0.0.1
# The port the shell will listen on, default is 1337.
#dbms.shell.port=1337

# Only allow read operations from this Neo4j instance. This mode still requires
# write access to the directory for lock purposes.
#dbms.read_only=false

# Comma separated list of JAX-RS packages containing JAX-RS resources, one
# package name for each mountpoint. The listed package names will be loaded
# under the mountpoints specified. Uncomment this line to mount the
# org.neo4j.examples.server.unmanaged.HelloWorldResource.java from
# neo4j-server-examples under /examples/unmanaged, resulting in a final URL of
# http://localhost:7474/examples/unmanaged/helloworld/{nodeId}
#dbms.unmanaged_extension_classes=org.neo4j.examples.server.unmanaged=/examples/unmanaged
```

#### Installing SCT

Once Neo4j is set up as above, the latest version of SCT can be downloaded from the [SCT releases](https://github.com/MontrealCorpusTools/speechcorpustools/releases) page. As of 12 July 2016, the most current release is v0.5.

##### Windows

1. Download the zip archive for Windows.
2. Extract the folder.
3. Double-click on the executable to run SCT.

##### Mac

1. Download the DMG file.
2. Double-click on the DMG file.
3. Drag the SCT icon to your Applications folder.
4. Double-click on the SCT application to run.

### LibriSpeech database

The examples in this tutorial use a subset of the [LibriSpeech ASR corpus](http://www.openslr.org/12/), a corpus of read English speech prepared by Vassil Panayotov, Daniel Povey, and collaborators. The subset used here is the `test-clean` subset, consisting of **5.4 hours of speech** from **40 speakers**. This subset was force-aligned using the [Montreal Forced Aligner](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) and the pronunciation dictionary provided with this corpus. This procedure results in one Praat TextGrid per sentence in the corpus, with phone and word boundaries. We refer to the resulting dataset as the *LibriSpeech dataset*: 5.4 hours of read sentences with force-aligned phone and word boundaries.

The examples require constructing a *PolyglotDB* database for the LibriSpeech dataset, in two steps: [[2]](#f2)

1. *Importing* the LibriSpeech dataset using SCT, into a database containing information about words, phones, speakers, and files.
2. *Enriching* the database to include additional information about other linguistic objects (utterances, syllables) and properties of objects (e.g. speech rate).
Instructions are below for using a [pre-made copy](#premade) (LINK TODO) of the LibriSpeech database, where steps (1) and (2) have been carried out for you. Instructions for [making your own](#buildownlibrispeech) are coming soon. (For the **BigPhon 2016** tutorial, just use the pre-made copy.)

#### Use pre-made database

Make sure you have opened the SCT application and started Neo4j at least once. This creates folders for Neo4j databases and for all of SCT's local files (including SQL databases):

* OS X: `/Users/username/Documents/Neo4j`, `/Users/username/Documents/SCT`
* Windows: `C:\Users\username\Documents\Neo4j`, `C:\Users\username\Documents\SCT`

Unzip the [librispeechDatabase.zip](https://github.com/MontrealCorpusTools/speechcorpustools/releases/download/v0.5/librispeechDatabase.zip) file. It contains two folders, `librispeech.graphdb` and `LibriSpeech`. Move these (using Finder on OS X, or File Explorer on Windows) to the `Neo4j` and `SCT` folders. After doing so, these directories should exist:

* `/Users/username/Documents/Neo4j/librispeech.graphdb`
* `/Users/username/Documents/SCT/LibriSpeech`

The next time you start the Neo4j server, select the `librispeech.graphdb` folder rather than the default one.

Some important information about the database (to replicate if you are building your own):

* Utterances have been defined as speech chunks separated by non-speech (pauses, disfluencies, other person talking) chunks of at least 150 msec.
* Syllabification has been performed using maximal onset.

#### Building your own LibriSpeech database

**Coming soon!** Some general information on building a database in SCT (= importing data) is [*here*](index.html#document-additional/buildown).

### Examples

Several worked examples follow, which demonstrate the workflow of SCT and how to construct queries and exports. You should be able to complete each example by following the steps listed in **bold**. The examples are designed to be completed in order. Each example results in a CSV file containing data, which you should then be able to use to visualize the results. Instructions for basic visualization in R are given.

* [Example 1](#example1): Factors affecting vowel duration
* [Example 2](#example2): Polysyllabic shortening
* [Example 3](#example3): Menzerath's Law

#### Example 1: Factors affecting vowel duration

##### Motivation

A number of factors affect the duration of vowels, including:

1. Following consonant voicing (voiced > voiceless)
2. Speech rate
3. Word frequency
4. Neighborhood density

Factor 1 is said to be particularly strong in varieties of English, compared to other languages (e.g. Chen, 1970). Here we are interested in examining whether these factors all affect vowel duration, and in particular in seeing how large and reliable the effect of consonant voicing is compared to the other factors.

##### Step 1: Creating a query profile

Based on the motivation above, we want to make a query for:

* All vowels in CVC words (fixed syllable structure)
* Only words where the second C is a stop (to examine following C voicing)
* Only words at the end of utterances (fixed prosodic position)

To perform a query, you need a *query profile*.
This consists of:

* The type of linguistic object being searched for (currently: phone, word, syllable, utterance)
* Filters which restrict the set of objects returned by the query

Once a query profile has been constructed, it can be saved ("Save query profile"). Thus, to carry out a query, you can either create a new profile or select an existing one (under "Query profiles"). We'll assume here that a new profile is being created:

1. **Make a new profile**: Under "Query profiles", select "New Query".
2. **Find phones**: Select "phone" under "Linguistic objects to find".
3. **Add filters** to the query. A single filter is added by pressing "+" and constructing it by making selections from the drop-down menus which appear. For more information on filters and the intuition behind them, see [*here*](index.html#document-additional/filters).

The first three filters do the following:

* *Restrict to utterance-final words*:
  + `word`: the word containing the phone
  + `alignment`: something about the word's alignment with respect to a higher unit
  + `Right aligned with`, `utterance`: the word should be right-aligned with its containing utterance
* *Restrict to syllabic phones* (vowels and syllabic consonants):
  + `subset`: refer to a "phone subset", which has been previously defined. Those available in this example include `syllabics` and `consonants`.
  + `==`, `syllabic`: this phone should be a syllabic.
* *Restrict to phones followed by a stop*:
  + `following`: refer to the following phone
  + `manner_of_articulation`: refer to a property of phones, which has been previously defined. Those available here include "manner_of_articulation" and "place_of_articulation".
  + `==`, `stop`: the following phone should be a stop.

Then, add three more filters, which do the following:

* *Restrict to phones preceded by a consonant*
* *Restrict to phones which are the second phone in a word*:
  + `previous`: refer to the preceding phone
  + `alignment`, `left aligned with`, `word`: the preceding phone should be left-aligned with (= begin at the same time as) the word containing the *target* phone. (In this case, this ensures that the vowel is preceded by a word-initial C in the same word: #CV.)
* *Restrict to phones which precede a word-final phone*

These filters together form a query corresponding to the desired set of linguistic objects (vowels in utterance-final CVC words, where C2 is a stop). You should now:

4. **Save the query**: Select `Save query profile`, and enter a name, such as "LibriSpeech CVC".
5. **Run the query**: Select "Run query".

##### Step 2: Creating an export profile

The next step is to export information about each vowel token as a CSV file. We would like the vowel's *duration* and *identity*, as well as the following factors which are expected to affect the vowel's duration:

* *Voicing* of the following consonant
* The word's *frequency* and *neighborhood density*
* The utterance's *speech rate*

In addition, we want some identifying information (to debug, and potentially for building statistical models):

* What *speaker* each token is from
* The *time* where the token occurs in the file
* The *orthography* of the word

Each of these 9 variables we would like to export corresponds to one row in an *export profile*. To **create a new export profile**:

1. Select "New export profile" from the "Export query results" menu.
2. Add one row per variable to be exported, as follows: [[1]](#f1)
   * Press "+" (create a new row).
   * Make selections from the drop-down menus to describe the variable.
   * Put the name of the variable in the "Output name" field. (This will be the name of the corresponding column in the exported CSV. You can use whatever name makes sense to you.)

The nine rows to be added for the variables above result in the following export profile. Some explanation of these rows, for a single token (we use the [u] in /but/ as a running example):

* Rows 1, 2, and 8 are the `duration`, `label`, and beginning time (`time`) of the *phone object* (the [u]) in the containing file.
* Row 3 refers to the `voicing` of the *following phone* object (the [t]).
  + Note that "following" automatically means "following phone" (i.e., `phone` doesn't need to be put after "following") because the linguistic objects being found are phones. If the linguistic objects being found were syllables (as in Example 2 below), "following" would automatically mean "following syllable".
* Rows 4, 5, and 9 refer to properties of the *word which contains the phone* object: its `frequency`, `neighborhood_density`, and `label` (= orthography, here "boot").
* Row 6 refers to the *utterance which contains the phone*: its `speech_rate`, defined as syllables per second over the utterance.
* Row 7 refers to the *speaker* (their `name`) whose speech contains this phone.

Each case can be thought of as a property (shown in `teletype`) of a linguistic object or organizational unit (shown in *italics*). You can now:

3. **Save the export profile**: Select "Save as...", then enter a name, such as "LibriSpeech CVC export".
4. **Perform the export**: Select "Run". You will be prompted to enter a filename to export to; make sure it ends in `.csv` (e.g. `librispeechCvc.csv`).

##### Step 3: Examine the data

Here are the first few rows of the resulting data file, in Excel:

We will load the data and do basic visualization in R. (Make sure that you have the ggplot2 library installed.) First, **load the data file**:

```
library(ggplot2)

cvc <- read.csv("librispeechCvc.csv")
```

###### Voicing

A plot of the basic voicing effect, by vowel:

```
ggplot(aes(x=following_consonant_voicing, y=vowel_duration), data=cvc) +
  geom_boxplot() +
  facet_wrap(~vowel, scales = "free_y") +
  xlab("Consonant voicing") +
  ylab("Vowel duration (sec)")
```

It looks like there is generally an effect in the expected direction, but the size of the effect may differ by vowel.

###### Speech rate

A plot of the basic speech rate effect, divided up by consonant voicing:

```
ggplot(aes(x=speech_rate, y=vowel_duration), data=cvc) +
  geom_smooth(aes(color=following_consonant_voicing)) +
  geom_point(aes(color=following_consonant_voicing), alpha=0.1, size=1) +
  xlab("Speech rate (sylls/sec)") +
  ylab("Vowel duration")
```

There is a large (and possibly nonlinear) speech rate effect. The size of the voicing effect is small compared to speech rate, and the voicing effect may be modulated by speech rate.
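Before moving on, it can help to put numbers on the voicing pattern suggested by the boxplots above. A minimal sketch using the dplyr package (also used in Example 2), assuming the column names used in the plots above (`vowel`, `following_consonant_voicing`, `vowel_duration`):

```
library(dplyr)

# Mean vowel duration (sec) and token count, broken down by vowel and
# by the voicing of the following consonant
cvc %>%
  group_by(vowel, following_consonant_voicing) %>%
  summarise(mean_duration = mean(vowel_duration),
            n_tokens = n())
```

Comparing the voiced and voiceless means within each vowel gives a rough effect size to set against the speech rate and frequency effects below.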
###### Frequency

A plot of the basic frequency effect, divided up by consonant voicing:

```
ggplot(aes(x=word_frequency, y=vowel_duration), data=cvc) +
  geom_smooth(aes(color=following_consonant_voicing), method="lm") +
  geom_point(aes(color=following_consonant_voicing), alpha=0.1, size=1) +
  xlab("Word frequency (log)") +
  ylab("Vowel duration") +
  scale_x_log10()
```

(Note that we have forced a linear trend here, to make the effect clearer given the presence of more tokens for more frequent words. This turns out to be what the "real" effect looks like, once token frequency is accounted for.)

The basic frequency effect is as expected: shorter duration for higher frequency words. The voicing effect is (again) small in comparison, and may be modulated by word frequency: more frequent words (more reduced?) show a smaller effect.

###### Neighborhood density

In contrast, there is no clear effect of neighborhood density:

```
ggplot(aes(x=word_neighborhood_density, y=vowel_duration), data=cvc) +
  geom_smooth(aes(color=following_consonant_voicing)) +
  geom_point(aes(color=following_consonant_voicing), alpha=0.1, size=1) +
  xlab("Neighborhood density") +
  ylab("Vowel duration")
```

This turns out to be not unexpected, given previous work: while word duration and vowel quality (e.g., centralization) depend on neighborhood density (e.g. Gahl & Yao, 2011), vowel *duration* has not been consistently found to depend on neighborhood density (e.g. Munson, 2007).
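The identifying variables exported above (speaker, time) were included partly with statistical modeling in mind. As a step beyond plots, here is a minimal sketch of a regression combining the predictors, assuming the column names used above; for a serious analysis you would likely move to mixed-effects models (e.g. via lme4) with by-speaker and by-word grouping:

```
# A simple linear model with all four predictors side by side.
# log() mirrors the log-scaled frequency axis in the plot above.
m <- lm(vowel_duration ~ following_consonant_voicing + speech_rate +
          log(word_frequency) + word_neighborhood_density,
        data = cvc)
summary(m)
```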
#### Example 2: Polysyllabic shortening

##### Motivation

*Polysyllabic shortening* refers to the "same" rhythmic unit (syllable or vowel) becoming shorter as the size of the containing domain (word or prosodic domain) increases. Two classic examples:

* English: *stick*, *stick*y, *stick*iness (Lehiste, 1972)
* French: p*â*te, p*â*té, p*â*tisserie (Grammont, 1914)

Polysyllabic shortening is often (but not always) defined as being restricted to accented syllables, as in the English example but not the French one. Using SCT, we can check whether a couple of simple versions of polysyllabic shortening hold in the LibriSpeech corpus:

1. Considering all utterance-final words, does the initial syllable duration decrease as word length increases?
2. Considering just utterance-final words with primary stress on the *initial* syllable, does the initial syllable duration decrease as word length increases?

We show (1) here, and leave (2) as an exercise.

##### Step 1: Query profile

In this case, we want to make a query for:

* Word-initial syllables
* ...only in words at the end of utterances (fixed prosodic position)

For this query profile:

* "Linguistic objects to find" = "syllables"
* Filters are needed to restrict to:
  + Word-initial syllables
  + Utterance-final words

This corresponds to the following query profile, which has been saved (in this screenshot) as "PSS: first syllable" in SCT. The first and second filters are similar to those in Example 1:

* *Restrict to word-initial syllables*:
  + `alignment`: something about the syllable's alignment
  + `left aligned with`, `word`: the syllable and its containing word have the same start time
* *Restrict to utterance-final words*:
  + `word`: the word containing the syllable
  + `right aligned with`, `utterance`: the word and utterance have the same end time

You should **input this query profile**, then **run it** (optionally saving it first).

##### Step 2: Export profile

This query has found all word-initial syllables for words in utterance-final position. We now want to export information about these linguistic objects to a CSV file, for which we again need to construct an export profile. (You should now **start a new export profile**.)

We want it to contain everything we need to examine how syllable duration (in seconds) depends on word length (which could be defined in several ways):

* The *duration* of the syllable
* Various word-length measures: the *number of syllables* and *number of phones* in the word containing the syllable, as well as the *duration* (in seconds) of the word

We also export other information which may be useful (as in Example 1): the *syllable label*, the *speaker name*, the *time* the token occurs in the file, the *word label* (its orthography), and the word's *stress pattern*. The following export profile contains these nine variables.

After you **enter these rows** in the export profile, **run the export** (optionally saving the export profile first). I exported it as `librispeechCvc.csv`.

##### Step 3: Examine the data

In R, **load in the data**:

```
pss <- read.csv("librispeechCvc.csv")
```

There are very few words with 6+ syllables:

```
library(dplyr)
group_by(pss, word_num_syllables) %>% summarise(n_distinct(word))
```

```
## Source: local data frame [6 x 2]
##
##   word_num_syllables n_distinct(word)
##                (int)            (int)
## 1                  1             1019
## 2                  2             1165
## 3                  3              612
## 4                  4              240
## 5                  5               60
## 6                  6                7
```

So let's just exclude words with 6+ syllables:

```
pss <- subset(pss, word_num_syllables<6)
```

Plot of the duration of the initial stressed syllable as a function of word length (in syllables):

```
library(ggplot2)
ggplot(aes(x=factor(word_num_syllables), y=syllable_duration), data=pss) +
  geom_boxplot() +
  xlab("Number of syllables in word") +
  ylab("Duration of initial syllable (stressed)") +
  scale_y_sqrt()
```

Here we see a clear polysyllabic shortening effect from 1 to 2 syllables, and possibly one from 2 to 3 and 3 to 4 syllables. This plot suggests that the effect is pretty robust across speakers (at least for 1-3 syllables):

```
ggplot(aes(x=factor(word_num_syllables), y=syllable_duration), data=pss) +
  geom_boxplot() +
  xlab("Number of syllables in word") +
  ylab("Duration of initial syllable (stressed)") +
  scale_y_sqrt() +
  facet_wrap(~speaker)
```

#### Example 3: Menzerath's Law

**Motivation**: Menzerath's Law (Menzerath 1928, 1954) refers to the general finding that segments and syllables are shorter in longer words, both in terms of

* duration per unit
* number of units (i.e. segments per syllable)

(Menzerath's Law is related to polysyllabic shortening, but not the same.) For example, Menzerath's Law predicts that for English:

1. The average duration of syllables in a word (in seconds) should decrease as the number of syllables in the word increases.
2. The same should hold for the average duration of *segments* in a word.
3. The average number of phones per syllable in a word should decrease as the number of syllables in the word increases.

**Exercise**: Build a query profile and export profile to export a data file which lets you test Menzerath's Law for the LibriSpeech corpus. For example, for prediction (1), you could:

* Find all utterance-final words (to hold prosodic position somewhat constant)
* Export word duration (seconds), number of syllables, and anything else necessary

(This exercise should be possible using pieces covered in Examples 1 and 2, or minor extensions. A sketch of the analysis side is given below.)
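For the analysis side, here is a minimal sketch of how prediction (1) could be checked in R once the export is done. The file and column names (`librispeechMenzerath.csv`, `word_duration`, `word_num_syllables`) are hypothetical; use whatever output names you chose in your export profile:

```
library(ggplot2)

men <- read.csv("librispeechMenzerath.csv")

# Average syllable duration within each word: word duration divided by
# the number of syllables in the word
men$mean_syll_duration <- men$word_duration / men$word_num_syllables

# Menzerath's Law predicts this average decreases with word length
ggplot(aes(x=factor(word_num_syllables), y=mean_syll_duration), data=men) +
  geom_boxplot() +
  xlab("Number of syllables in word") +
  ylab("Mean syllable duration (sec)")
```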
Footnotes:

[[1]](#id2): Note that it is also possible to input some of these rows automatically, using the checkboxes in the "Simple exports" tab.

[[2]](#id1): Technically, this database consists of two sub-databases: a Neo4j database (which contains the hierarchical representation of discourses), and a SQL database (which contains lexical and featural information, and cached acoustic measurements).

Navigation Tour
---

This is a tour to get you familiarized with the SCT layout and its functions. The numbers of the panels surrounded by red rectangles in the screenshot of the entire window correspond to:

### Queries (1)

In the upper left corner, you will find the query panel. You begin by selecting a target type in the dropdown menu next to "Linguistic objects to find". You can add filters by pressing the long "+" bar at the bottom of the panel. If you want to use a saved query, you can do so by selecting it from the dropdown menu on the top right of the panel.

Additionally, you can use premade templates that can be selected by checking them. Both simple queries and complex queries have been incorporated. Checking these boxes will add a fixed set of filters which correspond to that query. Some are only available through enrichment, and will become clickable once the prerequisite enrichment is completed (e.g. "utterance-initial words" is only available after utterances have been encoded). Note that there are more simple query options available than fit in the window, so scrolling may be necessary to view all of them. Complex queries generally consist of more filters and can be checked and run just like simple queries.

Running, exporting, and saving a query are all done using the respective buttons along the bottom of the panel.

* **NB**: Running, exporting, and saving a query are all different functions. Running a query simply executes the query on the database and returns a default set of results to an in-app tab. Exporting a query runs the query on the database but allows the user to choose what information is returned, in the form of a file written to the computer. Saving a query allows the user to save a query profile and re-use it later.

For more information see the following pages: [*Building Queries*](index.html#document-additional/buildingqueries), [*Exporting Queries*](index.html#document-additional/exporting)

### Discourse (2)

The discourse panel shows the waveform and spectrogram views of the audio for a given file (if there is audio), as well as the alignment of words, phones, and utterances (if they have been encoded) overlaid onto the waveform. For more information on viewing discourses, see [*Viewing discourses*](index.html#document-additional/viewingdiscourses).

### Connection (3)

This panel is used to establish connections with existing databases, or to construct a new database by 'importing' a corpus from the hard drive. Connect to a Neo4j server by filling in the host and port information and pressing "Connect". Import a database from the hard drive by pressing "Import Local Corpus". If a database has already been used in SCT, it does not need to be imported again. Select a corpus by clicking on it (it will then be highlighted in blue or grey).
For more information, see [*Connecting to servers*](index.html#document-additional/connecting).

### Details/Acoustics/Help (4)

This panel gives you details about your file, as well as precise acoustic information and help for a selected feature in the program.

Connection
---

To see an example connection, go to [*Connection example*](index.html#document-additional/exconnecting). The connection tab has various features, detailed below.

### IP address (or localhost)

This is the address of the Neo4j server. In most cases, it will be 'localhost'.

### Port

This is the port through which a connection to the Neo4j server is made. By default, it is 7474. It must always match the port shown in the Neo4j window.

### Username and Password

These are by default not required, but available should you need authentication for your Neo4j server.

### Connect

This button connects the user to the specified server.

### Find local audio files

Pressing this allows the user to browse their file system for directories containing audio files that correspond to files in a corpus.

### Corpora

The user selects a corpus (for running queries, viewing discourses, enrichment, etc.) by clicking that corpus in the "Available corpora" menu. The selected corpus will be highlighted in blue or grey.

### Import local corpus

This is strictly for constructing a new database in Neo4j that does not already exist. Any corpus that has already been imported can be accessed by pressing "Connect" and selecting it instead. Re-importing the same corpus will overwrite the previous corpus of the same name, as well as remove any enrichment the user has done on the corpus. When importing a new corpus, the user selects the directory of the overall corpus, not specific files or subdirectories.

Building Queries
---

In this panel, the user constructs queries by adding filters (these will be explained more thoroughly in a moment). There are two key concepts that drive a query in SCT:

* **Linguistic Object**: A linguistic object can be an utterance, word, or phone. By selecting a linguistic object, the user is specifying the set of elements over which the query is to be made. For example, selecting "phones" will cause the program to look for phones with properties specified by the user (if "words" were selected, then the program would look for words, etc.).
* **Filters**: Filters are statements that limit the data returned to a specific set. Each filter added provides another constraint on the data. Click [*here*](index.html#document-additional/filters) for more information on filters.

Here's an example of a filter: it specifies all the objects (utterance, phone, syllable) which are followed by an object of the same type that shares its rightmost boundary with a word.

Now you're ready to start building queries. Here's an overview of what each dropdown item signifies.

### Linguistic Objects

* **Utterance**: An utterance is (loosely) a group of sounds delimited by relatively long pauses on either side. This could be a clause, sentence, or phrase. Note that utterances need to be encoded before they are available.
* **Syllables**: Syllables currently have to be encoded before this option is available. The encoding is done through maximum attested onset.
* **Word**: A word is a collection of phones that form a single meaningful element.
* **Phone**: A phone is a single speech segment.

The following is available only for the TIMIT database:

* **surface_transcription**: This is the phonetic transcription of the utterance.

### Filters

Filters are conditions that must be satisfied for data to pass through. Many filters have dropdown menus. Generally speaking, the first dropdown menu is used to target a property. These properties are available without enrichment for all databases:

* **alignment**: The position of the object in a super-object (i.e. a word in an utterance, a phone in a word...)
* **following**: Specifies the object after the current object
* **previous**: Specifies the object before the current object
* **subset**: Used to delineate classes of phones and words. Certain classes come premade. Others are available through enrichment.
* **duration**: How much time the object occupies
* **begin**: The start of the object in time (seconds)
* **end**: The end of the object in time (seconds)
* **label**: The orthographic contents of an object
* **word**: Specifies a word (only available for utterance, syllable, and phone)
* **syllable**: Specifies a syllable
* **phone**: Specifies a phone
* **speaker**: Specifies the speaker
* **discourse**: Specifies the discourse, or file
* **category**: Only available for words, specifies the word category
* **transcription**: Only available for words, specifies the phonetic transcription of the word in the corpus

These are available after enrichment:

* **utterance**: Available for all objects except utterance, specifies the utterance that the object came from
* **syllable_position**: Only available for phones, specifies the phone's position in a syllable
* **num_phones**: Only available for words, specifies the number of phones in a word
* **num_syllables**: Only available for words, specifies the number of syllables in a word
* **position_in_utterance**: Only available for words, specifies the word's index in the utterance

These are only available for force-aligned databases:

* **manner_of_articulation**: Only available for phones
* **place_of_articulation**: Only available for phones
* **voicing**: Only available for phones
* **vowel_backness**: Only available for phones
* **vowel_rounding**: Only available for phones
* **vowel_height**: Only available for phones
* **frequency**: Only available for words, specifies the word frequency in the corpus
* **neighborhood_density**: Only available for words, specifies the number of phonological neighbours of a given word
* **stress_pattern**: Only available for words, specifies the stress pattern for a word

The second dropdown menu will depend on which option you chose in the first column. For example, if you chose **phone** you will get all of the phone options specified above. However, if you choose **label** you will be presented with a different type of dropdown menu. This section covers some of these possibilities.

* **alignment**
  + **right aligned with**: This will filter for objects whose rightmost boundary lines up with the rightmost boundary of the object you select in the third column of dropdown menus (**utterance**, **syllable**, **word**, or **phone**).
  + **left aligned with**: This will filter for objects whose leftmost boundary lines up with the leftmost boundary of the object you select in the third column of dropdown menus (**utterance**, **syllable**, **word**, or **phone**).
  + **not right aligned with**: This will exclude objects whose rightmost boundary lines up with the rightmost boundary of the object you select in the third column of dropdown menus (**utterance**, **syllable**, **word**, or **phone**).
  + **not left aligned with**: This will exclude objects whose leftmost boundary lines up with the leftmost boundary of the object you select in the third column of dropdown menus (**utterance**, **syllable**, **word**, or **phone**).
* **subset**
  + **==**: This will filter for objects that are in the class that you select in the third dropdown menu.
* **begin** / **end** / **num_phones** / **num_syllables** / **position_in_utterance** / **frequency** / **neighborhood_density** / **duration**
  + **==**: This will filter for objects whose property is equal to what you have specified in the text box following this menu.
  + **!=**: This will exclude objects whose property is equal to what you have specified in the text box following this menu.
  + **>=**: This will filter for objects whose property is greater than or equal to what you have specified in the text box following this menu.
  + **<=**: This will filter for objects whose property is less than or equal to what you have specified in the text box following this menu.
  + **>**: This will filter for objects whose property is greater than what you have specified in the text box following this menu.
  + **<**: This will filter for objects whose property is less than what you have specified in the text box following this menu.
* **stress_pattern** / **category** / **label** / **speaker** `name` / **discourse** `name` / **transcription** / **vowel_height** / **vowel_backness** / **vowel_rounding** / **manner_of_articulation** / **place_of_articulation** / **voicing**
  + **==**: This will filter for objects whose property is equivalent to what you have specified in the text box or dropdown menu following this menu.
  + **!=**: This will exclude objects whose property is equivalent to what you have specified in the text box or dropdown menu following this menu.
  + **regex**: This option allows you to input a regular expression to match certain properties.

Experiment with combining these filters. Remember that each time you add a filter, you are applying further constraints on the data.

Some complex queries come pre-made. These include "all vowels in mono-syllabic words" and "phones before word-final consonants". Translating from English to filters can be complicated, so here we'll cover which filters constitute these two queries.

* All vowels in mono-syllabic words
  + Since we're looking for vowels, we know that the linguistic object to search for must be "phones".
  + To get mono-syllabic words, we have to go through three phases of enrichment:
    - First, we need to encode syllabic segments.
    - Second, we need to encode syllables.
    - Finally, we can encode the hierarchical property: count of syllables in word.
  + Now that we have this property, we can add a filter to look for monosyllabic words: `word: count_of_syllable_in_word == 1`
    - Notice that we had to select "word" for "count_of_syllable_in_word" to be available.
  + The next filter we want to add would be to get only the vowels from this word:
    `subset == syllabic`
    - This will get the syllabic segments (vowels) that we encoded earlier.
* Phones before word-final consonants
  + Once again, it is clear that we are looking for "phones" as our linguistic object.
  + The word "before" should tip you off that we will need to use the "following" or "previous" property.
  + We start by getting all phones that are in the penultimate position in a word: `following phone right-aligned with word`
    - This ensures that the phone after the one we are looking for is the word-final phone.
  + Now we need to limit it to consonants: `following phone subset != syllabic`
    - This further limits the results to only phones before non-syllabic word-final segments (word-final consonants).

Exporting Queries
---

While getting in-app results can be a quick way to visualize data, most often the user will want to further manipulate the data (i.e. in R, MATLAB, etc.). To this end, there is the "Export query results" feature. It allows the user to specify the information that is exported by adding columns to the final output file. This is somewhat similar to [*building queries*](index.html#document-additional/buildingqueries), but not quite the same. Instead of filters, pressing the "+" button will add a column to the exported file.

For example, if the user wanted the timing information (begin/end) and labels for the object found and the object before it, the export profile would look like:

Perhaps a researcher would be interested in knowing whether word-initial segments in some word categories are longer than in others. To get the related information (phone timing information and label, word category) into a .csv file, the export profile would be something like:

Here, "phone" has been selected as the linguistic object to find (since that is what we're interested in), so any property without a preceding dropdown menu is a property of the target phone; in this case, alignment would have been used to specify "word-initial phones".

Another option is to use the "simple export" window. Here, there are several common options that can be selected by checking them. Once checked, they will appear as columns in the query profile.

While many of the column options are the same as ones available for [*building queries*](index.html#document-additional/buildingqueries), there are some differences:

* "alignment" and "subset" are not valid column options.
* Column options do not change depending on the linguistic object that was chosen earlier.
  + Instead, you can select "word" and then "label" (or some other option), or "phone" + options, etc.
* You can edit the column name by typing what you would like to call it in the "Output name:" box. These names are by default very descriptive, but perhaps too long for the user's purposes.

Since the options are similar but not all identical, here is a full list of all the options available:

* **following**: Specifies the object after the current object. There will be another dropdown menu to select a property of this following object.
* **previous**: Specifies the object before the current object. There will be another dropdown menu to select a property of this preceding object.
* **duration**: Adds how much time the object occupies as a column.
* **begin**: Adds the start of the object in time (seconds) as a column.
* **end**: Adds the end of the object in time (seconds) as a column.
* **label**: Adds the orthographic contents of an object as a column.
* **word**: Specifies a word (another dropdown menu will become available to specify another property to add as a column). The following are only available if "word" is selected either as the original object to search for, or as the first property in a column:
  + **category**: Adds the word category as a column.
  + **transcription**: Adds the underlying phonetic transcription of the word in the corpus as a column.
  + **surface_transcription**: Adds the surface transcription of the word in the corpus as a column.
  + **utterance**: Specifies the utterance that the word came from (another dropdown menu will become available to specify another property to add as a column).
* **phone**: Specifies a phone (another dropdown menu will become available to specify another property to add as a column).
* **speaker**: Specifies the speaker (another dropdown menu will become available to specify another property to add as a column).
* **discourse**: Specifies the discourse, or file (another dropdown menu will become available to specify another property to add as a column).

Once the profile is ready, pressing "Run" will open a window where the user can pick a name and location for the final file. After pressing save, the query will run and the results will be written in the desired columns to the file.

Viewing Discourses
---

After completing a query, it might be useful to take a closer look at the discourse, or file, that a result came from. To this end, SCT has the 'Discourse' window on the bottom left. The user is presented with two panes inside the 'Discourse' window. The top one shows the waveform of the file as well as the transcriptions of words and phones. The bottom pane is a spectrogram. This maps time and frequency on the X and Y axes respectively, while the darkness of an area indicates the amplitude. Lines generated by the software also indicate pitch and formants when available. Pitch and formants will only become available by first selecting "Analyze acoustics" in the enrichment menu.

Viewing one of the discourses' acoustic information can be done by clicking on a discourse either in the "Discourse" tab of the top right window (right next to "Connection"), or by double-clicking on a result from a query in the "Query #" tab.

* **NB**: A successful query must first be run for this tab to appear.

The waveform is displayed, with annotations, as well as the spectrogram, whose features can be toggled on and off by clicking on them:

* Spectrogram on
* Spectrogram off (just formants and pitch)

Viewing Results
---

Having run a query, a user will want to make sense of the results. These can be found in the "Query #" tab that will appear as soon as the query has finished running. Within this tab, there will be different columns based on the linguistic objects the user was searching for (utterance, word, phone, or syllable).
Here is a list of the default columns:

### Utterance

* **begin**
* **end**
* **discourse**
* **speaker**

### Word

* **begin**
* **category** (only in Buckeye)
* **end**
* **label**
* **surface_transcription** (only in Buckeye)
* **transcription**
* **discourse**
* **speaker**

### Phone

* **begin**
* **end**
* **label**
* **discourse**
* **speaker**

### Syllable

* **begin**
* **end**
* **label**
* **discourse**
* **speaker**

* **NB**: Scrolling horizontally may be required to view all of these columns.

Example: Connecting to Servers
---

If you already have Neo4j open and started, you're ready to start connecting to servers. Go to the upper right panel in SCT. You're not connected to the Neo4j graph database the first time you start the program. Let's fix that.

Make sure that the port is the same as in your Neo4j window. If they match, you're ready to proceed. Press "Connect". Because it is your first time using the program, nothing will appear in "Available corpora", but the "Reset Local Cache" button should now be clickable.

Next, go to "Import Local Corpus" at the bottom center and click on it. Press "Buckeye Corpus". This was included with the tutorial. Go to the tutorial folder and select "buckeyeDataForTutorial". You will have to wait for the corpus to be imported. When the process has completed, you are ready to make some queries. Simply select the corpus by clicking on it under "Available corpora" and begin adding filters.

Enrichment
---

Databases can be enriched by encoding various elements. Usually, the database starts off with just words and phones, but by using enrichment options a diverse range of options will become available to the user. Here are some of the options:

* **Encode non-speech elements**: This allows the user to specify, for a given database, what should not count as speech.
* **Encode utterances**: After encoding non-speech elements, we can use them to define utterances (segments of speech separated by a 0.15-0.5 second pause).
* **Encode syllabic segments**: This allows the user to specify which segments in the corpus are counted as syllabic.
* **Encode syllables**: If the user has encoded syllabic segments, syllables can now be encoded using maximum attested onset.
* **Encode hierarchical properties**: These allow the user to encode such properties as number of syllables in each utterance, or rate of syllables per second.
* **Enrich lexicon**: This allows the user to assign certain properties to specific words. For example, the user might want to encode word frequency. This can be done by having words in one column and corresponding frequencies in the other column of a column-delimited text file (see the sketch after this list).
* **Enrich phonological inventory**: Similar to lexical enrichment, this allows the user to add certain helpful features to phonological properties; for example, adding 'fricative' to 'manner_of_articulation' for some phones.
* **Encode subsets**: Similar to how syllabic phones were encoded into subsets, the user can encode other phones in the corpus into subsets as well.
* **Analyze acoustics**: This will encode pitch and formants into the corpus. This is necessary to view the waveforms and spectrogram.
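As a concrete illustration of the lexicon-enrichment file format, here is a sketch in R that writes a two-column word/frequency file. The file name, the toy frequency values, the tab delimiter, and the header row are all assumptions for illustration, not documented SCT requirements:

```
# Hypothetical sketch: build a two-column lexicon file (word + frequency)
# of the kind "Enrich lexicon" can read. The frequency values are invented.
lex <- data.frame(word = c("the", "of", "and"),
                  frequency = c(1000, 800, 750))
write.table(lex, "lexicon_frequencies.txt", sep = "\t",
            row.names = FALSE, quote = FALSE)
```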
Filters Explained[¶](#filters-explained)
---

So far, there has been a lot of talk about objects, filters, and alignment, but these can be difficult concepts to grasp. These illustrated examples might be helpful in gleaning a better understanding of what is meant by "object", "filter" and "alignment".

The easiest way to start is with an example. Let's say the user wanted to search for **word-final fricatives in utterance-initial words**.

While to a person this seems like a fairly simple task that can be accomplished at a glance, for SCT it has to be broken up into its constituent steps. Let's see how this works on this sample sentence:

Here, each level (utterance, word, phone) corresponds to an object. Since we are ultimately looking for fricatives, we would want to select "phones" as our linguistic object to find. Right now we have all phones selected, since we haven't added any filters. Let's limit these phones by adding the first part of our desired query: word-final phones.

To accomplish this, we need to grasp the idea of alignment. Each object (utterances, words, phones) has two boundaries, left and right. These are represented by the walls of the boxes containing each object in the picture. To be "aligned", two objects must share a boundary. For example, the non-opaque objects in the next 2 figures are all aligned. Shared boundaries are indicated by thick black lines. Parent objects (for example, words in which a target phone is found) are outlined in dashed lines. In the first picture, the words and phones are "left aligned" with the utterance (their left boundaries are the same as that of the utterance) and in the second image, words and phones are "right aligned" with the utterance.

Now that we understand alignment, we can use it to filter for **word-final** phones, by adding in this filter:

By specifying that we only want phones which share a right boundary with a word, we are getting all **word-final** phones. However, recall that our query asked for **word-final fricatives**, and not all phones. This can easily be remedied by adding another filter*:

* **NB** The "fricative" property is only available through [*enrichment*](index.html#document-additional/enrichment).

Now the following phones are found:

Finally, in our query we wanted to specify only **utterance-initial** words. This will again be done with alignment. Since English reads left to right, the first word in an utterance will be the leftmost word, or the word which shares its leftmost boundary with the utterance. To get this, we add the filter:

This gives us the result we are looking for: **word-final fricatives in utterance-initial words**.

Another thing we can do is specify previous and following words/phones and their properties. For example: what if we wanted the final segment of the second word in an utterance? This is where the "following" and "previous" options come into play. We can use "previous" to specify the object before the one we are looking for. If we wanted the last phone of the second word in our sample utterance (the "s" in "reasons"), we would want to specify something about the previous word's alignment.
If we wanted to get the final phone of the words in this position, our filters would be:

For a full list of filters and their uses, see the section on [*building queries*](index.html#document-additional/buildingqueries).

Build your own database[¶](#build-your-own-database)
---

### Import[¶](#import)

SCT currently supports the following corpus formats:

* Buckeye
* TIMIT
* Force-aligned TextGrids
  + FAVE (multiple talkers okay)
  + LaBB-CAT TextGrid export
  + Prosodylab

To import one of those corpora, press the "Import local corpus" button below the "Available corpora" list. Once it has been pressed, select one of the three main options to import. From there, you will have to select where on the local computer the corpus files live, and they will be imported into the local server.

At the moment, importing ignores any connections to remote servers, and requires that a local version of Neo4j is running.

Sound files will be detected based on sharing a name with a text file or TextGrid. If the location of the sound files is changed, you can update where SCT thinks they are through the "Find local audio files" button.

See [Speech Corpus Tools: Tutorial and examples](index.html#enrichment-tutorial) for more details on how to further enrich your database.
react-router-dom
===

DOM bindings for [React Router](https://reacttraining.com/react-router).

Installation
---

Using [npm](https://www.npmjs.com/):

```
$ npm install --save react-router-dom
```

Then with a module bundler like [webpack](https://webpack.github.io/), use as you would anything else:

```
// using ES6 modules
import { BrowserRouter, Route, Link } from "react-router-dom";

// using CommonJS modules
const BrowserRouter = require("react-router-dom").BrowserRouter;
const Route = require("react-router-dom").Route;
const Link = require("react-router-dom").Link;
```

The UMD build is also available on [unpkg](https://unpkg.com):

```
<script src="https://unpkg.com/react-router-dom/umd/react-router-dom.min.js"></script>
```

You can find the library on `window.ReactRouterDOM`.

Issues
---

If you find a bug, please file an issue on [our issue tracker on GitHub](https://github.com/ReactTraining/react-router/issues).

Credits
---

React Router is built and maintained by [React Training](https://reacttraining.com).

Readme
---

### Keywords

* react
* router
* route
* routing
* history
* link
Package 'ncdf4.helpers'

October 13, 2022

Version 0.3-6
Date 2021-10-12
Title Helper Functions for Use with the 'ncdf4' Package
Author <NAME> <<EMAIL>> for the Pacific Climate Impacts Consortium (PCIC)
Maintainer <NAME> <<EMAIL>>
Depends R (>= 2.12.0)
Imports ncdf4, PCICt, abind
Suggests proj4, testthat
Description Contains a collection of helper functions for dealing with 'NetCDF' files <https://www.unidata.ucar.edu/software/netcdf/> opened using 'ncdf4', particularly 'NetCDF' files that conform to the Climate and Forecast (CF) Metadata Conventions <http://cfconventions.org/Data/cf-conventions/cf-conventions-1.7/cf-conventions.html>.
License LGPL-2.1
URL https://www.r-project.org
BugReports https://github.com/pacificclimate/ncdf4.helpers/issues
RoxygenNote 7.1.1
NeedsCompilation no
Repository CRAN
Date/Publication 2021-10-15 10:20:02 UTC

R topics documented: get.cluster.worker.subsets, get.f.step.size, get.split.filename.cmip5, nc.conform.data, nc.copy.atts, nc.get.climatology.bounds.var.list, nc.get.compress.dims, nc.get.coordinate.axes, nc.get.dim.axes, nc.get.dim.axes.from.names, nc.get.dim.bounds.var.list, nc.get.dim.for.axis, nc.get.dim.names, nc.get.proj4.string, nc.get.time.multiplier, nc.get.time.series, nc.get.var.subset.by.axes, nc.get.variable.list, nc.is.regular.dimension, nc.make.time.bounds, nc.put.var.subset.by.axes, ncdf4.helpers

get.cluster.worker.subsets Get subsets to be distributed to workers

Description
Get subsets to be distributed to workers.

Usage
get.cluster.worker.subsets(num.vals, dim.size, dim.axes, axis.to.split.on, min.num.chunks = 1)

Arguments
num.vals The maximum number of values to process at once.
dim.size The sizes of the dimensions of the data to be processed.
dim.axes The axes of the data, as returned by nc.get.dim.axes.
axis.to.split.on The axis (X, Y, T, etc) to split the data on.
min.num.chunks The minimum number of chunks to generate, even if the chunks are considerably smaller than num.vals.

Details
Given a desired number of values (num.vals), the sizes of the dimensions (dim.size), the corresponding axes (dim.axes), the desired axis to split on (axis.to.split.on), and optionally the minimum number of chunks to return (min.num.chunks), returns a list of lists of subsets appropriate to be passed to nc.put.var.subset.by.axes or nc.get.var.subset.by.axes.

This functionality is useful when you want to keep memory consumption down but want to maximize the amount read in at one time to make the best use of available I/O bandwidth.

Value
A list of lists describing subsets in a suitable form to be passed to nc.put.var.subset.by.axes or nc.get.var.subset.by.axes.

Examples
## Get a subset from an example
subsets <- get.cluster.worker.subsets(1E7, c(128, 64, 50000), c(lon="X", lat="Y", time="T"), "Y")

get.f.step.size Get step size for data

Description
Get step size for data.

Usage
get.f.step.size(d, f)

Arguments
d The data to have the step size determined
f The function to aggregate the step size

Details
Gets the step size for data, aggregated by the supplied function. This is useful when you want to know the mean timestep size, median, minimum, range, etc for the purposes of classifying data by time resolution.
Value
The step size

Examples
dat <- c(1, 2, 3, 4, 5, 7)
## Will be 2
max.step.size <- get.f.step.size(dat, max)
## Will be 1
min.step.size <- get.f.step.size(dat, min)

get.split.filename.cmip5 Splits up a CMIP5 filename

Description
Splits up a CMIP5 filename into its component parts.

Usage
get.split.filename.cmip5(cmip5.file)

Arguments
cmip5.file The filename to be split.

Details
As the CMIP5 conventions define the format of filenames, quite a bit of data can be extracted from the filename alone. This function makes that process easier by splitting up the given CMIP5 filename, returning a named vector consisting of the variable, time resolution, model, emissions scenario, run, time range, and time start and end.

Value
A vector containing the variable (var), time resolution (tres), model (model), emissions scenario (emissions), run (run), time range (trange), time start (tstart) and time end (tend) for the file.

References
https://pcmdi.llnl.gov/mips/cmip5/docs/CMIP5_output_metadata_requirements.pdf?id=28

Examples
## Split up filename into component bits
split.bits <- get.split.filename.cmip5(
  "pr/pr_day_MRI-CGCM3_historical_r1i1p1_18500101-20051231.nc")

nc.conform.data Conform data to dimension order and structure of output

Description
Conform data to dimension order and structure of output.

Usage
nc.conform.data(f.input, f.output, v.input, v.output, dat.input, allow.dim.subsets = FALSE)

Arguments
f.input The input file (an object of class ncdf4)
f.output The output file (an object of class ncdf4)
v.input The input variable (a string naming a variable in a file or an object of class ncvar4).
v.output The output variable (a string naming a variable in a file or an object of class ncvar4).
dat.input The input data to be reordered to match the output file's ordering.
allow.dim.subsets Whether to allow the conforming process to subset the data.

Details
Sometimes files come in with different latitude (up is north vs. up is south), longitude (0 to 360 vs. -180 to 180), and temporal schemes. The purpose of this function is to make data from one scheme comparable to data from another. It takes a given input file, variable, and slab of data and permutes the data such that the dimension order and the index order match the order in the output file and variable.

Value
The data permuted to match the output file's ordering and optionally clipped to the extent of the output.

Note
This function currently isn't useful for conforming subsets of output data.

Examples
## Get data from one file and conform it to the dimension order of another.
## Not run:
f1 <- nc_open("pr.nc")
f2 <- nc_open("pr2.nc", write=TRUE)
dat <- nc.get.var.subset.by.axes(f1, "pr")
new.dat <- nc.conform.data(f2, f1, "pr", "pr", dat)
nc_close(f1)
nc_close(f2)
## End(Not run)

nc.copy.atts Copy attributes from one variable in one file to another file

Description
Copy attributes from one variable in one file to another file.

Usage
nc.copy.atts(f.src, v.src, f.dest, v.dest, exception.list = NULL, rename.mapping = NULL, definemode = FALSE)

Arguments
f.src The source file (an object of class ncdf4)
v.src The source variable: a string naming a variable in a file or an object of class ncvar4.
f.dest The destination file (an object of class ncdf4)
v.dest The destination variable: a string naming a variable in a file or an object of class ncvar4.
exception.list A vector containing names of attributes not to be copied.
rename.mapping A vector containing named values mapping source to destination names.
definemode Whether the file is already in define mode.

Details
This function copies attributes from a variable in one file to a variable in another file. If the source or destination variable is 0, then attributes are copied from/to the NetCDF file's global attributes. If desired, some attributes can be left out using exception.list, a vector of names of attributes to be excluded. Attributes can also be renamed at the destination using rename.mapping, a named vector of strings in which the name of the attribute to be renamed is the name, and the attribute's new name is the value.

Examples
## Copy attributes from one variable to another; but don't copy units or
## standard_name, and copy long_name as old_long_name.
## Not run:
f1 <- nc_open("pr.nc")
f2 <- nc_open("pr2.nc")
nc.copy.atts(f1, "pr", f2, "pr", c("units", "standard_name"), c(long_name="old_long_name"))
nc_close(f1)
nc_close(f2)
## End(Not run)

nc.get.climatology.bounds.var.list Get a list of names of climatology bounds variables

Description
Get a list of names of climatology bounds variables.

Usage
nc.get.climatology.bounds.var.list(f)

Arguments
f The file (an object of class ncdf4)

Details
The CF metadata convention defines a climatology attribute which can be applied to a time axis to indicate that the data is climatological in nature; the value of this attribute is the name of another variable in the file which defines the bounds of each climatological time period. This function returns the names of any climatology bounds variables found in a file.

Value
A character vector naming all of the climatology bounds variables found.

References
http://cfconventions.org/Data/cf-conventions/cf-conventions-1.8/cf-conventions.html#climatological-statistics

Examples
## Get list of climatology bounds variables
## Not run:
f <- nc_open("pr.nc")
clim.bounds <- nc.get.climatology.bounds.var.list(f)
nc_close(f)
## End(Not run)

nc.get.compress.dims Get X and Y dimension variables for reduced (compressed) grids

Description
Get X and Y dimension variables for reduced (compressed) grids.

Usage
nc.get.compress.dims(f, v)

Arguments
f The file (an object of class ncdf4)
v The name of a variable

Details
The CF metadata convention defines a method for implementing reduced grids (grids missing pieces of themselves); they call this compression by gathering. This function retrieves the X and Y dimensions for reduced (compressed) grids, returning a list containing the X and Y dimensions.

Value
A list consisting of two members of class ncdim4: x.dim for the X axis, and y.dim for the Y axis.

References
http://cfconventions.org/Data/cf-conventions/cf-conventions-1.8/cf-conventions.html#reduced-horizontal-grid

Examples
## Get compress dimensions from file.
## Not run:
f <- nc_open("pr.nc")
compress.dims <- nc.get.compress.dims(f, "pr")
nc_close(f)
## End(Not run)

nc.get.coordinate.axes Get a list of dimension variables and axes for a variable's coordinate variable

Description
Get a list of dimension variables and axes for a variable's coordinate variable.

Usage
nc.get.coordinate.axes(f, v)

Arguments
f The file (an object of class ncdf4)
v The name of a variable

Details
The CF metadata standard defines a convention for defining 2-dimensional variables to accompany pairs of dimension variables. Usually these are latitude and longitude variables, and accompany projected grids. This function returns a named list of axes, the names of which are the associated dimension variables.
Value
A named character vector containing axes, the names of which are the corresponding dimension variables.

References
http://cfconventions.org/Data/cf-conventions/cf-conventions-1.8/cf-conventions.html#_two_dimensional_latitude_longitude_coordinate_variables

Examples
## Get coordinate axes from file.
## Not run:
f <- nc_open("pr.nc")
coord.axes <- nc.get.coordinate.axes(f, "pr")
nc_close(f)
## End(Not run)

nc.get.dim.axes Get dimension axes

Description
Get dimension axes for the given variable.

Usage
nc.get.dim.axes(f, v, dim.names)

Arguments
f The file (an object of class ncdf4)
v The name of a variable
dim.names Optionally, dimension names (to avoid looking them up repeatedly)

Details
This function returns the dimension axes for a given variable as a named character vector; the names are the names of the corresponding dimensions. If no variable is supplied, the function will return data for all dimensions found in the file. Axes are X, Y, Z (depth, plev, etc), T (time), and S (space, for reduced grids). This routine will attempt to infer axes for dimensions if no 'axis' attribute is found on a dimension variable, using the nc.get.dim.axes.from.names function.

Value
A named character vector mapping dimension names to axes.

Examples
## Get dimension axes from file.
## Not run:
f <- nc_open("pr.nc")
## Get dim axes for a specified variable
dim.axes <- nc.get.dim.axes(f, "pr")
## Get all dim axes in file
dim.axes <- nc.get.dim.axes(f)
nc_close(f)
## End(Not run)

nc.get.dim.axes.from.names Infer dimension axes from names of dimensions

Description
Infer dimension axes from names of dimensions.

Usage
nc.get.dim.axes.from.names(f, v, dim.names)

Arguments
f The file (an object of class ncdf4)
v The name of a variable
dim.names Optionally, dimension names (to avoid looking them up repeatedly)

Details
This function makes educated guesses as to what axes dimensions may apply to in the case of files with poor metadata.

Value
A named character vector mapping dimension names to axes.

Examples
## Get dimension axes from file by inferring them from dimension names
## Not run:
f <- nc_open("pr.nc")
dim.axes <- nc.get.dim.axes.from.names(f, "pr")
nc_close(f)
## End(Not run)

nc.get.dim.bounds.var.list Get a list of names of dimension bounds variables

Description
Get a list of names of dimension bounds variables.

Usage
nc.get.dim.bounds.var.list(f, v = NULL)

Arguments
f The file (an object of class ncdf4).
v The name of the variable (a string).

Details
Many dimension variables are not single points, but in fact represent a range along the axis. This is expressed by associated dimension bounds variables. This function returns the names of any dimension bounds variables found in a file.

Value
A character vector naming all of the dimension bounds variables found.

References
http://cfconventions.org/Data/cf-conventions/cf-conventions-1.8/cf-conventions.html#cell-boundaries

Examples
## Get list of dimension bound variables
## Not run:
f <- nc_open("pr.nc")
dim.bounds.var.list <- nc.get.dim.bounds.var.list(f)
nc_close(f)
## End(Not run)

nc.get.dim.for.axis Get dimension corresponding to a given axis

Description
Get dimension corresponding to a given axis.

Usage
nc.get.dim.for.axis(f, v, axis)

Arguments
f The file (an object of class ncdf4)
v The source variable: a string naming a variable in a file or an object of class ncvar4.
axis The axis to retrieve the dimension for: a string consisting of either X, Y, Z, T, or S.
Details
This function returns the dimension (of class 'ncdim4') corresponding to the specified axis (X, Y, Z, T, or S).

Value
An object of class ncdim4 if a dimension is found for the specified axis; NA otherwise.

Examples
## Get dimension for X axis
## Not run:
f <- nc_open("pr.nc")
x.axis.dim <- nc.get.dim.for.axis(f, "pr", "X")
nc_close(f)
## End(Not run)

nc.get.dim.names Get a list of names of dimensions

Description
Get a list of names of dimensions.

Usage
nc.get.dim.names(f, v)

Arguments
f The file (an object of class ncdf4)
v Optionally, a variable

Details
This function returns the names of dimensions in a file or, if v is also supplied, attached to a particular variable.

Value
A character vector naming the dimensions found.

Examples
## Get dimension names
## Not run:
f <- nc_open("pr.nc")
dim.names <- nc.get.dim.names(f, "pr")
nc_close(f)
## End(Not run)

nc.get.proj4.string Gets the proj4 string for a file

Description
Gets the proj4 string for a file.

Usage
nc.get.proj4.string(f, v)

Arguments
f The file (an object of class ncdf4)
v The name of a variable

Details
Most NetCDF files are stored without any projection information as a lat-long grid. However, some files – particularly those from RCMs – are on a projected grid. This function returns a proj4 string, suitable for use with the 'proj4' library, which can be used to perform forward and inverse projections. Given a file and a variable, this function returns what the proj4 string for the given file should be. If no projection data is found, it returns an empty string. It currently supports Lambert Conformal Conic, Transverse Mercator, Polar Stereographic, and Rotated Pole projections, plus the latitude_longitude pseudo-projection.

Value
A string containing the proj4 string, or NULL if a translator is not available for the given projection.

References
http://cfconventions.org/Data/cf-conventions/cf-conventions-1.8/cf-conventions.html#grid-mappings-and-projections

Examples
## Get the proj4 string for a hypothetical file.
## Not run:
f <- nc_open("pr.nc")
proj4.string <- nc.get.proj4.string(f, "pr")
nc_close(f)
## End(Not run)

nc.get.time.multiplier Gets conversion factor for time scale given units

Description
Gets conversion factor for time scale given units.

Usage
nc.get.time.multiplier(x)

Arguments
x The time scale

Details
This function returns a conversion factor from the supplied time scale (days, hours, minutes, months) to seconds. This can be used to convert to/from "(days or hours) since X" style dates.

Value
A numeric conversion factor to convert to seconds.

Note
The conversion factor for months is approximate.

Examples
## Will return 3600
mul <- nc.get.time.multiplier("hours")

nc.get.time.series Returns time axis data as PCICt for a file

Description
Returns time axis data as PCICt for a file.

Usage
nc.get.time.series(f, v, time.dim.name, correct.for.gregorian.julian = FALSE, return.bounds = FALSE)

Arguments
f The file (an object of class ncdf4)
v Optionally, the variable to look for a time dimension on.
time.dim.name Optionally, the time dimension name.
correct.for.gregorian.julian Specific workaround for Gregorian-Julian calendar transitions in non-proleptic Gregorian calendars
return.bounds Whether to return the time bounds as an additional attribute

Details
Retrieving time data from a NetCDF file in an intelligible format is a non-trivial problem. The PCICt package solves part of this problem by allowing for 365- and 360-day calendars.
This function complements it by returning time data for a file as PCICt, doing all necessary conversions.

Value
A vector of PCICt objects, optionally with bounds

Note
If the file was opened with readunlim=FALSE, it will read in the time values from the file; otherwise, it will retrieve the time values from the ncdf4 class' data structures.

References
http://cfconventions.org/Data/cf-conventions/cf-conventions-1.8/cf-conventions.html#time-coordinate

Examples
## Get time series from file
## Not run:
f <- nc_open("pr.nc")
ts <- nc.get.time.series(f)
nc_close(f)
## End(Not run)

nc.get.var.subset.by.axes Gets a data subset in the place described by the named list of axes

Description
Gets a data subset in the place described by the named list of axes.

Usage
nc.get.var.subset.by.axes(f, v, axis.indices, axes.map = NULL)

Arguments
f An object of class ncdf4 which represents a NetCDF file.
v A string naming a variable in a file or an object of class ncvar4.
axis.indices A list consisting of zero or more vectors of indices, named by which axis they refer to (X, Y, T, etc).
axes.map An optional vector mapping axes to NetCDF dimensions. If not supplied, it will be generated from the file.

Details
This function will read data from the specified file (f) and variable (v) at the location specified by axis.indices.

See Also
ncdf4.helpers-package

Examples
## Get a subset of the data.
## Not run:
f <- nc_open("pr.nc")
dat <- nc.get.var.subset.by.axes(f, "pr", list(X=1:4, Y=c(1, 3, 5)))
nc_close(f)
## End(Not run)

nc.get.variable.list Get a list of names of data variables

Description
Get a list of names of data variables.

Usage
nc.get.variable.list(f, min.dims = 1)

Arguments
f The file (an object of class ncdf4)
min.dims The minimum number of dimensions a variable must have to be included.

Details
This function returns the names of any data variables found in the file – that is, variables which are NOT dimension variables, dimension bounds variables, climatology bounds variables, coordinate variables, or grid mapping variables. Optionally, one may require that the variables have a minimum number of dimensions; this can eliminate unwanted variables left in files.

Value
A character vector naming all of the data variables found.

Examples
## Get the list of data variables from file
## Not run:
f <- nc_open("pr.nc")
var.list <- nc.get.variable.list(f)
nc_close(f)
## End(Not run)

nc.is.regular.dimension Determine if a dimension is regular

Description
Determine if a dimension is regular (evenly spaced).

Usage
nc.is.regular.dimension(d, tolerance = 1e-06)

Arguments
d The data to be tested
tolerance The tolerance for variation in step size, as a fraction of the step size.

Details
Not all dimensions or data are regular (evenly spaced). This function will, given data and optionally a tolerance level, determine if the dimension is regular or not.

Value
TRUE if the data is regular; FALSE if not.

Examples
dat <- c(1, 2, 3, 4, 5, 6, 7)
## TRUE
nc.is.regular.dimension(dat)
dat[7] <- 7.001
## FALSE
nc.is.regular.dimension(dat)

nc.make.time.bounds Creates time bounds for a time series

Description
Creates time bounds for a time series.

Usage
nc.make.time.bounds(ts, unit = c("year", "month"))

Arguments
ts The time values, of type PCICt
unit The units to be used.

Details
When aggregating data along the time axis, it is occasionally useful to be able to generate bounds for that data. This function will, given a time series of PCICt, return a set of bounds for that time series based on the supplied units.
Value
A 2-dimensional bounds array for the time values with dimensions [length(ts), 2].

References
http://cfconventions.org/Data/cf-conventions/cf-conventions-1.8/cf-conventions.html#climatological-statistics

Examples
library(PCICt)
ts <- as.PCICt(c("1961-01-15", "1961-02-15", "1961-03-15"), cal="360")
ts.bounds <- nc.make.time.bounds(ts, unit="month")

nc.put.var.subset.by.axes Puts a data subset in the place described by the named list of axes

Description
Puts a data subset in the place described by the named list of axes.

Usage
nc.put.var.subset.by.axes(f, v, dat, axis.indices, axes.map = NULL, input.axes = NULL)

Arguments
f An object of class ncdf4 which represents a NetCDF file.
v A string naming a variable in a file or an object of class ncvar4.
dat The data to put in the file.
axis.indices A list consisting of zero or more vectors of indices, named by which axis they refer to (X, Y, T, etc).
axes.map An optional vector mapping axes to NetCDF dimensions. If not supplied, it will be generated from the file.
input.axes An optional vector containing the input axis map. If supplied, it will be used to permute the data from the axis order in the input data, to the axis order in the output data.

Details
This function will write data (dat) out to the specified file (f) and variable (v) at the location specified by axis.indices.

See Also
ncdf4.helpers-package

Examples
## Copy a subset of the data from one location to another.
## Not run:
f <- nc_open("pr.nc")
dat <- nc.get.var.subset.by.axes(f, "pr", list(X=1:4, Y=c(1, 3, 5)))
nc.put.var.subset.by.axes(f, "pr", dat, list(X=5:8, Y=1:3))
nc_close(f)
## End(Not run)

ncdf4.helpers ncdf4.helpers: helper functions for NetCDF files.

Description
This package provides a number of helper functions for NetCDF files opened using the ncdf4 package.

Details
Dealing with NetCDF format data is unnecessarily difficult. The ncdf4 package does a good job of making many lower-level operations easier. The ncdf4.helpers package aims to build higher-level functions upon the foundation of ncdf4.

One concept central to much of the package is the idea of indexing, and dealing with data, by axis rather than by indices or by specific dimension names. The axes used are:

* X (the horizontal axis)
* Y (the vertical axis)
* Z (the pressure / depth axis)
* S (the reduced spatial grid axis)
* T (the time axis)

Indexing by axis avoids the pitfalls of using data in forms other than (X, Y, Z, T), such as (T, X, Y). Avoiding using dimension names directly avoids problems when using projected data.

The functions in the package can be broken down into the following categories:

1. Functions which get, put, and transform data: nc.put.var.subset.by.axes, nc.get.var.subset.by.axes, nc.conform.data
2. Functions which deal with identifying axes, variables, and types of dimensions: nc.get.variable.list, nc.get.dim.axes, nc.get.dim.for.axis, nc.get.dim.bounds.var.list, nc.get.dim.names, nc.get.dim.axes.from.names, nc.get.coordinate.axes, nc.get.compress.dims, nc.is.regular.dimension.
3. Functions which deal with getting, classifying, and using time information: nc.get.time.series, nc.make.time.bounds, nc.get.time.multiplier.
4. Functions which make sense of projection information: nc.get.proj4.string.
5. Functions to ease chunked processing of data in parallel: get.cluster.worker.subsets.
6. Functions to ease dealing with CMIP5 data: get.split.filename.cmip5.
7. Utility functions: get.f.step.size.

References
http://cfconventions.org/Data/cf-conventions/cf-conventions-1.8/cf-conventions.html
Sketch Constructor
===

This library provides helpers and classes that make it easy to read/write/manipulate Sketch files in Javascript, without Sketch installed!

⚠️ Warning ⚠️
---

This library is a work in progress, use at your own risk. But feel free to help out where you see bugs or incomplete things!

Also, because this library does not use any Sketch APIs/libraries and instead manipulates the underlying Sketch files directly, the internal file API might change in the future. We will do our best to keep up with any Sketch changes and communicate any breaking API changes.

Why?
---

If you want to produce Sketch assets for your design team that are generated from your production codebase in a reliable and consistent way, like part of a build process or CI/CD pipeline. Or maybe you want to have your source of truth for your design tokens or components in Sketch; you can use this to extract that data into your codebase.

What can you do with this?
---

* Generate Sketch files programmatically without actually running Sketch (no plugins!)
* Use Sketch as input to create resources/documentation for a design system
* Build Sketch files in a CI/CD pipeline
* Read a Sketch file as a template, hydrate it with data, output a new Sketch file

How is this different from html-sketchapp or react-sketchapp?
---

Those tools are great and very powerful, but they rely on creating Sketch plugins on the fly and manipulating a Sketch document that is open on your computer. They are also focused on rendering Sketch files, not on using a Sketch file as input or data. This tool, however, helps you directly manipulate and generate Sketch files without a Sketch plugin or even having Sketch open or installed.

Installation
---

This library is published as an npm module; you can install it via npm or yarn:

```
npm install --save-dev sketch-constructor
```

```
yarn add sketch-constructor
```

Usage
---

### Creating a Sketch file

Creating a completely new Sketch file from scratch, programmatically in node.

```
const {Sketch, Page, Artboard} = require('sketch-constructor');

// Without a path it creates an empty sketch class to work with
const newSketch = new Sketch();

// Create a new Page instance
const page = new Page({});

// Add an artboard to the page
const artboard = new Artboard({});
page.addArtboard(artboard);

// Add the page to the Sketch instance
newSketch.addPage(page);

// You can also add a page by giving it an object, the arguments
// are the same as that of the Page constructor
newSketch.addPage({
  name: 'My Page'
});

// Add preview image
Sketch.addPreview('/path/to/preview.png');

// Creates the sketch file
newSketch.build('newSketch.sketch');
```

### Temporary files

This library by default uses the `.sketch-constructor` directory for placing temporary files like Bitmap images or the preview. You can change the default temporary directory by providing the `STORAGE_DIR` environment variable.

### Read/manipulate an existing Sketch file

Getting data from or manipulating an existing Sketch file.
```
const {Sketch, Page} = require('sketch-constructor');

// static method fromFile returns a promise
Sketch.fromFile(`${process.cwd()}/__tests__/__sketch-files/test.sketch`)
  .then((sketch) => {
    // You can now get data from the sketch instance
    // For example, get all the Groups in a particular artboard
    // and page that has a name that includes 'test'
    sketch.getPage('Page 1')
      .getArtboard('Artboard 1')
      .getGroups((group) => group.name.includes('test'));

    // Or manipulate the sketch file the same way you would
    // with a new sketch instance
    var page = new Page({});
    sketch.addPage(page);
  });
```

Models
---

These are all the classes Sketch uses in its JSON schema. They should all map 1:1 with any object in the JSON that has a `_class` attribute. All the models are ES6 classes which have instance properties that map 1:1 with the attributes of the JSON schema of that class. This makes it easy to automatically create class instances from a sketch document, as well as to write the JSON of a sketch document, because calling `JSON.stringify()` on a class will only output instance properties.

All classes have the same constructor method signature:

```
class Layer {
  constructor(args = {}, json) {}
}
```

If you are creating new classes from scratch, you will supply the constructor with a single object like this:

```
let text = new Text({
  // options
});
```

This options object does NOT map 1:1 with the underlying JSON schema of the class. It simplifies the API so that you don't have to understand the underlying structure, and it hopefully future-proofs the classes. A simple example is the `Color` class:

```
let color = new Color("#fff")
console.log(JSON.stringify(color, null, 2));
/*
{
  "_class": "color",
  "alpha": 1,
  "blue": 1,
  "green": 1,
  "red": 1
}
*/
```

Rather than having to pass in the rgba channels (out of 1, not 255 like CSS), you can pass in anything that [Tiny Color](https://github.com/bgrins/TinyColor) can understand.

For most classes there are helper methods to make interacting with them easier. For example, the `Color` class has all of the [Tiny Color](https://github.com/bgrins/TinyColor) methods so you can modify the color or get different representations of it.

If you start with an existing Sketch file by using `Sketch.fromFile(filePath)`, the library will recursively create all the underlying classes so you don't have to worry about manually instantiating the classes. For example:

```
const { Sketch, Page, Artboard } = require('sketch-constructor');

Sketch.fromFile('myFile.sketch')
  .then((sketch) => {
    // create an array of Group instances
    const groups = sketch.getPage('Page 1')
      .getArtboard('Artboard 1')
      .getGroups((group) => group.name.includes('test'));

    // Now you can use any Group instance methods like adding layers
    groups.forEach((group) => {
      group.addLayer(
        new Text({
          string: "Hello",
          name: "Hello",
          fontSize: 24,
          color: "#ccc",
          frame: {
            width: 200,
            height: 50,
            x: 0,
            y: 120,
          }
        })
      )
    })
  })
```

Contributing
---

We love contributors! This project is still WIP so any help would be amazing. Please take a look at our [contributing docs](https://github.com/amzn/sketch-constructor/blob/HEAD/CONTRIBUTING.md)

License
---

This library is licensed under the Apache 2.0 License.

Readme
---

### Keywords

* sketch
* node
### Pre-Compiled Executables

On Windows systems, one can install prepmx.exe from the Windows32 or Windows64 sub-directory; these are pre-compiled executables and should be copied to any folder on the PATH of executables. This might entail creating a suitable folder and adding that folder to the PATH as follows: right-click on the "My Computer" desktop icon, left-click on Properties → Advanced → Environment Variables, and in the "System Variables" section, scroll down to "path", select it, click edit, and append the full path name you have selected for the new folder. Also install the batch script m-tx.bat in a folder on the PATH.

On the MAC OS-X platform, one can install the prepmx binary that is in the OSX sub-directory. Ensure that the file has the execute permission set: chmod +x prepmx

### Compilation from Source

On any platform with basic GNU tools (tar, gunzip, make) and gcc or fpc, you should be able to build the prepmx executable as follows:

1. Unpack the mtx-0.63.tar.gz archive: tar zxvf mtx-0.63.tar.gz and move to the resulting mtx-0.63 directory.
2. Configure: ./configure --prefix=$HOME or just ./configure if you have super-user privileges.
3. Build: make If this fails, you might want to install fpc (the Free Pascal Compiler[5]) and try make -f Makefile.orig
4. Install: make install This step should be executed as root if you need super-user privileges.

You should now have an executable prepmx in your PATH.

## Usage

To process an M-Tx source file, use the musixtex script which is included in the musixtex package. If applied to a file with a .mtx extension, it will by default execute prepmx followed by pmxab, followed by etex, musixflx, and etex again, followed by conversion to PDF using dvips and ps2pdf. There are many options to vary the default behaviour.

## 5 Discussion

Other pre-processor packages, additional documentation, additional add-on packages, and many examples of M-Tx and MusiXTeX typesetting may be found at the Werner Icking Music Archive[6]. Support for users of MusiXTeX and related software may be obtained via the MusiXTeX mail list[7]. M-Tx may be freely copied, duplicated and used in conformance to the MIT License (see included file LICENSE).

Footnote 5: [http://www.freepascal.org/](http://www.freepascal.org/)

Footnote 6: [http://icking-music-archive.org](http://icking-music-archive.org)

Footnote 7: [http://tug.org/mailman/listinfo/tex-music](http://tug.org/mailman/listinfo/tex-music)
YTKNetwork
===

Welcome to the documentation for YTKNetwork!

Overview
---

YTKNetwork is a lightweight and high-performance iOS network library. It's built on top of [AFNetworking](https://github.com/AFNetworking/AFNetworking) but provides a simplified and convenient API for making HTTP requests. With YTKNetwork, you can easily manage the request life cycle, handle network errors, and integrate with other libraries or frameworks.

Installation
---

To install YTKNetwork, you have several options:

* **CocoaPods:** Add the following line to your Podfile: `pod 'YTKNetwork'`
* **Carthage:** Add the following line to your Cartfile: `github "yuantiku/YTKNetwork"`
* **Manually:** Download the latest release from the [GitHub repository](https://github.com/yuantiku/YTKNetwork/releases) and add the necessary files to your project.

Getting Started
---

Once you have installed YTKNetwork, follow these steps to start using it:

1. Create a new subclass of `YTKRequest`.
2. Override the `requestUrl` method to return the URL for your request.
3. Override the `requestMethod` method to specify the HTTP method for your request.
4. Implement the `requestArgument` method to provide any additional request parameters.
5. Optionally, override other methods to customize the request behavior.
6. Send the request using the `start` method.

Here's an example of a subclassed request:

```
import YTKNetwork

class MyRequest: YTKRequest {
    override func requestUrl() -> String {
        return "https://api.example.com/path"
    }

    override func requestMethod() -> YTKRequestMethod {
        return .get
    }

    override func requestArgument() -> Any? {
        return ["param1": "value1", "param2": "value2"]
    }
}
```

Make sure to create a shared instance of the request object and use it to send requests throughout your app.

Advanced Usage
---

Besides the basic usage outlined above, YTKNetwork offers several advanced features:

* **Request Callbacks:** You can implement various callback methods in your request subclass to handle the request's life cycle, parse response data, and handle error scenarios.
* **Request Groups:** YTKNetwork allows you to create request groups to synchronize multiple requests and handle dependencies between them.
* **Serializer:** You can choose different serializers for handling request and response data, including JSON, HTTP, or custom ones.
* **Reachability:** YTKNetwork provides a built-in reachability component to monitor network connection status.
* **File Upload:** You can upload files using YTKNetwork and easily monitor the upload progress.
* **SSL Pinning:** YTKNetwork supports SSL pinning to enhance network security.

FAQ
---

Here are some frequently asked questions about YTKNetwork:

* **Q: Can I use YTKNetwork with Swift?** A: Yes, YTKNetwork is fully compatible with both Objective-C and Swift projects.
* **Q: How can I handle network errors?** A: You can implement the `requestFailedFilter` method in your request subclass to customize the error handling logic.
* **Q: Can I cancel a request?** A: Yes, you can call the `cancel` method on a request object to cancel an ongoing request. You can also cancel a request group to cancel all requests within that group.

Community and Support
---

If you need help or have any inquiries, you can reach out to the YTKNetwork community for support:

* Join the [GitHub repository](https://github.com/yuantiku/YTKNetwork) and submit any issues or feature requests.
* Ask questions and share ideas in the [Discussions section](https://github.com/yuantiku/YTKNetwork/discussions).
* Join the [Gitter chat room](https://gitter.im/yuantiku/YTKNetwork) to interact with other users and maintainers.
pysaml2 documentation
[pysaml2](index.html#document-index)
---

About SAML 2.0[¶](#about-saml-2-0)
===

SAML 2.0 or Security Assertion Markup Language 2.0 is a version of the SAML standard for exchanging authentication and authorization data between security domains.

About PySAML2[¶](#about-pysaml2)
===

PySAML2 is a pure python implementation of SAML2. It contains all necessary pieces for building a SAML2 service provider or an identity provider. The distribution contains examples of both. Originally written to work in a WSGI environment, there are extensions that allow you to use it with other frameworks.

How to use PySAML2[¶](#how-to-use-pysaml2)
===

Before you can use PySAML2, you'll need to get it installed. If you have not done it yet, read the [Quick install guide](index.html#install).

Well, now you have it installed, and you want to do something. And I'm sorry to tell you this, but there isn't really a lot you can do with this code on its own. Sure, you can send an AuthenticationRequest to an IdentityProvider or an AttributeQuery to an AttributeAuthority, but in order to get what they return you have to sit behind a Web server. Well, that is not really true, since the AttributeQuery would be over SOAP and you would get the result over the connection you have to the AttributeAuthority. But anyway, you may get my point. This is middleware stuff!

PySAML2 is built to fit into a [WSGI](http://www.python.org/dev/peps/pep-0333/) application, but it can be used in a non-WSGI environment too. So you will find descriptions of both cases here. The configuration is the same regardless of whether you are using PySAML2 in a WSGI or non-WSGI environment.

Python compatibility[¶](#python-compatibility)
---

PySAML2 has transitioned to Python3. Master Apps is maintaining a fork with Python2 compatibility on [GitHub](https://github.com/masterapps-au/pysaml2).

Table of contents[¶](#table-of-contents)
===

Quick install guide[¶](#quick-install-guide)
---

Before you can use PySAML2, you'll need to get it installed. This guide will take you through a simple, minimal installation.

### Install PySAML2[¶](#install-pysaml2)

For all this to work, you need to have Python installed. The development has been done using 2.7. There is now a 3.X version.

#### Prerequisites[¶](#prerequisites)

You have to have ElementTree, which is either part of your Python distribution if it's recent enough, or, if your Python is too old, you have to install it, for instance by getting it from the Python Package Index by using easy_install.

You also need xmlsec1, which you can download from <http://www.aleksey.com/xmlsec/>. If you're on macOS, you can get xmlsec1 installed from MacPorts or Fink. If you're on rhel/centos 7, you will need to install xmlsec1 and xmlsec1-openssl:

```
yum install xmlsec1 xmlsec1-openssl
```

Depending on how you are going to use PySAML2, you might also need:

* Mako
* pyASN1
* repoze.who
* python-memcache
* memcached

#### Quick build instructions[¶](#quick-build-instructions)

Once you have installed all the necessary prerequisites, a simple:

```
python setup.py install
```

will install the basic code.

Note for rhel/centos 6: cffi depends on libffi-devel, and cryptography on openssl-devel, to compile. So you might first want to do:

```
yum install libffi-devel openssl-devel
```

After this, you ought to be able to run the tests without a hitch. The tests are based on the pytest test environment, so:

```
cd tests
pip install -r test-requirements.txt
pytest
```

is what you should use. If you don't have pytest, get it. It's really good!
Quick pysaml2 example[¶](#quick-pysaml2-example)
---

Release: 7.2.1
Date: Sep 23, 2022

In order to confirm that pysaml2 has been installed correctly and is ready to use, you can run this basic example.

Contents:

### An extremely simple example of a SAML2 service provider[¶](#an-extremely-simple-example-of-a-saml2-service-provider)

#### How it works[¶](#how-it-works)

An SP works with authentication and possibly attribute aggregation. Both of these functions can be seen as parts of the normal Repoze.who setup, namely the Challenger, Identifier and MetadataProvider parts. Repoze.who Identifier and MetadataProvider plugins normally place information in environment variables; the convention is to place identity information in environ["repoze.who.identity"]. This is a dictionary with keys like 'login' and 'repoze.who.userid'.

The SP follows this pattern and places the information gathered from the IdP that handled the authentication, plus any extra information received from attribute authorities, in the above-mentioned dictionary under the key 'user'. So in environ["repoze.who.identity"] you will find a dictionary with attributes and values; the attribute names used depend on what's returned from the IdP/AA. If there exists both a name and a friendly name, for instance, the friendly name is used as the key.
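To make this concrete, here is a minimal sketch (not part of the example code) of a WSGI application reading that dictionary. The attribute name `givenName` is hypothetical; what you actually receive depends entirely on the IdP/AA:

```
# Minimal sketch of reading the identity that the s2repoze plugins place
# in the WSGI environment. "givenName" is a hypothetical attribute name;
# attribute values in the 'user' dictionary are lists.
def application(environ, start_response):
    identity = environ.get("repoze.who.identity", {})
    attributes = identity.get("user", {})
    given_name = attributes.get("givenName", ["unknown"])[0]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("Hello %s" % given_name).encode("utf-8")]
```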
#### Setup[¶](#setup)

**sp-wsgi:**

* Go to the folder and copy the example files:

```
cd [your path]/pysaml2/example/sp-wsgi
cp service_conf.py.example service_conf.py
cp sp_conf.py.example sp_conf.py
```

sp_conf.py is configured to run on localhost on port 8087. If you want to, you could make the necessary changes before proceeding to the next step.

* In order to generate the metadata file, open a terminal:

```
cd [your path]/pysaml2/example/sp-wsgi
make_metadata.py sp_conf.py > sp.xml
```

**sp-repoze:**

* Go to the folder: [your path]/pysaml2/example/sp-repoze
* Take the file named sp_conf.py.example and rename it sp_conf.py. sp_conf.py is configured to run on localhost on port 8087. If you want to, you could make the necessary changes before proceeding to the next step.
* In order to generate the metadata file, open a terminal:

```
cd [your path]/pysaml2/example/sp-repoze
make_metadata.py sp_conf.py > sp.xml
```

Important files:

sp_conf.py The SP's configuration
who.ini The repoze.who configuration file

Inside the folder named pki there are two files with certificates: mykey.pem with the private certificate and mycert.pem with the public part. I'll go through these step by step.

##### sp_conf.py[¶](#sp-conf-py)

The configuration is written as described in [Configuration of PySAML2 entities](index.html#howto-config). Among other things, this means the syntax of the configuration can easily be checked. You can see the whole file in example/sp/sp_conf.py; here I will go through it line by line:

```
"service": ["sp"],
```

Tells the software what type of services the software is supposed to supply. It is used to check for the completeness of the configuration and also when constructing metadata from the configuration. More about that later. Allowed values are: "sp" (service provider), "idp" (identity provider) and "aa" (attribute authority).

```
"entityid" : "urn:mace:example.com:saml:sp",
"service_url" : "http://example.com:8087/",
```

The ID of the entity and the URL on which it is listening:

```
"idp_url" : "https://example.com/saml2/idp/SSOService.php",
```

Since this is a very simple SP it only needs to know about one IdP; therefore there is really no need for a metadata file or a WAYF-function or anything like that. It needs the URL of the IdP and that's all:

```
"my_name" : "My first SP",
```

This is just for informal purposes, not really needed but nice to do:

```
"debug" : 1,
```

Well, at this point in time you'd really like to have as much information as possible as to what's going on, right?

```
"key_file" : "./mykey.pem",
"cert_file" : "./mycert.pem",
```

The necessary certificates:

```
"xmlsec_binary" : "/opt/local/bin/xmlsec1",
```

Right now the software is built to use the xmlsec binaries and not the python xmlsec package. There are reasons for this but I won't go into them here:

```
"organization": {
    "name": "Example Co",
    #display_name
    "url": "http://www.example.com/",
},
```

Information about the organization that is behind this SP, only used when building metadata.

```
"contact": [{
    "given_name": "John",
    "sur_name": "Smith",
    "email_address": "<EMAIL>",
    #contact_type
    #company
    #telephone_number
}]
```

Another piece of information that only matters if you build and distribute metadata.

So, now to that part. In order to allow the IdP to talk to you, you may have to provide the one running the IdP with a metadata file. If you have an SP configuration file similar to the one I've walked you through here, but with your information, you can make the metadata file by running the make_metadata script you can find in the tools directory. Change directory to where you have the configuration file and do:

```
make_metadata.py sp_conf.py > metadata.xml
```

##### who.ini[¶](#who-ini)

The file named `who.ini` is in the `sp-repoze` folder. I'm not going through the INI file format here; you should read [Middleware Responsibilities](http://docs.repoze.org/who/2.0/middleware.html) to get a good introduction to the concept.

The configuration of the pysaml2 part of the application's middleware starts with the special module configuration, namely:

```
[plugin:saml2auth]
use = s2repoze.plugins.sp:make_plugin
saml_conf = sp_conf.py
rememberer_name = auth_tkt
debug = 1
path_logout = .*/logout.*
```

which contains a specification ("use") of which function in which module should be used to initialize the part. After that comes the name of the file ("saml_conf") that contains the PySaml2 configuration. The third line ("rememberer_name") points at the plugin that should be used to remember the user information.

After this, the plugin is referenced in a couple of places:

```
[identifiers]
plugins = saml2auth auth_tkt

[authenticators]
plugins = saml2auth

[challengers]
plugins = saml2auth

[mdproviders]
plugins = saml2auth
```

This means that the plugin is used in all phases.

#### Run SP:[¶](#run-sp)

Open a terminal:

```
cd [your path]/pysaml2/example/sp-wsgi
python sp.py sp_conf
```

Note that you should not have the .py extension on the sp_conf.py while running the program.

Now you should be able to open a web browser and go to the service provider (if you didn't change sp_conf.py it should be <http://localhost:8087>). You should be redirected to the IDP and presented with a login screen.
You could enter Username: roland and Password: dianakra. All users are specified in idp.py in a dictionary named PASSWD.

##### The application[¶](#the-application)

The app is, as said before, extremely simple. The only thing that is connected to the PySaml2 configuration is at the bottom, namely where the server is. You have to ascertain that this coincides with what is specified in the PySaml2 configuration. Apart from that, there really is nothing in application.py that demands that you use PySaml2 as middleware. If you switched to using the LDAP or CAS plugins, nothing would change in the application. In the application configuration, yes! But not in the application. And that is really how it should be done.

There is one assumption, and that is that the middleware plugin that gathers information about the user places the extra information as a value on the "user" property in the dictionary found under the key "repoze.who.identity" in the environment.

### An extremely simple example of a SAML2 identity provider[¶](#an-extremly-simple-example-of-a-saml2-identity-provider)

There are 2 example IDPs in the project's example directory:

* idp2 has a static definition of users:
  + user attributes are defined in idp_user.py
  + the password is defined in the PASSWD dict in idp.py
* idp2_repoze is using repoze.who middleware to perform authentication and attribute retrieval

#### Configuration[¶](#configuration)

Entity configuration is described in "Configuration of pysaml2 entities". Server parameters like host and port and various command line parameters are defined in the main part of idp.py.

##### Setup:[¶](#setup)

The folder [your path]/pysaml2/example/idp2 contains a file named idp_conf.py.example. Take the file named idp_conf.py.example and rename it idp_conf.py. Generate a metadata file based on the configuration file (idp_conf.py) by using the command:

```
make_metadata.py idp_conf.py > idp.xml
```

##### Run IDP:[¶](#run-idp)

Open a terminal:

```
cd [your path]/pysaml2/example/idp2
python idp.py idp_conf
```

Note that you should not have the .py extension on the idp_conf.py while running the program.

How to use PySAML2[¶](#how-to-use-pysaml2)
---

Date: Sep 23, 2022

### Configuration of PySAML2 entities[¶](#configuration-of-pysaml2-entities)

Whether you plan to run a PySAML2 Service Provider, Identity Provider or an attribute authority, you have to configure it. The format of the configuration file is the same regardless of which type of service you plan to run.
What differs are some of the directives. Below you will find a list of all the used directives in alphabetical order. The configuration is written as a python module which contains a named dictionary ("CONFIG") that contains the configuration directives.

The basic structure of the configuration file is therefore like this:

```
from saml2 import BINDING_HTTP_REDIRECT

CONFIG = {
    "entityid": "http://saml.example.com:saml/idp.xml",
    "name": "Rolands IdP",
    "service": {
        "idp": {
            "endpoints": {
                "single_sign_on_service": [
                    (
                        "http://saml.example.com:saml:8088/sso",
                        BINDING_HTTP_REDIRECT,
                    ),
                ],
                "single_logout_service": [
                    (
                        "http://saml.example.com:saml:8088/slo",
                        BINDING_HTTP_REDIRECT,
                    ),
                ],
            },
            ...
        }
    },
    "key_file": "my.key",
    "cert_file": "ca.pem",
    "xmlsec_binary": "/usr/local/bin/xmlsec1",
    "delete_tmpfiles": True,
    "metadata": {
        "local": [
            "edugain.xml",
        ],
    },
    "attribute_map_dir": "attributemaps",
    ...
}
```

Note You can build the metadata file for your services directly from the configuration. The make_metadata.py script in the PySAML2 tools directory will do that for you.

#### Configuration directives[¶](#configuration-directives)

* [General directives](#general-directives)
  + [logging](#logging)
  + [debug](#debug)
  + [http_client_timeout](#http-client-timeout)
  + [additional_cert_files](#additional-cert-files)
  + [entity_attributes](#entity-attributes)
  + [assurance_certification](#assurance-certification)
  + [attribute_map_dir](#attribute-map-dir)
  + [contact_person](#contact-person)
  + [entityid](#entityid)
  + [name](#name)
  + [description](#description)
  + [verify_ssl_cert](#verify-ssl-cert)
  + [key_file](#key-file)
  + [cert_file](#cert-file)
  + [tmp_cert_file](#tmp-cert-file)
  + [tmp_key_file](#tmp-key-file)
  + [encryption_keypairs](#encryption-keypairs)
  + [generate_cert_info](#generate-cert-info)
  + [ca_certs](#ca-certs)
  + [metadata](#metadata)
  + [organization](#organization)
  + [preferred_binding](#preferred-binding)
  + [service](#service)
  + [accepted_time_diff](#accepted-time-diff)
  + [allow_unknown_attributes](#allow-unknown-attributes)
  + [xmlsec_binary](#xmlsec-binary)
  + [xmlsec_path](#xmlsec-path)
  + [delete_tmpfiles](#delete-tmpfiles)
  + [valid_for](#valid-for)
  + [metadata_key_usage](#metadata-key-usage)
  + [secret](#secret)
  + [crypto_backend](#crypto-backend)
  + [verify_encrypt_advice](#verify-encrypt-advice)
  + [verify_encrypt_cert_assertion](#verify-encrypt-cert-assertion)
* [Specific directives](#specific-directives)
  + [idp/aa](#idp-aa)
    - [sign_assertion](#sign-assertion)
    - [sign_response](#sign-response)
    - [encrypt_assertion](#encrypt-assertion)
    - [encrypted_advice_attributes](#encrypted-advice-attributes)
    - [encrypt_assertion_self_contained](#encrypt-assertion-self-contained)
    - [want_authn_requests_signed](#want-authn-requests-signed)
    - [want_authn_requests_only_with_valid_cert](#want-authn-requests-only-with-valid-cert)
    - [policy](#policy)
    - [scope](#scope)
    - [ui_info](#ui-info)
    - [name_qualifier](#name-qualifier)
    - [session_storage](#session-storage)
    - [domain](#domain)
  + [sp](#sp)
    - [authn_requests_signed](#authn-requests-signed)
    - [want_response_signed](#want-response-signed)
    - [force_authn](#force-authn)
    - [name_id_policy_format](#name-id-policy-format)
    - [name_id_format_allow_create](#name-id-format-allow-create)
    - [name_id_format](#name-id-format)
    - [allow_unsolicited](#allow-unsolicited)
    - [hide_assertion_consumer_service](#hide-assertion-consumer-service)
    - [sp_type](#sp-type)
    - [sp_type_in_metadata](#sp-type-in-metadata)
    - [requested_attributes](#requested-attributes)
    - [idp](#idp)
    - [optional_attributes](#optional-attributes)
    - [required_attributes](#required-attributes)
    - [want_assertions_signed](#want-assertions-signed)
    - [want_assertions_or_response_signed](#want-assertions-or-response-signed)
    - [discovery_response](#discovery-response)
    - [ecp](#ecp)
    - [requested_attribute_name_format](#requested-attribute-name-format)
    - [requested_authn_context](#requested-authn-context)
  + [idp/aa/sp](#idp-aa-sp)
    - [endpoints](#endpoints)
    - [only_use_keys_in_metadata](#only-use-keys-in-metadata)
    - [validate_certificate](#validate-certificate)
    - [logout_requests_signed](#logout-requests-signed)
    - [signing_algorithm](#signing-algorithm)
    - [digest_algorithm](#digest-algorithm)
    - [logout_responses_signed](#logout-responses-signed)
    - [subject_data](#subject-data)
    - [virtual_organization](#virtual-organization)
* [Complete example](#complete-example)

##### [General directives](#id1)[¶](#general-directives)

###### [logging](#id2)[¶](#logging)

The logging configuration format is the Python logging format. The configuration is passed directly to the Python logging dictionary configuration handler.

Example:

```
"logging": {
    "version": 1,
    "formatters": {
        "simple": {
            "format": "[%(asctime)s] [%(levelname)s] [%(name)s.%(funcName)s] %(message)s",
        },
    },
    "handlers": {
        "stdout": {
            "class": "logging.StreamHandler",
            "stream": "ext://sys.stdout",
            "level": "DEBUG",
            "formatter": "simple",
        },
    },
    "loggers": {
        "saml2": {
            "level": "DEBUG"
        },
    },
    "root": {
        "level": "DEBUG",
        "handlers": [
            "stdout",
        ],
    },
},
```

The example configuration above will enable DEBUG logging to stdout.

###### [debug](#id3)[¶](#debug)

Example:

```
debug: 1
```

Whether debug information should be sent to the log file.

###### [http_client_timeout](#id4)[¶](#http-client-timeout)

Example:

```
http_client_timeout: 10
```

The timeout of HTTP requests, in seconds. Defaults to None.

###### [additional_cert_files](#id5)[¶](#additional-cert-files)

Example:

```
additional_cert_files: ["other-cert.pem", "another-cert.pem"]
```

Additional public certificates that will be listed. Useful during cert/key rotation, or if you need to include a certificate chain. Each entry in *additional_cert_files* must be a PEM formatted file with a single certificate.

###### [entity_attributes](#id6)[¶](#entity-attributes)

Generates an `Attribute` element with the given NameFormat, Name, FriendlyName and values, each as an `AttributeValue` element. The element is added under the generated metadata `EntityDescriptor` as an `Extension` element under the `EntityAttributes` element. Parts may be omitted; in the example below no FriendlyName is set.

Example:

```
"entity_attributes": [
    {
        "name_format": "urn:oasis:names:tc:SAML:2.0:attrname-format:uri",
        "name": "urn:oasis:names:tc:SAML:profiles:subject-id:req",
        # "friendly_name" is not set
        "values": ["any"],
    },
]
```

###### [assurance_certification](#id7)[¶](#assurance-certification)

Example:

```
"assurance_certification": [
    "https://refeds.org/sirtfi",
]
```

Generates an `Attribute` element with name-format `urn:oasis:names:tc:SAML:2.0:attrname-format:uri` and name `urn:oasis:names:tc:SAML:attribute:assurance-certification` that contains `AttributeValue` elements with the given values from the list. The element is added under the generated metadata `EntityDescriptor` as an `Extension` element under the `EntityAttributes` element. Read more about [representing assurance information in the specification](https://wiki.oasis-open.org/security/SAML2IDAssuranceProfile).
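For orientation, with the assurance_certification example above the generated metadata would contain an extension block roughly like the following. This is a sketch; the exact namespace prefixes and serialization depend on PySAML2's output:

```
<md:Extensions>
  <mdattr:EntityAttributes>
    <saml:Attribute Name="urn:oasis:names:tc:SAML:attribute:assurance-certification"
                    NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri">
      <saml:AttributeValue>https://refeds.org/sirtfi</saml:AttributeValue>
    </saml:Attribute>
  </mdattr:EntityAttributes>
</md:Extensions>
```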
###### [attribute_map_dir](#id8)[¶](#attribute-map-dir)

Points to a directory which holds the attribute maps as Python modules.

Example:

```
"attribute_map_dir": "attribute-maps"
```

A typical map file will look like this:

```
MAP = {
    "identifier": "urn:oasis:names:tc:SAML:2.0:attrname-format:basic",
    "fro": {
        'urn:mace:dir:attribute-def:aRecord': 'aRecord',
        'urn:mace:dir:attribute-def:aliasedEntryName': 'aliasedEntryName',
        'urn:mace:dir:attribute-def:aliasedObjectName': 'aliasedObjectName',
        'urn:mace:dir:attribute-def:associatedDomain': 'associatedDomain',
        'urn:mace:dir:attribute-def:associatedName': 'associatedName',
        ...
    },
    "to": {
        'aRecord': 'urn:mace:dir:attribute-def:aRecord',
        'aliasedEntryName': 'urn:mace:dir:attribute-def:aliasedEntryName',
        'aliasedObjectName': 'urn:mace:dir:attribute-def:aliasedObjectName',
        'associatedDomain': 'urn:mace:dir:attribute-def:associatedDomain',
        'associatedName': 'urn:mace:dir:attribute-def:associatedName',
        ...
    }
}
```

The attribute map module contains a MAP dictionary with three items. The identifier item is the name-format you expect to support. The *to* and *fro* sub-dictionaries then contain the mapping between the names. As you can see, the format is again a Python dictionary where the key is the name to convert from and the value is the name to convert to. Since *to* is in most cases the inverse of *fro*, the software allows you to specify only one of them and will automatically create the other.

###### [contact_person](#id9)[¶](#contact-person)

This is only used by *make_metadata.py* when it constructs the metadata for the service described by the configuration file. This is where you describe who can be contacted if questions arise about the service or if support is needed. According to the standard, the possible types are **technical**, **support**, **administrative**, **billing** and **other**:

```
contact_person: [
    {
        "givenname": "Derek",
        "surname": "Jeter",
        "company": "Example Co.",
        "mail": ["<EMAIL>"],
        "type": "technical",
    },
    {
        "givenname": "Joe",
        "surname": "Girardi",
        "company": "Example Co.",
        "mail": "<EMAIL>",
        "type": "administrative",
    },
]
```

###### [entityid](#id10)[¶](#entityid)

Example:

```
entityid: "http://saml.example.com/sp"
```

The globally unique identifier of the entity.

Note

It is recommended that the entityid should point to a real webpage where the metadata for the entity can be found.

###### [name](#id11)[¶](#name)

A string value that sets the name of the PySAML2 entity.

Example:

```
"name": "Example IdP"
```

###### [description](#id12)[¶](#description)

A string value that sets the description of the PySAML2 entity.

Example:

```
"description": "My IdP",
```

###### [verify_ssl_cert](#id13)[¶](#verify-ssl-cert)

Specifies if the SSL certificates should be verified. Can be `True` or `False`. The default is `False`.

Example:

```
"verify_ssl_cert": True
```

###### [key_file](#id14)[¶](#key-file)

Example:

```
key_file: "key.pem"
```

*key_file* is the name of a PEM formatted file that contains the private key of the service. This is currently used both to encrypt/sign assertions and as the client key in an HTTPS session.

###### [cert_file](#id15)[¶](#cert-file)

Example:

```
cert_file: "cert.pem"
```

This is the public part of the service's private/public key pair. *cert_file* must be a PEM formatted file with a single certificate.
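If you just need a key pair for local testing, one common way to produce the two PEM files referenced above is the openssl command line tool (the subject name here is a placeholder, not something PySAML2 mandates):

```
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout key.pem -out cert.pem \
    -days 365 -subj "/CN=sp.example.com"
```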
###### [tmp_cert_file](#id16)[¶](#tmp-cert-file)

Example:

```
"tmp_cert_file": "tmp_cert.pem"
```

*tmp_cert_file* is a PEM formatted certificate file.

###### [tmp_key_file](#id17)[¶](#tmp-key-file)

Example:

```
"tmp_key_file": "tmp_key.pem"
```

*tmp_key_file* is a PEM formatted key file.

###### [encryption_keypairs](#id18)[¶](#encryption-keypairs)

Indicates which key pairs will be used for encryption capabilities:

```
# Encryption
'encryption_keypairs': [
    {
        'key_file': BASE_DIR + '/certificates/private.key',
        'cert_file': BASE_DIR + '/certificates/public.cert',
    },
],
```

###### [generate_cert_info](#id19)[¶](#generate-cert-info)

Specifies if information about the certificate should be generated. Can be `True` or `False`.

Example:

```
"generate_cert_info": False
```

###### [ca_certs](#id20)[¶](#ca-certs)

This is the path to a file containing root CA certificates for SSL server certificate validation.

Example:

```
"ca_certs": full_path("cacerts.txt"),
```

###### [metadata](#id21)[¶](#metadata)

Contains a list of places where metadata can be found. This can be

* a local directory accessible on the server the service runs on
* a local file accessible on the server the service runs on
* a remote URL serving aggregate metadata
* a metadata query protocol (MDQ) service URL

For example:

```
"metadata": {
    "local": [
        "/opt/metadata",
        "metadata.xml",
        "vo_metadata.xml",
    ],
    "remote": [
        {
            "url": "https://kalmar2.org/simplesaml/module.php/aggregator/?id=kalmarcentral2&set=saml2",
            "cert": "kalmar2.cert",
        },
    ],
    "mdq": [
        {
            "url": "http://mdq.ukfederation.org.uk/",
            "cert": "ukfederation-mdq.pem",
            "freshness_period": "P0Y0M0DT2H0M0S",
        },
        {
            "url": "https://mdq.thaturl.org/",
            "disable_ssl_certificate_validation": True,
            "check_validity": False,
        },
    ],
},
```

The above configuration means that the service should read metadata from a local directory and two local aggregate files, fetch one aggregate metadata file from a remote server, and query two remote MDQ servers. To verify the authenticity of the metadata aggregate downloaded from the remote server and the MDQ servers, local copies of the metadata signing certificates should be used. These public keys must be acquired by some secure out-of-band method before being placed on the local file system.

When the parameter *check_validity* is set to False, metadata that has expired will be accepted as valid. When the parameter *disable_ssl_certificate_validation* is set to True, validation of the SSL certificate will be skipped.

When using a remote metadata source, the node_name option can be set to define the name of the root node of the XML document, if needed. Usually, the node name will be urn:oasis:names:tc:SAML:2.0:metadata:EntitiesDescriptor or urn:oasis:names:tc:SAML:2.0:metadata:EntityDescriptor (node namespace and node tag name).

When using MDQ, the freshness_period option can be set to define a period for which the metadata fetched from the MDQ server is considered fresh. After that period has passed the metadata is no longer valid and must be fetched again. The period must be in the format defined in [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html) or [RFC 3339](https://tools.ietf.org/html/rfc3339#appendix-A). By default, if freshness_period is not defined, the metadata is refreshed every 12 hours (P0Y0M0DT12H0M0S).

###### [organization](#id22)[¶](#organization)

Only used by *make_metadata.py*.
This is where you describe the organization responsible for the service:

```
"organization": {
    "name": [
        ("Example Company", "en"),
        ("Exempel AB", "se")
    ],
    "display_name": ["Exempel AB"],
    "url": [
        ("http://example.com", "en"),
        ("http://exempel.se", "se"),
    ],
}
```

Note

You can specify the language of the name, or the language used on the webpage, by entering a tuple, instead of a simple string, where the second part is the language code. If you don’t specify a language, the default is “en” (English).

###### [preferred_binding](#id23)[¶](#preferred-binding)

Which binding should be preferred for a service.

Example configuration:

```
"preferred_binding": {
    "single_sign_on_service": [
        'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect',
        'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST',
        'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Artifact',
    ],
    "single_logout_service": [
        'urn:oasis:names:tc:SAML:2.0:bindings:SOAP',
        'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect',
        'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST',
        'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Artifact',
    ],
}
```

The available services are:

* manage_name_id_service
* assertion_consumer_service
* name_id_mapping_service
* authn_query_service
* attribute_service
* authz_service
* assertion_id_request_service
* artifact_resolution_service
* attribute_consuming_service
* single_logout_service

###### [service](#id24)[¶](#service)

Which services the server will provide; those are combinations of “idp”, “sp” and “aa”. So if a server is a Service Provider (SP), then the configuration could look something like this:

```
"service": {
    "sp": {
        "name": "<NAME>",
        "endpoints": {
            "assertion_consumer_service": ["http://localhost:8087/"],
            "single_logout_service": [
                (
                    "http://localhost:8087/slo",
                    'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect',
                ),
            ],
        },
        "required_attributes": [
            "surname",
            "givenname",
            "edupersonaffiliation",
        ],
        "optional_attributes": ["title"],
        "idp": {
            "urn:mace:umu.se:saml:roland:idp": None,
        },
    }
},
```

There are two options common to all services: ‘name’ and ‘endpoints’. The remaining options are specific to one or the other of the service types; which service type an option applies to is noted alongside the option.

###### [accepted_time_diff](#id25)[¶](#accepted-time-diff)

If the clocks of your computer and the computer you are communicating with are not in sync, you can state here how big a difference, in seconds, you are prepared to accept.

Note

This will indiscriminately affect all time comparisons. Hence your server may accept a statement that in fact is too old.

###### [allow_unknown_attributes](#id26)[¶](#allow-unknown-attributes)

Indicates that attributes that are not recognized (i.e. not configured in the attribute mapping) will not be discarded. Defaults to False.

###### [xmlsec_binary](#id27)[¶](#xmlsec-binary)

Currently the xmlsec1 binary is used for all signing and encryption operations. This option defines where the binary is located.

Example:

```
"xmlsec_binary": "/usr/local/bin/xmlsec1",
```

###### [xmlsec_path](#id28)[¶](#xmlsec-path)

This option is used to define non-system paths where the xmlsec1 binary can be located. It can be used when the xmlsec_binary option is not defined.
Example:

```
"xmlsec_path": ["/usr/local/bin", "/opt/local/bin"],
```

OR:

```
from saml2.sigver import get_xmlsec_binary

if get_xmlsec_binary:
    xmlsec_path = get_xmlsec_binary(["/opt/local/bin", "/usr/local/bin"])
else:
    xmlsec_path = "/usr/bin/xmlsec1"

CONFIG = {
    ...
    "xmlsec_binary": xmlsec_path,
    ...
}
```

###### [delete_tmpfiles](#id29)[¶](#delete-tmpfiles)

In many cases temporary files will have to be created during the encryption/decryption/signing/validation process. This option defines whether these temporary files will be automatically deleted when they are no longer needed. Setting this to False will keep these files until they are deleted manually or automatically by the OS (e.g. the Linux rules for /tmp). If this option is absent, it defaults to True.

###### [valid_for](#id30)[¶](#valid-for)

How many *hours* this configuration is expected to be accurate:

```
"valid_for": 24
```

This, of course, is only used by *make_metadata.py*. The server will not stop working when this amount of time has elapsed :-).

###### [metadata_key_usage](#id31)[¶](#metadata-key-usage)

This specifies the purpose of the entity’s cryptographic keys used to sign data. If this option is not configured it will default to `"both"`. The possible options for this configuration are `both`, `signing`, `encryption`. If metadata_key_usage is set to `"signing"` or `"both"` and a cert_file is provided, the value of use in the KeyDescriptor element will be set to `"signing"`. If metadata_key_usage is set to `"both"` or `"encryption"` and an enc_cert is provided, the value of `"use"` in the KeyDescriptor will be set to `"encryption"`.

Example:

```
"metadata_key_usage" : "both",
```

###### [secret](#id32)[¶](#secret)

A string value that is used in the generation of the RelayState.

Example:

```
"secret": "0123456789",
```

###### [crypto_backend](#id33)[¶](#crypto-backend)

Defines the crypto backend used for signing and encryption. The default is `xmlsec1`. The options are `xmlsec1` and `XMLSecurity`. If set to “XMLSecurity”, the crypto backend will be pyXMLSecurity.

Example:

```
"crypto_backend": "xmlsec1",
```

###### [verify_encrypt_advice](#id34)[¶](#verify-encrypt-advice)

Specifies how the certificate used for encrypting the assertions in the advice element should be verified. The value is a callable that takes the certificate as a string and returns whether it is valid.

Example:

```
def verify_encrypt_cert(cert_str):
    osw = OpenSSLWrapper()
    ca_cert_str = osw.read_str_from_file(full_path("root_cert/localhost.ca.crt"))
    valid, mess = osw.verify(ca_cert_str, cert_str)
    return valid
```

```
"verify_encrypt_cert_advice": verify_encrypt_cert,
```

###### [verify_encrypt_cert_assertion](#id35)[¶](#verify-encrypt-cert-assertion)

Specifies how the certificate used for encrypting assertions should be verified, with a callable like the one above.

Example:

```
"verify_encrypt_cert_assertion": verify_encrypt_cert
```

##### [Specific directives](#id36)[¶](#specific-directives)

Directives that are specific to a certain type of service.

###### [idp/aa](#id37)[¶](#idp-aa)

Directives that are specific to an IdP or AA service instance.

####### [sign_assertion](#id38)[¶](#sign-assertion)

Specifies if the IdP should sign the assertion in an authentication response or not. Can be True or False. Default is False.

####### [sign_response](#id39)[¶](#sign-response)

Specifies if the IdP should sign the authentication response or not. Can be True or False. Default is False.

####### [encrypt_assertion](#id40)[¶](#encrypt-assertion)

Specifies if the IdP should encrypt the assertions. Can be `True` or `False`. Default is `False`.
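Taken together, a minimal sketch of how the three directives above sit in an IdP configuration (the values here are illustrative, not recommendations):

```
"service": {
    "idp": {
        "sign_response": True,       # sign the outer authentication response
        "sign_assertion": True,      # also sign the assertion itself
        "encrypt_assertion": False,  # leave assertions unencrypted
    }
}
```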
####### [encrypted_advice_attributes](#id41)[¶](#encrypted-advice-attributes)

Specifies if assertions in the advice element should be encrypted. Can be `True` or `False`. Default is `False`.

####### [encrypt_assertion_self_contained](#id42)[¶](#encrypt-assertion-self-contained)

Specifies if all encrypted assertions should have all namespaces self contained. Can be `True` or `False`. Default is `True`.

####### [want_authn_requests_signed](#id43)[¶](#want-authn-requests-signed)

Indicates that the AuthnRequest received by this IdP should be signed. Can be `True` or `False`. The default value is `False`.

####### [want_authn_requests_only_with_valid_cert](#id44)[¶](#want-authn-requests-only-with-valid-cert)

When verifying a signed AuthnRequest, ignore the signature and verify the certificate instead.

####### [policy](#id45)[¶](#policy)

If the server is an IdP and/or an AA, then there might be reasons to do things differently depending on who is asking (i.e. which service is the requester); the policy is where this behaviour is specified.

The keys are SP entity identifiers, Registration Authority names, or ‘default’. First, the policy for the requesting service is looked up using the SP entityID. If no such policy is found, and if the SP metadata includes a Registration Authority, then a policy for the registration authority is looked up using the Registration Authority name. If no policy is found, then the ‘default’ is looked up. If there is no default and only SP entity identifiers as keys, then the server will only accept connections from the specified SPs.

An example might be:

```
"service": {
    "idp": {
        "policy": {
            # a policy for a service
            "urn:mace:example.com:saml:roland:sp": {
                "lifetime": {"minutes": 5},
                "attribute_restrictions": {
                    "givenName": None,
                    "surName": None,
                },
            },
            # a policy for a registration authority
            "http://www.swamid.se/": {
                "attribute_restrictions": {
                    "givenName": None,
                },
            },
            # the policy for all other services
            "default": {
                "lifetime": {"minutes": 15},
                "attribute_restrictions": None,  # means all I have
                "name_form": "urn:oasis:names:tc:SAML:2.0:attrname-format:uri",
                "entity_categories": [
                    "edugain",
                ],
            },
        }
    }
}
```

*lifetime*
This is the maximum amount of time before the information should be regarded as stale. In an Assertion, this is represented by the NotOnOrAfter attribute.

*attribute_restrictions*
By default, there are no restrictions as to which attributes should be returned. Instead, all the attributes and values that are gathered by the database backends will be returned if nothing else is stated. In the example above, the SP with the entity identifier “urn:mace:example.com:saml:roland:sp” has an attribute restriction: only the attributes ‘givenName’ and ‘surName’ are to be returned. There are no restrictions on the values of these attributes.

*name_form*
Which name-form should be used when sending assertions. Using this information, the attribute name in the data source will be mapped to the friendly name, and the SAML attribute name will be taken from the uri/oid defined in the attribute map.

*nameid_format*
Which NameID format should be used. Defaults to urn:oasis:names:tc:SAML:2.0:nameid-format:transient.

*entity_categories*
Entity categories to apply.
*sign*
Possible choices: “response”, “assertion”, “on_demand”

If restrictions on values are deemed necessary, those are represented by regular expressions:

```
"service": {
    "aa": {
        "policy": {
            "urn:mace:umu.se:saml:roland:sp": {
                "lifetime": {"minutes": 5},
                "attribute_restrictions": {
                    "mail": [r".*\.umu\.se$"],
                }
            }
        }
    }
}
```

Here only mail addresses that end with “.umu.se” will be returned.

####### [scope](#id46)[¶](#scope)

A list of string values that will be used to set the `<Scope>` element. The element's `regexp` attribute defaults to `False`.

Example:

```
"scope": ["example.org", "example.com"],
```

####### [ui_info](#id47)[¶](#ui-info)

This determines what information to display about an entity by configuring its mdui:UIInfo element. The configurable options include:

*privacy_statement_url*
The URL to information about the privacy practices of the entity.

*information_url*
The URL of localized information about the entity.

*logo*
The logo image for the entity. The value is a dictionary with keys height, width and text.

*display_name*
The localized name for the entity.

*description*
The localized description of the entity. The value is a dictionary with keys text and lang.

*keywords*
The localized search keywords for the entity. The value is a dictionary with keys lang and text.

Example:

```
"ui_info": {
    "privacy_statement_url": "http://example.com/saml2/privacyStatement.html",
    "information_url": "http://example.com/saml2/info.html",
    "logo": {
        "height": "40",
        "width" : "30",
        "text": "http://example.com/logo.jpg"
    },
    "display_name": "Example Co.",
    "description" : {"text": "Exempel Bolag", "lang": "se"},
    "keywords": {"lang": "en", "text": ["foo", "bar"]}
}
```

####### [name_qualifier](#id48)[¶](#name-qualifier)

A string value that sets the `NameQualifier` attribute of the `<NameIdentifier>` element.

Example:

```
"name_qualifier": "http://authentic.example.com/saml/metadata",
```

####### [session_storage](#id49)[¶](#session-storage)

Defines the backend used to store session information.

Example:

```
"session_storage": ("mongodb", "session")
```

####### [domain](#id50)[¶](#domain)

Example:

```
"domain": "umu.se",
```

###### [sp](#id51)[¶](#sp)

Directives specific to SP instances.

####### [authn_requests_signed](#id52)[¶](#authn-requests-signed)

Indicates if the Authentication Requests sent by this SP should be signed by default. This can be overridden by application code for a specific call. This sets the AuthnRequestsSigned attribute of the SPSSODescriptor node of the metadata, so the IdP will know this SP's preference. Valid values are True or False. Default value is True.

Example:

```
"service": {
    "sp": {
        "authn_requests_signed": True,
    }
}
```

####### [want_response_signed](#id53)[¶](#want-response-signed)

Indicates that Authentication Responses to this SP must be signed. If set to True, the SP will not consume any SAML Responses that are not signed. Valid values are True or False. Default value is True.

Example:

```
"service": {
    "sp": {
        "want_response_signed": True,
    }
}
```

####### [force_authn](#id54)[¶](#force-authn)

Mandates that the identity provider MUST authenticate the presenter directly rather than rely on a previous security context.

Example:

```
"service": {
    "sp": {
        "force_authn": True,
    }
}
```

####### [name_id_policy_format](#id55)[¶](#name-id-policy-format)

A string value that will be used to set the `Format` attribute of the `<NameIDPolicy>` element of an `<AuthnRequest>`.
Example:

```
"service": {
    "sp": {
        "name_id_policy_format": "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent",
    }
}
```

####### [name_id_format_allow_create](#id56)[¶](#name-id-format-allow-create)

A boolean value (`True` or `False`) that will be used to set the `AllowCreate` attribute of the `<NameIDPolicy>` element of an `<AuthnRequest>`.

Example:

```
"service": {
    "sp": {
        "name_id_format_allow_create": True,
    }
}
```

####### [name_id_format](#id57)[¶](#name-id-format)

A list of string values that will be used to set the `<NameIDFormat>` element of the metadata of an entity.

Example:

```
"service": {
    "sp": {
        "name_id_format": [
            "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent",
            "urn:oasis:names:tc:SAML:2.0:nameid-format:transient",
        ]
    }
}
```

####### [allow_unsolicited](#id58)[¶](#allow-unsolicited)

When set to true, the SP will consume unsolicited SAML Responses, i.e. SAML Responses for which it has not sent a corresponding SAML Authentication Request.

Example:

```
"service": {
    "sp": {
        "allow_unsolicited": True,
    }
}
```

####### [hide_assertion_consumer_service](#id59)[¶](#hide-assertion-consumer-service)

When set to true, the AuthnRequest will not include the AssertionConsumerServiceURL and ProtocolBinding attributes.

Example:

```
"service": {
    "sp": {
        "hide_assertion_consumer_service": True,
    }
}
```

This kind of functionality is required for the eIDAS SAML profile.

> eIDAS-Connectors SHOULD NOT provide AssertionConsumerServiceURL.

Note

This is relevant only for the eIDAS SAML profile.

####### [sp_type](#id60)[¶](#sp-type)

Sets the value for the eIDAS SPType node. According to the eIDAS specification, the value can be either *public* or *private*.

Example:

```
"service": {
    "sp": {
        "sp_type": "private",
    }
}
```

Note

This is relevant only for the eIDAS SAML profile.

####### [sp_type_in_metadata](#id61)[¶](#sp-type-in-metadata)

Whether the SPType node should appear in the metadata document or as part of each AuthnRequest.

Example:

```
"service": {
    "sp": {
        "sp_type_in_metadata": True,
    }
}
```

Note

This is relevant only for the eIDAS SAML profile.

####### [requested_attributes](#id62)[¶](#requested-attributes)

A list of attributes that the SP requires from an eIDAS-Service (IdP). Each attribute is an object with the following fields:

* friendly_name
* name
* required
* name_format

Where friendly_name is an attribute name such as *DateOfBirth*, name is the full attribute name such as *http://eidas.europa.eu/attributes/naturalperson/DateOfBirth*, required indicates whether this attribute is required for authentication, and name_format indicates the name format for that attribute, such as *urn:oasis:names:tc:SAML:2.0:attrname-format:uri*.

It is mandatory that at least name or friendly_name is set. By default, attributes are assumed to be required. Missing fields are inferred from the attribute map data.

Example:

```
"service": {
    "sp": {
        "requested_attributes": [
            {
                "name": "http://eidas.europa.eu/attributes/naturalperson/PersonIdentifier",
            },
            {
                "friendly_name": "DateOfBirth",
                "required": False,
            },
        ],
    }
}
```

Note

This is relevant only for the eIDAS SAML profile. This option is different from the required_attributes and optional_attributes parameters that control the requested attributes in the metadata of an SP.

####### [idp](#id63)[¶](#idp)

Defines the set of IdPs that this SP is allowed to use; if unset, all listed IdPs may be used. If set, then the value is expected to be a list with entity identifiers for the allowed IdPs.
A typical configuration, when the allowed set of IdPs is limited, would look something like this:

```
"service": {
    "sp": {
        "idp": ["urn:mace:umu.se:saml:roland:idp"],
    }
}
```

In this case, the SP has only one IdP it can use.

####### [optional_attributes](#id64)[¶](#optional-attributes)

Attributes that this SP would like to receive from IdPs.

Example:

```
"service": {
    "sp": {
        "optional_attributes": ["title"],
    }
}
```

Since the attribute names used here are the user-friendly ones, an attribute map must exist, so that the server can use the full name when communicating with other servers.

####### [required_attributes](#id65)[¶](#required-attributes)

Attributes that this SP demands to receive from IdPs.

Example:

```
"service": {
    "sp": {
        "required_attributes": [
            "surname",
            "givenName",
            "mail",
        ],
    }
}
```

Again, as for *optional_attributes*, the names given are expected to be the user-friendly names.

####### [want_assertions_signed](#id66)[¶](#want-assertions-signed)

Indicates if this SP wants the IdP to send the assertions signed. This sets the WantAssertionsSigned attribute of the SPSSODescriptor node of the metadata, so the IdP will know this SP's preference. Valid values are True or False. Default value is False.

Example:

```
"service": {
    "sp": {
        "want_assertions_signed": True,
    }
}
```

####### [want_assertions_or_response_signed](#id67)[¶](#want-assertions-or-response-signed)

Indicates that *either* the Authentication Response *or* the assertions contained within the response to this SP must be signed. Valid values are True or False. Default value is False.

This configuration directive **does not** override `want_response_signed` or `want_assertions_signed`. For example, if `want_response_signed` is True and the Authentication Response is not signed, an exception will be thrown regardless of the value of this configuration directive. Thus, to configure the SP to accept either a signed response or signed assertions, set `want_response_signed` and `want_assertions_signed` both to False and this directive to True.

Example:

```
"service": {
    "sp": {
        "want_response_signed": False,
        "want_assertions_signed": False,
        "want_assertions_or_response_signed": True,
    }
}
```

####### [discovery_response](#id68)[¶](#discovery-response)

This configuration allows the SP to include one or more Discovery Response Endpoints. The discovery_response can be just the URL:

```
"discovery_response": ["http://example.com/sp/ds"],
```

or it can be a 2-tuple of URL and binding:

```
from saml2.extension.idpdisc import BINDING_DISCO

"discovery_response": [("http://example.com/sp/ds", BINDING_DISCO)]
```

####### [ecp](#id69)[¶](#ecp)

This configuration option takes a dictionary with the ECP client IP address as the key and the entity ID as the value.

Example:

```
"ecp": {
    "203.0.113.254": "http://example.com/idp",
}
```

####### [requested_attribute_name_format](#id70)[¶](#requested-attribute-name-format)

This sets the NameFormat attribute in the `<RequestedAttribute>` element. The name formats are defined in the saml2.saml module. If not configured, the default is `NAME_FORMAT_URI`, which corresponds to `urn:oasis:names:tc:SAML:2.0:attrname-format:uri`.

Example:

```
from saml2.saml import NAME_FORMAT_BASIC
```

```
"requested_attribute_name_format": NAME_FORMAT_BASIC
```

####### [requested_authn_context](#id71)[¶](#requested-authn-context)

This configuration option defines the `<RequestedAuthnContext>` for an AuthnRequest by a client.
The value is a dictionary with two fields:

* `authn_context_class_ref`: a list of string values representing `<AuthnContextClassRef>` elements.
* `comparison`: a string representing the Comparison xml-attribute value of the `<RequestedAuthnContext>` element. Per the SAML core specification, the value should be one of “exact”, “minimum”, “maximum”, or “better”. The default is “exact”.

Example:

```
"service": {
    "sp": {
        "requested_authn_context": {
            "authn_context_class_ref": [
                "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport",
                "urn:oasis:names:tc:SAML:2.0:ac:classes:TLSClient",
            ],
            "comparison": "minimum",
        }
    }
}
```

###### [idp/aa/sp](#id72)[¶](#idp-aa-sp)

If the configuration covers two or three different service types (for example, one server acting as both an IdP and an SP), you might in some cases want the directives below to differ between the services.

####### [endpoints](#id73)[¶](#endpoints)

Where the endpoints for the provided services are. This directive has as its value a dictionary with one or more of the following keys:

* artifact_resolution_service (aa, idp and sp)
* [assertion_consumer_service](https://wiki.shibboleth.net/confluence/display/CONCEPT/AssertionConsumerService) (sp)
* assertion_id_request_service (aa, idp)
* attribute_service (aa)
* manage_name_id_service (aa, idp)
* name_id_mapping_service (idp)
* single_logout_service (aa, idp, sp)
* single_sign_on_service (idp)

The value per service is a list of endpoint specifications. An endpoint specification can either be just the URL:

```
"http://localhost:8088/A"
```

or it can be a 2-tuple (URL + binding):

```
from saml2 import BINDING_HTTP_POST

("http://localhost:8087/A", BINDING_HTTP_POST)
```

or a 3-tuple (URL + binding + index):

```
from saml2 import BINDING_HTTP_POST

("http://lingon.catalogix.se:8087/A", BINDING_HTTP_POST, 1)
```

If no binding is specified, no index can be set. If no index is specified, the index is set based on the position in the list.

Example:

```
"service": {
    "idp": {
        "endpoints": {
            "single_sign_on_service": [
                ("http://localhost:8088/sso", BINDING_HTTP_REDIRECT),
            ],
            "single_logout_service": [
                ("http://localhost:8088/slo", BINDING_HTTP_REDIRECT),
            ],
        },
    },
},
```

####### [only_use_keys_in_metadata](#id74)[¶](#only-use-keys-in-metadata)

If set to False, the certificate contained in a SAML message will be used for signature verification. Defaults to True.

####### [validate_certificate](#id75)[¶](#validate-certificate)

Indicates that the certificate used to sign SAML messages must be valid. Defaults to False.

####### [logout_requests_signed](#id76)[¶](#logout-requests-signed)

Indicates if this entity will sign the Logout Requests originated from it. This can be overridden by application code for a specific call. Valid values are True or False. Default value is False.

Example:

```
"service": {
    "sp": {
        "logout_requests_signed": False,
    }
}
```

####### [signing_algorithm](#id77)[¶](#signing-algorithm)

The default signing algorithm to be used.

Example:

```
"service": {
    "sp": {
        "signing_algorithm": "http://www.w3.org/2001/04/xmldsig-more#rsa-sha512",
        "digest_algorithm": "http://www.w3.org/2001/04/xmlenc#sha512",
    }
}
```

####### [digest_algorithm](#id78)[¶](#digest-algorithm)

The default digest algorithm to be used.
Example:

```
"service": {
    "idp": {
        "signing_algorithm": "http://www.w3.org/2001/04/xmldsig-more#rsa-sha512",
        "digest_algorithm": "http://www.w3.org/2001/04/xmlenc#sha512",
    }
}
```

####### [logout_responses_signed](#id79)[¶](#logout-responses-signed)

Indicates if this entity will sign the Logout Responses while processing a Logout Request. This can be overridden by application code when calling `handle_logout_request`. Valid values are True or False. Default value is False.

Example:

```
"service": {
    "sp": {
        "logout_responses_signed": False,
    }
}
```

####### [subject_data](#id80)[¶](#subject-data)

The name of a database where the map between a local identifier and a distributed identifier is kept. By default, this is a shelve database. So if you just specify a name, then a shelve database with that name is created. On the other hand, if you specify a tuple, then the first element in the tuple specifies which type of database you want to use and the second element is the address of the database.

Example:

```
"subject_data": "./idp.subject.db",
```

or if you want to use, for instance, memcache:

```
"subject_data": ("memcached", "localhost:12121"),
```

*shelve* and *memcached* are the only database types that are currently supported.

####### [virtual_organization](#id81)[¶](#virtual-organization)

Gives information about common identifiers for virtual organizations:

```
"virtual_organization": {
    "urn:mace:example.com:it:tek": {
        "nameid_format": "urn:oid:1.3.6.1.4.1.1466.115.121.1.15-NameID",
        "common_identifier": "umuselin",
    }
},
```

Keys in this dictionary are the identifiers for the virtual organizations. The arguments per organization are ‘nameid_format’ and ‘common_identifier’. This is useful if all the IdPs and AAs that are involved in a virtual organization have common attribute values for users that are part of the VO.

##### [Complete example](#id82)[¶](#complete-example)

We start with a simple but fairly complete Service Provider configuration:

```
from saml2 import BINDING_HTTP_REDIRECT

CONFIG = {
    "entityid": "http://example.com/sp/metadata.xml",
    "service": {
        "sp": {
            "name": "Example SP",
            "endpoints": {
                "assertion_consumer_service": ["http://example.com/sp"],
                "single_logout_service": [
                    ("http://example.com/sp/slo", BINDING_HTTP_REDIRECT),
                ],
            },
        }
    },
    "key_file": "./mykey.pem",
    "cert_file": "./mycert.pem",
    "xmlsec_binary": "/usr/local/bin/xmlsec1",
    "delete_tmpfiles": True,
    "attribute_map_dir": "./attributemaps",
    "metadata": {
        "local": ["idp.xml"]
    },
    "organization": {
        "display_name": ["Example identities"]
    },
    "contact_person": [
        {
            "givenname": "Roland",
            "surname": "Hedberg",
            "phone": "+46 90510",
            "mail": "<EMAIL>",
            "type": "technical",
        },
    ]
}
```

This is the typical setup for an SP. A metadata file to load is *always* needed, but it can, of course, contain anything from one up to many entity descriptions.
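To sanity-check a configuration file like the one above, you can load it with PySAML2's config classes. A minimal sketch, assuming the file is saved as sp_conf.py in the working directory and the key, cert and metadata files it references exist:

```
from saml2.config import SPConfig

conf = SPConfig()
conf.load_file("sp_conf")  # imports sp_conf.py and parses its CONFIG dict
print(conf.entityid)
```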
---

A slightly more complex configuration:

```
from saml2 import BINDING_HTTP_REDIRECT

CONFIG = {
    "entityid": "http://sp.example.com/metadata.xml",
    "service": {
        "sp": {
            "name": "Example SP",
            "endpoints": {
                "assertion_consumer_service": ["http://sp.example.com/"],
                "single_logout_service": [
                    ("http://sp.example.com/slo", BINDING_HTTP_REDIRECT),
                ],
            },
            "subject_data": ("memcached", "localhost:12121"),
            "virtual_organization": {
                "urn:mace:example.com:it:tek": {
                    "nameid_format": "urn:oid:1.3.6.1.4.1.1466.115.121.1.15-NameID",
                    "common_identifier": "eduPersonPrincipalName",
                }
            },
        }
    },
    "key_file": "./mykey.pem",
    "cert_file": "./mycert.pem",
    "xmlsec_binary": "/usr/local/bin/xmlsec1",
    "delete_tmpfiles": True,
    "metadata": {
        "local": ["example.xml"],
        "remote": [
            {
                "url": "https://kalmar2.org/simplesaml/module.php/aggregator/?id=kalmarcentral2&set=saml2",
                "cert": "kalmar2.pem",
            }
        ]
    },
    "attribute_map_dir": "attributemaps",
    "organization": {
        "display_name": ["Example identities"]
    },
    "contact_person": [
        {
            "givenname": "Roland",
            "surname": "Hedberg",
            "phone": "+46 90510",
            "mail": "<EMAIL>",
            "type": "technical",
        },
    ]
}
```

This uses metadata files, both local and remote, and will talk to whatever IdP appears in any of the metadata files.

#### Other considerations[¶](#other-considerations)

##### Entity Categories[¶](#entity-categories)

Entity categories and their attributes are defined in src/saml2/entity_category/<registrar-of-entity-category>.py.

We can configure Entity Categories in PySAML2 in two ways:

1. Using the configuration options *entity_category_support* or *entity_category*, to generate the appropriate EntityAttribute metadata elements.
2. Using the configuration option *entity_categories* as part of the policy configuration, to make the entity category work as a filter on the attributes that will be released.

If the entity categories are configured as metadata, as follows:

```
'debug' : True,
'xmlsec_binary': get_xmlsec_binary(['/usr/bin/xmlsec1']),
'entityid': '%s/metadata' % BASE_URL,

# or entity_category: [ ... ]
'entity_category_support': [
    edugain.COCO,  # "http://www.geant.net/uri/dataprotection-code-of-conduct/v1"
    refeds.RESEARCH_AND_SCHOLARSHIP,
],

'attribute_map_dir': 'data/attribute-maps',
'description': 'SAML2 IDP',

'service': {
    'idp': {
        ...
```

then in the metadata we’ll have:

```
<md:Extensions>
  <mdattr:EntityAttributes>
    <saml:Attribute Name="http://macedir.org/entity-category-support" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri">
      <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xsi:type="xs:string">http://www.geant.net/uri/dataprotection-code-of-conduct/v1</saml:AttributeValue>
      <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xsi:type="xs:string">http://refeds.org/category/research-and-scholarship</saml:AttributeValue>
    </saml:Attribute>
  </mdattr:EntityAttributes>
</md:Extensions>
```

If the entity categories are configured in the policy section, they will act as filters on the released attributes. Example:

```
"policy": {
    "default": {
        "lifetime": {"minutes": 15},
        # if the SP does not conform to the entity categories,
        # the attributes will not be released
        "entity_categories": ["refeds",],
```

How sp_test works internally[¶](#how-sp-test-works-internally)
---

| Release: | |
| Date: | Sep 23, 2022 |

Here are a few hints on how sp_test works internally. They will help when extending it with new test classes.

When you want to test a SAML2 entity with this tool, you need the following things:

1. The Test Driver Configuration; an example can be found in tests/idp_test/config.py
2. 
Attribute Maps mapping URNs, OIDs and friendly names
3. Key files for the test tool
4. A metadata file representing the tool
5. The Test Target Configuration file, which describes how to interact with the entity to be tested. The metadata for the entity is part of this file. An example can be found in tests/idp_test/test_target_config.py.

These files should be stored outside the saml2test package to have a clean separation between the package and a particular test configuration. To create a directory for the configuration files, copy saml2test/tests including its contents.

### (1) Class and Object Structure[¶](#class-and-object-structure)

#### Client (sp_test/__init__.py)[¶](#client-sp-test-init-py)

Its life cycle is responsible for the following activities:

* read config files and command line arguments (the test driver’s config is “json_config”)
* initialize the test driver IDP
* initialize a Conversation
* start the Conversation with .do_sequence_and_tests()
* post-process log messages

#### Conversation (sp_test/base.py)[¶](#conversation-sp-test-base-py)

#### Operation (oper)[¶](#operation-oper)

> * Comprises an id, name, sequence and tests
> * Example: ‘sp-00’: {“name”: ‘Basic Login test’, “sequence”: [(Login, AuthnRequest, AuthnResponse, None)], “tests”: {“pre”: [], “post”: []}

#### OPERATIONS[¶](#operations)

> * The set of operations provided in sp_test
> * Can be listed with the -l command line option

#### Sequence[¶](#sequence)

> * A list of flows
> * Example: see the “sequence” item in the operation dict

#### Test (in the context of an operation)[¶](#test-in-the-context-of-an-operation)

> * A class to be executed as part of an operation, either before (“pre”) or after (“post”) the sequence, or in between a SAML request and response (“mid”).
> There are standard tests with the Request class (VerifyAuthnRequest) and operation-specific tests.
> * Example of an operation-specific “mid” test: VerifyIfRequestIsSigned
> * A test may be specified together with an argument as a tuple.

#### Flow[¶](#flow)

> * A tuple of classes that together implement a SAML request-response pair between IDP and SP (and possibly other actors, such as a discovery service or IDP-proxy). A class can be derived from Request, Response (or other), Check or Operation.
> * A flow for a solicited authentication consists of 4 classes:
> + flow[0]: Operation (handling a login flow such as discovery or WAYF - not implemented yet)
> + flow[1]: Request (process the authentication request)
> + flow[2]: Response (send the authentication response)
> + flow[3]: Check (optional - can be None. E.g. check the response for whether a correct error status was raised when sending a broken response)

#### Check (and subclasses)[¶](#check-and-subclasses)

> * An optional class that is executed on receiving the SP’s HTTP response(s) after the SAML response. If there are redirects, it will be called for each response.
> * Writes a structured test report to conv.test_output
> * It can check for expected errors, which do not cause an exception but on the contrary are reported as a success

#### Interaction[¶](#interaction)

> * An interaction automates a human interaction. It searches a response from the test target for certain constants and, if there is a match, creates a suitable response.
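To make the structure concrete, here is a hypothetical OPERATIONS entry modeled on the ‘sp-00’ example above. The class names are the ones mentioned in this section and would need to be imported from the relevant sp_test modules; the exact layout may differ between saml2test versions:

```
OPERATIONS = {
    "sp-01": {
        "name": "Basic Login test",
        # One flow: (Operation, Request, Response, optional Check).
        "sequence": [(Login, AuthnRequest, AuthnResponse, None)],
        # "mid" tests such as VerifyIfRequestIsSigned are attached to the
        # Request class; "pre"/"post" tests run around the whole sequence.
        "tests": {"pre": [], "post": []},
    },
}
```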
### (2) Simplified Flow[¶](#simplified-flow)

The following pseudo code is an extract showing an overview of what is executed for test sp-00:

```
do_sequence_and_test(self, oper, test):
    self.test_sequence(tests["pre"])  # currently no tests defined for sp_test
    for flow in oper:
        self.do_flow(flow)

do_flow(flow):
    if len(flow) >= 3:
        self.wb_send_GET_startpage()        # send start page GET request
        self.intermit(flow[0]._interaction) # automate human user interface
        self.parse_saml_message()           # read relay state and saml message
        self.send_idp_response(flow[1], flow[2]) # construct, sign & send a nice Response from config, metadata and request
    if len(flow) == 4:
        self.handle_result(flow[3])  # pass optional check class
    else:
        self.handle_result()

send_idp_response(req_flow, resp_flow):
    self.test_sequence(req_flow.tests["mid"])  # execute "mid"-tests (request has "VerifyContent"-test built in; others from config)
    # this line stands for a part that is a bit more involved .. see source
    args.update(resp._response_args)  # set userid, identity

test_sequence(sequence):
    # execute tests in sequence (first invocation usually with check.VerifyContent)
    for test in sequence:
        self.do_check(test, **kwargs)

do_check(test, **kwargs):
    # executes the test class using the __call__ construct

handle_result(response=None):
    if response:
        if isinstance(response(), VerifyEchopageContents):
            if 300 < self.last_response.status_code <= 303:
                self._redirect(self.last_response)
            self.do_check(response)
        elif isinstance(response(), Check):
            self.do_check(response)
        else:
            # A HTTP redirect or HTTP Post (not sure this is ever executed)
            ...
    else:
        if 300 < self.last_response.status_code <= 303:
            self._redirect(self.last_response)
        _txt = self.last_response.content
        if self.last_response.status_code >= 400:
            raise FatalError("Did not expect this error")
```

### (3) Status Reporting[¶](#status-reporting)

The proper reporting of results is at the core of saml2test. Several conditions must be considered:

1. An operation that was successful because the test target reports OK (e.g. HTTP 200)
2. An operation that was successful because the test target reports NOK as expected, e.g. because of an invalid signature - HTTP 500 could be the correct response
3. An error in SAML2Test
4. An error in the configuration of SAML2Test

Status values are defined in saml2test.check like this: INFORMATION = 0, OK = 1, WARNING = 2, ERROR = 3, CRITICAL = 4, INTERACTION = 5

There are two targets to write output to:

* Test_output is written to conv.test_output during the execution of the flows.
github.com/anacrolix/dht/v2
README [¶](#section-readme)
---

### dht

[![Go Reference](https://pkg.go.dev/badge/github.com/anacrolix/dht/v2.svg)](https://pkg.go.dev/github.com/anacrolix/dht/v2)

#### Installation

Get the library package with `go get github.com/anacrolix/dht/v2`, or the provided cmds with `go install github.com/anacrolix/dht/v2/cmd/...@latest`.

#### Commands

Here I'll describe what some of the provided commands in `./cmd` do.

##### dht-ping

Pings DHT nodes with the given network addresses.

```
$ go run ./cmd/dht-ping router.bittorrent.com:6881 router.utorrent.com:6881
2015/04/01 17:21:23 main.go:33: dht server on [::]:60058
32f54e697351ff4aec29cdbaabf2fbe3467cc267 (router.bittorrent.com:6881): 648.218621ms
ebff36697351ff4aec29cdbaabf2fbe3467cc267 (router.utorrent.com:6881): 873.864706ms
2/2 responses (100.000000%)
```

Documentation [¶](#section-documentation)
---

[Rendered for](https://go.dev/about#build-context) linux/amd64 windows/amd64 darwin/amd64 js/wasm

### Overview [¶](#pkg-overview)

Package dht implements a Distributed Hash Table (DHT) part of the BitTorrent protocol, as specified by BEP 5: <http://www.bittorrent.org/beps/bep_0005.html>

BitTorrent uses a "distributed hash table" (DHT) for storing peer contact information for "trackerless" torrents. In effect, each peer becomes a tracker. The protocol is based on the Kademlia DHT protocol and is implemented over UDP.

Please note the terminology used to avoid confusion. A "peer" is a client/server listening on a TCP port that implements the BitTorrent protocol. A "node" is a client/server listening on a UDP port implementing the distributed hash table protocol. The DHT is composed of nodes and stores the location of peers. BitTorrent clients include a DHT node, which is used to contact other nodes in the DHT to get the location of peers to download from using the BitTorrent protocol.

Standard use involves creating a Server, and calling Announce on it with the details of your local torrent client and infohash of interest.
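As a sketch of that standard use (the infohash is a placeholder, error handling is minimal, and network access is assumed; whether the Peers channel is closed when the traversal finishes should be verified against the version in use):

```
package main

import (
	"fmt"
	"log"

	"github.com/anacrolix/dht/v2"
)

func main() {
	// A Server with a nil config gets reasonable defaults
	// (random node ID, UDP socket).
	s, err := dht.NewServer(nil)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	// Populate the routing table from the global bootstrap nodes.
	if _, err := s.Bootstrap(); err != nil {
		log.Fatal(err)
	}

	// Placeholder infohash; a real client would use a torrent's actual infohash.
	var infoHash [20]byte
	copy(infoHash[:], "aaaaaaaaaaaaaaaaaaaa")

	// Look for peers, and announce ourselves as a peer on port 6881.
	a, err := s.AnnounceTraversal(infoHash,
		dht.AnnouncePeer(dht.AnnouncePeerOpts{Port: 6881}))
	if err != nil {
		log.Fatal(err)
	}
	defer a.Close()

	// Print every peer reported for the infohash.
	for pv := range a.Peers {
		for _, p := range pv.Peers {
			fmt.Println(p)
		}
	}
}
```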
### Index [¶](#pkg-index) * [Variables](#pkg-variables) * [func HashTuple(bs ...[]byte) (ret [20]byte)](#HashTuple) * [func MakeDeterministicNodeID(public net.Addr) (id krpc.ID)](#MakeDeterministicNodeID) * [func NodeIdSecure(id [20]byte, ip net.IP) bool](#NodeIdSecure) * [func RandomNodeID() (id krpc.ID)](#RandomNodeID)deprecated * [func ReadNodesFromFile(fileName string) (ns []krpc.NodeInfo, err error)](#ReadNodesFromFile) * [func SecureNodeId(id *krpc.ID, ip net.IP)](#SecureNodeId) * [func WriteNodesToFile(ns []krpc.NodeInfo, fileName string) (err error)](#WriteNodesToFile) * [type Addr](#Addr) * + [func GlobalBootstrapAddrs(network string) ([]Addr, error)](#GlobalBootstrapAddrs) + [func NewAddr(raw net.Addr) Addr](#NewAddr) + [func ResolveHostPorts(hostPorts []string) (addrs []Addr, err error)](#ResolveHostPorts) * [type Announce](#Announce) * + [func (a *Announce) Close()](#Announce.Close) + [func (a *Announce) Finished() events.Done](#Announce.Finished) + [func (a *Announce) NumContacted() uint32](#Announce.NumContacted) + [func (a *Announce) StopTraversing()](#Announce.StopTraversing) + [func (a *Announce) String() string](#Announce.String) + [func (a *Announce) TraversalStats() TraversalStats](#Announce.TraversalStats) * [type AnnounceOpt](#AnnounceOpt) * + [func AnnouncePeer(opts AnnouncePeerOpts) AnnounceOpt](#AnnouncePeer) + [func Scrape() AnnounceOpt](#Scrape) * [type AnnouncePeerOpts](#AnnouncePeerOpts) * [type Peer](#Peer) * [type PeersValues](#PeersValues) * [type QueryInput](#QueryInput) * [type QueryRateLimiting](#QueryRateLimiting) * [type QueryResult](#QueryResult) * + [func (qr QueryResult) ToError() error](#QueryResult.ToError) + [func (me QueryResult) TraversalQueryResult(addr krpc.NodeAddr) (ret traversal.QueryResult)](#QueryResult.TraversalQueryResult) * [type Server](#Server) * + [func NewServer(c *ServerConfig) (s *Server, err error)](#NewServer) * + [func (s *Server) AddNode(ni krpc.NodeInfo) error](#Server.AddNode) + [func (s *Server) AddNodesFromFile(fileName string) (added int, err error)](#Server.AddNodesFromFile) + [func (s *Server) Addr() net.Addr](#Server.Addr) + [func (s *Server) Announce(infoHash [20]byte, port int, impliedPort bool, opts ...AnnounceOpt) (_ *Announce, err error)](#Server.Announce)deprecated + [func (s *Server) AnnounceTraversal(infoHash [20]byte, opts ...AnnounceOpt) (_ *Announce, err error)](#Server.AnnounceTraversal) + [func (s *Server) Bootstrap() (TraversalStats, error)](#Server.Bootstrap) + [func (s *Server) BootstrapContext(ctx context.Context) (_ TraversalStats, err error)](#Server.BootstrapContext) + [func (s *Server) Close()](#Server.Close) + [func (s *Server) FindNode(addr Addr, targetID int160.T, rl QueryRateLimiting) (ret QueryResult)](#Server.FindNode) + [func (s *Server) Get(ctx context.Context, addr Addr, target bep44.Target, seq *int64, ...) QueryResult](#Server.Get) + [func (s *Server) GetPeers(ctx context.Context, addr Addr, infoHash int160.T, scrape bool, ...) 
(ret QueryResult)](#Server.GetPeers) + [func (s *Server) ID() [20]byte](#Server.ID) + [func (s *Server) IPBlocklist() iplist.Ranger](#Server.IPBlocklist) + [func (s *Server) IsGood(n *node) bool](#Server.IsGood) + [func (s *Server) IsQuestionable(n *node) bool](#Server.IsQuestionable) + [func (s *Server) NodeRespondedToPing(addr Addr, id int160.T)](#Server.NodeRespondedToPing) + [func (s *Server) Nodes() (nis []krpc.NodeInfo)](#Server.Nodes) + [func (s *Server) NumNodes() int](#Server.NumNodes) + [func (s *Server) PeerStore() peer_store.Interface](#Server.PeerStore) + [func (s *Server) Ping(node *net.UDPAddr) QueryResult](#Server.Ping) + [func (s *Server) PingQueryInput(node *net.UDPAddr, qi QueryInput) QueryResult](#Server.PingQueryInput) + [func (s *Server) Put(ctx context.Context, node Addr, i bep44.Put, token string, ...) QueryResult](#Server.Put) + [func (s *Server) Query(ctx context.Context, addr Addr, q string, input QueryInput) (ret QueryResult)](#Server.Query) + [func (s *Server) SetIPBlockList(list iplist.Ranger)](#Server.SetIPBlockList) + [func (s *Server) Stats() ServerStats](#Server.Stats) + [func (s *Server) String() string](#Server.String) + [func (s *Server) TableMaintainer()](#Server.TableMaintainer) + [func (s *Server) TraversalNodeFilter(node addrMaybeId) bool](#Server.TraversalNodeFilter) + [func (s *Server) TraversalStartingNodes() (nodes []addrMaybeId, err error)](#Server.TraversalStartingNodes) + [func (s *Server) WriteStatus(w io.Writer)](#Server.WriteStatus) * [type ServerConfig](#ServerConfig) * + [func NewDefaultServerConfig() *ServerConfig](#NewDefaultServerConfig) * + [func (c *ServerConfig) InitNodeId() (deterministic bool)](#ServerConfig.InitNodeId) * [type ServerStats](#ServerStats) * [type StartingNodesGetter](#StartingNodesGetter) * [type TraversalStats](#TraversalStats)

### Constants [¶](#pkg-constants)

This section is empty.

### Variables [¶](#pkg-variables)

```
var DefaultGlobalBootstrapHostPorts = [][string](/builtin#string){
    "router.utorrent.com:6881",
    "router.bittorrent.com:6881",
    "dht.transmissionbt.com:6881",
    "dht.aelitis.com:6881",
    "router.silotis.us:6881",
    "dht.libtorrent.org:25401",
    "dht.anacrolix.link:42069",
    "router.bittorrent.cloud:42069",
}
```

```
var DefaultSendLimiter = [rate](/golang.org/x/time/rate).[NewLimiter](/golang.org/x/time/rate#NewLimiter)(25, 25)
```

```
var TransactionTimeout = [errors](/errors).[New](/errors#New)("transaction timed out")
```

### Functions [¶](#pkg-functions)

#### func [HashTuple](https://github.com/anacrolix/dht/blob/v2.20.0/hash-tuple.go#L7) [¶](#HashTuple) added in v2.17.0

```
func HashTuple(bs ...[][byte](/builtin#byte)) (ret [20][byte](/builtin#byte))
```

#### func [MakeDeterministicNodeID](https://github.com/anacrolix/dht/blob/v2.20.0/dht.go#L157) [¶](#MakeDeterministicNodeID)

```
func MakeDeterministicNodeID(public [net](/net).[Addr](/net#Addr)) (id [krpc](/github.com/anacrolix/dht/v2@v2.20.0/krpc).[ID](/github.com/anacrolix/dht/v2@v2.20.0/krpc#ID))
```

#### func [NodeIdSecure](https://github.com/anacrolix/dht/blob/v2.20.0/security.go#L46) [¶](#NodeIdSecure)

```
func NodeIdSecure(id [20][byte](/builtin#byte), ip [net](/net).[IP](/net#IP)) [bool](/builtin#bool)
```

Returns whether the node ID is considered secure. The id is the 20 raw bytes.
<http://www.libtorrent.org/dht_sec.html>

#### func [RandomNodeID](https://github.com/anacrolix/dht/blob/v2.20.0/dht.go#L153) deprecated

```
func RandomNodeID() (id [krpc](/github.com/anacrolix/dht/v2@v2.20.0/krpc).[ID](/github.com/anacrolix/dht/v2@v2.20.0/krpc#ID))
```

Deprecated: Use function from krpc.

#### func [ReadNodesFromFile](https://github.com/anacrolix/dht/blob/v2.20.0/nodes_file.go#L29) [¶](#ReadNodesFromFile)

```
func ReadNodesFromFile(fileName [string](/builtin#string)) (ns [][krpc](/github.com/anacrolix/dht/v2@v2.20.0/krpc).[NodeInfo](/github.com/anacrolix/dht/v2@v2.20.0/krpc#NodeInfo), err [error](/builtin#error))
```

#### func [SecureNodeId](https://github.com/anacrolix/dht/blob/v2.20.0/security.go#L37) [¶](#SecureNodeId)

```
func SecureNodeId(id *[krpc](/github.com/anacrolix/dht/v2@v2.20.0/krpc).[ID](/github.com/anacrolix/dht/v2@v2.20.0/krpc#ID), ip [net](/net).[IP](/net#IP))
```

Makes a node ID secure, in-place. The ID is 20 raw bytes. <http://www.libtorrent.org/dht_sec.html>

#### func [WriteNodesToFile](https://github.com/anacrolix/dht/blob/v2.20.0/nodes_file.go#L10) [¶](#WriteNodesToFile)

```
func WriteNodesToFile(ns [][krpc](/github.com/anacrolix/dht/v2@v2.20.0/krpc).[NodeInfo](/github.com/anacrolix/dht/v2@v2.20.0/krpc#NodeInfo), fileName [string](/builtin#string)) (err [error](/builtin#error))
```

### Types [¶](#pkg-types)

#### type [Addr](https://github.com/anacrolix/dht/blob/v2.20.0/addr.go#L15) [¶](#Addr)

```
type Addr interface {
    Raw() [net](/net).[Addr](/net#Addr)
    Port() [int](/builtin#int)
    IP() [net](/net).[IP](/net#IP)
    String() [string](/builtin#string)
    KRPC() [krpc](/github.com/anacrolix/dht/v2@v2.20.0/krpc).[NodeAddr](/github.com/anacrolix/dht/v2@v2.20.0/krpc#NodeAddr)
}
```

Used internally to refer to node network addresses. String() is called a lot, and so can be optimized. Network() is not exposed, so that the interface does not satisfy net.Addr, as the underlying type must be passed to any OS-level functions that take net.Addr.

#### func [GlobalBootstrapAddrs](https://github.com/anacrolix/dht/blob/v2.20.0/dht.go#L119) [¶](#GlobalBootstrapAddrs)

```
func GlobalBootstrapAddrs(network [string](/builtin#string)) ([][Addr](#Addr), [error](/builtin#error))
```

Returns the resolved addresses of the default global bootstrap nodes. Network is unused but was historically passed by anacrolix/torrent.

#### func [NewAddr](https://github.com/anacrolix/dht/blob/v2.20.0/addr.go#L54) [¶](#NewAddr)

```
func NewAddr(raw [net](/net).[Addr](/net#Addr)) [Addr](#Addr)
```

#### func [ResolveHostPorts](https://github.com/anacrolix/dht/blob/v2.20.0/dht.go#L125) [¶](#ResolveHostPorts) added in v2.20.0

```
func ResolveHostPorts(hostPorts [][string](/builtin#string)) (addrs [][Addr](#Addr), err [error](/builtin#error))
```

Resolves host:port strings to dht.Addrs, using the dht DNS resolver cache. Suitable for use with ServerConfig.BootstrapAddrs.

#### type [Announce](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L23) [¶](#Announce)

```
type Announce struct {
    Peers chan [PeersValues](#PeersValues)
    // contains filtered or unexported fields
}
```

Maintains state for an ongoing Announce operation. An Announce is started by calling Server.Announce.

#### func (*Announce) [Close](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L195) [¶](#Announce.Close)

```
func (a *[Announce](#Announce)) Close()
```

Stop the announce.
#### func (*Announce) [Finished](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L211) [¶](#Announce.Finished) added in v2.16.0 ``` func (a *[Announce](#Announce)) Finished() [events](/github.com/anacrolix/chansync/events).[Done](/github.com/anacrolix/chansync/events#Done) ``` Traversal and peer announcing steps are done. #### func (*Announce) [NumContacted](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L43) [¶](#Announce.NumContacted) ``` func (a *[Announce](#Announce)) NumContacted() [uint32](/builtin#uint32) ``` Returns the number of distinct remote addresses the announce has queried. #### func (*Announce) [StopTraversing](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L206) [¶](#Announce.StopTraversing) added in v2.16.0 ``` func (a *[Announce](#Announce)) StopTraversing() ``` Halts traversal, but won't block peer announcing. #### func (*Announce) [String](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L38) [¶](#Announce.String) added in v2.3.0 ``` func (a *[Announce](#Announce)) String() [string](/builtin#string) ``` #### func (*Announce) [TraversalStats](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L47) [¶](#Announce.TraversalStats) added in v2.18.0 ``` func (a *[Announce](#Announce)) TraversalStats() [TraversalStats](#TraversalStats) ``` #### type [AnnounceOpt](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L52) [¶](#AnnounceOpt) added in v2.6.0 ``` type AnnounceOpt func(a *[Announce](#Announce)) ``` Server.Announce option #### func [AnnouncePeer](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L70) [¶](#AnnouncePeer) added in v2.13.0 ``` func AnnouncePeer(opts [AnnouncePeerOpts](#AnnouncePeerOpts)) [AnnounceOpt](#AnnounceOpt) ``` Finish an Announce get_peers traversal with an announce of a local peer. #### func [Scrape](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L55) [¶](#Scrape) added in v2.6.0 ``` func Scrape() [AnnounceOpt](#AnnounceOpt) ``` Scrape BEP 33 bloom filters in queries. #### type [AnnouncePeerOpts](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L62) [¶](#AnnouncePeerOpts) added in v2.13.0 ``` type AnnouncePeerOpts struct { // The peer port that we're announcing. Port [int](/builtin#int) // The peer port should be determined by the receiver to be the source port of the query packet. ImpliedPort [bool](/builtin#bool) } ``` Arguments for announce_peer from a Server.Announce. #### type [Peer](https://github.com/anacrolix/dht/blob/v2.20.0/dht.go#L104) [¶](#Peer) ``` type Peer = [krpc](/github.com/anacrolix/dht/[email protected]/krpc).[NodeAddr](/github.com/anacrolix/dht/[email protected]/krpc#NodeAddr) ``` #### type [PeersValues](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L188) [¶](#PeersValues) ``` type PeersValues struct { Peers [][Peer](#Peer) // Peers given in get_peers response. [krpc](/github.com/anacrolix/dht/[email protected]/krpc).[NodeInfo](/github.com/anacrolix/dht/[email protected]/krpc#NodeInfo) // The node that gave the response. [krpc](/github.com/anacrolix/dht/[email protected]/krpc).[Return](/github.com/anacrolix/dht/[email protected]/krpc#Return) } ``` Corresponds to the "values" key in a get_peers KRPC response. A list of peers that a node has reported as being in the swarm for a queried info hash. 
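Putting the Announce pieces above together, here is a minimal, hedged sketch of a get_peers traversal with a peer announce. It assumes the package is imported as `dht` from `github.com/anacrolix/dht/v2`, uses a placeholder info-hash and a hypothetical peer port, and relies on `Finished()` to signal completion; it starts the traversal with `Server.AnnounceTraversal`, which is documented further below.

```go
package main

import (
	"fmt"
	"log"

	dht "github.com/anacrolix/dht/v2"
)

func main() {
	s, err := dht.NewServer(nil) // default configuration
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	// Placeholder info-hash; a real one identifies a torrent swarm.
	var infoHash [20]byte
	copy(infoHash[:], "aabbccddeeffgghhiijj")

	// Announce our (hypothetical) peer port 6881 to responding nodes.
	a, err := s.AnnounceTraversal(infoHash, dht.AnnouncePeer(dht.AnnouncePeerOpts{Port: 6881}))
	if err != nil {
		log.Fatal(err)
	}
	defer a.Close()

	for {
		select {
		case pv := <-a.Peers:
			// Each PeersValues carries swarm peers reported by one node.
			for _, p := range pv.Peers {
				fmt.Println("peer:", p)
			}
		case <-a.Finished():
			// Traversal and peer announcing steps are done.
			return
		}
	}
}
```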
#### type [QueryInput](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L936) [¶](#QueryInput) added in v2.8.0 ``` type QueryInput struct { MsgArgs [krpc](/github.com/anacrolix/dht/[email protected]/krpc).[MsgArgs](/github.com/anacrolix/dht/[email protected]/krpc#MsgArgs) RateLimiting [QueryRateLimiting](#QueryRateLimiting) NumTries [int](/builtin#int) } ``` The zero value for this uses reasonable/traditional defaults on Server methods. #### type [QueryRateLimiting](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L926) [¶](#QueryRateLimiting) added in v2.8.0 ``` type QueryRateLimiting struct { // Don't rate-limit the first send for a query. NotFirst [bool](/builtin#bool) // Don't rate-limit any sends for a query. Note that there's still built-in waits before retries. NotAny [bool](/builtin#bool) WaitOnRetries [bool](/builtin#bool) NoWaitFirst [bool](/builtin#bool) } ``` Rate-limiting to be applied to writes for a given query. Queries occur inside transactions that will attempt to send several times. If the STM rate-limiting helpers are used, the first send is often already accounted for in the rate-limiting machinery before the query method that does the IO is invoked. #### type [QueryResult](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L887) [¶](#QueryResult) added in v2.8.0 ``` type QueryResult struct { Reply [krpc](/github.com/anacrolix/dht/[email protected]/krpc).[Msg](/github.com/anacrolix/dht/[email protected]/krpc#Msg) Writes numWrites Err [error](/builtin#error) } ``` #### func (QueryResult) [ToError](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L893) [¶](#QueryResult.ToError) added in v2.10.0 ``` func (qr [QueryResult](#QueryResult)) ToError() [error](/builtin#error) ``` #### func (QueryResult) [TraversalQueryResult](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L905) [¶](#QueryResult.TraversalQueryResult) added in v2.10.0 ``` func (me [QueryResult](#QueryResult)) TraversalQueryResult(addr [krpc](/github.com/anacrolix/dht/[email protected]/krpc).[NodeAddr](/github.com/anacrolix/dht/[email protected]/krpc#NodeAddr)) (ret [traversal](/github.com/anacrolix/dht/[email protected]/traversal).[QueryResult](/github.com/anacrolix/dht/[email protected]/traversal#QueryResult)) ``` Converts a Server QueryResult to a traversal.QueryResult. #### type [Server](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L41) [¶](#Server) ``` type Server struct { // contains filtered or unexported fields } ``` A Server defines parameters for a DHT node server that is able to send queries, and respond to the ones from the network. Each node has a globally unique identifier known as the "node ID." Node IDs are chosen at random from the same 160-bit space as BitTorrent infohashes and define the behaviour of the node. Zero valued Server does not have a valid ID and thus is unable to function properly. Use `NewServer(nil)` to initialize a default node. #### func [NewServer](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L195) [¶](#NewServer) ``` func NewServer(c *[ServerConfig](#ServerConfig)) (s *[Server](#Server), err [error](/builtin#error)) ``` NewServer initializes a new DHT node server. #### func (*Server) [AddNode](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L386) [¶](#Server.AddNode) ``` func (s *[Server](#Server)) AddNode(ni [krpc](/github.com/anacrolix/dht/[email protected]/krpc).[NodeInfo](/github.com/anacrolix/dht/[email protected]/krpc#NodeInfo)) [error](/builtin#error) ``` Adds directly to the node table. 
#### func (*Server) [AddNodesFromFile](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1277) [¶](#Server.AddNodesFromFile) ``` func (s *[Server](#Server)) AddNodesFromFile(fileName [string](/builtin#string)) (added [int](/builtin#int), err [error](/builtin#error)) ``` #### func (*Server) [Addr](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L152) [¶](#Server.Addr) ``` func (s *[Server](#Server)) Addr() [net](/net).[Addr](/net#Addr) ``` Addr returns the listen address for the server. Packets arriving to this address are processed by the server (unless aliens are involved). #### func (*Server) [Announce](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L80) deprecated ``` func (s *[Server](#Server)) Announce(infoHash [20][byte](/builtin#byte), port [int](/builtin#int), impliedPort [bool](/builtin#bool), opts ...[AnnounceOpt](#AnnounceOpt)) (_ *[Announce](#Announce), err [error](/builtin#error)) ``` Deprecated: Use Server.AnnounceTraversal. Traverses the DHT graph toward nodes that store peers for the infohash, streaming them to the caller, and announcing the local node to each responding node if port is non-zero or impliedPort is true. #### func (*Server) [AnnounceTraversal](https://github.com/anacrolix/dht/blob/v2.20.0/announce.go#L92) [¶](#Server.AnnounceTraversal) added in v2.13.0 ``` func (s *[Server](#Server)) AnnounceTraversal(infoHash [20][byte](/builtin#byte), opts ...[AnnounceOpt](#AnnounceOpt)) (_ *[Announce](#Announce), err [error](/builtin#error)) ``` Traverses the DHT graph toward nodes that store peers for the infohash, streaming them to the caller. #### func (*Server) [Bootstrap](https://github.com/anacrolix/dht/blob/v2.20.0/bootstrap.go#L15) [¶](#Server.Bootstrap) ``` func (s *[Server](#Server)) Bootstrap() ([TraversalStats](#TraversalStats), [error](/builtin#error)) ``` See BootstrapContext. #### func (*Server) [BootstrapContext](https://github.com/anacrolix/dht/blob/v2.20.0/bootstrap.go#L20) [¶](#Server.BootstrapContext) added in v2.18.0 ``` func (s *[Server](#Server)) BootstrapContext(ctx [context](/context).[Context](/context#Context)) (_ [TraversalStats](#TraversalStats), err [error](/builtin#error)) ``` Populates the node table. #### func (*Server) [Close](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1176) [¶](#Server.Close) ``` func (s *[Server](#Server)) Close() ``` Stops the server network activity. This is all that's required to clean-up a Server. #### func (*Server) [FindNode](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1135) [¶](#Server.FindNode) added in v2.8.0 ``` func (s *[Server](#Server)) FindNode(addr [Addr](#Addr), targetID [int160](/github.com/anacrolix/dht/[email protected]/int160).[T](/github.com/anacrolix/dht/[email protected]/int160#T), rl [QueryRateLimiting](#QueryRateLimiting)) (ret [QueryResult](#QueryResult)) ``` Sends a find_node query to addr. targetID is the node we're looking for. The Server makes use of some of the response fields. #### func (*Server) [Get](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1212) [¶](#Server.Get) added in v2.11.0 ``` func (s *[Server](#Server)) Get(ctx [context](/context).[Context](/context#Context), addr [Addr](#Addr), target [bep44](/github.com/anacrolix/dht/[email protected]/bep44).[Target](/github.com/anacrolix/dht/[email protected]/bep44#Target), seq *[int64](/builtin#int64), rl [QueryRateLimiting](#QueryRateLimiting)) [QueryResult](#QueryResult) ``` Get gets item information from a specific target ID. 
If seq is set, only items with a seq greater than the provided value will return a V, K and Sig, if any. Get must also be used to obtain a Put write token when you want to write an item rather than read it.

#### func (*Server) [GetPeers](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1183) [¶](#Server.GetPeers) added in v2.8.0

```
func (s *[Server](#Server)) GetPeers(ctx [context](/context).[Context](/context#Context), addr [Addr](#Addr), infoHash [int160](/github.com/anacrolix/dht/[email protected]/int160).[T](/github.com/anacrolix/dht/[email protected]/int160#T), scrape [bool](/builtin#bool), rl [QueryRateLimiting](#QueryRateLimiting)) (ret [QueryResult](#QueryResult))
```

#### func (*Server) [ID](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L853) [¶](#Server.ID)

```
func (s *[Server](#Server)) ID() [20][byte](/builtin#byte)
```

ID returns the 20-byte server ID. This is the ID used to communicate with the DHT network.

#### func (*Server) [IPBlocklist](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L270) [¶](#Server.IPBlocklist)

```
func (s *[Server](#Server)) IPBlocklist() [iplist](/github.com/anacrolix/torrent/iplist).[Ranger](/github.com/anacrolix/torrent/iplist#Ranger)
```

#### func (*Server) [IsGood](https://github.com/anacrolix/dht/blob/v2.20.0/node.go#L50) [¶](#Server.IsGood) added in v2.9.0

```
func (s *[Server](#Server)) IsGood(n *node) [bool](/builtin#bool)
```

Per the spec in BEP 5.

#### func (*Server) [IsQuestionable](https://github.com/anacrolix/dht/blob/v2.20.0/node.go#L25) [¶](#Server.IsQuestionable) added in v2.9.0

```
func (s *[Server](#Server)) IsQuestionable(n *node) [bool](/builtin#bool)
```

#### func (*Server) [NodeRespondedToPing](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L719) [¶](#Server.NodeRespondedToPing) added in v2.10.0

```
func (s *[Server](#Server)) NodeRespondedToPing(addr [Addr](#Addr), id [int160](/github.com/anacrolix/dht/[email protected]/int160).[T](/github.com/anacrolix/dht/[email protected]/int160#T))
```

#### func (*Server) [Nodes](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1154) [¶](#Server.Nodes)

```
func (s *[Server](#Server)) Nodes() (nis [][krpc](/github.com/anacrolix/dht/[email protected]/krpc).[NodeInfo](/github.com/anacrolix/dht/[email protected]/krpc#NodeInfo))
```

Returns non-bad nodes from the routing table.

#### func (*Server) [NumNodes](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1147) [¶](#Server.NumNodes)

```
func (s *[Server](#Server)) NumNodes() [int](/builtin#int)
```

Returns how many nodes are in the node table.

#### func (*Server) [PeerStore](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1294) [¶](#Server.PeerStore) added in v2.9.0

```
func (s *[Server](#Server)) PeerStore() [peer_store](/github.com/anacrolix/dht/[email protected]/peer-store).[Interface](/github.com/anacrolix/dht/[email protected]/peer-store#Interface)
```

#### func (*Server) [Ping](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1070) [¶](#Server.Ping)

```
func (s *[Server](#Server)) Ping(node *[net](/net).[UDPAddr](/net#UDPAddr)) [QueryResult](#QueryResult)
```

Sends a ping query to the address given.

#### func (*Server) [PingQueryInput](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1057) [¶](#Server.PingQueryInput) added in v2.15.0

```
func (s *[Server](#Server)) PingQueryInput(node *[net](/net).[UDPAddr](/net#UDPAddr), qi [QueryInput](#QueryInput)) [QueryResult](#QueryResult)
```

Sends a ping query to the address given.
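As a quick smoke test of the query helpers above, the following is a hedged sketch that pings one of the default bootstrap hosts and inspects the resulting QueryResult. The hostname comes from DefaultGlobalBootstrapHostPorts, and the import alias is an assumption.

```go
package main

import (
	"fmt"
	"log"
	"net"

	dht "github.com/anacrolix/dht/v2"
)

func main() {
	s, err := dht.NewServer(nil)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	// One of the DefaultGlobalBootstrapHostPorts entries.
	addr, err := net.ResolveUDPAddr("udp", "router.bittorrent.com:6881")
	if err != nil {
		log.Fatal(err)
	}

	res := s.Ping(addr)
	// ToError collapses the QueryResult into a plain error for checking.
	if err := res.ToError(); err != nil {
		log.Printf("ping failed: %v", err)
		return
	}
	fmt.Println("node responded to ping")
}
```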
#### func (*Server) [Put](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1075) [¶](#Server.Put) added in v2.11.0

```
func (s *[Server](#Server)) Put(ctx [context](/context).[Context](/context#Context), node [Addr](#Addr), i [bep44](/github.com/anacrolix/dht/[email protected]/bep44).[Put](/github.com/anacrolix/dht/[email protected]/bep44#Put), token [string](/builtin#string), rl [QueryRateLimiting](#QueryRateLimiting)) [QueryResult](#QueryResult)
```

Put adds a new item to a node. You need to call Get first to obtain a write token.

#### func (*Server) [Query](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L946) [¶](#Server.Query) added in v2.8.0

```
func (s *[Server](#Server)) Query(ctx [context](/context).[Context](/context#Context), addr [Addr](#Addr), q [string](/builtin#string), input [QueryInput](#QueryInput)) (ret [QueryResult](#QueryResult))
```

Performs an arbitrary query. `q` is the query value, defined by the DHT BEP. `a` should contain the appropriate argument values, if any. `a.ID` is clobbered by the Server. Responses to queries made this way are not interpreted by the Server. More specific methods like FindNode and GetPeers may make use of the response internally before passing it back to the caller.

#### func (*Server) [SetIPBlockList](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L264) [¶](#Server.SetIPBlockList)

```
func (s *[Server](#Server)) SetIPBlockList(list [iplist](/github.com/anacrolix/torrent/iplist).[Ranger](/github.com/anacrolix/torrent/iplist#Ranger))
```

Packets to and from any address matching a range in the list are dropped.

#### func (*Server) [Stats](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L140) [¶](#Server.Stats)

```
func (s *[Server](#Server)) Stats() [ServerStats](#ServerStats)
```

Stats returns statistics for the server.

#### func (*Server) [String](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L259) [¶](#Server.String)

```
func (s *[Server](#Server)) String() [string](/builtin#string)
```

Returns a description of the Server.

#### func (*Server) [TableMaintainer](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1394) [¶](#Server.TableMaintainer) added in v2.10.0

```
func (s *[Server](#Server)) TableMaintainer()
```

A routine that maintains the Server's routing table, by pinging questionable nodes and refreshing buckets. This should be invoked on a running Server when the caller is satisfied with having set it up. It is not necessary to explicitly Bootstrap the Server once this routine has started.

#### func (*Server) [TraversalNodeFilter](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1454) [¶](#Server.TraversalNodeFilter) added in v2.10.0

```
func (s *[Server](#Server)) TraversalNodeFilter(node addrMaybeId) [bool](/builtin#bool)
```

Whether we should consider a node for contact based on its address and possible ID.

#### func (*Server) [TraversalStartingNodes](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L1243) [¶](#Server.TraversalStartingNodes) added in v2.10.0

```
func (s *[Server](#Server)) TraversalStartingNodes() (nodes []addrMaybeId, err [error](/builtin#error))
```

#### func (*Server) [WriteStatus](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L83) [¶](#Server.WriteStatus)

```
func (s *[Server](#Server)) WriteStatus(w [io](/io).[Writer](/io#Writer))
```

#### type [ServerConfig](https://github.com/anacrolix/dht/blob/v2.20.0/dht.go#L35) [¶](#ServerConfig)

```
type ServerConfig struct {
	// Set NodeId manually. The caller must ensure that if NodeId does not conform
	// to DHT Security Extensions, NoSecurity is also set.
	NodeId [krpc](/github.com/anacrolix/dht/[email protected]/krpc).[ID](/github.com/anacrolix/dht/[email protected]/krpc#ID)

	Conn [net](/net).[PacketConn](/net#PacketConn)
	// Don't respond to queries from other nodes.
	Passive [bool](/builtin#bool)
	// Whether to wait for rate limiting to allow us to reply.
	WaitToReply [bool](/builtin#bool)

	// Called when there are no good nodes to use in the routing table. This might be called any
	// time when there are no nodes, including during bootstrap if one is performed. Typically it
	// returns the resolved addresses of bootstrap or "router" nodes that are designed to
	// kick-start a routing table.
	StartingNodes [StartingNodesGetter](#StartingNodesGetter)

	// Disable the DHT security extension: <http://www.libtorrent.org/dht_sec.html>.
	NoSecurity [bool](/builtin#bool)
	// Initial IP blocklist to use. Applied before serving and bootstrapping begins.
	IPBlocklist [iplist](/github.com/anacrolix/torrent/iplist).[Ranger](/github.com/anacrolix/torrent/iplist#Ranger)
	// Used to secure the server's ID. Defaults to the Conn's LocalAddr(). Set to the IP that
	// remote nodes will see, as that IP is what they'll use to validate our ID.
	PublicIP [net](/net).[IP](/net#IP)

	// Hook received queries. Return false if you don't want to propagate to the default handlers.
	OnQuery func(query *[krpc](/github.com/anacrolix/dht/[email protected]/krpc).[Msg](/github.com/anacrolix/dht/[email protected]/krpc#Msg), source [net](/net).[Addr](/net#Addr)) (propagate [bool](/builtin#bool))
	// Called when a peer successfully announces to us.
	OnAnnouncePeer func(infoHash [metainfo](/github.com/anacrolix/torrent/metainfo).[Hash](/github.com/anacrolix/torrent/metainfo#Hash), ip [net](/net).[IP](/net#IP), port [int](/builtin#int), portOk [bool](/builtin#bool))
	// How long to wait before resending queries that haven't received a response. Defaults to 2s.
	// After the last send, a query is aborted after this time.
	QueryResendDelay func() [time](/time).[Duration](/time#Duration)
	// TODO: Expose Peers, to return NodeInfo for received get_peers queries.
	PeerStore [peer_store](/github.com/anacrolix/dht/[email protected]/peer-store).[Interface](/github.com/anacrolix/dht/[email protected]/peer-store#Interface)
	// BEP-44: Storing arbitrary data in the DHT. If no store is provided, a default in-memory
	// implementation will be used.
	Store [bep44](/github.com/anacrolix/dht/[email protected]/bep44).[Store](/github.com/anacrolix/dht/[email protected]/bep44#Store)
	// BEP-44: expiration time for non-announced items. Two hours by default.
	Exp [time](/time).[Duration](/time#Duration)
	// If no Logger is provided, log.Default is used and log.Debug messages are filtered out. Note
	// that all messages without a log.Level have log.Debug added to them before being passed to
	// this Logger.
	Logger [log](/github.com/anacrolix/log).[Logger](/github.com/anacrolix/log#Logger)

	DefaultWant [][krpc](/github.com/anacrolix/dht/[email protected]/krpc).[Want](/github.com/anacrolix/dht/[email protected]/krpc#Want)

	SendLimiter *[rate](/golang.org/x/time/rate).[Limiter](/golang.org/x/time/rate#Limiter)
}
```

ServerConfig allows setting up a configuration of the `Server` instance to be created with NewServer.
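A hedged sketch of customizing ServerConfig before constructing a Server: the OnAnnouncePeer body and the choice of bootstrap nodes are illustrative only, and GlobalBootstrapAddrs' network argument is unused per the docs above.

```go
package main

import (
	"log"
	"net"

	dht "github.com/anacrolix/dht/v2"
	"github.com/anacrolix/torrent/metainfo"
)

func main() {
	cfg := dht.NewDefaultServerConfig()

	// Fall back to the package's default global bootstrap nodes.
	cfg.StartingNodes = func() ([]dht.Addr, error) {
		return dht.GlobalBootstrapAddrs("udp")
	}

	// Illustrative hook: log peers that successfully announce to us.
	cfg.OnAnnouncePeer = func(infoHash metainfo.Hash, ip net.IP, port int, portOk bool) {
		log.Printf("announce_peer %v from %v:%v (portOk=%v)", infoHash, ip, port, portOk)
	}

	s, err := dht.NewServer(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()
	log.Printf("serving DHT as %x on %v", s.ID(), s.Addr())
}
```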
#### func [NewDefaultServerConfig](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L156) [¶](#NewDefaultServerConfig) added in v2.3.0

```
func NewDefaultServerConfig() *[ServerConfig](#ServerConfig)
```

#### func (*ServerConfig) [InitNodeId](https://github.com/anacrolix/dht/blob/v2.20.0/server.go#L169) [¶](#ServerConfig.InitNodeId) added in v2.9.0

```
func (c *[ServerConfig](#ServerConfig)) InitNodeId() (deterministic [bool](/builtin#bool))
```

If the NodeId hasn't been specified, generates a suitable one. The result is deterministic if c.Conn and c.PublicIP are non-nil.

#### type [ServerStats](https://github.com/anacrolix/dht/blob/v2.20.0/dht.go#L85) [¶](#ServerStats)

```
type ServerStats struct {
	// Count of nodes in the node table that responded to our last query or
	// haven't yet been queried.
	GoodNodes [int](/builtin#int)
	// Count of nodes in the node table.
	Nodes [int](/builtin#int)
	// Transactions awaiting a response.
	OutstandingTransactions [int](/builtin#int)
	// Individual announce_peer requests that got a success response.
	SuccessfulOutboundAnnouncePeerQueries [int64](/builtin#int64)
	// Nodes that have been blocked.
	BadNodes [uint](/builtin#uint)
	OutboundQueriesAttempted [int64](/builtin#int64)
}
```

A ServerStats instance is returned by Server.Stats() and stores Server metrics.

#### type [StartingNodesGetter](https://github.com/anacrolix/dht/blob/v2.20.0/dht.go#L31) [¶](#StartingNodesGetter)

```
type StartingNodesGetter func() ([][Addr](#Addr), [error](/builtin#error))
```

#### type [TraversalStats](https://github.com/anacrolix/dht/blob/v2.20.0/bootstrap.go#L12) [¶](#TraversalStats)

```
type TraversalStats = [traversal](/github.com/anacrolix/dht/[email protected]/traversal).[Stats](/github.com/anacrolix/dht/[email protected]/traversal#Stats)
```
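To close out the package, a short sketch of the two table-population strategies described above: an explicit Bootstrap, or handing upkeep to TableMaintainer, which per the docs makes explicit bootstrapping unnecessary. The blocking `select {}` at the end is a placeholder for a real application's lifetime.

```go
package main

import (
	"log"

	dht "github.com/anacrolix/dht/v2"
)

func main() {
	s, err := dht.NewServer(nil)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	// Strategy 1: one-off bootstrap to populate the node table.
	if _, err := s.Bootstrap(); err != nil {
		log.Fatal(err)
	}
	log.Printf("node table holds %d nodes", s.NumNodes())

	// Strategy 2: continuous upkeep; pings questionable nodes and
	// refreshes buckets, so an explicit Bootstrap isn't required.
	go s.TableMaintainer()

	select {} // serve forever (sketch only)
}
```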
github.com/go-sql-driver/mysql
README [¶](#section-readme) --- ### Go-MySQL-Driver A MySQL-Driver for Go's [database/sql](https://golang.org/pkg/database/sql/) package ![Go-MySQL-Driver logo](https://raw.github.com/wiki/go-sql-driver/mysql/gomysql_m.png "Golang Gopher holding the MySQL Dolphin") --- * [Features](#readme-features) * [Requirements](#readme-requirements) * [Installation](#readme-installation) * [Usage](#readme-usage) + [DSN (Data Source Name)](#readme-dsn-data-source-name) - [Password](#readme-password) - [Protocol](#readme-protocol) - [Address](#readme-address) - [Parameters](#readme-parameters) - [Examples](#readme-examples) + [Connection pool and timeouts](#readme-connection-pool-and-timeouts) + [context.Context Support](#readme-contextcontext-support) + [ColumnType Support](#readme-columntype-support) + [LOAD DATA LOCAL INFILE support](#readme-load-data-local-infile-support) + [time.Time support](#readme-timetime-support) + [Unicode support](#readme-unicode-support) * [Testing / Development](#readme-testing--development) * [License](#readme-license) --- #### Features * Lightweight and [fast](https://github.com/go-sql-driver/sql-benchmark "golang MySQL-Driver performance") * Native Go implementation. No C-bindings, just pure Go * Connections over TCP/IPv4, TCP/IPv6, Unix domain sockets or [custom protocols](https://godoc.org/github.com/go-sql-driver/mysql#DialFunc) * Automatic handling of broken connections * Automatic Connection Pooling *(by database/sql package)* * Supports queries larger than 16MB * Full [`sql.RawBytes`](https://golang.org/pkg/database/sql/#RawBytes) support. * Intelligent `LONG DATA` handling in prepared statements * Secure `LOAD DATA LOCAL INFILE` support with file allowlisting and `io.Reader` support * Optional `time.Time` parsing * Optional placeholder interpolation #### Requirements * Go 1.13 or higher. We aim to support the 3 latest versions of Go. * MySQL (4.1+), MariaDB, Percona Server, Google CloudSQL or Sphinx (2.2.3+) --- #### Installation Simply install the package to your [$GOPATH](https://github.com/golang/go/wiki/GOPATH "GOPATH") with the [go tool](https://golang.org/cmd/go/ "go command") from a shell: ``` $ go get -u github.com/go-sql-driver/mysql ``` Make sure [Git is installed](https://git-scm.com/downloads) on your machine and in your system's `PATH`. #### Usage *Go MySQL Driver* is an implementation of Go's `database/sql/driver` interface. You only need to import the driver; you can then use the full [`database/sql`](https://golang.org/pkg/database/sql/) API. Use `mysql` as `driverName` and a valid [DSN](#readme-dsn-data-source-name) as `dataSourceName`: ``` import ( "database/sql" "time" _ "github.com/go-sql-driver/mysql" ) // ... db, err := sql.Open("mysql", "user:password@/dbname") if err != nil { panic(err) } // See "Important settings" section. db.SetConnMaxLifetime(time.Minute * 3) db.SetMaxOpenConns(10) db.SetMaxIdleConns(10) ``` [Examples are available in our Wiki](https://github.com/go-sql-driver/mysql/wiki/Examples "Go-MySQL-Driver Examples"). ##### Important settings `db.SetConnMaxLifetime()` is required to ensure connections are closed by the driver safely before they are closed by the MySQL server, OS, or other middleware. Since some middleware closes idle connections after 5 minutes, we recommend a timeout shorter than 5 minutes. This setting also helps with load balancing and with changing system variables. `db.SetMaxOpenConns()` is highly recommended to limit the number of connections used by the application. 
There is no recommended limit, since it depends on the application and the MySQL server. `db.SetMaxIdleConns()` is recommended to be set to the same value as `db.SetMaxOpenConns()`. When it is smaller than `SetMaxOpenConns()`, connections can be opened and closed much more frequently than you expect. Idle connections can be closed by `db.SetConnMaxLifetime()`. If you want to close idle connections more rapidly, you can use `db.SetConnMaxIdleTime()`, available since Go 1.15. ##### DSN (Data Source Name) The Data Source Name has a common format, like e.g. [PEAR DB](http://pear.php.net/manual/en/package.database.db.intro-dsn.php) uses it, but without a type prefix (optional parts are marked by square brackets): ``` [username[:password]@][protocol[(address)]]/dbname[?param1=value1&...&paramN=valueN] ``` A DSN in its fullest form: ``` username:password@protocol(address)/dbname?param=value ``` Except for the database name, all values are optional. So the minimal DSN is: ``` /dbname ``` If you do not want to preselect a database, leave `dbname` empty: ``` / ``` This has the same effect as an empty DSN string: ``` ``` Alternatively, [Config.FormatDSN](https://godoc.org/github.com/go-sql-driver/mysql#Config.FormatDSN) can be used to create a DSN string by filling a struct. ###### Password Passwords can consist of any character. Escaping is **not** necessary. ###### Protocol See [net.Dial](https://golang.org/pkg/net/#Dial) for more information about which networks are available. In general you should use a Unix domain socket if available and TCP otherwise for best performance. ###### Address For TCP and UDP networks, addresses have the form `host[:port]`. If `port` is omitted, the default port will be used. If `host` is a literal IPv6 address, it must be enclosed in square brackets. The functions [net.JoinHostPort](https://golang.org/pkg/net/#JoinHostPort) and [net.SplitHostPort](https://golang.org/pkg/net/#SplitHostPort) manipulate addresses in this form. For Unix domain sockets the address is the absolute path to the MySQL server socket, e.g. `/var/run/mysqld/mysqld.sock` or `/tmp/mysql.sock`. ###### Parameters *Parameters are case-sensitive!* Notice that any of `true`, `TRUE`, `True` or `1` is accepted to stand for a true boolean value. Not surprisingly, false can be specified as any of: `false`, `FALSE`, `False` or `0`. `allowAllFiles` ``` Type: bool Valid Values: true, false Default: false ``` `allowAllFiles=true` disables the file allowlist for `LOAD DATA LOCAL INFILE` and allows *all* files. [*Might be insecure!*](http://dev.mysql.com/doc/refman/5.7/en/load-data-local.html) `allowCleartextPasswords` ``` Type: bool Valid Values: true, false Default: false ``` `allowCleartextPasswords=true` allows using the [cleartext client side plugin](https://dev.mysql.com/doc/en/cleartext-pluggable-authentication.html) if required by an account, such as one defined with the [PAM authentication plugin](http://dev.mysql.com/doc/en/pam-authentication-plugin.html). Sending passwords in clear text may be a security problem in some configurations. To avoid problems if there is any possibility that the password would be intercepted, clients should connect to MySQL Server using a method that protects the password. Possibilities include [TLS / SSL](#readme-tls), IPsec, or a private network. 
`allowFallbackToPlaintext` ``` Type: bool Valid Values: true, false Default: false ``` `allowFallbackToPlaintext=true` acts like a `--ssl-mode=PREFERRED` MySQL client as described in [Command Options for Connecting to the Server](https://dev.mysql.com/doc/refman/5.7/en/connection-options.html#option_general_ssl-mode). `allowNativePasswords` ``` Type: bool Valid Values: true, false Default: true ``` `allowNativePasswords=false` disallows the usage of the MySQL native password method. `allowOldPasswords` ``` Type: bool Valid Values: true, false Default: false ``` `allowOldPasswords=true` allows the usage of the insecure old password method. This should be avoided, but is necessary in some cases. See also [the old_passwords wiki page](https://github.com/go-sql-driver/mysql/wiki/old_passwords). `charset` ``` Type: string Valid Values: <name> Default: none ``` Sets the charset used for client-server interaction (`"SET NAMES <value>"`). If multiple charsets are set (separated by a comma), the following charset is used if setting the charset fails. This enables for example support for `utf8mb4` ([introduced in MySQL 5.5.3](http://dev.mysql.com/doc/refman/5.5/en/charset-unicode-utf8mb4.html)) with fallback to `utf8` for older servers (`charset=utf8mb4,utf8`). Usage of the `charset` parameter is discouraged because it issues additional queries to the server. Unless you need the fallback behavior, please use `collation` instead. `checkConnLiveness` ``` Type: bool Valid Values: true, false Default: true ``` On supported platforms connections retrieved from the connection pool are checked for liveness before using them. If the check fails, the respective connection is marked as bad and the query is retried with another connection. `checkConnLiveness=false` disables this liveness check of connections. `collation` ``` Type: string Valid Values: <name> Default: utf8mb4_general_ci ``` Sets the collation used for client-server interaction on connection. In contrast to `charset`, `collation` does not issue additional queries. If the specified collation is unavailable on the target server, the connection will fail. A list of valid charsets for a server is retrievable with `SHOW COLLATION`. The default collation (`utf8mb4_general_ci`) is supported from MySQL 5.5. You should use an older collation (e.g. `utf8_general_ci`) for older MySQL. Collations for charset "ucs2", "utf16", "utf16le", and "utf32" can not be used ([ref](https://dev.mysql.com/doc/refman/5.7/en/charset-connection.html#charset-connection-impermissible-client-charset)). `clientFoundRows` ``` Type: bool Valid Values: true, false Default: false ``` `clientFoundRows=true` causes an UPDATE to return the number of matching rows instead of the number of rows changed. `columnsWithAlias` ``` Type: bool Valid Values: true, false Default: false ``` When `columnsWithAlias` is true, calls to `sql.Rows.Columns()` will return the table alias and the column name separated by a dot. For example: ``` SELECT u.id FROM users as u ``` will return `u.id` instead of just `id` if `columnsWithAlias=true`. `interpolateParams` ``` Type: bool Valid Values: true, false Default: false ``` If `interpolateParams` is true, placeholders (`?`) in calls to `db.Query()` and `db.Exec()` are interpolated into a single query string with the given parameters. This reduces the number of roundtrips: with `interpolateParams=false` the driver has to prepare a statement, execute it with the given parameters, and close the statement again. 
*This cannot be used together with the multibyte encodings BIG5, CP932, GB2312, GBK or SJIS. These are rejected as they may [introduce a SQL injection vulnerability](http://stackoverflow.com/a/12118602/3430118)!* `loc` ``` Type: string Valid Values: <escaped name> Default: UTC ``` Sets the location for time.Time values (when using `parseTime=true`). *"Local"* sets the system's location. See [time.LoadLocation](https://golang.org/pkg/time/#LoadLocation) for details. Note that this sets the location for time.Time values but does not change MySQL's [time_zone setting](https://dev.mysql.com/doc/refman/5.5/en/time-zone-support.html). For that see the [time_zone system variable](#readme-system-variables), which can also be set as a DSN parameter. Please keep in mind that param values must be [url.QueryEscape](https://golang.org/pkg/net/url/#QueryEscape)'ed. Alternatively you can manually replace the `/` with `%2F`. For example `US/Pacific` would be `loc=US%2FPacific`. `maxAllowedPacket` ``` Type: decimal number Default: 64*1024*1024 ``` Max packet size allowed in bytes. The default value is 64 MiB and should be adjusted to match the server settings. `maxAllowedPacket=0` can be used to automatically fetch the `max_allowed_packet` variable from server *on every connection*. `multiStatements` ``` Type: bool Valid Values: true, false Default: false ``` Allow multiple statements in one query. While this allows batch queries, it also greatly increases the risk of SQL injections. Only the result of the first query is returned, all other results are silently discarded. When `multiStatements` is used, `?` parameters must only be used in the first statement. `parseTime` ``` Type: bool Valid Values: true, false Default: false ``` `parseTime=true` changes the output type of `DATE` and `DATETIME` values to `time.Time` instead of `[]byte` / `string`. A date or datetime like `0000-00-00 00:00:00` is converted into the zero value of `time.Time`. `readTimeout` ``` Type: duration Default: 0 ``` I/O read timeout. The value must be a decimal number with a unit suffix (*"ms"*, *"s"*, *"m"*, *"h"*), such as *"30s"*, *"0.5m"* or *"1m30s"*. `rejectReadOnly` ``` Type: bool Valid Values: true, false Default: false ``` `rejectReadOnly=true` causes the driver to reject read-only connections. This is for a possible race condition during an automatic failover, where the mysql client gets connected to a read-only replica after the failover. Note that this should be a fairly rare case, as an automatic failover normally happens when the primary is down, and the race condition shouldn't happen unless it comes back up online as soon as the failover is kicked off. On the other hand, when this happens, a MySQL application can get stuck on a read-only connection until restarted. It is however fairly easy to reproduce, for example, using a manual failover on AWS Aurora's MySQL-compatible cluster. If you are not relying on read-only transactions to reject writes that aren't supposed to happen, setting this on some MySQL providers (such as AWS Aurora) is safer for failovers. Note that ERROR 1290 can be returned for a `read-only` server and this option will cause a retry for that error. However, the same error number is used for some other cases. You should ensure your application will never cause an ERROR 1290 except for `read-only` mode when enabling this option. 
`serverPubKey` ``` Type: string Valid Values: <name> Default: none ``` Server public keys can be registered with [`mysql.RegisterServerPubKey`](https://godoc.org/github.com/go-sql-driver/mysql#RegisterServerPubKey), which can then be used by the assigned name in the DSN. Public keys are used to transmit encrypted data, e.g. for authentication. If the server's public key is known, it should be set manually to avoid expensive and potentially insecure transmissions of the public key from the server to the client each time it is required. `timeout` ``` Type: duration Default: OS default ``` Timeout for establishing connections, aka dial timeout. The value must be a decimal number with a unit suffix (*"ms"*, *"s"*, *"m"*, *"h"*), such as *"30s"*, *"0.5m"* or *"1m30s"*. `tls` ``` Type: bool / string Valid Values: true, false, skip-verify, preferred, <name> Default: false ``` `tls=true` enables TLS / SSL encrypted connection to the server. Use `skip-verify` if you want to use a self-signed or invalid certificate (server side) or use `preferred` to use TLS only when advertised by the server. This is similar to `skip-verify`, but additionally allows a fallback to a connection which is not encrypted. Neither `skip-verify` nor `preferred` add any reliable security. You can use a custom TLS config after registering it with [`mysql.RegisterTLSConfig`](https://godoc.org/github.com/go-sql-driver/mysql#RegisterTLSConfig). `writeTimeout` ``` Type: duration Default: 0 ``` I/O write timeout. The value must be a decimal number with a unit suffix (*"ms"*, *"s"*, *"m"*, *"h"*), such as *"30s"*, *"0.5m"* or *"1m30s"*. ###### System Variables Any other parameters are interpreted as system variables: * `<boolean_var>=<value>`: `SET <boolean_var>=<value>` * `<enum_var>=<value>`: `SET <enum_var>=<value>` * `<string_var>=%27<value>%27`: `SET <string_var>='<value>'` Rules: * The values for string variables must be quoted with `'`. * The values must also be [url.QueryEscape](http://golang.org/pkg/net/url/#QueryEscape)'ed! (which implies values of string variables must be wrapped with `%27`). Examples: * `autocommit=1`: `SET autocommit=1` * [`time_zone=%27Europe%2FParis%27`](https://dev.mysql.com/doc/refman/5.5/en/time-zone-support.html): `SET time_zone='Europe/Paris'` * [`transaction_isolation=%27REPEATABLE-READ%27`](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_transaction_isolation): `SET transaction_isolation='REPEATABLE-READ'` ###### Examples ``` user@unix(/path/to/socket)/dbname ``` ``` root:pw@unix(/tmp/mysql.sock)/myDatabase?loc=Local ``` ``` user:password@tcp(localhost:5555)/dbname?tls=skip-verify&autocommit=true ``` Treat warnings as errors by setting the system variable [`sql_mode`](https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html): ``` user:password@/dbname?sql_mode=TRADITIONAL ``` TCP via IPv6: ``` user:password@tcp([de:ad:be:ef::ca:fe]:80)/dbname?timeout=90s&collation=utf8mb4_unicode_ci ``` TCP on a remote host, e.g. Amazon RDS: ``` id:password@tcp(your-amazonaws-uri.com:3306)/dbname ``` Google Cloud SQL on App Engine: ``` user:password@unix(/cloudsql/project-id:region-name:instance-name)/dbname ``` TCP using default port (3306) on localhost: ``` user:password@tcp/dbname?charset=utf8mb4,utf8&sys_var=esc%40ped ``` Use the default protocol (tcp) and host (localhost:3306): ``` user:password@/dbname ``` No Database preselected: ``` user:password@/ ``` ##### Connection pool and timeouts The connection pool is managed by Go's database/sql package. 
For details on how to configure the size of the pool and how long connections stay in the pool see `*DB.SetMaxOpenConns`, `*DB.SetMaxIdleConns`, and `*DB.SetConnMaxLifetime` in the [database/sql documentation](https://golang.org/pkg/database/sql/). The read, write, and dial timeouts for each individual connection are configured with the DSN parameters [`readTimeout`](#readme-readtimeout), [`writeTimeout`](#readme-writetimeout), and [`timeout`](#readme-timeout), respectively. #### `ColumnType` Support This driver supports the [`ColumnType` interface](https://golang.org/pkg/database/sql/#ColumnType) introduced in Go 1.8, with the exception of [`ColumnType.Length()`](https://golang.org/pkg/database/sql/#ColumnType.Length), which is currently not supported. All Unsigned database type names will be returned `UNSIGNED` with `INT`, `TINYINT`, `SMALLINT`, `BIGINT`. #### `context.Context` Support Go 1.8 added `database/sql` support for `context.Context`. This driver supports query timeouts and cancellation via contexts. See [context support in the database/sql package](https://golang.org/doc/go1.8#database_sql) for more details. ##### `LOAD DATA LOCAL INFILE` support For this feature you need direct access to the package. Therefore you must change the import path (no `_`): ``` import "github.com/go-sql-driver/mysql" ``` Files must be explicitly allowed by registering them with `mysql.RegisterLocalFile(filepath)` (recommended) or the allowlist check must be deactivated by using the DSN parameter `allowAllFiles=true` ([*Might be insecure!*](http://dev.mysql.com/doc/refman/5.7/en/load-data-local.html)). To use an `io.Reader`, a handler function must be registered with `mysql.RegisterReaderHandler(name, handler)` which returns an `io.Reader` or `io.ReadCloser`. The Reader is then available with the filepath `Reader::<name>`. Choose different names for different handlers and call `DeregisterReaderHandler` when you don't need a handler anymore. See the [godoc of Go-MySQL-Driver](https://godoc.org/github.com/go-sql-driver/mysql "golang mysql driver documentation") for details. ##### `time.Time` support The default internal output type of MySQL `DATE` and `DATETIME` values is `[]byte`, which allows you to scan the value into a `[]byte`, `string` or `sql.RawBytes` variable in your program. However, many want to scan MySQL `DATE` and `DATETIME` values into `time.Time` variables, which is the logical equivalent in Go to `DATE` and `DATETIME` in MySQL. You can do that by changing the internal output type from `[]byte` to `time.Time` with the DSN parameter `parseTime=true`. You can set the default [`time.Time` location](https://golang.org/pkg/time/#Location) with the `loc` DSN parameter. **Caution:** As of Go 1.1, this makes `time.Time` the only variable type you can scan `DATE` and `DATETIME` values into. This breaks for example [`sql.RawBytes` support](https://github.com/go-sql-driver/mysql/wiki/Examples#rawbytes). ##### Unicode support Since version 1.5, Go-MySQL-Driver uses the collation `utf8mb4_general_ci` by default. Other collations / charsets can be set using the [`collation`](#readme-collation) DSN parameter. Version 1.0 of the driver recommended adding `&charset=utf8` (alias for `SET NAMES utf8`) to the DSN to enable proper UTF-8 support. This is not necessary anymore. The [`collation`](#readme-collation) parameter should be preferred to set another collation / charset than the default. See <http://dev.mysql.com/doc/refman/8.0/en/charset-unicode.html> for more details on MySQL's Unicode support. 
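A small sketch of the `parseTime`/`loc` behavior described above; the `events` table and `created_at` column are hypothetical, and the credentials are placeholders.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// parseTime=true scans DATE/DATETIME into time.Time; loc picks their location.
	db, err := sql.Open("mysql", "user:password@/dbname?parseTime=true&loc=Local")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var created time.Time // would be []byte/string without parseTime=true
	err = db.QueryRow("SELECT created_at FROM events WHERE id = ?", 1).Scan(&created)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created:", created.Format(time.RFC3339))
}
```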
#### Testing / Development To run the driver tests you may need to adjust the configuration. See the [Testing Wiki-Page](https://github.com/go-sql-driver/mysql/wiki/Testing "Testing") for details. Go-MySQL-Driver is not feature-complete yet. Your help is greatly appreciated. If you want to contribute, you can work on an [open issue](https://github.com/go-sql-driver/mysql/issues?state=open) or review a [pull request](https://github.com/go-sql-driver/mysql/pulls). See the [Contribution Guidelines](https://github.com/go-sql-driver/mysql/blob/master/.github/CONTRIBUTING.md) for details. --- #### License Go-MySQL-Driver is licensed under the [Mozilla Public License Version 2.0](https://raw.github.com/go-sql-driver/mysql/master/LICENSE) Mozilla summarizes the license scope as follows: > MPL: The copyleft applies to any files containing MPLed code. That means: * You can **use** the **unchanged** source code both in private and commercially. * When distributing, you **must publish** the source code of any **changed files** licensed under the MPL 2.0 under a) the MPL 2.0 itself or b) a compatible license (e.g. GPL 3.0 or Apache License 2.0). * You **needn't publish** the source code of your library as long as the files licensed under the MPL 2.0 are **unchanged**. Please read the [MPL 2.0 FAQ](https://www.mozilla.org/en-US/MPL/2.0/FAQ/) if you have further questions regarding the license. You can read the full terms here: [LICENSE](https://raw.github.com/go-sql-driver/mysql/master/LICENSE). ![Go Gopher and MySQL Dolphin](https://raw.github.com/wiki/go-sql-driver/mysql/go-mysql-driver_m.jpg "Golang Gopher transporting the MySQL Dolphin in a wheelbarrow") Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) Package mysql provides a MySQL driver for Go's database/sql package. 
The driver should be used via the database/sql package: ``` import "database/sql" import _ "github.com/go-sql-driver/mysql" db, err := sql.Open("mysql", "user:password@/dbname") ``` See <https://github.com/go-sql-driver/mysql#usage> for details ### Index [¶](#pkg-index) * [Variables](#pkg-variables) * [func DeregisterLocalFile(filePath string)](#DeregisterLocalFile) * [func DeregisterReaderHandler(name string)](#DeregisterReaderHandler) * [func DeregisterServerPubKey(name string)](#DeregisterServerPubKey) * [func DeregisterTLSConfig(key string)](#DeregisterTLSConfig) * [func NewConnector(cfg *Config) (driver.Connector, error)](#NewConnector) * [func RegisterDial(network string, dial DialFunc)](#RegisterDial)deprecated * [func RegisterDialContext(net string, dial DialContextFunc)](#RegisterDialContext) * [func RegisterLocalFile(filePath string)](#RegisterLocalFile) * [func RegisterReaderHandler(name string, handler func() io.Reader)](#RegisterReaderHandler) * [func RegisterServerPubKey(name string, pubKey *rsa.PublicKey)](#RegisterServerPubKey) * [func RegisterTLSConfig(key string, config *tls.Config) error](#RegisterTLSConfig) * [func SetLogger(logger Logger) error](#SetLogger) * [type Config](#Config) * + [func NewConfig() *Config](#NewConfig) + [func ParseDSN(dsn string) (cfg *Config, err error)](#ParseDSN) * + [func (cfg *Config) Clone() *Config](#Config.Clone) + [func (cfg *Config) FormatDSN() string](#Config.FormatDSN) * [type DialContextFunc](#DialContextFunc) * [type DialFunc](#DialFunc)deprecated * [type Logger](#Logger) * [type MySQLDriver](#MySQLDriver) * + [func (d MySQLDriver) Open(dsn string) (driver.Conn, error)](#MySQLDriver.Open) + [func (d MySQLDriver) OpenConnector(dsn string) (driver.Connector, error)](#MySQLDriver.OpenConnector) * [type MySQLError](#MySQLError) * + [func (me *MySQLError) Error() string](#MySQLError.Error) + [func (me *MySQLError) Is(err error) bool](#MySQLError.Is) * [type NullTime](#NullTime)deprecated * + [func (nt *NullTime) Scan(value interface{}) (err error)](#NullTime.Scan) + [func (nt NullTime) Value() (driver.Value, error)](#NullTime.Value) ### Constants [¶](#pkg-constants) This section is empty. ### Variables [¶](#pkg-variables) ``` var ( ErrInvalidConn = [errors](/errors).[New](/errors#New)("invalid connection") ErrMalformPkt = [errors](/errors).[New](/errors#New)("malformed packet") ErrNoTLS = [errors](/errors).[New](/errors#New)("TLS requested but server does not support TLS") ErrCleartextPassword = [errors](/errors).[New](/errors#New)("this user requires clear text authentication. If you still want to use it, please add 'allowCleartextPasswords=1' to your DSN") ErrNativePassword = [errors](/errors).[New](/errors#New)("this user requires mysql native password authentication.") ErrOldPassword = [errors](/errors).[New](/errors#New)("this user requires old password authentication. If you still want to use it, please add 'allowOldPasswords=1' to your DSN. See also https://github.com/go-sql-driver/mysql/wiki/old_passwords") ErrUnknownPlugin = [errors](/errors).[New](/errors#New)("this authentication plugin is not supported") ErrOldProtocol = [errors](/errors).[New](/errors#New)("MySQL server does not support required protocol 41+") ErrPktSync = [errors](/errors).[New](/errors#New)("commands out of sync. You can't run this command now") ErrPktSyncMul = [errors](/errors).[New](/errors#New)("commands out of sync. Did you run multiple statements at once?") ErrPktTooLarge = [errors](/errors).[New](/errors#New)("packet for query is too large. 
Try adjusting the `Config.MaxAllowedPacket`") ErrBusyBuffer = [errors](/errors).[New](/errors#New)("busy buffer") ) ``` Various errors the driver might return. Can change between driver versions. ### Functions [¶](#pkg-functions) #### func [DeregisterLocalFile](https://github.com/go-sql-driver/mysql/blob/v1.7.1/infile.go#L48) [¶](#DeregisterLocalFile) ``` func DeregisterLocalFile(filePath [string](/builtin#string)) ``` DeregisterLocalFile removes the given filepath from the allowlist. #### func [DeregisterReaderHandler](https://github.com/go-sql-driver/mysql/blob/v1.7.1/infile.go#L81) [¶](#DeregisterReaderHandler) ``` func DeregisterReaderHandler(name [string](/builtin#string)) ``` DeregisterReaderHandler removes the ReaderHandler function with the given name from the registry. #### func [DeregisterServerPubKey](https://github.com/go-sql-driver/mysql/blob/v1.7.1/auth.go#L67) [¶](#DeregisterServerPubKey) added in v1.4.0 ``` func DeregisterServerPubKey(name [string](/builtin#string)) ``` DeregisterServerPubKey removes the public key registered with the given name. #### func [DeregisterTLSConfig](https://github.com/go-sql-driver/mysql/blob/v1.7.1/utils.go#L73) [¶](#DeregisterTLSConfig) added in v1.1.0 ``` func DeregisterTLSConfig(key [string](/builtin#string)) ``` DeregisterTLSConfig removes the tls.Config associated with key. #### func [NewConnector](https://github.com/go-sql-driver/mysql/blob/v1.7.1/driver.go#L88) [¶](#NewConnector) added in v1.5.0 ``` func NewConnector(cfg *[Config](#Config)) ([driver](/database/sql/driver).[Connector](/database/sql/driver#Connector), [error](/builtin#error)) ``` NewConnector returns a new driver.Connector. #### func [RegisterDial](https://github.com/go-sql-driver/mysql/blob/v1.7.1/driver.go#L63) deprecated added in v1.2.0 ``` func RegisterDial(network [string](/builtin#string), dial [DialFunc](#DialFunc)) ``` RegisterDial registers a custom dial function. It can then be used by the network address mynet(addr), where mynet is the registered new network. addr is passed as a parameter to the dial function. Deprecated: users should call RegisterDialContext instead #### func [RegisterDialContext](https://github.com/go-sql-driver/mysql/blob/v1.7.1/driver.go#L49) [¶](#RegisterDialContext) added in v1.5.0 ``` func RegisterDialContext(net [string](/builtin#string), dial [DialContextFunc](#DialContextFunc)) ``` RegisterDialContext registers a custom dial function. It can then be used by the network address mynet(addr), where mynet is the registered new network. The current context for the connection and its address is passed to the dial function. #### func [RegisterLocalFile](https://github.com/go-sql-driver/mysql/blob/v1.7.1/infile.go#L36) [¶](#RegisterLocalFile) ``` func RegisterLocalFile(filePath [string](/builtin#string)) ``` RegisterLocalFile adds the given file to the file allowlist, so that it can be used by "LOAD DATA LOCAL INFILE <filepath>". Alternatively you can allow the use of all local files with the DSN parameter 'allowAllFiles=true' ``` filePath := "/home/gopher/data.csv" mysql.RegisterLocalFile(filePath) _, err := db.Exec("LOAD DATA LOCAL INFILE '" + filePath + "' INTO TABLE foo") if err != nil { ... ``` #### func [RegisterReaderHandler](https://github.com/go-sql-driver/mysql/blob/v1.7.1/infile.go#L68) [¶](#RegisterReaderHandler) ``` func RegisterReaderHandler(name [string](/builtin#string), handler func() [io](/io).[Reader](/io#Reader)) ``` RegisterReaderHandler registers a handler function which is used to receive an io.Reader. 
The Reader can be used by "LOAD DATA LOCAL INFILE Reader::<name>". If the handler returns an io.ReadCloser, Close() is called when the request is finished. ``` mysql.RegisterReaderHandler("data", func() io.Reader { var csvReader io.Reader // Some Reader that returns CSV data ... // Open Reader here return csvReader }) _, err := db.Exec("LOAD DATA LOCAL INFILE 'Reader::data' INTO TABLE foo") if err != nil { ... ``` #### func [RegisterServerPubKey](https://github.com/go-sql-driver/mysql/blob/v1.7.1/auth.go#L56) [¶](#RegisterServerPubKey) added in v1.4.0 ``` func RegisterServerPubKey(name [string](/builtin#string), pubKey *[rsa](/crypto/rsa).[PublicKey](/crypto/rsa#PublicKey)) ``` RegisterServerPubKey registers a server RSA public key which can be used to send data in a secure manner to the server without receiving the public key in a potentially insecure way from the server first. Registered keys can afterwards be used by adding serverPubKey=<name> to the DSN. Note: The provided rsa.PublicKey instance is exclusively owned by the driver after registering it and may not be modified. ``` data, err := ioutil.ReadFile("mykey.pem") if err != nil { log.Fatal(err) } block, _ := pem.Decode(data) if block == nil || block.Type != "PUBLIC KEY" { log.Fatal("failed to decode PEM block containing public key") } pub, err := x509.ParsePKIXPublicKey(block.Bytes) if err != nil { log.Fatal(err) } if rsaPubKey, ok := pub.(*rsa.PublicKey); ok { mysql.RegisterServerPubKey("mykey", rsaPubKey) } else { log.Fatal("not an RSA public key") } ``` #### func [RegisterTLSConfig](https://github.com/go-sql-driver/mysql/blob/v1.7.1/utils.go#L57) [¶](#RegisterTLSConfig) added in v1.1.0 ``` func RegisterTLSConfig(key [string](/builtin#string), config *[tls](/crypto/tls).[Config](/crypto/tls#Config)) [error](/builtin#error) ``` RegisterTLSConfig registers a custom tls.Config to be used with sql.Open. Use the key as a value in the DSN where tls=value. Note: The provided tls.Config is exclusively owned by the driver after registering it. ``` rootCertPool := x509.NewCertPool() pem, err := ioutil.ReadFile("/path/ca-cert.pem") if err != nil { log.Fatal(err) } if ok := rootCertPool.AppendCertsFromPEM(pem); !ok { log.Fatal("Failed to append PEM.") } clientCert := make([]tls.Certificate, 0, 1) certs, err := tls.LoadX509KeyPair("/path/client-cert.pem", "/path/client-key.pem") if err != nil { log.Fatal(err) } clientCert = append(clientCert, certs) mysql.RegisterTLSConfig("custom", &tls.Config{ RootCAs: rootCertPool, Certificates: clientCert, }) db, err := sql.Open("mysql", "user@tcp(localhost:3306)/test?tls=custom") ``` #### func [SetLogger](https://github.com/go-sql-driver/mysql/blob/v1.7.1/errors.go#L49) [¶](#SetLogger) added in v1.2.0 ``` func SetLogger(logger [Logger](#Logger)) [error](/builtin#error) ``` SetLogger is used to set the logger for critical errors. The initial logger is os.Stderr. 
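Since the Logger interface (defined under Types below) only requires Print(v ...interface{}), a standard *log.Logger satisfies it directly. A minimal sketch; the prefix string is an arbitrary choice:

```go
package main

import (
	"log"
	"os"

	"github.com/go-sql-driver/mysql"
)

func main() {
	// Route the driver's critical-error output through a prefixed logger.
	if err := mysql.SetLogger(log.New(os.Stderr, "[mysql] ", log.LstdFlags)); err != nil {
		log.Fatal(err)
	}
}
```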
### Types [¶](#pkg-types) #### type [Config](https://github.com/go-sql-driver/mysql/blob/v1.7.1/dsn.go#L36) [¶](#Config) added in v1.3.0 ``` type Config struct { User [string](/builtin#string) // Username Passwd [string](/builtin#string) // Password (requires User) Net [string](/builtin#string) // Network type Addr [string](/builtin#string) // Network address (requires Net) DBName [string](/builtin#string) // Database name Params map[[string](/builtin#string)][string](/builtin#string) // Connection parameters Collation [string](/builtin#string) // Connection collation Loc *[time](/time).[Location](/time#Location) // Location for time.Time values MaxAllowedPacket [int](/builtin#int) // Max packet size allowed ServerPubKey [string](/builtin#string) // Server public key name TLSConfig [string](/builtin#string) // TLS configuration name TLS *[tls](/crypto/tls).[Config](/crypto/tls#Config) // TLS configuration, its priority is higher than TLSConfig Timeout [time](/time).[Duration](/time#Duration) // Dial timeout ReadTimeout [time](/time).[Duration](/time#Duration) // I/O read timeout WriteTimeout [time](/time).[Duration](/time#Duration) // I/O write timeout AllowAllFiles [bool](/builtin#bool) // Allow all files to be used with LOAD DATA LOCAL INFILE AllowCleartextPasswords [bool](/builtin#bool) // Allows the cleartext client side plugin AllowFallbackToPlaintext [bool](/builtin#bool) // Allows fallback to unencrypted connection if server does not support TLS AllowNativePasswords [bool](/builtin#bool) // Allows the native password authentication method AllowOldPasswords [bool](/builtin#bool) // Allows the old insecure password method CheckConnLiveness [bool](/builtin#bool) // Check connections for liveness before using them ClientFoundRows [bool](/builtin#bool) // Return number of matching rows instead of rows changed ColumnsWithAlias [bool](/builtin#bool) // Prepend table alias to column names InterpolateParams [bool](/builtin#bool) // Interpolate placeholders into query string MultiStatements [bool](/builtin#bool) // Allow multiple statements in one query ParseTime [bool](/builtin#bool) // Parse time values to time.Time RejectReadOnly [bool](/builtin#bool) // Reject read-only connections // contains filtered or unexported fields } ``` Config is a configuration parsed from a DSN string. If a new Config is created instead of being parsed from a DSN string, the NewConfig function should be used, which sets default values. #### func [NewConfig](https://github.com/go-sql-driver/mysql/blob/v1.7.1/dsn.go#L69) [¶](#NewConfig) added in v1.4.0 ``` func NewConfig() *[Config](#Config) ``` NewConfig creates a new Config and sets default values. #### func [ParseDSN](https://github.com/go-sql-driver/mysql/blob/v1.7.1/dsn.go#L301) [¶](#ParseDSN) added in v1.3.0 ``` func ParseDSN(dsn [string](/builtin#string)) (cfg *[Config](#Config), err [error](/builtin#error)) ``` ParseDSN parses the DSN string to a Config #### func (*Config) [Clone](https://github.com/go-sql-driver/mysql/blob/v1.7.1/dsn.go#L79) [¶](#Config.Clone) added in v1.5.0 ``` func (cfg *[Config](#Config)) Clone() *[Config](#Config) ``` #### func (*Config) [FormatDSN](https://github.com/go-sql-driver/mysql/blob/v1.7.1/dsn.go#L174) [¶](#Config.FormatDSN) added in v1.3.0 ``` func (cfg *[Config](#Config)) FormatDSN() [string](/builtin#string) ``` FormatDSN formats the given Config into a DSN string which can be passed to the driver. 
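A sketch combining NewConfig, FormatDSN, and NewConnector; the credentials and address are placeholders, and sql.OpenDB is from database/sql (Go 1.10+).

```go
package main

import (
	"database/sql"
	"log"

	"github.com/go-sql-driver/mysql"
)

func main() {
	cfg := mysql.NewConfig() // sets the documented defaults
	cfg.User = "user"
	cfg.Passwd = "password"
	cfg.Net = "tcp"
	cfg.Addr = "127.0.0.1:3306"
	cfg.DBName = "dbname"
	cfg.ParseTime = true

	// Option 1: render a DSN string for sql.Open.
	db1, err := sql.Open("mysql", cfg.FormatDSN())
	if err != nil {
		log.Fatal(err)
	}
	defer db1.Close()

	// Option 2: skip DSN formatting entirely via a driver.Connector.
	connector, err := mysql.NewConnector(cfg)
	if err != nil {
		log.Fatal(err)
	}
	db2 := sql.OpenDB(connector)
	defer db2.Close()
}
```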
#### type [DialContextFunc](https://github.com/go-sql-driver/mysql/blob/v1.7.1/driver.go#L39) [¶](#DialContextFunc) added in v1.5.0

```
type DialContextFunc func(ctx [context](/context).[Context](/context#Context), addr [string](/builtin#string)) ([net](/net).[Conn](/net#Conn), [error](/builtin#error))
```

DialContextFunc is a function which can be used to establish the network connection. Custom dial functions must be registered with RegisterDialContext.

#### type [DialFunc](https://github.com/go-sql-driver/mysql/blob/v1.7.1/driver.go#L35) deprecated added in v1.2.0

```
type DialFunc func(addr [string](/builtin#string)) ([net](/net).[Conn](/net#Conn), [error](/builtin#error))
```

DialFunc is a function which can be used to establish the network connection. Custom dial functions must be registered with RegisterDial.

Deprecated: users should register a DialContextFunc instead.

#### type [Logger](https://github.com/go-sql-driver/mysql/blob/v1.7.1/errors.go#L43) [¶](#Logger) added in v1.2.0

```
type Logger interface {
	Print(v ...interface{})
}
```

Logger is used to log critical error messages.

#### type [MySQLDriver](https://github.com/go-sql-driver/mysql/blob/v1.7.1/driver.go#L29) [¶](#MySQLDriver) added in v1.1.0

```
type MySQLDriver struct{}
```

MySQLDriver is exported to make the driver directly accessible. In general the driver is used via the database/sql package.

#### func (MySQLDriver) [Open](https://github.com/go-sql-driver/mysql/blob/v1.7.1/driver.go#L72) [¶](#MySQLDriver.Open) added in v1.1.0

```
func (d [MySQLDriver](#MySQLDriver)) Open(dsn [string](/builtin#string)) ([driver](/database/sql/driver).[Conn](/database/sql/driver#Conn), [error](/builtin#error))
```

Open opens a new connection. See <https://github.com/go-sql-driver/mysql#dsn-data-source-name> for how the DSN string is formatted.

#### func (MySQLDriver) [OpenConnector](https://github.com/go-sql-driver/mysql/blob/v1.7.1/driver.go#L99) [¶](#MySQLDriver.OpenConnector) added in v1.5.0

```
func (d [MySQLDriver](#MySQLDriver)) OpenConnector(dsn [string](/builtin#string)) ([driver](/database/sql/driver).[Connector](/database/sql/driver#Connector), [error](/builtin#error))
```

OpenConnector implements driver.DriverContext.

#### type [MySQLError](https://github.com/go-sql-driver/mysql/blob/v1.7.1/errors.go#L58) [¶](#MySQLError)

```
type MySQLError struct {
	Number   [uint16](/builtin#uint16)
	SQLState [5][byte](/builtin#byte)
	Message  [string](/builtin#string)
}
```

MySQLError is an error type which represents a single MySQL error.

#### func (*MySQLError) [Error](https://github.com/go-sql-driver/mysql/blob/v1.7.1/errors.go#L64) [¶](#MySQLError.Error)

```
func (me *[MySQLError](#MySQLError)) Error() [string](/builtin#string)
```

#### func (*MySQLError) [Is](https://github.com/go-sql-driver/mysql/blob/v1.7.1/errors.go#L72) [¶](#MySQLError.Is) added in v1.7.0

```
func (me *[MySQLError](#MySQLError)) Is(err [error](/builtin#error)) [bool](/builtin#bool)
```

#### type [NullTime](https://github.com/go-sql-driver/mysql/blob/v1.7.1/nulltime.go#L36) deprecated

```
type NullTime [sql](/database/sql).[NullTime](/database/sql#NullTime)
```

* [This NullTime implementation is not driver-specific](#hdr-This_NullTime_implementation_is_not_driver_specific)

NullTime represents a time.Time that may be NULL. NullTime implements the Scanner interface so it can be used as a scan destination:

```
var nt NullTime
err := db.QueryRow("SELECT time FROM foo WHERE id=?", id).Scan(&nt)
...
if nt.Valid {
	// use nt.Time
} else {
	// NULL value
}
```

#### This NullTime implementation is not driver-specific [¶](#hdr-This_NullTime_implementation_is_not_driver_specific)

Deprecated: NullTime doesn't honor the loc DSN parameter. NullTime.Scan interprets a time as UTC, not the loc DSN parameter. Use sql.NullTime instead.

#### func (*NullTime) [Scan](https://github.com/go-sql-driver/mysql/blob/v1.7.1/nulltime.go#L41) [¶](#NullTime.Scan)

```
func (nt *[NullTime](#NullTime)) Scan(value interface{}) (err [error](/builtin#error))
```

Scan implements the Scanner interface. The value type must be time.Time or string / []byte (a formatted time string); otherwise Scan fails.

#### func (NullTime) [Value](https://github.com/go-sql-driver/mysql/blob/v1.7.1/nulltime.go#L66) [¶](#NullTime.Value)

```
func (nt [NullTime](#NullTime)) Value() ([driver](/database/sql/driver).[Value](/database/sql/driver#Value), [error](/builtin#error))
```

Value implements the driver Valuer interface.
link_preview_generator
hex
Elixir
link_preview_generator v0.0.4 API Reference
===

* [Modules](#modules)

Modules
===

[LinkPreviewGenerator](LinkPreviewGenerator.html) Simple package for link previews

[LinkPreviewGenerator.Page](LinkPreviewGenerator.Page.html) Provides a struct to store the results of data processing and a helper function to initialize it

[LinkPreviewGenerator.Parsers.Basic](LinkPreviewGenerator.Parsers.Basic.html) Basic parser implementation

[LinkPreviewGenerator.Parsers.Html](LinkPreviewGenerator.Parsers.Html.html) Parser implementation based on html tags

[LinkPreviewGenerator.Parsers.Opengraph](LinkPreviewGenerator.Parsers.Opengraph.html) Parser implementation based on opengraph

[LinkPreviewGenerator.Processor](LinkPreviewGenerator.Processor.html) Combines the logic of the other modules with user input

[LinkPreviewGenerator.Requests](LinkPreviewGenerator.Requests.html) Module providing functions to handle the needed requests

link_preview_generator v0.0.4 LinkPreviewGenerator
===

Simple package for link previews.

Summary
===

[Types](#types)
---

[failure()](#t:failure/0)

[success()](#t:success/0)

[Functions](#functions)
---

[parse(url)](#parse/1) See [`LinkPreviewGenerator.Processor.call/1`](LinkPreviewGenerator.Processor.html#call/1)

Types
===

```
[failure](#t:failure/0) :: {:error, atom}
```

```
[success](#t:success/0) :: {:ok, [LinkPreviewGenerator.Page.t](LinkPreviewGenerator.Page.html#t:t/0)}
```

Functions
===

parse(url)

See [`LinkPreviewGenerator.Processor.call/1`](LinkPreviewGenerator.Processor.html#call/1).

link_preview_generator v0.0.4 LinkPreviewGenerator.Page
===

Provides a struct to store the results of data processing and a helper function to initialize it.

Summary
===

[Types](#types)
---

[t()](#t:t/0)

[Functions](#functions)
---

[new(original_url)](#new/1) Initializes a Page struct based on the original url provided by the user

Types
===

```
[t](#t:t/0) :: %LinkPreviewGenerator.Page{description: [String.t](http://elixir-lang.org/docs/stable/elixir/String.html#t:t/0) | nil, images: list, original_url: [String.t](http://elixir-lang.org/docs/stable/elixir/String.html#t:t/0) | nil, title: [String.t](http://elixir-lang.org/docs/stable/elixir/String.html#t:t/0) | nil, website_url: [String.t](http://elixir-lang.org/docs/stable/elixir/String.html#t:t/0) | nil}
```

Functions
===

new(original_url)

#### Specs

```
new([String.t](http://elixir-lang.org/docs/stable/elixir/String.html#t:t/0)) :: [t](#t:t/0) | {:error, atom}
```

Initializes a Page struct based on the original url provided by the user.

link_preview_generator v0.0.4 LinkPreviewGenerator.Parsers.Basic
===

Basic parser implementation. It provides overridable parsing functions that return the input `LinkPreviewGenerator.Page.t` struct unchanged. The main reason behind this is to let parsers work without needing to implement every possible parsing function. See the [`__using__/1`](#__using__/1) macro.

All parsing functions should take a `LinkPreviewGenerator.Page.t` and a `Floki.html_tree` as params and return a `LinkPreviewGenerator.Page.t` as the result. A parsing function should not override the result of a previously invoked parser unless explicit configuration specifies otherwise.

Summary
===

[Functions](#functions)
---

[parsing_functions()](#parsing_functions/0) Returns the list of parsing functions

[Macros](#macros)
---

[__using__(opts)](#__using__/1) This macro should be invoked in all non-basic parsers

Functions
===

parsing_functions()

#### Specs

```
parsing_functions :: list
```

Returns the list of parsing functions.

Macros
===

__using__(opts)

This macro should be invoked in all non-basic parsers.
link_preview_generator v0.0.4 LinkPreviewGenerator.Parsers.Html
===

Parser implementation based on html tags.

Summary
===

[Functions](#functions)
---

[description(page, parsed_body)](#description/2) Gets the page description based on the first encountered h1..h6 tag

[images(page, parsed_body)](#images/2) Gets images based on img tags

[title(page, parsed_body)](#title/2) Gets the page title based on the first encountered title tag

Functions
===

description(page, parsed_body)

#### Specs

```
description([LinkPreviewGenerator.Page.t](LinkPreviewGenerator.Page.html#t:t/0), Floki.html_tree) :: [LinkPreviewGenerator.Page.t](LinkPreviewGenerator.Page.html#t:t/0)
```

Gets the page description based on the first encountered h1..h6 tag. Preference: h1 > h2 > h3 > h4 > h5 > h6

Config options:

* `:friendly_strings` - removes leading and trailing whitespace, converts remaining newline characters to spaces, and collapses runs of spaces into a single space; default: true

images(page, parsed_body)

#### Specs

```
images([LinkPreviewGenerator.Page.t](LinkPreviewGenerator.Page.html#t:t/0), Floki.html_tree) :: [LinkPreviewGenerator.Page.t](LinkPreviewGenerator.Page.html#t:t/0)
```

Gets images based on img tags.

Config options:

* `:force_images_absolute_url` - tries to prepend the website url from the [`LinkPreviewGenerator.Page`](LinkPreviewGenerator.Page.html) struct to all relative urls, then removes any remaining relative urls from the list; default: false
* `:force_images_url_schema` - tries to add http:// to urls without a schema, then removes all invalid urls; default: false
* `:filter_small_images` - if set to true, filters out images with at least one dimension smaller than 100px; if set to an integer, filters out images with at least one dimension smaller than that integer; requires imagemagick to be installed on the machine; default: false

WARNING: Using these options may reduce performance. To prevent very long processing times, images are limited to the first 50 by design.

title(page, parsed_body)

#### Specs

```
title([LinkPreviewGenerator.Page.t](LinkPreviewGenerator.Page.html#t:t/0), Floki.html_tree) :: [LinkPreviewGenerator.Page.t](LinkPreviewGenerator.Page.html#t:t/0)
```

Gets the page title based on the first encountered title tag.

Config options:

* `:friendly_strings` - removes leading and trailing whitespace, converts remaining newline characters to spaces, and collapses runs of spaces into a single space; default: true

link_preview_generator v0.0.4 LinkPreviewGenerator.Parsers.Opengraph
===

Parser implementation based on opengraph.

Summary
===

[Functions](#functions)
---

[description(page, parsed_body)](#description/2) Gets the page description based on the first encountered og:description property

[images(page, parsed_body)](#images/2) Gets the page images based on the og:image property

[title(page, parsed_body)](#title/2) Gets the page title based on the first encountered og:title property

Functions
===

description(page, parsed_body)

#### Specs

```
description([LinkPreviewGenerator.Page.t](LinkPreviewGenerator.Page.html#t:t/0), Floki.html_tree) :: [LinkPreviewGenerator.Page.t](LinkPreviewGenerator.Page.html#t:t/0)
```

Gets the page description based on the first encountered og:description property.

images(page, parsed_body)

#### Specs

```
images([LinkPreviewGenerator.Page.t](LinkPreviewGenerator.Page.html#t:t/0), Floki.html_tree) :: [LinkPreviewGenerator.Page.t](LinkPreviewGenerator.Page.html#t:t/0)
```

Gets the page images based on the og:image property.
title(page, parsed_body)

#### Specs

```
title([LinkPreviewGenerator.Page.t](LinkPreviewGenerator.Page.html#t:t/0), Floki.html_tree) :: [LinkPreviewGenerator.Page.t](LinkPreviewGenerator.Page.html#t:t/0)
```

Gets the page title based on the first encountered og:title property.

link_preview_generator v0.0.4 LinkPreviewGenerator.Processor
===

Combines the logic of the other modules with user input.

Summary
===

[Functions](#functions)
---

[call(url)](#call/1) Takes a url and returns the result of processing

Functions
===

call(url)

#### Specs

```
call([String.t](http://elixir-lang.org/docs/stable/elixir/String.html#t:t/0)) :: [LinkPreviewGenerator.success](LinkPreviewGenerator.html#t:success/0) | [LinkPreviewGenerator.failure](LinkPreviewGenerator.html#t:failure/0)
```

Takes a url and returns the result of processing.

link_preview_generator v0.0.4 LinkPreviewGenerator.Requests
===

Module providing functions to handle the needed requests.

Summary
===

[Types](#types)
---

[t()](#t:t/0)

[Functions](#functions)
---

[final_location(url)](#final_location/1) Follows redirects and returns the final location

[get(url, headers, options)](#get/3) Invokes HTTPoison.get only if a badarg error cannot occur

[head(url, headers, options)](#head/3) Invokes HTTPoison.head only if a badarg error cannot occur

[valid_image?(url)](#valid_image?/1) Checks whether the given url leads to an image

Types
===

```
[t](#t:t/0) :: {:ok, [String.t](http://elixir-lang.org/docs/stable/elixir/String.html#t:t/0)} | {:error, atom}
```

Functions
===

final_location(url)

#### Specs

```
final_location([String.t](http://elixir-lang.org/docs/stable/elixir/String.html#t:t/0)) :: [t](#t:t/0)
```

Follows redirects and returns the final location.

get(url, headers, options)

#### Specs

```
get([String.t](http://elixir-lang.org/docs/stable/elixir/String.html#t:t/0), list, list) :: {:ok, HTTPoison.Response.t} | {:error, atom}
```

Invokes HTTPoison.get only if a badarg error cannot occur.

head(url, headers, options)

#### Specs

```
head([String.t](http://elixir-lang.org/docs/stable/elixir/String.html#t:t/0), list, list) :: {:ok, HTTPoison.Response.t} | {:error, atom}
```

Invokes HTTPoison.head only if a badarg error cannot occur.

valid_image?(url)

#### Specs

```
valid_image?([String.t](http://elixir-lang.org/docs/stable/elixir/String.html#t:t/0)) :: [t](#t:t/0)
```

Checks whether the given url leads to an image.
intensegRid
cran
R
Package ‘intensegRid’

November 8, 2022

Type Package
Title R Wrapper for the Carbon Intensity API
Version 0.1.2
Author <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Description Electricity is not made equal: it varies in its carbon footprint (or carbon intensity) depending on its source. This package enables users to access and query data provided by the Carbon Intensity API (<https://carbonintensity.org.uk/>). National Grid’s Carbon Intensity API provides an indicative trend of regional carbon intensity of the electricity system in Great Britain.
License CC0
Encoding UTF-8
LazyData TRUE
URL https://github.com/KKulma/intensegRid, https://kkulma.github.io/intensegRid/articles/intro-to-carbon-intensity.html
BugReports https://github.com/KKulma/intensegRid/issues
RoxygenNote 7.2.1
Depends R (>= 2.10)
Imports dplyr, httr, jsonlite, lubridate, magrittr, tidyr, tibble, rlang, purrr
Suggests utils, knitr, rmarkdown, testthat (>= 2.1.0), covr, vcr
VignetteBuilder knitr
NeedsCompilation no
Repository CRAN
Date/Publication 2022-11-08 10:50:06 UTC

R topics documented: clean_colnames, get_british_ci, get_data, get_factors, get_mix, get_national_ci, get_postcode_ci, get_regional_ci, get_stats, regions_lookup

clean_colnames: Tidy up intensity results column names

Description
Tidy up intensity results column names
Usage
clean_colnames(result)
Arguments
result a data frame with raw results from the Carbon Intensity API
Value
data frame

get_british_ci: Fetch British carbon intensity data for a specified time period

Description
Fetch British carbon intensity data for a specified time period
Usage
get_british_ci(start = NULL, end = NULL)
Arguments
start character A start date of the intensity data.
end character An end date of the intensity data. The maximum date range is limited to 14 days.
Value
a data.frame with 1/2-hourly carbon intensity data for the specified time period
Examples
## Not run:
get_british_ci()
get_british_ci(start = '2019-01-01', end = '2019-01-02')
## End(Not run)

get_data: Retrieve raw data from the Carbon Intensity API

Description
Retrieve raw data from the Carbon Intensity API
Usage
get_data(call)
Arguments
call character API URL
Value
tibble

get_factors: Get Carbon Intensity factors for each fuel type

Description
Get Carbon Intensity factors for each fuel type
Usage
get_factors()
Value
a tibble
Examples
get_factors()

get_mix: Get generation mix for current half hour

Description
Get generation mix for current half hour
Usage
get_mix(start = NULL, end = NULL)
Arguments
start character A start date of the intensity data
end character An end date of the intensity data
Value
tibble
Examples
## Not run:
start <- "2019-04-01"
end <- "2019-04-07"
get_mix(start, end)
get_mix()
## End(Not run)

get_national_ci: Get Carbon Intensity data for current half hour for a specified GB region

Description
Get Carbon Intensity data for current half hour for a specified GB region
Usage
get_national_ci(start = NULL, end = NULL, region = NULL)
Arguments
start character A start date of the intensity data
end character An end date of the intensity data
region character The name of the GB region, one of 'England', 'Scotland' or 'Wales'
Value
a tibble
Examples
## Not run:
get_national_ci()
get_national_ci(region = 'England')
get_national_ci(region = 'Scotland')
get_national_ci(region = 'Wales')
get_national_ci(start = '2019-01-01', end = '2019-01-02')
## End(Not run)

get_postcode_ci: Get Carbon Intensity for a specified postcode

Description
Get Carbon Intensity for a specified postcode.
Usage
get_postcode_ci(postcode, start = NULL, end = NULL)
Arguments
postcode character Outward postcode, i.e. RG41 or SW1 or TF8. Do not include the full postcode; outward postcode only.
start character A start date of the intensity data
end character An end date of the intensity data
Value
tibble
Examples
## Not run:
get_postcode_ci(postcode = 'EN2')
get_postcode_ci(postcode = 'EN2', start = '2019-01-01', end = '2019-01-02')
## End(Not run)

get_regional_ci: Get Carbon Intensity data between specified datetimes for a specified region

Description
Get Carbon Intensity data between specified datetimes for a specified region
Usage
get_regional_ci(region_id, start = NULL, end = NULL)
Arguments
region_id numeric Region ID in the UK region. See the list of Region IDs in regions_lookup
start character A start date of the intensity data
end character An end date of the intensity data
Value
a tibble
Examples
## Not run:
get_regional_ci(13)
get_regional_ci(13, start = '2019-01-02', end = '2019-01-03')
## End(Not run)

get_stats: Get Carbon Intensity statistics between from and to dates

Description
Get Carbon Intensity statistics between from and to dates
Usage
get_stats(start, end, block = NULL)
Arguments
start character A start date of the stats data. The maximum date range is limited to 30 days.
end character An end date of the stats data. The maximum date range is limited to 30 days.
block numeric Block length in hours, i.e. a block length of 2 hrs over a 24 hr period returns 12 items with the average, max, and min for each 2 hr block
Value
tibble
Examples
## Not run:
start <- "2019-04-01"
end <- "2019-05-01"
get_stats(start, end)
get_stats(start, end, block = 2)
## End(Not run)

regions_lookup: regions_lookup

Description
A lookup table of region_ids and corresponding GB regions
Usage
regions_lookup
Format
A data frame with 17 rows and 2 variables:
Region ID: region ID to be used in the intensegRid package
Shortname: corresponding GB region
Source
https://carbon-intensity.github.io/api-definitions/#region-list
javascript_ruanyifeng_com_nodejs_express_html
free_programming_book
JavaScript
Date: 2015-05-23 Categories: Tags:

Express is currently the most popular web development framework built on Node.js; it lets you stand up a fully functional website quickly.

Getting started with Express is very simple. First create a project directory, say hello-world.

`$ mkdir hello-world`

Enter that directory and create a package.json file with the following content.

```
{
  "name": "hello-world",
  "description": "hello world test app",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "express": "4.x"
  }
}
```

The code above defines the project's name, description, version, and so on, and requires Express 4.0 or later.

Then install the dependencies.

`$ npm install`

After running the command above, create a startup file in the project root, say index.js.

```
var express = require('express');
var app = express();

app.use(express.static(__dirname + '/public'));

app.listen(8080);
```

Then run the startup script.

`$ node index`

You can now visit `http://localhost:8080`, which serves the public subdirectory of the current directory in the browser (strictly speaking, it serves public's index.html file). If the public directory contains an image file `my_image.png`, it can be reached at `http://localhost:8080/my_image.png`.

You can also generate dynamic pages in index.js.

```
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello world!');
});

app.listen(3000);
```

Then run the startup script from the command line, and visit the project site in a browser.

`$ node index`

The code above starts a website on port 3000 of the local machine; the page displays Hello World.

The `app.get` method in the startup script index.js maps an access path to a callback function; this is called "routing". The code above only defines a callback for the root path, so there is just one route. A real application usually has several.

```
app.get('/', function (req, res) {
  res.send('Hello world!');
});

app.get('/customer', function(req, res){
  res.send('customer page');
});

app.get('/admin', function(req, res){
  res.send('admin page');
});
```

At that point it is best to move the routes into a separate file, for example in a new routes subdirectory.

```
// routes/index.js

module.exports = function (app) {
  app.get('/', function (req, res) {
    res.send('Hello world');
  });
  app.get('/customer', function(req, res){
    res.send('customer page');
  });
  app.get('/admin', function(req, res){
    res.send('admin page');
  });
};
```

The original index.js then becomes the following.

```
// index.js

var express = require('express');
var app = express();
var routes = require('./routes')(app);

app.listen(3000);
```

## How it works

### The underlying http module

The Express framework is built on Node.js's built-in http module. The raw http-module code for a server looks like this.

```
var http = require("http");

var app = http.createServer(function(request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello world!");
});

app.listen(3000, "localhost");
```

The key part of the code above is the http module's createServer method, which creates an HTTP server instance. It accepts a callback whose parameters are the request object, representing the HTTP request, and the response object, representing the HTTP response.

The core of Express is a re-wrapping of the http module. Rewritten with Express, the code above becomes the following.

```
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello world!');
});

app.listen(3000);
```

Comparing the two pieces of code, you can see they are very close. Where the original creates an app instance with `http.createServer`, the new version uses the Express constructor to create an Express instance. Both take the same callback. Express effectively adds an intermediate layer on top of the http module.

### What is middleware

Simply put, middleware is a function that processes HTTP requests. Its defining feature is that one middleware finishes its work and then hands off to the next middleware. While running, an App instance calls a series of middleware functions.

Each middleware receives three arguments from the App instance: the request object (the HTTP request), the response object (the HTTP response), and the next callback (the next middleware). Each middleware can transform the HTTP request (the request object) and decide whether to call next, passing the request object on to the next middleware.

A middleware that does no work and merely passes the request along is just a function whose body calls its `next` argument. Here `next` is the next middleware; if `next` is called with an argument, that signals an error, the argument being the error text. Once an error has been signaled, no later middleware runs until an error-handling function is found.

### The use method

use is the method Express uses to register middleware; it returns the app instance itself, so calls can be chained. Below is an example of two middlewares invoked in succession.

```
app.use(function(request, response, next) {
  console.log("In comes a " + request.method + " to " + request.url);
  next();
});

app.use(function(request, response) {
  response.writeHead(200, { "Content-Type": "text/plain" });
  response.end("Hello world!\n");
});
```

The code above registers two middlewares with `app.use`. When an HTTP request arrives, the first middleware is called, prints a line to the console, and then passes control to the second middleware via `next`, which writes the HTTP response. Because the second middleware does not call `next`, the request object is not passed any further.
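The error path described above, calling `next` with an argument, can be sketched briefly; the handler and the message are assumed for illustration, and in Express the four-argument signature is what marks a function as an error handler.

```
// A middleware that signals an error by calling next with an argument,
// and an error-handling function that catches it.
app.use(function(request, response, next) {
  next(new Error('something went wrong'));
});

app.use(function(err, request, response, next) {
  response.writeHead(500, { "Content-Type": "text/plain" });
  response.end(err.message + "\n");
});
```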
"Content-Type": "text/plain" }); } else { next(); } }); 上面代码通过 `request.url` 属性,判断请求的网址,从而返回不同的内容。注意, `app.use` 方法一共登记了三个中间件,只要请求路径匹配,就不会将执行权交给下一个中间件。因此,最后一个中间件会返回404错误,即前面的中间件都没匹配请求路径,找不到所要请求的资源。 除了在回调函数内部判断请求的网址,use方法也允许将请求网址写在第一个参数。这代表,只有请求路径匹配这个参数,后面的中间件才会生效。无疑,这样写更加清晰和方便。 ``` app.use('/path', someMiddleware); ``` 上面代码表示,只对根目录的请求,调用某个中间件。 因此,上面的代码可以写成下面的样子。 app.use("/home", function(request, response, next) { response.writeHead(200, { "Content-Type": "text/plain" }); response.end("Welcome to the homepage!\n"); }); app.use("/about", function(request, response, next) { response.writeHead(200, { "Content-Type": "text/plain" }); response.end("Welcome to the about page!\n"); }); ## Express的方法 ### all方法和HTTP动词方法 针对不同的请求,Express提供了use方法的一些别名。比如,上面代码也可以用别名的形式来写。 上面代码的all方法表示,所有请求都必须通过该中间件,参数中的“*”表示对所有路径有效。get方法则是只有GET动词的HTTP请求通过该中间件,它的第一个参数是请求的路径。由于get方法的回调函数没有调用next方法,所以只要有一个中间件被调用了,后面的中间件就不会再被调用了。 除了get方法以外,Express还提供post、put、delete方法,即HTTP动词都是Express的方法。 这些方法的第一个参数,都是请求的路径。除了绝对匹配以外,Express允许模式匹配。 上面代码将匹配“/hello/alice”网址,网址中的alice将被捕获,作为req.params.who属性的值。需要注意的是,捕获后需要对网址进行检查,过滤不安全字符,上面的写法只是为了演示,生产中不应这样直接使用用户提供的值。 如果在模式参数后面加上问号,表示该参数可选。 下面是一些更复杂的模式匹配的例子。 ### set方法 set方法用于指定变量的值。 上面代码使用set方法,为系统变量“views”和“view engine”指定值。 ### response对象 (1)response.redirect方法 response.redirect方法允许网址的重定向。 (2)response.sendFile方法 response.sendFile方法用于发送文件。 (3)response.render方法 response.render方法用于渲染网页模板。 上面代码使用render方法,将message变量传入index模板,渲染成HTML网页。 ### requst对象 (1)request.ip request.ip属性用于获得HTTP请求的IP地址。 (2)request.files request.files用于获取上传的文件。 ### 搭建HTTPs服务器 使用Express搭建HTTPs加密服务器,也很简单。 ``` var fs = require('fs'); var options = { key: fs.readFileSync('E:/ssl/myserver.key'), cert: fs.readFileSync('E:/ssl/myserver.crt'), passphrase: '1234' }; var https = require('https'); var express = require('express'); var app = express(); app.get('/', function(req, res){ res.send('Hello World Expressjs'); }); var server = https.createServer(options, app); server.listen(8084); console.log('Server is running on port 8084'); ``` ## 项目开发实例 ### 编写启动脚本 上一节使用express命令自动建立项目,也可以不使用这个命令,手动新建所有文件。 先建立一个项目目录(假定这个目录叫做demo)。进入该目录,新建一个package.json文件,写入项目的配置信息。 在项目目录中,新建文件app.js。项目的代码就放在这个文件里面。 上面代码首先加载express模块,赋给变量express。然后,生成express实例,赋给变量app。 接着,设定express实例的参数。 上面代码中的set方法用于设定内部变量,use方法用于调用express的中间件。 最后,调用实例方法listen,让其监听事先设定的端口(3000)。 这时,运行下面的命令,就可以在浏览器访问http://127.0.0.1:3000。 网页提示“Cannot GET /”,表示没有为网站的根路径指定可以显示的内容。所以,下一步就是配置路由。 ### 配置路由 所谓“路由”,就是指为不同的访问路径,指定不同的处理方法。 (1)指定根路径 在app.js之中,先指定根路径的处理方法。 上面代码的get方法,表示处理客户端发出的GET请求。相应的,还有app.post、app.put、app.del(delete是JavaScript保留字,所以改叫del)方法。 get方法的第一个参数是访问路径,正斜杠(/)就代表根路径;第二个参数是回调函数,它的req参数表示客户端发来的HTTP请求,res参数代表发向客户端的HTTP回应,这两个参数都是对象。在回调函数内部,使用HTTP回应的send方法,表示向浏览器发送一个字符串。然后,运行下面的命令。 此时,在浏览器中访问http://127.0.0.1:3000,网页就会显示“Hello World”。 如果需要指定HTTP头信息,回调函数就必须换一种写法,要使用setHeader方法与end方法。 (2)指定特定路径 上面是处理根目录的情况,下面再举一个例子。假定用户访问/api路径,希望返回一个JSON字符串。这时,get可以这样写。 上面代码表示,除了发送字符串,send方法还可以直接发送对象。重新启动node以后,再访问路径/api,浏览器就会显示一个JSON对象。 我们也可以把app.get的回调函数,封装成模块。先在routes目录下面建立一个api.js文件。 然后,在app.js中加载这个模块。 现在访问时,就会显示与上一次同样的结果。 如果只向浏览器发送简单的文本信息,上面的方法已经够用;但是如果要向浏览器发送复杂的内容,还是应该使用网页模板。 ### 静态网页模板 在项目目录之中,建立一个子目录views,用于存放网页模板。 假定这个项目有三个路径:根路径(/)、自我介绍(/about)和文章(/article)。那么,app.js可以这样写: 上面代码表示,三个路径分别对应views目录中的三个模板:index.html、about.html和article.html。另外,向服务器发送信息的方法,从send变成了sendfile,后者专门用于发送文件。 假定index.html的内容如下: 上面代码是一个静态网页。如果想要展示动态内容,就必须使用动态网页模板。 ## 动态网页模板 网站真正的魅力在于动态网页,下面我们来看看,如何制作一个动态网页的网站。 ### 安装模板引擎 Express支持多种模板引擎,这里采用Handlebars模板引擎的服务器端版本hbs模板引擎。 先安装hbs。 
The command above installs the hbs module into the node_modules subdirectory of the project directory. The save-dev flag records the dependency in the package.json file.

Once the template engine is installed, app.js must be rewritten to use the render method to render page templates. render's argument is the template's file name, looked up by default in the views subdirectory; the extension was set to html earlier, so it can be omitted. res.render('index') therefore means: hand the index.html file under the views subdirectory to the hbs template engine for rendering.

### Creating the data script

Rendering means substituting data into a template. In practice the data lives in a database; to keep things simple, assume here it lives in a script file.

In the project directory, create a file blog.js to hold the data. blog.js follows the CommonJS spec, so it can be loaded with require.

### Creating the page templates

Next, create the template file index.html, the template about.html, and the template article.html. All three templates contain only the page body: since the page layout is shared, the layout can live in its own separate file, layout.html.

### Rendering the templates

Finally, rewrite app.js: the render method now takes a second argument, the data to bind to the template's variables.

Restart the node server and visit http://127.0.0.1:3000. You can see that the templates have been rendered with the loaded data.

### Specifying the static-file directory

Template files live in the views subdirectory by default. To load static files in a page (stylesheets, images, and so on), you need to designate a separate directory for static files. In app.js, this static directory is set to public: whenever the browser requests a non-HTML file, the server looks for it under the public directory. For example, when the browser requests a stylesheet such as bootstrap.css, the server looks for the file in the public/bootstrap/css/ directory.

## Express.Router

Since Express 4.0, routing has been a separate component, `Express.Router`. It behaves like a miniature express application, with its own use, get, param, and route methods.

First, `Express.Router` is a constructor; calling it returns a router instance. Then the instance's HTTP verb methods assign callbacks to access paths; finally, the router is mounted at some path.

```
var router = express.Router();

router.get('/', function(req, res) {
  res.send('Home');
});

router.get('/about', function(req, res) {
  res.send('About');
});

app.use('/', router);
```

The code above first defines two access paths and then mounts them at the root. If the last line were changed to app.use('/app', router), the callbacks would instead serve the paths `/app` and `/app/about`.

This freedom to mount routers brings greater flexibility: you can define several router instances, and you can mount the same router instance at several paths.

### The router.route method

The router instance's route method accepts an access path as its argument.

### Router middleware

The use method attaches middleware to a router object, that is, processing applied to the data before it is finally sent to the user. In such middleware, the callback's next parameter accepts the next middleware in line, and calling next() inside the body passes the data on to it.

Note that the order in which middleware is registered matters: it equals the execution order. Also, middleware must be placed before the HTTP verb methods, or it will not run.

### Handling path parameters

The router object's param method handles path parameters; it can, for example, validate them before the route's own callback runs, as the sketch below shows.
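A minimal sketch, with the parameter name `name` and the handling step assumed for illustration:

```
// Process the :name path parameter before the route callback runs.
router.param('name', function(req, res, next, name) {
  // Validate or sanitize `name` here, then pass control on.
  req.name = name;
  next();
});

router.get('/hello/:name', function(req, res) {
  res.send('hello ' + req.name + '!');
});
```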
In the code above, the get method declares the name parameter in the access path, and the param method then processes that parameter. Note that the param method must be placed before the HTTP verb methods.

### app.route

Assuming app is an Express instance object, Express 4.0 gives it a route property. app.route is effectively a shorthand for express.Router(), except that it mounts directly at the root path. It lets you assign the get and post callbacks for the same path in chained form, which is clearly very concise and readable.

## Uploading files

First, put an upload form into the page.

```
<form action="/pictures/upload" method="POST" enctype="multipart/form-data">
  Select an image to upload:
  <input type="file" name="image">
  <input type="submit" value="Upload Image">
</form>
```

Then the server script sets up a route pointing at the `/upload` directory. Here you can install the multer module, which provides many file-upload features.

```
var express = require('express');
var router = express.Router();
var multer = require('multer');

var uploading = multer({
  dest: __dirname + '/../public/uploads/',
  // Limits: at most 1 file per upload, no larger than 1MB
  limits: {fileSize: 1000000, files:1},
})

router.post('/upload', uploading, function(req, res) {

})
```

The code above uploads files to a local directory. Below is an example of uploading to Amazon S3.

First, add a CORS configuration file on S3.

```
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```

The configuration above allows any computer to send HTTP requests to your bucket.

Then install aws-sdk.

```
$ npm install aws-sdk --save
```

Below is the server script.

```
var express = require('express');
var router = express.Router();
var aws = require('aws-sdk');

router.get('/', function(req, res) {
  res.render('index')
})

var AWS_ACCESS_KEY = 'your_AWS_access_key'
var AWS_SECRET_KEY = 'your_AWS_secret_key'
var S3_BUCKET = 'images_upload'

router.get('/sign', function(req, res) {
  aws.config.update({accessKeyId: AWS_ACCESS_KEY, secretAccessKey: AWS_SECRET_KEY});

  var s3 = new aws.S3()
  var options = {
    Bucket: S3_BUCKET,
    Key: req.query.file_name,
    Expires: 60,
    ContentType: req.query.file_type,
    ACL: 'public-read'
  }

  s3.getSignedUrl('putObject', options, function(err, data){
    if(err) return res.send('Error with S3')

    res.json({
      signed_request: data,
      url: 'https://s3.amazonaws.com/' + S3_BUCKET + '/' + req.query.file_name
    })
  })
})
```

In the code above, a user who visits the `/sign` path and is properly logged in receives a JSON object containing the data returned by S3 and a temporary URL for receiving the uploaded file, valid for only 60 seconds.

The browser-side code is as follows.

```
// The HTML is:
// <br>Please select an image
// <input type="file" id="image">
// <br>
// <img id="preview">

document.getElementById("image").onchange = function() {
  var file = document.getElementById("image").files[0]
  if (!file) return

  sign_request(file, function(response) {
    upload(file, response.signed_request, response.url, function() {
      document.getElementById("preview").src = response.url
    })
  })
}

function sign_request(file, done) {
  var xhr = new XMLHttpRequest()
  xhr.open("GET", "/sign?file_name=" + file.name + "&file_type=" + file.type)

  xhr.onreadystatechange = function() {
    if(xhr.readyState === 4 && xhr.status === 200) {
      var response = JSON.parse(xhr.responseText)
      done(response)
    }
  }
  xhr.send()
}

function upload(file, signed_request, url, done) {
  var xhr = new XMLHttpRequest()
  xhr.open("PUT", signed_request)
  xhr.setRequestHeader('x-amz-acl', 'public-read')
  xhr.onload = function() {
    if (xhr.status === 200) {
      done()
    }
  }
  xhr.send(file)
}
```

The code above first listens for the file control's change event; as soon as it fires, it asks the server for a temporary upload URL, then uploads the file to that URL.

* <NAME>, Introduction to Express
* <NAME>, Getting Started With Node.js, Express, MongoDB
* <NAME>, A short guide to Connect Middleware
* <NAME>, Understanding Express.js
* <NAME>, Learn to Use the New Router in ExpressJS 4.0
* <NAME>, Limitless file uploading to Amazon S3 with Node & Express

The assert module is a built-in Node module, used mainly for assertions: if an expression does not match expectations, an error is thrown. The module provides 11 methods, but only a few of them are commonly used.

## assert()

The assert method takes two arguments. When the first argument is truthy, there is no output and undefined is returned. When it is falsy, an error is thrown whose message is the string given as the second argument.

```
// Signature
assert(value, message)

var expected = add(1,2);
assert( expected === 3, 'expected 1 plus 2 to equal 3');
```

The code above produces no output, because assert's first argument is true.

```
assert( expected === 4, 'expected 1 plus 2 to equal 3')
// AssertionError: expected 1 plus 2 to equal 3
```

The code above throws an error, because assert's first argument is false.

## assert.ok()

ok is another name for the assert method and behaves exactly the same.

## assert.equal()

The equal method takes three arguments: the actual value, the expected value, and the error message.

```
// Signature
assert.equal(actual, expected, [message])

assert.equal(true, value, message);
// equivalent to
assert(value, message);

// The following three lines have the same effect
assert(expected == 3, 'expected 1+2 to equal 3');
assert.ok(expected == 3, 'expected 1+2 to equal 3');
assert.equal(expected, 3, 'expected 1+2 to equal 3');
```

Internally, equal compares with the equality operator (==), not the strict equality operator (===).

## assert.notEqual()

notEqual is used like equal, but it throws only when the actual value equals the expected value.

```
// Signature
assert.notEqual(actual, expected, [message])

// Usage
var assert = require('assert');

// The following three lines have the same effect
assert(expected != 4, 'expected not to equal 4');
assert.ok(expected != 4, 'expected not to equal 4');
assert.notEqual(expected, 4, 'expected not to equal 4');
```

Internally, notEqual compares with the inequality operator (!=), not the strict inequality operator (!==).

## assert.deepEqual()

The deepEqual method compares two objects. As long as their properties correspond one to one and the values are equal, the two objects count as equal; otherwise an error is thrown.

```
assert.deepEqual(list1, list2, 'expected the two arrays to have the same values');

var person1 = { "name":"john", "age":"21" };
var person2 = { "name":"john", "age":"21" };

assert.deepEqual(person1, person2, 'expected the two objects to have the same properties');
```

## assert.notDeepEqual()

notDeepEqual is the exact opposite of deepEqual: it asserts that two objects are not equal.

```
// Signature
assert.notDeepEqual(actual, expected, [message])

assert.notDeepEqual(list1, list2, 'expected the two objects not to be equal');

var person1 = { "name":"john", "age":"21" };
var person2 = { "name":"jane", "age":"19" };

// deepEqual checks the elements in the objects are identical
assert.notDeepEqual(person1, person2, 'expected the two objects not to be equal');
```

## assert.strictEqual()

strictEqual compares two expressions using the strict equality operator (===).

```
// Signature
assert.strictEqual(actual, expected, [message])

assert.strictEqual(1, '1', 'expected strict equality');
// AssertionError: expected strict equality
```

## assert.notStrictEqual()

assert.notStrictEqual compares two expressions using the strict inequality operator (!==).

```
assert.notStrictEqual(1, true, 'expected strict inequality');
```

## assert.throws()

The throws method expects a code block to throw an error, and the thrown error to match a given condition.

```
// Signature
assert.throws(block, [error], [message])

// Example 1: the thrown error matches a constructor
assert.throws(
  function() {
    throw new Error("Wrong value");
  },
  Error,
  'unexpected error type'
);

// Example 2: the thrown error's message matches a regular expression
assert.throws(
  function() {
    throw new Error("Wrong value");
  },
  /value/,
  'unexpected error type'
);

// Example 3: the thrown error passes a custom validation function
assert.throws(
  function() {
    throw new Error("Wrong value");
  },
  function(err) {
    if ( (err instanceof Error) && /value/.test(err) ) {
      return true;
    }
  },
  'unexpected error type'
);
```

## assert.doesNotThrow()

doesNotThrow is the exact opposite of throws: it expects a code block not to throw.

```
// Signature
assert.doesNotThrow(block, [message])

// Usage
assert.doesNotThrow(
  function() {
    console.log("Nothing to see here");
  },
  'expected no error to be thrown'
);
```

## assert.ifError()

The ifError method asserts that an expression is falsy; if the expression is truthy, it throws an error. This is very useful for checking the first argument of a callback: if that argument is truthy, an error occurred.

```
// Signature
assert.ifError(value)

// Usage
function sayHello(name, callback) {
  var error = false;
  var str = "Hello "+name;

  callback(error, str);
}

// use the function
sayHello('World', function(err, value){
  assert.ifError(err);
  // ...
})
```

## assert.fail()

The fail method throws an error.

```
// Signature
assert.fail(actual, expected, message, operator)

assert.fail(21, 42, 'Test Failed', '###')
// AssertionError: Test Failed
assert.fail(21, 21, 'Test Failed', '###')
// AssertionError: Test Failed
assert.fail(21, 42, undefined, '###')
// AssertionError: 21 ### 42
```

The method takes four arguments, but whatever their values, it always throws. If the message argument is truthy, the error message is message; otherwise it is "actual value + separator + expected value".

Date: 2014-05-04 Categories: Tags:

Node is a server-side runtime environment for the JavaScript language.

"Runtime environment" means two things here. First, JavaScript runs on the server through Node, so in that sense Node is something like a JavaScript virtual machine. Second, Node provides a large set of utility libraries that let JavaScript interact with the operating system (reading and writing files, spawning child processes, and so on), so in that sense Node is also a JavaScript toolkit.

Internally, Node uses Google's V8 engine as its JavaScript interpreter and its own libuv library to access operating-system resources.

### Installing and updating

Visit the official site nodejs.org or github.com/nodesource/distributions for the latest Node version and installation instructions.

The official site provides precompiled binary packages, which can be unpacked into the `/usr/local` directory.

```
$ tar -xf node-someversion.tgz
```

Then create symbolic links to add them to the paths in the $PATH variable.

```
$ ln -s /usr/local/node/bin/node /usr/local/bin/node
$ ln -s /usr/local/node/bin/npm /usr/local/bin/npm
```

Here is how to install the Deb packages on Ubuntu and Debian.

```
$ curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
$ sudo apt-get install -y nodejs
$ apt-get install nodejs
```

After installation, run the following to check that everything works.

```
$ node --version
# or
$ node -v
```

Updating the node.js version can be done through node.js's `n` module.

```
$ sudo npm install n -g
$ sudo n stable
```

The code above uses the `n` module to update node.js to the latest released stable version. The `n` module can also install a specific node version.

`$ sudo n 0.10.21`

### The nvm version manager

To install several versions of node.js side by side on the same machine, you need the version manager nvm.

```
$ git clone https://github.com/creationix/nvm.git ~/.nvm
$ source ~/.nvm/nvm.sh
```

After installation, nvm's script must be activated before each use; it is best to add the source line to your ~/.bashrc file (assuming Bash). Once activated, you can install specific Node versions.

```
# Install the latest version
$ nvm install node

# Install a specific version
$ nvm install 0.12.1

# Use the latest installed version
$ nvm use node

# Use a specific node version
$ nvm use 0.12
```

nvm can also enter the REPL environment of a given version.

`$ nvm run 0.12`

If you create an .nvmrc file in the project root with the version number written inside, plain `nvm use` is enough, with no version number attached.

Other frequently used commands:

```
# List all locally installed versions
$ nvm ls

# List all versions available on the server
$ nvm ls-remote

# Exit an activated nvm with the deactivate command
$ nvm deactivate
```

Once installation is complete, running a node.js program means reading a JavaScript file with the node command. A demo.js script in the current directory runs like this:

```
$ node demo
# or
$ node demo.js
```

With the `-e` flag you can execute a code string.

```
$ node -e 'console.log("Hello World")'
Hello World
```

### The REPL environment

Typing the node command with no file name after it starts a Node.js REPL (Read-eval-print loop), where you can run all kinds of JavaScript commands directly.

```
$ node
> 1+1
2
>
```
With the --use_strict flag, the REPL runs in strict mode.

`$ node --use_strict`

The REPL is the shell through which Node.js interacts with the user; the usual shell features all work in it, for example using the up and down arrow keys to walk through previously used commands.

The special underscore variable (_) holds the result of the previous command.

```
> 1 + 1
2
> _ + 1
3
```

In the REPL, evaluating an expression prints its result directly on the command line. Running a statement produces no output, because a statement is not an expression and has no return value.

### Asynchronous operations

Node runs JavaScript on the V8 engine, whose defining trait is single-threaded execution: only one task runs at a time. This is why Node makes heavy use of asynchronous operations: a task is not executed immediately but appended to the end of the task queue, to run once the tasks before it have finished.

Because of this trait, the follow-up work of an operation is usually expressed as a callback function.

Node's convention is that if a function takes a callback, the callback is the last argument; and the callback's own first argument is, by convention, the error object passed in from the previous step. This is because callbacks serve asynchronous operations: by the time the callback runs, the earlier operation ended long ago and its execution stack no longer exists, so the traditional try...catch error-catching mechanism does not work for asynchronous operations, and the only option is to hand the error to the callback.

```
try {
  db.User.get(userId, function(err, user) {
    if(err) {
      throw err
    }
    // ...
  })
} catch(e) {
  console.log('Oh no!');
}
```

In the code above, db.User.get is an asynchronous operation; by the time it throws an error, the try...catch block it sits in may long since have finished running, so the error cannot be caught. That is why Node uniformly stipulates that once an asynchronous operation fails, the error object is passed to the callback.

If no error occurred, the callback's first argument is null. This style has a big advantage: just checking the callback's first argument tells you whether something went wrong; if it is not null, an error definitely occurred. It also lets errors be passed along layer by layer.

```
if(err) {
  // Let the No Permission error through; pass every other error to the next callback
  if(!err.noPermission) {
    return next(err);
  }
}
```

### Global objects and global variables

Node provides the following global objects, which every module can call:

* global: the global environment Node runs in, similar to the browser's window object. Note that in a browser, declaring a global variable actually declares a property of the global object, so `var x = 1` is equivalent to setting `window.x = 1`; Node is not like that, at least not inside modules (the REPL behaves like the browser). Declaring `var x = 1` in a module file does not make the variable a property of `global`; `global.x` is undefined. That is because a module's global variables are private to the module and unreachable from other modules.
* process: the process Node is currently running in, which the developer can interact with.
* console: points to Node's built-in console module, which provides standard input and standard output in the command-line environment.

Node also provides some global functions:

* setTimeout(): runs a callback after the given number of milliseconds. The actual interval also depends on system factors. The interval must be between 1 ms and 2,147,483,647 ms (about 24.8 days); values outside that range are silently changed to 1 ms. The method returns an integer identifying the new timer.
* clearTimeout(): cancels a timer created by setTimeout.
* setInterval(): calls the callback every given number of milliseconds. Due to system factors there is no guarantee each call is exactly that far apart, but it will only be more than the interval, never less. The number of milliseconds must be an integer between 1 and 2,147,483,647 (about 24.8 days); values outside that range are changed to 1 ms. The method returns an integer identifying the new timer.
* clearInterval(): cancels a timer created by setInterval.
* require(): loads modules.
* Buffer(): manipulates binary data.

Node provides two global variables, both starting with two underscores:

* `__filename`: the file name of the currently running script.
* `__dirname`: the directory the currently running script lives in.

Beyond these, some objects are really module-internal local variables: what they point to differs per module, yet they exist in every module, so they can be regarded as pseudo-global variables, mainly module, module.exports, and exports.

## Modular structure

### Overview

Node.js has a modular structure, defining and using modules according to the CommonJS spec. Modules correspond one-to-one to files: loading a module means loading the corresponding module file.

The require command loads a given module; the script file's extension can be omitted when loading.

require's argument is the module file's name. There are two cases. In the first case the argument contains a file path; the path is then relative to the directory of the current script. In the second case the argument contains no path; Node then goes to the module installation directory to look for an already installed module.

Sometimes a module is itself a directory containing several files. Node then looks in package.json for the entry file named by the main property. If, say, a module's package.json declares lib/bar.js as its entry file, then loading the module with `require('bar')` actually loads the file `./node_modules/bar/lib/bar.js`, and the following line has the same effect.

```
var bar = require('bar/lib/bar.js')
```

If the module directory has no package.json file, node.js tries to load an index.js or index.node file from the module directory.

Once loaded, a module is cached by the system. Loading the same module a second time returns the cached version, which means a module really only executes once. If you want a module to execute several times, have the module return a function and call that function several times.

### Core modules

If Node only ran JavaScript code on a server it would not be very useful, since plenty of server scripting languages already exist. Node.js's usefulness lies in the series of functional modules it ships for interacting with the operating system. These core modules can be used without installing anything; here is the list:

* http: provides HTTP server functionality.
* url: parses URLs.
* fs: interacts with the file system.
* querystring: parses URL query strings.
* child_process: spawns child processes.
* util: provides a series of small utilities.
* path: handles file paths.
* crypto: provides encryption and decryption, essentially a wrapper over OpenSSL.

The source for these core modules is in Node's lib subdirectory. For speed, they are compiled to binary when Node is installed.

Core modules always load with the highest priority: if you write your own HTTP module, `require('http')` still loads the core module.

### Custom modules

Node modules follow the CommonJS spec; anything that matches the spec can be a custom module. The simplest possible module is a file, say foo.js, that exports a single method to the outside world through the module.exports variable.

Using it looks like this: the require command loads the module file foo.js (extension omitted) and assigns the module's external interface to a variable m, which is then called. Running index.js on the command line would then print "This is a custom module" to the screen.

The module variable is the top-level variable of the whole module file, and its exports property is the interface the module exposes. If the module exports a function directly (like foo.js above), calling the module means calling that function. But a module can also export an object. Below, foo.js is rewritten accordingly.
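A plausible shape for that rewrite, matching the description that follows; the message string is carried over from the first version of foo.js:

```
// foo.js: export an object whose print property points to a function
var out = {};

out.print = function() {
  console.log('This is a custom module');
};

module.exports = out;
```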
The code above has the module export an out object whose print property points to a function. When using this module, the actual method is defined on the module's print property, so print must be invoked explicitly.

## Exception handling

Node is a single-threaded runtime environment: once a thrown exception goes uncaught, the whole process crashes. Node's exception handling therefore matters greatly for keeping a system running stably.

Generally, Node has three ways to propagate an error:

* use a throw statement to throw an error object, i.e. raise an exception;
* pass the error object to a callback function, which is responsible for reporting the error;
* emit an error event through the EventEmitter interface.

### The try...catch structure

The most common way to catch exceptions is the try...catch structure. But this structure cannot catch exceptions thrown by asynchronously running code.

```
try {
  process.nextTick(function () {
    throw new Error("error");
  });
} catch (err) {
  // cannot catch it
  console.log(err);
}

try {
  setTimeout(function () {
    throw new Error("error");
  }, 1);
} catch (err) {
  // cannot catch it
  console.log(err);
}
```

The code above uses the process.nextTick and setTimeout methods to throw two exceptions on the next round of the event loop, standing in for errors thrown by asynchronous operations. Neither can be caught by the catch blocks, because the part of the code holding the catch blocks has already finished running.

One solution is to put the error-catching code into the asynchronous execution as well.

```
function async(cb, err) {
  setTimeout(function() {
    try {
      if (true)
        throw new Error("woops!");
      else
        cb("done");
    } catch(e) {
      err(e);
    }
  }, 2000)
}

async(function(res) {
  console.log("received:", res);
}, function(err) {
  console.log("Error: async threw an exception:", err);
});
// Error: async threw an exception: Error: woops!
```

In the code above, the error the async function throws asynchronously can be caught by a catch block that is likewise deployed asynchronously.

Neither approach is ideal. Generally, Node uses try/catch statements in only a few situations, such as parsing JSON text with `JSON.parse`.

### Callback functions

Node's approach is to pass the error object into the callback as its first argument. This avoids the problem of the catching code and the failing code not being in the same time slice.

```
fs.readFile('/foo.txt', function(err, data) {
  if (err !== null) throw err;
  console.log(data);
});
```

The code above reads the file `foo.txt`, an asynchronous operation whose callback has two arguments: the error object and the file data that was read. If the first argument is not null, an error occurred and the code after it no longer runs.

Here is a complete example.

```
function async2(continuation) {
  setTimeout(function() {
    try {
      var res = 42;
      if (true)
        throw new Error("woops!");
      else
        continuation(null, res); // pass 'null' for error
    } catch(e) {
      continuation(e, null);
    }
  }, 2000);
}

async2(function(err, res) {
  if (err)
    console.log("Error: (cps) failed:", err);
  else
    console.log("(cps) received:", res);
});
// Error: (cps) failed: woops!
```

In the code above, the first argument of async2's callback is an error object, precisely in order to handle errors thrown by the asynchronous operation.

### The EventEmitter error event

When an error occurs, you can also use the EventEmitter interface to emit an error event.

```
var EventEmitter = require('events').EventEmitter;
var emitter = new EventEmitter();

emitter.emit('error', new Error('something bad happened'));
```

Code like the above must be used carefully, because if no listener function is deployed for the error event, the whole application crashes. So in general you must also always deploy the following code.

```
emitter.on('error', function(err) {
  console.error('Error: ' + err.message);
});
```

When an exception goes uncaught, the uncaughtException event fires; you can register a callback for this event and thereby catch the exception.

```
var logger = require('tracer').console();

process.on('uncaughtException', function(err) {
  console.error('Error caught in uncaughtException event:', err);
});
```

As long as uncaughtException has a callback configured, the Node process does not exit abnormally, but the context where the exception occurred is already lost, so no detailed information about it can be given. Moreover, the exception may leave Node unable to reclaim memory normally, causing a memory leak. So once uncaughtException fires, it is best to log the error and then end the Node process.

```
process.on('uncaughtException', function(err) {
  logger.log(err);
  process.exit(1);
});
```

### The unhandledRejection event

iojs has an unhandledRejection event that watches for Promise objects that reach the rejected state without being handled.

```
var promise = new Promise(function(resolve, reject) {
  reject(new Error("Broken."));
});

promise.then(function(result) {
  console.log(result);
})
```

In the code above, the promise's state becomes rejected and an error is thrown. Yet nothing at all happens, because no handler was set up. Simply listening for the unhandledRejection event solves the problem.

```
process.on('unhandledRejection', function (err, p) {
  console.error(err.stack);
})
```

Note that the unhandledRejection listener takes two arguments: the first is the error object, the second is the promise object that produced the error. The latter can provide a lot of useful information.

```
http.createServer(function (req, res) {
  var promise = new Promise(function(resolve, reject) {
    reject(new Error("Broken."))
  })

  promise.info = {url: req.url}
}).listen(8080)

process.on('unhandledRejection', function (err, p) {
  if (p.info && p.info.url) {
    console.log('Error in URL', p.info.url)
  }
  console.error(err.stack)
})
```

The code above prints, when an error occurs, the URL the user requested.

```
Error in URL /testurl
Error: Broken.
  at /Users/mikeal/tmp/test.js:9:14
  at Server.<anonymous> (/Users/mikeal/tmp/test.js:4:17)
  at emitTwo (events.js:87:13)
  at Server.emit (events.js:169:7)
  at HTTPParser.parserOnIncoming [as onIncoming] (_http_server.js:471:12)
  at HTTPParser.parserOnHeadersComplete (_http_common.js:88:23)
  at Socket.socketOnData (_http_server.js:322:22)
  at emitOne (events.js:77:13)
  at Socket.emit (events.js:166:7)
  at readableAddChunk (_stream_readable.js:145:16)
```
## Command-line scripts

A node script can be used as a command-line script.

`$ node foo.js`

The command above executes the foo.js script file.

If the first line of foo.js states the location of the interpreter, the file can be invoked directly as a command-line tool.

`#!/usr/bin/env node`

Before invoking it, change the file's execute permission.

```
$ chmod u+x foo.js
$ ./foo.js arg1 arg2 ...
```

In a command-line script, `console.log` writes content to standard output, `process.stdin` reads standard input, and `child_process.exec()` executes a shell command; the sketch below ties the three together.
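A minimal, hypothetical example; the file name and the commands are illustrative only:

```
#!/usr/bin/env node
// echo.js: read standard input, run a shell command, write to standard output
var exec = require('child_process').exec;

// Echo each line typed by the user back to standard output.
process.stdin.setEncoding('utf8');
process.stdin.on('data', function (chunk) {
  console.log('INPUT: ' + chunk.trim());
});

// Run a shell command and print its output.
exec('date', function (err, stdout) {
  if (err) throw err;
  console.log('started at: ' + stdout.trim());
});
```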
* <NAME>, Package Managers: An Introductory Guide For The Uninitiated Front-End Developer
* Stack Overflow, What is Node.js?
* <NAME>, Using Node's Event Module
* <NAME>, task automation with npm run
* <NAME>, Working on related Node.js modules locally
* <NAME>, Export This: Interface Design Patterns for Node.js Modules
* Node.js Manual & Documentation, Modules
* <NAME>, Creating and publishing a node.js module
* <NAME>, "npm install --save" No Longer Using Tildes
* Satans17, Node稳定性的研究心得 (notes on Node stability)
* <NAME>, Write your shell scripts in JavaScript, via Node.js

The `Buffer` object is Node's interface for handling binary data. It is a native global object provided by Node, usable directly with no need for `require('buffer')`.

JavaScript is fairly good at handling strings, but not at handling binary data (a TCP data stream, say). The `Buffer` object was designed to solve exactly this problem. It is a constructor whose instances represent a stretch of memory allocated by the V8 engine; an instance is an array-like object whose members are integers from 0 to 255, i.e. 8-bit bytes.

```
// Create a 256-byte Buffer instance
var bytes = new Buffer(256);

// Create a view on the buffer,
// from byte 240 to byte 256
var end = bytes.slice(240, 256);

end[0] // 240
end[0] = 0;
end[0] // 0
```

The code above shows how to create a `Buffer` instance and how to set and get its values.

Besides direct assignment, a `Buffer` instance can also be produced by copying.

```
var bytes = new Buffer(8);
var more = new Buffer(4);
bytes.copy(more, 0, 4, 8);
more[0] // 4
```

In the code above, the `copy` method copies the stretch from member 4 through member 7 of the `bytes` instance into the `more` instance, starting at member 0.

Converting between a `Buffer` object and a string requires specifying an encoding. Buffer objects currently support the following encodings:

* ascii
* utf8
* utf16le: little-endian UTF-16 encoding, supporting four-byte characters above U+10000.
* ucs2: an alias of utf16le.
* base64
* hex: converts each byte to two hexadecimal characters.

## Relationship to typed arrays

The `TypedArray` constructors accept a `Buffer` instance as argument and produce a typed array. For example, `new Uint32Array(new Buffer([1, 2, 3, 4]))` produces a typed array of four members (note: four members, not a single member like `[0x1020304]` or `[0x4030201]`). Also, the memory backing the typed array is then copied from, not shared with, the Buffer object; the typed array's `buffer` property keeps a pointer to the original Buffer object.

Typed-array operations are largely compatible with Buffer operations, with slight differences. For example, a typed array's `slice` method returns a copy of the underlying memory, while a Buffer object's `slice` method creates a view onto the original memory.

## The Buffer constructor

As a constructor, `Buffer` is invoked with the `new` command and accepts arguments of several forms.

```
// An integer argument specifies how many bytes of memory to allocate
var hello = new Buffer(5);

// An array argument: the members must be integer values
var hello = new Buffer([0x48, 0x65, 0x6c, 0x6c, 0x6f]);
hello.toString() // 'Hello'

// A string argument (utf8 encoding by default)
var hello = new Buffer('Hello');
hello.length // 5
hello.toString() // "Hello"

// A string argument with explicit encoding
var hello = new Buffer('Hello', 'utf8');

// Another Buffer instance: equivalent to copying it
var hello1 = new Buffer('Hello');
var hello2 = new Buffer(hello1);
```

Here is an example that reads the user's command-line input.

```
var fs = require('fs');
var buffer = new Buffer(1024);

var readSize = fs.readSync(fs.openSync('/dev/tty', 'r'), buffer, 0, buffer.length);
var chunk = buffer.toString('utf8', 0, readSize);

console.log('INPUT: ' + chunk);
```

Running the program gives the following.

```
# Type anything, then press Enter
foo
INPUT: foo
```

## Class methods

### Buffer.isEncoding()

The Buffer.isEncoding method returns a boolean indicating whether the given string names a supported encoding.

```
Buffer.isEncoding('utf8')
// true
```

### Buffer.isBuffer()

The Buffer.isBuffer method takes an object as argument and returns a boolean indicating whether the object is a Buffer instance.

```
Buffer.isBuffer(Date) // false
```

### Buffer.byteLength()

The Buffer.byteLength method returns the byte length a string actually occupies, with utf8 as the default encoding.

```
Buffer.byteLength('Hello', 'utf8')
// 5
```

### Buffer.concat()

The Buffer.concat method merges a group of Buffer objects into one Buffer object.

Note that if Buffer.concat's array argument has only one member, that member is returned directly. With several members, a new Buffer object merging them all is returned.

Buffer.concat also accepts a second argument specifying the total length of the merged Buffer object. When it is omitted, Node computes that value internally before merging; supplying it explicitly therefore improves running speed.

## Instance properties

### length

The length property returns the memory length the Buffer object occupies. Note this value is unrelated to the Buffer's content.

```
buf = new Buffer(1234);
buf.length // 1234

buf.write("some string", 0, "ascii");
buf.length // 1234
```

In the code above, no matter what is written, length always returns the Buffer object's memory size. To learn how many bytes a string occupies, pass it to Buffer.byteLength.

The length property is writable, but writing it causes undefined behavior and is not recommended. To change a Buffer's length, use the slice method to obtain a new Buffer object.

## Instance methods

### write()

The `write` method writes data into a given Buffer object. Its first argument is the content to write; the second (optional) is the starting position (0 by default); the third (optional) is the encoding, `utf8` by default.

```
var buf = new Buffer(5);
buf.write('He');
buf.write('l', 2);
buf.write('lo', 3);
console.log(buf.toString());
// "Hello"
```

### slice()

The slice method returns a Buffer instance carved out of the original object at the given positions. Its two arguments are the start and end positions of the cut.

```
var buf = new Buffer('just some data');
var chunk = buf.slice(5, 9);
chunk.toString()
// "some"
```

### toString()

The `toString` method converts a Buffer instance to a string using the given encoding (`utf8` by default).

```
var hello = new Buffer('Hello');
hello // <Buffer 48 65 6c 6c 6f>
hello.toString() // "Hello"
```

`toString` can also return only the content of a given region of memory: its second argument is the start position and its third the end position, both counted from 0.

```
var buf = new Buffer('just some data');
console.log(buf.toString('ascii', 5, 9));
// "some"
```

### toJSON()

The toJSON method converts a Buffer instance to a JSON object. If the JSON.stringify method is called on a Buffer instance, it calls toJSON first by default.

```
var buf = new Buffer('test');
var json = JSON.stringify(buf);
json // '[116,101,115,116]'

var copy = new Buffer(JSON.parse(json));
copy // <Buffer 74 65 73 74>
```

Date: 2014-08-19 Categories: Tags:

The child_process module spawns child processes. A child process's output is stored in a system buffer (200KB at most); once the child process finishes running, the main process reads the result with a callback function.

## exec()

The `exec` method executes a bash command; its argument is a command string.

```
var exec = require('child_process').exec;

var ls = exec('ls -l', function (error, stdout, stderr) {
  if (error) {
    console.log(error.stack);
    console.log('Error code: ' + error.code);
  }
  console.log('Child Process STDOUT: ' + stdout);
});
```

The `exec` call above spawns a child process, buffers its output, and invokes the callback once it has finished running.

`exec` accepts at most two arguments: the first is the shell command to execute; the second is a callback taking three arguments, the error that occurred, the standard output, and the standard error.

Since standard output and standard error are both stream objects whose data events can be listened to, the code above can also be written as follows.

```
var exec = require('child_process').exec;
var child = exec('ls -l');

child.stdout.on('data', function(data) {
  console.log('stdout: ' + data);
});
child.stderr.on('data', function(data) {
  console.log('stderr: ' + data);
});
child.on('close', function(code) {
  console.log('closing code: ' + code);
});
```

The code above also shows that the child process itself has a close event for which a callback can be set.

This version has another advantage: with data listeners, results are output in real time; otherwise output only appears once the child process ends. So if the child process runs for a long time, or runs continuously, the second style is better.

Here is another example; suppose a child.js file.

```
// child.js

var exec = require('child_process').exec;
exec('node -v', function(error, stdout, stderr) {
  console.log('stdout: ' + stdout);
  console.log('stderr: ' + stderr);
  if (error !== null) {
    console.log('exec error: ' + error);
  }
});
```

Running it, the file's output is as follows.

```
$ node child.js
stdout: v0.11.14

stderr:
```

exec calls bash directly (the `/bin/sh` program) to interpret the command, so if there are user-supplied arguments, exec is unsafe.

```
var path = ";user input";
child_process.exec('ls -l ' + path, function (err, data) {
  console.log(data);
});
```

The code above shows that under bash, `ls -l; user input` runs directly. If a user supplies malicious code, it poses a security risk. So where user input is involved, it is best not to use the `exec` method, and to use the `execFile` method instead.

## execSync()

`execSync` is the synchronous version of `exec`. It accepts two arguments: the first is the command to execute, the second configures the execution environment.

```
var execSync = require("child_process").execSync;

var SEPARATOR = process.platform === 'win32' ?
  ';' : ':';
var env = Object.assign({}, process.env);

env.PATH = path.resolve('./node_modules/.bin') + SEPARATOR + env.PATH;

function myExecSync(cmd) {
  var output = execSync(cmd, {
    cwd: process.cwd(),
    env: env
  });

  console.log(output);
}

myExecSync('eslint .');
```

In the code above, `execSync`'s second argument is an object whose `cwd` property sets the script's current directory and whose `env` property sets environment variables. The code puts the `./node_modules/.bin` directory into the `$PATH` variable, so the project's internal module commands can be invoked without a path; the `eslint` command here actually executes `./node_modules/.bin/eslint`.

## execFile()

The execFile method executes a specific program directly; the arguments are passed in as an array and are not interpreted by bash, so it offers higher security.

```
var path = ".";
child_process.execFile('/bin/ls', ['-l', path], function (err, result) {
  console.log(result)
});
```

In the code above, suppose `path` comes from user input: even if it contains a semicolon or backquotes, the ls program does not understand their meaning, so no harmful result can be obtained, and safety is improved.

## spawn()

The spawn method creates a child process to execute a specific command. Its usage resembles execFile, but it has no callback; results can only be obtained by listening to events. It executes asynchronously and suits child processes that run for a long time.

```
var path = '.';

var ls = child_process.spawn('/bin/ls', ['-l', path]);
ls.stdout.on('data', function (data) {
  console.log('stdout: ' + data);
});

ls.stderr.on('data', function (data) {
  console.log('stderr: ' + data);
});

ls.on('close', function (code) {
  console.log('child process exited with code ' + code);
});
```

spawn accepts two arguments: the first is the executable file, the second an array of arguments.

spawn returns an object representing the child process. The object implements the EventEmitter interface, and listening for the data events of its output streams yields the child process's results.

spawn is very similar to exec; only the calling format differs slightly.

```
child_process.exec(command, [options], callback)
child_process.spawn(command, [args], [options])
```

## fork()

The fork method directly creates a child process executing a Node script: `fork('./child.js')` is equivalent to `spawn('node', ['./child.js'])`. Unlike spawn, fork establishes a communication channel between the parent and child process for inter-process communication.

```
var n = child_process.fork('./child.js');
n.on('message', function(m) {
  console.log('PARENT got message:', m);
});
n.send({ hello: 'world' });
```

In the code above, fork returns an object representing the inter-process communication channel; you can listen to its message event to receive information from the child, and you can send information to the child.

The content of the child.js script is as follows.

```
process.on('message', function(m) {
  console.log('CHILD got message:', m);
});
process.send({ foo: 'bar' });
```

In the code above, the child process listens for message events and sends a message to the parent process.

## send()

After spawning a new process with child_process.fork(), you can use child.send(message, [sendHandle]) to send it messages. The new process receives them by listening for the message event.

Here is the parent process's code.

```
var cp = require('child_process');

var n = cp.fork(__dirname + '/sub.js');

n.on('message', function(m) {
  console.log('PARENT got message:', m);
});

n.send({ hello: 'world' });
```

And here is the child process, sub.js.

```
process.on('message', function(m) {
  console.log('CHILD got message:', m);
});

process.send({ foo: 'bar' });
```

* Lift Security Team, Avoiding Command Injection in Node.js: why execFile() is safer than exec()
* <NAME>, Node.js: managing child processes
* byvoid, Node.js中的child_process及进程通信: an introduction to the exec(), execFile(), fork(), and spawn() methods

By default, Node.js runs as a single process, able to use at most 512MB of memory on 32-bit systems and at most 1GB on 64-bit systems. For machines with multi-core CPUs this is very inefficient, because only one core is working while the others idle. The cluster module was introduced to solve this problem.

The cluster module lets you set up one master process and a number of worker processes; the master monitors and coordinates the workers, and workers exchange messages via inter-process communication. The cluster module has a built-in load balancer that uses the Round-robin algorithm to balance the load among the workers. At run time, all new connections are accepted by the master, which then hands each TCP connection to a designated worker.

```
var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster){
  for (var i = 0, n = os.cpus().length; i < n; i += 1){
    cluster.fork();
  }
} else {
  http.createServer(function(req, res) {
    res.writeHead(200);
    res.end("hello world\n");
  }).listen(8000);
}
```

The code above first checks whether the current process is the master (cluster.isMaster); if so, it forks one worker per CPU core. Otherwise the current process is a worker, and it starts a server program in that process.

The code above has one flaw: if a worker dies, the master has no way of knowing. To solve this, the master can deploy listeners for the online and exit events.

```
if(cluster.isMaster) {
  var numWorkers = require('os').cpus().length;
  console.log('Master cluster setting up ' + numWorkers + ' workers...');

  for(var i = 0; i < numWorkers; i++) {
    cluster.fork();
  }

  cluster.on('online', function(worker) {
    console.log('Worker ' + worker.process.pid + ' is online');
  });

  cluster.on('exit', function(worker, code, signal) {
    console.log('Worker ' + worker.process.pid + ' died with code: ' + code + ', and signal: ' + signal);
    console.log('Starting a new worker');
    cluster.fork();
  });
}
```

In the code above, as soon as the master sees a worker's exit event, it restarts a new worker. Once a worker has started successfully and can run normally, it emits the online event.

### The worker object

The worker object is the return value of `cluster.fork()` and represents a worker process. Its properties and methods are as follows.

(1) worker.id

worker.id returns the worker's unique process number. This number is also the index pointing to the current process in cluster.workers.

(2) worker.process

All workers are created with child_process.fork(); the object returned by child_process.fork() is stored in worker.process. Through this property you can obtain the process object the worker lives in.

(3) worker.send()

This method is used in the master process to send information to a child process.

```
if (cluster.isMaster) {
  var worker = cluster.fork();
  worker.send('hi there');
} else if (cluster.isWorker) {
  process.on('message', function(msg) {
    process.send(msg);
  });
}
```

The effect of the code above is that the worker echoes every message the master sends it.

Inside a worker, to send a message to the master use `process.send(message)`; to listen for messages from the master, use the following code.

```
process.on('message', function(message) {
  console.log(message);
});
```

Messages sent can be strings or JSON objects. Here is an example sending a JSON object.

```
worker.send({
  type: 'task 1',
  from: 'master',
  data: {
    // the data that you want to transfer
  }
});
```

### The cluster.workers object

Only the master has this object; it contains all the worker processes. Each member's value is a worker process object, and its key is that worker's worker.id property.

```
function eachWorker(callback) {
  for (var id in cluster.workers) {
    callback(cluster.workers[id]);
  }
}
eachWorker(function(worker) {
  worker.send('big announcement to all workers');
});
```

The code above iterates over all workers.

The current socket's data event can also use the id property to identify a worker.

```
socket.on('data', function(id) {
  var worker = cluster.workers[id];
});
```

## cluster module properties and methods

### isMaster, isWorker

The isMaster property returns a boolean indicating whether the current process is the master. It is determined by process.env.NODE_UNIQUE_ID: if process.env.NODE_UNIQUE_ID is undefined, the process is the master.

The isWorker property returns a boolean indicating whether the current process is a worker. Its value is the exact opposite of isMaster.

### fork()

The fork method creates a new worker whose context is entirely copied from the master. Only the master may call this method. It returns a worker object.

### kill()

The kill method terminates a worker. It can accept one argument, a system signal.

If the current process is the master, it ends contact with worker.process and then sends the system signal to the worker. If the current process is a worker, it ends communication with the master, then exits, returning 0.

In earlier versions, this method was also called worker.destroy().

### The listening event

After a worker calls listen(), the "listening" event propagates to that worker's server and then to the master.

The event's callback takes two arguments: the current worker object, and an address object containing the URL, port, and address type (IPv4, IPv6, Unix socket, UDP). This is very useful for Node applications that serve multiple addresses.

```
cluster.on('listening', function (worker, address) {
  console.log("A worker is now connected to " + address.address + ":" + address.port);
});
```

## Restarting a Node service without downtime

### The idea

Restarting a service means stopping and then starting it. With the cluster module you can instead start a new worker first and only then shut down all the old workers, restarting the Node service without any interruption.

First, the master sends a restart signal to a worker.

```
workers[wid].send({type: 'shutdown', from: 'master'});
```

The worker listens for the message event; as soon as it sees that the content is shutdown, it exits.

```
process.on('message', function(message) {
  if(message.type === 'shutdown') {
    process.exit(0);
  }
});
```

Here is a function that shuts down all workers.

```
function restartWorkers() {
  var wid, workerIds = [];

  for(wid in cluster.workers) {
    workerIds.push(wid);
  }

  workerIds.forEach(function(wid) {
    cluster.workers[wid].send({
      type: 'shutdown',
      from: 'master'
    });

    setTimeout(function() {
      if(cluster.workers[wid]) {
        cluster.workers[wid].kill('SIGKILL');
      }
    }, 5000);
  });
};
```

### A full example

Here is a complete example, starting with the master's code, master.js.

```
var cluster = require('cluster');

console.log('started master with ' + process.pid);

// Fork one worker
cluster.fork();

process.on('SIGHUP', function () {
  console.log('Reloading...');
  var new_worker = cluster.fork();
  new_worker.once('listening', function () {
    // Shut down all the other workers
    for(var id in cluster.workers) {
      if (id === new_worker.id.toString()) continue;
      cluster.workers[id].kill('SIGTERM');
    }
  });
});
```
上面代码中,主进程监听SIGHUP事件,如果发生该事件就关闭其他所有worker进程。之所以选择SIGHUP事件,是因为nginx服务器监听到这个信号,会新建一个worker进程,重新加载配置文件。另外,关闭worker进程时,主进程发送SIGTERM信号,这是因为Node允许多个worker进程监听同一个端口。

下面是worker进程的代码server.js。

```
var cluster = require('cluster');

if (cluster.isMaster) {
  require('./master');
  return;
}

var express = require('express');
var http = require('http');
var app = express();

app.get('/', function (req, res) {
  res.send('hello world!');
});

http.createServer(app).listen(8080, function () {
  console.log('http://localhost:8080');
});
```

使用时,在命令行启动server.js。

```
$ node server.js
started master with 10538
http://localhost:8080
```

然后,向主进程连续发出两次SIGHUP信号。

```
$ kill -SIGHUP 10538
$ kill -SIGHUP 10538
```

主进程会连续两次新建一个worker进程,然后关闭所有其他worker进程,显示如下。

```
Reloading...
http://localhost:8080
Reloading...
http://localhost:8080
```

最后,向主进程发出SIGTERM信号,关闭主进程。

`$ kill 10538`

## PM2模块

PM2模块是cluster模块的一个包装层。它的作用是尽量将cluster模块抽象掉,让用户像使用单进程一样,部署多进程Node应用。

```
// app.js
var http = require('http');

http.createServer(function(req, res) {
  res.writeHead(200);
  res.end("hello world");
}).listen(8080);
```

上面代码是标准的Node架设Web服务器的方式,然后用PM2从命令行启动这段代码。

```
$ pm2 start app.js -i 4
```

上面代码的i参数告诉PM2,这段代码应该在cluster_mode启动,且新建worker进程的数量是4个。如果i参数的值是0,那么当前机器有几个CPU内核,PM2就会启动几个worker进程。

如果一个worker进程由于某种原因挂掉了,会立刻重启该worker进程。

```
# 重启所有worker进程
$ pm2 reload all
```

每个worker进程都有一个id,可以用下面的命令查看单个worker进程的详情。

```
$ pm2 show <worker id>
```

正常情况下,PM2采用fork模式新建worker进程,即主进程fork自身,产生一个worker进程。 `pm2 reload` 命令则会用spawn方式启动,即一个接一个启动worker进程,一个新的worker启动成功,再杀死一个旧的worker进程。采用这种方式,重新部署新版本时,服务器就不会中断服务。

`$ pm2 reload <脚本文件名>`

关闭worker进程的时候,可以部署下面的代码,让worker进程监听shutdown消息。一旦收到这个消息,先做完收尾清理工作,再关闭。

```
process.on('message', function(msg) {
  if (msg === 'shutdown') {
    close_all_connections();
    delete_logs();
    server.close();
    process.exit(0);
  }
});
```

## 启动

通常,我们在Shell里启动Node脚本,但是一旦退出Shell,Node进程就会随之结束。为了长期运行,Node应用程序需要放到后台运行。但是,即使放到后台,在退出Shell以后,如果Node进程要在 `console` 输出内容,而 `console` 已经关了(即 `STDOUT` 已经不存在),这时进程就会退出,而且没有办法重新启动。

为了让Node进程在后台长期运行,需要一个daemon(即常驻的服务进程)。有几种方法可以实现。

(1)forever

`forever` 是一个Node应用程序,用于一个子进程意外退出时,自动重启。

```
# 启动进程
$ forever start example.js

# 列出所有forever启动的正在运行的进程
$ forever list

# 停止进程
$ forever stop example.js
# 或者
$ forever stop 0

# 停止所有正在运行的进程
$ forever stopall
```

# dns 模块

`dns` 模块用于域名解析。下面的例子使用 `lookup` 方法解析域名,第二个参数 `4` 表示只返回IPv4地址。

```
dns.lookup('www.myApi.com', 4, (err, address) => {
  cacheThisForLater(address);
});
```
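下面是一个可以独立运行的小示意,演示把 `dns.lookup` 的解析结果缓存起来,避免每次都做DNS查询(其中 `resolveOnce` 、 `cachedAddress` 和主机名都是示例名称,不是上文代码的一部分)。

```
var dns = require('dns');

var cachedAddress = null; // 缓存解析出来的IP地址(示例变量)

function resolveOnce(hostname, callback) {
  // 已有缓存就直接返回,用 nextTick 保持回调的异步语义
  if (cachedAddress) {
    return process.nextTick(function () {
      callback(null, cachedAddress);
    });
  }

  dns.lookup(hostname, 4, function (err, address) {
    if (err) return callback(err);
    cachedAddress = address;
    callback(null, address);
  });
}

resolveOnce('www.example.com', function (err, address) {
  if (err) throw err;
  console.log('resolved to ' + address);
});
```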
回调函数模式让 Node 可以处理异步操作。但是,为了适应回调函数,异步操作只能有两个状态:开始和结束。对于那些多状态的异步操作(状态1,状态2,状态3,……),回调函数就会无法处理,你不得不将异步操作拆开,分成多个阶段。每个阶段结束时,调用下一个回调函数。 为了解决这个问题,Node 提供 Event Emitter 接口。通过事件,解决多状态异步操作的响应问题。 Event Emitter 是一个接口,可以在任何对象上部署。这个接口由 `events` 模块提供。 `events` 模块的 `EventEmitter` 是一个构造函数,可以用来生成事件发生器的实例 `emitter` 。 然后,事件发生器的实例方法 `on` 用来监听事件,实例方法 `emit` 用来发出事件。 ``` emitter.on('someEvent', function () { console.log('event has occured'); }); function f() { console.log('start'); emitter.emit('someEvent'); console.log('end'); } f() // start // event has occured // end ``` 上面代码中, `EventEmitter` 对象实例 `emitter` 就是消息中心。通过 `on` 方法为 `someEvent` 事件指定回调函数,通过 `emit` 方法触发 `someEvent` 事件。 上面代码还表明, `EventEmitter` 对象的事件触发和监听是同步的,即只有事件的回调函数执行以后,函数 `f` 才会继续执行。 ## Event Emitter 接口的部署 Event Emitter 接口可以部署在任意对象上,使得这些对象也能订阅和发布消息。 function Dog(name) { this.name = name; } Dog.prototype.__proto__ = EventEmitter.prototype; // 另一种写法 // Dog.prototype = Object.create(EventEmitter.prototype); var simon = new Dog('simon'); simon.on('bark', function () { console.log(this.name + ' barked'); }); setInterval(function () { simon.emit('bark'); }, 500); ``` 上面代码新建了一个构造函数 `Dog` ,然后让其继承 `EventEmitter` ,因此 `Dog` 就拥有了 `EventEmitter` 的接口。最后,为 `Dog` 的实例指定 `bark` 事件的监听函数,再使用 `EventEmitter` 的 `emit` 方法,触发 `bark` 事件。 Node 内置模块 `util` 的 `inherits` 方法,提供了另一种继承 Event Emitter 接口的方法。 ``` var util = require('util'); var EventEmitter = require('events').EventEmitter; var Radio = function (station) { var self = this; setTimeout(function() { self.emit('open', station); }, 0); setTimeout(function() { self.emit('close', station); }, 5000); this.on('newListener', function(listener) { console.log('Event Listener: ' + listener); }); }; util.inherits(Radio, EventEmitter); module.exports = Radio; ``` 上面代码中, `Radio` 是一个构造函数,它的实例继承了EventEmitter接口。下面是使用这个模块的例子。 ``` var Radio = require('./radio.js'); var station = { freq: '80.16', name: 'Rock N Roll Radio', }; var radio = new Radio(station); radio.on('open', function(station) { console.log('"%s" FM %s 打开', station.name, station.freq); console.log('♬ ♫♬'); }); radio.on('close', function(station) { console.log('"%s" FM %s 关闭', station.name, station.freq); }); ``` ## Event Emitter 的实例方法 Event Emitter 的实例方法如下。 * `emitter.on(name, f)` 对事件 `name` 指定监听函数 `f` * ``` emitter.addListener(name, f) ``` `addListener` 是 `on` 方法的别名 * ``` emitter.once(name, f) ``` 与 `on` 方法类似,但是监听函数 `f` 是一次性的,使用后自动移除 * ``` emitter.listeners(name) ``` 返回一个数组,成员是事件 `name` 所有监听函数 * ``` emitter.removeListener(name, f) ``` 移除事件 `name` 的监听函数 `f` * ``` emitter.removeAllListeners(name) ``` 移除事件 `name` 的所有监听函数 ### emit() `EventEmitter` 实例对象的 `emit` 方法,用来触发事件。它的第一个参数是事件名称,其余参数都会依次传入回调函数。 var connection = function (id) { console.log('client id: ' + id); }; myEmitter.on('connection', connection); myEmitter.emit('connection', 6); // client id: 6 ``` ### once() 该方法类似于 `on` 方法,但是回调函数只触发一次。 myEmitter.once('message', function(msg){ console.log('message: ' + msg); }); myEmitter.emit('message', 'this is the first message'); myEmitter.emit('message', 'this is the second message'); myEmitter.emit('message', 'welcome to nodejs'); ``` 上面代码触发了三次message事件,但是回调函数只会在第一次调用时运行。 下面代码指定,一旦服务器连通,只调用一次的回调函数。 该方法返回一个EventEmitter对象,因此可以链式加载监听函数。 ### removeListener() 该方法用于移除回调函数。它接受两个参数,第一个是事件名称,第二个是回调函数名称。这就是说,不能用于移除匿名函数。 emitter.on('message', console.log); setInterval(function(){ emitter.emit('message', 'foo bar'); }, 300); setTimeout(function(){ emitter.removeListener('message', console.log); }, 1000); ``` 上面代码每300毫秒触发一次message事件,直到1000毫秒后取消监听。 另一个例子是使用removeListener方法模拟once方法。 function 
onlyOnce () {
  console.log("You'll never see this again");
  emitter.removeListener("firstConnection", onlyOnce);
}

emitter.on("firstConnection", onlyOnce);

### setMaxListeners()

Node默认允许同一个事件最多可以指定10个回调函数。

```
emitter.on('someEvent', function () { console.log('event 1'); });
emitter.on('someEvent', function () { console.log('event 2'); });
emitter.on('someEvent', function () { console.log('event 3'); });
```

超过10个回调函数,会发出一个警告。这个门槛值可以通过 `setMaxListeners` 方法改变。

```
emitter.setMaxListeners(20);
```

### removeAllListeners()

该方法用于移除某个事件的所有回调函数。

```
// some code here
emitter.removeAllListeners("firstConnection");
```

如果不带参数,则表示移除所有事件的所有回调函数。

### listeners()

`listeners` 方法接受一个事件名称作为参数,返回该事件所有回调函数组成的数组。

```
var ee = new EventEmitter;

function onlyOnce () {
  console.log(ee.listeners("firstConnection"));
  ee.removeListener("firstConnection", onlyOnce);
  console.log(ee.listeners("firstConnection"));
}

ee.on("firstConnection", onlyOnce)
ee.emit("firstConnection");
ee.emit("firstConnection");

// [ [Function: onlyOnce] ]
// []
```

上面代码显示两次回调函数组成的数组,第一次只有一个回调函数 `onlyOnce` ,第二次是一个空数组,因为 `removeListener` 方法取消了回调函数。

## 错误捕获

事件处理过程中抛出的错误,可以使用 `try...catch` 捕获。

```
var EventEmitter = require('events').EventEmitter;
var emitter = new EventEmitter();

emitter.on('beep', function () {
  console.log('beep');
});

emitter.on('beep', function () {
  throw new Error('oops!');
});

emitter.on('beep', function () {
  console.log('beep again');
});

console.log('before emit');

try {
  emitter.emit('beep');
} catch(err) {
  console.error('caught while emitting:', err.message);
}

console.log('after emit');
```

上面的代码, `beep` 一共绑定了三个监听函数。其中,第二个监听函数会抛出错误。执行上面的代码,会得到下面的结果。

```
before emit
beep
caught while emitting: oops!
after emit
```

可以看到,第二个监听函数抛出的错误被 `try...catch` 代码块捕获了。一旦被捕获,该事件后面的监听函数都不会再执行了。

如果不使用 `try...catch` ,可以让进程监听 `uncaughtException` 事件。

```
process.on('uncaughtException', function (err) {
  console.error('uncaught exception:', err.stack || err);

  // 关闭资源
  closeEverything(function(err) {
    if (err)
      console.error('Error while closing everything:', err.stack || err);

    // 退出进程
    process.exit(1);
  });
});
```

## 事件类型

Events模块默认支持两个事件。

* `newListener` 事件:添加新的回调函数时触发。
* `removeListener` 事件:移除回调时触发。

```
ee.on("newListener", function (evtName) {
  console.log("New Listener: " + evtName);
});

ee.on("removeListener", function (evtName) {
  console.log("Removed Listener: " + evtName);
});

function foo() {}

ee.on("save-user", foo);
ee.removeListener("save-user", foo);

// New Listener: removeListener
// New Listener: save-user
// Removed Listener: save-user
```

上面代码会触发两次newListener事件,以及一次removeListener事件。

# fs 模块

`fs` 是 `filesystem` 的缩写,该模块提供本地文件的读写能力,基本上是POSIX文件操作命令的简单包装。但是,这个模块几乎对所有操作提供异步和同步两种操作方式,供开发者选择。

## readFile(),readFileSync()

`readFile` 方法用于异步读取数据。

```
fs.readFile('./image.png', function (err, buffer) {
  if (err) throw err;
  process(buffer);
});
```

`readFile` 方法的第一个参数是文件的路径,可以是绝对路径,也可以是相对路径。注意,如果是相对路径,是相对于当前进程所在的路径( `process.cwd()` ),而不是相对于当前脚本所在的路径。

`readFile` 方法的第二个参数是读取完成后的回调函数。该函数的第一个参数是发生错误时的错误对象,第二个参数是代表文件内容的 `Buffer` 实例。

`readFileSync` 方法用于同步读取文件,返回一个字符串。

```
var text = fs.readFileSync(fileName, 'utf8');

// 将文件按行拆成数组
text.split(/\r?\n/).forEach(function (line) {
  // ...
});
```

`readFileSync` 方法的第一个参数是文件路径,第二个参数可以是一个表示配置的对象,也可以是一个表示文本文件编码的字符串。默认的配置对象是 ``` { encoding: null, flag: 'r' } ``` ,即文件编码默认为 `null` ,读取模式默认为 `r` (只读)。如果第二个参数不指定编码( `encoding` ), `readFileSync` 方法返回一个 `Buffer` 实例,否则返回的是一个字符串。

不同系统的行结尾字符不同,可以用下面的方法判断。

```
// 方法一,查询现有的行结尾字符
var EOL = fileContents.indexOf('\r\n') >= 0 ? '\r\n' : '\n';

// 方法二,根据当前系统处理
var EOL = (process.platform === 'win32' ?
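// win32(Windows)的行尾是 \r\n,其他平台是 \n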
'\r\n' : '\n');
```

## writeFile(),writeFileSync()

`writeFile` 方法用于异步写入文件。

```
fs.writeFile('message.txt', 'Hello Node.js', (err) => {
  if (err) throw err;
  console.log('It\'s saved!');
});
```

上面代码中, `writeFile` 方法的第一个参数是写入的文件名,第二个参数是写入的字符串,第三个参数是回调函数。

回调函数前面,还可以再加一个参数,表示写入字符串的编码(默认是 `utf8` )。

```
fs.writeFile('message.txt', 'Hello Node.js', 'utf8', callback);
```

`writeFileSync` 方法用于同步写入文件。

```
fs.writeFileSync(fileName, str, 'utf8');
```

它的第一个参数是文件路径,第二个参数是写入文件的字符串,第三个参数是文件编码,默认为utf8。

## exists(path, callback)

exists方法用来判断给定路径是否存在,然后不管结果如何,都会调用回调函数。

```
fs.exists('/path/to/file', function (exists) {
  util.debug(exists ? "it's there" : "no file!");
});
```

上面代码表明,回调函数的参数是一个表示文件是否存在的布尔值。

需要注意的是,不要在 `open` 方法之前调用 `exists` 方法,open方法本身就能检查文件是否存在。

下面的例子是如果给定目录存在,就删除它。

```
if (fs.existsSync(outputFolder)) {
  console.log('Removing ' + outputFolder);
  fs.rmdirSync(outputFolder);
}
```

## mkdir(),writeFile(),readFile()

mkdir方法用于新建目录。

```
var fs = require('fs');

fs.mkdir('./helloDir', 0777, function (err) {
  if (err) throw err;
});
```

mkdir接受三个参数,第一个是目录名,第二个是权限值,第三个是回调函数。

writeFile方法用于写入文件。

```
fs.writeFile('./helloDir/message.txt', 'Hello Node', function (err) {
  if (err) throw err;
  console.log('文件写入成功');
});
```

readFile方法用于读取文件内容。下面代码使用readFile方法读取文件。readFile方法的第一个参数是文件名,第二个参数是文件编码,第三个参数是回调函数。可用的文件编码包括“ascii”、“utf8”和“base64”。如果没有指定文件编码,返回的是原始的二进制数据( `Buffer` ),这时需要调用buffer对象的toString方法,将其转为字符串。

```
var fs = require('fs');

fs.readFile('example_log.txt', function (err, logData) {
  if (err) throw err;
  var text = logData.toString();
});
```

readFile方法是异步操作,所以必须小心,不要同时发起多个readFile请求。

```
for(var i = 1; i <= 1000; i++) {
  fs.readFile('./'+i+'.txt', function() {
    // do something with the file
  });
}
```

上面代码会同时发起1000个readFile异步请求,很快就会耗尽系统资源。

## mkdirSync(),writeFileSync(),readFileSync()

这三个方法是建立目录、写入文件、读取文件的同步版本。

对于流量较大的服务器,最好还是采用异步操作,因为同步操作时,只有前一个操作结束,才会开始后一个操作,如果某个操作特别耗时(常常发生在读写数据时),会导致整个程序停顿。

## readdir(),readdirSync()

`readdir` 方法用于读取目录,返回一个所包含的文件和子目录的数组。

```
fs.readdir(process.cwd(), function (err, files) {
  if (err) {
    console.log(err);
    return;
  }

  var count = files.length;
  var results = {};
  files.forEach(function (filename) {
    fs.readFile(filename, function (err, data) {
      results[filename] = data;
      count--;
      if (count <= 0) {
        // 对所有文件进行处理
      }
    });
  });
});
```

`readdirSync` 方法是 `readdir` 方法的同步版本。下面是同步列出目录内容的代码。

```
var files = fs.readdirSync(dir);
files.forEach(function (filename) {
  var fullname = path.join(dir, filename);
  var stats = fs.statSync(fullname);
  if (stats.isDirectory()) filename += '/';
  process.stdout.write(filename + '\t' +
    stats.size + '\t' +
    stats.mtime + '\n'
  );
});
```

## stat()

stat方法的参数是一个文件或目录,它产生一个对象,该对象包含了该文件或目录的具体信息。我们往往通过该方法,判断正在处理的到底是一个文件,还是一个目录。

```
var fs = require('fs');

fs.readdir('/etc/', function (err, files) {
  if (err) throw err;

  files.forEach( function (file) {
    fs.stat('/etc/' + file, function (err, stats) {
      if (err) throw err;

      if (stats.isFile()) {
        console.log("%s is file", file);
      }
      else if (stats.isDirectory ()) {
        console.log("%s is a directory", file);
      }
      console.log('stats: %s', JSON.stringify(stats));
    });
  });
});
```

## watchFile(),unwatchFile()

watchFile方法监听一个文件,如果该文件发生变化,就会自动触发回调函数。

```
var fs = require('fs');

fs.watchFile('./testFile.txt', function (curr, prev) {
  console.log('the current mtime is: ' + curr.mtime);
  console.log('the previous mtime was: ' + prev.mtime);
});

fs.writeFile('./testFile.txt', "changed", function (err) {
  if (err) throw err;
  console.log("file write complete");
});
```

`unwatchFile` 方法用于解除对文件的监听。

## createReadStream()

`createReadStream` 方法往往用于打开大型的文本文件,创建一个读取操作的数据流。所谓大型文本文件,指的是文本文件的体积很大,读取操作的缓存装不下,只能分成几次发送,每次发送会触发一个 `data` 事件,发送结束会触发 `end` 事件。

```
function
readLines(input, func) {
  var remaining = '';

  input.on('data', function(data) {
    remaining += data;
    var index = remaining.indexOf('\n');
    var last = 0;
    while (index > -1) {
      var line = remaining.substring(last, index);
      last = index + 1;
      func(line);
      index = remaining.indexOf('\n', last);
    }
    remaining = remaining.substring(last);
  });

  input.on('end', function() {
    if (remaining.length > 0) {
      func(remaining);
    }
  });
}

function func(data) {
  console.log('Line: ' + data);
}

var input = fs.createReadStream('lines.txt');
readLines(input, func);
```

## createWriteStream()

`createWriteStream` 方法创建一个写入数据流对象,该对象的 `write` 方法用于写入数据, `end` 方法用于结束写入操作。

```
var out = fs.createWriteStream(fileName, {
  encoding: 'utf8'
});
out.write(str);
out.end();
```

`createWriteStream` 方法和 `createReadStream` 方法配合,可以实现拷贝大型文件。

```
function fileCopy(filename1, filename2, done) {
  var input = fs.createReadStream(filename1);
  var output = fs.createWriteStream(filename2);

  input.on('data', function(d) { output.write(d); });
  input.on('error', function(err) { throw err; });
  input.on('end', function() {
    output.end();
    if (done) done();
  });
}
```

# http 模块

## http.STATUS_CODES

`http.STATUS_CODES` 是一个对象,属性名都是状态码,属性值则是该状态码的简短解释。

```
require('http').STATUS_CODES['301']
// "Moved Permanently"
```

### 处理GET请求

`http` 模块主要用于搭建HTTP服务。使用Node搭建HTTP服务器非常简单。

```
var http = require("http");

http.createServer(function (request, response){
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.write("Hello World");
  response.end();
}).listen(8080, '127.0.0.1');

console.log('Server running at port 8080.');

// 另一种写法
function onRequest(request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Hello World");
  response.end();
}

http.createServer(onRequest).listen(process.env.PORT);
```

上面代码第一行 ``` var http = require("http") ``` ,表示加载 `http` 模块。然后,调用 `http` 模块的 `createServer` 方法,创造一个服务器实例。

`createServer` 方法接受一个函数作为参数,该函数的 `request` 参数是一个对象,表示客户端的HTTP请求; `response` 参数也是一个对象,表示服务器端的HTTP回应。 `response.writeHead` 方法用来写入HTTP回应的头信息; `response.end` 方法用来写入HTTP回应的具体内容,以及回应完成后关闭本次对话。最后的 `listen(8080)` 表示启动服务器实例,监听本机的8080端口。

将上面这几行代码保存成文件 `app.js` ,然后执行该脚本,服务器就开始运行了。

`$ node app.js`

这时命令行窗口将显示一行提示“Server running at port 8080.”。打开浏览器,访问http://localhost:8080,网页显示“Hello world!”。

上面的例子是收到请求后生成网页,也可以事前写好网页,存在文件中,然后利用 `fs` 模块读取网页文件,将其返回。

```
var http = require('http');
var fs = require('fs');

http.createServer(function (request, response){
  fs.readFile('data.txt', function readData(err, data) {
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end(data);
  });

  // 或者
  fs.createReadStream(`${__dirname}/index.html`).pipe(response);
}).listen(8080, '127.0.0.1');
```

下面的修改则是根据不同网址的请求,显示不同的内容,已经相当于做出一个网站的雏形了。

```
http.createServer(function(req, res) {

  // 主页
  if (req.url == "/") {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("Welcome to the homepage!");
  }

  // About页面
  else if (req.url == "/about") {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("Welcome to the about page!");
  }

  // 404错误
  else {
    res.writeHead(404, { "Content-Type": "text/plain" });
    res.end("404 error! 
File not found."); } }).listen(8080, "localhost"); ``` ### request 对象 `createServer` 方法的回调函数的第一个参数是一个 `request` 对象,拥有以下属性。 * `url` :发出请求的网址。 * `method` :HTTP请求的方法。 * `headers` :HTTP请求的所有HTTP头信息。 下面的例子是获取请求的路径名。 ``` var url = require('url'); var pathname = url.parse(request.url).pathname; ``` `setEncoding()` 方法用于设置请求的编码。 ``` request.setEncoding("utf8"); ``` `addListener()` 方法用于为请求添加监听事件的回调函数。 ``` var querystring = require('querystring'); var postData = ''; request.addListener('data', function (postDataChunk) { postData += postDataChunk; }); request.addListener('end', function () { response.writeHead(200, {'Content-Type': 'text/plain'}); response.write("You've sent the text: " + querystring.parse(postData).text); response.end(); }); ``` ### 处理异步操作 遇到异步操作时,会先处理后面的请求,等到当前请求有了结果以后,再返回结果。 ``` var exec = require("child_process").exec; exec('ls -lah', function (error, stdout, stderr) { response.writeHead(200, {'Content-Type': 'text/plain'}); response.write(stdout); response.end(); }); ``` ### 处理POST请求 当客户端采用POST方法发送数据时,服务器端可以对data和end两个事件,设立监听函数。 data事件会在数据接收过程中,每收到一段数据就触发一次,接收到的数据被传入回调函数。end事件则是在所有数据接收完成后触发。 对上面代码稍加修改,就可以做出文件上传的功能。 ``` "use strict"; var http = require('http'); var fs = require('fs'); var destinationFile, fileSize, uploadedBytes; http.createServer(function (request, response) { response.writeHead(200); destinationFile = fs.createWriteStream("destination.md"); request.pipe(destinationFile); fileSize = request.headers['content-length']; uploadedBytes = 0; request.on('data', function (d) { uploadedBytes += d.length; var p = (uploadedBytes / fileSize) * 100; response.write("Uploading " + parseInt(p, 0) + " %\n"); }); request.on('end', function () { response.end("File Upload Complete"); }); }).listen(3030, function () { console.log("server started"); }); ``` ## 发出请求 ### get() get方法用于发出get请求。 ``` function getTestPersonaLoginCredentials(callback) { return http.get({ host: 'personatestuser.org', path: '/email' }, function(response) { var body = ''; response.on('data', function(d) { body += d; }); response.on('end', function() { var parsed = JSON.parse(body); callback({ email: parsed.email, password: parsed.pass }); }); }); }, ``` ### request() request方法用于发出HTTP请求,它的使用格式如下。 ``` http.request(options[, callback]) ``` request方法的options参数,可以是一个对象,也可以是一个字符串。如果是字符串,就表示这是一个URL,Node内部就会自动调用 `url.parse()` ,处理这个参数。 options对象可以设置如下属性。 * host:HTTP请求所发往的域名或者IP地址,默认是localhost。 * hostname:该属性会被 `url.parse()` 解析,优先级高于host。 * port:远程服务器的端口,默认是80。 * localAddress:本地网络接口。 * socketPath:Unix网络套接字,格式为host:port或者socketPath。 * method:指定HTTP请求的方法,格式为字符串,默认为GET。 * path:指定HTTP请求的路径,默认为根路径(/)。可以在这个属性里面,指定查询字符串,比如 `/index.html?page=12` 。如果这个属性里面包含非法字符(比如空格),就会抛出一个错误。 * headers:一个对象,包含了HTTP请求的头信息。 * auth:一个代表HTTP基本认证的字符串 `user:password` 。 * agent:控制缓存行为,如果HTTP请求使用了agent,则HTTP请求默认为 ``` Connection: keep-alive ``` ,它的可能值如下: * undefined(默认):对当前host和port,使用全局Agent。 * Agent:一个对象,会传入agent属性。 * false:不缓存连接,默认HTTP请求为 `Connection: close` 。 * keepAlive:一个布尔值,表示是否保留socket供未来其他请求使用,默认等于false。 * keepAliveMsecs:一个整数,当使用KeepAlive的时候,设置多久发送一个TCP KeepAlive包,使得连接不要被关闭。默认等于1000,只有keepAlive设为true的时候,该设置才有意义。 request方法的callback参数是可选的,在response事件发生时触发,而且只触发一次。 `http.request()` 返回一个 `http.ClientRequest` 类的实例。它是一个可写数据流,如果你想通过POST方法发送一个文件,可以将文件写入这个ClientRequest对象。 下面是发送POST请求的一个例子。 ``` var postData = querystring.stringify({ 'msg' : 'Hello World!' 
});

var options = {
  hostname: 'www.google.com',
  port: 80,
  path: '/upload',
  method: 'POST',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded',
    'Content-Length': postData.length
  }
};

var req = http.request(options, function(res) {
  console.log('STATUS: ' + res.statusCode);
  console.log('HEADERS: ' + JSON.stringify(res.headers));
  res.setEncoding('utf8');
  res.on('data', function (chunk) {
    console.log('BODY: ' + chunk);
  });
});

req.on('error', function(e) {
  console.log('problem with request: ' + e.message);
});

// write data to request body
req.write(postData);
req.end();
```

注意,上面代码中, `req.end()` 必须被调用,即使没有在请求体内写入任何数据,也必须调用。因为这表示已经完成HTTP请求。

发送过程的任何错误(DNS错误、TCP错误、HTTP解析错误),都会在request对象上触发error事件。

## Server()

`Server` 方法用于新建一个服务器实例。

```
var server = new http.Server();
server.listen(8000);

server.on('request', function (request, response) {
  // 解析请求的URL
  var url = require('url').parse(request.url);

  if (url.pathname === '/test/1') {
    response.writeHead(200, {'Content-Type': 'text/plain; charset=UTF-8'});
    response.write('Hello');
    response.end();
  } else if (url.pathname === '/test/2') {
    response.writeHead(200, {'Content-Type': 'text/plain; charset=UTF-8'});
    response.write(request.method + ' ' + request.url +
      ' HTTP/' + request.httpVersion + '\r\n');

    for (var h in request.headers) {
      response.write(h + ': ' + request.headers[h] + '\r\n');
    }
    response.write('\r\n');

    request.on('data', function(chunk) {
      response.write(chunk);
    });
    request.on('end', function(chunk) {
      response.end();
    });
  } else {
    var filename = url.pathname.substring(1);
    var type;
    switch(filename.substring(filename.lastIndexOf('.') + 1)) {
      case 'html':
      case 'htm':
        type = 'text/html; charset=UTF-8';
        break;
      case 'js':
        type = 'application/javascript; charset=UTF-8';
        break;
      case 'css':
        type = 'text/css; charset=UTF-8';
        break;
      case 'txt' :
        type = 'text/plain; charset=UTF-8';
        break;
      case 'manifest':
        type = 'text/cache-manifest; charset=UTF-8';
        break;
      default:
        type = 'application/octet-stream';
        break;
    }

    fs.readFile(filename, function (err, content) {
      if (err) {
        response.writeHead(404, { 'Content-Type': 'text/plain; charset=UTF-8'});
        response.write(err.message);
        response.end();
      } else {
        response.writeHead(200, {'Content-Type': type});
        response.write(content);
        response.end();
      }
    });
  }
});
```

`listen` 方法用于启动服务器,它可以接受多种参数。

```
var server = new http.Server();

// 端口
server.listen(8000);

// 端口,主机
server.listen(8000, 'localhost');

// 对象
server.listen({
  port: 8000,
  host: 'localhost',
})
```

以上三种写法都是合法的。

## 搭建HTTPs服务器

搭建HTTPs服务器需要有SSL证书。对于向公众提供服务的网站,SSL证书需要向证书颁发机构购买;对于自用的网站,可以自制。

自制SSL证书需要OpenSSL,具体命令如下。

```
$ openssl genrsa -out key.pem
$ openssl req -new -key key.pem -out csr.pem
$ openssl x509 -req -days 9999 -in csr.pem -signkey key.pem -out cert.pem
$ rm csr.pem
```

上面的命令生成两个文件:cert.pem(证书文件)和 key.pem(私钥文件)。有了这两个文件,就可以运行HTTPs服务器了。

Node内置Https支持。

```
var server = https.createServer({
  key: privateKey,
  cert: certificate,
  ca: certificateAuthorityCertificate
}, app);
```

Node.js提供一个https模块,专门用于处理加密访问。

```
var https = require('https');
var fs = require('fs');

var options = {
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem')
};

var a = https.createServer(options, function (req, res) {
  res.writeHead(200);
  res.end("hello world\n");
}).listen(8000);
```

上面代码显示,HTTPs服务器与HTTP服务器的最大区别,就是createServer方法多了一个options参数。运行以后,就可以测试是否能够正常访问。

## 模块属性

(1)HTTP请求的属性

* headers:HTTP请求的头信息。
* url:请求的路径。

## 模块方法

(1)http模块的方法

* createServer(callback):创造服务器实例。

(2)服务器实例的方法

* listen(port):启动服务器监听指定端口。

(3)HTTP回应的方法

* setHeader(key, value):指定HTTP头信息。
* 
write(str):指定HTTP回应的内容。 * end():发送HTTP回应。 Koa是一个类似于Express的Web开发框架,创始人也是同一个人。它的主要特点是,使用了ES6的Generator函数,进行了架构的重新设计。也就是说,Koa的原理和内部结构很像Express,但是语法和内部结构进行了升级。 官方faq有这样一个问题:”为什么koa不是Express 4.0?“,回答是这样的:”Koa与Express有很大差异,整个设计都是不同的,所以如果将Express 3.0按照这种写法升级到4.0,就意味着重写整个程序。所以,我们觉得创造一个新的库,是更合适的做法。“ ## Koa应用 一个Koa应用就是一个对象,包含了一个middleware数组,这个数组由一组Generator函数组成。这些函数负责对HTTP请求进行各种加工,比如生成缓存、指定代理、请求重定向等等。 ``` var koa = require('koa'); var app = koa(); app.use(function *(){ this.body = 'Hello World'; }); 上面代码中,变量app就是一个Koa应用。它监听3000端口,返回一个内容为Hello World的网页。 app.use方法用于向middleware数组添加Generator函数。 listen方法指定监听端口,并启动当前应用。它实际上等同于下面的代码。 ``` var http = require('http'); var koa = require('koa'); var app = koa(); http.createServer(app.callback()).listen(3000); ``` ## 中间件 Koa的中间件很像Express的中间件,也是对HTTP请求进行处理的函数,但是必须是一个Generator函数。而且,Koa的中间件是一个级联式(Cascading)的结构,也就是说,属于是层层调用,第一个中间件调用第二个中间件,第二个调用第三个,以此类推。上游的中间件必须等到下游的中间件返回结果,才会继续执行,这点很像递归。 中间件通过当前应用的use方法注册。 ``` app.use(function* (next){ var start = new Date; // (1) yield next; // (2) var ms = new Date - start; // (3) console.log('%s %s - %s', this.method, this.url, ms); // (4) }); ``` 上面代码中, `app.use` 方法的参数就是中间件,它是一个Generator函数,最大的特征就是function命令与参数之间,必须有一个星号。Generator函数的参数next,表示下一个中间件。 Generator函数内部使用yield命令,将程序的执行权转交给下一个中间件,即 `yield next` ,要等到下一个中间件返回结果,才会继续往下执行。上面代码中,Generator函数体内部,第一行赋值语句首先执行,开始计时,第二行yield语句将执行权交给下一个中间件,当前中间件就暂停执行。等到后面的中间件全部执行完成,执行权就回到原来暂停的地方,继续往下执行,这时才会执行第三行,计算这个过程一共花了多少时间,第四行将这个时间打印出来。 下面是一个两个中间件级联的例子。 ``` app.use(function *() { this.body = "header\n"; yield saveResults.call(this); this.body += "footer\n"; }); function *saveResults() { this.body += "Results Saved!\n"; } ``` 上面代码中,第一个中间件调用第二个中间件saveResults,它们都向 `this.body` 写入内容。最后, `this.body` 的输出如下。 ``` header Results Saved! footer ``` 只要有一个中间件缺少 `yield next` 语句,后面的中间件都不会执行,这一点要引起注意。 app.use(function *(next){ console.log('>> two'); this.body = 'two'; console.log('<< two'); }); 上面代码中,因为第二个中间件少了 `yield next` 语句,第三个中间件并不会执行。 如果想跳过一个中间件,可以直接在该中间件的第一行语句写上 `return yield next` 。 ``` app.use(function* (next) { if (skip) return yield next; }) ``` 由于Koa要求中间件唯一的参数就是next,导致如果要传入其他参数,必须另外写一个返回Generator函数的函数。 ``` function logger(format) { return function *(next){ var str = format .replace(':method', this.method) .replace(':url', this.url); console.log(str); yield next; } } app.use(logger(':method :url')); ``` 上面代码中,真正的中间件是logger函数的返回值,而logger函数是可以接受参数的。 ### 多个中间件的合并 由于中间件的参数统一为next(意为下一个中间件),因此可以使用 `.call(this, next)` ,将多个中间件进行合并。 ``` function *random(next) { if ('/random' == this.path) { this.body = Math.floor(Math.random()*10); } else { yield next; } }; function *backwards(next) { if ('/backwards' == this.path) { this.body = 'sdrawkcab'; } else { yield next; } } function *pi(next) { if ('/pi' == this.path) { this.body = String(Math.PI); } else { yield next; } } function *all(next) { yield random.call(this, backwards.call(this, pi.call(this, next))); } app.use(all); ``` 上面代码中,中间件all内部,就是依次调用random、backwards、pi,后一个中间件就是前一个中间件的参数。 Koa内部使用koa-compose模块,进行同样的操作,下面是它的源码。 上面代码中,middleware是中间件数组。前一个中间件的参数是后一个中间件,依次类推。如果最后一个中间件没有next参数,则传入一个空函数。 ## 路由 可以通过 `this.path` 属性,判断用户请求的路径,从而起到路由作用。 ``` app.use(function* (next) { if (this.path === '/') { this.body = 'we are at home!'; } else { yield next; } }) // 等同于 app.use(function* (next) { if (this.path !== '/') return yield next; this.body = 'we are at home!'; }) ``` 下面是多路径的例子。 ``` let koa = require('koa') let app = koa() // normal route app.use(function* (next) { if (this.path !== '/') { return yield next } this.body = 'hello world' }); // /404 route 
app.use(function* (next) { if (this.path !== '/404') { return yield next; } this.body = 'page not found' }); // /500 route app.use(function* (next) { if (this.path !== '/500') { return yield next; } this.body = 'internal server error' }); app.listen(8080) ``` 上面代码中,每一个中间件负责一个路径,如果路径不符合,就传递给下一个中间件。 复杂的路由需要安装koa-router插件。 ``` var app = require('koa')(); var Router = require('koa-router'); var myRouter = new Router(); myRouter.get('/', function *(next) { this.response.body = 'Hello World!'; }); app.use(myRouter.routes()); 上面代码对根路径设置路由。 Koa-router实例提供一系列动词方法,即一种HTTP动词对应一种方法。典型的动词方法有以下五种。 * router.get() * router.post() * router.put() * router.del() * router.patch() 这些动词方法可以接受两个参数,第一个是路径模式,第二个是对应的控制器方法(中间件),定义用户请求该路径时服务器行为。 ``` router.get('/', function *(next) { this.body = 'Hello World!'; }); ``` 上面代码中, `router.get` 方法的第一个参数是根路径,第二个参数是对应的函数方法。 注意,路径匹配的时候,不会把查询字符串考虑在内。比如, `/index?param=xyz` 匹配路径 `/index` 。 有些路径模式比较复杂,Koa-router允许为路径模式起别名。起名时,别名要添加为动词方法的第一个参数,这时动词方法变成接受三个参数。 ``` router.get('user', '/users/:id', function *(next) { // ... }); ``` 上面代码中,路径模式 `\users\:id` 的名字就是 `user` 。路径的名称,可以用来引用对应的具体路径,比如url方法可以根据路径名称,结合给定的参数,生成具体的路径。 ``` router.url('user', 3); // => "/users/3" router.url('user', { id: 3 }); // => "/users/3" ``` 上面代码中,user就是路径模式的名称,对应具体路径 `/users/:id` 。url方法的第二个参数3,表示给定id的值是3,因此最后生成的路径是 `/users/3` 。 Koa-router允许为路径统一添加前缀。 ``` var router = new Router({ prefix: '/users' }); router.get('/', ...); // 等同于"/users" router.get('/:id', ...); // 等同于"/users/:id" ``` 路径的参数通过 `this.params` 属性获取,该属性返回一个对象,所有路径参数都是该对象的成员。 ``` // 访问 /programming/how-to-node router.get('/:category/:title', function *(next) { console.log(this.params); // => { category: 'programming', title: 'how-to-node' } }); ``` param方法可以针对命名参数,设置验证条件。 ``` router .get('/users/:user', function *(next) { this.body = this.user; }) .param('user', function *(id, next) { var users = [ '0号用户', '1号用户', '2号用户']; this.user = users[id]; if (!this.user) return this.status = 404; yield next; }) ``` 上面代码中,如果 `/users/:user` 的参数user对应的不是有效用户(比如访问 `/users/3` ),param方法注册的中间件会查到,就会返回404错误。 redirect方法会将某个路径的请求,重定向到另一个路径,并返回301状态码。 ``` router.redirect('/login', 'sign-in'); // 等同于 router.all('/login', function *() { this.redirect('/sign-in'); this.status = 301; }); ``` redirect方法的第一个参数是请求来源,第二个参数是目的地,两者都可以用路径模式的别名代替。 ## context对象 中间件当中的this表示上下文对象context,代表一次HTTP请求和回应,即一次访问/回应的所有信息,都可以从上下文对象获得。context对象封装了request和response对象,并且提供了一些辅助方法。每次HTTP请求,就会创建一个新的context对象。 ``` app.use(function *(){ this; // is the Context this.request; // is a koa Request this.response; // is a koa Response }); ``` context对象的很多方法,其实是定义在ctx.request对象或ctx.response对象上面,比如,ctx.type和ctx.length对应于ctx.response.type和ctx.response.length,ctx.path和ctx.method对应于ctx.request.path和ctx.request.method。 context对象的全局属性。 * request:指向Request对象 * response:指向Response对象 * req:指向Node的request对象 * res:指向Node的response对象 * app:指向App对象 * state:用于在中间件传递信息。 ``` this.state.user = yield User.find(id); ``` 上面代码中,user属性存放在 `this.state` 对象上面,可以被另一个中间件读取。 context对象的全局方法。 * throw():抛出错误,直接决定了HTTP回应的状态码。 * assert():如果一个表达式为false,则抛出一个错误。 this.throw(400, 'name required'); // 等同于 var err = new Error('name required'); err.status = 400; throw err; ``` assert方法的例子。 ``` // 格式 ctx.assert(value, [msg], [status], [properties]) 以下模块解析POST请求的数据。 * co-body * https://github.com/koajs/body-parser * https://github.com/koajs/body-parsers ``` var parse = require('co-body'); // in Koa handler var body = yield parse(this); ``` ## 错误处理机制 Koa提供内置的错误处理机制,任何中间件抛出的错误都会被捕捉到,引发向客户端返回一个500错误,而不会导致进程停止,因此也就不需要forever这样的模块重启进程。 ``` 
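// 下面的中间件故意抛出一个错误,交给Koa内置的错误处理机制捕获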
app.use(function *() { throw new Error(); }); ``` 上面代码中,中间件内部抛出一个错误,并不会导致Koa应用挂掉。Koa内置的错误处理机制,会捕捉到这个错误。 当然,也可以额外部署自己的错误处理机制。 ``` app.use(function *() { try { yield saveResults(); } catch (err) { this.throw(400, '数据无效'); } }); ``` 上面代码自行部署了try…catch代码块,一旦产生错误,就用 `this.throw` 方法抛出。该方法可以将指定的状态码和错误信息,返回给客户端。 对于未捕获错误,可以设置error事件的监听函数。 ``` app.on('error', function(err){ log.error('server error', err); }); ``` error事件的监听函数还可以接受上下文对象,作为第二个参数。 ``` app.on('error', function(err, ctx){ log.error('server error', err, ctx); }); ``` 如果一个错误没有被捕获,koa会向客户端返回一个500错误“Internal Server Error”。 this.throw方法用于向客户端抛出一个错误。 this.throw('name required', 400) // 等同于 var err = new Error('name required'); err.status = 400; throw err; ``` `this.throw` 方法的两个参数,一个是错误码,另一个是报错信息。如果省略状态码,默认是500错误。 `this.assert` 方法用于在中间件之中断言,用法类似于Node的assert模块。 上面代码中,如果this.user属性不存在,会抛出一个401错误。 由于中间件是层级式调用,所以可以把 `try { yield next }` 当成第一个中间件。 ``` app.use(function *(next) { try { yield next; } catch (err) { this.status = err.status || 500; this.body = err.message; this.app.emit('error', err, this); } }); app.use(function *(next) { throw new Error('some error'); }) ``` ## cookie cookie的读取和设置。 ``` this.cookies.get('view'); this.cookies.set('view', n); ``` get和set方法都可以接受第三个参数,表示配置参数。其中的signed参数,用于指定cookie是否加密。如果指定加密的话,必须用 `app.keys` 指定加密短语。 ``` app.keys = ['secret1', 'secret2']; this.cookies.set('name', '张三', { signed: true }); ``` this.cookie的配置对象的属性如下。 * signed:cookie是否加密。 * expires:cookie何时过期 * path:cookie的路径,默认是“/”。 * domain:cookie的域名。 * secure:cookie是否只有https请求下才发送。 * httpOnly:是否只有服务器可以取到cookie,默认为true。 ## session ``` var session = require('koa-session'); var koa = require('koa'); var app = koa(); app.keys = ['some secret hurr']; app.use(session(app)); app.use(function *(){ var n = this.session.views || 0; this.session.views = ++n; this.body = n + ' views'; }) app.listen(3000); console.log('listening on port 3000'); ``` ## Request对象 Request对象表示HTTP请求。 (1)this.request.header 返回一个对象,包含所有HTTP请求的头信息。它也可以写成 `this.request.headers` 。 (2)this.request.method 返回HTTP请求的方法,该属性可读写。 (3)this.request.length 返回HTTP请求的Content-Length属性,取不到值,则返回undefined。 (4)this.request.path 返回HTTP请求的路径,该属性可读写。 (5)this.request.href 返回HTTP请求的完整路径,包括协议、端口和url。 ``` this.request.href // http://example.com/foo/bar?q=1 ``` (6)this.request.querystring 返回HTTP请求的查询字符串,不含问号。该属性可读写。 (7)this.request.search 返回HTTP请求的查询字符串,含问号。该属性可读写。 (8)this.request.host 返回HTTP请求的主机(含端口号)。 (9)this.request.hostname 返回HTTP的主机名(不含端口号)。 (10)this.request.type 返回HTTP请求的Content-Type属性。 ``` var ct = this.request.type; // "image/png" ``` (11)this.request.charset 返回HTTP请求的字符集。 ``` this.request.charset // "utf-8" ``` (12)this.request.query 返回一个对象,包含了HTTP请求的查询字符串。如果没有查询字符串,则返回一个空对象。该属性可读写。 比如,查询字符串 ``` color=blue&size=small ``` ,会得到以下的对象。 ``` { color: 'blue', size: 'small' } ``` (13)this.request.fresh 返回一个布尔值,表示缓存是否代表了最新内容。通常与If-None-Match、ETag、If-Modified-Since、Last-Modified等缓存头,配合使用。 ``` this.response.set('ETag', '123'); // 检查客户端请求的内容是否有变化 if (this.request.fresh) { this.response.status = 304; return; } // 否则就表示客户端的内容陈旧了, // 需要取出新内容 this.response.body = yield db.find('something'); ``` (14)this.request.stale 返回 `this.request.fresh` 的相反值。 (15)this.request.protocol 返回HTTP请求的协议,https或者http。 (16)this.request.secure 返回一个布尔值,表示当前协议是否为https。 (17)this.request.ip 返回发出HTTP请求的IP地址。 (18)this.request.subdomains 返回一个数组,表示HTTP请求的子域名。该属性必须与app.subdomainOffset属性搭配使用。app.subdomainOffset属性默认为2,则域名“tobi.ferrets.example.com”返回[“ferrets”, “tobi”],如果app.subdomainOffset设为3,则返回[“tobi”]。 (19)this.request.is(types…) 
返回指定的类型字符串,表示HTTP请求的Content-Type属性是否为指定类型。 ``` // Content-Type为 text/html; charset=utf-8 this.request.is('html'); // 'html' this.request.is('text/html'); // 'text/html' this.request.is('text/*', 'text/html'); // 'text/html' // Content-Type为 application/json this.request.is('json', 'urlencoded'); // 'json' this.request.is('application/json'); // 'application/json' this.request.is('html', 'application/*'); // 'application/json' ``` 如果不满足条件,返回false;如果HTTP请求不含数据,则返回undefined。 ``` this.is('html'); // false ``` 它可以用于过滤HTTP请求,比如只允许请求下载图片。 ``` if (this.is('image/*')) { // process } else { this.throw(415, 'images only!'); } ``` (20)this.request.accepts(types) 检查HTTP请求的Accept属性是否可接受,如果可接受,则返回指定的媒体类型,否则返回false。 ``` // Accept: text/html this.request.accepts('html'); // "html" // Accept: text/*, application/json this.request.accepts('html'); // "html" this.request.accepts('text/html'); // "text/html" this.request.accepts('json', 'text'); // => "json" this.request.accepts('application/json'); // => "application/json" // Accept: text/*, application/json this.request.accepts('image/png'); this.request.accepts('png'); // false // Accept: text/*;q=.5, application/json this.request.accepts(['html', 'json']); this.request.accepts('html', 'json'); // "json" // No Accept header this.request.accepts('html', 'json'); // "html" this.request.accepts('json', 'html'); // => "json" ``` 如果accepts方法没有参数,则返回所有支持的类型(text/html,application/xhtml+xml,image/webp,application/xml,/)。 如果accepts方法的参数有多个参数,则返回最佳匹配。如果都不匹配则返回false,并向客户端抛出一个406”Not Acceptable“错误。 如果HTTP请求没有Accept字段,那么accepts方法返回它的第一个参数。 accepts方法可以根据不同Accept字段,向客户端返回不同的字段。 ``` switch (this.request.accepts('json', 'html', 'text')) { case 'json': break; case 'html': break; case 'text': break; default: this.throw(406, 'json, html, or text only'); } ``` (21)this.request.acceptsEncodings(encodings) 该方法根据HTTP请求的Accept-Encoding字段,返回最佳匹配,如果没有合适的匹配,则返回false。 ``` // Accept-Encoding: gzip this.request.acceptsEncodings('gzip', 'deflate', 'identity'); // "gzip" this.request.acceptsEncodings(['gzip', 'deflate', 'identity']); // "gzip" ``` 注意,acceptEncodings方法的参数必须包括identity(意为不编码)。 如果HTTP请求没有Accept-Encoding字段,acceptEncodings方法返回所有可以提供的编码方法。 ``` // Accept-Encoding: gzip, deflate this.request.acceptsEncodings(); // ["gzip", "deflate", "identity"] ``` 如果都不匹配,acceptsEncodings方法返回false,并向客户端抛出一个406“Not Acceptable”错误。 (22)this.request.acceptsCharsets(charsets) 该方法根据HTTP请求的Accept-Charset字段,返回最佳匹配,如果没有合适的匹配,则返回false。 ``` // Accept-Charset: utf-8, iso-8859-1;q=0.2, utf-7;q=0.5 this.request.acceptsCharsets('utf-8', 'utf-7'); // => "utf-8" this.request.acceptsCharsets(['utf-7', 'utf-8']); // => "utf-8" ``` ``` // Accept-Charset: utf-8, iso-8859-1;q=0.2, utf-7;q=0.5 this.request.acceptsCharsets(); // ["utf-8", "utf-7", "iso-8859-1"] ``` 如果都不匹配,acceptsCharsets方法返回false,并向客户端抛出一个406“Not Acceptable”错误。 (23)this.request.acceptsLanguages(langs) 该方法根据HTTP请求的Accept-Language字段,返回最佳匹配,如果没有合适的匹配,则返回false。 ``` // Accept-Language: en;q=0.8, es, pt this.request.acceptsLanguages('es', 'en'); // "es" this.request.acceptsLanguages(['en', 'es']); // "es" ``` ``` // Accept-Language: en;q=0.8, es, pt this.request.acceptsLanguages(); // ["es", "pt", "en"] ``` 如果都不匹配,acceptsLanguages方法返回false,并向客户端抛出一个406“Not Acceptable”错误。 (24)this.request.socket 返回HTTP请求的socket。 (25)this.request.get(field) 返回HTTP请求指定的字段。 ## Response对象 Response对象表示HTTP回应。 (1)this.response.header 返回HTTP回应的头信息。 (2)this.response.socket 返回HTTP回应的socket。 (3)this.response.status 返回HTTP回应的状态码。默认情况下,该属性没有值。该属性可读写,设置时等于一个整数。 (4)this.response.message 
返回HTTP回应的状态信息。该属性与 `this.response.status` 是配对的。该属性可读写。

(5)this.response.length

返回HTTP回应的Content-Length字段。该属性可读写,如果没有设置它的值,koa会自动从this.response.body推断。

(6)this.response.body

返回HTTP回应的信息体。该属性可读写,它的值可能有以下几种类型。

* 字符串:Content-Type字段默认为text/html或text/plain,字符集默认为utf-8,Content-Length字段同时设定。
* 二进制Buffer:Content-Type字段默认为application/octet-stream,Content-Length字段同时设定。
* Stream:Content-Type字段默认为application/octet-stream。
* JSON对象:Content-Type字段默认为application/json。
* null(表示没有信息体)

如果 `this.response.status` 没设置,Koa会自动将其设为200或204。

(7)this.response.get(field)

返回HTTP回应的指定字段。

```
var etag = this.get('ETag');
```

注意,get方法的参数是区分大小写的。

(8)this.response.set()

设置HTTP回应的指定字段。

```
this.set('Cache-Control', 'no-cache');
```

set方法也可以接受一个对象作为参数,同时为多个字段指定值。

```
this.set({
  'Etag': '1234',
  'Last-Modified': date
});
```

(9)this.response.remove(field)

移除HTTP回应的指定字段。

(10)this.response.type

返回HTTP回应的Content-Type字段,不包括“charset”参数的部分。

```
var ct = this.response.type;
// "image/png"
```

该属性是可写的。

```
this.response.type = 'text/plain; charset=utf-8';
this.response.type = 'image/png';
this.response.type = '.png';
this.response.type = 'png';
```

设置type属性的时候,如果没有提供charset参数,Koa会判断是否自动设置。如果 `this.response.type` 设为html,charset默认设为utf-8;但如果 `this.response.type` 设为text/html,就不会提供charset的默认值。

(11)this.response.is(types…)

该方法类似于 `this.request.is()` ,用于检查HTTP回应的类型是否为支持的类型。

它可以在中间件中起到处理不同格式内容的作用。

```
var minify = require('html-minifier');

app.use(function *minifyHTML(next){
  yield next;

  if (!this.response.is('html')) return;

  var body = this.response.body;
  if (!body || body.pipe) return;

  if (Buffer.isBuffer(body)) body = body.toString();
  this.response.body = minify(body);
});
```

上面代码是一个中间件,如果输出的内容类型为HTML,就会进行最小化处理。

(12)this.response.redirect(url, [alt])

该方法执行302跳转到指定网址。

```
this.redirect('back');
this.redirect('back', '/index.html');
this.redirect('/login');
this.redirect('http://google.com');
```

如果redirect方法的第一个参数是back,将重定向到HTTP请求的Referrer字段指定的网址,如果没有该字段,则重定向到第二个参数或“/”网址。

如果想修改302状态码,或者修改body文字,可以采用下面的写法。

```
this.status = 301;
this.redirect('/cart');
this.body = 'Redirecting to shopping cart';
```

(13)this.response.attachment([filename])

该方法将HTTP回应的Content-Disposition字段,设为“attachment”,提示浏览器下载指定文件。

(14)this.response.headerSent

该属性返回一个布尔值,表示HTTP回应是否已经发出。

(15)this.response.lastModified

该属性以Date对象的形式,返回HTTP回应的Last-Modified字段(如果该字段存在)。该属性可写。

```
this.response.lastModified = new Date();
```

(16)this.response.etag

该属性设置HTTP回应的ETag字段。

```
this.response.etag = crypto.createHash('md5').update(this.body).digest('hex');
```

注意,不能用该属性读取ETag字段。

(17)this.response.vary(field)

该方法将参数添加到HTTP回应的Vary字段。

## CSRF攻击

CSRF攻击是指用户的session被劫持,用来冒充用户的攻击。

koa-csrf插件用来防止CSRF攻击。原理是在session之中写入一个秘密的token,用户每次使用POST方法提交数据的时候,必须含有这个token,否则就会抛出错误。

```
var koa = require('koa');
var session = require('koa-session');
var csrf = require('koa-csrf');
var route = require('koa-route');

var app = module.exports = koa();

app.keys = ['session key', 'csrf example'];
app.use(session(app));
app.use(csrf());

app.use(route.get('/token', token));
app.use(route.post('/post', post));

function* token () {
  this.body = this.csrf;
}

function* post() {
  this.body = {ok: true};
}
```

POST请求含有token,可以是以下几种方式之一,koa-csrf插件就能获得token。

* 表单的_csrf字段
* 查询字符串的_csrf字段
* HTTP请求头信息的x-csrf-token字段
* HTTP请求头信息的x-xsrf-token字段

## 数据压缩

koa-compress模块可以实现数据压缩。

```
app.use(require('koa-compress')())
app.use(function* () {
  this.type = 'text/plain'
  this.body = fs.createReadStream('filename.txt')
})
```

## 源码解读

每一个网站就是一个app,它由 `lib/application` 定义。

```
function Application() {
  if (!(this instanceof Application)) return new Application;
  this.env = process.env.NODE_ENV || 'development';
  this.subdomainOffset = 2;
  this.middleware = [];
  this.context = Object.create(context);
  this.request = Object.create(request);
  this.response = Object.create(response);
}

var app = Application.prototype;

exports = module.exports = Application;
```

`app.use()` 用于注册中间件,即将Generator函数放入中间件数组。

```
app.use = function(fn){
  if (!this.experimental) {
    // es7 async functions are allowed
    assert(fn && 'GeneratorFunction' == fn.constructor.name, 'app.use() requires a generator function');
  }
  debug('use %s', fn._name || fn.name || '-');
  this.middleware.push(fn);
  return this;
};
```

`app.listen()` 就是 ``` http.createServer(app.callback()).listen(...) ``` 的缩写。

```
app.listen = function(){
  debug('listen');
  var server = http.createServer(this.callback());
  return server.listen.apply(server, arguments);
};

app.callback = function(){
  var mw = [respond].concat(this.middleware);
  var fn = this.experimental
    ? compose_es7(mw)
    : co.wrap(compose(mw));
  var self = this;

  if (!this.listeners('error').length) this.on('error', this.onerror);

  return function(req, res){
    res.statusCode = 404;
    var ctx = self.createContext(req, res);
    onFinished(res, ctx.onerror);
    fn.call(ctx).catch(ctx.onerror);
  }
};
```

上面代码中, `app.callback()` 会返回一个函数,用来处理HTTP请求。它的第一行 ``` mw = [respond].concat(this.middleware) ``` ,表示将respond函数(这也是一个Generator函数)放入 `this.middleware` ,现在mw就变成了 ``` [respond, S1, S2, S3] ``` 。

`compose(mw)` 将中间件数组转为一个层层调用的Generator函数:下一个Generator函数总是作为参数,传给上一个Generator函数,从而保证了层层调用。 ``` var fn = co.wrap(gen) ``` 则是将Generator函数包装成一个自动执行的函数,并且返回一个Promise。

```
//co package
co.wrap = function (fn) {
  return function () {
    return co.call(this, fn.apply(this, arguments));
  };
};
```

由于 `co.wrap(compose(mw))` 执行后,返回的是一个Promise,所以可以对其使用catch方法指定捕捉错误的回调函数 ``` fn.call(ctx).catch(ctx.onerror) ``` 。

将所有的上下文变量都放进context对象。

```
app.createContext = function(req, res){
  var context = Object.create(this.context);
  var request = context.request = Object.create(this.request);
  var response = context.response = Object.create(this.response);
  context.app = request.app = response.app = this;
  context.req = request.req = response.req = req;
  context.res = request.res = response.res = res;
  request.ctx = response.ctx = context;
  request.response = response;
  response.request = request;
  context.onerror = context.onerror.bind(context);
  context.originalUrl = request.originalUrl = req.url;
  context.cookies = new Cookies(req, res, this.keys);
  context.accept = request.accept = accepts(req);
  context.state = {};
  return context;
};
```

真正处理HTTP请求的是下面这个Generator函数。

```
function *respond(next) {
  yield *next;

  // allow bypassing koa
  if (false === this.respond) return;

  var res = this.res;
  if (res.headersSent || !this.writable) return;

  var body = this.body;
  var code = this.status;

  // ignore body
  if (statuses.empty[code]) {
    // strip headers
    this.body = null;
    return res.end();
  }

  if ('HEAD' == this.method) {
    if (isJSON(body)) this.length = Buffer.byteLength(JSON.stringify(body));
    return res.end();
  }

  // status body
  if (null == body) {
    this.type = 'text';
    body = this.message || String(code);
    this.length = Buffer.byteLength(body);
    return res.end(body);
  }

  // responses
  if (Buffer.isBuffer(body)) return res.end(body);
  if ('string' == typeof body) return res.end(body);
  if (body instanceof Stream) return body.pipe(res);

  // body: json
  body = JSON.stringify(body);
  this.length = Buffer.byteLength(body);
  res.end(body);
}
```
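为了直观理解这种层层调用,下面是一个可以独立运行的简化示意(不是Koa的真实源码, `one` 、 `two` 等函数名都是示例)。由于这里不依赖co模块,中间件内部用 `yield *` 直接委托,而真实的Koa中间件写的是 `yield next` ,由co负责执行。

```
function compose(middleware) {
  return function *(next) {
    if (!next) next = noop();
    var i = middleware.length;
    // 从最后一个中间件开始,把后一个中间件作为前一个的参数
    while (i--) {
      next = middleware[i].call(this, next);
    }
    yield *next;
  };
}

function *noop() {}

function *one(next) {
  console.log('>> one');
  yield *next;
  console.log('<< one');
}

function *two(next) {
  console.log('>> two');
  yield *next;
  console.log('<< two');
}

// 这里没有真正的异步操作,所以一次 next() 调用就会同步跑完整个链条
var gen = compose([one, two])();
gen.next();
// >> one
// >> two
// << two
// << one
```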
# CommonJS 模块

Node 应用由模块组成,采用 CommonJS 模块规范。

每个文件就是一个模块,有自己的作用域。在一个文件里面定义的变量、函数、类,都是私有的,对其他文件不可见。

```
// example.js
var x = 5;
var addX = function (value) {
  return value + x;
};
```

上面代码中,变量 `x` 和函数 `addX` ,是当前文件 `example.js` 私有的,其他文件不可见。

如果想在多个文件分享变量,必须定义为 `global` 对象的属性。

```
global.warning = true;
```

上面代码的 `warning` 变量,可以被所有文件读取。当然,这样写法是不推荐的。

CommonJS规范规定,每个模块内部, `module` 变量代表当前模块。这个变量是一个对象,它的 `exports` 属性(即 `module.exports` )是对外的接口。加载某个模块,其实是加载该模块的 `module.exports` 属性。

```
var x = 5;
var addX = function (value) {
  return value + x;
};
module.exports.x = x;
module.exports.addX = addX;
```

上面代码通过 `module.exports` 输出变量 `x` 和函数 `addX` 。

`require` 方法用于加载模块。

```
var example = require('./example.js');

console.log(example.x); // 5
console.log(example.addX(1)); // 6
```

`require` 方法的详细解释参见《Require命令》一节。

CommonJS模块的特点如下。

* 所有代码都运行在模块作用域,不会污染全局作用域。
* 模块可以多次加载,但是只会在第一次加载时运行一次,然后运行结果就被缓存了,以后再加载,就直接读取缓存结果。要想让模块再次运行,必须清除缓存。
* 模块加载的顺序,按照其在代码中出现的顺序。

## module对象

Node内部提供一个 `Module` 构建函数。所有模块都是 `Module` 的实例。

```
function Module(id, parent) {
  this.id = id;
  this.exports = {};
  this.parent = parent;
  // ...
}
```

每个模块内部,都有一个 `module` 对象,代表当前模块。它有以下属性。

* `module.id` 模块的识别符,通常是带有绝对路径的模块文件名。
* `module.filename` 模块的文件名,带有绝对路径。
* `module.loaded` 返回一个布尔值,表示模块是否已经完成加载。
* `module.parent` 返回一个对象,表示调用该模块的模块。
* `module.children` 返回一个数组,表示该模块要用到的其他模块。
* `module.exports` 表示模块对外输出的值。

下面是一个示例文件,最后一行输出module变量。

```
// example.js
var jquery = require('jquery');
exports.$ = jquery;
console.log(module);
```

执行这个文件,命令行会输出如下信息。

```
{ id: '.',
  exports: { '$': [Function] },
  parent: null,
  filename: '/path/to/example.js',
  loaded: false,
  children:
   [ { id: '/path/to/node_modules/jquery/dist/jquery.js',
       exports: [Function],
       parent: [Circular],
       filename: '/path/to/node_modules/jquery/dist/jquery.js',
       loaded: true,
       children: [],
       paths: [Object] } ],
  paths:
   [ '/home/user/deleted/node_modules',
     '/home/user/node_modules',
     '/home/node_modules',
     '/node_modules' ] }
```

如果在命令行下调用某个模块,比如 `node something.js` ,那么 `module.parent` 就是 `null` 。如果是在脚本之中调用,比如 ``` require('./something.js') ``` ,那么 `module.parent` 就是调用它的模块。利用这一点,可以判断当前模块是否为入口脚本。

```
if (!module.parent) {
  // ran with `node something.js`
  app.listen(8088, function() {
    console.log('app listening on port 8088');
  })
} else {
  // used with `require('/.something.js')`
  module.exports = app;
}
```

### module.exports属性

`module.exports` 属性表示当前模块对外输出的接口,其他文件加载该模块,实际上就是读取 `module.exports` 变量。

```
var EventEmitter = require('events').EventEmitter;
module.exports = new EventEmitter();

setTimeout(function() {
  module.exports.emit('ready');
}, 1000);
```

上面模块会在加载后1秒后,发出ready事件。其他文件监听该事件,可以写成下面这样。

```
var a = require('./a');
a.on('ready', function() {
  console.log('module a is ready');
});
```

### exports变量

为了方便,Node为每个模块提供一个exports变量,指向module.exports。这等同于在每个模块头部,有一行这样的命令。

```
var exports = module.exports;
```

造成的结果是,在对外输出模块接口时,可以向exports对象添加方法。

注意,不能直接将exports变量指向一个值,因为这样等于切断了 `exports` 与 `module.exports` 的联系。

```
exports = function(x) {console.log(x)};
```

上面这样的写法是无效的,因为 `exports` 不再指向 `module.exports` 了。

下面的写法也是无效的。

```
exports.hello = function() {
  return 'hello';
};

module.exports = 'Hello world';
```

上面代码中, `hello` 函数是无法对外输出的,因为 `module.exports` 被重新赋值了。

这意味着,如果一个模块的对外接口,就是一个单一的值,不能使用 `exports` 输出,只能使用 `module.exports` 输出。

```
module.exports = function (x){ console.log(x);};
```

如果你觉得, `exports` 与 `module.exports` 之间的区别很难分清,一个简单的处理方法,就是放弃使用 `exports` ,只使用 `module.exports` 。

## AMD规范与CommonJS规范的兼容性
CommonJS规范加载模块是同步的,也就是说,只有加载完成,才能执行后面的操作。AMD规范则是非同步加载模块,允许指定回调函数。由于Node.js主要用于服务器编程,模块文件一般都已经存在于本地硬盘,所以加载起来比较快,不用考虑非同步加载的方式,所以CommonJS规范比较适用。但是,如果是浏览器环境,要从服务器端加载模块,这时就必须采用非同步模式,因此浏览器端一般采用AMD规范。 AMD规范使用define方法定义模块,下面就是一个例子: ``` define(['package/lib'], function(lib){ function foo(){ lib.log('hello world!'); } return { foo: foo }; }); ``` AMD规范允许输出的模块兼容CommonJS规范,这时 `define` 方法需要写成下面这样: ``` define(function (require, exports, module){ var someModule = require("someModule"); var anotherModule = require("anotherModule"); someModule.doTehAwesome(); anotherModule.doMoarAwesome(); exports.asplode = function (){ someModule.doTehAwesome(); anotherModule.doMoarAwesome(); }; }); ``` ## require命令 Node使用CommonJS模块规范,内置的 `require` 命令用于加载模块文件。 `require` 命令的基本功能是,读入并执行一个JavaScript文件,然后返回该模块的exports对象。如果没有发现指定模块,会报错。 ``` // example.js var invisible = function () { console.log("invisible"); } exports.message = "hi"; exports.say = function () { console.log(message); } ``` 运行下面的命令,可以输出exports对象。 ``` var example = require('./example.js'); example // { // message: "hi", // say: [Function] // } ``` 如果模块输出的是一个函数,那就不能定义在exports对象上面,而要定义在 `module.exports` 变量上面。 ``` module.exports = function () { console.log("hello world") } require('./example2.js')() ``` 上面代码中,require命令调用自身,等于是执行 `module.exports` ,因此会输出 hello world。 ### 加载规则 `require` 命令用于加载文件,后缀名默认为 `.js` 。 ``` var foo = require('foo'); // 等同于 var foo = require('foo.js'); ``` 根据参数的不同格式, `require` 命令去不同路径寻找模块文件。 (1)如果参数字符串以“/”开头,则表示加载的是一个位于绝对路径的模块文件。比如, ``` require('/home/marco/foo.js') ``` 将加载 `/home/marco/foo.js` 。 (2)如果参数字符串以“./”开头,则表示加载的是一个位于相对路径(跟当前执行脚本的位置相比)的模块文件。比如, `require('./circle')` 将加载当前脚本同一目录的 `circle.js` 。 (3)如果参数字符串不以“./“或”/“开头,则表示加载的是一个默认提供的核心模块(位于Node的系统安装目录中),或者一个位于各级node_modules目录的已安装模块(全局安装或局部安装)。 举例来说,脚本 ``` /home/user/projects/foo.js ``` 执行了 `require('bar.js')` 命令,Node会依次搜索以下文件。 * /usr/local/lib/node/bar.js * /home/user/projects/node_modules/bar.js * /home/user/node_modules/bar.js * /home/node_modules/bar.js * /node_modules/bar.js 这样设计的目的是,使得不同的模块可以将所依赖的模块本地化。 (4)如果参数字符串不以“./“或”/“开头,而且是一个路径,比如 ``` require('example-module/path/to/file') ``` ,则将先找到 `example-module` 的位置,然后再以它为参数,找到后续路径。 (5)如果指定的模块文件没有发现,Node会尝试为文件名添加 `.js` 、 `.json` 、 `.node` 后,再去搜索。 `.js` 件会以文本格式的JavaScript脚本文件解析, `.json` 文件会以JSON格式的文本文件解析, `.node` 文件会以编译后的二进制文件解析。 (6)如果想得到 `require` 命令加载的确切文件名,使用 `require.resolve()` 方法。 ### 目录的加载规则 通常,我们会把相关的文件会放在一个目录里面,便于组织。这时,最好为该目录设置一个入口文件,让 `require` 方法可以通过这个入口文件,加载整个目录。 在目录中放置一个 `package.json` 文件,并且将入口文件写入 `main` 字段。下面是一个例子。 ``` // package.json { "name" : "some-library", "main" : "./lib/some-library.js" } ``` `require` 发现参数字符串指向一个目录以后,会自动查看该目录的 `package.json` 文件,然后加载 `main` 字段指定的入口文件。如果 `package.json` 文件没有 `main` 字段,或者根本就没有 `package.json` 文件,则会加载该目录下的 `index.js` 文件或 `index.node` 文件。 ### 模块的缓存 第一次加载某个模块时,Node会缓存该模块。以后再加载该模块,就直接从缓存取出该模块的 `module.exports` 属性。 ``` require('./example.js'); require('./example.js').message = "hello"; require('./example.js').message // "hello" ``` 上面代码中,连续三次使用 `require` 命令,加载同一个模块。第二次加载的时候,为输出的对象添加了一个 `message` 属性。但是第三次加载的时候,这个message属性依然存在,这就证明 `require` 命令并没有重新加载模块文件,而是输出了缓存。 如果想要多次执行某个模块,可以让该模块输出一个函数,然后每次 `require` 这个模块的时候,重新执行一下输出的函数。 所有缓存的模块保存在 `require.cache` 之中,如果想删除模块的缓存,可以像下面这样写。 ``` // 删除指定模块的缓存 delete require.cache[moduleName]; // 删除所有模块的缓存 Object.keys(require.cache).forEach(function(key) { delete require.cache[key]; }) ``` 注意,缓存是根据绝对路径识别模块的,如果同样的模块名,但是保存在不同的路径, `require` 命令还是会重新加载该模块。 ### 环境变量NODE_PATH Node执行一个脚本时,会先查看环境变量 `NODE_PATH` 。它是一组以冒号分隔的绝对路径。在其他位置找不到指定模块时,Node会去这些路径查找。 可以将NODE_PATH添加到 
`.bashrc` 。

```
export NODE_PATH="/usr/local/lib/node"
```

所以,如果遇到复杂的相对路径,比如 ``` require('../../../../lib/myModule') ``` 这样的写法,就很麻烦。有两种解决方法,一是将该文件加入 `node_modules` 目录,二是修改 `NODE_PATH` 环境变量, `package.json` 文件可以采用下面的写法。

```
{
  "name": "node_path",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "NODE_PATH=lib node index.js"
  },
  "author": "",
  "license": "ISC"
}
```

`NODE_PATH` 是历史遗留下来的一个路径解决方案,通常不应该使用,而应该使用 `node_modules` 目录机制。

### 模块的循环加载

如果发生模块的循环加载,即A加载B,B又加载A,则B将加载A的不完整版本。

```
// a.js
exports.x = 'a1';
console.log('a.js ', require('./b.js').x);
exports.x = 'a2';

// b.js
exports.x = 'b1';
console.log('b.js ', require('./a.js').x);
exports.x = 'b2';

// main.js
console.log('main.js ', require('./a.js').x);
console.log('main.js ', require('./b.js').x);
```

上面代码是三个JavaScript文件。其中,a.js加载了b.js,而b.js又加载a.js。这时,Node返回a.js的不完整版本,所以执行结果如下。

```
$ node main.js
b.js  a1
a.js  b2
main.js  a2
main.js  b2
```

修改main.js,再次加载a.js和b.js。

```
// main.js
console.log('main.js ', require('./a.js').x);
console.log('main.js ', require('./b.js').x);
console.log('main.js ', require('./a.js').x);
console.log('main.js ', require('./b.js').x);
```

执行上面代码,结果如下。

```
$ node main.js
b.js  a1
a.js  b2
main.js  a2
main.js  b2
main.js  a2
main.js  b2
```

上面代码中,第二次加载a.js和b.js时,会直接从缓存读取exports属性,所以a.js和b.js内部的console.log语句都不会执行了。

### require.main

`require` 方法有一个 `main` 属性,可以用来判断模块是直接执行,还是被调用执行。

直接执行的时候( `node module.js` ), `require.main` 属性指向模块本身。

```
require.main === module
// true
```

调用执行的时候(通过 `require` 加载该脚本执行),上面的表达式返回false。

## 模块的加载机制

CommonJS模块的加载机制是,输入的是被输出的值的拷贝。也就是说,一旦输出一个值,模块内部的变化就影响不到这个值。请看下面这个例子。

下面是一个模块文件 `lib.js` 。

```
// lib.js
var counter = 3;
function incCounter() {
  counter++;
}
module.exports = {
  counter: counter,
  incCounter: incCounter,
};
```

上面代码输出内部变量 `counter` 和改写这个变量的内部方法 `incCounter` 。

然后,加载上面的模块。

```
// main.js
var counter = require('./lib').counter;
var incCounter = require('./lib').incCounter;

console.log(counter); // 3
incCounter();
console.log(counter); // 3
```

上面代码说明, `counter` 输出以后, `lib.js` 模块内部的变化就影响不到 `counter` 了。

### require的内部处理流程

`require` 命令是CommonJS规范之中,用来加载其他模块的命令。它其实不是一个全局命令,而是指向当前模块的 `module.require` 命令,而后者又调用Node的内部命令 `Module._load` 。

```
Module._load = function(request, parent, isMain) {
  // 1. 检查 Module._cache,是否缓存之中有指定模块
  // 2. 如果缓存之中没有,就创建一个新的Module实例
  // 3. 将它保存到缓存
  // 4. 使用 module.load() 加载指定的模块文件,
  //    读取文件内容之后,使用 module._compile() 执行文件代码
  // 5. 如果加载/解析过程报错,就从缓存删除该模块
  // 6. 返回该模块的 module.exports
};
```

上面的第4步,采用 `module._compile()` 执行指定模块的脚本,逻辑如下。

```
Module.prototype._compile = function(content, filename) {
  // 1. 生成一个require函数,指向module.require
  // 2. 加载其他辅助方法到require
  // 3. 将文件内容放到一个函数之中,该函数可调用 require
  // 4. 执行该函数
};
```

上面的第1步和第2步, `require` 函数及其辅助方法主要如下。

* `require()` : 加载外部模块
* `require.resolve()` :将模块名解析到一个绝对路径
* `require.main` :指向主模块
* `require.cache` :指向所有缓存的模块
* `require.extensions` :根据文件的后缀名,调用不同的执行函数

一旦 `require` 函数准备完毕,整个所要加载的脚本内容,就被放到一个新的函数之中,这样可以避免污染全局环境。该函数的参数包括 `require` 、 `module` 、 `exports` ,以及其他一些参数。

```
(function (exports, require, module, __filename, __dirname) {
  // YOUR CODE INJECTED HERE!
});
```

`Module._compile` 方法是同步执行的,所以 `Module._load` 要等它执行完成,才会向用户返回 `module.exports` 的值。
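根据上面的流程,可以写一个极简的加载器示意(只演示“读取文件 → 包进函数 → 执行”这一步,省略了路径解析、缓存和错误处理; `loadModuleDemo` 是这里虚构的函数名,不是Node的真实API)。

```
var fs = require('fs');
var path = require('path');

function loadModuleDemo(filename) {
  var code = fs.readFileSync(filename, 'utf8');

  // 与 Node 内部一样,把模块代码放进一个函数,避免污染全局作用域
  var wrapper = new Function(
    'exports', 'require', 'module', '__filename', '__dirname',
    code
  );

  var module = { exports: {} };
  wrapper(module.exports, require, module, filename, path.dirname(filename));

  // 加载一个模块,就是返回它的 module.exports
  return module.exports;
}

// 用法示例:var example = loadModuleDemo('./example.js');
```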
# MongoDB

MongoDB是目前最流行的NoSQL数据库之一。它的文档格式与JSON非常接近,所以与Node配合使用非常方便。

MongoDB的一条记录叫做文档(document),它是一个包含多个字段的数据结构,很类似于JSON格式。

下面是文档的一个例子。

```
{
   "_id" : ObjectId("54c955492b7c8eb21818bd09"),
   "address" : {
      "street" : "2 Avenue",
      "zipcode" : "10075",
      "building" : "1480",
      "coord" : [ -73.9557413, 40.7720266 ]
   },
   "borough" : "Manhattan",
   "cuisine" : "Italian",
   "grades" : [
      {
         "date" : ISODate("2014-10-01T00:00:00Z"),
         "grade" : "A",
         "score" : 11
      },
      {
         "date" : ISODate("2014-01-16T00:00:00Z"),
         "grade" : "B",
         "score" : 17
      }
   ],
   "name" : "Vella",
   "restaurant_id" : "41704620"
}
```

文档储存在集合(collection)之中,类似于关系型数据库的表。在一个集合之中,记录的格式并不需要相同。每个集合之中的每个文档,必须有一个 `_id` 字段作为主键。

安装完MongoDB数据库以后,使用 `mongod` 命令启动MongoDB。

```
$ mongod
# 或者指定配置文件
$ mongod --config /etc/mongodb.conf
```

然后,安装Node驱动库。

```
$ npm install mongodb
```

脚本里引用MongoDB客户端的代码如下。

```
var MongoClient = require('mongodb').MongoClient;
var assert = require('assert');
var url = 'mongodb://localhost:27017/test';

MongoClient.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log('Connected correctly to server.');
  db.close();
});
```

### 插入数据

```
var insertDocument = function(db, callback) {
  db.collection('restaurants').insertOne( {
    "address" : {
      "street" : "2 Avenue",
      "zipcode" : "10075",
      "building" : "1480",
      "coord" : [ -73.9557413, 40.7720266 ]
    },
    "borough" : "Manhattan",
    "cuisine" : "Italian",
    "grades" : [
      {
        "date" : new Date("2014-10-01T00:00:00Z"),
        "grade" : "A",
        "score" : 11
      },
      {
        "date" : new Date("2014-01-16T00:00:00Z"),
        "grade" : "B",
        "score" : 17
      }
    ],
    "name" : "Vella",
    "restaurant_id" : "41704620"
  }, function(err, result) {
    assert.equal(err, null);
    console.log("Inserted a document into the restaurants collection.");
    callback();
  });
};
```

在连接数据库的回调函数里面,调用insertDocument函数,就可以执行插入操作。

### 查询操作

取出一个collection里面的所有文档。

```
var findRestaurants = function(db, callback) {
  var cursor = db.collection('restaurants').find( );
  cursor.each(function(err, doc) {
    assert.equal(err, null);
    if (doc !== null) {
      console.dir(doc);
    } else {
      callback();
    }
  });
};
```

查询语句的写法如下。

```
{ <field1>: <value1>, <field2>: <value2>, ...
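// 同一个查询文档里的多个字段,默认是 AND 的关系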
} ``` 下面是一个指定查询条件的例子。 ``` var findRestaurants = function(db, callback) { var cursor =db.collection('restaurants').find( { "borough": "Manhattan" } ); cursor.each(function(err, doc) { assert.equal(err, null); if (doc != null) { console.dir(doc); } else { callback(); } }); }; ``` 查询的时候,可以指定嵌套属性。 ``` var cursor =db.collection('restaurants').find( { "address.zipcode": "10075" } ); ``` 查询条件还可以指定数组的一个值。 ``` var cursor =db.collection('restaurants').find( { "grades.grade": "B" } ); ``` 查询条件可以指定运算符。 ``` // 大于 var cursor =db.collection('restaurants').find( { "grades.score": { $gt: 30 } } ); // 小于 var cursor =db.collection('restaurants').find( { "grades.score": { $lt: 10 } } ); ``` 查询条件可以指定逻辑运算符。 ``` // AND 运算 var cursor =db.collection('restaurants').find( { "cuisine": "Italian", "address.zipcode": "10075" } ); // OR 运算 var cursor =db.collection('restaurants').find( { $or: [ { "cuisine": "Italian" }, { "address.zipcode": "10075" } ] } ); ``` `sort` 方法用于排序,1代表升序,-1代表降序。 ``` var cursor =db.collection('restaurants').find().sort( { "borough": 1, "address.zipcode": 1 } ); ``` ### 更新数据 更新指定文档。 `updateOne` 方法返回更新的文档的数目。 ``` var updateRestaurants = function(db, callback) { db.collection('restaurants').updateOne( { "name" : "Juni" }, { $set: { "cuisine": "American (New)" }, $currentDate: { "lastModified": true } }, function(err, results) { console.log(results); callback(); }); }; updateRestaurants(db, function() { db.close(); }); }); ``` 更新嵌入的字段。 ``` db.collection('restaurants').updateOne( { "restaurant_id" : "41156888" }, { $set: { "address.street": "East 31st Street" } }, function(err, results) { console.log(results); callback(); } ); ``` 更新多个字段。 ``` db.collection('restaurants').updateMany( { "address.zipcode": "10016", cuisine: "Other" }, { $set: { cuisine: "Category To Be Determined" }, $currentDate: { "lastModified": true } } , function(err, results) { console.log(results); callback(); }); ``` 替换整个文档,除了 `_id` 字段。 ``` db.collection('restaurants').replaceOne( { "restaurant_id" : "41704620" }, { "name" : "<NAME>", "address" : { "coord" : [ -73.9557413, 40.7720266 ], "building" : "1480", "street" : "2 Avenue", "zipcode" : "10075" } }, function(err, results) { console.log(results); callback(); }); ``` `_id` 字段不能更新。 ### 删除数据 删除符合条件的所有文档。 ``` var removeRestaurants = function(db, callback) { db.collection('restaurants').deleteMany( { "borough": "Manhattan" }, function(err, results) { console.log(results); callback(); } ); }; removeRestaurants(db, function() { db.close(); }); }); ``` 删除单一文档。 ``` db.collection('restaurants').deleteOne( { "borough": "Queens" }, function(err, results) { console.log(results); callback(); } ); ``` 删除所有文档。 ``` db.collection('restaurants').deleteMany( {}, function(err, results) { console.log(results); callback(); }); ``` 删除整个集合。 ``` db.collection('restaurants').drop( function(err, response) { console.log(response) callback(); }); ``` ### 聚合操作 ``` var aggregateRestaurants = function(db, callback) { db.collection('restaurants').aggregate( [ { $group: { "_id": "$borough", "count": { $sum: 1 } } } ]).toArray(function(err, result) { assert.equal(err, null); console.log(result); callback(result); }); }; 上面的代码产生下面的结果。 ``` [ { _id: 'Missing', count: 51 }, { _id: 'Staten Island', count: 969 }, { _id: 'Manhattan', count: 10259 }, { _id: 'Brooklyn', count: 6086 }, { _id: 'Queens', count: 5656 }, { _id: 'Bronx', count: 2338 } ] ``` 带有过滤条件的聚合。 ``` db.collection('restaurants').aggregate( [ { $match: { "borough": "Queens", "cuisine": "Brazilian" } }, { $group: { "_id": "$address.zipcode" , "count": { 
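// $sum: 1 表示每匹配到一个文档,计数就加一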
### Indexes

Creating a single-field index: `1` means ascending, `-1` means descending.

```
var indexRestaurants = function(db, callback) {
  db.collection('restaurants').createIndex(
    { "cuisine": 1 },
    null,
    function(err, results) {
      console.log(results);
      callback();
    }
  );
};
```

Creating a compound index on several fields.

```
db.collection('restaurants').createIndex(
  { "cuisine": 1, "address.zipcode": -1 },
  null,
  function(err, results) {
    console.log(results);
    callback();
  }
);
```

## Command-line operations

Importing data.

```
$ mongoimport --db test --collection restaurants --drop --file primer-dataset.json
```

## Mongoose

Several middleware libraries can connect Node.js with MongoDB; Mongoose is currently one of the most widely used.

First, install Mongoose as a local module in the project directory. Then you can connect to the MongoDB database from a Node.js script. Note that MongoDB must already be running when you execute that script. Once the database is connected, you can attach listeners to its `open` and `error` events. (These steps are sketched below.)
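The steps just described are shown here as a minimal sketch (the connection URL and the `test` database name are assumptions; adjust them to your setup):

```
$ npm install mongoose
```

```
var mongoose = require('mongoose');

// MongoDB must already be running at this (assumed) address.
mongoose.connect('mongodb://localhost/test');

var db = mongoose.connection;

db.on('error', function (err) {
  console.error('connection error:', err);
});

db.once('open', function () {
  console.log('connected!');
});
```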
The `mongoose.Schema` method defines the format (schema) of a collection, and the `mongoose.model` method binds that schema to a named collection.

```
var Schema = mongoose.Schema;

var userSchema = new Schema({
  name: String,
  age: Number,
  DOB: Date,
  isAlive: Boolean
});

var User = mongoose.model('User', userSchema);

var arvind = new User({
  name: 'Arvind',
  age: 99,
  DOB: '01/01/1915',
  isAlive: true
});

arvind.save(function (err, data) {
  if (err) {
    console.log(err);
  } else {
    console.log('Saved : ', data);
  }
});
```

# net module

The `net` module provides low-level network communication.

Below is a simple echo server listening on port 2000.

```
var net = require('net');

var server = net.createServer();

server.listen(2000, function () {
  console.log('Listening on port 2000');
});

server.on('connection', function (stream) {
  console.log('Accepting connection from', stream.remoteAddress);

  stream.on('data', function (data) {
    stream.write(data);
  });

  stream.on('end', function (data) {
    console.log('Connection closed');
  });
});
```

## isIP()

The `isIP` method tests whether a string is an IP address: it returns `4` for IPv4, `6` for IPv6, and `0` otherwise.

```
require('net').isIP('10.0.0.1') // 4
require('net').isIP('cats') // 0
```

## Server-side socket interface

Here is a simple Telnet-style service.

```
var net = require('net');
var fs = require('fs');

var port = 1081;
var logo = fs.readFileSync('logo.txt');
var ps1 = '\n\n>>> ';

net.createServer(function (socket) {
  socket.write(logo);
  socket.write(ps1);
  socket.on('data', recv.bind(null, socket));
}).listen(port);
```

The code above sets up a service on port 1081, which you can reach with telnet.

```
$ telnet localhost 1081
```

Once telnet connects, the prompt `>>>` appears; entering a command invokes the callback `recv`.

```
var request = require('request');
var cheerio = require('cheerio');

// The selector below targets Wikipedia article markup, so baseUrl
// is assumed to point at Wikipedia.
var baseUrl = 'https://en.wikipedia.org/wiki/';

function recv(socket, data) {
  if (data === 'quit') {
    socket.end('Bye!\n');
    return;
  }

  request({ uri: baseUrl + data }, function (error, response, body) {
    if (body && body.length) {
      var $ = cheerio.load(body);
      socket.write($('#mw-content-text p').first().text() + '\n');
    } else {
      socket.write('Error: ' + response.statusCode);
    }
    socket.write(ps1);
  });
}
```

In the code above, the command `quit` ends the telnet session; any other input triggers a remote request, and the fetched text is written back to the screen.

The next example uses more of the API.

```
var serverPort = 9099;
var net = require('net');

var server = net.createServer(function(client) {
  console.log('client connected');
  console.log('client IP Address: ' + client.remoteAddress);
  console.log('is IPv6: ' + net.isIPv6(client.remoteAddress));
  console.log('total server connections: ' + server.connections);

  // Waiting for data from the client.
  client.on('data', function(data) {
    console.log('received data: ' + data.toString());
    // Write data to the client socket.
    client.write('hello from server');
  });

  // Closed socket event from the client.
  client.on('end', function() {
    console.log('client disconnected');
  });
});

server.on('error', function(err) {
  console.log(err);
  server.close();
});

server.listen(serverPort, function() {
  console.log('server started on port ' + serverPort);
});
```

In the code above, `createServer` sets up the server side; whenever data arrives from a client the server replies, and it also watches for the client ending the connection. Finally, the `listen` method starts the server.

## Client-side socket interface

The client socket interface sends data to a server.

```
var serverPort = 9099;
var server = 'localhost';
var net = require('net');

console.log('connecting to server...');

var client = net.connect({ host: server, port: serverPort }, function() {
  console.log('client connected');
  // send data
  console.log('send data to server');
  client.write('greeting from client socket');
});

client.on('data', function(data) {
  console.log('received data: ' + data.toString());
  client.end();
});

client.on('error', function(err) {
  console.log(err);
});

client.on('end', function() {
  console.log('client disconnected');
});
```

The code above connects to the server, sends data, then listens for the server's reply.

## DNS module

The DNS module resolves domain names. The `resolve4` method queries IPv4 records, `resolve6` queries IPv6 records, and `lookup` works in both settings, returning the IP address (`address`) and the environment in use (IPv4 or IPv6).

```
var dns = require('dns');

dns.resolve4('www.pecollege.net', function (err, addresses) {
  if (err) console.log(err);
  console.log('addresses: ' + JSON.stringify(addresses));
});

dns.lookup('www.pecollege.net', function (err, address, family) {
  if (err) console.log(err);
  console.log('address: ' + JSON.stringify(address));
  console.log('family: ' + JSON.stringify(family));
});
```
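The module can also go the other way: `dns.reverse` maps an IP address back to hostnames. A small sketch (the address below is only an illustration):

```
var dns = require('dns');

// Reverse-resolve an IP address to its hostnames (example address).
dns.reverse('8.8.8.8', function (err, hostnames) {
  if (err) console.log(err);
  console.log('hostnames: ' + JSON.stringify(hostnames));
});
```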
# npm

`npm` means two things. One is Node's open module registry and management system, at npmjs.org. The other is Node's default module manager, a command-line tool used to install and manage Node modules.

`npm` needs no separate installation; it is installed along with Node. However, the `npm` bundled with Node may not be the latest version, so it is best to update it with the command below.

```
$ npm install npm@latest -g
```

In this command, `@latest` means the newest version and `-g` means a global install, so the core of it is `npm install npm`: using `npm` to install itself. That works because `npm` itself is no different from any other Node module.

Then the following commands show various information.

```
# list the npm subcommands
$ npm help

# show the basic usage of each subcommand
$ npm -l

# show npm's version
$ npm -v

# show npm's configuration
$ npm config list -l
```

## npm init

`npm init` initializes and generates a new `package.json` file. It asks the user a series of questions; if the defaults are fine, just keep pressing Enter.

With `-f` (for force) or `-y` (for yes), the question phase is skipped and a new `package.json` is generated directly, as sketched after this command.

```
$ npm init -y
```
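For reference, a `package.json` produced by `npm init -y` looks roughly like the following; the exact fields vary with the npm version, and the name is taken from the current directory (here assumed to be `myproject`):

```
{
  "name": "myproject",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
```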
## npm set

`npm set` sets npm configuration variables, which other commands use as defaults.

```
$ npm set init-author-name 'Your name'
$ npm set init-author-email 'Your email'
$ npm set init-author-url 'http://yourdomain.com'
$ npm set init-license 'MIT'
```

The commands above amount to presetting defaults for `npm init`: afterwards, the author name, email, homepage, and license fields of a newly generated `package.json` are filled in automatically. The values are stored in the `~/.npmrc` file in your home directory, so you do not have to retype them for every project. If a particular project needs different settings, run `npm config` for that project.

```
$ npm set save-exact true
```

The command above makes `npm install` record the exact version of each added module in `package.json`, rather than an open version range.

## npm config

```
$ npm config set prefix $dir
```

The command above makes the `$dir` directory the global installation directory for modules. If you have write permission for that directory, running `npm install` no longer requires `sudo`.

```
$ npm config set save-prefix ~
```

The command above changes the version prefix that `npm install --save` and `npm install --save-dev` write for newly installed modules from the caret (`^`) to the tilde (`~`), that is, from allowing minor-version upgrades to allowing only patch upgrades.

```
$ npm config set init.author.name $name
$ npm config set init.author.email $email
```

The commands above set the default values of the corresponding fields in the `package.json` generated by `npm init`.

## npm info

`npm info` shows details about any module. For example, here is the information for the underscore module.

```
$ npm info underscore
{ name: 'underscore',
  description: 'JavaScript\'s functional programming helper library.',
  'dist-tags': { latest: '1.5.2', stable: '1.5.2' },
  repository:
   { type: 'git',
     url: 'git://github.com/jashkenas/underscore.git' },
  homepage: 'http://underscorejs.org',
  main: 'underscore.js',
  version: '1.5.2',
  devDependencies: { phantomjs: '1.9.0-1' },
  licenses:
   { type: 'MIT',
     url: 'https://raw.github.com/jashkenas/underscore/master/LICENSE' },
  files:
   [ 'underscore.js',
     'underscore-min.js',
     'LICENSE' ],
  readmeFilename: 'README.md' }
```

The command returns a JavaScript object containing the module's details. Each member of that object can be queried directly from the `info` command.

```
$ npm info underscore description
JavaScript's functional programming helper library.

$ npm info underscore homepage
http://underscorejs.org

$ npm info underscore version
1.5.2
```

## npm search

`npm search` searches the npm registry. It takes a string or a regular expression.

```
$ npm search <search term>
```

Here is an example.

```
$ npm search node-gyp
// NAME                  DESCRIPTION
// autogypi              Autogypi handles dependencies for node-gyp projects.
// grunt-node-gyp        Run node-gyp commands from Grunt.
// gyp-io                Temporary solution to let node-gyp run `rebuild` under…
// ...
```

## npm list

`npm list` prints a tree of every module installed in the current project, along with the modules they depend on.

```
$ npm list
```

With the global parameter, it lists globally installed modules.

```
$ npm list -global
```

`npm list` can also list a single module.

```
$ npm list underscore
```
## npm install

Node modules are installed with `npm install`.

Each module can be installed "globally" or "locally". A global install puts the module in a system directory, where every project can use it; generally this only suits tool modules such as `eslint` and `gulp`. A local install downloads the module into the `node_modules` subdirectory of the current project; only code inside the project directory can then load it.

```
# local install
$ npm install <package name>

# global install
$ sudo npm install -global <package name>
$ sudo npm install -g <package name>
```

`npm install` also accepts a GitHub repository address directly.

```
$ npm install git://github.com/package/path.git
$ npm install git://github.com/package/path.git#0.1.0
```

Before installing, `npm install` checks whether the module already exists in the `node_modules` directory; if it does, it is not reinstalled, even when the registry holds a newer version.

To force reinstallation of a module whether or not it is already present, use the `-f` or `--force` parameter.

```
$ npm install <packageName> --force
```

To force reinstallation of every module, delete the `node_modules` directory and run `npm install` again.

```
$ rm -rf node_modules
$ npm install
```

### Installing specific versions

The install command installs the latest version of a module by default; to install a specific version, append `@` and the version number to the module name.

```
$ npm install sax@latest
$ npm install [email protected]
$ npm install sax@">=0.1.0 <0.2.0"
```

With the `--save-exact` parameter, the exact installed version is recorded in package.json.

```
$ npm install readable-stream --save --save-exact
```

The install command takes parameters that decide what kind of dependency the module becomes, that is, which section of packages.json it is written to.

* `--save`: the module is added to `dependencies`; can be shortened to `-S`.
* `--save-dev`: the module is added to `devDependencies`; can be shortened to `-D`.

```
$ npm install sax --save
$ npm install node-tap --save-dev
# or
$ npm install sax -S
$ npm install node-tap -D
```

To install a beta version of a module, use commands like the following.

```
# install the latest beta
$ npm install <module-name>@beta

# install a specific beta
$ npm install <module-name>@1.3.1-beta.3
```

By default, `npm install` installs every module listed in both the `dependencies` and `devDependencies` fields; with the `--production` parameter, only the `dependencies` modules are installed.

```
$ npm install --production
# or
$ NODE_ENV=production npm install
```

Once a module is installed, it can be loaded in code with `require`.

```
var backbone = require('backbone')

console.log(backbone.VERSION)
```

### Avoiding system permissions

By default, global npm modules are installed in a system directory (such as `/usr/local/lib/`) that ordinary users cannot write to, so the `sudo` command is needed. That is inconvenient; global modules can be installed without root as follows.

First, create a configuration file `.npmrc` in your home directory and point the `prefix` variable at a directory under your home.

```
prefix = /home/yourUsername/npm
```

Then create the `npm` subdirectory in your home directory.

```
$ mkdir ~/npm
```

From then on, globally installed modules land in that subdirectory, and npm also looks for commands in `~/npm/bin`.

Finally, add that path to the PATH variable in `.bash_profile` (or `.bashrc`).

```
export PATH=~/npm/bin:$PATH
```

## npm update, npm uninstall

`npm update` updates locally installed modules.

```
# update a module of the current project
$ npm update [package name]

# update a globally installed module
$ npm update -global [package name]
```

It first queries the registry for the latest version, then checks the local version. If the module is missing locally, or the remote version is newer, it installs it.

With the `-S` or `--save` parameter, the module's version number in `package.json` is updated at install time as well.

```
// package.json before the update
dependencies: {
  dep1: "^1.1.1"
}

// package.json after the update
dependencies: {
  dep1: "^1.2.2"
}
```

Note that since npm v2.6.1, `npm update` only updates top-level modules, not the dependencies of dependencies; earlier versions updated recursively. To get the old behavior, use the command below.

```
$ npm --depth 9999 update
```

`npm uninstall` removes an installed module.

```
$ npm uninstall [package name]

# uninstall a global module
$ npm uninstall [package name] -global
```
## npm run

`npm` is not only a module manager; it can also run scripts. The `scripts` field of `package.json` defines script commands for `npm` to invoke directly.

```
{
  "name": "myproject",
  "devDependencies": {
    "jshint": "latest",
    "browserify": "latest",
    "mocha": "latest"
  },
  "scripts": {
    "lint": "jshint **.js",
    "test": "mocha test/"
  }
}
```

In the code above, the `scripts` field defines two commands, `lint` and `test`. Typing `npm run-script lint` (or `npm run lint`) executes `jshint **.js`; typing `npm run-script test` (or `npm run test`) executes `mocha test/`.

`npm run` is the abbreviation of `npm run-script`; the short form is the usual one, but the long form better reflects what the command really does.

`npm run` automatically adds the `node_modules/.bin` directory to the `$PATH` environment variable, so commands called from the `scripts` field need no path, which removes the need to install npm modules globally.

npm has two built-in shorthands: `npm test` is equivalent to `npm run test`, and `npm start` is equivalent to `npm run start`.

`npm run` spawns a Shell to execute the given command, temporarily adding `node_modules/.bin` to the PATH variable, which means locally installed modules can be run directly.

For example, suppose you install ESLint.

```
$ npm i eslint --save-dev
```

Running that command has two effects: ESLint is installed into the current project's `node_modules` subdirectory, and a symlink `node_modules/.bin/eslint` is created, pointing at ESLint's executable script.

You can then reference the `eslint` script in the `scripts` property of `package.json` without a path.

```
{
  "name": "<NAME>",
  "devDependencies": {
    "eslint": "^1.10.3"
  },
  "scripts": {
    "lint": "eslint ."
  }
}
```

When `npm run lint` executes, it actually runs `./node_modules/.bin/eslint .`.

Running `npm run` without any arguments lists every runnable command under the `scripts` property.

```
$ npm run
Available scripts in the user-service package:
  lint
    jshint **.js
  test
    mocha test/
```

Here is another `package.json` example.

```
"scripts": {
  "watch": "watchify client/main.js -o public/app.js -v",
  "build": "browserify client/main.js -o public/app.js",
  "start": "npm run watch & nodemon server.js",
  "test": "node test/all.js"
},
```

The `scripts` section above defines four aliases, each with its own script command.

```
$ npm run watch
$ npm run build
$ npm run start
$ npm run test
```

`start` and `test` are special commands and may omit `run`.

```
$ npm start
$ npm test
```

To make the output of one operation the input of another, borrow the pipe from the Linux shell and connect the two.

```
"build-js": "browserify browser/main.js | uglifyjs -mc > static/bundle.js"
```

A more convenient style, though, is referencing other `npm run` commands.

```
"build": "npm run build-js && npm run build-css"
```

The line above runs `npm run build-js` first and `npm run build-css` second, joined by `&&`. To run two commands in parallel, join them with `&` instead.

Below is a stream-processing example.

```
"devDependencies": {
  "autoprefixer": "latest",
  "cssmin": "latest"
},

"scripts": {
  "build:css": "autoprefixer -b 'last 2 versions' < assets/styles/main.css | cssmin > dist/main.css"
}
```

A command in the `scripts` property can also be written as a standalone bash script. Here is one.

```
#!/bin/bash

cd site/main
browserify browser/main.js | uglifyjs -mc > static/bundle.js
```

Assuming the script above is saved as build.sh and made executable, it can be referenced from the `scripts` property.

```
"build-js": "bin/build.sh"
```

### Arguments

`npm run` can also pass arguments along.

```
"scripts": {
  "test": "mocha test/"
}
```

The configuration above makes `npm test` actually run `mocha test/`. To pass extra arguments through `npm test` on to mocha, put two dashes before them.

```
$ npm run test -- anothertest.js
# equivalent to
$ mocha test/ anothertest.js
```

The command above makes mocha run every test script in the `test` subdirectory plus the extra test script `anothertest.js`.

`npm run` itself accepts a `-s` parameter that silences npm's own output, printing only what the script produces.

```
// with npm's own header output
$ npm run test

// without npm's header output
$ npm run -s test
```

### Best practices for scripts commands

There are some best practices for `scripts` commands that make development easier. First, install the `npm-run-all` module, which runs multiple `scripts` commands.

```
# sequentially
$ npm-run-all build:html build:js
# equivalent to
$ npm run build:html && npm run build:js

# in parallel
$ npm-run-all --parallel watch:html watch:js
# equivalent to
$ npm run watch:html & npm run watch:js

# mixed
$ npm-run-all clean lint --parallel watch:html watch:js
# equivalent to
$ npm-run-all clean lint
$ npm-run-all --parallel watch:html watch:js

# wildcard
$ npm-run-all --parallel watch:*
```

(1) The start command

The `start` script starts the application.

```
"start": "npm-run-all --parallel dev serve"
```

The command above runs the `dev` and `serve` scripts in parallel, equivalent to the following.

```
$ npm run dev & npm run serve
```

If no start script is configured, `npm start` defaults to the command below, provided a server.js file exists in the module's root directory.

```
$ node server.js
```

(2) The dev command

The `dev` script defines the processing needed during development, such as building web assets.

```
"dev": "npm-run-all dev:*"
```

The command above runs all `dev` subcommands in sequence.

```
"predev:sass": "node-sass --source-map src/css/hoodie.css.map --output-style nested src/sass/base.scss src/css/hoodie.css"
```

The command above compiles the sass file to css and generates a source map.

```
"dev:sass": "node-sass --source-map src/css/hoodie.css.map --watch --output-style nested src/sass/base.scss src/css/hoodie.css"
```

The command above watches the sass file and recompiles it to css on every change.

```
"dev:autoprefix": "postcss --use autoprefixer --autoprefixer.browsers \"> 5%\" --output src/css/hoodie.css src/css/hoodie.css"
```

The command above adds browser prefixes to the css file, restricted to browsers with more than 5% market share.

(3) The serve command

The `serve` script starts a service.

```
"serve": "live-server dist/ --port=9090"
```

The command above starts a service on port 9090 using the live-server module, serving the `dist` subdirectory.

The `live-server` module does three things:

* It starts an HTTP server that serves the `index.html` of the given directory, through which the page's network resources load; the `file://` protocol cannot do that.
* It adds live reload: whenever any file in the directory changes, the page refreshes.
* It opens the browser automatically once `npm run serve` executes.

Previously those three tasks required three modules, `http-server`, `live-reload`, and `opener`; now `live-server` alone suffices.

(4) The test command

The `test` script runs tests.

```
"test": "npm-run-all test:*",
"test:lint": "sass-lint --verbose --config .sass-lint.yml src/sass/*"
```

The configuration above makes running the tests also run the `lint` script, checking the source for syntax errors.

(5) The prod command

The `prod` script defines the processing needed when entering production.

```
"prod": "npm-run-all prod:*",
"prod:sass": "node-sass --output-style compressed src/sass/base.scss src/css/prod/hoodie.min.css",
"prod:autoprefix": "postcss --use autoprefixer --autoprefixer.browsers \"> 5%\" --output src/css/prod/hoodie.min.css src/css/prod/hoodie.min.css"
```

The commands above compile the sass file to css and add browser prefixes.

(6) The help command

The `help` script shows help information.

```
"help": "markdown-chalk --input DEVELOPMENT.md"
```

In the command above, the `markdown-chalk` module renders the given markdown file as colored text in the terminal.

(7) The docs command

The `docs` script generates documentation.

```
"docs": "kss-node --source src/sass --homepage ../../styleguide.md"
```

The command above uses the `kss-node` module to generate markdown-formatted documentation from source comments.

### pre- and post- scripts

`npm run` provides a `pre-` and a `post-` hook for every command. Take `npm run lint`: before executing it, npm checks whether `prelint` and `postlint` hooks are defined; if so, it first runs `npm run prelint`, then `npm run lint`, and finally `npm run postlint`.

```
{
  "name": "myproject",
  "devDependencies": {
    "eslint": "latest",
    "karma": "latest"
  },
  "scripts": {
    "lint": "eslint --cache --ext .js --ext .jsx src",
    "test": "karma start --log-level=error karma.config.js --single-run=true",
    "pretest": "npm run lint",
    "posttest": "echo 'Finished running tests'"
  }
}
```

The `package.json` above makes `npm test` execute the corresponding commands in this order:

* `pretest`
* `test`
* `posttest`

If a step fails, the scripts after it do not run: if the pretest script errors, the test and posttest scripts are skipped.

```
{
  "test": "karma start",
  "test:lint": "eslint . --ext .js --ext .jsx",
  "pretest": "npm run test:lint"
}
```

In the code above, before `npm run test` executes, the code is checked automatically, that is, `npm run test:lint` runs first.

Here are some common `pre-` and `post-` scripts.

* `prepublish`: runs before a module is published.
* `postpublish`: runs after a module is published.
* `preinstall`: runs before the user's `npm install`.
* `postinstall`: runs after the user's `npm install` finishes; typically used to compile the downloaded source into the form the user needs, for instance when a module must be built together with local C++ modules on the user's machine.
* `preuninstall`: runs before a module is uninstalled.
* `postuninstall`: runs after a module is uninstalled.
* `preversion`: runs before the module version changes.
* `postversion`: runs after the module version changes.
* `pretest` / `posttest`: run before and after `npm test`.
* `prestop` / `poststop`: run before and after `npm stop`.
* `prestart` / `poststart`: run before and after `npm start`.
* `prerestart` / `postrestart`: run before and after `npm restart`.

For the last one, `npm restart`: if no `restart` script is set, `prerestart` and `postrestart` wrap the stop and start scripts, which then run in sequence.

Also, you cannot stack another `pre` onto a `pre` script: a `prepretest` script has no effect.

Note that even though npm runs `pre` and `post` scripts automatically, they can also be executed by hand.

```
$ npm run prepublish
```

Here is a `postinstall` example.

```
{
  "postinstall": "node lib/post_install.js"
}
```

The command above is mainly for processing source pulled from a Git repository; for example, source written in TypeScript may need converting.

Here is an example of the `publish` hooks.

```
{
  "dist:modules": "babel ./src --out-dir ./dist-modules",
  "gh-pages": "webpack",
  "gh-pages:deploy": "gh-pages -d gh-pages",
  "prepublish": "npm run dist:modules",
  "postpublish": "npm run gh-pages && npm run gh-pages:deploy"
}
```

With the configuration above, publishing first compiles with Babel, then builds with Webpack, and finally deploys to GitHub Pages.

The hooks above belong to npm operations; installing certain modules also enables Git-related hooks. Take the husky module as an example.

```
$ npm install husky --save-dev
```

Once installed, you can add `precommit`, `prepush`, and similar hooks to `package.json`.

```
{
  "scripts": {
    "lint": "eslint yourJsFiles.js",
    "precommit": "npm run test && npm run lint",
    "prepush": "npm run test && npm run lint",
    "...": "..."
  }
}
```

Modules with a similar purpose include `pre-commit` and `precommit-hook`.

### Internal variables

Inside the scripts field you can use internal variables, chiefly the fields of package.json itself.

For example, if package.json contains `{"name":"foo", "version":"1.2.5"}`, then the variable `npm_package_name` has the value foo and the variable `npm_package_version` has the value 1.2.5.

```
{
  "scripts": {
    "bundle": "mkdir -p build/$npm_package_version/"
  }
}
```

Running `npm run bundle` creates the `build/1.2.5/` subdirectory.

The `config` field can also define internal values.

```
"name": "fooproject",
"config": {
  "reporter": "xunit"
},
"scripts": {
  "test": "mocha test/ --reporter $npm_package_config_reporter"
}
```

In the code above, the variable `npm_package_config_reporter` corresponds to the `reporter` config value.
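These `npm_package_*` values are exported as environment variables of the Shell that `npm run` spawns, so a Node script launched from `scripts` can read them as well. A minimal sketch (assuming the `{"name":"foo", "version":"1.2.5"}` package.json above and a hypothetical script entry `"vars": "node vars.js"`):

```
// vars.js (hypothetical), run with: npm run vars
console.log(process.env.npm_package_name);    // foo
console.log(process.env.npm_package_version); // 1.2.5
```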
### Wildcards

npm's wildcard rules are as follows.

* `*` matches 0 or more characters
* `?` matches 1 character
* `[...]` matches a character in the given range; if the first character of the range is `!` or `^`, it matches characters outside the range
* `!(pattern|pattern|pattern)` matches anything that matches none of the given patterns
* `?(pattern|pattern|pattern)` matches 0 or 1 occurrence of the given patterns
* `+(pattern|pattern|pattern)` matches 1 or more occurrences of the given patterns
* `*(a|b|c)` matches 0 or more occurrences of the given patterns
* `@(pattern|pat*|pat?erN)` matches exactly one of the given patterns
* `**`, in a path position, matches 0 or more subdirectories.

## npm link

While developing an npm module you often want to try it as you go; for instance, a local `require('myModule')` should load the module you are working on. Node requires a module to be installed into the global or the project's `node_modules` directory before it can be used. For a module under development, the solution is a symbolic link inside the global `node_modules` directory, pointing at the module's local directory.

`npm link` does exactly that: it creates the symlink automatically.

Imagine this scenario: you are developing a module `myModule` in `src/myModule`, and your own project `myProject` in `src/myProject` needs it. First, run `npm link` inside the module directory (`src/myModule`).

```
src/myModule$ npm link
```

The command creates a symlink in npm's global module directory, named after the module name given in `package.json`.

```
/path/to/global/node_modules/myModule -> src/myModule
```

At this point `myModule` can already be used globally. But to have the module installed inside the project, one more step is needed.

Switch to the project directory and run `npm link` again, this time naming the module.

```
src/myProject$ npm link myModule
```

The command above is equivalent to creating a local symlink to the module.

```
src/myProject/node_modules/myModule -> /path/to/global/node_modules/myModule
```

The project can then load the module, and any change to `myModule` is reflected in `myProject` immediately. The risk is the mirror image: anything done to `myModule` from inside the `myProject` directory modifies the module's source as well.

If your project no longer needs the module, remove the symlink with `npm unlink` inside the project directory.

```
src/myProject$ npm unlink myModule
```

## npm bin

`npm bin` prints the directory, relative to the current directory, that holds the executable scripts of Node modules (the `.bin` directory).

```
# run in the project root
$ npm bin
./node_modules/.bin
```

## npm adduser

`npm adduser` registers a user on npmjs.com.

```
$ npm adduser
Username: YOUR_USER_NAME
Password: YOUR_PASSWORD
Email: <EMAIL>
```

## npm publish

`npm publish` publishes the current module to `npmjs.com`. Before running it, you need to request an account there.

```
$ npm adduser
```

If you are already registered, log in instead.

```
$ npm login
```

Once logged in, publish with `npm publish`.

```
$ npm publish
```

If the current module is a beta, say `1.3.1-beta.3`, publish it under a tag with the `tag` parameter; the default publish tag is `latest`.

```
$ npm publish --tag beta
```

To publish a private module, initialize the module with the `scope` parameter; only paying npm users can publish private modules.

```
$ npm init --scope=<yourscope>
```

If your module is written in ES6, it is best to convert it to ES5 when publishing. First, install Babel.

```
$ npm install --save-dev babel-cli@6 babel-preset-es2015@6
```

Then add a `build` script to `package.json`.

```
"scripts": {
  "build": "babel source --presets babel-preset-es2015 --out-dir distribution",
  "prepublish": "npm run build"
}
```

Running this script converts the ES6 source files in the `source` directory into ES5 source files in the `distribution` directory. Then create two files in the project root, `.npmignore` and `.gitignore`, with the following contents.

```
// .npmignore
source

// .gitignore
node_modules
distribution
```

## npm deprecate

To deprecate certain versions of a module, use `npm deprecate`.

```
$ npm deprecate my-thing@"< 0.2.3" "critical bug fixed in v0.2.3"
```

After the command above, a warning line is written into the `package.json` of every version below `0.2.3`; users installing those versions see the warning on the command line.

## npm owner

A module's maintainers can publish new versions. `npm owner` manages a module's maintainers.

```
# list a module's maintainers
$ npm owner ls <package name>

# add a maintainer
$ npm owner add <user> <package name>

# remove a maintainer
$ npm owner rm <user> <package name>
```

## Other commands

### npm home, npm repo

`npm home` opens a module's homepage; `npm repo` opens the module's code repository.

```
$ npm home $package
$ npm repo $package
```

Neither command requires the module to be installed first.

### npm outdated

`npm outdated` checks whether any module the current project depends on has a newer version.

```
$ npm outdated
```

It prints the current version, the wanted version, and the latest published version.

### npm prune

`npm prune` checks the current project's `node_modules` directory for modules that `package.json` does not mention, and removes them.

```
$ npm prune
```

### npm shrinkwrap

`npm shrinkwrap` locks down the versions of the current project's dependencies.

```
$ npm shrinkwrap
```

Running the command generates an `npm-shrinkwrap.json` file in the project root, listing every module installed under `node_modules` together with its exact version.

The next time `npm install` runs and finds `npm-shrinkwrap.json` in the current directory, it installs only the modules listed there, at the same versions.

# os module

The os module provides methods related to the operating system.

## API

### os.EOL

`os.EOL` is a constant holding the current operating system's line terminator (`\r\n` on Windows, `\n` on other systems).

```
const fs = require('fs');

// bad
fs.readFile('./myFile.txt', 'utf8', (err, data) => {
  data.split('\r\n').forEach(line => {
    // do something
  });
});

// good
const os = require('os');
fs.readFile('./myFile.txt', 'utf8', (err, data) => {
  data.split(os.EOL).forEach(line => {
    // do something
  });
});
```

### os.arch()

`os.arch` returns the current computer's architecture.

```
require('os').arch()
// "x64"
```

### os.tmpdir()

`os.tmpdir` returns the operating system's default directory for temporary files.

## Socket communication

The example below lists all IP addresses of the current system.

```
var os = require('os');
var interfaces = os.networkInterfaces();

for (item in interfaces) {
  console.log('Network interface name: ' + item);
  for (att in interfaces[item]) {
    var address = interfaces[item][att];

    console.log('Family: ' + address.family);
    console.log('IP Address: ' + address.address);
    console.log('Is Internal: ' + address.internal);
    console.log('');
  }
  console.log('==================================');
}
```
# package.json

The root directory of a project usually contains a `package.json` file, which defines the modules the project needs as well as the project's configuration (name, version, license, and other metadata). The `npm install` command reads this file and downloads the required modules automatically, setting up the project's runtime and development environment.

Here is a minimal package.json, defining just two pieces of metadata: project name and project version.

```
{
  "name" : "xxx",
  "version" : "0.0.0"
}
```

The `package.json` file is a JSON object, each member of which is one setting of the current project. For example, `name` is the project name and `version` is the version, following the "major.minor.patch" format.

Here is a fuller package.json.

```
{
  "name": "<NAME>",
  "version": "0.0.1",
  "author": "张三",
  "description": "my first node.js program",
  "keywords": ["node.js", "javascript"],
  "repository": {
    "type": "git",
    "url": "https://path/to/url"
  },
  "license": "MIT",
  "engines": { "node": "0.10.x" },
  "bugs": { "url": "http://path/to/bug", "email": "<EMAIL>" },
  "contributors": [
    { "name": "李四", "email": "<EMAIL>" }
  ],
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "latest",
    "mongoose": "~3.8.3",
    "handlebars-runtime": "~1.0.12",
    "express3-handlebars": "~0.5.0",
    "MD5": "~1.2.0"
  },
  "devDependencies": {
    "bower": "~1.2.8",
    "grunt": "~0.4.1",
    "grunt-contrib-concat": "~0.3.0",
    "grunt-contrib-jshint": "~0.7.2",
    "grunt-contrib-uglify": "~0.2.7",
    "grunt-contrib-clean": "~0.5.0",
    "browserify": "2.36.1",
    "grunt-browserify": "~1.3.0"
  }
}
```

The fields of package.json are explained in detail below.

## The scripts field

`scripts` defines npm command-line shortcuts for script commands; for example, `start` defines what executes when you run `npm run start`.

The settings below define the commands executed by `npm run preinstall`, `npm run postinstall`, `npm run start`, and `npm run test`.

```
"scripts": {
  "preinstall": "echo here it comes!",
  "postinstall": "echo there it goes!",
  "start": "node index.js",
  "test": "tap test/*.js"
}
```

## The dependencies and devDependencies fields

The `dependencies` field lists the modules the project needs at runtime; `devDependencies` lists the modules needed only during development.

Both point to an object whose members pair a module name with a version requirement, describing the dependency and its acceptable version range.

```
{
  "devDependencies": {
    "browserify": "~13.0.0",
    "karma-browserify": "~5.0.1"
  }
}
```

The version requirement can take several forms (see the sketch after this list):

* Exact version: e.g. `1.2.2`, following the "major.minor.patch" format; only that exact version is installed.
* Tilde + version: e.g. `~1.2.2` installs the latest 1.2.x (not below 1.2.2) but never 1.3.x; that is, installation changes neither the major nor the minor number.
* Caret + version: e.g. `^1.2.2` installs the latest 1.x.x (not below 1.2.2) but never 2.x.x; that is, installation does not change the major number. Note that when the major number is 0, the caret behaves like the tilde, because a pre-1.0 module is still in development and even a minor-version bump may break compatibility.
* `latest`: always install the newest version.
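As a sketch of how these forms differ (the module names and versions below are made up, and the comments are explanatory only; real JSON does not allow comments):

```
{
  "dependencies": {
    "a": "1.2.2",   // exactly 1.2.2
    "b": "~1.2.2",  // >=1.2.2 and <1.3.0
    "c": "^1.2.2",  // >=1.2.2 and <2.0.0
    "d": "^0.2.2",  // >=0.2.2 and <0.3.0 (major 0: behaves like the tilde)
    "e": "latest"   // whatever is newest at install time
  }
}
```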
package.json can be written by hand or generated automatically with the `npm init` command.

```
$ npm init
```

The command works interactively, asking the user a series of questions and then generating a basic package.json in the current directory. Of all the questions, only the project name (name) and project version (version) are required; everything else is optional.

With a package.json in place, running npm install installs the required modules into the current directory.

```
$ npm install
```

If a module is not yet in the `package.json` file, you can install it separately with a parameter that writes it into `package.json`.

```
$ npm install express --save
$ npm install express --save-dev
```

The commands above install the express module by itself; the `--save` parameter writes it into the `dependencies` property, `--save-dev` into the `devDependencies` property.

## peerDependencies

Sometimes your project and a module it depends on both depend on a third module, but on different versions. Say your project depends on module A and on version 1.0 of module B, while A itself depends on version 2.0 of B.

In most cases that is not a problem: the two versions of B can coexist and run side by side. There is one situation, though, where it breaks: when the dependency relationship is exposed to the user.

The typical scenario is plugins, for example when A is a plugin for B. The user installs B at version 1.0, but plugin A only works together with version 2.0 of B. If the user passes an instance of B 1.0 to A, there is a problem. So a mechanism is needed to warn the user at install time: if A and B are installed together, B must be version 2.0.

The `peerDependencies` field lets a plugin specify the required version of its host tool.

```
{
  "name": "chai-as-promised",
  "peerDependencies": {
    "chai": "1.x"
  }
}
```

The code above specifies that installing the `chai-as-promised` module requires the host program `chai` to be installed with it, at version `1.x`. If your project pins `chai` to version 2.0, an error is raised.

Note that from npm 3.0 on, `peerDependencies` are no longer installed by default.

## The bin field

The bin field maps internal command names to the locations of their executable files.

```
"bin": {
  "someTool": "./bin/someTool.js"
}
```

The code above maps the someTool command to the executable someTool.js in the bin subdirectory. Npm finds that file and creates a symlink to it under `node_modules/.bin/`; in this example, someTool.js gets the symlink `node_modules/.bin/someTool`. Since the `node_modules/.bin/` directory joins the system PATH at run time, scripts can invoke these commands directly, without a path, when running npm.

That allows shorthand like the following.

```
scripts: {
  start: './node_modules/.bin/someTool.js build'
}

// shorthand
scripts: {
  start: 'someTool build'
}
```

Every command in the `node_modules/.bin/` directory can be run in the form `npm run [command]`. Typing `npm run` on the command line and pressing tab shows all available commands.

## The main field

The `main` field specifies the module's entry file: `require('moduleName')` loads that file. It defaults to `index.js` in the module's root directory.

## The config field

The `config` field supplies values to the command-line environment, exposed to scripts as environment variables.

Here is a `package.json`.

```
{
  "name" : "foo",
  "config" : { "port" : "8080" },
  "scripts" : { "start" : "node server.js" }
}
```

The `server.js` script can then reference the `config` value.

```
http
  .createServer(...)
  .listen(process.env.npm_package_config_port)
```

When the user executes the `npm run start` command, the script receives the value.

```
$ npm run start
```

The user can change the value.

```
$ npm config set foo:port 80
```

## Other fields

### The browser field

browser specifies the version of the module intended for browser use. Browser bundlers such as Browserify use it to know which file to package.

```
"browser": {
  "tipso": "./node_modules/tipso/src/tipso.js"
},
```

### The engines field

The `engines` field states which platforms the module runs on, for example a particular Node version range or a browser.

```
{ "engines" : { "node" : ">=0.10.3 <0.12" } }
```

The field can also specify the applicable `npm` version.

```
{ "engines" : { "npm" : "~1.0.20" } }
```

### The man field

man specifies the location of the current module's man pages.

```
"man" :[ "./doc/calc.1" ]
```

### The preferGlobal field

preferGlobal is a boolean: when the user installs the module without the `--global` parameter, it decides whether to show a warning saying the module is meant to be installed globally.

### The style field

style tells browser-side packagers where the module's style files live; the style bundler parcelify uses it to know where the style files to package are.

```
"style": [
  "./node_modules/tipso/src/tipso.css"
]
```

# path module

## path.join()

The `path.join` method joins path segments. Its main value is that it uses the current system's path separator correctly: "/" on Unix, "\" on Windows.

```
var path = require('path');
path.join(mydir, "foo");
```

Assuming `mydir` holds a directory path, the code above returns the path `mydir/foo` on a Unix system.
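`path.join` also normalizes the joined result, collapsing `.` and `..` segments. A small example (output shown for a Unix system):

```
var path = require('path');

path.join('/foo', 'bar', 'baz/asdf', 'quux', '..');
// '/foo/bar/baz/asdf'
```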
## path.resolve()

The `path.resolve` method turns a relative path into an absolute one.

It accepts any number of arguments, each a path to enter in turn, until the last argument has been converted into an absolute path. If the arguments alone cannot produce an absolute path, the current working directory is used as the base. Except for the root directory, the returned value never ends with a trailing slash.

```
// signature
path.resolve([from ...], to)

// example
path.resolve('foo/bar', '/tmp/file/', '..', 'a/../subfile')
```

The example above behaves like the following commands.

```
$ cd foo/bar
$ cd /tmp/file/
$ cd ..
$ cd a/../subfile
$ pwd
```

More examples.

```
path.resolve('/foo/bar', './baz')
// '/foo/bar/baz'

path.resolve('/foo/bar', '/tmp/file/')
// '/tmp/file'

path.resolve('wwwroot', 'static_files/png/', '../gif/image.gif')
// if the current directory is /home/myself/node, this returns
// /home/myself/node/wwwroot/static_files/gif/image.gif
```

The method ignores non-string arguments.

## accessSync()

The `accessSync` method (from the fs module) checks access to a path synchronously, throwing if the check fails.

The code below can be used to test whether a directory exists.

```
var fs = require('fs');

function exists(pth, mode) {
  try {
    fs.accessSync(pth, mode);
    return true;
  } catch (e) {
    return false;
  }
}
```

## path.relative

The `path.relative` method takes two arguments, both of which should be absolute paths, and returns the relative path from the first to the second.

```
path.relative('/data/orandea/test/aaa', '/data/orandea/impl/bbb')
// '../../impl/bbb'
```

In the code above, starting from `/data/orandea/test/aaa` and following the relative path returned by `path.relative` arrives at `/data/orandea/impl/bbb`.

If the two arguments are identical, `path.relative` returns an empty string.

## path.parse()

The `path.parse()` method returns information about the parts of a path.

```
var myFilePath = '/someDir/someFile.json';

path.parse(myFilePath).base // "someFile.json"
path.parse(myFilePath).name // "someFile"
path.parse(myFilePath).ext  // ".json"
```

# process object

The `process` object is a Node global that provides information about the current Node process. It can be used anywhere in a script without being loaded via `require`. The object implements the `EventEmitter` interface.

## Properties

`process` offers a set of properties returning system information.

* `process.argv`: an array of all command-line arguments of the current process.
* `process.env`: an object whose members are the current Shell's environment variables, e.g. `process.env.HOME`.
* `process.installPrefix`: a string, the prefix of the Node installation path, such as `/usr/local`; the Node executable then lives in `/usr/local/bin/node`.
* `process.pid`: a number, the current process ID.
* `process.platform`: a string naming the current operating system, such as `Linux`.
* `process.title`: a string, defaulting to `node`; it can be customized.
* `process.version`: a string, the Node version in use, such as `v7.10.0`.

`process` also has properties pointing at interfaces the Shell provides.

### process.stdout

The `process.stdout` property returns an object representing standard output. Its `write` method is equivalent to `console.log` and can be used to show content to the user on standard output.

```
console.log = function(d) {
  process.stdout.write(d + '\n');
};
```

The code below pipes a file to standard output.

```
var fs = require('fs');

fs.createReadStream('wow.txt')
  .pipe(process.stdout);
```

In the code above, since `process.stdout` and `process.stdin` communicate with other processes as streams, the data must pass through the `pipe` method.

### process.stdin

`process.stdin` returns an object representing standard input.

```
process.stdin.pipe(process.stdout)
```

The code above pipes standard input to standard output.

Because stdin and stdout both implement the stream interface, the stream methods are available on them.

```
process.stdin.setEncoding('utf8');

process.stdin.on('readable', function() {
  var chunk = process.stdin.read();
  if (chunk !== null) {
    process.stdout.write('data: ' + chunk);
  }
});

process.stdin.on('end', function() {
  process.stdout.write('end');
});
```

### process.stderr

The `process.stderr` property points at standard error.

### process.argv, process.execPath, process.execArgv

The `process.argv` property returns an array of the parts of the command line that executed the script. Its first member is always `node`, the second is the script filename, and the remaining members are the script's arguments.

Consider a new script file `argv.js`.

```
// argv.js
console.log("argv: ", process.argv);
```

Invoking the script on the command line gives the following result.

```
$ node argv.js a b c
[ 'node', '/path/to/argv.js', 'a', 'b', 'c' ]
```

So `argv` returns the command line piece by piece, and the real arguments actually start at `process.argv[2]`. To get just the arguments, rewrite `argv.js` like this.

```
// argv.js
var myArgs = process.argv.slice(2);
console.log(myArgs);
```

The `process.execPath` property returns the absolute path of the Node binary executing the current script.

```
> process.execPath
'/usr/local/bin/node'
>
```

The `process.execArgv` property returns an array of the command-line arguments that sat between the Node executable and the script file.

```
# script.js contains
# console.log(process.execArgv);
$ node --harmony script.js --version
# prints [ '--harmony' ]
```

### process.env

The `process.env` property returns an object containing all environment variables of the current Shell. For example, `process.env.HOME` returns the user's home directory.

A common practice is to create an environment variable `NODE_ENV` identifying the current stage, set to `production` in production and to `develop` or `staging` in development, and then read `process.env.NODE_ENV` inside the script, as sketched below.

To change the environment variable when running a script, use either form below.

```
$ export NODE_ENV=production && node app.js
# or
$ NODE_ENV=production node app.js
```
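A minimal sketch of the read side, following the `NODE_ENV` convention just described:

```
// app.js: branch on the stage the process was started in
if (process.env.NODE_ENV === 'production') {
  console.log('running in production mode');
} else {
  console.log('running in development mode');
}
```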
## Methods

The `process` object provides the following methods:

* `process.chdir()`: switches the working directory to the given directory.
* `process.cwd()`: returns the path of the working directory the current script runs in.
* `process.exit()`: exits the current process.
* `process.getgid()`: returns the current process's group ID (numeric).
* `process.getuid()`: returns the current process's user ID (numeric).
* `process.nextTick()`: schedules a callback at the end of the current execution stack, before the next Event Loop turn.
* `process.on()`: listens for events.
* `process.setgid()`: sets the current process's group, by numeric ID or string ID.
* `process.setuid()`: sets the current process's user, by numeric ID or string ID.

### process.cwd(), process.chdir()

The `cwd` method returns the process's current directory (an absolute path); the `chdir` method switches directories.

```
> process.cwd()
'/home/aaa'

> process.chdir('/home/bbb')
> process.cwd()
'/home/bbb'
```

Note the difference between `process.cwd()` and `__dirname`: the former is where the process was launched; the latter is where the script lives, and the two may differ. For example, with `node ./code/program.js`, `process.cwd()` returns the current directory (`.`), while `__dirname` returns the script's directory, `./code`.

### process.nextTick()

`process.nextTick` puts a task at the tail of the current round of the Event Loop.

```
process.nextTick(function () {
  console.log('the next round of the Event Loop is about to start!');
});
```

The code above could be rewritten with `setTimeout(f, 0)`, with a similar effect but a different mechanism.

```
setTimeout(function () {
  console.log('already in the next round of the Event Loop!');
}, 0)
```

`setTimeout(f, 0)` puts its task at the head of the next round of the Event Loop, so `nextTick` callbacks run before it. Besides, `nextTick` is more efficient, because it does not need to check whether the specified time has arrived.

According to Node's Event Loop implementation, the execution order after entering the next round of the Event Loop is, basically:

* `setTimeout(f, 0)`
* the various expired callbacks
* `process.nextTick`
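The relative order is easy to verify with a short script; consistent with the rules above, the `nextTick` callback fires before the `setTimeout(f, 0)` callback:

```
setTimeout(function () {
  console.log('setTimeout 0');
}, 0);

process.nextTick(function () {
  console.log('nextTick');
});

console.log('sync code');

// Output:
// sync code
// nextTick
// setTimeout 0
```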
### process.exit()

The `process.exit` method exits the current process. It accepts a numeric argument: greater than 0 means execution failed; 0 means it succeeded.

```
if (err) {
  process.exit(1);
} else {
  process.exit(0);
}
```

Without an argument, the `exit` method defaults to 0.

Note that `process.exit()` is often unnecessary. If there is no error, Node exits on its own once the event loop has no pending tasks, so `process.exit(0)` is not needed; calling it makes the process quit immediately, regardless of asynchronous tasks still running, so it is better to let Node exit naturally. On the other hand, when an error occurs Node usually exits the process anyway, so `process.exit(1)` is not always needed either.

```
function printUsageToStdout() {
  process.stdout.write('...some long text ...');
}

if (true) {
  printUsageToStdout();
  process.exit(1);
}
```

The code above may not behave as expected, because `process.stdout` can at times be asynchronous and cannot guarantee all output lands within the current event loop turn, while `process.exit` terminates the process immediately.

A safer approach is to set the `exitCode` property to specify the exit status, then throw an error.

```
if (true) {
  printUsageToStdout();
  process.exitCode = 1;
  throw new Error("xx condition failed");
}
```

When `process.exit()` executes, it triggers the `exit` event.

### process.on()

The `process` object implements the EventEmitter interface, so the `on` method can listen for various events and register callbacks.

```
process.on('uncaughtException', function(err){
  console.error('got an error: %s', err.message);
  process.exit(1);
});

setTimeout(function(){
  throw new Error('fail');
}, 100);
```

The code above listens for Node's global `uncaughtException` event, which fires whenever an error goes uncaught.

Other events `process` supports include:

* `data`: fires when data is written or read.
* `SIGINT`: fires when the system signal `SIGINT` is received, chiefly when the user presses `Ctrl + C`.
* `SIGTERM`: fires when the system sends the `SIGTERM` termination signal.
* `exit`: fires before the process exits.

```
process.on('SIGINT', function () {
  console.log('Got a SIGINT. Goodbye cruel world');
  process.exit(0);
});

// the signal can also be ignored
process.on('SIGINT', function() {
  console.log("Ignored Ctrl-C");
});
```

To try it, send the signal to the process, which causes it to exit.

```
$ kill -s SIGINT [process_id]
```

The `SIGTERM` signal means the kernel asks the current process to stop; the process may stop itself or ignore the signal.

```
var http = require('http');

var server = http.createServer(function (req, res) {
  // ...
});

process.on('SIGTERM', function () {
  server.close(function () {
    process.exit(0);
  });
});
```

The code above closes the server upon receiving `SIGTERM` and then exits. Note that the process does not die instantly: it answers the last pending request and runs all remaining callbacks first, then exits.

The `exit` event fires just before the Node process exits.

```
process.on('exit', function() {
  console.log('Goodbye');
});
```

### process.kill()

The `process.kill` method sends a signal to the process with the given ID; the default signal is `SIGTERM`.

```
process.kill(process.pid, 'SIGTERM');
```

The line above kills the current process.

```
process.on('SIGTERM', function(){
  console.log('terminating');
  process.exit(1);
});

setTimeout(function(){
  console.log('sending SIGTERM to process %d', process.pid);
  process.kill(process.pid, 'SIGTERM');
}, 500);

setTimeout(function(){
  console.log('never called');
}, 1000);
```

In the code above, `SIGTERM` is sent to the current process after 500 milliseconds, terminating it, so the event scheduled for 1000 milliseconds never fires.

## Events

### The exit event

When the current process exits, the `exit` event fires; a callback can be registered for it.

```
process.on('exit', function () {
  fs.writeFileSync('/tmp/myfile', 'information that must be saved to disk');
});
```

Here is an example that prints a log line when the process exits.

```
process.on("exit", code => console.log("exiting with code: " + code))
```

Note that the callback may only perform synchronous operations, not asynchronous ones: once the callback returns, the process exits, and the result of any asynchronous operation could never be observed.

```
process.on('exit', function(code) {
  // will not run
  setTimeout(function() {
    console.log('This will not run');
  }, 0);
});
```

The code above schedules work for a later event-loop turn inside the `exit` callback; that is ineffective and never executes.

### The beforeExit event

The beforeExit event fires once Node has emptied the Event Loop and no task remains pending. Normally, Node would simply exit in that situation; registering a beforeExit listener provides an opportunity to schedule more work and keep the Node process alive.

The main difference between beforeExit and exit is that beforeExit listeners may schedule asynchronous tasks, while exit listeners may not.

Also, beforeExit does not fire when the program terminates explicitly (for example via process.exit()) or exits because of an uncaught error, so it cannot be used as a replacement for the exit event.

### The uncaughtException event

When the current process throws an error that is not caught, the `uncaughtException` event fires.

```
process.on('uncaughtException', function (err) {
  console.error('An uncaught error occurred!');
  console.error(err.stack);
  throw new Error('uncaught error');
});
```

Registering an `uncaughtException` listener is the last resort for saving the Node process from termination; otherwise Node runs `process.exit()`. For debugging purposes, keeping the process alive after an error is not recommended.

Asynchronous operations scheduled before the error was thrown still continue to run; only once they finish does the Node process exit.

```
process.on('uncaughtException', function(err) {
  console.log('Caught exception: ' + err);
});

setTimeout(function() {
  console.log('this line still runs');
}, 500);

// the expression below throws
nonexistentFunc();
```

In the code above, after the error is thrown, the callback previously scheduled with setTimeout still executes.

### Signal events

When the operating system kernel sends a signal to the Node process, a signal event fires. In practice you mostly register listeners for the SIGTERM and SIGINT signals; on non-Windows platforms these cause the process to exit, but once a listener is registered, the Node process no longer exits when the signal arrives.

```
// read standard input, mainly to keep the process from exiting
process.stdin.resume();

process.on('SIGINT', function() {
  console.log('Got SIGINT; press Control-D to exit');
});
```

The code above registers a SIGINT listener, so pressing Ctrl-C shows the prompt instead of quitting.

## Exit codes

When a process exits, it returns an integer describing its final state: the exit code. Common Node exit codes include:

* 0: normal exit
* 1: an uncaught fatal error occurred
* 5: V8 execution error
* 8: incorrect arguments
* 128 + signal value: if Node receives an exit signal (such as SIGKILL or SIGHUP), the exit code is 128 plus the signal value; since 128 in binary is 10000000, the low seven bits of the exit code hold the signal value.

Bash exposes the previous command's exit code through the `$?` environment variable.

```
$ node nonexist.js
Error: Cannot find 'nonexist.js'

$ echo $?
1
```
In the code above, Node ran a nonexistent script file and errored, so the exit code is 1.

# querystring module

The `querystring` module is mainly used to parse query strings.

## querystring.parse()

The `querystring.parse()` method parses a query string into a JavaScript object.

```
var querystring = require('querystring');

var str = 'foo=bar&abc=xyz&abc=123';

querystring.parse(str)
// { foo: 'bar', abc: [ 'xyz', '123' ] }
```

The `parse` method takes up to four arguments.

```
querystring.parse(str[, sep[, eq[, options]]])
```

* `str`: the query string to parse
* `sep`: the separator between key/value pairs, `&` by default
* `eq`: the separator between a key and its value, `=` by default
* `options`: a configuration object with two properties: `decodeURIComponent`, a function that decodes encoded strings, `querystring.unescape()` by default; and `maxKeys`, the maximum number of keys to parse, `1000` by default, with `0` meaning no limit.

The earlier example omitted the last three arguments; a full call looks like this (here `gbkDecodeURIComponent` stands for a user-supplied decoding function).

```
querystring.parse(
  'w=%D6%D0%CE%C4&foo=bar',
  null,
  null,
  { decodeURIComponent: gbkDecodeURIComponent }
)
```

The `parse` method can also dissect ordinary strings.

```
var str = 'name:Sophie;shape:fox;condition:new';

querystring.parse(str, ';', ':')
// {
//   name: 'Sophie',
//   shape: 'fox',
//   condition: 'new',
// }
```
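As the natural complement of `parse`, the module also offers `querystring.stringify`, which serializes an object back into a query string; a small sketch:

```
var querystring = require('querystring');

querystring.stringify({ foo: 'bar', abc: ['xyz', '123'] })
// 'foo=bar&abc=xyz&abc=123'
```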
# Stream interface

Reading and writing data can be seen as a special case of the event pattern (Event): a sequence of arriving data chunks is much like a sequence of events. Reading data is a `read` event, writing data is a `write` event, and the data chunks are the information attached to the events. For these situations Node provides a dedicated interface, `Stream`.

### Concepts

A "stream" is a way of handling the system's buffers. The operating system reads data in chunks, storing each received chunk in a buffer. A Node application has two ways of dealing with the buffered data. The first waits until all data has been received and then reads the buffer at once; this is the traditional way of reading a file. The second uses a "stream": each chunk is consumed as it arrives, so processing begins before the transfer has finished.

The first way loads all data into memory before processing; it is intuitive and the flow is natural, but with a large file it takes a long time before the processing step can even begin. The second way reads one small piece at a time, like flowing water: each piece the system reads triggers an event, signaling a "new chunk". By listening for that event, the application can follow the progress of the read and react accordingly, which improves performance.

```
var fs = require('fs');

fs
  .createReadStream('./data/customers.csv')
  .pipe(process.stdout);
```

In the code above, `fs.createReadStream` reads the file as a stream, so output to standard output can begin before the file has been fully read, which clearly benefits large files.

Unix has had the concept of streams for a very long time, as a way of passing data between processes; the pipe command connects streams between commands. A stream effectively splits a large payload into many small parts; any command that implements the stream interface can connect one stream's output to another stream's input. Node adopted the concept: the stream interface provides a unified interface for asynchronous reads and writes, usable for disk data, network data, or memory data alike.

The defining feature of the stream interface is communication through events. It has `readable`, `writable`, `drain`, `data`, `end`, `close`, and other events, and supports both reading and writing. Each chunk read (or written) triggers a `data` event; when everything has been read (or written), the `end` event fires; on error, the `error` event fires.

Any object that implements the stream interface can be read from or written to. Many IO-related objects inside Node implement the Stream interface, among them:

* file reading and writing
* reading and writing of HTTP requests
* TCP connections
* standard input and output

## Readable streams

The Stream interface comes in three kinds:

* the readable stream interface, which provides data;
* the writable stream interface, which accepts data;
* the duplex stream interface, which reads and writes; Node's tcp sockets, zlib, and crypto all implement it.

A "readable stream" produces data. It represents a data source: whenever an object offers a readable stream, you can read data from it.

```
var Readable = require('stream').Readable;

var rs = new Readable();
rs.push('beep ');
rs.push('boop\n');
rs.push(null);

rs.pipe(process.stdout);
```

The code above creates a readable stream and finally pipes it to standard output. The `push` method feeds data into the stream's buffer; `rs.push(null)` tells rs that the input has ended.

"Readable streams" have two states: flowing and paused. In the flowing state, data is delivered from the source to the program as fast as possible; in the paused state, the stream must be asked explicitly, for example via `stream.read()`, before it releases data. A freshly created readable stream is paused.

Three things switch a paused stream into the flowing state:

* adding a listener for the data event
* calling the resume method
* calling the pipe method to send the data to a writable stream

If a stream switches to flowing with no data listener and no pipe destination, the data is lost.

Two things switch a flowing stream back into the paused state:

* with no pipe destinations, calling the pause method
* with pipe destinations, removing all data listeners and calling the unpipe method to remove all destinations

Note that merely removing the data listeners does not automatically pause the stream. Also, with pipe destinations present, calling pause does not guarantee the stream stays paused: once the destinations ask for data, the stream may continue providing it.

Whenever the system has new data, the interface fires a data event, invoking the callback.

```
var fs = require('fs');

var readableStream = fs.createReadStream('file.txt'); // any file
var data = '';

readableStream.on('data', function(chunk) {
  data += chunk;
});

readableStream.on('end', function() {
  console.log(data);
});
```

In the code above, fs's createReadStream is a file-reading method implementing the Stream interface. It returns an object for the specified file; by listening for that object's data event, the callback gets to read the data.

Besides the data event, listening for the readable event also reads the data.

```
var readableStream = fs.createReadStream('file.txt');
var data = '';

readableStream.on('readable', function() {
  var chunk;
  while ((chunk = readableStream.read()) !== null) {
    data += chunk;
  }
});

readableStream.on('end', function() {
  console.log(data)
});
```

The readable event signals that the system buffer holds readable data; use the read method to fetch it. When there is no data to read, read returns null.

Besides the read method, "readable streams" offer the following methods.

* `Readable.pause()`: pauses the stream. Existing data no longer triggers data events and remains in the buffer; the stream is then called static. Calling resume on it restarts the flow, but the data already in the buffer does not trigger data events again.
* `Readable.resume()`: resumes a paused stream.
* `readable.unpipe()`: removes a destination stream from the pipe. Called with an argument, it stops the readable stream from feeding that particular destination; called without arguments, it removes all destination streams.

### The readable property

A stream's `readable` property returns a boolean: `true` if the stream is a readable stream that is still open, otherwise `false`.

### read()

The read method fetches and returns data from the system buffer. If no data can be read, it returns null.

The method accepts an integer argument, the amount of data to read, and returns that much data; if not enough data is available, it returns null. Without the argument, it returns all the data in the system buffer.

The method only needs to be called manually in the paused state; in the flowing state it is called automatically until the system buffer is drained.

```
var readable = getReadableStreamSomehow();

readable.on('readable', function() {
  var chunk;
  while (null !== (chunk = readable.read())) {
    console.log('got %d bytes of data', chunk.length);
  }
});
```

If the method returns a data chunk, it has also triggered a data event.

### _read()

A readable stream's `_read` method feeds data into the readable stream.

```
var Readable = require('stream').Readable;
var rs = Readable();

var c = 97;
rs._read = function () {
  rs.push(String.fromCharCode(c++));
  if (c > 'z'.charCodeAt(0)) rs.push(null);
};

rs.pipe(process.stdout);
```

Running it produces the following.

```
$ node read1.js
abcdefghijklmnopqrstuvwxyz
```

### setEncoding()

Calling this method makes the stream return strings in the given encoding instead of the binary objects in the buffer. For example, calling `setEncoding('utf8')` makes the stream return UTF-8 strings; calling `setEncoding('hex')` makes it return hexadecimal strings.

`setEncoding`'s argument is a string-encoding name, such as `utf8`, `ascii`, or `base64`.

The method handles multi-byte characters correctly, which the buffer method `buf.toString(encoding)` does not; so to read strings from a stream, always use this method.

```
var readable = getReadableStreamSomehow();
readable.setEncoding('utf8');

readable.on('data', function(chunk) {
  assert.equal(typeof chunk, 'string');
  console.log('got %d characters of string data', chunk.length);
});
```

### resume()

The `resume` method makes a "readable stream" continue emitting `data` events, switching it into the flowing state.

```
// create a readable stream somehow
var readable = getReadableStreamSomehow();

readable.resume();
readable.on('end', function(chunk) {
  console.log('reached the end of the stream without reading any data');
});
```

In the code above, calling `resume` puts the stream into the flowing state; only the `end` listener is defined, with no `data` listener, meaning no data is read from the stream and only the arrival at its end is observed.

### pause()

The `pause` method stops a flowing stream from emitting `data` events, switching it into the paused state. Any data that could already be read stays in the system buffer.

```
// create a readable stream somehow
var readable = getReadableStreamSomehow();

readable.on('data', function(chunk) {
  console.log('read %d bytes of data', chunk.length);
  readable.pause();
  console.log('no reads for the next second');
  setTimeout(function() {
    console.log('resuming reads');
    readable.resume();
  }, 1000);
});
```

### isPaused()

The method returns a boolean indicating that the "readable stream" was manually paused by the client (pause was called) and resume has not been called since.

```
var readable = new stream.Readable

readable.isPaused() // === false
readable.pause()
readable.isPaused() // === true
readable.resume()
readable.isPaused() // === false
```

### pipe()

The pipe method is a mechanism for moving data automatically, like a pipeline: it reads all data from a "readable stream" and writes it out to the specified destination, fully automatically.

```
src.pipe(dst)
```

The pipe method must be called on a readable stream; its argument must be a writable stream.

```
var fs = require('fs');

var readableStream = fs.createReadStream('file1.txt');
var writableStream = fs.createWriteStream('file2.txt');

readableStream.pipe(writableStream);
```

The code above uses the pipe method to write file1's contents into file2. The whole transfer is managed by pipe, with no manual intervention, so data transfer can be written very concisely.

pipe returns the destination stream, so calls can be chained.

```
a.pipe(b).pipe(c).pipe(d)

// equivalent to
a.pipe(b);
b.pipe(c);
c.pipe(d);
```

```
var fs = require('fs');
var zlib = require('zlib');

fs.createReadStream('input.txt.gz')
  .pipe(zlib.createGunzip())
  .pipe(fs.createWriteStream('output.txt'));
```

The chained code above reads a file, decompresses it, and writes out the result.

The line below mimics the Unix cat command, piping standard input to standard output.

```
process.stdin.pipe(process.stdout);
```

When the source stream has been fully read, pipe calls the destination's end method by default, after which nothing more can be written. Passing `{ end: false }` as pipe's second argument keeps the destination stream open.

```
reader.pipe(writer, { end: false });

reader.on('end', function() {
  writer.end('Goodbye\n');
});
```

In the code above, the destination stream is no longer ended automatically; only a manual call ends it, so "Goodbye" can still be written.

### unpipe()

The method removes the destination given to the pipe method. Without an argument, it removes all pipe destinations; with an argument, it removes that destination; with an argument matching no destination, it has no effect.

```
var fs = require('fs');

var readable = getReadableStreamSomehow();
var writable = fs.createWriteStream('file.txt');

readable.pipe(writable);

setTimeout(function() {
  console.log('stop writing to file.txt');
  readable.unpipe(writable);
  console.log('close the file.txt write stream manually');
  writable.end();
}, 1000);
```
In the code above, writing into file.txt lasts only one second before it stops.

In the code below, `s` is a readable stream; it can listen for the following events.

```
s.on('data', f);  // a new chunk arrived: the data event fires and f() is called
s.on('end', f);   // all data has been read and no more will arrive: end fires and f() is called
s.on('error', f); // an error occurred: error fires and f() is called
s.readable        // => true if it is a readable stream that is still open
s.pause();        // pause "data" events, e.g. for throttling uploads
s.resume();       // resume again
```

(1) readable

The readable event fires when the stream is able to supply data to consumers.

```
var readable = getReadableStreamSomehow();
readable.on('readable', function() {
  // there is some data to read now
});
```

The examples this passage refers to read standard input inside a `readable` handler, first chunk by chunk and then in 3-byte units: the read method accepts an integer argument telling it how many bytes to read at a time. Both reads are sketched below.
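A minimal sketch of those two reads (standard input, first without a size argument, then in 3-byte units):

```
// read whatever is available in the buffer
process.stdin.on('readable', function () {
  var buf = process.stdin.read();
  console.dir(buf);
});
```

```
// read in units of 3 bytes
process.stdin.on('readable', function () {
  var buf = process.stdin.read(3);
  console.dir(buf);
});
```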
(2) data

For streams that have not been explicitly paused, adding a data listener switches the stream into the flowing state, supplying data as fast as possible.

```
var readable = getReadableStreamSomehow();
readable.on('data', function(chunk) {
  console.log('got %d bytes of data', chunk.length);
});
```

(3) end

The end event fires when no more data can be read, that is, only once the current data has been completely consumed, for example by calling the read method repeatedly.

```
var readable = getReadableStreamSomehow();
readable.on('data', function(chunk) {
  console.log('got %d bytes of data', chunk.length);
});
readable.on('end', function() {
  console.log('there will be no more data.');
});
```

(4) close

The close event fires when the data source is closed. Not all streams support this event.

(5) error

The error event fires when reading the data fails.

## Inheriting the readable stream interface

Readable streams come in two flavors. One is pull mode, where you fetch the data yourself, like drinking through a straw: the water only rises when you suck. The other is push mode, where the data is pushed to you, like water pouring from a tap. Listening for the `data` event activates push mode; reading from the stream yourself means you are using pull mode.

Any object can implement the readable stream interface.

```
var Readable = require('stream').Readable;
var util = require('util');

function MyObject(options) {
  if (! (this instanceof MyObject)) return new MyObject(options);
  if (! options) options = {};
  options.objectMode = true;
  Readable.call(this, options);
}

util.inherits(MyObject, Readable);

MyObject.prototype._read = function read() {
  var self = this;
  someMethodGetData(function(err, data) {
    if (err) self.emit('error', err);
    else self.push(data);
  });
};
```

In the code above, the constructor `MyObject` inherits the readable stream interface. `options.objectMode` is set to `true` so that the stream handles objects rather than strings or buffers. In addition, `MyObject.prototype` implements the `_read` method, which is called whenever the stream wants data; inside it we obtain the data and place it into the stream with `stream.push(data)`.

Instances of `MyObject` can then use the "readable stream" interface.

```
var myObj = new MyObject();

myObj.on('data', function(data) {
  console.log(data);
});
```

The above is push mode; below is pull mode.

```
var myObj = new MyObject();

var data = myObj.read();
```

`myObj` can also pause and resume reading.

```
myObj.pause();

setTimeout(function () {
  myObj.resume();
}, 5000);
```

### Example: the fs module's read stream

The `createReadStream` method of the `fs` module creates a stream that reads data.

```
var fs = require('fs');
var stream = fs.createReadStream('readme.txt');
stream.setEncoding('utf8');
```

The code above creates a stream over the text file `readme.txt`. Since this stream is treated as text, the `setEncoding` method sets the encoding.

Then, listen for the `data` event to receive each chunk, and the `end` event to process everything once the transfer ends.

```
var data = '';

stream.on('data', function(chunk) {
  data += chunk;
})

stream.on('end', function() {
  console.log('Data length: %d', data.length);
});
```

Listening for the `readable` event achieves the same effect as listening for `data`.

```
var data = '';

stream.on('readable', function() {
  var chunk;
  while(chunk = stream.read()) {
    data += chunk;
  }
});
```

Streams also have `pause` and `resume` methods to suspend and resume the transfer.

```
// pause
stream.pause();

// resume after one second
setTimeout(function () {
  stream.resume();
}, 1000);
```

Note that a freshly created stream starts in the paused state; data only starts flowing once a `data` event callback is set or `resume` is called.

To use the `readable` and `data` events together, write something like the following.

```
stream.pause();

var pulledData = '';
var pushedData = '';

stream.on('readable', function() {
  var chunk;
  while(chunk = stream.read()) {
    pulledData += chunk;
  }
});

stream.on('data', function(chunk) {
  pushedData += chunk;
});
```

In the code above, calling `pause` explicitly makes each `readable` event release a `data` event; otherwise the `data` listener would have no effect.

If handling the `data` and `end` events yourself feels tedious, the Stream interface also provides the `pipe` method, which manages both events automatically. Through the `pipe` method, a stream conveniently feeds any other object that has a Stream interface.

```
var fs = require('fs');
var zlib = require('zlib');

fs.createReadStream('wow.txt')
  .pipe(zlib.createGzip())
  .pipe(process.stdout);
```

The code above opens the text file `wow.txt`, compresses it, and sends it to standard output.

```
fs.createReadStream('wow.txt')
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream('wow.gz'));
```

The code above compresses the file `wow.txt` and writes it back out as a compressed file.

The code below creates a new Stream instance, defines callbacks for its write and end behavior, and then attaches it to standard input.

```
var stream = require('stream');
var Stream = stream.Stream;

var ws = new Stream;
ws.writable = true;

ws.write = function(data) {
  console.log("input=" + data);
}

ws.end = function(data) {
  console.log("bye");
}

process.stdin.pipe(ws);
```

Invoking the script above produces the following result.

```
$ node pipe_out.js
hello
input=hello
^d
bye
```

With the script running, typing `hello` prints `input=hello`; pressing `ctrl-d` then prints `bye`. A shell pipe shows it even more clearly.

```
$ echo hello | node pipe_out.js
input=hello
bye
```

## Writable streams

A "readable stream" releases data; a "writable stream" receives it. It lets you write data to a destination. It is an abstraction over data writing: once different destinations implement the interface, a single set of methods writes to all of them.

Some places where writable streams appear:

* HTTP requests on the client
* HTTP responses on the server
* fs write streams
* zlib streams
* crypto streams
* tcp sockets
* the stdin of a child process
* process.stdout and process.stderr

Calling `stream.write(o)` writes data into a writable stream.

```
stream.write(payload, callback)
```

An optional callback `callback` can be given; it is invoked once the buffered data (`payload`) has been released.

To implement a "writable stream", inherit from `stream.Writable` and implement the `stream._write` method. Below is an example where a database write interface implements the writable stream interface.

```
var Writable = require('stream').Writable;
var util = require('util');

module.exports = DatabaseWriteStream;

function DatabaseWriteStream(options) {
  if (! (this instanceof DatabaseWriteStream))
    return new DatabaseWriteStream(options);
  if (! options) options = {};
  options.objectMode = true;
  Writable.call(this, options);
}

util.inherits(DatabaseWriteStream, Writable);

DatabaseWriteStream.prototype._write = function write(doc, encoding, callback) {
  insertIntoDatabase(JSON.stringify(doc), callback);
};
```

In the code above, the `_write` method performs the actual write. It must accept three arguments:

* `chunk`: the data block to write
* `encoding`: the string encoding, when the data written is a string
* `callback`: the callback invoked when the write completes or an error occurs

Here is an example of its use.

```
var db = new DatabaseWriteStream();

var Thermometer = require('./thermometer');
var thermometer = Thermometer();

thermometer.on('data', function(temp) {
  db.write({when: Date.now(), temperature: temp});
});
```

Below is a writable stream example with the fs module.

```
var fs = require('fs');

var readableStream = fs.createReadStream('file1.txt');
var writableStream = fs.createWriteStream('file2.txt');

readableStream.on('data', function(chunk) {
  writableStream.write(chunk);
});
```

In the code above, fs's `createWriteStream` method creates a "writable stream" for a particular file, essentially implementing the `Stream` interface for write operations; the writable stream's `write` method then writes the data into the file.

### The writable property

The `writable` property returns a boolean: `true` if the stream is still open and writable, otherwise `false`.

```
s.writable
```

### write()

The `write` method writes data into a "writable stream". It takes two arguments: the content to write, which may be a string, a `stream` object (such as a readable stream), or a `buffer` object (binary data); and an optional callback invoked when the write completes.

```
s.write(buffer);          // write binary data
s.write(string, encoding) // write a string; the encoding defaults to utf-8
```

The `write` method returns a boolean indicating whether this chunk was handled immediately. If it returns `true`, new data can be written right away. If the data waiting to be written was buffered, it returns `false`, meaning new data cannot be written out at once; you may still keep passing in data, but it only accumulates in memory rather than being written. To avoid memory pressure, the better practice is to wait for the method to return `true` before writing again.

```
var fs = require('fs');
var ws = fs.createWriteStream('message.txt');

ws.write('beep ');

setTimeout(function () {
  ws.end('boop\n');
}, 1000);
```

After the end call in the code above, no more data is written.

### cork(), uncork()

The cork method forces data waiting to be written into the buffer; calling the uncork method or the end method flushes the buffered data out.

### setDefaultEncoding()

The setDefaultEncoding method sets the encoding the written data is encoded into. It returns a boolean reporting whether setting the encoding succeeded; false means it failed.

### end()

The `end` method finishes a "writable stream". It accepts three arguments, all optional: the final data to write, which may be a string, a `stream` object, or a `buffer` object; the write encoding; and a callback triggered when the `finish` event fires.

```
s.end()              // close the writable stream
s.end(buffer)        // write one final binary chunk, then close
s.end(str, encoding) // write one final string, then close
```

Writing more data after calling end raises an error.

(1) drain

Once `writable.write(chunk)` has returned `false`, the `drain` event fires when all the buffered data has finished writing and writing can continue: the buffer is empty again.

```
s.on('drain', f);
```

Here is an example.

```
function writeOneMillionTimes(writer, data, encoding, callback) {
  var i = 1000000;
  write();
  function write() {
    var ok = true;
    do {
      i -= 1;
      if (i === 0) {
        writer.write(data, encoding, callback);
      } else {
        ok = writer.write(data, encoding);
      }
    } while (i > 0 && ok);
    if (i > 0) {
      writer.once('drain', write);
    }
  }
}
```
The code above writes a chunk one million times, using the drain event to learn when writing may continue.

(2) finish

When the end method is called and all the buffered data has been released, the finish event fires. Its callback takes no arguments.

```
var writer = getWritableStreamSomehow();

for (var i = 0; i < 100; i ++) {
  writer.write('hello, #' + i + '!\n');
}

writer.end('this is the end\n');

writer.on('finish', function() {
  console.error('all writes are now complete.');
});
```

(3) pipe

The pipe event fires on a "writable stream" when a readable stream's pipe method directs data at it as a destination.

```
var writer = getWritableStreamSomehow();
var reader = getReadableStreamSomehow();

writer.on('pipe', function(src) {
  console.error('something is piping into the writer');
  assert.equal(src, reader);
});

reader.pipe(writer);
```

(4) unpipe

The unpipe event fires on a "writable stream" when the readable stream's unpipe method removes it as a destination.

```
var writer = getWritableStreamSomehow();
var reader = getReadableStreamSomehow();

writer.on('unpipe', function(src) {
  console.error('something has stopped piping into the writer');
  assert.equal(src, reader);
});

reader.pipe(writer);
reader.unpipe(writer);
```

(5) error

The error event fires if writing or piping the data fails. Its callback receives an Error object as its argument.

## The pipe method

You might ask why the database should implement a "writable stream" interface rather than expose a plain write API. The answer: to get the `pipe` method.

```
var db = new DatabaseWriteStream();

var Thermometer = require('./thermometer');
var thermometer = Thermometer();

thermometer.pipe(db);

// disconnect after 10 seconds
setTimeout(function () {
  thermometer.unpipe(db);
}, 10e3);
```

When a readable and a writable stream are joined with `readable.pipe(writable)`, the data rate automatically adapts to the consumer's speed. Internally, `pipe` uses the return value of the writable stream's `.write()` method to decide whether to pause reading: `true` means the data has been written out and the buffer is empty; `false` means the writable stream is buffering the writes, so reading should pause. Once the writable stream drains, the `drain` event tells the source it can release data again.

## Transform streams

A transform stream converts the data released by a readable stream into another format and then hands it to a writable stream.

The example below turns a stream of JavaScript objects into a stream of JSON strings.

```
// json_encode_stream.js
var Transform = require('stream').Transform;
var inherits = require('util').inherits;

module.exports = JSONEncode;

function JSONEncode(options) {
  if ( ! (this instanceof JSONEncode))
    return new JSONEncode(options);
  if (! options) options = {};
  options.objectMode = true;
  Transform.call(this, options);
}

inherits(JSONEncode, Transform);

JSONEncode.prototype._transform = function _transform(obj, encoding, callback) {
  try {
    obj = JSON.stringify(obj);
  } catch(err) {
    return callback(err);
  }

  this.push(obj);
  callback();
};
```

In the code above, the `_transform` method receives raw JavaScript objects and converts them to JSON strings.

The readable stream and the writable stream can then be connected through the transform stream.

```
var db = new DatabaseWriteStream();

var JSONEncodeStream = require('./json_encode_stream');
var json = JSONEncodeStream();

var Thermometer = require('../thermometer');
var thermometer = Thermometer();

thermometer.pipe(json).pipe(db);
```

## HTTP requests

HTTP objects use the Stream interface to read and write network data.

```
var http = require('http');

var server = http.createServer(function (req, res) {
  // req is an http.IncomingMessage, which is a Readable Stream
  // res is an http.ServerResponse, which is a Writable Stream

  var body = '';
  // we want to get the data as utf8 strings
  // If you don't set an encoding, then you'll get Buffer objects
  req.setEncoding('utf8');

  // Readable streams emit 'data' events once a listener is added
  req.on('data', function (chunk) {
    body += chunk;
  });

  // the end event tells you that you have entire body
  req.on('end', function () {
    try {
      var data = JSON.parse(body);
    } catch (er) {
      // uh oh! bad json!
      res.statusCode = 400;
      return res.end('error: ' + er.message);
    }

    // write back something interesting to the user:
    res.write(typeof data);
    res.end();
  });
});

server.listen(1337);

// $ curl localhost:1337 -d '{}'
// object
// $ curl localhost:1337 -d '"foo"'
// string
// $ curl localhost:1337 -d 'not json'
// error: Unexpected token o
```

The data event signals that a chunk of data was read or written.

```
req.on('data', function(buf){
  // Do something with the Buffer
});
```

Using the req.setEncoding method, a string encoding can be set.

```
req.setEncoding('utf8');
req.on('data', function(str){
  // Do something with the String
});
```

The end event signals that reading or writing the data has finished.

```
var http = require('http');

http.createServer(function(req, res){
  res.writeHead(200);
  req.on('data', function(data){
    res.write(data);
  });
  req.on('end', function(){
    res.end();
  });
}).listen(3000);
```

The code above amounts to an "echo" service, sending the body of the HTTP request back unchanged in the response. The old `sys` module provided a pump method, a bit like the Linux pipe, that forwards one stream into another unchanged (it has long since been removed from Node in favor of the stream `pipe` method). So the example above could also be written with pump.

```
var http = require('http'),
    sys = require('sys');

http.createServer(function(req, res){
  res.writeHead(200);
  sys.pump(req, res);
}).listen(3000);
```

## The fs module

fs's createReadStream method creates a stream that reads data, and createWriteStream creates a stream that writes data. With the two, a file-copying script copy.js can be written.

```
// copy.js
var fs = require('fs');
console.log(process.argv[2], '->', process.argv[3]);

var readStream = fs.createReadStream(process.argv[2]);
var writeStream = fs.createWriteStream(process.argv[3]);

readStream.on('data', function (chunk) {
  writeStream.write(chunk);
});

readStream.on('end', function () {
  writeStream.end();
});

readStream.on('error', function (err) {
  console.log("ERROR", err);
});

writeStream.on('error', function (err) {
  console.log("ERROR", err);
});
```

The code above is very easy to follow: to use it, just supply a source file path and a destination file path.

Stream objects all have the pipe method, which acts as a pipeline, feeding one stream into another. So the code above can be rewritten as in the sketch below.
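A sketch of that pipe-based rewrite:

```
// copy.js, rewritten with pipe
var fs = require('fs');

console.log(process.argv[2], '->', process.argv[3]);

fs.createReadStream(process.argv[2])
  .pipe(fs.createWriteStream(process.argv[3]));
```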
## Error handling

Below is code that sends a file after compressing it.

```
var http = require('http');
var fs = require('fs');
var zlib = require('zlib');

http.createServer(function (req, res) {
  // set the content headers
  fs.createReadStream('filename.txt')
    .pipe(zlib.createGzip())
    .pipe(res)
})
```

The code above has no error-handling mechanism; once anything fails, there is no way to react. So an error event listener must be attached.

```
http.createServer(function (req, res) {
  // set the content headers
  fs.createReadStream('filename.txt')
    .on('error', onerror)
    .pipe(zlib.createGzip())
    .on('error', onerror)
    .pipe(res)

  function onerror(err) {
    console.error(err.stack)
  }
})
```

One problem remains in the code above: if the client aborts the download, the write stream never receives a close event and waits forever, leaking memory. The on-finished module handles that situation.

```
http.createServer(function (req, res) {
  var stream = fs.createReadStream('filename.txt')

  // set the content headers

  stream
    .on('error', onerror)
    .pipe(zlib.createGzip())
    .on('error', onerror)
    .pipe(res)

  onFinished(res, function () {
    // make sure the stream is always destroyed
    stream.destroy()
  })
})
```

# url module

The `url` module generates and parses URLs. It must be loaded before use.

```
var url = require('url');
```

## url.resolve(from, to)

The `url.resolve` method generates a URL. Its first argument is the base URL; the second is resolved against that base to produce the target location.

```
url.resolve('/one/two/three', 'four')
// '/one/two/four'

url.resolve('http://example.com/', '/one')
// 'http://example.com/one'
```
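For the parsing side, the module's `url.parse` method splits a URL string into its components. A small sketch (the example URL is made up):

```
var url = require('url');

var parts = url.parse('http://example.com:8080/one/two?foo=bar#hash');

parts.protocol // 'http:'
parts.host     // 'example.com:8080'
parts.pathname // '/one/two'
parts.query    // 'foo=bar'
parts.hash     // '#hash'
```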
README [¶](#section-readme)
---

### go-toml v2

Go library for the [TOML](https://toml.io/en/) format.

This library supports [TOML v1.0.0](https://toml.io/en/v1.0.0).

[🐞 Bug Reports](https://github.com/pelletier/go-toml/issues)

[💬 Anything else](https://github.com/pelletier/go-toml/discussions)

#### Documentation

Full API, examples, and implementation notes are available in the Go documentation.

[![Go Reference](https://pkg.go.dev/badge/github.com/pelletier/go-toml/v2.svg)](https://pkg.go.dev/github.com/pelletier/go-toml/v2)

#### Import

```
import "github.com/pelletier/go-toml/v2"
```

See [Modules](#readme-Modules).

#### Features

##### Stdlib behavior

As much as possible, this library is designed to behave similarly to the standard library's `encoding/json`.

##### Performance

While go-toml favors usability, it is written with performance in mind. Most operations should not be shockingly slow. See [benchmarks](#readme-benchmarks).

##### Strict mode

`Decoder` can be set to "strict mode", which makes it return an error when parts of the TOML document are not present in the target structure. This is a great way to check for typos. [See example in the documentation](https://pkg.go.dev/github.com/pelletier/go-toml/v2#example-Decoder.DisallowUnknownFields).

##### Contextualized errors

When most decoding errors occur, go-toml returns [`DecodeError`](https://pkg.go.dev/github.com/pelletier/go-toml/v2#DecodeError), which contains a human-readable contextualized version of the error. For example:

```
1| [server]
2| path = 100
 |        ~~~ cannot decode TOML integer into struct field toml_test.Server.Path of type string
3| port = 50
```

##### Local date and time support

TOML supports native [local date/times](https://toml.io/en/v1.0.0#local-date-time). It makes it possible to represent a given date, time, or date-time without relation to a timezone or offset. To support this use-case, go-toml provides [`LocalDate`](https://pkg.go.dev/github.com/pelletier/go-toml/v2#LocalDate), [`LocalTime`](https://pkg.go.dev/github.com/pelletier/go-toml/v2#LocalTime), and [`LocalDateTime`](https://pkg.go.dev/github.com/pelletier/go-toml/v2#LocalDateTime). Those types can be transformed to and from `time.Time`, making them convenient yet unambiguous structures for their respective TOML representation.

##### Commented config

Since TOML is often used for configuration files, go-toml can emit documents annotated with [comments and commented-out values](https://pkg.go.dev/github.com/pelletier/go-toml/v2#example-Marshal-Commented). For example, it can generate the following file:

```
# Host IP to connect to.
host = '127.0.0.1'
# Port of the remote server.
port = 4242

# Encryption parameters (optional)
# [TLS]
# cipher = 'AEAD-AES128-GCM-SHA256'
# version = 'TLS 1.3'
```

#### Getting started

Given the following struct, let's see how to read it and write it as TOML:

```
type MyConfig struct {
	Version int
	Name    string
	Tags    []string
}
```

##### Unmarshaling

[`Unmarshal`](https://pkg.go.dev/github.com/pelletier/go-toml/v2#Unmarshal) reads a TOML document and fills a Go structure with its content.
For example:

```
doc := `
version = 2
name = "go-toml"
tags = ["go", "toml"]
`

var cfg MyConfig
err := toml.Unmarshal([]byte(doc), &cfg)
if err != nil {
	panic(err)
}
fmt.Println("version:", cfg.Version)
fmt.Println("name:", cfg.Name)
fmt.Println("tags:", cfg.Tags)

// Output:
// version: 2
// name: go-toml
// tags: [go toml]
```

##### Marshaling

[`Marshal`](https://pkg.go.dev/github.com/pelletier/go-toml/v2#Marshal) is the opposite of Unmarshal: it represents a Go structure as a TOML document:

```
cfg := MyConfig{
	Version: 2,
	Name:    "go-toml",
	Tags:    []string{"go", "toml"},
}

b, err := toml.Marshal(cfg)
if err != nil {
	panic(err)
}
fmt.Println(string(b))

// Output:
// Version = 2
// Name = 'go-toml'
// Tags = ['go', 'toml']
```

#### Unstable API

This API does not yet follow the backward compatibility guarantees of this library. It provides early access to features that may have rough edges or an API subject to change.

##### Parser

Parser is the unstable API that allows iterative parsing of a TOML document at the AST level. See <https://pkg.go.dev/github.com/pelletier/go-toml/v2/unstable>.

#### Benchmarks

Execution time speedup compared to other Go TOML libraries:

| Benchmark | go-toml v1 | BurntSushi/toml |
| --- | --- | --- |
| Marshal/HugoFrontMatter-2 | 1.9x | 1.9x |
| Marshal/ReferenceFile/map-2 | 1.7x | 1.8x |
| Marshal/ReferenceFile/struct-2 | 2.2x | 2.5x |
| Unmarshal/HugoFrontMatter-2 | 2.9x | 2.9x |
| Unmarshal/ReferenceFile/map-2 | 2.6x | 2.9x |
| Unmarshal/ReferenceFile/struct-2 | 4.4x | 5.3x |

See more

The table above has the results of the most common use-cases. The table below contains the results of all benchmarks, including unrealistic ones. It is provided for completeness.

| Benchmark | go-toml v1 | BurntSushi/toml |
| --- | --- | --- |
| Marshal/SimpleDocument/map-2 | 1.8x | 2.9x |
| Marshal/SimpleDocument/struct-2 | 2.7x | 4.2x |
| Unmarshal/SimpleDocument/map-2 | 4.5x | 3.1x |
| Unmarshal/SimpleDocument/struct-2 | 6.2x | 3.9x |
| UnmarshalDataset/example-2 | 3.1x | 3.5x |
| UnmarshalDataset/code-2 | 2.3x | 3.1x |
| UnmarshalDataset/twitter-2 | 2.5x | 2.6x |
| UnmarshalDataset/citm_catalog-2 | 2.1x | 2.2x |
| UnmarshalDataset/canada-2 | 1.6x | 1.3x |
| UnmarshalDataset/config-2 | 4.3x | 3.2x |
| [Geo mean] | 2.7x | 2.8x |

This table can be generated with `./ci.sh benchmark -a -html`.

#### Modules

go-toml uses Go's standard modules system.

Installation instructions:

* Go ≥ 1.16: Nothing to do. Use the import in your code. The `go` command deals with it automatically.
* Go ≥ 1.13: `GO111MODULE=on go get github.com/pelletier/go-toml/v2`.

In case of trouble: [Go Modules FAQ](https://github.com/golang/go/wiki/Modules#why-does-installing-a-tool-via-go-get-fail-with-error-cannot-find-main-module).

#### Tools

Go-toml provides three handy command line tools:

* `tomljson`: Reads a TOML file and outputs its JSON representation.

  ```
  $ go install github.com/pelletier/go-toml/v2/cmd/tomljson@latest
  $ tomljson --help
  ```

* `jsontoml`: Reads a JSON file and outputs a TOML representation.

  ```
  $ go install github.com/pelletier/go-toml/v2/cmd/jsontoml@latest
  $ jsontoml --help
  ```

* `tomll`: Lints and reformats a TOML file.

  ```
  $ go install github.com/pelletier/go-toml/v2/cmd/tomll@latest
  $ tomll --help
  ```

##### Docker image

Those tools are also available as a [Docker image](https://github.com/pelletier/go-toml/pkgs/container/go-toml).
For example, to use `tomljson`:

```
docker run -i ghcr.io/pelletier/go-toml:v2 tomljson < example.toml
```

Multiple versions are available on [ghcr.io](https://github.com/pelletier/go-toml/pkgs/container/go-toml).

#### Migrating from v1

This section describes the differences between v1 and v2, with some pointers on how to get the original behavior when possible.

##### Decoding / Unmarshal

###### Automatic field name guessing

When unmarshaling to a struct, if a key in the TOML document does not exactly match the name of a struct field or any of the `toml`-tagged fields, v1 tries multiple variations of the key ([code](https://github.com/pelletier/go-toml/raw/a2e52561804c6cd9392ebf0048ca64fe4af67a43/marshal.go#L775-L781)).

V2 instead does case-insensitive matching, like `encoding/json`.

This could impact you if you are relying on casing to differentiate two fields, and one of them is not using the `toml` struct tag. The recommended solution is to be specific about tag names for those fields using the `toml` struct tag.

###### Ignore preexisting value in interface

When decoding into a non-nil `interface{}`, go-toml v1 uses the type of the element in the interface to decode the object. For example:

```
type inner struct {
	B interface{}
}
type doc struct {
	A interface{}
}

d := doc{
	A: inner{
		B: "Before",
	},
}

data := `
[A]
B = "After"
`

toml.Unmarshal([]byte(data), &d)
fmt.Printf("toml v1: %#v\n", d)

// toml v1: main.doc{A:main.inner{B:"After"}}
```

In this case, field `A` is of type `interface{}`, containing an `inner` struct. V1 sees that type and uses it when decoding the object.

When decoding an object into an `interface{}`, V2 instead disregards whatever value the `interface{}` may contain and replaces it with a `map[string]interface{}`. With the same data structure as above, here is what the result looks like:

```
toml.Unmarshal([]byte(data), &d)
fmt.Printf("toml v2: %#v\n", d)

// toml v2: main.doc{A:map[string]interface {}{"B":"After"}}
```

This is to match `encoding/json`'s behavior. There is no way to make the v2 decoder behave like v1.

###### Values out of array bounds ignored

When decoding into an array, v1 returns an error when the number of elements contained in the doc is greater than the capacity of the array. For example:

```
type doc struct {
	A [2]string
}
d := doc{}
err := toml.Unmarshal([]byte(`A = ["one", "two", "many"]`), &d)
fmt.Println(err)

// (1, 1): unmarshal: TOML array length (3) exceeds destination array length (2)
```

In the same situation, v2 ignores the last value:

```
err := toml.Unmarshal([]byte(`A = ["one", "two", "many"]`), &d)
fmt.Println("err:", err, "d:", d)

// err: <nil> d: {[one two]}
```

This is to match `encoding/json`'s behavior. There is no way to make the v2 decoder behave like v1.

###### Support for `toml.Unmarshaler` has been dropped

This method was not widely used, poorly defined, and added a lot of complexity. A similar effect can be achieved by implementing the `encoding.TextUnmarshaler` interface and using strings.

###### Support for `default` struct tag has been dropped

This feature adds complexity and a poorly defined API for an effect that can be accomplished outside of the library.

It does not seem like other format parsers in Go support that feature (the project referenced in the original ticket #202 has not been updated since 2017).
Given that go-toml v2 should not touch values not in the document, the same effect can be achieved by pre-filling the struct with defaults (libraries like [go-defaults](https://github.com/mcuadros/go-defaults) can help). Also, string representation is not well defined for all types: it creates issues like #278. The recommended replacement is pre-filling the struct before unmarshaling.

###### `toml.Tree` replacement

This structure was the initial attempt at providing a document model for go-toml. It allowed manipulating the structure of any document, encoding and decoding it from its TOML representation. While a more robust feature was initially planned in go-toml v2, this has ultimately been [removed from the scope](https://github.com/pelletier/go-toml/discussions/506#discussioncomment-1526038) of this library, with no plan to add it back at the moment.

The closest equivalent at the moment would be to unmarshal into an `interface{}` and use type assertions and/or reflection to manipulate the arbitrary structure. However this would fall short of providing all of the TOML features such as adding comments and being specific about whitespace.

###### `toml.Position` are not retrievable anymore

The API for retrieving the position (line, column) of a specific TOML element no longer exists. This was done to minimize the number of concepts introduced by the library (query path), and avoid the performance hit related to storing positions in the absence of a document model, for a feature that seemed to have little use. Errors, however, have gained more detailed position information. Position retrieval seems better fitted for a document model, which has been [removed from the scope](https://github.com/pelletier/go-toml/discussions/506#discussioncomment-1526038) of go-toml v2 at the moment.

##### Encoding / Marshal

###### Default struct fields order

V1 emits struct fields in alphabetical order by default. V2 emits struct fields in the order they are defined. For example:

```
type S struct {
	B string
	A string
}

data := S{
	B: "B",
	A: "A",
}

b, _ := tomlv1.Marshal(data)
fmt.Println("v1:\n" + string(b))

b, _ = tomlv2.Marshal(data)
fmt.Println("v2:\n" + string(b))

// Output:
// v1:
// A = "A"
// B = "B"
//
// v2:
// B = 'B'
// A = 'A'
```

There is no way to make v2 encoder behave like v1. A workaround could be to manually sort the fields alphabetically in the struct definition, or generate struct types using `reflect.StructOf`.

###### No indentation by default

V1 automatically indents content of tables by default. V2 does not. However the same behavior can be obtained using [`Encoder.SetIndentTables`](https://pkg.go.dev/github.com/pelletier/go-toml/v2#Encoder.SetIndentTables). For example:

```
data := map[string]interface{}{
	"table": map[string]string{
		"key": "value",
	},
}

b, _ := tomlv1.Marshal(data)
fmt.Println("v1:\n" + string(b))

b, _ = tomlv2.Marshal(data)
fmt.Println("v2:\n" + string(b))

buf := bytes.Buffer{}
enc := tomlv2.NewEncoder(&buf)
enc.SetIndentTables(true)
enc.Encode(data)
fmt.Println("v2 Encoder:\n" + string(buf.Bytes()))

// Output:
// v1:
//
// [table]
//   key = "value"
//
// v2:
// [table]
// key = 'value'
//
//
// v2 Encoder:
// [table]
//   key = 'value'
```

###### Keys and strings are single quoted

V1 always uses double quotes (`"`) around strings and keys that cannot be represented bare (unquoted). V2 uses single quotes instead by default (`'`), unless a character cannot be represented, then falls back to double quotes.
As a result of this change, `Encoder.QuoteMapKeys` has been removed, as it is not useful anymore. There is no way to make v2 encoder behave like v1.

###### `TextMarshaler` emits as a string, not TOML

Types that implement [`encoding.TextMarshaler`](https://golang.org/pkg/encoding/#TextMarshaler) can emit arbitrary TOML in v1. The encoder would append the result to the output directly. In v2 the result is wrapped in a string. As a result, this interface cannot be implemented by the root object. There is no way to make v2 encoder behave like v1.

###### `Encoder.CompactComments` has been removed

Emitting compact comments is now the default behavior of go-toml. This option is not necessary anymore.

###### Struct tags have been merged

V1 used to provide multiple struct tags: `comment`, `commented`, `multiline`, `toml`, and `omitempty`. To behave more like the standard library, v2 has merged `toml`, `multiline`, `commented`, and `omitempty`. For example:

```
type doc struct {
	// v1
	F string `toml:"field" multiline:"true" omitempty:"true" commented:"true"`
	// v2
	F string `toml:"field,multiline,omitempty,commented"`
}
```

As a result, the `Encoder.SetTag*` methods have been removed, as there is just one tag now.

###### `Encoder.ArraysWithOneElementPerLine` has been renamed

The new name is `Encoder.SetArraysMultiline`. The behavior should be the same.

###### `Encoder.Indentation` has been renamed

The new name is `Encoder.SetIndentSymbol`. The behavior should be the same.

###### Embedded structs behave like stdlib

V1 defaults to merging embedded struct fields into the embedding struct. This behavior was unexpected because it does not follow the standard library. To avoid breaking backward compatibility, the `Encoder.PromoteAnonymous` method was added to make the encoder behave correctly. Given backward compatibility is not a problem anymore, v2 does the right thing by default: it follows the behavior of `encoding/json`. `Encoder.PromoteAnonymous` has been removed.

##### `query`

go-toml v1 provided the [`go-toml/query`](https://github.com/pelletier/go-toml/tree/f99d6bbca119636aeafcf351ee52b3d202782627/query) package. It allowed running JSONPath-style queries on TOML files. This feature is not available in v2. For a replacement, check out [dasel](https://github.com/TomWright/dasel). This package has been removed because it was essentially not supported anymore (last commit May 2020), increased the complexity of the code base, and more complete solutions exist out there.

#### Versioning

Go-toml follows [Semantic Versioning](https://semver.org). The supported version of [TOML](https://github.com/toml-lang/toml) is indicated at the beginning of this document. The last two major versions of Go are supported (see [Go Release Policy](https://golang.org/doc/devel/release.html#policy)).

#### License

The MIT License (MIT). Read [LICENSE](https://github.com/pelletier/go-toml/blob/v2.1.0/LICENSE).

Documentation [¶](#section-documentation)
---

### Overview [¶](#pkg-overview)

Package toml is a library to read and write TOML documents.
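To make the overview concrete before the index, here is a minimal sketch of a round trip through the two package-level functions documented below (the struct and values are invented for illustration):

```
package main

import (
	"fmt"

	"github.com/pelletier/go-toml/v2"
)

func main() {
	type Config struct {
		Name string
		Port int
	}

	// Encode a Go value to a TOML document...
	b, err := toml.Marshal(Config{Name: "demo", Port: 8080})
	if err != nil {
		panic(err)
	}

	// ...and decode it back into a zero value.
	var c Config
	if err := toml.Unmarshal(b, &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s%+v\n", b, c)
}
```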
### Index [¶](#pkg-index) * [func Marshal(v interface{}) ([]byte, error)](#Marshal) * [func Unmarshal(data []byte, v interface{}) error](#Unmarshal) * [type DecodeError](#DecodeError) * + [func (e *DecodeError) Error() string](#DecodeError.Error) + [func (e *DecodeError) Key() Key](#DecodeError.Key) + [func (e *DecodeError) Position() (row int, column int)](#DecodeError.Position) + [func (e *DecodeError) String() string](#DecodeError.String) * [type Decoder](#Decoder) * + [func NewDecoder(r io.Reader) *Decoder](#NewDecoder) * + [func (d *Decoder) Decode(v interface{}) error](#Decoder.Decode) + [func (d *Decoder) DisallowUnknownFields() *Decoder](#Decoder.DisallowUnknownFields) * [type Encoder](#Encoder) * + [func NewEncoder(w io.Writer) *Encoder](#NewEncoder) * + [func (enc *Encoder) Encode(v interface{}) error](#Encoder.Encode) + [func (enc *Encoder) SetArraysMultiline(multiline bool) *Encoder](#Encoder.SetArraysMultiline) + [func (enc *Encoder) SetIndentSymbol(s string) *Encoder](#Encoder.SetIndentSymbol) + [func (enc *Encoder) SetIndentTables(indent bool) *Encoder](#Encoder.SetIndentTables) + [func (enc *Encoder) SetTablesInline(inline bool) *Encoder](#Encoder.SetTablesInline) * [type Key](#Key) * [type LocalDate](#LocalDate) * + [func (d LocalDate) AsTime(zone *time.Location) time.Time](#LocalDate.AsTime) + [func (d LocalDate) MarshalText() ([]byte, error)](#LocalDate.MarshalText) + [func (d LocalDate) String() string](#LocalDate.String) + [func (d *LocalDate) UnmarshalText(b []byte) error](#LocalDate.UnmarshalText) * [type LocalDateTime](#LocalDateTime) * + [func (d LocalDateTime) AsTime(zone *time.Location) time.Time](#LocalDateTime.AsTime) + [func (d LocalDateTime) MarshalText() ([]byte, error)](#LocalDateTime.MarshalText) + [func (d LocalDateTime) String() string](#LocalDateTime.String) + [func (d *LocalDateTime) UnmarshalText(data []byte) error](#LocalDateTime.UnmarshalText) * [type LocalTime](#LocalTime) * + [func (d LocalTime) MarshalText() ([]byte, error)](#LocalTime.MarshalText) + [func (d LocalTime) String() string](#LocalTime.String) + [func (d *LocalTime) UnmarshalText(b []byte) error](#LocalTime.UnmarshalText) * [type StrictMissingError](#StrictMissingError) * + [func (s *StrictMissingError) Error() string](#StrictMissingError.Error) + [func (s *StrictMissingError) String() string](#StrictMissingError.String) #### Examples [¶](#pkg-examples) * [DecodeError](#example-DecodeError) * [Decoder.DisallowUnknownFields](#example-Decoder.DisallowUnknownFields) * [Marshal](#example-Marshal) * [Marshal (Commented)](#example-Marshal-Commented) * [Unmarshal](#example-Unmarshal) * [Unmarshal (TextUnmarshal)](#example-Unmarshal-TextUnmarshal) ### Constants [¶](#pkg-constants) This section is empty. ### Variables [¶](#pkg-variables) This section is empty. ### Functions [¶](#pkg-functions) #### func [Marshal](https://github.com/pelletier/go-toml/blob/v2.1.0/marshaler.go#L22) [¶](#Marshal) ``` func Marshal(v interface{}) ([][byte](/builtin#byte), [error](/builtin#error)) ``` Marshal serializes a Go value as a TOML document. It is a shortcut for Encoder.Encode() with the default options. 
Example [¶](#example-Marshal)

```
type MyConfig struct {
	Version int
	Name    string
	Tags    []string
}

cfg := MyConfig{
	Version: 2,
	Name:    "go-toml",
	Tags:    []string{"go", "toml"},
}

b, err := toml.Marshal(cfg)
if err != nil {
	panic(err)
}
fmt.Println(string(b))
```

```
Output:

Version = 2
Name = 'go-toml'
Tags = ['go', 'toml']
```

Example (Commented) [¶](#example-Marshal-Commented)

Example that uses the 'commented' field tag option to generate an example configuration file that has commented out sections (example from go-graphite/graphite-clickhouse).

```
type Common struct {
	Listen               string        `toml:"listen" comment:"general listener"`
	PprofListen          string        `toml:"pprof-listen" comment:"listener to serve /debug/pprof requests. '-pprof' argument overrides it"`
	MaxMetricsPerTarget  int           `toml:"max-metrics-per-target" comment:"limit numbers of queried metrics per target in /render requests, 0 or negative = unlimited"`
	MemoryReturnInterval time.Duration `toml:"memory-return-interval" comment:"daemon will return the freed memory to the OS when it>0"`
}

type Costs struct {
	Cost       *int           `toml:"cost" comment:"default cost (for wildcarded equivalence or matched with regex, or if no value cost set)"`
	ValuesCost map[string]int `toml:"values-cost" comment:"cost with some value (for equivalence without wildcards) (additional tuning, usually not needed)"`
}

type ClickHouse struct {
	URL                     string            `toml:"url" comment:"default url, see https://clickhouse.tech/docs/en/interfaces/http. Can be overwritten with query-params"`
	RenderMaxQueries        int               `toml:"render-max-queries" comment:"Max queries to render queries"`
	RenderConcurrentQueries int               `toml:"render-concurrent-queries" comment:"Concurrent queries to render queries"`
	TaggedCosts             map[string]*Costs `toml:"tagged-costs,commented"`
	TreeTable               string            `toml:"tree-table,commented"`
	ReverseTreeTable        string            `toml:"reverse-tree-table,commented"`
	DateTreeTable           string            `toml:"date-tree-table,commented"`
	DateTreeTableVersion    int               `toml:"date-tree-table-version,commented"`
	TreeTimeout             time.Duration     `toml:"tree-timeout,commented"`
	TagTable                string            `toml:"tag-table,commented"`
	ExtraPrefix             string            `toml:"extra-prefix" comment:"add extra prefix (directory in graphite) for all metrics, w/o trailing dot"`
	ConnectTimeout          time.Duration     `toml:"connect-timeout" comment:"TCP connection timeout"`
	DataTableLegacy         string            `toml:"data-table,commented"`
	RollupConfLegacy        string            `toml:"rollup-conf,commented"`
	MaxDataPoints           int               `toml:"max-data-points" comment:"max points per metric when internal-aggregation=true"`
	InternalAggregation     bool              `toml:"internal-aggregation" comment:"ClickHouse-side aggregation, see doc/aggregation.md"`
}

type Tags struct {
	Rules      string `toml:"rules"`
	Date       string `toml:"date"`
	ExtraWhere string `toml:"extra-where"`
	InputFile  string `toml:"input-file"`
	OutputFile string `toml:"output-file"`
}

type Config struct {
	Common     Common     `toml:"common"`
	ClickHouse ClickHouse `toml:"clickhouse"`
	Tags       Tags       `toml:"tags,commented"`
}

cfg := &Config{
	Common: Common{
		Listen:               ":9090",
		PprofListen:          "",
		MaxMetricsPerTarget:  15000, // This is arbitrary value to protect CH from overload
		MemoryReturnInterval: 0,
	},
	ClickHouse: ClickHouse{
		URL:                 "http://localhost:8123?cancel_http_readonly_queries_on_client_close=1",
		ExtraPrefix:         "",
		ConnectTimeout:      time.Second,
		DataTableLegacy:     "",
		RollupConfLegacy:    "auto",
		MaxDataPoints:       1048576,
		InternalAggregation: true,
	},
	Tags: Tags{},
}

out, err := toml.Marshal(cfg)
if err != nil {
	panic(err)
}
err = toml.Unmarshal(out, &cfg)
if err != nil {
	panic(err)
}
fmt.Println(string(out))
```
```
Output:

[common]
# general listener
listen = ':9090'
# listener to serve /debug/pprof requests. '-pprof' argument overrides it
pprof-listen = ''
# limit numbers of queried metrics per target in /render requests, 0 or negative = unlimited
max-metrics-per-target = 15000
# daemon will return the freed memory to the OS when it>0
memory-return-interval = 0

[clickhouse]
# default url, see https://clickhouse.tech/docs/en/interfaces/http. Can be overwritten with query-params
url = 'http://localhost:8123?cancel_http_readonly_queries_on_client_close=1'
# Max queries to render queries
render-max-queries = 0
# Concurrent queries to render queries
render-concurrent-queries = 0
# tree-table = ''
# reverse-tree-table = ''
# date-tree-table = ''
# date-tree-table-version = 0
# tree-timeout = 0
# tag-table = ''
# add extra prefix (directory in graphite) for all metrics, w/o trailing dot
extra-prefix = ''
# TCP connection timeout
connect-timeout = 1000000000
# data-table = ''
# rollup-conf = 'auto'
# max points per metric when internal-aggregation=true
max-data-points = 1048576
# ClickHouse-side aggregation, see doc/aggregation.md
internal-aggregation = true

# [tags]
# rules = ''
# date = ''
# extra-where = ''
# input-file = ''
# output-file = ''
```

#### func [Unmarshal](https://github.com/pelletier/go-toml/blob/v2.1.0/unmarshaler.go#L23) [¶](#Unmarshal)

```
func Unmarshal(data [][byte](/builtin#byte), v interface{}) [error](/builtin#error)
```

Unmarshal deserializes a TOML document into a Go value. It is a shortcut for Decoder.Decode() with the default options.

Example [¶](#example-Unmarshal)

```
type MyConfig struct {
	Version int
	Name    string
	Tags    []string
}

doc := `
version = 2
name = "go-toml"
tags = ["go", "toml"]
`

var cfg MyConfig
err := toml.Unmarshal([]byte(doc), &cfg)
if err != nil {
	panic(err)
}
fmt.Println("version:", cfg.Version)
fmt.Println("name:", cfg.Name)
fmt.Println("tags:", cfg.Tags)
```

```
Output:

version: 2
name: go-toml
tags: [go toml]
```

Example (TextUnmarshal) [¶](#example-Unmarshal-TextUnmarshal)

```
package main

import (
	"fmt"
	"log"
	"strconv"

	"github.com/pelletier/go-toml/v2"
)

type customInt int

func (i *customInt) UnmarshalText(b []byte) error {
	x, err := strconv.ParseInt(string(b), 10, 32)
	if err != nil {
		return err
	}
	*i = customInt(x * 100)
	return nil
}

type doc struct {
	Value customInt
}

func main() {
	var x doc
	data := []byte(`value = "42"`)
	err := toml.Unmarshal(data, &x)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(x)
}
```

```
Output:

{4200}
```

### Types [¶](#pkg-types)

#### type [DecodeError](https://github.com/pelletier/go-toml/blob/v2.1.0/errors.go#L18) [¶](#DecodeError)

```
type DecodeError struct {
	// contains filtered or unexported fields
}
```

DecodeError represents an error encountered during the parsing or decoding of a TOML document. In addition to the error message, it contains the position in the document where it happened, as well as a human-readable representation that shows where the error occurred in the document.
Example [¶](#example-DecodeError)

```
doc := `name = 123__456`

s := map[string]interface{}{}
err := Unmarshal([]byte(doc), &s)

fmt.Println(err)

var derr *DecodeError
if errors.As(err, &derr) {
	fmt.Println(derr.String())
	row, col := derr.Position()
	fmt.Println("error occurred at row", row, "column", col)
}
```

```
Output:

toml: number must have at least one digit between underscores
1| name = 123__456
 |           ~~ number must have at least one digit between underscores
error occurred at row 1 column 11
```

#### func (*DecodeError) [Error](https://github.com/pelletier/go-toml/blob/v2.1.0/errors.go#L60) [¶](#DecodeError.Error)

```
func (e *[DecodeError](#DecodeError)) Error() [string](/builtin#string)
```

Error returns the error message contained in the DecodeError.

#### func (*DecodeError) [Key](https://github.com/pelletier/go-toml/blob/v2.1.0/errors.go#L77) [¶](#DecodeError.Key)

```
func (e *[DecodeError](#DecodeError)) Key() [Key](#Key)
```

Key that was being processed when the error occurred. The key is present only if this DecodeError is part of a StrictMissingError.

#### func (*DecodeError) [Position](https://github.com/pelletier/go-toml/blob/v2.1.0/errors.go#L71) [¶](#DecodeError.Position)

```
func (e *[DecodeError](#DecodeError)) Position() (row [int](/builtin#int), column [int](/builtin#int))
```

Position returns the (line, column) pair indicating where the error occurred in the document. Positions are 1-indexed.

#### func (*DecodeError) [String](https://github.com/pelletier/go-toml/blob/v2.1.0/errors.go#L65) [¶](#DecodeError.String)

```
func (e *[DecodeError](#DecodeError)) String() [string](/builtin#string)
```

String returns the human-readable contextualized error. This string is multi-line.

#### type [Decoder](https://github.com/pelletier/go-toml/blob/v2.1.0/unmarshaler.go#L32) [¶](#Decoder)

```
type Decoder struct {
	// contains filtered or unexported fields
}
```

Decoder reads and decodes a TOML document from an input stream.

#### func [NewDecoder](https://github.com/pelletier/go-toml/blob/v2.1.0/unmarshaler.go#L41) [¶](#NewDecoder)

```
func NewDecoder(r [io](/io).[Reader](/io#Reader)) *[Decoder](#Decoder)
```

NewDecoder creates a new Decoder that will read from r.

#### func (*Decoder) [Decode](https://github.com/pelletier/go-toml/blob/v2.1.0/unmarshaler.go#L98) [¶](#Decoder.Decode)

```
func (d *[Decoder](#Decoder)) Decode(v interface{}) [error](/builtin#error)
```

* [Type mapping](#hdr-Type_mapping)

Decode the whole content of r into v.

By default, values in the document that don't exist in the target Go value are ignored. See Decoder.DisallowUnknownFields() to change this behavior.

When a TOML local date, time, or date-time is decoded into a time.Time, its value is represented in the time.Local timezone. Otherwise the appropriate Local* structure is used. For time values, precision up to the nanosecond is supported by truncating extra digits.

Empty tables decoded in an interface{} create an empty initialized map[string]interface{}.

Types implementing the encoding.TextUnmarshaler interface are decoded from a TOML string.

When decoding a number, go-toml will return an error if the number is out of bounds for the target type (which includes negative numbers when decoding into an unsigned int).

If an error occurs while decoding the content of the document, this function returns a toml.DecodeError, providing context about the issue. When using strict mode and a field is missing, a `toml.StrictMissingError` is returned. In any other case, this function returns a standard Go error.
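For instance, a minimal sketch of the Decoder in use (the file name here is invented for illustration):

```
package main

import (
	"fmt"
	"os"

	"github.com/pelletier/go-toml/v2"
)

func main() {
	f, err := os.Open("config.toml") // hypothetical input file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Decode reads the whole document from the reader into cfg.
	var cfg map[string]interface{}
	if err := toml.NewDecoder(f).Decode(&cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg)
}
```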
#### Type mapping [¶](#hdr-Type_mapping)

List of supported TOML types and their associated accepted Go types:

```
String           -> string
Integer          -> uint*, int*, depending on size
Float            -> float*, depending on size
Boolean          -> bool
Offset Date-Time -> time.Time
Local Date-time  -> LocalDateTime, time.Time
Local Date       -> LocalDate, time.Time
Local Time       -> LocalTime, time.Time
Array            -> slice and array, depending on elements types
Table            -> map and struct
Inline Table     -> same as Table
Array of Tables  -> same as Array and Table
```

#### func (*Decoder) [DisallowUnknownFields](https://github.com/pelletier/go-toml/blob/v2.1.0/unmarshaler.go#L52) [¶](#Decoder.DisallowUnknownFields)

```
func (d *[Decoder](#Decoder)) DisallowUnknownFields() *[Decoder](#Decoder)
```

DisallowUnknownFields causes the Decoder to return an error when the destination is a struct and the input contains a key that does not match a non-ignored field. In that case, the Decoder returns a StrictMissingError that can be used to retrieve the individual errors as well as generate a human-readable description of the missing fields.

Example [¶](#example-Decoder.DisallowUnknownFields)

```
type S struct {
	Key1 string
	Key3 string
}

doc := `
key1 = "value1"
key2 = "value2"
key3 = "value3"
`

r := strings.NewReader(doc)
d := toml.NewDecoder(r)
d.DisallowUnknownFields()
s := S{}
err := d.Decode(&s)

fmt.Println(err.Error())

var details *toml.StrictMissingError
if !errors.As(err, &details) {
	panic(fmt.Sprintf("err should have been a *toml.StrictMissingError, but got %s (%T)", err, err))
}

fmt.Println(details.String())
```

```
Output:

strict mode: fields in the document are missing in the target struct
2| key1 = "value1"
3| key2 = "value2"
 | ~~~~ missing field
4| key3 = "value3"
```

#### type [Encoder](https://github.com/pelletier/go-toml/blob/v2.1.0/marshaler.go#L35) [¶](#Encoder)

```
type Encoder struct {
	// contains filtered or unexported fields
}
```

Encoder writes a TOML document to an output stream.

#### func [NewEncoder](https://github.com/pelletier/go-toml/blob/v2.1.0/marshaler.go#L47) [¶](#NewEncoder)

```
func NewEncoder(w [io](/io).[Writer](/io#Writer)) *[Encoder](#Encoder)
```

NewEncoder returns a new Encoder that writes to w.

#### func (*Encoder) [Encode](https://github.com/pelletier/go-toml/blob/v2.1.0/marshaler.go#L158) [¶](#Encoder.Encode)

```
func (enc *[Encoder](#Encoder)) Encode(v interface{}) [error](/builtin#error)
```

* [Encoding rules](#hdr-Encoding_rules)
* [Struct tags](#hdr-Struct_tags)

Encode writes a TOML representation of v to the stream. If v cannot be represented as TOML it returns an error.

#### Encoding rules [¶](#hdr-Encoding_rules)

1. A top level slice containing only maps or structs is encoded as [[table array]].
2. All slices not matching rule 1 are encoded as [array]. As a result, any map or struct they contain is encoded as an {inline table}.
3. Nil interfaces and nil pointers are not supported.
4. Keys in key-values always have one part.
5. Intermediate tables are always printed.

By default, strings are encoded as literal strings, unless they contain either a newline character or a single quote. In that case they are emitted as quoted strings.

Unsigned integers larger than math.MaxInt64 cannot be encoded. Doing so results in an error. This rule exists because the TOML specification only requires parsers to support at least the 64 bits integer range. Allowing larger numbers would create non-standard TOML documents, which may not be readable (at best) by other implementations.
To encode such numbers, a solution is a custom type that implements encoding.TextMarshaler.

When encoding structs, fields are encoded in order of definition, with their exact name.

Tables and array tables are separated by empty lines. However, consecutive subtable definitions are not. For example:

```
[top1]

[top2]
[top2.child1]

[[array]]

[[array]]
[array.child2]
```

#### Struct tags [¶](#hdr-Struct_tags)

The encoding of each public struct field can be customized by the format string in the "toml" key of the struct field's tag. This follows encoding/json's convention. The format string starts with the name of the field, optionally followed by a comma-separated list of options. The name may be empty in order to provide options without overriding the default name.

The "multiline" option emits strings as quoted multi-line TOML strings. It has no effect on fields that would not be encoded as strings.

The "inline" option turns fields that would be emitted as tables into inline tables instead. It has no effect on other fields.

The "omitempty" option prevents empty values or groups from being emitted.

The "commented" option prefixes the value and all its children with a comment symbol.

In addition to the "toml" struct tag, a "comment" tag can be used to emit a TOML comment before the value being annotated. Comments are ignored inside inline tables. For array tables, the comment is only present before the first element of the array.

#### func (*Encoder) [SetArraysMultiline](https://github.com/pelletier/go-toml/blob/v2.1.0/marshaler.go#L71) [¶](#Encoder.SetArraysMultiline)

```
func (enc *[Encoder](#Encoder)) SetArraysMultiline(multiline [bool](/builtin#bool)) *[Encoder](#Encoder)
```

SetArraysMultiline forces the encoder to emit all arrays with one element per line. This behavior can be controlled on an individual struct field basis with the multiline tag:

```
MyField `multiline:"true"`
```

#### func (*Encoder) [SetIndentSymbol](https://github.com/pelletier/go-toml/blob/v2.1.0/marshaler.go#L79) [¶](#Encoder.SetIndentSymbol)

```
func (enc *[Encoder](#Encoder)) SetIndentSymbol(s [string](/builtin#string)) *[Encoder](#Encoder)
```

SetIndentSymbol defines the string that should be used for indentation. The provided string is repeated for each indentation level. Defaults to two spaces.

#### func (*Encoder) [SetIndentTables](https://github.com/pelletier/go-toml/blob/v2.1.0/marshaler.go#L85) [¶](#Encoder.SetIndentTables)

```
func (enc *[Encoder](#Encoder)) SetIndentTables(indent [bool](/builtin#bool)) *[Encoder](#Encoder)
```

SetIndentTables forces the encoder to indent tables and array tables.

#### func (*Encoder) [SetTablesInline](https://github.com/pelletier/go-toml/blob/v2.1.0/marshaler.go#L60) [¶](#Encoder.SetTablesInline)

```
func (enc *[Encoder](#Encoder)) SetTablesInline(inline [bool](/builtin#bool)) *[Encoder](#Encoder)
```

SetTablesInline forces the encoder to emit all tables inline. This behavior can be controlled on an individual struct field basis with the inline tag:

```
MyField `toml:",inline"`
```

#### type [Key](https://github.com/pelletier/go-toml/blob/v2.1.0/errors.go#L57) [¶](#Key)

```
type Key [][string](/builtin#string)
```

#### type [LocalDate](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L12) [¶](#LocalDate)

```
type LocalDate struct {
	Year  [int](/builtin#int)
	Month [int](/builtin#int)
	Day   [int](/builtin#int)
}
```

LocalDate represents a calendar day in no specific timezone.
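As a quick illustration of the local types in use, a TOML local date can be decoded straight into a LocalDate field (the struct and document contents are invented for this sketch):

```
package main

import (
	"fmt"

	"github.com/pelletier/go-toml/v2"
)

type Event struct {
	Day toml.LocalDate
}

func main() {
	var e Event
	// A TOML local date carries no timezone, so it decodes into
	// LocalDate without any timezone interpretation.
	if err := toml.Unmarshal([]byte(`Day = 2023-04-05`), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Day.Year, e.Day.Month, e.Day.Day) // 2023 4 5
}
```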
#### func (LocalDate) [AsTime](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L19) [¶](#LocalDate.AsTime) ``` func (d [LocalDate](#LocalDate)) AsTime(zone *[time](/time).[Location](/time#Location)) [time](/time).[Time](/time#Time) ``` AsTime converts d into a specific time instance at midnight in zone. #### func (LocalDate) [MarshalText](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L29) [¶](#LocalDate.MarshalText) ``` func (d [LocalDate](#LocalDate)) MarshalText() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalText returns [RFC 3339](https://rfc-editor.org/rfc/rfc3339.html) representation of d. #### func (LocalDate) [String](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L24) [¶](#LocalDate.String) ``` func (d [LocalDate](#LocalDate)) String() [string](/builtin#string) ``` String returns [RFC 3339](https://rfc-editor.org/rfc/rfc3339.html) representation of d. #### func (*LocalDate) [UnmarshalText](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L34) [¶](#LocalDate.UnmarshalText) ``` func (d *[LocalDate](#LocalDate)) UnmarshalText(b [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalText parses b using [RFC 3339](https://rfc-editor.org/rfc/rfc3339.html) to fill d. #### type [LocalDateTime](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L90) [¶](#LocalDateTime) ``` type LocalDateTime struct { [LocalDate](#LocalDate) [LocalTime](#LocalTime) } ``` LocalDateTime represents a time of a specific day in no specific timezone. #### func (LocalDateTime) [AsTime](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L96) [¶](#LocalDateTime.AsTime) ``` func (d [LocalDateTime](#LocalDateTime)) AsTime(zone *[time](/time).[Location](/time#Location)) [time](/time).[Time](/time#Time) ``` AsTime converts d into a specific time instance in zone. #### func (LocalDateTime) [MarshalText](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L106) [¶](#LocalDateTime.MarshalText) ``` func (d [LocalDateTime](#LocalDateTime)) MarshalText() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalText returns [RFC 3339](https://rfc-editor.org/rfc/rfc3339.html) representation of d. #### func (LocalDateTime) [String](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L101) [¶](#LocalDateTime.String) ``` func (d [LocalDateTime](#LocalDateTime)) String() [string](/builtin#string) ``` String returns [RFC 3339](https://rfc-editor.org/rfc/rfc3339.html) representation of d. #### func (*LocalDateTime) [UnmarshalText](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L111) [¶](#LocalDateTime.UnmarshalText) ``` func (d *[LocalDateTime](#LocalDateTime)) UnmarshalText(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalText parses b using [RFC 3339](https://rfc-editor.org/rfc/rfc3339.html) to fill d. #### type [LocalTime](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L45) [¶](#LocalTime) ``` type LocalTime struct { Hour [int](/builtin#int) // Hour of the day: [0; 24[ Minute [int](/builtin#int) // Minute of the hour: [0; 60[ Second [int](/builtin#int) // Second of the minute: [0; 60[ Nanosecond [int](/builtin#int) // Nanoseconds within the second: [0, 1000000000[ Precision [int](/builtin#int) // Number of digits to display for Nanosecond. } ``` LocalTime represents a time of day of no specific day in no specific timezone. 
#### func (LocalTime) [MarshalText](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L72) [¶](#LocalTime.MarshalText)

```
func (d [LocalTime](#LocalTime)) MarshalText() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalText returns [RFC 3339](https://rfc-editor.org/rfc/rfc3339.html) representation of d.

#### func (LocalTime) [String](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L57) [¶](#LocalTime.String)

```
func (d [LocalTime](#LocalTime)) String() [string](/builtin#string)
```

String returns [RFC 3339](https://rfc-editor.org/rfc/rfc3339.html) representation of d. If d.Nanosecond and d.Precision are zero, the time won't have a nanosecond component. If d.Nanosecond > 0 but d.Precision = 0, then the minimum number of digits for nanoseconds is provided.

#### func (*LocalTime) [UnmarshalText](https://github.com/pelletier/go-toml/blob/v2.1.0/localtime.go#L77) [¶](#LocalTime.UnmarshalText)

```
func (d *[LocalTime](#LocalTime)) UnmarshalText(b [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalText parses b using [RFC 3339](https://rfc-editor.org/rfc/rfc3339.html) to fill d.

#### type [StrictMissingError](https://github.com/pelletier/go-toml/blob/v2.1.0/errors.go#L32) [¶](#StrictMissingError)

```
type StrictMissingError struct {
	// One error per field that could not be found.
	Errors [][DecodeError](#DecodeError)
}
```

StrictMissingError occurs when a TOML document contains fields that do not have a corresponding field in the target value. It contains all the missing fields in Errors. It is emitted by Decoder when DisallowUnknownFields() was called.

#### func (*StrictMissingError) [Error](https://github.com/pelletier/go-toml/blob/v2.1.0/errors.go#L38) [¶](#StrictMissingError.Error)

```
func (s *[StrictMissingError](#StrictMissingError)) Error() [string](/builtin#string)
```

Error returns the canonical string for this error.

#### func (*StrictMissingError) [String](https://github.com/pelletier/go-toml/blob/v2.1.0/errors.go#L43) [¶](#StrictMissingError.String)

```
func (s *[StrictMissingError](#StrictMissingError)) String() [string](/builtin#string)
```

String returns a human-readable description of all errors.
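Putting the pieces above together, here is a sketch of drilling into the individual errors after a strict-mode decode (the document contents are invented; the calls are the ones documented above):

```
package main

import (
	"errors"
	"fmt"
	"strings"

	"github.com/pelletier/go-toml/v2"
)

func main() {
	type S struct {
		Key1 string
	}

	d := toml.NewDecoder(strings.NewReader("key1 = 'a'\nkey2 = 'b'"))
	d.DisallowUnknownFields()

	var s S
	err := d.Decode(&s)

	var details *toml.StrictMissingError
	if errors.As(err, &details) {
		// Each entry is a DecodeError carrying the offending key and position.
		for _, e := range details.Errors {
			row, col := e.Position()
			fmt.Printf("unexpected key %v at %d:%d: %s\n", e.Key(), row, col, e.Error())
		}
	}
}
```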
Welcome to Flask
================

Flask depends on the [Werkzeug](https://werkzeug.palletsprojects.com) WSGI toolkit, the [Jinja](https://jinja.palletsprojects.com) template engine, and the [Click](https://click.palletsprojects.com) CLI toolkit. Be sure to check their documentation as well as Flask’s when looking for information.

User’s Guide
------------

Flask provides configuration and conventions, with sensible defaults, to get started. This section of the documentation explains the different parts of the Flask framework and how they can be used, customized, and extended. Beyond Flask itself, look for community-maintained extensions to add even more functionality.

* [Installation](installation/index) + [Python Version](installation/index#python-version) + [Dependencies](installation/index#dependencies) + [Virtual environments](installation/index#virtual-environments) + [Install Flask](installation/index#install-flask) * [Quickstart](quickstart/index) + [A Minimal Application](quickstart/index#a-minimal-application) + [Debug Mode](quickstart/index#debug-mode) + [HTML Escaping](quickstart/index#html-escaping) + [Routing](quickstart/index#routing) + [Static Files](quickstart/index#static-files) + [Rendering Templates](quickstart/index#rendering-templates) + [Accessing Request Data](quickstart/index#accessing-request-data) + [Redirects and Errors](quickstart/index#redirects-and-errors) + [About Responses](quickstart/index#about-responses) + [Sessions](quickstart/index#sessions) + [Message Flashing](quickstart/index#message-flashing) + [Logging](quickstart/index#logging) + [Hooking in WSGI Middleware](quickstart/index#hooking-in-wsgi-middleware) + [Using Flask Extensions](quickstart/index#using-flask-extensions) + [Deploying to a Web Server](quickstart/index#deploying-to-a-web-server) * [Tutorial](https://flask.palletsprojects.com/en/2.3.x/tutorial/) + [Project Layout](https://flask.palletsprojects.com/en/2.3.x/tutorial/layout/) + [Application Setup](https://flask.palletsprojects.com/en/2.3.x/tutorial/factory/) + [Define and Access the Database](https://flask.palletsprojects.com/en/2.3.x/tutorial/database/) + [Blueprints and Views](https://flask.palletsprojects.com/en/2.3.x/tutorial/views/) + [Templates](https://flask.palletsprojects.com/en/2.3.x/tutorial/templates/) + [Static Files](https://flask.palletsprojects.com/en/2.3.x/tutorial/static/) + [Blog Blueprint](https://flask.palletsprojects.com/en/2.3.x/tutorial/blog/) + [Make the Project Installable](https://flask.palletsprojects.com/en/2.3.x/tutorial/install/) + [Test Coverage](https://flask.palletsprojects.com/en/2.3.x/tutorial/tests/) + [Deploy to Production](https://flask.palletsprojects.com/en/2.3.x/tutorial/deploy/) + [Keep Developing!](https://flask.palletsprojects.com/en/2.3.x/tutorial/next/) * [Templates](templating/index) + [Jinja Setup](templating/index#jinja-setup) + [Standard Context](templating/index#standard-context) + [Controlling Autoescaping](templating/index#controlling-autoescaping) + [Registering Filters](templating/index#registering-filters) + [Context Processors](templating/index#context-processors) + [Streaming](templating/index#streaming) * [Testing Flask Applications](testing/index) + [Identifying Tests](testing/index#identifying-tests) + [Fixtures](testing/index#fixtures) + [Sending Requests with the Test Client](testing/index#sending-requests-with-the-test-client) + [Following Redirects](testing/index#following-redirects) + [Accessing and Modifying the
Session](testing/index#accessing-and-modifying-the-session) + [Running Commands with the CLI Runner](testing/index#running-commands-with-the-cli-runner) + [Tests that depend on an Active Context](testing/index#tests-that-depend-on-an-active-context) * [Handling Application Errors](errorhandling/index) + [Error Logging Tools](errorhandling/index#error-logging-tools) + [Error Handlers](errorhandling/index#error-handlers) + [Custom Error Pages](errorhandling/index#custom-error-pages) + [Blueprint Error Handlers](errorhandling/index#blueprint-error-handlers) + [Returning API Errors as JSON](errorhandling/index#returning-api-errors-as-json) + [Logging](errorhandling/index#logging) + [Debugging](errorhandling/index#debugging) * [Debugging Application Errors](debugging/index) + [In Production](debugging/index#in-production) + [The Built-In Debugger](debugging/index#the-built-in-debugger) + [External Debuggers](debugging/index#external-debuggers) * [Logging](logging/index) + [Basic Configuration](logging/index#basic-configuration) + [Email Errors to Admins](logging/index#email-errors-to-admins) + [Injecting Request Information](logging/index#injecting-request-information) + [Other Libraries](logging/index#other-libraries) * [Configuration Handling](config/index) + [Configuration Basics](config/index#configuration-basics) + [Debug Mode](config/index#debug-mode) + [Builtin Configuration Values](config/index#builtin-configuration-values) + [Configuring from Python Files](config/index#configuring-from-python-files) + [Configuring from Data Files](config/index#configuring-from-data-files) + [Configuring from Environment Variables](config/index#configuring-from-environment-variables) + [Configuration Best Practices](config/index#configuration-best-practices) + [Development / Production](config/index#development-production) + [Instance Folders](config/index#instance-folders) * [Signals](signals/index) + [Core Signals](signals/index#core-signals) + [Subscribing to Signals](signals/index#subscribing-to-signals) + [Creating Signals](signals/index#creating-signals) + [Sending Signals](signals/index#sending-signals) + [Signals and Flask’s Request Context](signals/index#signals-and-flask-s-request-context) + [Decorator Based Signal Subscriptions](signals/index#decorator-based-signal-subscriptions) * [Class-based Views](views/index) + [Basic Reusable View](views/index#basic-reusable-view) + [URL Variables](views/index#url-variables) + [View Lifetime and `self`](views/index#view-lifetime-and-self) + [View Decorators](views/index#view-decorators) + [Method Hints](views/index#method-hints) + [Method Dispatching and APIs](views/index#method-dispatching-and-apis) * [Application Structure and Lifecycle](lifecycle/index) + [Application Setup](lifecycle/index#application-setup) + [Serving the Application](lifecycle/index#serving-the-application) + [How a Request is Handled](lifecycle/index#how-a-request-is-handled) * [The Application Context](appcontext/index) + [Purpose of the Context](appcontext/index#purpose-of-the-context) + [Lifetime of the Context](appcontext/index#lifetime-of-the-context) + [Manually Push a Context](appcontext/index#manually-push-a-context) + [Storing Data](appcontext/index#storing-data) + [Events and Signals](appcontext/index#events-and-signals) * [The Request Context](reqcontext/index) + [Purpose of the Context](reqcontext/index#purpose-of-the-context) + [Lifetime of the Context](reqcontext/index#lifetime-of-the-context) + [Manually Push a Context](reqcontext/index#manually-push-a-context) + 
[How the Context Works](reqcontext/index#how-the-context-works) + [Callbacks and Errors](reqcontext/index#callbacks-and-errors) + [Notes On Proxies](reqcontext/index#notes-on-proxies) * [Modular Applications with Blueprints](blueprints/index) + [Why Blueprints?](blueprints/index#why-blueprints) + [The Concept of Blueprints](blueprints/index#the-concept-of-blueprints) + [My First Blueprint](blueprints/index#my-first-blueprint) + [Registering Blueprints](blueprints/index#registering-blueprints) + [Nesting Blueprints](blueprints/index#nesting-blueprints) + [Blueprint Resources](blueprints/index#blueprint-resources) + [Building URLs](blueprints/index#building-urls) + [Blueprint Error Handlers](blueprints/index#blueprint-error-handlers) * [Extensions](extensions/index) + [Finding Extensions](extensions/index#finding-extensions) + [Using Extensions](extensions/index#using-extensions) + [Building Extensions](extensions/index#building-extensions) * [Command Line Interface](cli/index) + [Application Discovery](cli/index#application-discovery) + [Run the Development Server](cli/index#run-the-development-server) + [Open a Shell](cli/index#open-a-shell) + [Environment Variables From dotenv](cli/index#environment-variables-from-dotenv) + [Environment Variables From virtualenv](cli/index#environment-variables-from-virtualenv) + [Custom Commands](cli/index#custom-commands) + [Plugins](cli/index#plugins) + [Custom Scripts](cli/index#custom-scripts) + [PyCharm Integration](cli/index#pycharm-integration) * [Development Server](server/index) + [Command Line](server/index#command-line) + [In Code](server/index#in-code) * [Working with the Shell](shell/index) + [Command Line Interface](shell/index#command-line-interface) + [Creating a Request Context](shell/index#creating-a-request-context) + [Firing Before/After Request](shell/index#firing-before-after-request) + [Further Improving the Shell Experience](shell/index#further-improving-the-shell-experience) * [Patterns for Flask](patterns/index) + [Large Applications as Packages](patterns/packages/index) + [Application Factories](patterns/appfactories/index) + [Application Dispatching](patterns/appdispatch/index) + [Using URL Processors](patterns/urlprocessors/index) + [Using SQLite 3 with Flask](patterns/sqlite3/index) + [SQLAlchemy in Flask](patterns/sqlalchemy/index) + [Uploading Files](patterns/fileuploads/index) + [Caching](patterns/caching/index) + [View Decorators](patterns/viewdecorators/index) + [Form Validation with WTForms](patterns/wtforms/index) + [Template Inheritance](patterns/templateinheritance/index) + [Message Flashing](patterns/flashing/index) + [JavaScript, `fetch`, and JSON](patterns/javascript/index) + [Lazily Loading Views](patterns/lazyloading/index) + [MongoDB with MongoEngine](patterns/mongoengine/index) + [Adding a favicon](patterns/favicon/index) + [Streaming Contents](patterns/streaming/index) + [Deferred Request Callbacks](patterns/deferredcallbacks/index) + [Adding HTTP Method Overrides](patterns/methodoverrides/index) + [Request Content Checksums](patterns/requestchecksum/index) + [Background Tasks with Celery](patterns/celery/index) + [Subclassing Flask](patterns/subclassing/index) + [Single-Page Applications](patterns/singlepageapplications/index) * [Security Considerations](security/index) + [Cross-Site Scripting (XSS)](security/index#cross-site-scripting-xss) + [Cross-Site Request Forgery (CSRF)](security/index#cross-site-request-forgery-csrf) + [JSON Security](security/index#json-security) + [Security 
Headers](security/index#security-headers) + [Copy/Paste to Terminal](security/index#copy-paste-to-terminal) * [Deploying to Production](deploying/index) + [Self-Hosted Options](deploying/index#self-hosted-options) + [Hosting Platforms](deploying/index#hosting-platforms) * [Using `async` and `await`](async-await/index) + [Performance](async-await/index#performance) + [Background tasks](async-await/index#background-tasks) + [When to use Quart instead](async-await/index#when-to-use-quart-instead) + [Extensions](async-await/index#extensions) + [Other event loops](async-await/index#other-event-loops) API Reference ------------- If you are looking for information on a specific function, class or method, this part of the documentation is for you. * [API](api/index) + [Application Object](api/index#application-object) + [Blueprint Objects](api/index#blueprint-objects) + [Incoming Request Data](api/index#incoming-request-data) + [Response Objects](api/index#response-objects) + [Sessions](api/index#sessions) + [Session Interface](api/index#session-interface) + [Test Client](api/index#test-client) + [Test CLI Runner](api/index#test-cli-runner) + [Application Globals](api/index#application-globals) + [Useful Functions and Classes](api/index#useful-functions-and-classes) + [Message Flashing](api/index#message-flashing) + [JSON Support](api/index#module-flask.json) + [Template Rendering](api/index#template-rendering) + [Configuration](api/index#configuration) + [Stream Helpers](api/index#stream-helpers) + [Useful Internals](api/index#useful-internals) + [Signals](api/index#signals) + [Class-Based Views](api/index#class-based-views) + [URL Route Registrations](api/index#url-route-registrations) + [View Function Options](api/index#view-function-options) + [Command Line Interface](api/index#command-line-interface) Additional Notes ---------------- * [Design Decisions in Flask](design/index) + [The Explicit Application Object](design/index#the-explicit-application-object) + [The Routing System](design/index#the-routing-system) + [One Template Engine](design/index#one-template-engine) + [What does “micro” mean?](design/index#what-does-micro-mean) + [Thread Locals](design/index#thread-locals) + [Async/await and ASGI support](design/index#async-await-and-asgi-support) + [What Flask is, What Flask is Not](design/index#what-flask-is-what-flask-is-not) * [Flask Extension Development](https://flask.palletsprojects.com/en/2.3.x/extensiondev/) + [Naming](https://flask.palletsprojects.com/en/2.3.x/extensiondev/#naming) + [The Extension Class and Initialization](https://flask.palletsprojects.com/en/2.3.x/extensiondev/#the-extension-class-and-initialization) + [Adding Behavior](https://flask.palletsprojects.com/en/2.3.x/extensiondev/#adding-behavior) + [Configuration Techniques](https://flask.palletsprojects.com/en/2.3.x/extensiondev/#configuration-techniques) + [Data During a Request](https://flask.palletsprojects.com/en/2.3.x/extensiondev/#data-during-a-request) + [Views and Models](https://flask.palletsprojects.com/en/2.3.x/extensiondev/#views-and-models) + [Recommended Extension Guidelines](https://flask.palletsprojects.com/en/2.3.x/extensiondev/#recommended-extension-guidelines) * [How to contribute to Flask](https://flask.palletsprojects.com/en/2.3.x/contributing/) + [Support questions](https://flask.palletsprojects.com/en/2.3.x/contributing/#support-questions) + [Reporting issues](https://flask.palletsprojects.com/en/2.3.x/contributing/#reporting-issues) + [Submitting 
patches](https://flask.palletsprojects.com/en/2.3.x/contributing/#submitting-patches) * [BSD-3-Clause License](https://flask.palletsprojects.com/en/2.3.x/license/) * [Changes](changes/index) + [Version 2.3.3](changes/index#version-2-3-3) + [Version 2.3.2](changes/index#version-2-3-2) + [Version 2.3.1](changes/index#version-2-3-1) + [Version 2.3.0](changes/index#version-2-3-0) + [Version 2.2.5](changes/index#version-2-2-5) + [Version 2.2.4](changes/index#version-2-2-4) + [Version 2.2.3](changes/index#version-2-2-3) + [Version 2.2.2](changes/index#version-2-2-2) + [Version 2.2.1](changes/index#version-2-2-1) + [Version 2.2.0](changes/index#version-2-2-0) + [Version 2.1.3](changes/index#version-2-1-3) + [Version 2.1.2](changes/index#version-2-1-2) + [Version 2.1.1](changes/index#version-2-1-1) + [Version 2.1.0](changes/index#version-2-1-0) + [Version 2.0.3](changes/index#version-2-0-3) + [Version 2.0.2](changes/index#version-2-0-2) + [Version 2.0.1](changes/index#version-2-0-1) + [Version 2.0.0](changes/index#version-2-0-0) + [Version 1.1.4](changes/index#version-1-1-4) + [Version 1.1.3](changes/index#version-1-1-3) + [Version 1.1.2](changes/index#version-1-1-2) + [Version 1.1.1](changes/index#version-1-1-1) + [Version 1.1.0](changes/index#version-1-1-0) + [Version 1.0.4](changes/index#version-1-0-4) + [Version 1.0.3](changes/index#version-1-0-3) + [Version 1.0.2](changes/index#version-1-0-2) + [Version 1.0.1](changes/index#version-1-0-1) + [Version 1.0](changes/index#version-1-0) + [Version 0.12.5](changes/index#version-0-12-5) + [Version 0.12.4](changes/index#version-0-12-4) + [Version 0.12.3](changes/index#version-0-12-3) + [Version 0.12.2](changes/index#version-0-12-2) + [Version 0.12.1](changes/index#version-0-12-1) + [Version 0.12](changes/index#version-0-12) + [Version 0.11.1](changes/index#version-0-11-1) + [Version 0.11](changes/index#version-0-11) + [Version 0.10.1](changes/index#version-0-10-1) + [Version 0.10](changes/index#version-0-10) + [Version 0.9](changes/index#version-0-9) + [Version 0.8.1](changes/index#version-0-8-1) + [Version 0.8](changes/index#version-0-8) + [Version 0.7.2](changes/index#version-0-7-2) + [Version 0.7.1](changes/index#version-0-7-1) + [Version 0.7](changes/index#version-0-7) + [Version 0.6.1](changes/index#version-0-6-1) + [Version 0.6](changes/index#version-0-6) + [Version 0.5.2](changes/index#version-0-5-2) + [Version 0.5.1](changes/index#version-0-5-1) + [Version 0.5](changes/index#version-0-5) + [Version 0.4](changes/index#version-0-4) + [Version 0.3.1](changes/index#version-0-3-1) + [Version 0.3](changes/index#version-0-3) + [Version 0.2](changes/index#version-0-2) + [Version 0.1](changes/index#version-0-1)

Installation
============

Python Version
--------------

We recommend using the latest version of Python. Flask supports Python 3.8 and newer.

Dependencies
------------

These distributions will be installed automatically when installing Flask.

* [Werkzeug](https://palletsprojects.com/p/werkzeug/) implements WSGI, the standard Python interface between applications and servers.
* [Jinja](https://palletsprojects.com/p/jinja/) is a template language that renders the pages your application serves.
* [MarkupSafe](https://palletsprojects.com/p/markupsafe/) comes with Jinja. It escapes untrusted input when rendering templates to avoid injection attacks.
* [ItsDangerous](https://palletsprojects.com/p/itsdangerous/) securely signs data to ensure its integrity. This is used to protect Flask’s session cookie.
* [Click](https://palletsprojects.com/p/click/) is a framework for writing command line applications. It provides the `flask` command and allows adding custom management commands.
* [Blinker](https://blinker.readthedocs.io/) provides support for [Signals](../signals/index).

### Optional dependencies

These distributions will not be installed automatically. Flask will detect and use them if you install them.

* [python-dotenv](https://github.com/theskumar/python-dotenv#readme) enables support for [Environment Variables From dotenv](../cli/index#dotenv) when running `flask` commands.
* [Watchdog](https://pythonhosted.org/watchdog/) provides a faster, more efficient reloader for the development server.

### greenlet

You may choose to use gevent or eventlet with your application. In this case, greenlet>=1.0 is required. When using PyPy, PyPy>=7.3.7 is required. These are not minimum supported versions; they only indicate the first versions that added necessary features. You should use the latest versions of each.

Virtual environments
--------------------

Use a virtual environment to manage the dependencies for your project, both in development and in production.

What problem does a virtual environment solve? The more Python projects you have, the more likely it is that you need to work with different versions of Python libraries, or even Python itself. Newer versions of libraries for one project can break compatibility in another project.

Virtual environments are independent groups of Python libraries, one for each project. Packages installed for one project will not affect other projects or the operating system’s packages.

Python comes bundled with the [`venv`](https://docs.python.org/3/library/venv.html#module-venv "(in Python v3.11)") module to create virtual environments.

### Create an environment

Create a project folder and a `.venv` folder within.

macOS/Linux:

```
$ mkdir myproject
$ cd myproject
$ python3 -m venv .venv
```

Windows:

```
> mkdir myproject
> cd myproject
> py -3 -m venv .venv
```

### Activate the environment

Before you work on your project, activate the corresponding environment.

macOS/Linux:

```
$ . .venv/bin/activate
```

Windows:

```
> .venv\Scripts\activate
```

Your shell prompt will change to show the name of the activated environment.

Install Flask
-------------

Within the activated environment, use the following command to install Flask:

```
$ pip install Flask
```

Flask is now installed. Check out the [Quickstart](../quickstart/index) or go to the [Documentation Overview](https://flask.palletsprojects.com/en/2.3.x/).

Testing Flask Applications
==========================

Flask provides utilities for testing an application. This documentation goes over techniques for working with different parts of the application in tests. We will use the [pytest](https://docs.pytest.org/) framework to set up and run our tests.

```
$ pip install pytest
```

The [tutorial](https://flask.palletsprojects.com/en/2.3.x/tutorial/) goes over how to write tests for 100% coverage of the sample Flaskr blog application. See [the tutorial on tests](https://flask.palletsprojects.com/en/2.3.x/tutorial/tests/) for a detailed explanation of specific tests for an application.
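Once pytest is installed, the whole suite can be run with a single command from the project root; the output shown here is abbreviated and illustrative:

```
$ pytest
tests/test_app.py ....    [100%]
```

pytest discovers tests automatically, using the naming conventions described next.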
Identifying Tests
-----------------

Tests are typically located in the `tests` folder. Tests are functions that start with `test_`, in Python modules that start with `test_`. Tests can also be further grouped in classes that start with `Test`.

It can be difficult to know what to test. Generally, try to test the code that you write, not the code of libraries that you use, since they are already tested. Try to extract complex behaviors as separate functions to test individually.

Fixtures
--------

Pytest *fixtures* allow writing pieces of code that are reusable across tests. A simple fixture returns a value, but a fixture can also do setup, yield a value, then do teardown. Fixtures for the application, test client, and CLI runner are shown below; they can be placed in `tests/conftest.py`.

If you’re using an [application factory](../patterns/appfactories/index), define an `app` fixture to create and configure an app instance. You can add code before and after the `yield` to set up and tear down other resources, such as creating and clearing a database.

If you’re not using a factory, you already have an app object you can import and configure directly. You can still use an `app` fixture to set up and tear down resources.

```
import pytest

from my_project import create_app


@pytest.fixture()
def app():
    app = create_app()
    app.config.update({
        "TESTING": True,
    })

    # other setup can go here

    yield app

    # clean up / reset resources here


@pytest.fixture()
def client(app):
    return app.test_client()


@pytest.fixture()
def runner(app):
    return app.test_cli_runner()
```

Sending Requests with the Test Client
-------------------------------------

The test client makes requests to the application without running a live server. Flask’s client extends [Werkzeug’s client](https://werkzeug.palletsprojects.com/en/2.3.x/test/ "(in Werkzeug v2.3.x)"); see those docs for additional information.

The `client` has methods that match the common HTTP request methods, such as `client.get()` and `client.post()`. They take many arguments for building the request; you can find the full documentation in [`EnvironBuilder`](https://werkzeug.palletsprojects.com/en/2.3.x/test/#werkzeug.test.EnvironBuilder "(in Werkzeug v2.3.x)"). Typically you’ll use `path`, `query_string`, `headers`, and `data` or `json`.

To make a request, call the method the request should use with the path to the route to test. A [`TestResponse`](https://werkzeug.palletsprojects.com/en/2.3.x/test/#werkzeug.test.TestResponse "(in Werkzeug v2.3.x)") is returned to examine the response data. It has all the usual properties of a response object. You’ll usually look at `response.data`, which is the bytes returned by the view. If you want to use text, Werkzeug 2.1 provides `response.text`, or use `response.get_data(as_text=True)`.

```
def test_request_example(client):
    response = client.get("/posts")
    assert b"<h2>Hello, World!</h2>" in response.data
```

Pass a dict `query_string={"key": "value", ...}` to set arguments in the query string (after the `?` in the URL). Pass a dict `headers={}` to set request headers.

To send a request body in a POST or PUT request, pass a value to `data`. If raw bytes are passed, that exact body is used. Usually, you’ll pass a dict to set form data.

### Form Data

To send form data, pass a dict to `data`. The `Content-Type` header will be set to `multipart/form-data` or `application/x-www-form-urlencoded` automatically. If a value is a file object opened for reading bytes (`"rb"` mode), it will be treated as an uploaded file.
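A plain form post without files is the simplest case; this sketch assumes a hypothetical `/auth/login` route and field names:

```
def test_login_form(client):
    # the dict values are sent as form fields in the request body
    response = client.post("/auth/login", data={
        "username": "flask",
        "password": "secret",
    })
    assert response.status_code == 200
```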
For file uploads, to change the detected filename and content type, pass a `(file, filename, content_type)` tuple. File objects will be closed after making the request, so they do not need to use the usual `with open() as f:` pattern.

It can be useful to store files in a `tests/resources` folder, then use `pathlib.Path` to get files relative to the current test file.

```
from pathlib import Path

# get the resources folder in the tests folder
resources = Path(__file__).parent / "resources"


def test_edit_user(client):
    response = client.post("/user/2/edit", data={
        "name": "Flask",
        "theme": "dark",
        "picture": (resources / "picture.png").open("rb"),
    })
    assert response.status_code == 200
```

### JSON Data

To send JSON data, pass an object to `json`. The `Content-Type` header will be set to `application/json` automatically.

Similarly, if the response contains JSON data, the `response.json` attribute will contain the deserialized object.

```
def test_json_data(client):
    response = client.post("/graphql", json={
        "query": """
            query User($id: String!) {
                user(id: $id) {
                    name
                    theme
                    picture_url
                }
            }
        """,
        "variables": {"id": 2},
    })
    assert response.json["data"]["user"]["name"] == "Flask"
```

Following Redirects
-------------------

By default, the client does not make additional requests if the response is a redirect. By passing `follow_redirects=True` to a request method, the client will continue to make requests until a non-redirect response is returned.

[`TestResponse.history`](https://werkzeug.palletsprojects.com/en/2.3.x/test/#werkzeug.test.TestResponse.history "(in Werkzeug v2.3.x)") is a tuple of the responses that led up to the final response. Each response has a [`request`](https://werkzeug.palletsprojects.com/en/2.3.x/test/#werkzeug.test.TestResponse.request "(in Werkzeug v2.3.x)") attribute which records the request that produced that response.

```
def test_logout_redirect(client):
    response = client.get("/logout", follow_redirects=True)
    # Check that there was one redirect response.
    assert len(response.history) == 1
    # Check that the second request was to the index page.
    assert response.request.path == "/index"
```

Accessing and Modifying the Session
-----------------------------------

To access Flask’s context variables, mainly [`session`](../api/index#flask.session "flask.session"), use the client in a `with` statement. The app and request context will remain active *after* making a request, until the `with` block ends.

```
from flask import session


def test_access_session(client):
    with client:
        client.post("/auth/login", data={"username": "flask"})

        # session is still accessible
        assert session["user_id"] == 1

    # session is no longer accessible
```

If you want to access or set a value in the session *before* making a request, use the client’s [`session_transaction()`](../api/index#flask.testing.FlaskClient.session_transaction "flask.testing.FlaskClient.session_transaction") method in a `with` statement. It returns a session object, and will save the session once the block ends.
```
from flask import session


def test_modify_session(client):
    with client.session_transaction() as session:
        # set a user id without going through the login route
        session["user_id"] = 1

    # session is saved now

    response = client.get("/users/me")
    assert response.json["username"] == "flask"
```

Running Commands with the CLI Runner
------------------------------------

Flask provides [`test_cli_runner()`](../api/index#flask.Flask.test_cli_runner "flask.Flask.test_cli_runner") to create a [`FlaskCliRunner`](../api/index#flask.testing.FlaskCliRunner "flask.testing.FlaskCliRunner"), which runs CLI commands in isolation and captures the output in a [`Result`](https://click.palletsprojects.com/en/8.1.x/api/#click.testing.Result "(in Click v8.1.x)") object. Flask’s runner extends [Click’s runner](https://click.palletsprojects.com/en/8.1.x/testing/ "(in Click v8.1.x)"); see those docs for additional information.

Use the runner’s [`invoke()`](../api/index#flask.testing.FlaskCliRunner.invoke "flask.testing.FlaskCliRunner.invoke") method to call commands in the same way they would be called with the `flask` command from the command line.

```
import click


@app.cli.command("hello")
@click.option("--name", default="World")
def hello_command(name):
    click.echo(f"Hello, {name}!")


def test_hello_command(runner):
    result = runner.invoke(args="hello")
    assert "World" in result.output

    result = runner.invoke(args=["hello", "--name", "Flask"])
    assert "Flask" in result.output
```

Tests that depend on an Active Context
--------------------------------------

You may have functions that are called from views or commands, that expect an active [application context](../appcontext/index) or [request context](../reqcontext/index) because they access `request`, `session`, or `current_app`. Rather than testing them by making a request or invoking the command, you can create and activate a context directly.

Use `with app.app_context()` to push an application context. For example, database extensions usually require an active app context to make queries.

```
def test_db_post_model(app):
    with app.app_context():
        post = db.session.query(Post).get(1)
```

Use `with app.test_request_context()` to push a request context. It takes the same arguments as the test client’s request methods.

```
def test_validate_user_edit(app):
    with app.test_request_context(
        "/user/2/edit", method="POST", data={"name": ""}
    ):
        # call a function that accesses `request`
        messages = validate_edit_user()

    assert messages["name"][0] == "Name cannot be empty."
```

Creating a test request context doesn’t run any of the Flask dispatching code, so `before_request` functions are not called. If you need to call these, usually it’s better to make a full request instead. However, it’s possible to call them manually.

```
from flask import g


def test_auth_token(app):
    with app.test_request_context("/user/2/edit", headers={"X-Auth-Token": "1"}):
        app.preprocess_request()
        assert g.user.name == "Flask"
```

Debugging Application Errors
============================

In Production
-------------

**Do not run the development server, or enable the built-in debugger, in a production environment.** The debugger allows executing arbitrary Python code from the browser. It’s protected by a pin, but that should not be relied on for security.
Use an error logging tool, such as Sentry, as described in [Error Logging Tools](../errorhandling/index#error-logging-tools), or enable logging and notifications as described in [Logging](../logging/index).

If you have access to the server, you could add some code to start an external debugger if `request.remote_addr` matches your IP. Some IDE debuggers also have a remote mode so breakpoints on the server can be interacted with locally. Only enable a debugger temporarily.

The Built-In Debugger
---------------------

The built-in Werkzeug development server provides a debugger which shows an interactive traceback in the browser when an unhandled error occurs during a request. This debugger should only be used during development.

![screenshot of debugger in action](https://flask.palletsprojects.com/en/2.3.x/_images/debugger.png)

Warning: The debugger allows executing arbitrary Python code from the browser. It is protected by a pin, but still represents a major security risk. Do not run the development server or debugger in a production environment.

The debugger is enabled by default when the development server is run in debug mode.

```
$ flask --app hello run --debug
```

When running from Python code, passing `debug=True` enables debug mode, which is mostly equivalent.

```
app.run(debug=True)
```

[Development Server](../server/index) and [Command Line Interface](../cli/index) have more information about running the debugger and debug mode. More information about the debugger can be found in the [Werkzeug documentation](https://werkzeug.palletsprojects.com/debug/).

External Debuggers
------------------

External debuggers, such as those provided by IDEs, can offer a more powerful debugging experience than the built-in debugger. They can also be used to step through code during a request before an error is raised, or if no error is raised. Some even have a remote mode so you can debug code running on another machine.

When using an external debugger, the app should still be in debug mode, otherwise Flask turns unhandled errors into generic 500 error pages. However, the built-in debugger and reloader should be disabled so they don’t interfere with the external debugger.

```
$ flask --app hello run --debug --no-debugger --no-reload
```

When running from Python:

```
app.run(debug=True, use_debugger=False, use_reloader=False)
```

Disabling these isn’t required; an external debugger will continue to work, with the following caveats.

* If the built-in debugger is not disabled, it will catch unhandled exceptions before the external debugger can.
* If the reloader is not disabled, it could cause an unexpected reload if code changes during a breakpoint.
* The development server will still catch unhandled exceptions if the built-in debugger is disabled, otherwise it would crash on any error. If you want that (and usually you don’t), pass `passthrough_errors=True` to `app.run`.

```
app.run(
    debug=True,
    passthrough_errors=True,
    use_debugger=False,
    use_reloader=False
)
```

Quickstart
==========

Eager to get started? This page gives a good introduction to Flask. Follow [Installation](../installation/index) to set up a project and install Flask first.

A Minimal Application
---------------------

A minimal Flask application looks something like this:

```
from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello_world():
    return "<p>Hello, World!</p>"
```

So what did that code do?
1. First we imported the [`Flask`](../api/index#flask.Flask "flask.Flask") class. An instance of this class will be our WSGI application.
2. Next we create an instance of this class. The first argument is the name of the application’s module or package. `__name__` is a convenient shortcut for this that is appropriate for most cases. This is needed so that Flask knows where to look for resources such as templates and static files.
3. We then use the [`route()`](../api/index#flask.Flask.route "flask.Flask.route") decorator to tell Flask what URL should trigger our function.
4. The function returns the message we want to display in the user’s browser. The default content type is HTML, so HTML in the string will be rendered by the browser.

Save it as `hello.py` or something similar. Make sure not to call your application `flask.py` because this would conflict with Flask itself.

To run the application, use the `flask` command or `python -m flask`. You need to tell Flask where your application is with the `--app` option.

```
$ flask --app hello run
 * Serving Flask app 'hello'
 * Running on http://127.0.0.1:5000 (Press CTRL+C to quit)
```

Application Discovery Behavior: As a shortcut, if the file is named `app.py` or `wsgi.py`, you don’t have to use `--app`. See [Command Line Interface](../cli/index) for more details.

This launches a very simple built-in server, which is good enough for testing but probably not what you want to use in production. For deployment options see [Deploying to Production](../deploying/index).

Now head over to <http://127.0.0.1:5000/>, and you should see your hello world greeting.

If another program is already using port 5000, you’ll see `OSError: [Errno 98]` or `OSError: [WinError 10013]` when the server tries to start. See [Address already in use](../server/index#address-already-in-use) for how to handle that.

Externally Visible Server: If you run the server you will notice that the server is only accessible from your own computer, not from any other in the network. This is the default because in debugging mode a user of the application can execute arbitrary Python code on your computer.

If you have the debugger disabled or trust the users on your network, you can make the server publicly available simply by adding `--host=0.0.0.0` to the command line:

```
$ flask run --host=0.0.0.0
```

This tells your operating system to listen on all public IPs.

Debug Mode
----------

The `flask run` command can do more than just start the development server. By enabling debug mode, the server will automatically reload if code changes, and will show an interactive debugger in the browser if an error occurs during a request.

![The interactive debugger in action.](https://flask.palletsprojects.com/en/2.3.x/_images/debugger.png)

Warning: The debugger allows executing arbitrary Python code from the browser. It is protected by a pin, but still represents a major security risk. Do not run the development server or debugger in a production environment.

To enable debug mode, use the `--debug` option.

```
$ flask --app hello run --debug
 * Serving Flask app 'hello'
 * Debug mode: on
 * Running on http://127.0.0.1:5000 (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: nnn-nnn-nnn
```

See also:

* [Development Server](../server/index) and [Command Line Interface](../cli/index) for information about running in debug mode.
* [Debugging Application Errors](../debugging/index) for information about using the built-in debugger and other debuggers.
* [Logging](../logging/index) and [Handling Application Errors](../errorhandling/index) to log errors and display nice error pages.

HTML Escaping
-------------

When returning HTML (the default response type in Flask), any user-provided values rendered in the output must be escaped to protect from injection attacks. HTML templates rendered with Jinja, introduced later, will do this automatically.

`escape()`, shown here, can be used manually. It is omitted in most examples for brevity, but you should always be aware of how you’re using untrusted data.

```
from markupsafe import escape


@app.route("/<name>")
def hello(name):
    return f"Hello, {escape(name)}!"
```

If a user managed to submit the name `<script>alert("bad")</script>`, escaping causes it to be rendered as text, rather than running the script in the user’s browser.

`<name>` in the route captures a value from the URL and passes it to the view function. These variable rules are explained below.

Routing
-------

Modern web applications use meaningful URLs to help users. Users are more likely to like a page and come back if it uses a meaningful URL they can remember and type in to visit the page directly.

Use the [`route()`](../api/index#flask.Flask.route "flask.Flask.route") decorator to bind a function to a URL.

```
@app.route('/')
def index():
    return 'Index Page'


@app.route('/hello')
def hello():
    return 'Hello, World'
```

You can do more! You can make parts of the URL dynamic and attach multiple rules to a function.

### Variable Rules

You can add variable sections to a URL by marking sections with `<variable_name>`. Your function then receives the `<variable_name>` as a keyword argument. Optionally, you can use a converter to specify the type of the argument like `<converter:variable_name>`.

```
from markupsafe import escape


@app.route('/user/<username>')
def show_user_profile(username):
    # show the user profile for that user
    return f'User {escape(username)}'


@app.route('/post/<int:post_id>')
def show_post(post_id):
    # show the post with the given id, the id is an integer
    return f'Post {post_id}'


@app.route('/path/<path:subpath>')
def show_subpath(subpath):
    # show the subpath after /path/
    return f'Subpath {escape(subpath)}'
```

Converter types:

| Converter | Description |
| --- | --- |
| `string` | (default) accepts any text without a slash |
| `int` | accepts positive integers |
| `float` | accepts positive floating point values |
| `path` | like `string` but also accepts slashes |
| `uuid` | accepts UUID strings |

### Unique URLs / Redirection Behavior

The following two rules differ in their use of a trailing slash.

```
@app.route('/projects/')
def projects():
    return 'The project page'


@app.route('/about')
def about():
    return 'The about page'
```

The canonical URL for the `projects` endpoint has a trailing slash. It’s similar to a folder in a file system. If you access the URL without a trailing slash (`/projects`), Flask redirects you to the canonical URL with the trailing slash (`/projects/`).

The canonical URL for the `about` endpoint does not have a trailing slash. It’s similar to the pathname of a file. Accessing the URL with a trailing slash (`/about/`) produces a 404 “Not Found” error. This helps keep URLs unique for these resources, which helps search engines avoid indexing the same page twice.

### URL Building

To build a URL to a specific function, use the [`url_for()`](../api/index#flask.url_for "flask.url_for") function.
It accepts the name of the function as its first argument and any number of keyword arguments, each corresponding to a variable part of the URL rule. Unknown variable parts are appended to the URL as query parameters.

Why would you want to build URLs using the URL reversing function [`url_for()`](../api/index#flask.url_for "flask.url_for") instead of hard-coding them into your templates?

1. Reversing is often more descriptive than hard-coding the URLs.
2. You can change your URLs in one go instead of needing to remember to manually change hard-coded URLs.
3. URL building handles escaping of special characters transparently.
4. The generated paths are always absolute, avoiding unexpected behavior of relative paths in browsers.
5. If your application is placed outside the URL root, for example, in `/myapplication` instead of `/`, [`url_for()`](../api/index#flask.url_for "flask.url_for") properly handles that for you.

For example, here we use the [`test_request_context()`](../api/index#flask.Flask.test_request_context "flask.Flask.test_request_context") method to try out [`url_for()`](../api/index#flask.url_for "flask.url_for"). [`test_request_context()`](../api/index#flask.Flask.test_request_context "flask.Flask.test_request_context") tells Flask to behave as though it’s handling a request even while we use a Python shell. See [Context Locals](#context-locals).

```
from flask import url_for


@app.route('/')
def index():
    return 'index'


@app.route('/login')
def login():
    return 'login'


@app.route('/user/<username>')
def profile(username):
    return f'{username}\'s profile'


with app.test_request_context():
    print(url_for('index'))
    print(url_for('login'))
    print(url_for('login', next='/'))
    print(url_for('profile', username='John Doe'))
```

```
/
/login
/login?next=/
/user/John%20Doe
```

### HTTP Methods

Web applications use different HTTP methods when accessing URLs. You should familiarize yourself with the HTTP methods as you work with Flask. By default, a route only answers to `GET` requests. You can use the `methods` argument of the [`route()`](../api/index#flask.Flask.route "flask.Flask.route") decorator to handle different HTTP methods.

```
from flask import request


@app.route('/login', methods=['GET', 'POST'])
def login():
    if request.method == 'POST':
        return do_the_login()
    else:
        return show_the_login_form()
```

The example above keeps all methods for the route within one function, which can be useful if each part uses some common data.

You can also separate views for different methods into different functions. Flask provides a shortcut for decorating such routes with [`get()`](../api/index#flask.Flask.get "flask.Flask.get"), [`post()`](../api/index#flask.Flask.post "flask.Flask.post"), etc. for each common HTTP method.

```
@app.get('/login')
def login_get():
    return show_the_login_form()


@app.post('/login')
def login_post():
    return do_the_login()
```

If `GET` is present, Flask automatically adds support for the `HEAD` method and handles `HEAD` requests according to the [HTTP RFC](https://www.ietf.org/rfc/rfc2068.txt). Likewise, `OPTIONS` is automatically implemented for you.

Static Files
------------

Dynamic web applications also need static files. That’s usually where the CSS and JavaScript files come from. Ideally your web server is configured to serve them for you, but during development Flask can do that as well. Just create a folder called `static` in your package or next to your module and it will be available at `/static` on the application.
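For example, with a module-style application, the layout might look like this (the file names are illustrative):

```
/hello.py
/static
    /style.css
```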
To generate URLs for static files, use the special `'static'` endpoint name:

```
url_for('static', filename='style.css')
```

The file has to be stored on the filesystem as `static/style.css`.

Rendering Templates
-------------------

Generating HTML from within Python is not fun, and actually pretty cumbersome because you have to do the HTML escaping on your own to keep the application secure. Because of that Flask configures the [Jinja2](https://palletsprojects.com/p/jinja/) template engine for you automatically.

Templates can be used to generate any type of text file. For web applications, you’ll primarily be generating HTML pages, but you can also generate markdown, plain text for emails, and anything else. For a reference to HTML, CSS, and other web APIs, use the [MDN Web Docs](https://developer.mozilla.org/).

To render a template you can use the [`render_template()`](../api/index#flask.render_template "flask.render_template") method. All you have to do is provide the name of the template and the variables you want to pass to the template engine as keyword arguments. Here’s a simple example of how to render a template:

```
from flask import render_template


@app.route('/hello/')
@app.route('/hello/<name>')
def hello(name=None):
    return render_template('hello.html', name=name)
```

Flask will look for templates in the `templates` folder. So if your application is a module, this folder is next to that module; if it’s a package, it’s actually inside your package:

**Case 1**: a module:

```
/application.py
/templates
    /hello.html
```

**Case 2**: a package:

```
/application
    /__init__.py
    /templates
        /hello.html
```

For templates you can use the full power of Jinja2 templates. Head over to the official [Jinja2 Template Documentation](https://jinja.palletsprojects.com/templates/) for more information.

Here is an example template:

```
<!doctype html>
<title>Hello from Flask</title>
{% if name %}
  <h1>Hello {{ name }}!</h1>
{% else %}
  <h1>Hello, World!</h1>
{% endif %}
```

Inside templates you also have access to the [`config`](../api/index#flask.Flask.config "flask.Flask.config"), [`request`](../api/index#flask.request "flask.request"), [`session`](../api/index#flask.session "flask.session") and [`g`](../api/index#flask.g "flask.g") [[1]](#id3) objects as well as the [`url_for()`](../api/index#flask.url_for "flask.url_for") and [`get_flashed_messages()`](../api/index#flask.get_flashed_messages "flask.get_flashed_messages") functions.

Templates are especially useful if inheritance is used. If you want to know how that works, see [Template Inheritance](../patterns/templateinheritance/index). Basically template inheritance makes it possible to keep certain elements on each page (like header, navigation and footer).

Automatic escaping is enabled, so if `name` contains HTML it will be escaped automatically. If you can trust a variable and you know that it will be safe HTML (for example because it came from a module that converts wiki markup to HTML) you can mark it as safe by using the `Markup` class or by using the `|safe` filter in the template. Head over to the Jinja2 documentation for more examples.
Here is a basic introduction to how the `Markup` class works:

```
>>> from markupsafe import Markup
>>> Markup('<strong>Hello %s!</strong>') % '<blink>hacker</blink>'
Markup('<strong>Hello &lt;blink&gt;hacker&lt;/blink&gt;!</strong>')
>>> Markup.escape('<blink>hacker</blink>')
Markup('&lt;blink&gt;hacker&lt;/blink&gt;')
>>> Markup('<em>Marked up</em> &raquo; HTML').striptags()
'Marked up » HTML'
```

Changelog: Changed in version 0.5: Autoescaping is no longer enabled for all templates. The following extensions for templates trigger autoescaping: `.html`, `.htm`, `.xml`, `.xhtml`. Templates loaded from a string will have autoescaping disabled.

[[1](#id2)] Unsure what that [`g`](../api/index#flask.g "flask.g") object is? It’s something in which you can store information for your own needs. See the documentation for [`flask.g`](../api/index#flask.g "flask.g") and [Using SQLite 3 with Flask](../patterns/sqlite3/index).

Accessing Request Data
----------------------

For web applications it’s crucial to react to the data a client sends to the server. In Flask this information is provided by the global [`request`](../api/index#flask.request "flask.request") object. If you have some experience with Python you might be wondering how that object can be global and how Flask manages to still be threadsafe. The answer is context locals:

### Context Locals

Insider Information: If you want to understand how that works and how you can implement tests with context locals, read this section; otherwise just skip it.

Certain objects in Flask are global objects, but not of the usual kind. These objects are actually proxies to objects that are local to a specific context. What a mouthful. But that is actually quite easy to understand.

Imagine the context being the handling thread. A request comes in and the web server decides to spawn a new thread (or something else; the underlying object is capable of dealing with concurrency systems other than threads). When Flask starts its internal request handling it figures out that the current thread is the active context and binds the current application and the WSGI environments to that context (thread). It does that in an intelligent way so that one application can invoke another application without breaking.

So what does this mean to you? Basically you can completely ignore that this is the case unless you are doing something like unit testing. You will notice that code which depends on a request object will suddenly break because there is no request object. The solution is creating a request object yourself and binding it to the context. The easiest solution for unit testing is to use the [`test_request_context()`](../api/index#flask.Flask.test_request_context "flask.Flask.test_request_context") context manager. In combination with the `with` statement it will bind a test request so that you can interact with it.
Here is an example:

```
from flask import request

with app.test_request_context('/hello', method='POST'):
    # now you can do something with the request until the
    # end of the with block, such as basic assertions:
    assert request.path == '/hello'
    assert request.method == 'POST'
```

The other possibility is passing a whole WSGI environment to the [`request_context()`](../api/index#flask.Flask.request_context "flask.Flask.request_context") method:

```
with app.request_context(environ):
    assert request.method == 'POST'
```

### The Request Object

The request object is documented in the API section and we will not cover it here in detail (see [`Request`](../api/index#flask.Request "flask.Request")). Here is a broad overview of some of the most common operations. First of all you have to import it from the `flask` module:

```
from flask import request
```

The current request method is available by using the [`method`](../api/index#flask.Request.method "flask.Request.method") attribute. To access form data (data transmitted in a `POST` or `PUT` request) you can use the [`form`](../api/index#flask.Request.form "flask.Request.form") attribute. Here is a full example of the two attributes mentioned above:

```
@app.route('/login', methods=['POST', 'GET'])
def login():
    error = None
    if request.method == 'POST':
        if valid_login(request.form['username'],
                       request.form['password']):
            return log_the_user_in(request.form['username'])
        else:
            error = 'Invalid username/password'
    # the code below is executed if the request method
    # was GET or the credentials were invalid
    return render_template('login.html', error=error)
```

What happens if the key does not exist in the `form` attribute? In that case a special [`KeyError`](https://docs.python.org/3/library/exceptions.html#KeyError "(in Python v3.11)") is raised. You can catch it like a standard [`KeyError`](https://docs.python.org/3/library/exceptions.html#KeyError "(in Python v3.11)"), but if you don’t do that, an HTTP 400 Bad Request error page is shown instead. So for many situations you don’t have to deal with that problem.

To access parameters submitted in the URL (`?key=value`) you can use the [`args`](../api/index#flask.Request.args "flask.Request.args") attribute:

```
searchword = request.args.get('key', '')
```

We recommend accessing URL parameters with `get` or by catching the [`KeyError`](https://docs.python.org/3/library/exceptions.html#KeyError "(in Python v3.11)") because users might change the URL, and presenting them with a 400 Bad Request page in that case is not user friendly.

For a full list of methods and attributes of the request object, head over to the [`Request`](../api/index#flask.Request "flask.Request") documentation.

### File Uploads

You can handle uploaded files with Flask easily. Just make sure to set the `enctype="multipart/form-data"` attribute on your HTML form; otherwise the browser will not transmit your files at all.

Uploaded files are stored in memory or at a temporary location on the filesystem. You can access those files by looking at the `files` attribute on the request object. Each uploaded file is stored in that dictionary. It behaves just like a standard Python `file` object, but it also has a [`save()`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.FileStorage.save "(in Werkzeug v2.3.x)") method that allows you to store that file on the filesystem of the server.
Here is a simple example showing how that works:

```
from flask import request


@app.route('/upload', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        f = request.files['the_file']
        f.save('/var/www/uploads/uploaded_file.txt')
    ...
```

If you want to know how the file was named on the client before it was uploaded to your application, you can access the [`filename`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.FileStorage.filename "(in Werkzeug v2.3.x)") attribute. However, please keep in mind that this value can be forged, so never trust it. If you want to use the filename of the client to store the file on the server, pass it through the [`secure_filename()`](https://werkzeug.palletsprojects.com/en/2.3.x/utils/#werkzeug.utils.secure_filename "(in Werkzeug v2.3.x)") function that Werkzeug provides for you:

```
from werkzeug.utils import secure_filename


@app.route('/upload', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        file = request.files['the_file']
        file.save(f"/var/www/uploads/{secure_filename(file.filename)}")
    ...
```

For some better examples, see [Uploading Files](../patterns/fileuploads/index).

### Cookies

To access cookies you can use the [`cookies`](../api/index#flask.Request.cookies "flask.Request.cookies") attribute. To set cookies you can use the [`set_cookie`](../api/index#flask.Response.set_cookie "flask.Response.set_cookie") method of response objects. The [`cookies`](../api/index#flask.Request.cookies "flask.Request.cookies") attribute of request objects is a dictionary with all the cookies the client transmits. If you want to use sessions, do not use the cookies directly but instead use the [Sessions](#sessions) in Flask, which add some security on top of cookies for you.

Reading cookies:

```
from flask import request


@app.route('/')
def index():
    username = request.cookies.get('username')
    # use cookies.get(key) instead of cookies[key] to not get a
    # KeyError if the cookie is missing.
```

Storing cookies:

```
from flask import make_response


@app.route('/')
def index():
    resp = make_response(render_template(...))
    resp.set_cookie('username', 'the username')
    return resp
```

Note that cookies are set on response objects. Since you normally just return strings from the view functions, Flask will convert them into response objects for you. If you explicitly want to do that you can use the [`make_response()`](../api/index#flask.make_response "flask.make_response") function and then modify it.

Sometimes you might want to set a cookie at a point where the response object does not exist yet. This is possible by utilizing the [Deferred Request Callbacks](../patterns/deferredcallbacks/index) pattern. For this also see [About Responses](#about-responses).

Redirects and Errors
--------------------

To redirect a user to another endpoint, use the [`redirect()`](../api/index#flask.redirect "flask.redirect") function; to abort a request early with an error code, use the [`abort()`](../api/index#flask.abort "flask.abort") function:

```
from flask import abort, redirect, url_for


@app.route('/')
def index():
    return redirect(url_for('login'))


@app.route('/login')
def login():
    abort(401)
    this_is_never_executed()
```

This is a rather pointless example because a user will be redirected from the index to a page they cannot access (401 means access denied), but it shows how that works.

By default a black and white error page is shown for each error code.
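`abort()` can also carry a short message; this sketch (the route and lookup helper are hypothetical) raises a 404 whose description is shown on the default error page:

```
from flask import abort


@app.route('/user/<int:user_id>')
def show_user(user_id):
    user = find_user(user_id)  # hypothetical lookup helper
    if user is None:
        # raises NotFound with a custom description
        abort(404, description="No user with that id.")
    return f'User {user.name}'
```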
If you want to customize the error page, you can use the [`errorhandler()`](../api/index#flask.Flask.errorhandler "flask.Flask.errorhandler") decorator:

```
from flask import render_template


@app.errorhandler(404)
def page_not_found(error):
    return render_template('page_not_found.html'), 404
```

Note the `404` after the [`render_template()`](../api/index#flask.render_template "flask.render_template") call. This tells Flask that the status code of that page should be 404, which means not found. By default 200 is assumed, which translates to: all went well.

See [Handling Application Errors](../errorhandling/index) for more details.

About Responses
---------------

The return value from a view function is automatically converted into a response object for you. If the return value is a string it’s converted into a response object with the string as response body, a `200 OK` status code and a *text/html* mimetype. If the return value is a dict or list, `jsonify()` is called to produce a response. The logic that Flask applies to converting return values into response objects is as follows:

1. If a response object of the correct type is returned it’s directly returned from the view.
2. If it’s a string, a response object is created with that data and the default parameters.
3. If it’s an iterator or generator returning strings or bytes, it is treated as a streaming response.
4. If it’s a dict or list, a response object is created using [`jsonify()`](../api/index#flask.json.jsonify "flask.json.jsonify").
5. If a tuple is returned the items in the tuple can provide extra information. Such tuples have to be in the form `(response, status)`, `(response, headers)`, or `(response, status, headers)`. The `status` value will override the status code and `headers` can be a list or dictionary of additional header values.
6. If none of that works, Flask will assume the return value is a valid WSGI application and convert that into a response object.

If you want to get hold of the resulting response object inside the view you can use the [`make_response()`](../api/index#flask.make_response "flask.make_response") function.

Imagine you have a view like this:

```
from flask import render_template


@app.errorhandler(404)
def not_found(error):
    return render_template('error.html'), 404
```

You just need to wrap the return expression with [`make_response()`](../api/index#flask.make_response "flask.make_response") and get the response object to modify it, then return it:

```
from flask import make_response


@app.errorhandler(404)
def not_found(error):
    resp = make_response(render_template('error.html'), 404)
    resp.headers['X-Something'] = 'A value'
    return resp
```

### APIs with JSON

A common response format when writing an API is JSON. It’s easy to get started writing such an API with Flask. If you return a `dict` or `list` from a view, it will be converted to a JSON response.

```
@app.route("/me")
def me_api():
    user = get_current_user()
    return {
        "username": user.username,
        "theme": user.theme,
        "image": url_for("user_image", filename=user.image),
    }


@app.route("/users")
def users_api():
    users = get_all_users()
    return [user.to_json() for user in users]
```

This is a shortcut to passing the data to the [`jsonify()`](../api/index#flask.json.jsonify "flask.json.jsonify") function, which will serialize any supported JSON data type. That means that all the data in the dict or list must be JSON serializable.
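Dict and list return values also combine with the tuple form described above, which is convenient for API error responses; a small illustrative sketch:

```
@app.route("/api/items/<int:item_id>")
def item_api(item_id):
    item = find_item(item_id)  # hypothetical lookup
    if item is None:
        # the dict is serialized with jsonify(); 404 overrides the status
        return {"error": "item not found"}, 404
    return {"id": item_id, "name": item.name}
```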
For complex types such as database models, you’ll want to use a serialization library to convert the data to valid JSON types first. There are many serialization libraries and Flask API extensions maintained by the community that support more complex applications.

Sessions
--------

In addition to the request object there is also a second object called [`session`](../api/index#flask.session "flask.session") which allows you to store information specific to a user from one request to the next. This is implemented on top of cookies for you and signs the cookies cryptographically. What this means is that the user could look at the contents of your cookie but not modify it, unless they know the secret key used for signing.

In order to use sessions you have to set a secret key. Here is how sessions work:

```
from flask import session

# Set the secret key to some random bytes. Keep this really secret!
app.secret_key = b'_5#y2L"F4Q8z\n\xec]/'


@app.route('/')
def index():
    if 'username' in session:
        return f'Logged in as {session["username"]}'
    return 'You are not logged in'


@app.route('/login', methods=['GET', 'POST'])
def login():
    if request.method == 'POST':
        session['username'] = request.form['username']
        return redirect(url_for('index'))
    return '''
        <form method="post">
            <p><input type=text name=username>
            <p><input type=submit value=Login>
        </form>
    '''


@app.route('/logout')
def logout():
    # remove the username from the session if it's there
    session.pop('username', None)
    return redirect(url_for('index'))
```

How to generate good secret keys: A secret key should be as random as possible. Your operating system has ways to generate pretty random data based on a cryptographic random generator. Use the following command to quickly generate a value for `Flask.secret_key` (or [`SECRET_KEY`](../config/index#SECRET_KEY "SECRET_KEY")):

```
$ python -c 'import secrets; print(secrets.token_hex())'
'192b9bdd22ab9ed4d12e236c78afcb9a393ec15f71bbf5dc987d54727823bcbf'
```

A note on cookie-based sessions: Flask will take the values you put into the session object and serialize them into a cookie. If you are finding some values do not persist across requests, cookies are indeed enabled, and you are not getting a clear error message, check the size of the cookie in your page responses compared to the size supported by web browsers.

Besides the default client-side based sessions, if you want to handle sessions on the server-side instead, there are several Flask extensions that support this.

Message Flashing
----------------

Good applications and user interfaces are all about feedback. If the user does not get enough feedback they will probably end up hating the application. Flask provides a really simple way to give feedback to a user with the flashing system. The flashing system basically makes it possible to record a message at the end of a request and access it on the next (and only the next) request. This is usually combined with a layout template to expose the message. To flash a message use the [`flash()`](../api/index#flask.flash "flask.flash") method; to get hold of the messages you can use [`get_flashed_messages()`](../api/index#flask.get_flashed_messages "flask.get_flashed_messages"), which is also available in the templates. See [Message Flashing](../patterns/flashing/index) for a full example.

Logging
-------

Changelog: New in version 0.3.

Sometimes you might be in a situation where you deal with data that should be correct, but actually is not. For example you may have some client-side code that sends an HTTP request to the server but it’s obviously malformed.
This might be caused by a user tampering with the data, or the client code failing. Most of the time it’s okay to reply with `400 Bad Request` in that situation, but sometimes that won’t do and the code has to continue working.

You may still want to log that something fishy happened. This is where loggers come in handy. As of Flask 0.3 a logger is preconfigured for you to use.

Here are some example log calls:

```
app.logger.debug('A value for debugging')
app.logger.warning('A warning occurred (%d apples)', 42)
app.logger.error('An error occurred')
```

The attached [`logger`](../api/index#flask.Flask.logger "flask.Flask.logger") is a standard logging [`Logger`](https://docs.python.org/3/library/logging.html#logging.Logger "(in Python v3.11)"), so head over to the official [`logging`](https://docs.python.org/3/library/logging.html#module-logging "(in Python v3.11)") docs for more information. See [Handling Application Errors](../errorhandling/index).

Hooking in WSGI Middleware
--------------------------

To add WSGI middleware to your Flask application, wrap the application’s `wsgi_app` attribute. For example, to apply Werkzeug’s [`ProxyFix`](https://werkzeug.palletsprojects.com/en/2.3.x/middleware/proxy_fix/#werkzeug.middleware.proxy_fix.ProxyFix "(in Werkzeug v2.3.x)") middleware for running behind Nginx:

```
from werkzeug.middleware.proxy_fix import ProxyFix

app.wsgi_app = ProxyFix(app.wsgi_app)
```

Wrapping `app.wsgi_app` instead of `app` means that `app` still points at your Flask application, not at the middleware, so you can continue to use and configure `app` directly.

Using Flask Extensions
----------------------

Extensions are packages that help you accomplish common tasks. For example, Flask-SQLAlchemy provides SQLAlchemy support that makes it simple to use with Flask.

For more on Flask extensions, see [Extensions](../extensions/index).

Deploying to a Web Server
-------------------------

Ready to deploy your new Flask app? See [Deploying to Production](../deploying/index).

Patterns for Flask
==================

Certain features and interactions are common enough that you will find them in most web applications. For example, many applications use a relational database and user authentication. They will open a database connection at the beginning of the request and get the information for the logged in user. At the end of the request, the database connection is closed. These types of patterns may be a bit outside the scope of Flask itself, but Flask makes it easy to implement them. Some common patterns are collected in the following pages.
* [Large Applications as Packages](packages/index) + [Simple Packages](packages/index#simple-packages) + [Working with Blueprints](packages/index#working-with-blueprints) * [Application Factories](appfactories/index) + [Basic Factories](appfactories/index#basic-factories) + [Factories & Extensions](appfactories/index#factories-extensions) + [Using Applications](appfactories/index#using-applications) + [Factory Improvements](appfactories/index#factory-improvements) * [Application Dispatching](appdispatch/index) + [Working with this Document](appdispatch/index#working-with-this-document) + [Combining Applications](appdispatch/index#combining-applications) + [Dispatch by Subdomain](appdispatch/index#dispatch-by-subdomain) + [Dispatch by Path](appdispatch/index#dispatch-by-path) * [Using URL Processors](urlprocessors/index) + [Internationalized Application URLs](urlprocessors/index#internationalized-application-urls) + [Internationalized Blueprint URLs](urlprocessors/index#internationalized-blueprint-urls) * [Using SQLite 3 with Flask](sqlite3/index) + [Connect on Demand](sqlite3/index#connect-on-demand) + [Easy Querying](sqlite3/index#easy-querying) + [Initial Schemas](sqlite3/index#initial-schemas) * [SQLAlchemy in Flask](sqlalchemy/index) + [Flask-SQLAlchemy Extension](sqlalchemy/index#flask-sqlalchemy-extension) + [Declarative](sqlalchemy/index#declarative) + [Manual Object Relational Mapping](sqlalchemy/index#manual-object-relational-mapping) + [SQL Abstraction Layer](sqlalchemy/index#sql-abstraction-layer) * [Uploading Files](fileuploads/index) + [A Gentle Introduction](fileuploads/index#a-gentle-introduction) + [Improving Uploads](fileuploads/index#improving-uploads) + [Upload Progress Bars](fileuploads/index#upload-progress-bars) + [An Easier Solution](fileuploads/index#an-easier-solution) * [Caching](caching/index) * [View Decorators](viewdecorators/index) + [Login Required Decorator](viewdecorators/index#login-required-decorator) + [Caching Decorator](viewdecorators/index#caching-decorator) + [Templating Decorator](viewdecorators/index#templating-decorator) + [Endpoint Decorator](viewdecorators/index#endpoint-decorator) * [Form Validation with WTForms](wtforms/index) + [The Forms](wtforms/index#the-forms) + [In the View](wtforms/index#in-the-view) + [Forms in Templates](wtforms/index#forms-in-templates) * [Template Inheritance](templateinheritance/index) + [Base Template](templateinheritance/index#base-template) + [Child Template](templateinheritance/index#child-template) * [Message Flashing](flashing/index) + [Simple Flashing](flashing/index#simple-flashing) + [Flashing With Categories](flashing/index#flashing-with-categories) + [Filtering Flash Messages](flashing/index#filtering-flash-messages) * [JavaScript, `fetch`, and JSON](javascript/index) + [Rendering Templates](javascript/index#rendering-templates) + [Generating URLs](javascript/index#generating-urls) + [Making a Request with `fetch`](javascript/index#making-a-request-with-fetch) + [Following Redirects](javascript/index#following-redirects) + [Replacing Content](javascript/index#replacing-content) + [Return JSON from Views](javascript/index#return-json-from-views) + [Receiving JSON in Views](javascript/index#receiving-json-in-views) * [Lazily Loading Views](lazyloading/index) + [Converting to Centralized URL Map](lazyloading/index#converting-to-centralized-url-map) + [Loading Late](lazyloading/index#loading-late) * [MongoDB with MongoEngine](mongoengine/index) + [Configuration](mongoengine/index#configuration) + [Mapping 
Documents](mongoengine/index#mapping-documents) + [Creating Data](mongoengine/index#creating-data) + [Queries](mongoengine/index#queries) + [Documentation](mongoengine/index#documentation) * [Adding a favicon](favicon/index) + [See also](favicon/index#see-also) * [Streaming Contents](streaming/index) + [Basic Usage](streaming/index#basic-usage) + [Streaming from Templates](streaming/index#streaming-from-templates) + [Streaming with Context](streaming/index#streaming-with-context) * [Deferred Request Callbacks](deferredcallbacks/index) * [Adding HTTP Method Overrides](methodoverrides/index) * [Request Content Checksums](requestchecksum/index) * [Background Tasks with Celery](celery/index) + [Install](celery/index#install) + [Integrate Celery with Flask](celery/index#integrate-celery-with-flask) + [Application Factory](celery/index#application-factory) + [Defining Tasks](celery/index#defining-tasks) + [Calling Tasks](celery/index#calling-tasks) + [Getting Results](celery/index#getting-results) + [Passing Data to Tasks](celery/index#passing-data-to-tasks) * [Subclassing Flask](subclassing/index) * [Single-Page Applications](singlepageapplications/index)

Templates
=========

Flask leverages Jinja2 as its template engine. You are obviously free to use a different template engine, but you still have to install Jinja2 to run Flask itself. This requirement is necessary to enable rich extensions. An extension can depend on Jinja2 being present.

This section only gives a very quick introduction into how Jinja2 is integrated into Flask. If you want information on the template engine’s syntax itself, head over to the official [Jinja2 Template Documentation](https://jinja.palletsprojects.com/templates/) for more information.

Jinja Setup
-----------

Unless customized, Jinja2 is configured by Flask as follows:

* autoescaping is enabled for all templates ending in `.html`, `.htm`, `.xml`, `.xhtml`, as well as `.svg` when using `render_template()`.
* autoescaping is enabled for all strings when using `render_template_string()`.
* a template has the ability to opt in or out of autoescaping with the `{% autoescape %}` tag.
* Flask inserts a couple of global functions and helpers into the Jinja2 context, in addition to the values that are present by default.

Standard Context
----------------

The following global variables are available within Jinja2 templates by default:

config: The current configuration object ([`flask.Flask.config`](../api/index#flask.Flask.config "flask.Flask.config")). (Changelog: Changed in version 0.10: This is now always available, even in imported templates. New in version 0.6.)

request: The current request object ([`flask.request`](../api/index#flask.request "flask.request")). This variable is unavailable if the template was rendered without an active request context.

session: The current session object ([`flask.session`](../api/index#flask.session "flask.session")). This variable is unavailable if the template was rendered without an active request context.

g: The request-bound object for global variables ([`flask.g`](../api/index#flask.g "flask.g")). This variable is unavailable if the template was rendered without an active request context.

url_for(): The [`flask.url_for()`](../api/index#flask.url_for "flask.url_for") function.

get_flashed_messages(): The [`flask.get_flashed_messages()`](../api/index#flask.get_flashed_messages "flask.get_flashed_messages") function.
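As a quick illustration, a template fragment (hypothetical, assuming a `username` key in the session) can combine several of these directly:

```
<a href="{{ url_for('index') }}">Home</a>
{% if session.get('username') %}
  <p>Signed in as {{ session['username'] }} on {{ request.host }}</p>
{% endif %}
```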
The Jinja Context Behavior

These variables are added to the context of variables; they are not global variables. The difference is that by default these will not show up in the context of imported templates. This is partially caused by performance considerations, partially to keep things explicit.

What does this mean for you? If you have a macro you want to import that needs to access the request object, you have two possibilities:

1. you explicitly pass the request, or the attribute of the request object you are interested in, to the macro as a parameter.
2. you import the macro "with context".

Importing with context looks like this:

```
{% from '_helpers.html' import my_macro with context %}
```

Controlling Autoescaping
------------------------

Autoescaping is the concept of automatically escaping special characters for you. Special characters in the sense of HTML (or XML, and thus XHTML) are `&`, `>`, `<`, `"` as well as `'`. Because these characters carry specific meanings in documents on their own, you have to replace them by so-called "entities" if you want to use them for text. Not doing so would not only cause user frustration by the inability to use these characters in text, but can also lead to security problems (see [Cross-Site Scripting (XSS)](../security/index#security-xss)).

Sometimes, however, you will need to disable autoescaping in templates. This can be the case if you want to explicitly inject HTML into pages, for example if it comes from a system that generates secure HTML, like a Markdown-to-HTML converter. There are three ways to accomplish that:

* In the Python code, wrap the HTML string in a `Markup` object before passing it to the template. This is in general the recommended way.
* Inside the template, use the `|safe` filter to explicitly mark a string as safe HTML (`{{ myvariable|safe }}`).
* Temporarily disable the autoescape system altogether.

To disable the autoescape system in templates, you can use the `{% autoescape %}` block:

```
{% autoescape false %}
<p>autoescaping is disabled here
<p>{{ will_not_be_escaped }}
{% endautoescape %}
```

Whenever you do this, please be very cautious about the variables you are using in this block.

Registering Filters
-------------------

If you want to register your own filters in Jinja2 you have two ways to do that. You can either put them by hand into the [`jinja_env`](../api/index#flask.Flask.jinja_env) of the application or use the [`template_filter()`](../api/index#flask.Flask.template_filter) decorator.

The two following examples work the same and both reverse an object:

```
@app.template_filter('reverse')
def reverse_filter(s):
    return s[::-1]

def reverse_filter(s):
    return s[::-1]

app.jinja_env.filters['reverse'] = reverse_filter
```

In the case of the decorator, the argument is optional if you want to use the function name as the name of the filter. Once registered, you can use the filter in your templates in the same way as Jinja2's builtin filters. For example, if you have a Python list in context called `mylist`:

```
{% for x in mylist | reverse %}
{% endfor %}
```

Context Processors
------------------

To inject new variables automatically into the context of a template, context processors exist in Flask. Context processors run before the template is rendered and have the ability to inject new values into the template context. A context processor is a function that returns a dictionary.
The keys and values of this dictionary are then merged with the template context, for all templates in the app:

```
@app.context_processor
def inject_user():
    return dict(user=g.user)
```

The context processor above makes a variable called `user` available in the template with the value of `g.user`. This example is not very interesting because `g` is available in templates anyway, but it gives an idea of how this works.

Variables are not limited to values; a context processor can also make functions available to templates (since Python allows passing around functions):

```
@app.context_processor
def utility_processor():
    def format_price(amount, currency="€"):
        return f"{amount:.2f}{currency}"
    return dict(format_price=format_price)
```

The context processor above makes the `format_price` function available to all templates:

```
{{ format_price(0.33) }}
```

You could also build `format_price` as a template filter (see [Registering Filters](#registering-filters)), but this demonstrates how to pass functions in a context processor.

Streaming
---------

It can be useful to not render the whole template as one complete string, but instead render it as a stream, yielding smaller incremental strings. This can be used for streaming HTML in chunks to speed up initial page load, or to save memory when rendering a very large template.

The Jinja2 template engine supports rendering a template piece by piece, returning an iterator of strings. Flask provides the [`stream_template()`](../api/index#flask.stream_template) and [`stream_template_string()`](../api/index#flask.stream_template_string) functions to make this easier to use.

```
from flask import stream_template

@app.get("/timeline")
def timeline():
    return stream_template("timeline.html")
```

These functions automatically apply the [`stream_with_context()`](../api/index#flask.stream_with_context) wrapper if a request is active, so that the request remains available in the template.

Handling Application Errors
===========================

Applications fail, servers fail. Sooner or later you will see an exception in production. Even if your code is 100% correct, you will still see exceptions from time to time. Why? Because everything else involved will fail. Here are some situations where perfectly fine code can lead to server errors:

* the client terminated the request early and the application was still reading from the incoming data
* the database server was overloaded and could not handle the query
* a filesystem is full
* a hard drive crashed
* a backend server overloaded
* a programming error in a library you are using
* the network connection of the server to another system failed

And that's just a small sample of issues you could be facing. So how do we deal with that sort of problem? By default, if your application runs in production mode and an exception is raised, Flask will display a very simple page for you and log the exception to the [`logger`](../api/index#flask.Flask.logger). But there is more you can do, and we will cover some better setups to deal with errors, including custom exceptions and 3rd party tools.

Error Logging Tools
-------------------

Sending error mails, even if just for critical ones, can become overwhelming if enough users are hitting the error, and log files are typically never looked at.
This is why we recommend using [Sentry](https://sentry.io/) for dealing with application errors. It's available as a source-available project [on GitHub](https://github.com/getsentry/sentry) and is also available as a [hosted version](https://sentry.io/signup/) which you can try for free. Sentry aggregates duplicate errors, captures the full stack trace and local variables for debugging, and sends you mails based on new errors or frequency thresholds.

To use Sentry you need to install the `sentry-sdk` client with extra `flask` dependencies.

```
$ pip install sentry-sdk[flask]
```

And then add this to your Flask app:

```
import sentry_sdk
from sentry_sdk.integrations.flask import FlaskIntegration

sentry_sdk.init('YOUR_DSN_HERE', integrations=[FlaskIntegration()])
```

The `YOUR_DSN_HERE` value needs to be replaced with the DSN value you get from your Sentry installation.

After installation, failures leading to an Internal Server Error are automatically reported to Sentry, and from there you can receive error notifications.

See also:

* Sentry also supports catching errors from a worker queue (RQ, Celery, etc.) in a similar fashion. See the [Python SDK docs](https://docs.sentry.io/platforms/python/) for more information.
* [Flask-specific documentation](https://docs.sentry.io/platforms/python/guides/flask/)

Error Handlers
--------------

When an error occurs in Flask, an appropriate [HTTP status code](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status) will be returned. 400-499 indicate errors with the client's request data, or about the data requested. 500-599 indicate errors with the server or application itself.

You might want to show custom error pages to the user when an error occurs. This can be done by registering error handlers.

An error handler is a function that returns a response when a type of error is raised, similar to how a view is a function that returns a response when a request URL is matched. It is passed the instance of the error being handled, which is most likely an [`HTTPException`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.HTTPException).

The status code of the response will not be set to the handler's code. Make sure to provide the appropriate HTTP status code when returning a response from a handler.

### Registering

Register handlers by decorating a function with [`errorhandler()`](../api/index#flask.Flask.errorhandler). Or use [`register_error_handler()`](../api/index#flask.Flask.register_error_handler) to register the function later. Remember to set the error code when returning the response.

```
@app.errorhandler(werkzeug.exceptions.BadRequest)
def handle_bad_request(e):
    return 'bad request!', 400

# or, without the decorator
app.register_error_handler(400, handle_bad_request)
```

[`werkzeug.exceptions.HTTPException`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.HTTPException) subclasses like [`BadRequest`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.BadRequest) and their HTTP codes are interchangeable when registering handlers (`BadRequest.code == 400`).

Non-standard HTTP codes cannot be registered by code because they are not known by Werkzeug.
Instead, define a subclass of [`HTTPException`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.HTTPException) with the appropriate code, then register and raise that exception class. (The original snippet leaves `handle_507` undefined; a definition is filled in here so the example is complete.)

```
class InsufficientStorage(werkzeug.exceptions.HTTPException):
    code = 507
    description = 'Not enough storage space.'

def handle_507(e):
    # the handler must set the 507 status itself
    return e.description, 507

app.register_error_handler(InsufficientStorage, handle_507)

raise InsufficientStorage()
```

Handlers can be registered for any exception class, not just [`HTTPException`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.HTTPException) subclasses or HTTP status codes. Handlers can be registered for a specific class, or for all subclasses of a parent class.

### Handling

When building a Flask application you *will* run into exceptions. If some part of your code breaks while handling a request (and you have no error handlers registered), a "500 Internal Server Error" ([`InternalServerError`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.InternalServerError)) will be returned by default. Similarly, a "404 Not Found" ([`NotFound`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.NotFound)) error will occur if a request is sent to an unregistered route. If a route receives an unallowed request method, a "405 Method Not Allowed" ([`MethodNotAllowed`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.MethodNotAllowed)) will be raised. These are all subclasses of [`HTTPException`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.HTTPException) and are provided by default in Flask.

Flask gives you the ability to raise any HTTP exception registered by Werkzeug. However, the default HTTP exceptions return simple exception pages. You might want to show custom error pages to the user when an error occurs. This can be done by registering error handlers.

When Flask catches an exception while handling a request, it is first looked up by code. If no handler is registered for the code, Flask looks up the error by its class hierarchy; the most specific handler is chosen. If no handler is registered, [`HTTPException`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.HTTPException) subclasses show a generic message about their code, while other exceptions are converted to a generic "500 Internal Server Error".

For example, if an instance of [`ConnectionRefusedError`](https://docs.python.org/3/library/exceptions.html#ConnectionRefusedError) is raised, and a handler is registered for [`ConnectionError`](https://docs.python.org/3/library/exceptions.html#ConnectionError) and [`ConnectionRefusedError`](https://docs.python.org/3/library/exceptions.html#ConnectionRefusedError), the more specific [`ConnectionRefusedError`](https://docs.python.org/3/library/exceptions.html#ConnectionRefusedError) handler is called with the exception instance to generate the response.

Handlers registered on the blueprint take precedence over those registered globally on the application, assuming a blueprint is handling the request that raises the exception. However, the blueprint cannot handle 404 routing errors, because the 404 occurs at the routing level before the blueprint can be determined.
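To make that hierarchy lookup concrete, here is a minimal sketch (not from the original page) where the more specific handler wins for `ConnectionRefusedError`:

```
from flask import Flask

app = Flask(__name__)

@app.errorhandler(ConnectionError)
def handle_connection_error(e):
    return "connection problem", 500

@app.errorhandler(ConnectionRefusedError)
def handle_connection_refused(e):
    # ConnectionRefusedError is a subclass of ConnectionError, so this
    # handler is chosen over handle_connection_error when it is raised
    return "connection refused", 500

@app.route("/upstream")
def upstream():
    # hypothetical route that fails to reach another service
    raise ConnectionRefusedError()
```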
### Generic Exception Handlers

It is possible to register error handlers for very generic base classes such as `HTTPException` or even `Exception`. However, be aware that these will catch more than you might expect.

For example, an error handler for `HTTPException` might be useful for turning the default HTML error pages into JSON. However, this handler will trigger for things you don't cause directly, such as 404 and 405 errors during routing. Be sure to craft your handler carefully so you don't lose information about the HTTP error.

```
from flask import json
from werkzeug.exceptions import HTTPException

@app.errorhandler(HTTPException)
def handle_exception(e):
    """Return JSON instead of HTML for HTTP errors."""
    # start with the correct headers and status code from the error
    response = e.get_response()
    # replace the body with JSON
    response.data = json.dumps({
        "code": e.code,
        "name": e.name,
        "description": e.description,
    })
    response.content_type = "application/json"
    return response
```

An error handler for `Exception` might seem useful for changing how all errors, even unhandled ones, are presented to the user. However, this is similar to doing `except Exception:` in Python: it will capture *all* otherwise unhandled errors, including all HTTP status codes. In most cases it will be safer to register handlers for more specific exceptions. Since `HTTPException` instances are valid WSGI responses, you could also pass them through directly.

```
from werkzeug.exceptions import HTTPException

@app.errorhandler(Exception)
def handle_exception(e):
    # pass through HTTP errors
    if isinstance(e, HTTPException):
        return e
    # now you're handling non-HTTP exceptions only
    return render_template("500_generic.html", e=e), 500
```

Error handlers still respect the exception class hierarchy. If you register handlers for both `HTTPException` and `Exception`, the `Exception` handler will not handle `HTTPException` subclasses, because the `HTTPException` handler is more specific.

### Unhandled Exceptions

When there is no error handler registered for an exception, a 500 Internal Server Error will be returned instead. See [`flask.Flask.handle_exception()`](../api/index#flask.Flask.handle_exception) for information about this behavior.

If there is an error handler registered for `InternalServerError`, this will be invoked. As of Flask 1.1.0, this error handler will always be passed an instance of `InternalServerError`, not the original unhandled error. The original error is available as `e.original_exception`.

An error handler for "500 Internal Server Error" will be passed uncaught exceptions in addition to explicit 500 errors. In debug mode, a handler for "500 Internal Server Error" will not be used. Instead, the interactive debugger will be shown.

Custom Error Pages
------------------

Sometimes when building a Flask application, you might want to raise an [`HTTPException`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.HTTPException) to signal to the user that something is wrong with the request. Fortunately, Flask comes with a handy [`abort()`](../api/index#flask.abort) function that aborts a request with an HTTP error from Werkzeug as desired. It will also provide a plain black and white error page for you with a basic description, but nothing fancy.

Depending on the error code, it is more or less likely for the user to actually see such an error.
Consider the code below: we might have a user profile route, and if the user fails to pass a username, we can raise a "400 Bad Request". If the user passes a username and we can't find it, we raise a "404 Not Found". (The original snippet used `request.arg.get`, which is a typo for `request.args.get`.)

```
from flask import abort, render_template, request

# a username needs to be supplied in the query args
# a successful request would be like /profile?username=jack
@app.route("/profile")
def user_profile():
    username = request.args.get("username")
    # if a username isn't supplied in the request, return a 400 bad request
    if username is None:
        abort(400)

    user = get_user(username=username)
    # if a user can't be found by their username, return 404 not found
    if user is None:
        abort(404)

    return render_template("profile.html", user=user)
```

Here is another example implementation for a "404 Page Not Found" exception:

```
from flask import render_template

@app.errorhandler(404)
def page_not_found(e):
    # note that we set the 404 status explicitly
    return render_template('404.html'), 404
```

When using [Application Factories](../patterns/appfactories/index):

```
from flask import Flask, render_template

def page_not_found(e):
    return render_template('404.html'), 404

def create_app(config_filename):
    app = Flask(__name__)
    app.register_error_handler(404, page_not_found)
    return app
```

An example template might be this:

```
{% extends "layout.html" %}
{% block title %}Page Not Found{% endblock %}
{% block body %}
  <h1>Page Not Found</h1>
  <p>What you were looking for is just not there.
  <p><a href="{{ url_for('index') }}">go somewhere nice</a>
{% endblock %}
```

### Further Examples

The above examples wouldn't actually be an improvement on the default exception pages. We can create a custom 500.html template like this:

```
{% extends "layout.html" %}
{% block title %}Internal Server Error{% endblock %}
{% block body %}
  <h1>Internal Server Error</h1>
  <p>Oops... we seem to have made a mistake, sorry!</p>
  <p><a href="{{ url_for('index') }}">Go somewhere nice instead</a>
{% endblock %}
```

It can be implemented by rendering the template on "500 Internal Server Error":

```
from flask import render_template

@app.errorhandler(500)
def internal_server_error(e):
    # note that we set the 500 status explicitly
    return render_template('500.html'), 500
```

When using [Application Factories](../patterns/appfactories/index):

```
from flask import Flask, render_template

def internal_server_error(e):
    return render_template('500.html'), 500

def create_app():
    app = Flask(__name__)
    app.register_error_handler(500, internal_server_error)
    return app
```

When using [Modular Applications with Blueprints](../blueprints/index):

```
from flask import Blueprint

blog = Blueprint('blog', __name__)

# as a decorator
@blog.errorhandler(500)
def internal_server_error(e):
    return render_template('500.html'), 500

# or with register_error_handler
blog.register_error_handler(500, internal_server_error)
```

Blueprint Error Handlers
------------------------

In [Modular Applications with Blueprints](../blueprints/index), most error handlers will work as expected. However, there is a caveat concerning handlers for 404 and 405 exceptions. These error handlers are only invoked from an appropriate `raise` statement or a call to `abort` in another of the blueprint's view functions; they are not invoked by, e.g., an invalid URL access.

This is because the blueprint does not "own" a certain URL space, so the application instance has no way of knowing which blueprint error handler it should run if given an invalid URL.
If you would like to execute different handling strategies for these errors based on URL prefixes, they may be defined at the application level using the `request` proxy object.

```
from flask import jsonify, render_template

# at the application level
# not the blueprint level
@app.errorhandler(404)
def page_not_found(e):
    # if a request is in our blog URL space
    if request.path.startswith('/blog/'):
        # we return a custom blog 404 page
        return render_template("blog/404.html"), 404
    else:
        # otherwise we return our generic site-wide 404 page
        return render_template("404.html"), 404

@app.errorhandler(405)
def method_not_allowed(e):
    # if a request has the wrong method to our API
    if request.path.startswith('/api/'):
        # we return a json saying so
        return jsonify(message="Method Not Allowed"), 405
    else:
        # otherwise we return a generic site-wide 405 page
        return render_template("405.html"), 405
```

Returning API Errors as JSON
----------------------------

When building APIs in Flask, some developers realise that the built-in exceptions are not expressive enough for APIs and that the content type of *text/html* they are emitting is not very useful for API consumers.

Using the same techniques as above and [`jsonify()`](../api/index#flask.json.jsonify) we can return JSON responses to API errors. [`abort()`](../api/index#flask.abort) is called with a `description` parameter. The error handler will use that as the JSON error message, and set the status code to 404.

```
from flask import abort, jsonify

@app.errorhandler(404)
def resource_not_found(e):
    return jsonify(error=str(e)), 404

@app.route("/cheese")
def get_one_cheese():
    resource = get_resource()

    if resource is None:
        abort(404, description="Resource not found")

    return jsonify(resource)
```

We can also create custom exception classes. For instance, we can introduce a new custom exception for an API that can take a proper human-readable message, a status code for the error, and some optional payload to give more context for the error.

This is a simple example (the original snippet declared `user_api(user_id)` even though the route captures no URL variable, and used `request.arg.get`; both are corrected here):

```
from flask import jsonify, request

class InvalidAPIUsage(Exception):
    status_code = 400

    def __init__(self, message, status_code=None, payload=None):
        super().__init__()
        self.message = message
        if status_code is not None:
            self.status_code = status_code
        self.payload = payload

    def to_dict(self):
        rv = dict(self.payload or ())
        rv['message'] = self.message
        return rv

@app.errorhandler(InvalidAPIUsage)
def invalid_api_usage(e):
    return jsonify(e.to_dict()), e.status_code

# an API app route for getting user information
# a correct request might be /api/user?user_id=420
@app.route("/api/user")
def user_api():
    user_id = request.args.get("user_id")
    if not user_id:
        raise InvalidAPIUsage("No user id provided!")

    user = get_user(user_id=user_id)
    if not user:
        raise InvalidAPIUsage("No such user!", status_code=404)

    return jsonify(user.to_dict())
```

A view can now raise that exception with an error message. Additionally, some extra payload can be provided as a dictionary through the `payload` parameter.

Logging
-------

See [Logging](../logging/index) for information about how to log exceptions, such as by emailing them to admins.

Debugging
---------

See [Debugging Application Errors](../debugging/index) for information about how to debug errors in development and production.
Logging
=======

Flask uses standard Python [`logging`](https://docs.python.org/3/library/logging.html#module-logging). Messages about your Flask application are logged with [`app.logger`](../api/index#flask.Flask.logger), which takes the same name as [`app.name`](../api/index#flask.Flask.name). This logger can also be used to log your own messages.

```
@app.route('/login', methods=['POST'])
def login():
    user = get_user(request.form['username'])

    if user.check_password(request.form['password']):
        login_user(user)
        app.logger.info('%s logged in successfully', user.username)
        return redirect(url_for('index'))
    else:
        app.logger.info('%s failed to log in', user.username)
        abort(401)
```

If you don't configure logging, Python's default log level is usually 'warning'. Nothing below the configured level will be visible.

Basic Configuration
-------------------

When you want to configure logging for your project, you should do it as soon as possible when the program starts. If [`app.logger`](../api/index#flask.Flask.logger) is accessed before logging is configured, it will add a default handler. If possible, configure logging before creating the application object.

This example uses [`dictConfig()`](https://docs.python.org/3/library/logging.config.html#logging.config.dictConfig) to create a logging configuration similar to Flask's default, except for all logs:

```
from logging.config import dictConfig

dictConfig({
    'version': 1,
    'formatters': {'default': {
        'format': '[%(asctime)s] %(levelname)s in %(module)s: %(message)s',
    }},
    'handlers': {'wsgi': {
        'class': 'logging.StreamHandler',
        'stream': 'ext://flask.logging.wsgi_errors_stream',
        'formatter': 'default'
    }},
    'root': {
        'level': 'INFO',
        'handlers': ['wsgi']
    }
})

app = Flask(__name__)
```

### Default Configuration

If you do not configure logging yourself, Flask will add a [`StreamHandler`](https://docs.python.org/3/library/logging.handlers.html#logging.StreamHandler) to [`app.logger`](../api/index#flask.Flask.logger) automatically. During requests, it will write to the stream specified by the WSGI server in `environ['wsgi.errors']` (which is usually [`sys.stderr`](https://docs.python.org/3/library/sys.html#sys.stderr)). Outside a request, it will log to [`sys.stderr`](https://docs.python.org/3/library/sys.html#sys.stderr).

### Removing the Default Handler

If you configured logging after accessing [`app.logger`](../api/index#flask.Flask.logger), and need to remove the default handler, you can import and remove it:

```
from flask.logging import default_handler

app.logger.removeHandler(default_handler)
```

Email Errors to Admins
----------------------

When running the application on a remote server for production, you probably won't be looking at the log messages very often. The WSGI server will probably send log messages to a file, and you'll only check that file if a user tells you something went wrong.

To be proactive about discovering and fixing bugs, you can configure a [`logging.handlers.SMTPHandler`](https://docs.python.org/3/library/logging.handlers.html#logging.handlers.SMTPHandler) to send an email when errors and higher are logged.
```
import logging
from logging.handlers import SMTPHandler

mail_handler = SMTPHandler(
    mailhost='127.0.0.1',
    fromaddr='<EMAIL>',
    toaddrs=['<EMAIL>'],
    subject='Application Error'
)
mail_handler.setLevel(logging.ERROR)
mail_handler.setFormatter(logging.Formatter(
    '[%(asctime)s] %(levelname)s in %(module)s: %(message)s'
))

if not app.debug:
    app.logger.addHandler(mail_handler)
```

This requires that you have an SMTP server set up on the same server. See the Python docs for more information about configuring the handler.

Injecting Request Information
-----------------------------

Seeing more information about the request, such as the IP address, may help debugging some errors. You can subclass [`logging.Formatter`](https://docs.python.org/3/library/logging.html#logging.Formatter) to inject your own fields that can be used in messages. You can change the formatter for Flask's default handler, the mail handler defined above, or any other handler.

```
from flask import has_request_context, request
from flask.logging import default_handler

class RequestFormatter(logging.Formatter):
    def format(self, record):
        if has_request_context():
            record.url = request.url
            record.remote_addr = request.remote_addr
        else:
            record.url = None
            record.remote_addr = None

        return super().format(record)

formatter = RequestFormatter(
    '[%(asctime)s] %(remote_addr)s requested %(url)s\n'
    '%(levelname)s in %(module)s: %(message)s'
)
default_handler.setFormatter(formatter)
mail_handler.setFormatter(formatter)
```

Other Libraries
---------------

Other libraries may use logging extensively, and you want to see relevant messages from those logs too. The simplest way to do this is to add handlers to the root logger instead of only the app logger.

```
from flask.logging import default_handler

root = logging.getLogger()
root.addHandler(default_handler)
root.addHandler(mail_handler)
```

Depending on your project, it may be more useful to configure each logger you care about separately, instead of configuring only the root logger.

```
for logger in (
    app.logger,
    logging.getLogger('sqlalchemy'),
    logging.getLogger('other_package'),
):
    logger.addHandler(default_handler)
    logger.addHandler(mail_handler)
```

### Werkzeug

Werkzeug logs basic request/response information to the `'werkzeug'` logger. If the root logger has no handlers configured, Werkzeug adds a [`StreamHandler`](https://docs.python.org/3/library/logging.handlers.html#logging.StreamHandler) to its logger.

### Flask Extensions

Depending on the situation, an extension may choose to log to [`app.logger`](../api/index#flask.Flask.logger) or its own named logger. Consult each extension's documentation for details.

Configuration Handling
======================

Applications need some kind of configuration. There are different settings you might want to change depending on the application environment, like toggling the debug mode, setting the secret key, and other such environment-specific things.

The way Flask is designed usually requires the configuration to be available when the application starts up. You can hard-code the configuration in the code, which for many small applications is not actually that bad, but there are better ways.
Independent of how you load your config, there is a config object available which holds the loaded configuration values: the [`config`](../api/index#flask.Flask.config) attribute of the [`Flask`](../api/index#flask.Flask) object. This is the place where Flask itself puts certain configuration values and also where extensions can put their configuration values. But this is also where you can have your own configuration.

Configuration Basics
--------------------

The [`config`](../api/index#flask.Flask.config) is actually a subclass of a dictionary and can be modified just like any dictionary:

```
app = Flask(__name__)
app.config['TESTING'] = True
```

Certain configuration values are also forwarded to the [`Flask`](../api/index#flask.Flask) object so you can read and write them from there:

```
app.testing = True
```

To update multiple keys at once you can use the [`dict.update()`](https://docs.python.org/3/library/stdtypes.html#dict.update) method:

```
app.config.update(
    TESTING=True,
    SECRET_KEY='<KEY>'
)
```

Debug Mode
----------

The [`DEBUG`](#DEBUG) config value is special because it may behave inconsistently if changed after the app has begun setting up. In order to set debug mode reliably, use the `--debug` option on the `flask` or `flask run` command. `flask run` will use the interactive debugger and reloader by default in debug mode.

```
$ flask --app hello run --debug
```

Using the option is recommended. While it is possible to set [`DEBUG`](#DEBUG) in your config or code, this is strongly discouraged. It can't be read early by the `flask run` command, and some systems or extensions may have already configured themselves based on a previous value.

Builtin Configuration Values
----------------------------

The following configuration values are used internally by Flask:

`DEBUG`

Whether debug mode is enabled. When using `flask run` to start the development server, an interactive debugger will be shown for unhandled exceptions, and the server will be reloaded when code changes. The [`debug`](../api/index#flask.Flask.debug) attribute maps to this config key. This is set with the `FLASK_DEBUG` environment variable. It may not behave as expected if set in code.

**Do not enable debug mode when deploying in production.**

Default: `False`

`TESTING`

Enable testing mode. Exceptions are propagated rather than handled by the app's error handlers. Extensions may also change their behavior to facilitate easier testing. You should enable this in your own tests.

Default: `False`

`PROPAGATE_EXCEPTIONS`

Exceptions are re-raised rather than being handled by the app's error handlers. If not set, this is implicitly true if `TESTING` or `DEBUG` is enabled.

Default: `None`

`TRAP_HTTP_EXCEPTIONS`

If there is no handler for an `HTTPException`-type exception, re-raise it to be handled by the interactive debugger instead of returning it as a simple error response.

Default: `False`

`TRAP_BAD_REQUEST_ERRORS`

Trying to access a key that doesn't exist from request dicts like `args` and `form` will return a 400 Bad Request error page. Enable this to treat the error as an unhandled exception instead so that you get the interactive debugger. This is a more specific version of `TRAP_HTTP_EXCEPTIONS`. If unset, it is enabled in debug mode.
Default: `None`

`SECRET_KEY`

A secret key that will be used for securely signing the session cookie and can be used for any other security-related needs by extensions or your application. It should be a long random `bytes` or `str`. For example, copy the output of this to your config:

```
$ python -c 'import secrets; print(secrets.token_hex())'
'<KEY>'
```

**Do not reveal the secret key when posting questions or committing code.**

Default: `None`

`SESSION_COOKIE_NAME`

The name of the session cookie. Can be changed in case you already have a cookie with the same name.

Default: `'session'`

`SESSION_COOKIE_DOMAIN`

The value of the `Domain` parameter on the session cookie. If not set, browsers will only send the cookie to the exact domain it was set from. Otherwise, they will send it to any subdomain of the given value as well. Not setting this value is more restricted and secure than setting it.

Default: `None`

Changed in version 2.3: Not set by default, does not fall back to `SERVER_NAME`.

`SESSION_COOKIE_PATH`

The path that the session cookie will be valid for. If not set, the cookie will be valid underneath `APPLICATION_ROOT`, or `/` if that is not set.

Default: `None`

`SESSION_COOKIE_HTTPONLY`

Browsers will not allow JavaScript access to cookies marked as "HTTP only" for security.

Default: `True`

`SESSION_COOKIE_SECURE`

Browsers will only send cookies with requests over HTTPS if the cookie is marked "secure". The application must be served over HTTPS for this to make sense.

Default: `False`

`SESSION_COOKIE_SAMESITE`

Restrict how cookies are sent with requests from external sites. Can be set to `'Lax'` (recommended) or `'Strict'`. See [Set-Cookie options](../security/index#security-cookie).

Default: `None`

New in version 1.0.

`PERMANENT_SESSION_LIFETIME`

If `session.permanent` is true, the cookie's expiration will be set this number of seconds in the future. Can either be a [`datetime.timedelta`](https://docs.python.org/3/library/datetime.html#datetime.timedelta) or an `int`. Flask's default cookie implementation validates that the cryptographic signature is not older than this value.

Default: `timedelta(days=31)` (`2678400` seconds)

`SESSION_REFRESH_EACH_REQUEST`

Control whether the cookie is sent with every response when `session.permanent` is true. Sending the cookie every time (the default) can more reliably keep the session from expiring, but uses more bandwidth. Non-permanent sessions are not affected.

Default: `True`

`USE_X_SENDFILE`

When serving files, set the `X-Sendfile` header instead of serving the data with Flask. Some web servers, such as Apache, recognize this and serve the data more efficiently. This only makes sense when using such a server.

Default: `False`

`SEND_FILE_MAX_AGE_DEFAULT`

When serving files, set the cache control max age to this number of seconds. Can be a [`datetime.timedelta`](https://docs.python.org/3/library/datetime.html#datetime.timedelta) or an `int`. Override this value on a per-file basis using [`get_send_file_max_age()`](../api/index#flask.Flask.get_send_file_max_age) on the application or blueprint. If `None`, `send_file` tells the browser to use conditional requests instead of a timed cache, which is usually preferable.

Default: `None`

`SERVER_NAME`

Inform the application what host and port it is bound to. Required for subdomain route matching support.
If set, `url_for` can generate external URLs with only an application context instead of a request context.

Default: `None`

Changed in version 2.3: Does not affect `SESSION_COOKIE_DOMAIN`.

`APPLICATION_ROOT`

Inform the application what path it is mounted under by the application / web server. This is used for generating URLs outside the context of a request (inside a request, the dispatcher is responsible for setting `SCRIPT_NAME` instead; see [Application Dispatching](../patterns/appdispatch/index) for examples of dispatch configuration). Will be used for the session cookie path if `SESSION_COOKIE_PATH` is not set.

Default: `'/'`

`PREFERRED_URL_SCHEME`

Use this scheme for generating external URLs when not in a request context.

Default: `'http'`

`MAX_CONTENT_LENGTH`

Don't read more than this many bytes from the incoming request data. If not set and the request does not specify a `CONTENT_LENGTH`, no data will be read for security.

Default: `None`

`TEMPLATES_AUTO_RELOAD`

Reload templates when they are changed. If not set, it will be enabled in debug mode.

Default: `None`

`EXPLAIN_TEMPLATE_LOADING`

Log debugging information tracing how a template file was loaded. This can be useful to figure out why a template was not loaded or the wrong file appears to be loaded.

Default: `False`

`MAX_COOKIE_SIZE`

Warn if cookie headers are larger than this many bytes. Defaults to `4093`. Larger cookies may be silently ignored by browsers. Set to `0` to disable the warning.

Changed in version 2.3: `JSON_AS_ASCII`, `JSON_SORT_KEYS`, `JSONIFY_MIMETYPE`, and `JSONIFY_PRETTYPRINT_REGULAR` were removed. The default `app.json` provider has equivalent attributes instead.

Changed in version 2.3: `ENV` was removed.

Changed in version 2.2: Removed `PRESERVE_CONTEXT_ON_EXCEPTION`.

Changed in version 1.0: `LOGGER_NAME` and `LOGGER_HANDLER_POLICY` were removed. See [Logging](../logging/index) for information about configuration. Added `ENV` to reflect the `FLASK_ENV` environment variable. Added [`SESSION_COOKIE_SAMESITE`](#SESSION_COOKIE_SAMESITE) to control the session cookie's `SameSite` option. Added [`MAX_COOKIE_SIZE`](#MAX_COOKIE_SIZE) to control a warning from Werkzeug.

New in version 0.11: `SESSION_REFRESH_EACH_REQUEST`, `TEMPLATES_AUTO_RELOAD`, `LOGGER_HANDLER_POLICY`, `EXPLAIN_TEMPLATE_LOADING`

New in version 0.10: `JSON_AS_ASCII`, `JSON_SORT_KEYS`, `JSONIFY_PRETTYPRINT_REGULAR`

New in version 0.9: `PREFERRED_URL_SCHEME`

New in version 0.8: `TRAP_BAD_REQUEST_ERRORS`, `TRAP_HTTP_EXCEPTIONS`, `APPLICATION_ROOT`, `SESSION_COOKIE_DOMAIN`, `SESSION_COOKIE_PATH`, `SESSION_COOKIE_HTTPONLY`, `SESSION_COOKIE_SECURE`

New in version 0.7: `PROPAGATE_EXCEPTIONS`, `PRESERVE_CONTEXT_ON_EXCEPTION`

New in version 0.6: `MAX_CONTENT_LENGTH`

New in version 0.5: `SERVER_NAME`

New in version 0.4: `LOGGER_NAME`

Configuring from Python Files
-----------------------------

Configuration becomes more useful if you can store it in a separate file, ideally located outside the actual application package. You can deploy your application, then separately configure it for the specific deployment.

A common pattern is this:

```
app = Flask(__name__)
app.config.from_object('yourapplication.default_settings')
app.config.from_envvar('YOURAPPLICATION_SETTINGS')
```

This first loads the configuration from the `yourapplication.default_settings` module and then overrides the values with the contents of the file the `YOURAPPLICATION_SETTINGS` environment variable points to.
This environment variable can be set in the shell before starting the server:

Bash:

```
$ export YOURAPPLICATION_SETTINGS=/path/to/settings.cfg
$ flask run
 * Running on http://127.0.0.1:5000/
```

Fish:

```
$ set -x YOURAPPLICATION_SETTINGS /path/to/settings.cfg
$ flask run
 * Running on http://127.0.0.1:5000/
```

CMD:

```
> set YOURAPPLICATION_SETTINGS=\path\to\settings.cfg
> flask run
 * Running on http://127.0.0.1:5000/
```

PowerShell:

```
> $env:YOURAPPLICATION_SETTINGS = "\path\to\settings.cfg"
> flask run
 * Running on http://127.0.0.1:5000/
```

The configuration files themselves are actual Python files. Only values in uppercase are actually stored in the config object later on, so make sure to use uppercase letters for your config keys.

Here is an example of a configuration file:

```
# Example configuration
SECRET_KEY = '<KEY>'
```

Make sure to load the configuration very early on, so that extensions have the ability to access the configuration when starting up. There are other methods on the config object as well to load from individual files. For a complete reference, read the [`Config`](../api/index#flask.Config) object's documentation.

Configuring from Data Files
---------------------------

It is also possible to load configuration from a file in a format of your choice using [`from_file()`](../api/index#flask.Config.from_file). For example, to load from a TOML file:

```
import tomllib
app.config.from_file("config.toml", load=tomllib.load, text=False)
```

Or from a JSON file:

```
import json
app.config.from_file("config.json", load=json.load)
```

Configuring from Environment Variables
--------------------------------------

In addition to pointing to configuration files using environment variables, you may find it useful (or necessary) to control your configuration values directly from the environment. Flask can be instructed to load all environment variables starting with a specific prefix into the config using [`from_prefixed_env()`](../api/index#flask.Config.from_prefixed_env).

Environment variables can be set in the shell before starting the server:

Bash:

```
$ export FLASK_SECRET_KEY="<KEY>"
$ export FLASK_MAIL_ENABLED=false
$ flask run
 * Running on http://127.0.0.1:5000/
```

Fish:

```
$ set -x FLASK_SECRET_KEY "<KEY>"
$ set -x FLASK_MAIL_ENABLED false
$ flask run
 * Running on http://127.0.0.1:5000/
```

CMD:

```
> set FLASK_SECRET_KEY="<KEY>"
> set FLASK_MAIL_ENABLED=false
> flask run
 * Running on http://127.0.0.1:5000/
```

PowerShell:

```
> $env:FLASK_SECRET_KEY = "<KEY>"
> $env:FLASK_MAIL_ENABLED = "false"
> flask run
 * Running on http://127.0.0.1:5000/
```

The variables can then be loaded and accessed via the config with a key equal to the environment variable name without the prefix, i.e.:

```
app.config.from_prefixed_env()
app.config["SECRET_KEY"]  # Is "<KEY>"
```

The prefix is `FLASK_` by default. This is configurable via the `prefix` argument of [`from_prefixed_env()`](../api/index#flask.Config.from_prefixed_env).

Values will be parsed to attempt to convert them to a more specific type than strings. By default [`json.loads()`](https://docs.python.org/3/library/json.html#json.loads) is used, so any valid JSON value is possible, including lists and dicts. This is configurable via the `loads` argument of [`from_prefixed_env()`](../api/index#flask.Config.from_prefixed_env).
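For example, a minimal sketch of overriding that parser: passing `str` as the `loads` callable keeps every prefixed value as the raw string from the environment instead of JSON-decoding it.

```
# keep raw strings instead of JSON-decoding each value;
# str is a trivial loads callable that returns its input unchanged
app.config.from_prefixed_env(loads=str)
```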
When adding a boolean value with the default JSON parsing, only "true" and "false", lowercase, are valid values. Keep in mind that any non-empty string is considered `True` by Python.

It is possible to set keys in nested dictionaries by separating the keys with double underscore (`__`). Any intermediate keys that don't exist on the parent dict will be initialized to an empty dict.

```
$ export FLASK_MYAPI__credentials__username=user123
```

```
app.config["MYAPI"]["credentials"]["username"]  # Is "user123"
```

On Windows, environment variable keys are always uppercase, therefore the above example would end up as `MYAPI__CREDENTIALS__USERNAME`.

For even more config loading features, including merging and case-insensitive Windows support, try a dedicated library such as [Dynaconf](https://www.dynaconf.com/), which includes integration with Flask.

Configuration Best Practices
----------------------------

The downside with the approach mentioned earlier is that it makes testing a little harder. There is no single 100% solution for this problem in general, but there are a couple of things you can keep in mind to improve that experience:

1. Create your application in a function and register blueprints on it. That way you can create multiple instances of your application with different configurations attached, which makes unit testing a lot easier. You can use this to pass in configuration as needed.
2. Do not write code that needs the configuration at import time. If you limit yourself to request-only accesses to the configuration, you can reconfigure the object later on as needed.
3. Make sure to load the configuration very early on, so that extensions can access the configuration when calling `init_app`.

Development / Production
------------------------

Most applications need more than one configuration. There should be at least separate configurations for the production server and the one used during development. The easiest way to handle this is to use a default configuration that is always loaded and part of the version control, and a separate configuration that overrides the values as necessary, as mentioned in the example above:

```
app = Flask(__name__)
app.config.from_object('yourapplication.default_settings')
app.config.from_envvar('YOURAPPLICATION_SETTINGS')
```

Then you just have to add a separate `config.py` file and export `YOURAPPLICATION_SETTINGS=/path/to/config.py` and you are done. However, there are alternative ways as well. For example, you could use imports or subclassing.

What is very popular in the Django world is to make the import explicit in the config file by adding `from yourapplication.default_settings import *` to the top of the file and then overriding the changes by hand. You could also inspect an environment variable like `YOURAPPLICATION_MODE`, set it to `production`, `development`, etc., and import different hard-coded files based on that (see the sketch below).
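A minimal sketch of that mode-switching idea; the `yourapplication.config_production` / `yourapplication.config_development` module names here are hypothetical, not part of Flask:

```
import os
from flask import Flask

app = Flask(__name__)

# pick a settings module based on an environment variable;
# the module names are hypothetical placeholders
mode = os.environ.get("YOURAPPLICATION_MODE", "development")
app.config.from_object(f"yourapplication.config_{mode}")
```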
An interesting pattern is also to use classes and inheritance for configuration:

```
class Config(object):
    TESTING = False

class ProductionConfig(Config):
    DATABASE_URI = 'mysql://user@localhost/foo'

class DevelopmentConfig(Config):
    DATABASE_URI = "sqlite:////tmp/foo.db"

class TestingConfig(Config):
    DATABASE_URI = 'sqlite:///:memory:'
    TESTING = True
```

To enable such a config you just have to call into [`from_object()`](../api/index#flask.Config.from_object):

```
app.config.from_object('configmodule.ProductionConfig')
```

Note that [`from_object()`](../api/index#flask.Config.from_object) does not instantiate the class object. If you need to instantiate the class, such as to access a property, then you must do so before calling [`from_object()`](../api/index#flask.Config.from_object):

```
from configmodule import ProductionConfig
app.config.from_object(ProductionConfig())

# Alternatively, import via string:
from werkzeug.utils import import_string
cfg = import_string('configmodule.ProductionConfig')()
app.config.from_object(cfg)
```

Instantiating the configuration object allows you to use `@property` in your configuration classes:

```
class Config(object):
    """Base config, uses staging database server."""
    TESTING = False
    DB_SERVER = '192.168.1.56'

    @property
    def DATABASE_URI(self):  # Note: all caps
        return f"mysql://user@{self.DB_SERVER}/foo"

class ProductionConfig(Config):
    """Uses production database server."""
    DB_SERVER = '192.168.19.32'

class DevelopmentConfig(Config):
    DB_SERVER = 'localhost'

class TestingConfig(Config):
    DB_SERVER = 'localhost'
    DATABASE_URI = 'sqlite:///:memory:'
```

There are many different ways, and it's up to you how you want to manage your configuration files. However, here is a list of good recommendations:

* Keep a default configuration in version control. Either populate the config with this default configuration or import it in your own configuration files before overriding values.
* Use an environment variable to switch between the configurations. This can be done from outside the Python interpreter and makes development and deployment much easier because you can quickly and easily switch between different configs without having to touch the code at all. If you often work on different projects, you can even create your own script for sourcing that activates a virtualenv and exports the development configuration for you.
* Use a tool like [fabric](https://www.fabfile.org/) to push code and configuration separately to the production server(s).

Instance Folders
----------------

New in version 0.8.

Flask 0.8 introduces instance folders. For a long time, Flask made it possible to refer to paths relative to the application's folder directly (via `Flask.root_path`). This was also how many developers loaded configurations stored next to the application. Unfortunately, however, this only works well if applications are not packages, in which case the root path refers to the contents of the package.

With Flask 0.8 a new attribute was introduced: `Flask.instance_path`. It refers to a new concept called the "instance folder". The instance folder is designed to not be under version control and to be deployment-specific. It's the perfect place to drop things that either change at runtime or configuration files.

You can either explicitly provide the path of the instance folder when creating the Flask application, or you can let Flask autodetect the instance folder.
For explicit configuration use the `instance_path` parameter:

```
app = Flask(__name__, instance_path='/path/to/instance/folder')
```

Please keep in mind that this path *must* be absolute when provided.

If the `instance_path` parameter is not provided, the following default locations are used:

* Uninstalled module:

  ```
  /myapp.py
  /instance
  ```

* Uninstalled package:

  ```
  /myapp
      /__init__.py
  /instance
  ```

* Installed module or package:

  ```
  $PREFIX/lib/pythonX.Y/site-packages/myapp
  $PREFIX/var/myapp-instance
  ```

`$PREFIX` is the prefix of your Python installation. This can be `/usr` or the path to your virtualenv. You can print the value of `sys.prefix` to see what the prefix is set to.

Since the config object provides loading of configuration files from relative filenames, it is possible to make that loading relative to the instance path instead. The behavior of relative paths in config files can be flipped between "relative to the application root" (the default) and "relative to the instance folder" via the `instance_relative_config` switch to the application constructor:

```
app = Flask(__name__, instance_relative_config=True)
```

Here is a full example of how to configure Flask to preload the config from a module and then override the config from a file in the instance folder if it exists:

```
app = Flask(__name__, instance_relative_config=True)
app.config.from_object('yourapplication.default_settings')
app.config.from_pyfile('application.cfg', silent=True)
```

The path to the instance folder can be found via `Flask.instance_path`. Flask also provides a shortcut to open a file from the instance folder with `Flask.open_instance_resource()`. Example usage for both:

```
filename = os.path.join(app.instance_path, 'application.cfg')
with open(filename) as f:
    config = f.read()

# or via open_instance_resource:
with app.open_instance_resource('application.cfg') as f:
    config = f.read()
```

Signals
=======

Signals are a lightweight way to notify subscribers of certain events during the lifecycle of the application and each request. When an event occurs, it emits the signal, which calls each subscriber.

Signals are implemented by the [Blinker](https://pypi.org/project/blinker/) library. See its documentation for detailed information.

Flask provides some built-in signals. Extensions may provide their own. Many signals mirror Flask's decorator-based callbacks with similar names. For example, the [`request_started`](../api/index#flask.request_started) signal is similar to the [`before_request()`](../api/index#flask.Flask.before_request) decorator. The advantage of signals over handlers is that they can be subscribed to temporarily, and can't directly affect the application. This is useful for testing, metrics, auditing, and more. For example, if you want to know what templates were rendered at what parts of what requests, there is a signal that will notify you of that information.

Core Signals
------------

See [Signals](../api/index#core-signals-list) for a list of all built-in signals. The [Application Structure and Lifecycle](../lifecycle/index) page also describes the order that signals and decorators execute.
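As a minimal sketch of that mirroring, subscribing to `request_started` looks much like registering a `before_request` callback, except that the subscriber observes the event without being able to modify the response:

```
from flask import Flask, request_started

app = Flask(__name__)

def log_request_started(sender, **extra):
    # sender is the application that emitted the signal
    sender.logger.debug("request started")

# subscribe only to signals emitted by this particular app
request_started.connect(log_request_started, app)
```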
Subscribing to Signals
----------------------

To subscribe to a signal, you can use the [`connect()`](https://blinker.readthedocs.io/en/stable/#blinker.base.Signal.connect) method of a signal. The first argument is the function that should be called when the signal is emitted; the optional second argument specifies a sender. To unsubscribe from a signal, you can use the [`disconnect()`](https://blinker.readthedocs.io/en/stable/#blinker.base.Signal.disconnect) method.

For all core Flask signals, the sender is the application that issued the signal. When you subscribe to a signal, be sure to also provide a sender unless you really want to listen for signals from all applications. This is especially true if you are developing an extension.

For example, here is a helper context manager that can be used in a unit test to determine which templates were rendered and what variables were passed to the template:

```
from flask import template_rendered
from contextlib import contextmanager

@contextmanager
def captured_templates(app):
    recorded = []
    def record(sender, template, context, **extra):
        recorded.append((template, context))
    template_rendered.connect(record, app)
    try:
        yield recorded
    finally:
        template_rendered.disconnect(record, app)
```

This can now easily be paired with a test client:

```
with captured_templates(app) as templates:
    rv = app.test_client().get('/')
    assert rv.status_code == 200
    assert len(templates) == 1
    template, context = templates[0]
    assert template.name == 'index.html'
    assert len(context['items']) == 10
```

Make sure to subscribe with an extra `**extra` argument so that your calls don't fail if Flask introduces new arguments to the signals.

All the template rendering in the code issued by the application `app` in the body of the `with` block will now be recorded in the `templates` variable. Whenever a template is rendered, the template object as well as the context are appended to it.

Additionally there is a convenient helper method ([`connected_to()`](https://blinker.readthedocs.io/en/stable/#blinker.base.Signal.connected_to)) that allows you to temporarily subscribe a function to a signal with a context manager on its own. Because the return value of the context manager cannot be specified that way, you have to pass the list in as an argument (the stray `**extra` parameters in the original snippet have been moved onto the `record` callback, where they belong per the advice above):

```
from flask import template_rendered

def captured_templates(app, recorded):
    def record(sender, template, context, **extra):
        recorded.append((template, context))
    return template_rendered.connected_to(record, app)
```

The example above would then look like this:

```
templates = []
with captured_templates(app, templates):
    ...

template, context = templates[0]
```

Creating Signals
----------------

If you want to use signals in your own application, you can use the blinker library directly. The most common use case is named signals in a custom [`Namespace`](https://blinker.readthedocs.io/en/stable/#blinker.base.Namespace). This is what is recommended most of the time:

```
from blinker import Namespace
my_signals = Namespace()
```

Now you can create new signals like this:

```
model_saved = my_signals.signal('model-saved')
```

The name for the signal here makes it unique and also simplifies debugging. You can access the name of the signal with the [`name`](https://blinker.readthedocs.io/en/stable/#blinker.base.NamedSignal.name) attribute.
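A minimal sketch of wiring a subscriber to the custom `model_saved` signal defined above (the `on_model_saved` callback is hypothetical):

```
def on_model_saved(sender, **extra):
    # sender will be whatever object calls model_saved.send()
    print(f"model saved: {sender!r}")

# no sender argument: listen to this signal from all senders
model_saved.connect(on_model_saved)
```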
Sending Signals
---------------

If you want to emit a signal, you can do so by calling the [`send()`](https://blinker.readthedocs.io/en/stable/#blinker.base.Signal.send "(in Blinker v1.6)") method. It accepts a sender as the first argument and optionally some keyword arguments that are forwarded to the signal subscribers:

```
class Model(object):
    ...

    def save(self):
        model_saved.send(self)
```

Try to always pick a good sender. If you have a class that is emitting a signal, pass `self` as the sender. If you are emitting a signal from a random function, you can pass `current_app._get_current_object()` as the sender.

Passing Proxies as Senders

Never pass [`current_app`](../api/index#flask.current_app "flask.current_app") as sender to a signal. Use `current_app._get_current_object()` instead. The reason for this is that [`current_app`](../api/index#flask.current_app "flask.current_app") is a proxy and not the real application object.

Signals and Flask’s Request Context
-----------------------------------

Signals fully support [The Request Context](../reqcontext/index) when they are received. Context-local variables are consistently available between [`request_started`](../api/index#flask.request_started "flask.request_started") and [`request_finished`](../api/index#flask.request_finished "flask.request_finished"), so you can rely on [`flask.g`](../api/index#flask.g "flask.g") and others as needed. Note the limitations described in [Sending Signals](#signals-sending) and the [`request_tearing_down`](../api/index#flask.request_tearing_down "flask.request_tearing_down") signal.

Decorator Based Signal Subscriptions
------------------------------------

You can also easily subscribe to signals by using the `connect_via()` decorator:

```
from flask import template_rendered

@template_rendered.connect_via(app)
def when_template_rendered(sender, template, context, **extra):
    print(f'Template {template.name} is rendered with {context}')
```

Class-based Views
=================

This page introduces using the [`View`](../api/index#flask.views.View "flask.views.View") and [`MethodView`](../api/index#flask.views.MethodView "flask.views.MethodView") classes to write class-based views.

A class-based view is a class that acts as a view function. Because it is a class, different instances of the class can be created with different arguments, to change the behavior of the view. This is also known as generic, reusable, or pluggable views.

An example of where this is useful is defining a class that creates an API based on the database model it is initialized with. For more complex API behavior and customization, look into the various API extensions for Flask.

Basic Reusable View
-------------------

Let’s walk through an example converting a view function to a view class. We start with a view function that queries a list of users then renders a template to show the list.

```
@app.route("/users/")
def user_list():
    users = User.query.all()
    return render_template("users.html", users=users)
```

This works for the user model, but let’s say you also had more models that needed list pages. You’d need to write another view function for each model, even though the only thing that would change is the model and template name. Instead, you can write a [`View`](../api/index#flask.views.View "flask.views.View") subclass that will query a model and render a template.
As the first step, we’ll convert the view to a class without any customization.

```
from flask.views import View

class UserList(View):
    def dispatch_request(self):
        users = User.query.all()
        return render_template("users.html", users=users)

app.add_url_rule("/users/", view_func=UserList.as_view("user_list"))
```

The [`View.dispatch_request()`](../api/index#flask.views.View.dispatch_request "flask.views.View.dispatch_request") method is the equivalent of the view function. Calling the [`View.as_view()`](../api/index#flask.views.View.as_view "flask.views.View.as_view") method will create a view function that can be registered on the app with its [`add_url_rule()`](../api/index#flask.Flask.add_url_rule "flask.Flask.add_url_rule") method. The first argument to `as_view` is the name to use to refer to the view with [`url_for()`](../api/index#flask.url_for "flask.url_for").

Note

You can’t decorate the class with `@app.route()` the way you’d do with a basic view function.

Next, we need to be able to register the same view class for different models and templates, to make it more useful than the original function. The class will take two arguments, the model and template, and store them on `self`. Then `dispatch_request` can reference these instead of hard-coded values.

```
class ListView(View):
    def __init__(self, model, template):
        self.model = model
        self.template = template

    def dispatch_request(self):
        items = self.model.query.all()
        return render_template(self.template, items=items)
```

Remember, we create the view function with `View.as_view()` instead of creating the class directly. Any extra arguments passed to `as_view` are then passed when creating the class. Now we can register the same view to handle multiple models.

```
app.add_url_rule(
    "/users/",
    view_func=ListView.as_view("user_list", User, "users.html"),
)
app.add_url_rule(
    "/stories/",
    view_func=ListView.as_view("story_list", Story, "stories.html"),
)
```

URL Variables
-------------

Any variables captured by the URL are passed as keyword arguments to the `dispatch_request` method, as they would be for a regular view function.

```
class DetailView(View):
    def __init__(self, model):
        self.model = model
        self.template = f"{model.__name__.lower()}/detail.html"

    def dispatch_request(self, id):
        item = self.model.query.get_or_404(id)
        return render_template(self.template, item=item)

app.add_url_rule(
    "/users/<int:id>",
    view_func=DetailView.as_view("user_detail", User)
)
```

View Lifetime and `self`
------------------------

By default, a new instance of the view class is created every time a request is handled. This means that it is safe to write other data to `self` during the request, since the next request will not see it, unlike other forms of global state.

However, if your view class needs to do a lot of complex initialization, doing it for every request is unnecessary and can be inefficient. To avoid this, set [`View.init_every_request`](../api/index#flask.views.View.init_every_request "flask.views.View.init_every_request") to `False`, which will only create one instance of the class and use it for every request. In this case, writing to `self` is not safe. If you need to store data during the request, use [`g`](../api/index#flask.g "flask.g") instead.

In the `ListView` example, nothing writes to `self` during the request, so it is more efficient to create a single instance.
```
class ListView(View):
    init_every_request = False

    def __init__(self, model, template):
        self.model = model
        self.template = template

    def dispatch_request(self):
        items = self.model.query.all()
        return render_template(self.template, items=items)
```

Different instances will still be created for each `as_view` call, but not for each request to those views.

View Decorators
---------------

The view class itself is not the view function. View decorators need to be applied to the view function returned by `as_view`, not the class itself. Set [`View.decorators`](../api/index#flask.views.View.decorators "flask.views.View.decorators") to a list of decorators to apply.

```
class UserList(View):
    decorators = [cache(minutes=2), login_required]

app.add_url_rule('/users/', view_func=UserList.as_view("users_list"))
```

If you didn’t set `decorators`, you could apply them manually instead. This is equivalent to:

```
view = UserList.as_view("users_list")
view = cache(minutes=2)(view)
view = login_required(view)
app.add_url_rule('/users/', view_func=view)
```

Keep in mind that order matters. If you’re used to `@decorator` style, this is equivalent to:

```
@app.route("/users/")
@login_required
@cache(minutes=2)
def user_list():
    ...
```

Method Hints
------------

A common pattern is to register a view with `methods=["GET", "POST"]`, then check `request.method == "POST"` to decide what to do. Setting [`View.methods`](../api/index#flask.views.View.methods "flask.views.View.methods") is equivalent to passing the list of methods to `add_url_rule` or `route`.

```
class MyView(View):
    methods = ["GET", "POST"]

    def dispatch_request(self):
        if request.method == "POST":
            ...
        ...

app.add_url_rule('/my-view', view_func=MyView.as_view('my-view'))
```

This is equivalent to the following, except further subclasses can inherit or change the methods.

```
app.add_url_rule(
    "/my-view",
    view_func=MyView.as_view("my-view"),
    methods=["GET", "POST"],
)
```

Method Dispatching and APIs
---------------------------

For APIs it can be helpful to use a different function for each HTTP method. [`MethodView`](../api/index#flask.views.MethodView "flask.views.MethodView") extends the basic [`View`](../api/index#flask.views.View "flask.views.View") to dispatch to different methods of the class based on the request method. Each HTTP method maps to a method of the class with the same (lowercase) name.

[`MethodView`](../api/index#flask.views.MethodView "flask.views.MethodView") automatically sets [`View.methods`](../api/index#flask.views.View.methods "flask.views.View.methods") based on the methods defined by the class. It even knows how to handle subclasses that override or define other methods.

We can make a generic `ItemAPI` class that provides get (detail), patch (edit), and delete methods for a given model. A `GroupAPI` can provide get (list) and post (create) methods.
```
from flask.views import MethodView

class ItemAPI(MethodView):
    init_every_request = False

    def __init__(self, model):
        self.model = model
        self.validator = generate_validator(model)

    def _get_item(self, id):
        return self.model.query.get_or_404(id)

    def get(self, id):
        item = self._get_item(id)
        return jsonify(item.to_json())

    def patch(self, id):
        item = self._get_item(id)
        errors = self.validator.validate(item, request.json)

        if errors:
            return jsonify(errors), 400

        item.update_from_json(request.json)
        db.session.commit()
        return jsonify(item.to_json())

    def delete(self, id):
        item = self._get_item(id)
        db.session.delete(item)
        db.session.commit()
        return "", 204

class GroupAPI(MethodView):
    init_every_request = False

    def __init__(self, model):
        self.model = model
        self.validator = generate_validator(model, create=True)

    def get(self):
        items = self.model.query.all()
        return jsonify([item.to_json() for item in items])

    def post(self):
        errors = self.validator.validate(request.json)

        if errors:
            return jsonify(errors), 400

        item = self.model.from_json(request.json)
        db.session.add(item)
        db.session.commit()
        return jsonify(item.to_json())

def register_api(app, model, name):
    item = ItemAPI.as_view(f"{name}-item", model)
    group = GroupAPI.as_view(f"{name}-group", model)
    app.add_url_rule(f"/{name}/<int:id>", view_func=item)
    app.add_url_rule(f"/{name}/", view_func=group)

register_api(app, User, "users")
register_api(app, Story, "stories")
```

This produces the following views, a standard REST API!

| URL | Method | Description |
| --- | --- | --- |
| `/users/` | `GET` | List all users |
| `/users/` | `POST` | Create a new user |
| `/users/<id>` | `GET` | Show a single user |
| `/users/<id>` | `PATCH` | Update a user |
| `/users/<id>` | `DELETE` | Delete a user |
| `/stories/` | `GET` | List all stories |
| `/stories/` | `POST` | Create a new story |
| `/stories/<id>` | `GET` | Show a single story |
| `/stories/<id>` | `PATCH` | Update a story |
| `/stories/<id>` | `DELETE` | Delete a story |

Application Structure and Lifecycle
===================================

Flask makes it pretty easy to write a web application. But there are quite a few different parts to an application and to each request it handles. Knowing what happens during application setup, serving, and handling requests will help you know what’s possible in Flask and how to structure your application.

Application Setup
-----------------

The first step in creating a Flask application is creating the application object. Each Flask application is an instance of the [`Flask`](../api/index#flask.Flask "flask.Flask") class, which collects all configuration, extensions, and views.

```
from flask import Flask

app = Flask(__name__)
app.config.from_mapping(
    SECRET_KEY="dev",
)
app.config.from_prefixed_env()

@app.route("/")
def index():
    return "Hello, World!"
```

This is known as the “application setup phase”: it’s the code you write that’s outside any view functions or other handlers. It can be split up between different modules and sub-packages, but all code that you want to be part of your application must be imported in order for it to be registered.

All application setup must be completed before you start serving your application and handling requests. This is because WSGI servers divide work between multiple workers, or can be distributed across multiple machines.
If the configuration changed in one worker, there’s no way for Flask to ensure consistency with other workers.

Flask tries to help developers catch some of these setup ordering issues by showing an error if setup-related methods are called after requests are handled. In that case you’ll see this error:

```
The setup method 'route' can no longer be called on the application. It has
already handled its first request, any changes will not be applied
consistently.
Make sure all imports, decorators, functions, etc. needed to set up the
application are done before running it.
```

However, it is not possible for Flask to detect all cases of out-of-order setup. In general, don’t do anything to modify the `Flask` app object and `Blueprint` objects from within view functions that run during requests. This includes:

* Adding routes, view functions, and other request handlers with `@app.route`, `@app.errorhandler`, `@app.before_request`, etc.
* Registering blueprints.
* Loading configuration with `app.config`.
* Setting up the Jinja template environment with `app.jinja_env`.
* Setting a session interface, instead of the default itsdangerous cookie.
* Setting a JSON provider with `app.json`, instead of the default provider.
* Creating and initializing Flask extensions.

Serving the Application
-----------------------

Flask is a WSGI application framework. The other half of WSGI is the WSGI server. During development, Flask, through Werkzeug, provides a development WSGI server with the `flask run` CLI command. When you are done with development, use a production server to serve your application, see [Deploying to Production](../deploying/index).

Regardless of what server you’re using, it will follow the [**PEP 3333**](https://peps.python.org/pep-3333/) WSGI spec. The WSGI server will be told how to access your Flask application object, which is the WSGI application. Then it will start listening for HTTP requests, translate the request data into a WSGI environ, and call the WSGI application with that data. The WSGI application will return data that is translated into an HTTP response.

1. Browser or other client makes HTTP request.
2. WSGI server receives request.
3. WSGI server converts HTTP data to WSGI `environ` dict.
4. WSGI server calls WSGI application with the `environ`.
5. Flask, the WSGI application, does all its internal processing to route the request to a view function, handle errors, etc.
6. Flask translates the view function’s return value into WSGI response data and passes it to the WSGI server.
7. WSGI server creates and sends an HTTP response.
8. Client receives the HTTP response.

### Middleware

The WSGI application above is a callable that behaves in a certain way. Middleware is a WSGI application that wraps another WSGI application. It’s a similar concept to Python decorators. The outermost middleware will be called by the server. It can modify the data passed to it, then call the WSGI application (or further middleware) that it wraps, and so on. And it can take the return value of that call and modify it further.

From the WSGI server’s perspective, there is one WSGI application, the one it calls directly. Typically, Flask is the “real” application at the end of the chain of middleware. But even Flask can call further WSGI applications, although that’s an advanced, uncommon use case.
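As a minimal sketch of that wrapping pattern, middleware is usually applied by replacing `app.wsgi_app` with the wrapper. The middleware class and the header it adds here are hypothetical, not part of Flask or Werkzeug:

```
class ExampleMiddleware:
    """Hypothetical middleware that adds a header to every response."""

    def __init__(self, wsgi_app):
        self.wsgi_app = wsgi_app

    def __call__(self, environ, start_response):
        def starter(status, headers, exc_info=None):
            # modify the response headers before they are sent
            headers.append(("X-Example", "1"))
            return start_response(status, headers, exc_info)

        # call the wrapped WSGI application with our start_response
        return self.wsgi_app(environ, starter)

app.wsgi_app = ExampleMiddleware(app.wsgi_app)
```

Wrapping `app.wsgi_app` rather than `app` itself keeps the Flask object's attributes and methods accessible while still placing the middleware in the WSGI call chain.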
A common middleware you’ll see used with Flask is Werkzeug’s [`ProxyFix`](https://werkzeug.palletsprojects.com/en/2.3.x/middleware/proxy_fix/#werkzeug.middleware.proxy_fix.ProxyFix "(in Werkzeug v2.3.x)"), which modifies the request to look like it came directly from a client even if it passed through HTTP proxies on the way. There are other middleware that can handle serving static files, authentication, etc.

How a Request is Handled
------------------------

For us, the interesting part of the steps above is when Flask gets called by the WSGI server (or middleware). At that point, it will do quite a lot to handle the request and generate the response. At the most basic, it will match the URL to a view function, call the view function, and pass the return value back to the server. But there are many more parts that you can use to customize its behavior.

1. WSGI server calls the Flask object, which calls [`Flask.wsgi_app()`](../api/index#flask.Flask.wsgi_app "flask.Flask.wsgi_app").
2. A [`RequestContext`](../api/index#flask.ctx.RequestContext "flask.ctx.RequestContext") object is created. This converts the WSGI `environ` dict into a [`Request`](../api/index#flask.Request "flask.Request") object. It also creates an `AppContext` object.
3. The [app context](../appcontext/index) is pushed, which makes [`current_app`](../api/index#flask.current_app "flask.current_app") and [`g`](../api/index#flask.g "flask.g") available.
4. The [`appcontext_pushed`](../api/index#flask.appcontext_pushed "flask.appcontext_pushed") signal is sent.
5. The [request context](../reqcontext/index) is pushed, which makes [`request`](../api/index#flask.request "flask.request") and [`session`](../api/index#flask.session "flask.session") available.
6. The session is opened, loading any existing session data using the app’s [`session_interface`](../api/index#flask.Flask.session_interface "flask.Flask.session_interface"), an instance of [`SessionInterface`](../api/index#flask.sessions.SessionInterface "flask.sessions.SessionInterface").
7. The URL is matched against the URL rules registered with the [`route()`](../api/index#flask.Flask.route "flask.Flask.route") decorator during application setup. If there is no match, the error (usually a 404, 405, or redirect) is stored to be handled later.
8. The [`request_started`](../api/index#flask.request_started "flask.request_started") signal is sent.
9. Any [`url_value_preprocessor()`](../api/index#flask.Flask.url_value_preprocessor "flask.Flask.url_value_preprocessor") decorated functions are called.
10. Any [`before_request()`](../api/index#flask.Flask.before_request "flask.Flask.before_request") decorated functions are called. If any of these functions returns a value, it is treated as the response immediately.
11. If the URL didn’t match a route a few steps ago, that error is raised now.
12. The [`route()`](../api/index#flask.Flask.route "flask.Flask.route") decorated view function associated with the matched URL is called and returns a value to be used as the response.
13. If any step so far raised an exception, and there is an [`errorhandler()`](../api/index#flask.Flask.errorhandler "flask.Flask.errorhandler") decorated function that matches the exception class or HTTP error code, it is called to handle the error and return a response.
14. Whatever returned a response value (a before-request function, the view, or an error handler), that value is converted to a [`Response`](../api/index#flask.Response "flask.Response") object.
15. Any [`after_this_request()`](../api/index#flask.after_this_request "flask.after_this_request") decorated functions are called, then cleared.
16. Any [`after_request()`](../api/index#flask.Flask.after_request "flask.Flask.after_request") decorated functions are called, which can modify the response object.
17. The session is saved, persisting any modified session data using the app’s [`session_interface`](../api/index#flask.Flask.session_interface "flask.Flask.session_interface").
18. The [`request_finished`](../api/index#flask.request_finished "flask.request_finished") signal is sent.
19. If any step so far raised an exception, and it was not handled by an error handler function, it is handled now. HTTP exceptions are treated as responses with their corresponding status code; other exceptions are converted to a generic 500 response. The [`got_request_exception`](../api/index#flask.got_request_exception "flask.got_request_exception") signal is sent.
20. The response object’s status, headers, and body are returned to the WSGI server.
21. Any [`teardown_request()`](../api/index#flask.Flask.teardown_request "flask.Flask.teardown_request") decorated functions are called.
22. The [`request_tearing_down`](../api/index#flask.request_tearing_down "flask.request_tearing_down") signal is sent.
23. The request context is popped, so [`request`](../api/index#flask.request "flask.request") and [`session`](../api/index#flask.session "flask.session") are no longer available.
24. Any [`teardown_appcontext()`](../api/index#flask.Flask.teardown_appcontext "flask.Flask.teardown_appcontext") decorated functions are called.
25. The [`appcontext_tearing_down`](../api/index#flask.appcontext_tearing_down "flask.appcontext_tearing_down") signal is sent.
26. The app context is popped, so [`current_app`](../api/index#flask.current_app "flask.current_app") and [`g`](../api/index#flask.g "flask.g") are no longer available.
27. The [`appcontext_popped`](../api/index#flask.appcontext_popped "flask.appcontext_popped") signal is sent.

There are even more decorators and customization points than this, but they aren’t part of every request lifecycle. They’re more specific to certain things you might use during a request, such as templates, building URLs, or handling JSON data. See the rest of this documentation, as well as the [API](../api/index) to explore further.

The Application Context
=======================

The application context keeps track of the application-level data during a request, CLI command, or other activity. Rather than passing the application around to each function, the [`current_app`](../api/index#flask.current_app "flask.current_app") and [`g`](../api/index#flask.g "flask.g") proxies are accessed instead.

This is similar to [The Request Context](../reqcontext/index), which keeps track of request-level data during a request. A corresponding application context is pushed when a request context is pushed.

Purpose of the Context
----------------------

The [`Flask`](../api/index#flask.Flask "flask.Flask") application object has attributes, such as [`config`](../api/index#flask.Flask.config "flask.Flask.config"), that are useful to access within views and [CLI commands](../cli/index). However, importing the `app` instance within the modules in your project is prone to circular import issues.
When using the [app factory pattern](../patterns/appfactories/index) or writing reusable [blueprints](../blueprints/index) or [extensions](../extensions/index), there won’t be an `app` instance to import at all. Flask solves this issue with the *application context*. Rather than referring to an `app` directly, you use the [`current_app`](../api/index#flask.current_app "flask.current_app") proxy, which points to the application handling the current activity.

Flask automatically *pushes* an application context when handling a request. View functions, error handlers, and other functions that run during a request will have access to [`current_app`](../api/index#flask.current_app "flask.current_app").

Flask will also automatically push an app context when running CLI commands registered with [`Flask.cli`](../api/index#flask.Flask.cli "flask.Flask.cli") using `@app.cli.command()`.

Lifetime of the Context
-----------------------

The application context is created and destroyed as necessary. When a Flask application begins handling a request, it pushes an application context and a [request context](../reqcontext/index). When the request ends, it pops the request context, then the application context. Typically, an application context will have the same lifetime as a request.

See [The Request Context](../reqcontext/index) for more information about how the contexts work and the full life cycle of a request.

Manually Push a Context
-----------------------

If you try to access [`current_app`](../api/index#flask.current_app "flask.current_app"), or anything that uses it, outside an application context, you’ll get this error message:

```
RuntimeError: Working outside of application context.

This typically means that you attempted to use functionality that needed
to interface with the current application object in some way. To solve
this, set up an application context with app.app_context().
```

If you see that error while configuring your application, such as when initializing an extension, you can push a context manually since you have direct access to the `app`. Use [`app_context()`](../api/index#flask.Flask.app_context "flask.Flask.app_context") in a `with` block, and everything that runs in the block will have access to [`current_app`](../api/index#flask.current_app "flask.current_app").

```
def create_app():
    app = Flask(__name__)

    with app.app_context():
        init_db()

    return app
```

If you see that error somewhere else in your code not related to configuring the application, it most likely indicates that you should move that code into a view function or CLI command.

Storing Data
------------

The application context is a good place to store common data during a request or CLI command. Flask provides the [`g object`](../api/index#flask.g "flask.g") for this purpose. It is a simple namespace object that has the same lifetime as an application context.

Note

The `g` name stands for “global”, but that is referring to the data being global *within a context*. The data on `g` is lost after the context ends, and it is not an appropriate place to store data between requests. Use the [`session`](../api/index#flask.session "flask.session") or a database to store data across requests.

A common use for [`g`](../api/index#flask.g "flask.g") is to manage resources during a request.

1. `get_X()` creates resource `X` if it does not exist, caching it as `g.X`.
2. `teardown_X()` closes or otherwise deallocates the resource if it exists.
It is registered as a [`teardown_appcontext()`](../api/index#flask.Flask.teardown_appcontext "flask.Flask.teardown_appcontext") handler. For example, you can manage a database connection using this pattern:

```
from flask import g

def get_db():
    if 'db' not in g:
        g.db = connect_to_database()

    return g.db

@app.teardown_appcontext
def teardown_db(exception):
    db = g.pop('db', None)

    if db is not None:
        db.close()
```

During a request, every call to `get_db()` will return the same connection, and it will be closed automatically at the end of the request.

You can use [`LocalProxy`](https://werkzeug.palletsprojects.com/en/2.3.x/local/#werkzeug.local.LocalProxy "(in Werkzeug v2.3.x)") to make a new context local from `get_db()`:

```
from werkzeug.local import LocalProxy

db = LocalProxy(get_db)
```

Accessing `db` will call `get_db` internally, in the same way that [`current_app`](../api/index#flask.current_app "flask.current_app") works.

Events and Signals
------------------

The application will call functions registered with [`teardown_appcontext()`](../api/index#flask.Flask.teardown_appcontext "flask.Flask.teardown_appcontext") when the application context is popped.

The following signals are sent: [`appcontext_pushed`](../api/index#flask.appcontext_pushed "flask.appcontext_pushed"), [`appcontext_tearing_down`](../api/index#flask.appcontext_tearing_down "flask.appcontext_tearing_down"), and [`appcontext_popped`](../api/index#flask.appcontext_popped "flask.appcontext_popped").

The Request Context
===================

The request context keeps track of the request-level data during a request. Rather than passing the request object to each function that runs during a request, the [`request`](../api/index#flask.request "flask.request") and [`session`](../api/index#flask.session "flask.session") proxies are accessed instead.

This is similar to [The Application Context](../appcontext/index), which keeps track of the application-level data independent of a request. A corresponding application context is pushed when a request context is pushed.

Purpose of the Context
----------------------

When the [`Flask`](../api/index#flask.Flask "flask.Flask") application handles a request, it creates a [`Request`](../api/index#flask.Request "flask.Request") object based on the environment it received from the WSGI server. Because a *worker* (thread, process, or coroutine depending on the server) handles only one request at a time, the request data can be considered global to that worker during that request. Flask uses the term *context local* for this.

Flask automatically *pushes* a request context when handling a request. View functions, error handlers, and other functions that run during a request will have access to the [`request`](../api/index#flask.request "flask.request") proxy, which points to the request object for the current request.

Lifetime of the Context
-----------------------

When a Flask application begins handling a request, it pushes a request context, which also pushes an [app context](../appcontext/index). When the request ends it pops the request context then the application context.

The context is unique to each thread (or other worker type). [`request`](../api/index#flask.request "flask.request") cannot be passed to another thread; the other thread has a different context space and will not know about the request the parent thread was pointing to.
Context locals are implemented using Python’s [`contextvars`](https://docs.python.org/3/library/contextvars.html#module-contextvars "(in Python v3.11)") and Werkzeug’s [`LocalProxy`](https://werkzeug.palletsprojects.com/en/2.3.x/local/#werkzeug.local.LocalProxy "(in Werkzeug v2.3.x)"). Python manages the lifetime of context vars automatically, and `LocalProxy` wraps that low-level interface to make the data easier to work with.

Manually Push a Context
-----------------------

If you try to access [`request`](../api/index#flask.request "flask.request"), or anything that uses it, outside a request context, you’ll get this error message:

```
RuntimeError: Working outside of request context.

This typically means that you attempted to use functionality that needed
an active HTTP request. Consult the documentation on testing for
information about how to avoid this problem.
```

This should typically only happen when testing code that expects an active request. One option is to use the [`test client`](../api/index#flask.Flask.test_client "flask.Flask.test_client") to simulate a full request. Or you can use [`test_request_context()`](../api/index#flask.Flask.test_request_context "flask.Flask.test_request_context") in a `with` block, and everything that runs in the block will have access to [`request`](../api/index#flask.request "flask.request"), populated with your test data.

```
def generate_report(year):
    format = request.args.get("format")
    ...

with app.test_request_context(
    "/make_report/2017", query_string={"format": "short"}
):
    generate_report(2017)
```

If you see that error somewhere else in your code not related to testing, it most likely indicates that you should move that code into a view function.

For information on how to use the request context from the interactive Python shell, see [Working with the Shell](../shell/index).

How the Context Works
---------------------

The [`Flask.wsgi_app()`](../api/index#flask.Flask.wsgi_app "flask.Flask.wsgi_app") method is called to handle each request. It manages the contexts during the request. Internally, the request and application contexts work like stacks. When contexts are pushed, the proxies that depend on them are available and point at information from the top item.

When the request starts, a [`RequestContext`](../api/index#flask.ctx.RequestContext "flask.ctx.RequestContext") is created and pushed, which creates and pushes an [`AppContext`](../api/index#flask.ctx.AppContext "flask.ctx.AppContext") first if a context for that application is not already the top context. While these contexts are pushed, the [`current_app`](../api/index#flask.current_app "flask.current_app"), [`g`](../api/index#flask.g "flask.g"), [`request`](../api/index#flask.request "flask.request"), and [`session`](../api/index#flask.session "flask.session") proxies are available to the original thread handling the request.

Other contexts may be pushed to change the proxies during a request. While this is not a common pattern, it can be used in advanced applications to, for example, do internal redirects or chain different applications together.

After the request is dispatched and a response is generated and sent, the request context is popped, which then pops the application context. Immediately before they are popped, the [`teardown_request()`](../api/index#flask.Flask.teardown_request "flask.Flask.teardown_request") and [`teardown_appcontext()`](../api/index#flask.Flask.teardown_appcontext "flask.Flask.teardown_appcontext") functions are executed.
These execute even if an unhandled exception occurred during dispatch.

Callbacks and Errors
--------------------

Flask dispatches a request in multiple stages, which can affect the request, the response, and how errors are handled. The contexts are active during all of these stages.

A [`Blueprint`](../api/index#flask.Blueprint "flask.Blueprint") can add handlers for these events that are specific to the blueprint. The handlers for a blueprint will run if the blueprint owns the route that matches the request.

1. Before each request, [`before_request()`](../api/index#flask.Flask.before_request "flask.Flask.before_request") functions are called. If one of these functions returns a value, the other functions are skipped. The return value is treated as the response and the view function is not called.
2. If the [`before_request()`](../api/index#flask.Flask.before_request "flask.Flask.before_request") functions did not return a response, the view function for the matched route is called and returns a response.
3. The return value of the view is converted into an actual response object and passed to the [`after_request()`](../api/index#flask.Flask.after_request "flask.Flask.after_request") functions. Each function returns a modified or new response object.
4. After the response is returned, the contexts are popped, which calls the [`teardown_request()`](../api/index#flask.Flask.teardown_request "flask.Flask.teardown_request") and [`teardown_appcontext()`](../api/index#flask.Flask.teardown_appcontext "flask.Flask.teardown_appcontext") functions. These functions are called even if an unhandled exception was raised at any point above.

If an exception is raised before the teardown functions, Flask tries to match it with an [`errorhandler()`](../api/index#flask.Flask.errorhandler "flask.Flask.errorhandler") function to handle the exception and return a response. If no error handler is found, or the handler itself raises an exception, Flask returns a generic `500 Internal Server Error` response. The teardown functions are still called, and are passed the exception object.

If debug mode is enabled, unhandled exceptions are not converted to a `500` response and instead are propagated to the WSGI server. This allows the development server to present the interactive debugger with the traceback.

### Teardown Callbacks

The teardown callbacks are independent of the request dispatch, and are instead called by the contexts when they are popped. The functions are called even if there is an unhandled exception during dispatch, and for manually pushed contexts. This means there is no guarantee that any other parts of the request dispatch have run first. Be sure to write these functions in a way that does not depend on other callbacks and will not fail.

During testing, it can be useful to defer popping the contexts after the request ends, so that their data can be accessed in the test function. Use the [`test_client()`](../api/index#flask.Flask.test_client "flask.Flask.test_client") as a `with` block to preserve the contexts until the `with` block exits.

```
from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def hello():
    print('during view')
    return 'Hello, World!'
@app.teardown_request
def show_teardown(exception):
    print('after with block')

with app.test_request_context():
    print('during with block')

# teardown functions are called after the context with block exits

with app.test_client() as client:
    client.get('/')
    # the contexts are not popped even though the request ended
    print(request.path)

# the contexts are popped and teardown functions are called after
# the client with block exits
```

### Signals

The following signals are sent:

1. [`request_started`](../api/index#flask.request_started "flask.request_started") is sent before the [`before_request()`](../api/index#flask.Flask.before_request "flask.Flask.before_request") functions are called.
2. [`request_finished`](../api/index#flask.request_finished "flask.request_finished") is sent after the [`after_request()`](../api/index#flask.Flask.after_request "flask.Flask.after_request") functions are called.
3. [`got_request_exception`](../api/index#flask.got_request_exception "flask.got_request_exception") is sent when an exception begins to be handled, but before an [`errorhandler()`](../api/index#flask.Flask.errorhandler "flask.Flask.errorhandler") is looked up or called.
4. [`request_tearing_down`](../api/index#flask.request_tearing_down "flask.request_tearing_down") is sent after the [`teardown_request()`](../api/index#flask.Flask.teardown_request "flask.Flask.teardown_request") functions are called.

Notes On Proxies
----------------

Some of the objects provided by Flask are proxies to other objects. The proxies are accessed in the same way for each worker thread, but point to the unique object bound to each worker behind the scenes as described on this page.

Most of the time you don’t have to care about that, but there are some exceptions where it is good to know that this object is actually a proxy:

* The proxy objects cannot pass themselves off as the type of the objects they wrap. If you want to perform instance checks, you have to do that on the object being proxied.
* The reference to the proxied object is needed in some situations, such as sending [Signals](../signals/index) or passing data to a background thread.

If you need to access the underlying object that is proxied, use the `_get_current_object()` method:

```
app = current_app._get_current_object()
my_signal.send(app)
```

Modular Applications with Blueprints
====================================

New in version 0.7.

Flask uses a concept of *blueprints* for making application components and supporting common patterns within an application or across applications. Blueprints can greatly simplify how large applications work and provide a central means for Flask extensions to register operations on applications. A [`Blueprint`](../api/index#flask.Blueprint "flask.Blueprint") object works similarly to a [`Flask`](../api/index#flask.Flask "flask.Flask") application object, but it is not actually an application. Rather it is a *blueprint* of how to construct or extend an application.

Why Blueprints?
---------------

Blueprints in Flask are intended for these cases:

* Factor an application into a set of blueprints. This is ideal for larger applications; a project could instantiate an application object, initialize several extensions, and register a collection of blueprints.
* Register a blueprint on an application at a URL prefix and/or subdomain.
Parameters in the URL prefix/subdomain become common view arguments (with defaults) across all view functions in the blueprint.

* Register a blueprint multiple times on an application with different URL rules.
* Provide template filters, static files, templates, and other utilities through blueprints. A blueprint does not have to implement applications or view functions.
* Register a blueprint on an application for any of these cases when initializing a Flask extension.

A blueprint in Flask is not a pluggable app because it is not actually an application – it’s a set of operations which can be registered on an application, even multiple times.

Why not have multiple application objects? You can do that (see [Application Dispatching](../patterns/appdispatch/index)), but your applications will have separate configs and will be managed at the WSGI layer.

Blueprints instead provide separation at the Flask level, share application config, and can change an application object as necessary as they are registered. The downside is that you cannot unregister a blueprint once an application was created without destroying the whole application object.

The Concept of Blueprints
-------------------------

The basic concept of blueprints is that they record operations to execute when registered on an application. Flask associates view functions with blueprints when dispatching requests and generating URLs from one endpoint to another.

My First Blueprint
------------------

This is what a very basic blueprint looks like. In this case we want to implement a blueprint that does simple rendering of static templates:

```
from flask import Blueprint, render_template, abort
from jinja2 import TemplateNotFound

simple_page = Blueprint('simple_page', __name__,
                        template_folder='templates')

@simple_page.route('/', defaults={'page': 'index'})
@simple_page.route('/<page>')
def show(page):
    try:
        return render_template(f'pages/{page}.html')
    except TemplateNotFound:
        abort(404)
```

When you bind a function with the help of the `@simple_page.route` decorator, the blueprint will record the intention of registering the function `show` on the application when it’s later registered. Additionally it will prefix the endpoint of the function with the name of the blueprint which was given to the [`Blueprint`](../api/index#flask.Blueprint "flask.Blueprint") constructor (in this case also `simple_page`). The blueprint’s name does not modify the URL, only the endpoint.

Registering Blueprints
----------------------

So how do you register that blueprint? Like this:

```
from flask import Flask
from yourapplication.simple_page import simple_page

app = Flask(__name__)
app.register_blueprint(simple_page)
```

If you check the rules registered on the application, you will find these:

```
>>> app.url_map
Map([<Rule '/static/<filename>' (HEAD, OPTIONS, GET) -> static>,
 <Rule '/<page>' (HEAD, OPTIONS, GET) -> simple_page.show>,
 <Rule '/' (HEAD, OPTIONS, GET) -> simple_page.show>])
```

The first one is obviously from the application itself for the static files. The other two are for the `show` function of the `simple_page` blueprint. As you can see, they are also prefixed with the name of the blueprint and separated by a dot (`.`).
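Those prefixed endpoints work with `url_for()` as usual. For example, with the blueprint registered at the site root as above:

```
>>> url_for('simple_page.show', page='about')
'/about'
```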
Blueprints however can also be mounted at different locations:

```
app.register_blueprint(simple_page, url_prefix='/pages')
```

And sure enough, these are the generated rules:

```
>>> app.url_map
Map([<Rule '/static/<filename>' (HEAD, OPTIONS, GET) -> static>,
 <Rule '/pages/<page>' (HEAD, OPTIONS, GET) -> simple_page.show>,
 <Rule '/pages/' (HEAD, OPTIONS, GET) -> simple_page.show>])
```

On top of that you can register blueprints multiple times, though not every blueprint will respond properly to that; whether a blueprint can be mounted more than once depends on how it is implemented.

Nesting Blueprints
------------------

It is possible to register a blueprint on another blueprint.

```
parent = Blueprint('parent', __name__, url_prefix='/parent')
child = Blueprint('child', __name__, url_prefix='/child')
parent.register_blueprint(child)
app.register_blueprint(parent)
```

The child blueprint will gain the parent’s name as a prefix to its name, and child URLs will be prefixed with the parent’s URL prefix.

```
url_for('parent.child.create')
/parent/child/create
```

In addition, a child blueprint will gain its parent’s subdomain, with its own subdomain as a prefix if present, i.e.:

```
parent = Blueprint('parent', __name__, subdomain='parent')
child = Blueprint('child', __name__, subdomain='child')
parent.register_blueprint(child)
app.register_blueprint(parent)

url_for('parent.child.create', _external=True)
"child.parent.domain.tld"
```

Blueprint-specific before request functions, etc. registered with the parent will trigger for the child. If a child does not have an error handler that can handle a given exception, the parent’s will be tried.

Blueprint Resources
-------------------

Blueprints can provide resources as well. Sometimes you might want to introduce a blueprint only for the resources it provides.

### Blueprint Resource Folder

As with regular applications, blueprints are considered to be contained in a folder. While multiple blueprints can originate from the same folder, it does not have to be the case and it’s usually not recommended.

The folder is inferred from the second argument to [`Blueprint`](../api/index#flask.Blueprint "flask.Blueprint") which is usually `__name__`. This argument specifies what logical Python module or package corresponds to the blueprint. If it points to an actual Python package, that package (which is a folder on the filesystem) is the resource folder. If it’s a module, the package the module is contained in will be the resource folder. You can access the [`Blueprint.root_path`](../api/index#flask.Blueprint.root_path "flask.Blueprint.root_path") property to see what the resource folder is:

```
>>> simple_page.root_path
'/Users/username/TestProject/yourapplication'
```

To quickly open sources from this folder you can use the [`open_resource()`](../api/index#flask.Blueprint.open_resource "flask.Blueprint.open_resource") function:

```
with simple_page.open_resource('static/style.css') as f:
    code = f.read()
```

### Static Files

A blueprint can expose a folder with static files by providing the path to the folder on the filesystem with the `static_folder` argument. It is either an absolute path or relative to the blueprint’s location:

```
admin = Blueprint('admin', __name__, static_folder='static')
```

By default the rightmost part of the path is where it is exposed on the web. This can be changed with the `static_url_path` argument. Because the folder is called `static` here it will be available at the `url_prefix` of the blueprint + `/static`.
If the blueprint has the prefix `/admin`, the static URL will be `/admin/static`.

The endpoint is named `blueprint_name.static`. You can generate URLs to it with [`url_for()`](../api/index#flask.url_for "flask.url_for") like you would with the static folder of the application:

```
url_for('admin.static', filename='style.css')
```

However, if the blueprint does not have a `url_prefix`, it is not possible to access the blueprint’s static folder. This is because the URL would be `/static` in this case, and the application’s `/static` route takes precedence. Unlike template folders, blueprint static folders are not searched if the file does not exist in the application static folder.

### Templates

If you want the blueprint to expose templates you can do that by providing the `template_folder` parameter to the [`Blueprint`](../api/index#flask.Blueprint "flask.Blueprint") constructor:

```
admin = Blueprint('admin', __name__, template_folder='templates')
```

As with static files, the path can be absolute or relative to the blueprint resource folder.

The template folder is added to the search path of templates but with a lower priority than the actual application’s template folder. That way you can easily override templates that a blueprint provides in the actual application. This also means that if you don’t want a blueprint template to be accidentally overridden, make sure that no other blueprint or actual application template has the same relative path. When multiple blueprints provide the same relative template path the first blueprint registered takes precedence over the others.

So if you have a blueprint in the folder `yourapplication/admin` and you want to render the template `'admin/index.html'` and you have provided `templates` as a `template_folder` you will have to create a file like this: `yourapplication/admin/templates/admin/index.html`. The reason for the extra `admin` folder is to avoid getting our template overridden by a template named `index.html` in the actual application template folder.

To further reiterate this: if you have a blueprint named `admin` and you want to render a template called `index.html` which is specific to this blueprint, the best idea is to lay out your templates like this:

```
yourpackage/
    blueprints/
        admin/
            templates/
                admin/
                    index.html
            __init__.py
```

And then when you want to render the template, use `admin/index.html` as the name to look up the template by. If you encounter problems loading the correct templates, enable the `EXPLAIN_TEMPLATE_LOADING` config variable which will instruct Flask to print out the steps it goes through to locate templates on every `render_template` call.

Building URLs
-------------

If you want to link from one page to another you can use the [`url_for()`](../api/index#flask.url_for "flask.url_for") function just as you normally would, except that you prefix the URL endpoint with the name of the blueprint and a dot (`.`):

```
url_for('admin.index')
```

Additionally if you are in a view function of a blueprint or a rendered template and you want to link to another endpoint of the same blueprint, you can use relative redirects by prefixing the endpoint with a dot only:

```
url_for('.index')
```

For instance, this will link to `admin.index` if the current request was dispatched to any other endpoint of the `admin` blueprint.
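As a minimal sketch of that relative form (the `save` view here is hypothetical, added only for illustration):

```
from flask import Blueprint, redirect, url_for

admin = Blueprint('admin', __name__, url_prefix='/admin')

@admin.route('/')
def index():
    return 'admin index'

@admin.route('/save')
def save():
    # '.index' resolves to 'admin.index' because this view
    # was dispatched to an endpoint of the admin blueprint
    return redirect(url_for('.index'))
```

The relative form is convenient because the views keep working unchanged if the blueprint is ever renamed or registered under a different name.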
Blueprint Error Handlers
------------------------

Blueprints support the `errorhandler` decorator just like the [`Flask`](../api/index#flask.Flask "flask.Flask") application object, so it is easy to make Blueprint-specific custom error pages.

Here is an example for a “404 Page Not Found” exception:

```
@simple_page.errorhandler(404)
def page_not_found(e):
    return render_template('pages/404.html')
```

Most errorhandlers will simply work as expected; however, there is a caveat concerning handlers for 404 and 405 exceptions. These errorhandlers are only invoked from an appropriate `raise` statement or a call to `abort` in another of the blueprint’s view functions; they are not invoked by, e.g., an invalid URL access. This is because the blueprint does not “own” a certain URL space, so the application instance has no way of knowing which blueprint error handler it should run if given an invalid URL. If you would like to execute different handling strategies for these errors based on URL prefixes, they may be defined at the application level using the `request` proxy object:

```
@app.errorhandler(404)
@app.errorhandler(405)
def _handle_api_error(ex):
    if request.path.startswith('/api/'):
        return jsonify(error=str(ex)), ex.code
    else:
        return ex
```

See [Handling Application Errors](../errorhandling/index).

Extensions
==========

Extensions are extra packages that add functionality to a Flask application. For example, an extension might add support for sending email or connecting to a database. Some extensions add entire new frameworks to help build certain types of applications, like a REST API.

Finding Extensions
------------------

Flask extensions are usually named “Flask-Foo” or “Foo-Flask”. You can search PyPI for packages tagged with [Framework :: Flask](https://pypi.org/search/?c=Framework+%3A%3A+Flask).

Using Extensions
----------------

Consult each extension’s documentation for installation, configuration, and usage instructions. Generally, extensions pull their own configuration from [`app.config`](../api/index#flask.Flask.config "flask.Flask.config") and are passed an application instance during initialization. For example, an extension called “Flask-Foo” might be used like this:

```
from flask_foo import Foo

foo = Foo()

app = Flask(__name__)
app.config.update(
    FOO_BAR='baz',
    FOO_SPAM='eggs',
)

foo.init_app(app)
```

Building Extensions
-------------------

While [PyPI](https://pypi.org/search/?c=Framework+%3A%3A+Flask) contains many Flask extensions, you may not find one that fits your needs. If this is the case, you can create your own, and publish it for others to use as well. Read [Flask Extension Development](https://flask.palletsprojects.com/en/2.3.x/extensiondev/) to develop your own Flask extension.

Command Line Interface
======================

Installing Flask installs the `flask` script, a [Click](https://click.palletsprojects.com/) command line interface, in your virtualenv. Executed from the terminal, this script gives access to built-in, extension, and application-defined commands. The `--help` option will give more information about any commands and options.

Application Discovery
---------------------

The `flask` command is installed by Flask, not your application; it must be told where to find your application in order to use it.
The `--app` option is used to specify how to load the application.

While `--app` supports a variety of options for specifying your application, most use cases should be simple. Here are the typical values:

* (nothing): The name “app” or “wsgi” is imported (as a “.py” file, or package), automatically detecting an app (`app` or `application`) or factory (`create_app` or `make_app`).
* `--app hello`: The given name is imported, automatically detecting an app (`app` or `application`) or factory (`create_app` or `make_app`).

`--app` has three parts: an optional path that sets the current working directory, a Python file or dotted import path, and an optional variable name of the instance or factory. If the name is a factory, it can optionally be followed by arguments in parentheses. The following values demonstrate these parts:

* `--app src/hello`: Sets the current working directory to `src` then imports `hello`.
* `--app hello.web`: Imports the path `hello.web`.
* `--app hello:app2`: Uses the `app2` Flask instance in `hello`.
* `--app 'hello:create_app("dev")'`: The `create_app` factory in `hello` is called with the string `'dev'` as the argument.

If `--app` is not set, the command will try to import “app” or “wsgi” (as a “.py” file, or package) and try to detect an application instance or factory.

Within the given import, the command looks for an application instance named `app` or `application`, then any application instance. If no instance is found, the command looks for a factory function named `create_app` or `make_app` that returns an instance.

If parentheses follow the factory name, their contents are parsed as Python literals and passed as arguments and keyword arguments to the function. This means that strings must still be in quotes.

Run the Development Server
--------------------------

The [`run`](../api/index#flask.cli.run_command "flask.cli.run_command") command will start the development server. It replaces the [`Flask.run()`](../api/index#flask.Flask.run "flask.Flask.run") method in most cases.

```
$ flask --app hello run
 * Serving Flask app "hello"
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
```

Warning

Do not use this command to run your application in production. Only use the development server during development. The development server is provided for convenience, but is not designed to be particularly secure, stable, or efficient. See [Deploying to Production](../deploying/index) for how to run in production.

If another program is already using port 5000, you’ll see `OSError: [Errno 98]` or `OSError: [WinError 10013]` when the server tries to start. See [Address already in use](../server/index#address-already-in-use) for how to handle that.

### Debug Mode

In debug mode, the `flask run` command will enable the interactive debugger and the reloader by default, and make errors easier to see and debug. To enable debug mode, use the `--debug` option.

```
$ flask --app hello run --debug
 * Serving Flask app "hello"
 * Debug mode: on
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with inotify reloader
 * Debugger is active!
 * Debugger PIN: 223-456-919
```

The `--debug` option can also be passed to the top level `flask` command to enable debug mode for any command. The following two `run` calls are equivalent.

```
$ flask --app hello --debug run
$ flask --app hello run --debug
```

### Watch and Ignore Files with the Reloader

When using debug mode, the reloader will trigger whenever your Python code or imported modules change.
Run the Development Server
--------------------------

The [`run`](../api/index#flask.cli.run_command "flask.cli.run_command") command will start the development server. It replaces the [`Flask.run()`](../api/index#flask.Flask.run "flask.Flask.run") method in most cases.

```
$ flask --app hello run
 * Serving Flask app "hello"
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
```

Warning

Do not use this command to run your application in production. Only use the development server during development. The development server is provided for convenience, but is not designed to be particularly secure, stable, or efficient. See [Deploying to Production](../deploying/index) for how to run in production.

If another program is already using port 5000, you’ll see `OSError: [Errno 98]` or `OSError: [WinError 10013]` when the server tries to start. See [Address already in use](../server/index#address-already-in-use) for how to handle that.

### Debug Mode

In debug mode, the `flask run` command will enable the interactive debugger and the reloader by default, and make errors easier to see and debug. To enable debug mode, use the `--debug` option.

```
$ flask --app hello run --debug
 * Serving Flask app "hello"
 * Debug mode: on
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with inotify reloader
 * Debugger is active!
 * Debugger PIN: 223-456-919
```

The `--debug` option can also be passed to the top level `flask` command to enable debug mode for any command. The following two `run` calls are equivalent.

```
$ flask --app hello --debug run
$ flask --app hello run --debug
```

### Watch and Ignore Files with the Reloader

When using debug mode, the reloader will trigger whenever your Python code or imported modules change. The reloader can watch additional files with the `--extra-files` option. Multiple paths are separated with `:`, or `;` on Windows.

```
$ flask run --extra-files file1:dirA/file2:dirB/
 * Running on http://127.0.0.1:8000/
 * Detected change in '/path/to/file1', reloading
```

The reloader can also ignore files using [`fnmatch`](https://docs.python.org/3/library/fnmatch.html#module-fnmatch "(in Python v3.11)") patterns with the `--exclude-patterns` option. Multiple patterns are separated with `:`, or `;` on Windows.

Open a Shell
------------

To explore the data in your application, you can start an interactive Python shell with the [`shell`](../api/index#flask.cli.shell_command "flask.cli.shell_command") command. An application context will be active, and the app instance will be imported.

```
$ flask shell
Python 3.10.0 (default, Oct 27 2021, 06:59:51) [GCC 11.1.0] on linux
App: example [production]
Instance: /home/david/Projects/pallets/flask/instance
>>>
```

Use [`shell_context_processor()`](../api/index#flask.Flask.shell_context_processor "flask.Flask.shell_context_processor") to add other automatic imports.
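For example, a shell context processor can pre-import objects you reach for constantly. A minimal sketch; the `db` and `User` names are hypothetical stand-ins for your own objects:

```
# a sketch; db and User are assumed to exist elsewhere in your project
@app.shell_context_processor
def make_shell_context():
    return {"db": db, "User": User}
```

With this registered, `flask shell` starts with `db` and `User` already available in the session.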
Environment Variables From dotenv
---------------------------------

The `flask` command supports setting any option for any command with environment variables. The variables are named like `FLASK_OPTION` or `FLASK_COMMAND_OPTION`, for example `FLASK_APP` or `FLASK_RUN_PORT`.

Rather than passing options every time you run a command, or environment variables every time you open a new terminal, you can use Flask’s dotenv support to set environment variables automatically.

If [python-dotenv](https://github.com/theskumar/python-dotenv#readme) is installed, running the `flask` command will set environment variables defined in the files `.env` and `.flaskenv`. You can also specify an extra file to load with the `--env-file` option. Dotenv files can be used to avoid having to set `--app` or `FLASK_APP` manually, and to set configuration using environment variables similar to how some deployment services work.

Variables set on the command line are used over those set in `.env`, which are used over those set in `.flaskenv`. `.flaskenv` should be used for public variables, such as `FLASK_APP`, while `.env` should not be committed to your repository so that it can set private variables.

Directories are scanned upwards from the directory you call `flask` from to locate the files. The files are only loaded by the `flask` command or calling [`run()`](../api/index#flask.Flask.run "flask.Flask.run"). If you would like to load these files when running in production, you should call [`load_dotenv()`](../api/index#flask.cli.load_dotenv "flask.cli.load_dotenv") manually.

### Setting Command Options

Click is configured to load default values for command options from environment variables. The variables use the pattern `FLASK_COMMAND_OPTION`. For example, to set the port for the run command, instead of `flask run --port 8000`:

Bash:

```
$ export FLASK_RUN_PORT=8000
$ flask run
 * Running on http://127.0.0.1:8000/
```

Fish:

```
$ set -x FLASK_RUN_PORT 8000
$ flask run
 * Running on http://127.0.0.1:8000/
```

CMD:

```
> set FLASK_RUN_PORT=8000
> flask run
 * Running on http://127.0.0.1:8000/
```

Powershell:

```
> $env:FLASK_RUN_PORT = 8000
> flask run
 * Running on http://127.0.0.1:8000/
```

These can be added to the `.flaskenv` file just like `FLASK_APP` to control default command options.

### Disable dotenv

The `flask` command will show a message if it detects dotenv files but python-dotenv is not installed.

```
$ flask run
 * Tip: There are .env files present. Do "pip install python-dotenv" to use them.
```

You can tell Flask not to load dotenv files even when python-dotenv is installed by setting the `FLASK_SKIP_DOTENV` environment variable. This can be useful if you want to load them manually, or if you’re using a project runner that loads them already. Keep in mind that the environment variables must be set before the app loads or it won’t configure as expected.

Bash:

```
$ export FLASK_SKIP_DOTENV=1
$ flask run
```

Fish:

```
$ set -x FLASK_SKIP_DOTENV 1
$ flask run
```

CMD:

```
> set FLASK_SKIP_DOTENV=1
> flask run
```

Powershell:

```
> $env:FLASK_SKIP_DOTENV = 1
> flask run
```

Environment Variables From virtualenv
-------------------------------------

If you do not want to install dotenv support, you can still set environment variables by adding them to the end of the virtualenv’s `activate` script. Activating the virtualenv will set the variables.

Unix Bash, `.venv/bin/activate`:

```
$ export FLASK_APP=hello
```

Fish, `.venv/bin/activate.fish`:

```
$ set -x FLASK_APP hello
```

Windows CMD, `.venv\Scripts\activate.bat`:

```
> set FLASK_APP=hello
```

Windows Powershell, `.venv\Scripts\activate.ps1`:

```
> $env:FLASK_APP = "hello"
```

It is preferred to use dotenv support over this, since `.flaskenv` can be committed to the repository so that it works automatically wherever the project is checked out.

Custom Commands
---------------

The `flask` command is implemented using [Click](https://click.palletsprojects.com/). See that project’s documentation for full information about writing commands.

This example adds the command `create-user` that takes the argument `name`.

```
import click
from flask import Flask

app = Flask(__name__)

@app.cli.command("create-user")
@click.argument("name")
def create_user(name):
    ...
```

```
$ flask create-user admin
```

This example adds the same command, but as `user create`, a command in a group. This is useful if you want to organize multiple related commands.

```
import click
from flask import Flask
from flask.cli import AppGroup

app = Flask(__name__)
user_cli = AppGroup('user')

@user_cli.command('create')
@click.argument('name')
def create_user(name):
    ...

app.cli.add_command(user_cli)
```

```
$ flask user create demo
```

See [Running Commands with the CLI Runner](../testing/index#testing-cli) for an overview of how to test your custom commands; a short sketch follows the blueprint example below.

### Registering Commands with Blueprints

If your application uses blueprints, you can optionally register CLI commands directly onto them. When your blueprint is registered onto your application, the associated commands will be available to the `flask` command. By default, those commands will be nested in a group matching the name of the blueprint.

```
from flask import Blueprint

bp = Blueprint('students', __name__)

@bp.cli.command('create')
@click.argument('name')
def create(name):
    ...

app.register_blueprint(bp)
```

```
$ flask students create alice
```
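As promised above, commands like these can be exercised in tests without a subprocess: `test_cli_runner()` returns a Click runner bound to the app. A minimal sketch, assuming the `students` blueprint from the previous example is registered; the assertion is illustrative:

```
# a sketch of testing the blueprint command defined above
runner = app.test_cli_runner()
result = runner.invoke(args=["students", "create", "alice"])
assert result.exit_code == 0  # the command ran without raising
```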
You can alter the group name by specifying the `cli_group` parameter when creating the [`Blueprint`](../api/index#flask.Blueprint "flask.Blueprint") object, or later with [`app.register_blueprint(bp, cli_group='...')`](../api/index#flask.Flask.register_blueprint "flask.Flask.register_blueprint"). The following are equivalent:

```
bp = Blueprint('students', __name__, cli_group='other')
# or
app.register_blueprint(bp, cli_group='other')
```

```
$ flask other create alice
```

Specifying `cli_group=None` will remove the nesting and merge the commands directly to the application’s level:

```
bp = Blueprint('students', __name__, cli_group=None)
# or
app.register_blueprint(bp, cli_group=None)
```

```
$ flask create alice
```

### Application Context

Commands added using the Flask app’s [`cli`](../api/index#flask.Flask.cli "flask.Flask.cli") or [`FlaskGroup`](../api/index#flask.cli.FlaskGroup "flask.cli.FlaskGroup") [`command()`](../api/index#flask.cli.AppGroup.command "flask.cli.AppGroup.command") decorator will be executed with an application context pushed, so your custom commands and parameters have access to the app and its configuration. The [`with_appcontext()`](../api/index#flask.cli.with_appcontext "flask.cli.with_appcontext") decorator can be used to get the same behavior, but is not needed in most cases.

```
import click
from flask.cli import with_appcontext

@click.command()
@with_appcontext
def do_work():
    ...

app.cli.add_command(do_work)
```

Plugins
-------

Flask will automatically load commands specified in the `flask.commands` [entry point](https://packaging.python.org/tutorials/packaging-projects/#entry-points). This is useful for extensions that want to add commands when they are installed. Entry points are specified in `pyproject.toml`:

```
[project.entry-points."flask.commands"]
my-command = "my_extension.commands:cli"
```

Inside `my_extension/commands.py` you can then export a Click object:

```
import click

@click.command()
def cli():
    ...
```

Once that package is installed in the same virtualenv as your Flask project, you can run `flask my-command` to invoke the command.

Custom Scripts
--------------

When you are using the app factory pattern, it may be more convenient to define your own Click script. Instead of using `--app` and letting Flask load your application, you can create your own Click object and export it as a [console script](https://packaging.python.org/tutorials/packaging-projects/#console-scripts) entry point.

Create an instance of [`FlaskGroup`](../api/index#flask.cli.FlaskGroup "flask.cli.FlaskGroup") and pass it the factory:

```
import click
from flask import Flask
from flask.cli import FlaskGroup

def create_app():
    app = Flask('wiki')
    # other setup
    return app

@click.group(cls=FlaskGroup, create_app=create_app)
def cli():
    """Management script for the Wiki application."""
```

Define the entry point in `pyproject.toml`:

```
[project.scripts]
wiki = "wiki:cli"
```

Install the application in the virtualenv in editable mode and the custom script is available. Note that you don’t need to set `--app`.

```
$ pip install -e .
$ wiki run
```

Errors in Custom Scripts

When using a custom script, if you introduce an error in your module-level code, the reloader will fail because it can no longer load the entry point. The `flask` command, being separate from your code, does not have this issue and is recommended in most cases.

PyCharm Integration
-------------------

PyCharm Professional provides a special Flask run configuration to run the development server. For the Community Edition, and for other commands besides `run`, you need to create a custom run configuration. These instructions should be similar for any other IDE you use.

In PyCharm, with your project open, click on *Run* from the menu bar and go to *Edit Configurations*.
You’ll see a screen similar to this: *(screenshot of the PyCharm run configuration)*

Once you create a configuration for the `flask run` command, you can copy and change it to call any other command.

Click the *+ (Add New Configuration)* button and select *Python*. Give the configuration a name such as “flask run”.

Click the *Script path* dropdown and change it to *Module name*, then input `flask`.

The *Parameters* field is set to the CLI command to execute along with any arguments. This example uses `--app hello run --debug`, which will run the development server in debug mode. `--app hello` should be the import or file with your Flask app.

If you installed your project as a package in your virtualenv, you may uncheck the *PYTHONPATH* options. This will more accurately match how you deploy later.

Click *OK* to save and close the configuration. Select the configuration in the main PyCharm window and click the play button next to it to run the server.

Now that you have a configuration for `flask run`, you can copy that configuration and change the *Parameters* argument to run a different CLI command.

Development Server
==================

Flask provides a `run` command to run the application with a development server. In debug mode, this server provides an interactive debugger and will reload when code is changed.

Warning

Do not use the development server when deploying to production. It is intended for use only during local development. It is not designed to be particularly efficient, stable, or secure. See [Deploying to Production](../deploying/index) for deployment options.

Command Line
------------

The `flask run` CLI command is the recommended way to run the development server. Use the `--app` option to point to your application, and the `--debug` option to enable debug mode.

```
$ flask --app hello run --debug
```

This enables debug mode, including the interactive debugger and reloader, and then starts the server on <http://localhost:5000/>. Use `flask run --help` to see the available options, and [Command Line Interface](../cli/index) for detailed instructions about configuring and using the CLI.

### Address already in use

If another program is already using port 5000, you’ll see an `OSError` when the server tries to start. It may have one of the following messages:

* `OSError: [Errno 98] Address already in use`
* `OSError: [WinError 10013] An attempt was made to access a socket in a way forbidden by its access permissions`

Either identify and stop the other program, or use `flask run --port 5001` to pick a different port.

You can use `netstat` or `lsof` to identify what process id is using a port, then use other operating system tools to stop that process. The following example shows that process id 6847 is using port 5000.

`netstat` (Linux):

```
$ netstat -nlp | grep 5000
tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN 6847/python
```

`lsof` (macOS / Linux):

```
$ lsof -P -i :5000
Python 6847 IPv4 TCP localhost:5000 (LISTEN)
```

`netstat` (Windows):

```
> netstat -ano | findstr 5000
TCP 127.0.0.1:5000 0.0.0.0:0 LISTENING 6847
```

macOS Monterey and later automatically starts a service that uses port 5000. To disable the service, go to System Preferences, Sharing, and disable “AirPlay Receiver”.

### Deferred Errors on Reload

When using the `flask run` command with the reloader, the server will continue to run even if you introduce syntax errors or other initialization errors into the code.
Accessing the site will show the interactive debugger for the error, rather than crashing the server. If a syntax error is already present when calling `flask run`, it will fail immediately and show the traceback rather than waiting until the site is accessed. This is intended to make errors more visible initially while still allowing the server to handle errors on reload.

In Code
-------

The development server can also be started from Python with the [`Flask.run()`](../api/index#flask.Flask.run "flask.Flask.run") method. This method takes arguments similar to the CLI options to control the server. The main difference from the CLI command is that the server will crash if there are errors when reloading. `debug=True` can be passed to enable debug mode.

Place the call in a main block, otherwise it will interfere when trying to import and run the application with a production server later.

```
if __name__ == "__main__":
    app.run(debug=True)
```

```
$ python hello.py
```

Working with the Shell
======================

Changelog New in version 0.3.

One of the reasons everybody loves Python is the interactive shell. It basically allows you to execute Python commands in real time and immediately get results back. Flask itself does not come with an interactive shell, because it does not require any specific setup upfront, just import your application and start playing around.

There are however some handy helpers to make playing around in the shell a more pleasant experience. The main issue with interactive console sessions is that you’re not triggering a request like a browser does, which means that [`g`](../api/index#flask.g "flask.g"), [`request`](../api/index#flask.request "flask.request") and others are not available. But the code you want to test might depend on them, so what can you do?

This is where some helper functions come in handy. Keep in mind however that these functions are not only there for interactive shell usage, but also for unit testing and other situations that require a faked request context. Generally it’s recommended that you read [The Request Context](../reqcontext/index) first.

Command Line Interface
----------------------

Starting with Flask 0.11 the recommended way to work with the shell is the `flask shell` command, which does a lot of this automatically for you. For instance the shell is automatically initialized with a loaded application context. For more information see [Command Line Interface](../cli/index).

Creating a Request Context
--------------------------

The easiest way to create a proper request context from the shell is by using the [`test_request_context`](../api/index#flask.Flask.test_request_context "flask.Flask.test_request_context") method, which creates a [`RequestContext`](../api/index#flask.ctx.RequestContext "flask.ctx.RequestContext") for us:

```
>>> ctx = app.test_request_context()
```

Normally you would use the `with` statement to make this request object active, but in the shell it’s easier to use the `push()` and [`pop()`](../api/index#flask.ctx.RequestContext.pop "flask.ctx.RequestContext.pop") methods by hand:

```
>>> ctx.push()
```

From that point onwards you can work with the request object until you call `pop`:

```
>>> ctx.pop()
```
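Outside the shell, the `with` form mentioned above is usually the better fit; a minimal sketch, where the URL and the use of `request` inside the block are illustrative:

```
from flask import request

# a sketch of the with-statement form of test_request_context
with app.test_request_context("/hello"):
    print(request.path)  # the faked request is active inside the block
```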
Firing Before/After Request
---------------------------

By just creating a request context, you still haven’t run the code that is normally run before a request. This might result in your database being unavailable if you are connecting to the database in a before-request callback, or the current user not being stored on the [`g`](../api/index#flask.g "flask.g") object, etc.

This however can easily be done yourself. Just call [`preprocess_request()`](../api/index#flask.Flask.preprocess_request "flask.Flask.preprocess_request"):

```
>>> ctx = app.test_request_context()
>>> ctx.push()
>>> app.preprocess_request()
```

Keep in mind that the [`preprocess_request()`](../api/index#flask.Flask.preprocess_request "flask.Flask.preprocess_request") function might return a response object, in that case just ignore it.

To shut down a request, you need to trick it a bit: pass a fresh response object to [`process_response()`](../api/index#flask.Flask.process_response "flask.Flask.process_response") so that the after-request functions it triggers have something to operate on, then pop the context:

```
>>> app.process_response(app.response_class())
<Response 0 bytes [200 OK]>
>>> ctx.pop()
```

The functions registered as [`teardown_request()`](../api/index#flask.Flask.teardown_request "flask.Flask.teardown_request") are automatically called when the context is popped. So this is the perfect place to automatically tear down resources that were needed by the request context (such as database connections).

Further Improving the Shell Experience
--------------------------------------

If you like the idea of experimenting in a shell, create yourself a module with stuff you want to star-import into your interactive session. There you could also define some more helper methods for common things such as initializing the database, dropping tables etc.

Just put them into a module (like `shelltools`) and import from there:

```
>>> from shelltools import *
```
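A hypothetical `shelltools.py` along those lines might look like this; `app` and `db` are assumed to come from your own project:

```
# shelltools.py -- a sketch of shell helpers; adjust imports to your project
from yourapplication import app, db

def init_db():
    """Create all tables inside an application context."""
    with app.app_context():
        db.create_all()

def drop_db():
    """Drop all tables inside an application context."""
    with app.app_context():
        db.drop_all()
```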
Large Applications as Packages
==============================

Imagine a simple flask application structure that looks like this:

```
/yourapplication
    yourapplication.py
    /static
        style.css
    /templates
        layout.html
        index.html
        login.html
        ...
```

While this is fine for small applications, for larger applications it’s a good idea to use a package instead of a module. The [Tutorial](https://flask.palletsprojects.com/en/2.3.x/tutorial/) is structured to use the package pattern, see the [example code](https://github.com/pallets/flask/tree/main/examples/tutorial).

Simple Packages
---------------

To convert that into a larger one, just create a new folder `yourapplication` inside the existing one and move everything below it. Then rename `yourapplication.py` to `__init__.py`. (Make sure to delete all `.pyc` files first, otherwise things would most likely break.)

You should then end up with something like this:

```
/yourapplication
    /yourapplication
        __init__.py
        /static
            style.css
        /templates
            layout.html
            index.html
            login.html
            ...
```

But how do you run your application now? The naive `python yourapplication/__init__.py` will not work. Let’s just say that Python does not want modules in packages to be the startup file. But that is not a big problem, just add a new file called `pyproject.toml` next to the inner `yourapplication` folder with the following contents:

```
[project]
name = "yourapplication"
dependencies = [
    "flask",
]

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
```

Install your application so it is importable:

```
$ pip install -e .
```

To use the `flask` command and run your application you need to set the `--app` option that tells Flask where to find the application instance:

```
$ flask --app yourapplication run
```

What did we gain from this? Now we can restructure the application a bit into multiple modules. The only thing you have to remember is the following quick checklist:

1. the `Flask` application object creation has to be in the `__init__.py` file. That way each module can import it safely and the `__name__` variable will resolve to the correct package.
2. all the view functions (the ones with a [`route()`](../../api/index#flask.Flask.route "flask.Flask.route") decorator on top) have to be imported in the `__init__.py` file. Not the object itself, but the module it is in. Import the view module **after the application object is created**.

Here’s an example `__init__.py`:

```
from flask import Flask
app = Flask(__name__)

import yourapplication.views
```

And this is what `views.py` would look like:

```
from yourapplication import app

@app.route('/')
def index():
    return 'Hello World!'
```

You should then end up with something like this:

```
/yourapplication
    pyproject.toml
    /yourapplication
        __init__.py
        views.py
        /static
            style.css
        /templates
            layout.html
            index.html
            login.html
            ...
```

Circular Imports

Every Python programmer hates them, and yet we just added some: circular imports (that’s when two modules depend on each other; in this case `views.py` depends on `__init__.py`). Be advised that this is a bad idea in general, but here it is actually fine. The reason for this is that we are not actually using the views in `__init__.py` and just ensuring the module is imported, and we are doing that at the bottom of the file.

Working with Blueprints
-----------------------

If you have larger applications it’s recommended to divide them into smaller groups where each group is implemented with the help of a blueprint. For a gentle introduction into this topic refer to the [Modular Applications with Blueprints](../../blueprints/index) chapter of the documentation.

API
===

This part of the documentation covers all the interfaces of Flask. For parts where Flask depends on external libraries, we document the most important right here and provide links to the canonical documentation.

Application Object
------------------

`class flask.Flask(import_name, static_url_path=None, static_folder='static', static_host=None, host_matching=False, subdomain_matching=False, template_folder='templates', instance_path=None, instance_relative_config=False, root_path=None)`

The flask object implements a WSGI application and acts as the central object. It is passed the name of the module or package of the application. Once it is created it will act as a central registry for the view functions, the URL rules, template configuration and much more.

The name of the package is used to resolve resources from inside the package or the folder the module is contained in, depending on if the package parameter resolves to an actual python package (a folder with an `__init__.py` file inside) or a standard module (just a `.py` file).

For more information about resource loading, see [`open_resource()`](#flask.Flask.open_resource "flask.Flask.open_resource").
Usually you create a [`Flask`](#flask.Flask "flask.Flask") instance in your main module or in the `__init__.py` file of your package like this: ``` from flask import Flask app = Flask(__name__) ``` About the First Parameter The idea of the first parameter is to give Flask an idea of what belongs to your application. This name is used to find resources on the filesystem, can be used by extensions to improve debugging information and a lot more. So it’s important what you provide there. If you are using a single module, `__name__` is always the correct value. If you however are using a package, it’s usually recommended to hardcode the name of your package there. For example if your application is defined in `yourapplication/app.py` you should create it with one of the two versions below: ``` app = Flask('yourapplication') app = Flask(__name__.split('.')[0]) ``` Why is that? The application will work even with `__name__`, thanks to how resources are looked up. However it will make debugging more painful. Certain extensions can make assumptions based on the import name of your application. For example the Flask-SQLAlchemy extension will look for the code in your application that triggered an SQL query in debug mode. If the import name is not properly set up, that debugging information is lost. (For example it would only pick up SQL queries in `yourapplication.app` and not `yourapplication.views.frontend`) Changelog New in version 1.0: The `host_matching` and `static_host` parameters were added. New in version 1.0: The `subdomain_matching` parameter was added. Subdomain matching needs to be enabled manually now. Setting [`SERVER_NAME`](../config/index#SERVER_NAME "SERVER_NAME") does not implicitly enable it. New in version 0.11: The `root_path` parameter was added. New in version 0.8: The `instance_path` and `instance_relative_config` parameters were added. New in version 0.7: The `static_url_path`, `static_folder`, and `template_folder` parameters were added. Parameters: * **import_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – the name of the application package * **static_url_path** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – can be used to specify a different path for the static files on the web. Defaults to the name of the `static_folder` folder. * **static_folder** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [os.PathLike](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.11)") *|* *None*) – The folder with static files that is served at `static_url_path`. Relative to the application `root_path` or an absolute path. Defaults to `'static'`. * **static_host** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the host to use when adding the static route. Defaults to None. Required when using `host_matching=True` with a `static_folder` configured. * **host_matching** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – set `url_map.host_matching` attribute. Defaults to False. * **subdomain_matching** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – consider the subdomain relative to [`SERVER_NAME`](../config/index#SERVER_NAME "SERVER_NAME") when matching routes. Defaults to False. 
* **template_folder** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [os.PathLike](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.11)") *|* *None*) – the folder that contains the templates that should be used by the application. Defaults to `'templates'` folder in the root path of the application. * **instance_path** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – An alternative instance path for the application. By default the folder `'instance'` next to the package or module is assumed to be the instance path. * **instance_relative_config** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – if set to `True` relative filenames for loading the config are assumed to be relative to the instance path instead of the application root. * **root_path** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – The path to the root of the application files. This should only be set manually when it can’t be detected automatically, such as for namespace packages. `aborter` An instance of [`aborter_class`](#flask.Flask.aborter_class "flask.Flask.aborter_class") created by [`make_aborter()`](#flask.Flask.make_aborter "flask.Flask.make_aborter"). This is called by [`flask.abort()`](#flask.abort "flask.abort") to raise HTTP errors, and can be called directly as well. Changelog New in version 2.2: Moved from `flask.abort`, which calls this object. `aborter_class` alias of [`Aborter`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.Aborter "(in Werkzeug v2.3.x)") `add_template_filter(f, name=None)` Register a custom template filter. Works exactly like the [`template_filter()`](#flask.Flask.template_filter "flask.Flask.template_filter") decorator. Parameters: * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the optional name of the filter, otherwise the function name will be used. * **f** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")*[**[**...**]**,* [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")*]*) – Return type: None `add_template_global(f, name=None)` Register a custom template global function. Works exactly like the [`template_global()`](#flask.Flask.template_global "flask.Flask.template_global") decorator. Changelog New in version 0.10. Parameters: * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the optional name of the global function, otherwise the function name will be used. * **f** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")*[**[**...**]**,* [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")*]*) – Return type: None `add_template_test(f, name=None)` Register a custom template test. Works exactly like the [`template_test()`](#flask.Flask.template_test "flask.Flask.template_test") decorator. Changelog New in version 0.10. Parameters: * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the optional name of the test, otherwise the function name will be used. 
* **f** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")*[**[**...**]**,* [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")*]*) – Return type: None `add_url_rule(rule, endpoint=None, view_func=None, provide_automatic_options=None, **options)` Register a rule for routing incoming requests and building URLs. The [`route()`](#flask.Flask.route "flask.Flask.route") decorator is a shortcut to call this with the `view_func` argument. These are equivalent: ``` @app.route("/") def index(): ... ``` ``` def index(): ... app.add_url_rule("/", view_func=index) ``` See [URL Route Registrations](#url-route-registrations). The endpoint name for the route defaults to the name of the view function if the `endpoint` parameter isn’t passed. An error will be raised if a function has already been registered for the endpoint. The `methods` parameter defaults to `["GET"]`. `HEAD` is always added automatically, and `OPTIONS` is added automatically by default. `view_func` does not necessarily need to be passed, but if the rule should participate in routing an endpoint name must be associated with a view function at some point with the [`endpoint()`](#flask.Flask.endpoint "flask.Flask.endpoint") decorator. ``` app.add_url_rule("/", endpoint="index") @app.endpoint("index") def index(): ... ``` If `view_func` has a `required_methods` attribute, those methods are added to the passed and automatic methods. If it has a `provide_automatic_methods` attribute, it is used as the default if the parameter is not passed. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The URL rule string. * **endpoint** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – The endpoint name to associate with the rule and view function. Used when routing and building URLs. Defaults to `view_func.__name__`. * **view_func** (*ft.RouteCallable* *|* *None*) – The view function to associate with the endpoint name. * **provide_automatic_options** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") *|* *None*) – Add the `OPTIONS` method and respond to `OPTIONS` requests automatically. * **options** (*t.Any*) – Extra options passed to the [`Rule`](https://werkzeug.palletsprojects.com/en/2.3.x/routing/#werkzeug.routing.Rule "(in Werkzeug v2.3.x)") object. Return type: None `after_request(f)` Register a function to run after each request to this object. The function is called with the response object, and must return a response object. This allows the functions to modify or replace the response before it is sent. If a function raises an exception, any remaining `after_request` functions will not be called. Therefore, this should not be used for actions that must execute, such as to close resources. Use [`teardown_request()`](#flask.Flask.teardown_request "flask.Flask.teardown_request") for that. This is available on both app and blueprint objects. When used on an app, this executes after every request. When used on a blueprint, this executes after every request that the blueprint handles. To register with a blueprint and execute after every request, use [`Blueprint.after_app_request()`](#flask.Blueprint.after_app_request "flask.Blueprint.after_app_request"). 
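For illustration, a minimal sketch of such a hook; the header name is a made-up example:

```
@app.after_request
def add_header(response):
    # hypothetical header, added to every response this app sends
    response.headers["X-Example"] = "1"
    return response  # after_request functions must return a response
```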
Parameters: **f** (*T_after_request*) – Return type: *T_after_request* `after_request_funcs: dict[ft.AppOrBlueprintKey, list[ft.AfterRequestCallable]]` A data structure of functions to call at the end of each request, in the format `{scope: [functions]}`. The `scope` key is the name of a blueprint the functions are active for, or `None` for all requests. To register a function, use the [`after_request()`](#flask.Flask.after_request "flask.Flask.after_request") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `app_context()` Create an [`AppContext`](#flask.ctx.AppContext "flask.ctx.AppContext"). Use as a `with` block to push the context, which will make [`current_app`](#flask.current_app "flask.current_app") point at this application. An application context is automatically pushed by `RequestContext.push()` when handling a request, and when running a CLI command. Use this to manually create a context outside of these situations. ``` with app.app_context(): init_db() ``` See [The Application Context](../appcontext/index). Changelog New in version 0.9. Return type: [AppContext](#flask.ctx.AppContext "flask.ctx.AppContext") `app_ctx_globals_class` alias of [`_AppCtxGlobals`](#flask.ctx._AppCtxGlobals "flask.ctx._AppCtxGlobals") `async_to_sync(func)` Return a sync function that will run the coroutine function. ``` result = app.async_to_sync(func)(*args, **kwargs) ``` Override this method to change how the app converts async code to be synchronously callable. Changelog New in version 2.0. Parameters: **func** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")*[**[**...**]**,* [Coroutine](https://docs.python.org/3/library/typing.html#typing.Coroutine "(in Python v3.11)")*]*) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[…], [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")] `auto_find_instance_path()` Tries to locate the instance path if it was not provided to the constructor of the application class. It will basically calculate the path to a folder named `instance` next to your main file or the package. Changelog New in version 0.8. Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") `before_request(f)` Register a function to run before each request. For example, this can be used to open a database connection, or to load the logged in user from the session. ``` @app.before_request def load_user(): if "user_id" in session: g.user = db.session.get(session["user_id"]) ``` The function will be called without any arguments. If it returns a non-`None` value, the value is handled as if it was the return value from the view, and further request handling is stopped. This is available on both app and blueprint objects. When used on an app, this executes before every request. When used on a blueprint, this executes before every request that the blueprint handles. To register with a blueprint and execute before every request, use [`Blueprint.before_app_request()`](#flask.Blueprint.before_app_request "flask.Blueprint.before_app_request"). Parameters: **f** (*T_before_request*) – Return type: *T_before_request* `before_request_funcs: dict[ft.AppOrBlueprintKey, list[ft.BeforeRequestCallable]]` A data structure of functions to call at the beginning of each request, in the format `{scope: [functions]}`. 
The `scope` key is the name of a blueprint the functions are active for, or `None` for all requests. To register a function, use the [`before_request()`](#flask.Flask.before_request "flask.Flask.before_request") decorator. This data structure is internal. It should not be modified directly and its format may change at any time.

`blueprints: dict[str, Blueprint]`

Maps registered blueprint names to blueprint objects. The dict retains the order the blueprints were registered in. Blueprints can be registered multiple times; this dict does not track how often they were attached.

Changelog New in version 0.7.

`cli`

The Click command group for registering CLI commands for this object. The commands are available from the `flask` command once the application has been discovered and blueprints have been registered.

`config`

The configuration dictionary as [`Config`](#flask.Config "flask.Config"). This behaves exactly like a regular dictionary but supports additional methods to load a config from files.

`config_class`

alias of [`Config`](#flask.Config "flask.config.Config")

`context_processor(f)`

Registers a template context processor function. These functions run before rendering a template. The keys of the returned dict are added as variables available in the template.

This is available on both app and blueprint objects. When used on an app, this is called for every rendered template. When used on a blueprint, this is called for templates rendered from the blueprint’s views. To register with a blueprint and affect every template, use [`Blueprint.app_context_processor()`](#flask.Blueprint.app_context_processor "flask.Blueprint.app_context_processor").

Parameters: **f** (*T_template_context_processor*) –

Return type: *T_template_context_processor*

`create_global_jinja_loader()`

Creates the loader for the Jinja2 environment. Can be used to override just the loader, keeping the rest unchanged. It’s discouraged to override this function; instead, override the [`jinja_loader()`](#flask.Flask.jinja_loader "flask.Flask.jinja_loader") function. The global loader dispatches between the loaders of the application and the individual blueprints.

Changelog New in version 0.7.

Return type: *DispatchingJinjaLoader*

`create_jinja_environment()`

Create the Jinja environment based on [`jinja_options`](#flask.Flask.jinja_options "flask.Flask.jinja_options") and the various Jinja-related methods of the app. Changing [`jinja_options`](#flask.Flask.jinja_options "flask.Flask.jinja_options") after this will have no effect. Also adds Flask-related globals and filters to the environment.

Changelog Changed in version 0.11: `Environment.auto_reload` set in accordance with `TEMPLATES_AUTO_RELOAD` configuration option. New in version 0.5.

Return type: *Environment*

`create_url_adapter(request)`

Creates a URL adapter for the given request. The URL adapter is created at a point where the request context is not yet set up, so the request is passed explicitly.

Changelog Changed in version 1.0: [`SERVER_NAME`](../config/index#SERVER_NAME "SERVER_NAME") no longer implicitly enables subdomain matching. Use `subdomain_matching` instead. Changed in version 0.9: This can now also be called without a request object when the URL adapter is created for the application context. New in version 0.6.
Parameters: **request** ([Request](#flask.Request "flask.wrappers.Request") *|* *None*) – Return type: [MapAdapter](https://werkzeug.palletsprojects.com/en/2.3.x/routing/#werkzeug.routing.MapAdapter "(in Werkzeug v2.3.x)") | None `property debug: bool` Whether debug mode is enabled. When using `flask run` to start the development server, an interactive debugger will be shown for unhandled exceptions, and the server will be reloaded when code changes. This maps to the [`DEBUG`](../config/index#DEBUG "DEBUG") config key. It may not behave as expected if set late. **Do not enable debug mode when deploying in production.** Default: `False` `default_config = {'APPLICATION_ROOT': '/', 'DEBUG': None, 'EXPLAIN_TEMPLATE_LOADING': False, 'MAX_CONTENT_LENGTH': None, 'MAX_COOKIE_SIZE': 4093, 'PERMANENT_SESSION_LIFETIME': datetime.timedelta(days=31), 'PREFERRED_URL_SCHEME': 'http', 'PROPAGATE_EXCEPTIONS': None, 'SECRET_KEY': None, 'SEND_FILE_MAX_AGE_DEFAULT': None, 'SERVER_NAME': None, 'SESSION_COOKIE_DOMAIN': None, 'SESSION_COOKIE_HTTPONLY': True, 'SESSION_COOKIE_NAME': 'session', 'SESSION_COOKIE_PATH': None, 'SESSION_COOKIE_SAMESITE': None, 'SESSION_COOKIE_SECURE': False, 'SESSION_REFRESH_EACH_REQUEST': True, 'TEMPLATES_AUTO_RELOAD': None, 'TESTING': False, 'TRAP_BAD_REQUEST_ERRORS': None, 'TRAP_HTTP_EXCEPTIONS': False, 'USE_X_SENDFILE': False}` Default configuration parameters. `delete(rule, **options)` Shortcut for [`route()`](#flask.Flask.route "flask.Flask.route") with `methods=["DELETE"]`. Changelog New in version 2.0. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_route*], *T_route*] `dispatch_request()` Does the request dispatching. Matches the URL and returns the return value of the view or error handler. This does not have to be a response object. In order to convert the return value to a proper response object, call [`make_response()`](#flask.make_response "flask.make_response"). Changelog Changed in version 0.7: This no longer does the exception handling, this code was moved to the new [`full_dispatch_request()`](#flask.Flask.full_dispatch_request "flask.Flask.full_dispatch_request"). Return type: ft.ResponseReturnValue `do_teardown_appcontext(exc=<object object>)` Called right before the application context is popped. When handling a request, the application context is popped after the request context. See [`do_teardown_request()`](#flask.Flask.do_teardown_request "flask.Flask.do_teardown_request"). This calls all functions decorated with [`teardown_appcontext()`](#flask.Flask.teardown_appcontext "flask.Flask.teardown_appcontext"). Then the [`appcontext_tearing_down`](#flask.appcontext_tearing_down "flask.appcontext_tearing_down") signal is sent. This is called by [`AppContext.pop()`](#flask.ctx.AppContext.pop "flask.ctx.AppContext.pop"). Changelog New in version 0.9. Parameters: **exc** ([BaseException](https://docs.python.org/3/library/exceptions.html#BaseException "(in Python v3.11)") *|* *None*) – Return type: None `do_teardown_request(exc=<object object>)` Called after the request is dispatched and the response is returned, right before the request context is popped. 
This calls all functions decorated with [`teardown_request()`](#flask.Flask.teardown_request "flask.Flask.teardown_request"), and [`Blueprint.teardown_request()`](#flask.Blueprint.teardown_request "flask.Blueprint.teardown_request") if a blueprint handled the request. Finally, the [`request_tearing_down`](#flask.request_tearing_down "flask.request_tearing_down") signal is sent. This is called by [`RequestContext.pop()`](#flask.ctx.RequestContext.pop "flask.ctx.RequestContext.pop"), which may be delayed during testing to maintain access to resources. Parameters: **exc** ([BaseException](https://docs.python.org/3/library/exceptions.html#BaseException "(in Python v3.11)") *|* *None*) – An unhandled exception raised while dispatching the request. Detected from the current exception information if not passed. Passed to each teardown function. Return type: None Changelog Changed in version 0.9: Added the `exc` argument. `endpoint(endpoint)` Decorate a view function to register it for the given endpoint. Used if a rule is added without a `view_func` with [`add_url_rule()`](#flask.Flask.add_url_rule "flask.Flask.add_url_rule"). ``` app.add_url_rule("/ex", endpoint="example") @app.endpoint("example") def example(): ... ``` Parameters: **endpoint** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The endpoint name to associate with the view function. Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*F*], *F*] `ensure_sync(func)` Ensure that the function is synchronous for WSGI workers. Plain `def` functions are returned as-is. `async def` functions are wrapped to run and wait for the response. Override this method to change how the app runs async views. Changelog New in version 2.0. Parameters: **func** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)") `error_handler_spec: dict[ft.AppOrBlueprintKey, dict[int | None, dict[type[Exception], ft.ErrorHandlerCallable]]]` A data structure of registered error handlers, in the format `{scope: {code: {class: handler}}}`. The `scope` key is the name of a blueprint the handlers are active for, or `None` for all requests. The `code` key is the HTTP status code for `HTTPException`, or `None` for other exceptions. The innermost dictionary maps exception classes to handler functions. To register an error handler, use the [`errorhandler()`](#flask.Flask.errorhandler "flask.Flask.errorhandler") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `errorhandler(code_or_exception)` Register a function to handle errors by code or exception class. A decorator that is used to register a function given an error code. Example: ``` @app.errorhandler(404) def page_not_found(error): return 'This page does not exist', 404 ``` You can also register handlers for arbitrary exceptions: ``` @app.errorhandler(DatabaseError) def special_exception_handler(error): return 'Database connection failed', 500 ``` This is available on both app and blueprint objects. When used on an app, this can handle errors from every request. When used on a blueprint, this can handle errors from requests that the blueprint handles. To register with a blueprint and affect every request, use [`Blueprint.app_errorhandler()`](#flask.Blueprint.app_errorhandler "flask.Blueprint.app_errorhandler"). 
Changelog New in version 0.7: Use [`register_error_handler()`](#flask.Flask.register_error_handler "flask.Flask.register_error_handler") instead of modifying [`error_handler_spec`](#flask.Flask.error_handler_spec "flask.Flask.error_handler_spec") directly, for application wide error handlers. New in version 0.7: One can now additionally also register custom exception types that do not necessarily have to be a subclass of the [`HTTPException`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.HTTPException "(in Werkzeug v2.3.x)") class. Parameters: **code_or_exception** ([type](https://docs.python.org/3/library/functions.html#type "(in Python v3.11)")*[*[Exception](https://docs.python.org/3/library/exceptions.html#Exception "(in Python v3.11)")*]* *|* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)")) – the code as integer for the handler, or an arbitrary exception Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_error_handler*], *T_error_handler*] `extensions: dict` a place where extensions can store application specific state. For example this is where an extension could store database engines and similar things. The key must match the name of the extension module. For example in case of a “Flask-Foo” extension in `flask_foo`, the key would be `'foo'`. Changelog New in version 0.7. `full_dispatch_request()` Dispatches the request and on top of that performs request pre and postprocessing as well as HTTP exception catching and error handling. Changelog New in version 0.7. Return type: [Response](#flask.Response "flask.wrappers.Response") `get(rule, **options)` Shortcut for [`route()`](#flask.Flask.route "flask.Flask.route") with `methods=["GET"]`. Changelog New in version 2.0. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_route*], *T_route*] `get_send_file_max_age(filename)` Used by [`send_file()`](#flask.send_file "flask.send_file") to determine the `max_age` cache value for a given file path if it wasn’t passed. By default, this returns [`SEND_FILE_MAX_AGE_DEFAULT`](../config/index#SEND_FILE_MAX_AGE_DEFAULT "SEND_FILE_MAX_AGE_DEFAULT") from the configuration of [`current_app`](#flask.current_app "flask.current_app"). This defaults to `None`, which tells the browser to use conditional requests instead of a timed cache, which is usually preferable. Changelog Changed in version 2.0: The default configuration is `None` instead of 12 hours. New in version 0.9. Parameters: **filename** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – Return type: [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)") | None `property got_first_request: bool` This attribute is set to `True` if the application started handling the first request. Deprecated since version 2.3: Will be removed in Flask 2.4. Changelog New in version 0.8. `handle_exception(e)` Handle an exception that did not have an error handler associated with it, or that was raised from an error handler. This always causes a 500 `InternalServerError`. Always sends the [`got_request_exception`](#flask.got_request_exception "flask.got_request_exception") signal. 
If [`PROPAGATE_EXCEPTIONS`](../config/index#PROPAGATE_EXCEPTIONS "PROPAGATE_EXCEPTIONS") is `True`, such as in debug mode, the error will be re-raised so that the debugger can display it. Otherwise, the original exception is logged, and an [`InternalServerError`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.InternalServerError "(in Werkzeug v2.3.x)") is returned. If an error handler is registered for `InternalServerError` or `500`, it will be used. For consistency, the handler will always receive the `InternalServerError`. The original unhandled exception is available as `e.original_exception`. Changelog Changed in version 1.1.0: Always passes the `InternalServerError` instance to the handler, setting `original_exception` to the unhandled error. Changed in version 1.1.0: `after_request` functions and other finalization is done even for the default 500 response when there is no handler. New in version 0.3. Parameters: **e** ([Exception](https://docs.python.org/3/library/exceptions.html#Exception "(in Python v3.11)")) – Return type: [Response](#flask.Response "flask.wrappers.Response") `handle_http_exception(e)` Handles an HTTP exception. By default this will invoke the registered error handlers and fall back to returning the exception as response. Changelog Changed in version 1.0.3: `RoutingException`, used internally for actions such as slash redirects during routing, is not passed to error handlers. Changed in version 1.0: Exceptions are looked up by code *and* by MRO, so `HTTPException` subclasses can be handled with a catch-all handler for the base `HTTPException`. New in version 0.3. Parameters: **e** (*HTTPException*) – Return type: HTTPException | ft.ResponseReturnValue `handle_url_build_error(error, endpoint, values)` Called by [`url_for()`](#flask.Flask.url_for "flask.Flask.url_for") if a `BuildError` was raised. If this returns a value, it will be returned by `url_for`, otherwise the error will be re-raised. Each function in [`url_build_error_handlers`](#flask.Flask.url_build_error_handlers "flask.Flask.url_build_error_handlers") is called with `error`, `endpoint` and `values`. If a function returns `None` or raises a `BuildError`, it is skipped. Otherwise, its return value is returned by `url_for`. Parameters: * **error** (*BuildError*) – The active `BuildError` being handled. * **endpoint** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The endpoint being built. * **values** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")*,* [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")*]*) – The keyword arguments passed to `url_for`. Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") `handle_user_exception(e)` This method is called whenever an exception occurs that should be handled. A special case is `HTTPException` which is forwarded to the [`handle_http_exception()`](#flask.Flask.handle_http_exception "flask.Flask.handle_http_exception") method. This function will either return a response value or reraise the exception with the same traceback. Changelog Changed in version 1.0: Key errors raised from request data like `form` show the bad key in debug mode rather than a generic bad request message. New in version 0.7. 
Parameters: **e** ([Exception](https://docs.python.org/3/library/exceptions.html#Exception "(in Python v3.11)")) –

Return type: HTTPException | ft.ResponseReturnValue

`property has_static_folder: bool`

`True` if [`static_folder`](#flask.Flask.static_folder "flask.Flask.static_folder") is set.

Changelog New in version 0.5.

`import_name`

The name of the package or module that this object belongs to. Do not change this once it is set by the constructor.

`inject_url_defaults(endpoint, values)`

Injects the URL defaults for the given endpoint directly into the values dictionary passed. This is used internally and automatically called on URL building.

Changelog New in version 0.7.

Parameters:

* **endpoint** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) –
* **values** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)")) –

Return type: None

`instance_path`

Holds the path to the instance folder.

Changelog New in version 0.8.

`iter_blueprints()`

Iterates over all blueprints by the order they were registered.

Changelog New in version 0.11.

Return type: t.ValuesView[[Blueprint](#flask.Blueprint "flask.Blueprint")]

`property jinja_env: Environment`

The Jinja environment used to load templates. The environment is created the first time this property is accessed. Changing [`jinja_options`](#flask.Flask.jinja_options "flask.Flask.jinja_options") after that will have no effect.

`jinja_environment`

alias of `Environment`

`property jinja_loader: FileSystemLoader | None`

The Jinja loader for this object’s templates. By default this is a [`jinja2.loaders.FileSystemLoader`](https://jinja.palletsprojects.com/en/3.1.x/api/#jinja2.FileSystemLoader "(in Jinja v3.1.x)") pointing to [`template_folder`](#flask.Flask.template_folder "flask.Flask.template_folder") if it is set.

Changelog New in version 0.5.

`jinja_options: dict = {}`

Options that are passed to the Jinja environment in [`create_jinja_environment()`](#flask.Flask.create_jinja_environment "flask.Flask.create_jinja_environment"). Changing these options after the environment is created (accessing [`jinja_env`](#flask.Flask.jinja_env "flask.Flask.jinja_env")) will have no effect.

Changelog Changed in version 1.1.0: This is a `dict` instead of an `ImmutableDict` to allow easier configuration.

`json: JSONProvider`

Provides access to JSON methods. Functions in `flask.json` will call methods on this provider when the application context is active. Used for handling JSON requests and responses.

An instance of [`json_provider_class`](#flask.Flask.json_provider_class "flask.Flask.json_provider_class"). Can be customized by changing that attribute on a subclass, or by assigning to this attribute afterwards.

The default, [`DefaultJSONProvider`](#flask.json.provider.DefaultJSONProvider "flask.json.provider.DefaultJSONProvider"), uses Python’s built-in [`json`](https://docs.python.org/3/library/json.html#module-json "(in Python v3.11)") library. A different provider can use a different JSON library.

Changelog New in version 2.2.

`json_provider_class`

alias of [`DefaultJSONProvider`](#flask.json.provider.DefaultJSONProvider "flask.json.provider.DefaultJSONProvider")

`log_exception(exc_info)`

Logs an exception. This is called by [`handle_exception()`](#flask.Flask.handle_exception "flask.Flask.handle_exception") if debugging is disabled and right before the handler is called. The default implementation logs the exception as error on the [`logger`](#flask.Flask.logger "flask.Flask.logger").
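As a sketch of this customization point, a subclass could enrich or reroute these log records; the extra request detail logged here is an illustrative assumption, not the default behavior:

```
from flask import Flask, request

class MyFlask(Flask):
    def log_exception(self, exc_info):
        # hypothetical override: include the request path in the log record
        self.logger.error("Exception on %s", request.path, exc_info=exc_info)
```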
Changelog New in version 0.8. Parameters: **exc_info** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.11)")*[*[type](https://docs.python.org/3/library/functions.html#type "(in Python v3.11)")*,* [BaseException](https://docs.python.org/3/library/exceptions.html#BaseException "(in Python v3.11)")*,* *traceback**]* *|* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.11)")*[**None**,* *None**,* *None**]*) – Return type: None `property logger: Logger` A standard Python [`Logger`](https://docs.python.org/3/library/logging.html#logging.Logger "(in Python v3.11)") for the app, with the same name as [`name`](#flask.Flask.name "flask.Flask.name"). In debug mode, the logger’s `level` will be set to `DEBUG`. If there are no handlers configured, a default handler will be added. See [Logging](../logging/index) for more information. Changelog Changed in version 1.1.0: The logger takes the same name as [`name`](#flask.Flask.name "flask.Flask.name") rather than hard-coding `"flask.app"`. Changed in version 1.0.0: Behavior was simplified. The logger is always named `"flask.app"`. The level is only set during configuration, it doesn’t check `app.debug` each time. Only one format is used, not different ones depending on `app.debug`. No handlers are removed, and a handler is only added if no handlers are already configured. New in version 0.3. `make_aborter()` Create the object to assign to [`aborter`](#flask.Flask.aborter "flask.Flask.aborter"). That object is called by [`flask.abort()`](#flask.abort "flask.abort") to raise HTTP errors, and can be called directly as well. By default, this creates an instance of [`aborter_class`](#flask.Flask.aborter_class "flask.Flask.aborter_class"), which defaults to [`werkzeug.exceptions.Aborter`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.Aborter "(in Werkzeug v2.3.x)"). Changelog New in version 2.2. Return type: [Aborter](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.Aborter "(in Werkzeug v2.3.x)") `make_config(instance_relative=False)` Used by the Flask constructor to create the config attribute. The `instance_relative` parameter is passed in from the constructor of Flask (there named `instance_relative_config`) and indicates if the config should be relative to the instance path or the root path of the application. Changelog New in version 0.8. Parameters: **instance_relative** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Return type: [Config](#flask.Config "flask.config.Config") `make_default_options_response()` This method is called to create the default `OPTIONS` response. This can be changed through subclassing to change the default behavior of `OPTIONS` responses. Changelog New in version 0.7. Return type: [Response](#flask.Response "flask.wrappers.Response") `make_response(rv)` Convert the return value from a view function to an instance of [`response_class`](#flask.Flask.response_class "flask.Flask.response_class"). Parameters: **rv** (*ft.ResponseReturnValue*) – the return value from the view function. The view function must return a response. Returning `None`, or the view ending without returning, is not allowed. The following types are allowed for `rv`: `str` A response object is created with the string encoded to UTF-8 as the body. `bytes` A response object is created with the bytes as the body. `dict` A dictionary that will be jsonify’d before being returned.
`list` A list that will be jsonify’d before being returned. `generator or iterator` A generator that returns `str` or `bytes` to be streamed as the response. `tuple` Either `(body, status, headers)`, `(body, status)`, or `(body, headers)`, where `body` is any of the other types allowed here, `status` is a string or an integer, and `headers` is a dictionary or a list of `(key, value)` tuples. If `body` is a [`response_class`](#flask.Flask.response_class "flask.Flask.response_class") instance, `status` overwrites the existing value and `headers` are extended. [`response_class`](#flask.Flask.response_class "flask.Flask.response_class") The object is returned unchanged. `other Response class` The object is coerced to [`response_class`](#flask.Flask.response_class "flask.Flask.response_class"). [`callable()`](https://docs.python.org/3/library/functions.html#callable "(in Python v3.11)") The function is called as a WSGI application. The result is used to create a response object. Return type: [Response](#flask.Response "flask.Response") Changelog Changed in version 2.2: A generator will be converted to a streaming response. A list will be converted to a JSON response. Changed in version 1.1: A dict will be converted to a JSON response. Changed in version 0.9: Previously a tuple was interpreted as the arguments for the response object. `make_shell_context()` Returns the shell context for an interactive shell for this application. This runs all the registered shell context processors. Changelog New in version 0.11. Return type: [dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)") `property name: str` The name of the application. This is usually the import name with the difference that it’s guessed from the run file if the import name is `__main__`. This name is used as a display name when Flask needs the name of the application. It can be set and overridden to change the value. Changelog New in version 0.8. `open_instance_resource(resource, mode='rb')` Opens a resource from the application’s instance folder ([`instance_path`](#flask.Flask.instance_path "flask.Flask.instance_path")). Otherwise works like [`open_resource()`](#flask.Flask.open_resource "flask.Flask.open_resource"). Instance resources can also be opened for writing. Parameters: * **resource** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – the name of the resource. To access resources within subfolders use forward slashes as separator. * **mode** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – resource file opening mode, default is ‘rb’. Return type: [IO](https://docs.python.org/3/library/typing.html#typing.IO "(in Python v3.11)") `open_resource(resource, mode='rb')` Open a resource file relative to [`root_path`](#flask.Flask.root_path "flask.Flask.root_path") for reading. For example, if the file `schema.sql` is next to the file `app.py` where the `Flask` app is defined, it can be opened with:

```
with app.open_resource("schema.sql") as f:
    conn.executescript(f.read())
```

Parameters: * **resource** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Path to the resource relative to [`root_path`](#flask.Flask.root_path "flask.Flask.root_path"). * **mode** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Open the file in this mode. Only reading is supported, valid values are “r” (or “rt”) and “rb”.
Return type: [IO](https://docs.python.org/3/library/typing.html#typing.IO "(in Python v3.11)") `patch(rule, **options)` Shortcut for [`route()`](#flask.Flask.route "flask.Flask.route") with `methods=["PATCH"]`. Changelog New in version 2.0. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_route*], *T_route*] `permanent_session_lifetime` A [`timedelta`](https://docs.python.org/3/library/datetime.html#datetime.timedelta "(in Python v3.11)") which is used to set the expiration date of a permanent session. The default is 31 days which makes a permanent session survive for roughly one month. This attribute can also be configured from the config with the `PERMANENT_SESSION_LIFETIME` configuration key. Defaults to `timedelta(days=31)` `post(rule, **options)` Shortcut for [`route()`](#flask.Flask.route "flask.Flask.route") with `methods=["POST"]`. Changelog New in version 2.0. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_route*], *T_route*] `preprocess_request()` Called before the request is dispatched. Calls [`url_value_preprocessors`](#flask.Flask.url_value_preprocessors "flask.Flask.url_value_preprocessors") registered with the app and the current blueprint (if any). Then calls [`before_request_funcs`](#flask.Flask.before_request_funcs "flask.Flask.before_request_funcs") registered with the app and the blueprint. If any [`before_request()`](#flask.Flask.before_request "flask.Flask.before_request") handler returns a non-None value, the value is handled as if it was the return value from the view, and further request handling is stopped. Return type: ft.ResponseReturnValue | None `process_response(response)` Can be overridden in order to modify the response object before it’s sent to the WSGI server. By default this will call all the [`after_request()`](#flask.Flask.after_request "flask.Flask.after_request") decorated functions. Changelog Changed in version 0.5: As of Flask 0.5 the functions registered for after request execution are called in reverse order of registration. Parameters: **response** ([Response](#flask.Response "flask.wrappers.Response")) – a [`response_class`](#flask.Flask.response_class "flask.Flask.response_class") object. Returns: a new response object or the same, has to be an instance of [`response_class`](#flask.Flask.response_class "flask.Flask.response_class"). Return type: [Response](#flask.Response "flask.wrappers.Response") `put(rule, **options)` Shortcut for [`route()`](#flask.Flask.route "flask.Flask.route") with `methods=["PUT"]`. Changelog New in version 2.0. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_route*], *T_route*] `redirect(location, code=302)` Create a redirect response object. 
This is called by [`flask.redirect()`](#flask.redirect "flask.redirect"), and can be called directly as well. Parameters: * **location** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The URL to redirect to. * **code** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)")) – The status code for the redirect. Return type: [Response](https://werkzeug.palletsprojects.com/en/2.3.x/wrappers/#werkzeug.wrappers.Response "(in Werkzeug v2.3.x)") Changelog New in version 2.2: Moved from `flask.redirect`, which calls this method. `register_blueprint(blueprint, **options)` Register a [`Blueprint`](#flask.Blueprint "flask.Blueprint") on the application. Keyword arguments passed to this method will override the defaults set on the blueprint. Calls the blueprint’s [`register()`](#flask.Blueprint.register "flask.Blueprint.register") method after recording the blueprint in the application’s [`blueprints`](#flask.Flask.blueprints "flask.Flask.blueprints"). Parameters: * **blueprint** ([Blueprint](#flask.Blueprint "flask.Blueprint")) – The blueprint to register. * **url_prefix** – Blueprint routes will be prefixed with this. * **subdomain** – Blueprint routes will match on this subdomain. * **url_defaults** – Blueprint routes will use these default values for view arguments. * **options** (*t.Any*) – Additional keyword arguments are passed to [`BlueprintSetupState`](#flask.blueprints.BlueprintSetupState "flask.blueprints.BlueprintSetupState"). They can be accessed in [`record()`](#flask.Blueprint.record "flask.Blueprint.record") callbacks. Return type: None Changelog Changed in version 2.0.1: The `name` option can be used to change the (pre-dotted) name the blueprint is registered with. This allows the same blueprint to be registered multiple times with unique names for `url_for`. New in version 0.7. `register_error_handler(code_or_exception, f)` Alternative error attach function to the [`errorhandler()`](#flask.Flask.errorhandler "flask.Flask.errorhandler") decorator that is more straightforward to use for non decorator usage. Changelog New in version 0.7. Parameters: * **code_or_exception** ([type](https://docs.python.org/3/library/functions.html#type "(in Python v3.11)")*[*[Exception](https://docs.python.org/3/library/exceptions.html#Exception "(in Python v3.11)")*]* *|* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)")) – * **f** (*ft.ErrorHandlerCallable*) – Return type: None `request_class` alias of [`Request`](#flask.Request "flask.wrappers.Request") `request_context(environ)` Create a [`RequestContext`](#flask.ctx.RequestContext "flask.ctx.RequestContext") representing a WSGI environment. Use a `with` block to push the context, which will make [`request`](#flask.request "flask.request") point at this request. See [The Request Context](../reqcontext/index). Typically you should not call this from your own code. A request context is automatically pushed by the [`wsgi_app()`](#flask.Flask.wsgi_app "flask.Flask.wsgi_app") when handling a request. Use [`test_request_context()`](#flask.Flask.test_request_context "flask.Flask.test_request_context") to create an environment and context instead of this method. 
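Should you need to call it anyway, for example to replay a captured WSGI environment, a minimal sketch using Werkzeug's test utilities:

```
from flask import Flask, request
from werkzeug.test import EnvironBuilder

app = Flask(__name__)

# Build a WSGI environment without going through a real server.
environ = EnvironBuilder(path="/hello", method="GET").get_environ()

with app.request_context(environ):
    # Inside the block, `request` points at the request built above.
    assert request.path == "/hello"
```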
Parameters: **environ** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)")) – a WSGI environment Return type: [RequestContext](#flask.ctx.RequestContext "flask.ctx.RequestContext") `response_class` alias of [`Response`](#flask.Response "flask.wrappers.Response") `root_path` Absolute path to the package on the filesystem. Used to look up resources contained in the package. `route(rule, **options)` Decorate a view function to register it with the given URL rule and options. Calls [`add_url_rule()`](#flask.Flask.add_url_rule "flask.Flask.add_url_rule"), which has more details about the implementation.

```
@app.route("/")
def index():
    return "Hello, World!"
```

See [URL Route Registrations](#url-route-registrations). The endpoint name for the route defaults to the name of the view function if the `endpoint` parameter isn’t passed. The `methods` parameter defaults to `["GET"]`. `HEAD` and `OPTIONS` are added automatically. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The URL rule string. * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Extra options passed to the [`Rule`](https://werkzeug.palletsprojects.com/en/2.3.x/routing/#werkzeug.routing.Rule "(in Werkzeug v2.3.x)") object. Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_route*], *T_route*] `run(host=None, port=None, debug=None, load_dotenv=True, **options)` Runs the application on a local development server. Do not use `run()` in a production setting. It is not intended to meet security and performance requirements for a production server. Instead, see [Deploying to Production](../deploying/index) for WSGI server recommendations. If the [`debug`](#flask.Flask.debug "flask.Flask.debug") flag is set, the server will automatically reload for code changes and show a debugger if an exception occurs. If you want to run the application in debug mode, but disable the code execution on the interactive debugger, you can pass `use_evalex=False` as a parameter. This will keep the debugger’s traceback screen active, but disable code execution. Using this function for development with automatic reloading is not recommended, as it is poorly supported. Instead, you should use the **flask** command line script’s `run` support. Keep in Mind Flask will suppress any server error with a generic error page unless it is in debug mode. As such, to enable just the interactive debugger without the code reloading, you have to invoke [`run()`](#flask.Flask.run "flask.Flask.run") with `debug=True` and `use_reloader=False`. Setting `use_debugger` to `True` without being in debug mode won’t catch any exceptions because there won’t be any to catch. Parameters: * **host** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the hostname to listen on. Set this to `'0.0.0.0'` to have the server available externally as well. Defaults to `'127.0.0.1'` or the host in the `SERVER_NAME` config variable if present. * **port** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)") *|* *None*) – the port of the webserver. Defaults to `5000` or the port defined in the `SERVER_NAME` config variable if present. * **debug** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") *|* *None*) – if given, enable or disable debug mode.
See [`debug`](#flask.Flask.debug "flask.Flask.debug"). * **load_dotenv** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Load the nearest `.env` and `.flaskenv` files to set environment variables. Will also change the working directory to the directory containing the first file found. * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – the options to be forwarded to the underlying Werkzeug server. See [`werkzeug.serving.run_simple()`](https://werkzeug.palletsprojects.com/en/2.3.x/serving/#werkzeug.serving.run_simple "(in Werkzeug v2.3.x)") for more information. Return type: None Changelog Changed in version 1.0: If installed, python-dotenv will be used to load environment variables from `.env` and `.flaskenv` files. The `FLASK_DEBUG` environment variable will override [`debug`](#flask.Flask.debug "flask.Flask.debug"). Threaded mode is enabled by default. Changed in version 0.10: The default port is now picked from the `SERVER_NAME` variable. `secret_key` If a secret key is set, cryptographic components can use this to sign cookies and other things. Set this to a complex random value when you want to use the secure cookie, for instance. This attribute can also be configured from the config with the [`SECRET_KEY`](../config/index#SECRET_KEY "SECRET_KEY") configuration key. Defaults to `None`. `select_jinja_autoescape(filename)` Returns `True` if autoescaping should be active for the given template name. If no template name is given, returns `True`. Changelog Changed in version 2.2: Autoescaping is now enabled by default for `.svg` files. New in version 0.5. Parameters: **filename** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") `send_static_file(filename)` The view function used to serve files from [`static_folder`](#flask.Flask.static_folder "flask.Flask.static_folder"). A route is automatically registered for this view at [`static_url_path`](#flask.Flask.static_url_path "flask.Flask.static_url_path") if [`static_folder`](#flask.Flask.static_folder "flask.Flask.static_folder") is set. Changelog New in version 0.5. Parameters: **filename** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Return type: [Response](#flask.Response "flask.Response") `session_interface: SessionInterface = <flask.sessions.SecureCookieSessionInterface object>` The session interface to use. By default an instance of [`SecureCookieSessionInterface`](#flask.sessions.SecureCookieSessionInterface "flask.sessions.SecureCookieSessionInterface") is used here. Changelog New in version 0.8. `shell_context_processor(f)` Registers a shell context processor function. Changelog New in version 0.11. Parameters: **f** (*T_shell_context_processor*) – Return type: *T_shell_context_processor* `shell_context_processors: list[ft.ShellContextProcessorCallable]` A list of shell context processor functions that should be run when a shell context is created. Changelog New in version 0.11. `should_ignore_error(error)` This is called to figure out if an error should be ignored or not as far as the teardown system is concerned. If this function returns `True` then the teardown handlers will not be passed the error. Changelog New in version 0.10.
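A minimal sketch of overriding this hook in a subclass (the subclass name and the chosen exception type are illustrative, not prescribed by Flask):

```
from flask import Flask

class TolerantFlask(Flask):
    def should_ignore_error(self, error):
        # Don't pass client-disconnect style errors to teardown handlers.
        return isinstance(error, ConnectionResetError)
```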
Parameters: **error** ([BaseException](https://docs.python.org/3/library/exceptions.html#BaseException "(in Python v3.11)") *|* *None*) – Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") `property static_folder: str | None` The absolute path to the configured static folder. `None` if no static folder is set. `property static_url_path: str | None` The URL prefix that the static route will be accessible from. If it was not configured during init, it is derived from [`static_folder`](#flask.Flask.static_folder "flask.Flask.static_folder"). `teardown_appcontext(f)` Registers a function to be called when the application context is popped. The application context is typically popped after the request context for each request, at the end of CLI commands, or after a manually pushed context ends.

```
with app.app_context():
    ...
```

When the `with` block exits (or `ctx.pop()` is called), the teardown functions are called just before the app context is made inactive. Since a request context typically also manages an application context, it would also be called when you pop a request context. When a teardown function is called because of an unhandled exception, it will be passed an error object. If an [`errorhandler()`](#flask.Flask.errorhandler "flask.Flask.errorhandler") is registered, it will handle the exception and the teardown will not receive it. Teardown functions must avoid raising exceptions. If they execute code that might fail, they must surround that code with a `try`/`except` block and log any errors. The return values of teardown functions are ignored. Changelog New in version 0.9. Parameters: **f** (*T_teardown*) – Return type: *T_teardown* `teardown_appcontext_funcs: list[ft.TeardownCallable]` A list of functions that are called when the application context is destroyed. Since the application context is also torn down if the request ends, this is the place to store code that disconnects from databases. Changelog New in version 0.9. `teardown_request(f)` Register a function to be called when the request context is popped. Typically this happens at the end of each request, but contexts may be pushed manually as well during testing.

```
with app.test_request_context():
    ...
```

When the `with` block exits (or `ctx.pop()` is called), the teardown functions are called just before the request context is made inactive. When a teardown function is called because of an unhandled exception, it will be passed an error object. If an [`errorhandler()`](#flask.Flask.errorhandler "flask.Flask.errorhandler") is registered, it will handle the exception and the teardown will not receive it. Teardown functions must avoid raising exceptions. If they execute code that might fail, they must surround that code with a `try`/`except` block and log any errors. The return values of teardown functions are ignored. This is available on both app and blueprint objects. When used on an app, this executes after every request. When used on a blueprint, this executes after every request that the blueprint handles. To register with a blueprint and execute after every request, use [`Blueprint.teardown_app_request()`](#flask.Blueprint.teardown_app_request "flask.Blueprint.teardown_app_request"). Parameters: **f** (*T_teardown*) – Return type: *T_teardown* `teardown_request_funcs: dict[ft.AppOrBlueprintKey, list[ft.TeardownCallable]]` A data structure of functions to call at the end of each request even if an exception is raised, in the format `{scope: [functions]}`.
The `scope` key is the name of a blueprint the functions are active for, or `None` for all requests. To register a function, use the [`teardown_request()`](#flask.Flask.teardown_request "flask.Flask.teardown_request") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `template_context_processors: dict[ft.AppOrBlueprintKey, list[ft.TemplateContextProcessorCallable]]` A data structure of functions to call to pass extra context values when rendering templates, in the format `{scope: [functions]}`. The `scope` key is the name of a blueprint the functions are active for, or `None` for all requests. To register a function, use the [`context_processor()`](#flask.Flask.context_processor "flask.Flask.context_processor") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `template_filter(name=None)` A decorator that is used to register a custom template filter. You can specify a name for the filter, otherwise the function name will be used. Example:

```
@app.template_filter()
def reverse(s):
    return s[::-1]
```

Parameters: **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the optional name of the filter, otherwise the function name will be used. Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_template_filter*], *T_template_filter*] `template_folder` The path to the templates folder, relative to [`root_path`](#flask.Flask.root_path "flask.Flask.root_path"), to add to the template loader. `None` if templates should not be added. `template_global(name=None)` A decorator that is used to register a custom template global function. You can specify a name for the global function, otherwise the function name will be used. Example:

```
@app.template_global()
def double(n):
    return 2 * n
```

Changelog New in version 0.10. Parameters: **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the optional name of the global function, otherwise the function name will be used. Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_template_global*], *T_template_global*] `template_test(name=None)` A decorator that is used to register a custom template test. You can specify a name for the test, otherwise the function name will be used. Example:

```
@app.template_test()
def is_prime(n):
    if n == 2:
        return True
    for i in range(2, int(math.ceil(math.sqrt(n))) + 1):
        if n % i == 0:
            return False
    return True
```

Changelog New in version 0.10. Parameters: **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the optional name of the test, otherwise the function name will be used. Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_template_test*], *T_template_test*] `test_cli_runner(**kwargs)` Create a CLI runner for testing CLI commands. See [Running Commands with the CLI Runner](../testing/index#testing-cli). Returns an instance of [`test_cli_runner_class`](#flask.Flask.test_cli_runner_class "flask.Flask.test_cli_runner_class"), by default [`FlaskCliRunner`](#flask.testing.FlaskCliRunner "flask.testing.FlaskCliRunner"). The Flask app object is passed as the first argument. Changelog New in version 1.0.
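A minimal sketch of invoking the built-in `routes` command with the runner (assumes a pytest-style `app` fixture providing the Flask application):

```
def test_routes_command(app):
    runner = app.test_cli_runner()
    # With no `cli` argument, the app's own command group is used.
    result = runner.invoke(args=["routes"])
    assert result.exit_code == 0
```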
Parameters: **kwargs** (*t.Any*) – Return type: [FlaskCliRunner](#flask.testing.FlaskCliRunner "flask.testing.FlaskCliRunner") `test_cli_runner_class: type[FlaskCliRunner] | None = None` The [`CliRunner`](https://click.palletsprojects.com/en/8.1.x/api/#click.testing.CliRunner "(in Click v8.1.x)") subclass, by default [`FlaskCliRunner`](#flask.testing.FlaskCliRunner "flask.testing.FlaskCliRunner"), that is used by [`test_cli_runner()`](#flask.Flask.test_cli_runner "flask.Flask.test_cli_runner"). Its `__init__` method should take a Flask app object as the first argument. Changelog New in version 1.0. `test_client(use_cookies=True, **kwargs)` Creates a test client for this application. For information about unit testing, head over to [Testing Flask Applications](../testing/index). Note that if you are testing for assertions or exceptions in your application code, you must set `app.testing = True` in order for the exceptions to propagate to the test client. Otherwise, the exception will be handled by the application (not visible to the test client) and the only indication of an AssertionError or other exception will be a 500 status code response to the test client. See the [`testing`](#flask.Flask.testing "flask.Flask.testing") attribute. For example:

```
app.testing = True
client = app.test_client()
```

The test client can be used in a `with` block to defer the closing down of the context until the end of the `with` block. This is useful if you want to access the context locals for testing:

```
with app.test_client() as c:
    rv = c.get('/?vodka=42')
    assert request.args['vodka'] == '42'
```

Additionally, you may pass optional keyword arguments that will then be passed to the application’s [`test_client_class`](#flask.Flask.test_client_class "flask.Flask.test_client_class") constructor. For example:

```
from flask.testing import FlaskClient

class CustomClient(FlaskClient):
    def __init__(self, *args, **kwargs):
        self._authentication = kwargs.pop("authentication")
        super().__init__(*args, **kwargs)

app.test_client_class = CustomClient
client = app.test_client(authentication='Basic ....')
```

See [`FlaskClient`](#flask.testing.FlaskClient "flask.testing.FlaskClient") for more information. Changelog Changed in version 0.11: Added `**kwargs` to support passing additional keyword arguments to the constructor of [`test_client_class`](#flask.Flask.test_client_class "flask.Flask.test_client_class"). New in version 0.7: The `use_cookies` parameter was added as well as the ability to override the client to be used by setting the [`test_client_class`](#flask.Flask.test_client_class "flask.Flask.test_client_class") attribute. Changed in version 0.4: added support for `with` block usage for the client. Parameters: * **use_cookies** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – * **kwargs** (*t.Any*) – Return type: [FlaskClient](#flask.testing.FlaskClient "flask.testing.FlaskClient") `test_client_class: type[FlaskClient] | None = None` The [`test_client()`](#flask.Flask.test_client "flask.Flask.test_client") method creates an instance of this test client class. Defaults to [`FlaskClient`](#flask.testing.FlaskClient "flask.testing.FlaskClient"). Changelog New in version 0.7. `test_request_context(*args, **kwargs)` Create a [`RequestContext`](#flask.ctx.RequestContext "flask.ctx.RequestContext") for a WSGI environment created from the given values.
This is mostly useful during testing, where you may want to run a function that uses request data without dispatching a full request. See [The Request Context](../reqcontext/index). Use a `with` block to push the context, which will make [`request`](#flask.request "flask.request") point at the request for the created environment.

```
with app.test_request_context(...):
    generate_report()
```

When using the shell, it may be easier to push and pop the context manually to avoid indentation.

```
ctx = app.test_request_context(...)
ctx.push()
...
ctx.pop()
```

Takes the same arguments as Werkzeug’s [`EnvironBuilder`](https://werkzeug.palletsprojects.com/en/2.3.x/test/#werkzeug.test.EnvironBuilder "(in Werkzeug v2.3.x)"), with some defaults from the application. See the linked Werkzeug docs for most of the available arguments. Flask-specific behavior is listed here. Parameters: * **path** – URL path being requested. * **base_url** – Base URL where the app is being served, which `path` is relative to. If not given, built from [`PREFERRED_URL_SCHEME`](../config/index#PREFERRED_URL_SCHEME "PREFERRED_URL_SCHEME"), `subdomain`, [`SERVER_NAME`](../config/index#SERVER_NAME "SERVER_NAME"), and [`APPLICATION_ROOT`](../config/index#APPLICATION_ROOT "APPLICATION_ROOT"). * **subdomain** – Subdomain name to append to [`SERVER_NAME`](../config/index#SERVER_NAME "SERVER_NAME"). * **url_scheme** – Scheme to use instead of [`PREFERRED_URL_SCHEME`](../config/index#PREFERRED_URL_SCHEME "PREFERRED_URL_SCHEME"). * **data** – The request body, either as a string or a dict of form keys and values. * **json** – If given, this is serialized as JSON and passed as `data`. Also defaults `content_type` to `application/json`. * **args** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – other positional arguments passed to [`EnvironBuilder`](https://werkzeug.palletsprojects.com/en/2.3.x/test/#werkzeug.test.EnvironBuilder "(in Werkzeug v2.3.x)"). * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – other keyword arguments passed to [`EnvironBuilder`](https://werkzeug.palletsprojects.com/en/2.3.x/test/#werkzeug.test.EnvironBuilder "(in Werkzeug v2.3.x)"). Return type: [RequestContext](#flask.ctx.RequestContext "flask.ctx.RequestContext") `testing` The testing flag. Set this to `True` to enable the test mode of Flask extensions (and in the future probably also Flask itself). For example this might activate test helpers that have an additional runtime cost which should not be enabled by default. If this is enabled and PROPAGATE_EXCEPTIONS is not changed from its default, PROPAGATE_EXCEPTIONS is implicitly enabled as well. This attribute can also be configured from the config with the `TESTING` configuration key. Defaults to `False`. `trap_http_exception(e)` Checks if an HTTP exception should be trapped or not. By default this will return `False` for all exceptions except for a bad request key error if `TRAP_BAD_REQUEST_ERRORS` is set to `True`. It also returns `True` if `TRAP_HTTP_EXCEPTIONS` is set to `True`. This is called for all HTTP exceptions raised by a view function. If it returns `True` for any exception, the error handler for this exception is not called and it shows up as a regular exception in the traceback. This is helpful for debugging implicitly raised HTTP exceptions. Changelog Changed in version 1.0: Bad request errors are not trapped by default in debug mode. New in version 0.8.
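For example, to find where an HTTP error is raised from, trapping can be enabled through configuration rather than by overriding this method. A minimal sketch (intended for debugging sessions only):

```
from flask import Flask

app = Flask(__name__)
# Re-raise all HTTP exceptions instead of passing them to error handlers.
app.config["TRAP_HTTP_EXCEPTIONS"] = True
# Also re-raise bad request key errors (e.g. a missing form key).
app.config["TRAP_BAD_REQUEST_ERRORS"] = True
```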
Parameters: **e** ([Exception](https://docs.python.org/3/library/exceptions.html#Exception "(in Python v3.11)")) – Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") `update_template_context(context)` Update the template context with some commonly used variables. This injects request, session, config and g into the template context as well as everything template context processors want to inject. Note that, as of Flask 0.6, the original values in the context will not be overridden if a context processor decides to return a value with the same key. Parameters: **context** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)")) – the context as a dictionary that is updated in place to add extra variables. Return type: None `url_build_error_handlers: list[t.Callable[[Exception, str, dict[str, t.Any]], str]]` A list of functions that are called by [`handle_url_build_error()`](#flask.Flask.handle_url_build_error "flask.Flask.handle_url_build_error") when [`url_for()`](#flask.Flask.url_for "flask.Flask.url_for") raises a `BuildError`. Each function is called with `error`, `endpoint` and `values`. If a function returns `None` or raises a `BuildError`, it is skipped. Otherwise, its return value is returned by `url_for`. Changelog New in version 0.9. `url_default_functions: dict[ft.AppOrBlueprintKey, list[ft.URLDefaultCallable]]` A data structure of functions to call to modify the keyword arguments when generating URLs, in the format `{scope: [functions]}`. The `scope` key is the name of a blueprint the functions are active for, or `None` for all requests. To register a function, use the [`url_defaults()`](#flask.Flask.url_defaults "flask.Flask.url_defaults") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `url_defaults(f)` Callback function for URL defaults for all view functions of the application. It’s called with the endpoint and values and should update the values passed in place. This is available on both app and blueprint objects. When used on an app, this is called for every request. When used on a blueprint, this is called for requests that the blueprint handles. To register with a blueprint and affect every request, use [`Blueprint.app_url_defaults()`](#flask.Blueprint.app_url_defaults "flask.Blueprint.app_url_defaults"). Parameters: **f** (*T_url_defaults*) – Return type: *T_url_defaults* `url_for(endpoint, *, _anchor=None, _method=None, _scheme=None, _external=None, **values)` Generate a URL to the given endpoint with the given values. This is called by [`flask.url_for()`](#flask.url_for "flask.url_for"), and can be called directly as well. An *endpoint* is the name of a URL rule, usually added with [`@app.route()`](#flask.Flask.route "flask.Flask.route"), and usually the same name as the view function. A route defined in a [`Blueprint`](#flask.Blueprint "flask.Blueprint") will prepend the blueprint’s name separated by a `.` to the endpoint. In some cases, such as email messages, you want URLs to include the scheme and domain, like `https://example.com/hello`. When not in an active request, URLs will be external by default, but this requires setting [`SERVER_NAME`](../config/index#SERVER_NAME "SERVER_NAME") so Flask knows what domain to use. [`APPLICATION_ROOT`](../config/index#APPLICATION_ROOT "APPLICATION_ROOT") and [`PREFERRED_URL_SCHEME`](../config/index#PREFERRED_URL_SCHEME "PREFERRED_URL_SCHEME") should also be configured as needed.
This config is only used when not in an active request. Functions can be decorated with [`url_defaults()`](#flask.Flask.url_defaults "flask.Flask.url_defaults") to modify keyword arguments before the URL is built. If building fails for some reason, such as an unknown endpoint or incorrect values, the app’s [`handle_url_build_error()`](#flask.Flask.handle_url_build_error "flask.Flask.handle_url_build_error") method is called. If that returns a string, that is returned, otherwise a `BuildError` is raised. Parameters: * **endpoint** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The endpoint name associated with the URL to generate. If this starts with a `.`, the current blueprint name (if any) will be used. * **_anchor** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – If given, append this as `#anchor` to the URL. * **_method** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – If given, generate the URL associated with this method for the endpoint. * **_scheme** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – If given, the URL will have this scheme if it is external. * **_external** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") *|* *None*) – If given, prefer the URL to be internal (False) or require it to be external (True). External URLs include the scheme and domain. When not in an active request, URLs are external by default. * **values** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Values to use for the variable parts of the URL rule. Unknown keys are appended as query string arguments, like `?a=b&c=d`. Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") Changelog New in version 2.2: Moved from `flask.url_for`, which calls this method. `url_map` The [`Map`](https://werkzeug.palletsprojects.com/en/2.3.x/routing/#werkzeug.routing.Map "(in Werkzeug v2.3.x)") for this instance. You can use this to change the routing converters after the class was created but before any routes are connected. Example:

```
from werkzeug.routing import BaseConverter

class ListConverter(BaseConverter):
    def to_python(self, value):
        return value.split(',')

    def to_url(self, values):
        return ','.join(super(ListConverter, self).to_url(value)
                        for value in values)

app = Flask(__name__)
app.url_map.converters['list'] = ListConverter
```

`url_map_class` alias of [`Map`](https://werkzeug.palletsprojects.com/en/2.3.x/routing/#werkzeug.routing.Map "(in Werkzeug v2.3.x)") `url_rule_class` alias of [`Rule`](https://werkzeug.palletsprojects.com/en/2.3.x/routing/#werkzeug.routing.Rule "(in Werkzeug v2.3.x)") `url_value_preprocessor(f)` Register a URL value preprocessor function for all view functions in the application. These functions will be called before the [`before_request()`](#flask.Flask.before_request "flask.Flask.before_request") functions. The function can modify the values captured from the matched URL before they are passed to the view. For example, this can be used to pop a common language code value and place it in `g` rather than pass it to every view. The function is passed the endpoint name and values dict. The return value is ignored. This is available on both app and blueprint objects. When used on an app, this is called for every request. When used on a blueprint, this is called for requests that the blueprint handles.
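A minimal sketch of that language-code pattern (the `lang_code` URL variable is hypothetical):

```
from flask import Flask, g

app = Flask(__name__)

@app.url_value_preprocessor
def pull_lang_code(endpoint, values):
    # Pop the common value so views don't need a lang_code parameter.
    if values is not None:
        g.lang_code = values.pop("lang_code", None)
```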
To register with a blueprint and affect every request, use [`Blueprint.app_url_value_preprocessor()`](#flask.Blueprint.app_url_value_preprocessor "flask.Blueprint.app_url_value_preprocessor"). Parameters: **f** (*T_url_value_preprocessor*) – Return type: *T_url_value_preprocessor* `url_value_preprocessors: dict[ft.AppOrBlueprintKey, list[ft.URLValuePreprocessorCallable]]` A data structure of functions to call to modify the keyword arguments passed to the view function, in the format `{scope: [functions]}`. The `scope` key is the name of a blueprint the functions are active for, or `None` for all requests. To register a function, use the [`url_value_preprocessor()`](#flask.Flask.url_value_preprocessor "flask.Flask.url_value_preprocessor") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `view_functions: dict[str, t.Callable]` A dictionary mapping endpoint names to view functions. To register a view function, use the [`route()`](#flask.Flask.route "flask.Flask.route") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `wsgi_app(environ, start_response)` The actual WSGI application. This is not implemented in `__call__()` so that middlewares can be applied without losing a reference to the app object. Instead of doing this:

```
app = MyMiddleware(app)
```

It’s a better idea to do this instead:

```
app.wsgi_app = MyMiddleware(app.wsgi_app)
```

Then you still have the original application object around and can continue to call methods on it. Changelog Changed in version 0.7: Teardown events for the request and app contexts are called even if an unhandled error occurs. Other events may not be called depending on when an error occurs during dispatch. See [Callbacks and Errors](../reqcontext/index#callbacks-and-errors). Parameters: * **environ** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)")) – A WSGI environment. * **start_response** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")) – A callable accepting a status code, a list of headers, and an optional exception context to start the response. Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")

Blueprint Objects
-----------------

`class flask.Blueprint(name, import_name, static_folder=None, static_url_path=None, template_folder=None, url_prefix=None, subdomain=None, url_defaults=None, root_path=None, cli_group=<object object>)` Represents a blueprint, a collection of routes and other app-related functions that can be registered on a real application later. A blueprint is an object that allows defining application functions without requiring an application object ahead of time. It uses the same decorators as [`Flask`](#flask.Flask "flask.Flask"), but defers the need for an application by recording them for later registration. Decorating a function with a blueprint creates a deferred function that is called with [`BlueprintSetupState`](#flask.blueprints.BlueprintSetupState "flask.blueprints.BlueprintSetupState") when the blueprint is registered on an application. See [Modular Applications with Blueprints](../blueprints/index) for more information. Parameters: * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The name of the blueprint. Will be prepended to each endpoint name.
* **import_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The name of the blueprint package, usually `__name__`. This helps locate the `root_path` for the blueprint. * **static_folder** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [os.PathLike](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.11)") *|* *None*) – A folder with static files that should be served by the blueprint’s static route. The path is relative to the blueprint’s root path. Blueprint static files are disabled by default. * **static_url_path** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – The url to serve static files from. Defaults to `static_folder`. If the blueprint does not have a `url_prefix`, the app’s static route will take precedence, and the blueprint’s static files won’t be accessible. * **template_folder** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [os.PathLike](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.11)") *|* *None*) – A folder with templates that should be added to the app’s template search path. The path is relative to the blueprint’s root path. Blueprint templates are disabled by default. Blueprint templates have a lower precedence than those in the app’s templates folder. * **url_prefix** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – A path to prepend to all of the blueprint’s URLs, to make them distinct from the rest of the app’s routes. * **subdomain** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – A subdomain that blueprint routes will match on by default. * **url_defaults** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)") *|* *None*) – A dict of default values that blueprint routes will receive by default. * **root_path** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – By default, the blueprint will automatically set this based on `import_name`. In certain situations this automatic detection can fail, so the path can be specified manually instead. * **cli_group** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – Changelog Changed in version 1.1.0: Blueprints have a `cli` group to register nested CLI commands. The `cli_group` parameter controls the name of the group under the `flask` command. New in version 0.7. `add_app_template_filter(f, name=None)` Register a template filter, available in any template rendered by the application. Works like the [`app_template_filter()`](#flask.Blueprint.app_template_filter "flask.Blueprint.app_template_filter") decorator. Equivalent to [`Flask.add_template_filter()`](#flask.Flask.add_template_filter "flask.Flask.add_template_filter"). Parameters: * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the optional name of the filter, otherwise the function name will be used. * **f** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")*[**[**...**]**,* [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")*]*) – Return type: None `add_app_template_global(f, name=None)` Register a template global, available in any template rendered by the application. 
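A minimal sketch of this non-decorator registration style (the blueprint and function names are illustrative):

```
from flask import Blueprint

bp = Blueprint("example", __name__)

def double(n):
    return 2 * n

# Available in any template once the blueprint is registered on an app.
bp.add_app_template_global(double)
```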
Works like the [`app_template_global()`](#flask.Blueprint.app_template_global "flask.Blueprint.app_template_global") decorator. Equivalent to [`Flask.add_template_global()`](#flask.Flask.add_template_global "flask.Flask.add_template_global"). Changelog New in version 0.10. Parameters: * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the optional name of the global, otherwise the function name will be used. * **f** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")*[**[**...**]**,* [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")*]*) – Return type: None `add_app_template_test(f, name=None)` Register a template test, available in any template rendered by the application. Works like the [`app_template_test()`](#flask.Blueprint.app_template_test "flask.Blueprint.app_template_test") decorator. Equivalent to [`Flask.add_template_test()`](#flask.Flask.add_template_test "flask.Flask.add_template_test"). Changelog New in version 0.10. Parameters: * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the optional name of the test, otherwise the function name will be used. * **f** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")*[**[**...**]**,* [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")*]*) – Return type: None `add_url_rule(rule, endpoint=None, view_func=None, provide_automatic_options=None, **options)` Register a URL rule with the blueprint. See [`Flask.add_url_rule()`](#flask.Flask.add_url_rule "flask.Flask.add_url_rule") for full documentation. The URL rule is prefixed with the blueprint’s URL prefix. The endpoint name, used with [`url_for()`](#flask.url_for "flask.url_for"), is prefixed with the blueprint’s name. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **endpoint** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – * **view_func** (*ft.RouteCallable* *|* *None*) – * **provide_automatic_options** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") *|* *None*) – * **options** (*t.Any*) – Return type: None `after_app_request(f)` Like [`after_request()`](#flask.Blueprint.after_request "flask.Blueprint.after_request"), but after every request, not only those handled by the blueprint. Equivalent to [`Flask.after_request()`](#flask.Flask.after_request "flask.Flask.after_request"). Parameters: **f** (*T_after_request*) – Return type: *T_after_request* `after_request(f)` Register a function to run after each request to this object. The function is called with the response object, and must return a response object. This allows the functions to modify or replace the response before it is sent. If a function raises an exception, any remaining `after_request` functions will not be called. Therefore, this should not be used for actions that must execute, such as to close resources. Use [`teardown_request()`](#flask.Blueprint.teardown_request "flask.Blueprint.teardown_request") for that. This is available on both app and blueprint objects. When used on an app, this executes after every request. When used on a blueprint, this executes after every request that the blueprint handles. 
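A minimal sketch of an `after_request` function (the header name is illustrative):

```
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_header(response):
    # Modify the outgoing response; the function must return it.
    response.headers["X-Example"] = "1"
    return response
```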
To register with a blueprint and execute after every request, use [`Blueprint.after_app_request()`](#flask.Blueprint.after_app_request "flask.Blueprint.after_app_request"). Parameters: **f** (*T_after_request*) – Return type: *T_after_request* `after_request_funcs: dict[ft.AppOrBlueprintKey, list[ft.AfterRequestCallable]]` A data structure of functions to call at the end of each request, in the format `{scope: [functions]}`. The `scope` key is the name of a blueprint the functions are active for, or `None` for all requests. To register a function, use the [`after_request()`](#flask.Blueprint.after_request "flask.Blueprint.after_request") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `app_context_processor(f)` Like [`context_processor()`](#flask.Blueprint.context_processor "flask.Blueprint.context_processor"), but for templates rendered by every view, not only by the blueprint. Equivalent to [`Flask.context_processor()`](#flask.Flask.context_processor "flask.Flask.context_processor"). Parameters: **f** (*T_template_context_processor*) – Return type: *T_template_context_processor* `app_errorhandler(code)` Like [`errorhandler()`](#flask.Blueprint.errorhandler "flask.Blueprint.errorhandler"), but for every request, not only those handled by the blueprint. Equivalent to [`Flask.errorhandler()`](#flask.Flask.errorhandler "flask.Flask.errorhandler"). Parameters: **code** ([type](https://docs.python.org/3/library/functions.html#type "(in Python v3.11)")*[*[Exception](https://docs.python.org/3/library/exceptions.html#Exception "(in Python v3.11)")*]* *|* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)")) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_error_handler*], *T_error_handler*] `app_template_filter(name=None)` Register a template filter, available in any template rendered by the application. Equivalent to [`Flask.template_filter()`](#flask.Flask.template_filter "flask.Flask.template_filter"). Parameters: **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the optional name of the filter, otherwise the function name will be used. Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_template_filter*], *T_template_filter*] `app_template_global(name=None)` Register a template global, available in any template rendered by the application. Equivalent to [`Flask.template_global()`](#flask.Flask.template_global "flask.Flask.template_global"). Changelog New in version 0.10. Parameters: **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the optional name of the global, otherwise the function name will be used. Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_template_global*], *T_template_global*] `app_template_test(name=None)` Register a template test, available in any template rendered by the application. Equivalent to [`Flask.template_test()`](#flask.Flask.template_test "flask.Flask.template_test"). Changelog New in version 0.10. Parameters: **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the optional name of the test, otherwise the function name will be used. 
Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_template_test*], *T_template_test*] `app_url_defaults(f)` Like [`url_defaults()`](#flask.Blueprint.url_defaults "flask.Blueprint.url_defaults"), but for every request, not only those handled by the blueprint. Equivalent to [`Flask.url_defaults()`](#flask.Flask.url_defaults "flask.Flask.url_defaults"). Parameters: **f** (*T_url_defaults*) – Return type: *T_url_defaults* `app_url_value_preprocessor(f)` Like [`url_value_preprocessor()`](#flask.Blueprint.url_value_preprocessor "flask.Blueprint.url_value_preprocessor"), but for every request, not only those handled by the blueprint. Equivalent to [`Flask.url_value_preprocessor()`](#flask.Flask.url_value_preprocessor "flask.Flask.url_value_preprocessor"). Parameters: **f** (*T_url_value_preprocessor*) – Return type: *T_url_value_preprocessor* `before_app_request(f)` Like [`before_request()`](#flask.Blueprint.before_request "flask.Blueprint.before_request"), but before every request, not only those handled by the blueprint. Equivalent to [`Flask.before_request()`](#flask.Flask.before_request "flask.Flask.before_request"). Parameters: **f** (*T_before_request*) – Return type: *T_before_request* `before_request(f)` Register a function to run before each request. For example, this can be used to open a database connection, or to load the logged in user from the session.

```
@app.before_request
def load_user():
    if "user_id" in session:
        g.user = db.session.get(session["user_id"])
```

The function will be called without any arguments. If it returns a non-`None` value, the value is handled as if it was the return value from the view, and further request handling is stopped. This is available on both app and blueprint objects. When used on an app, this executes before every request. When used on a blueprint, this executes before every request that the blueprint handles. To register with a blueprint and execute before every request, use [`Blueprint.before_app_request()`](#flask.Blueprint.before_app_request "flask.Blueprint.before_app_request"). Parameters: **f** (*T_before_request*) – Return type: *T_before_request* `before_request_funcs: dict[ft.AppOrBlueprintKey, list[ft.BeforeRequestCallable]]` A data structure of functions to call at the beginning of each request, in the format `{scope: [functions]}`. The `scope` key is the name of a blueprint the functions are active for, or `None` for all requests. To register a function, use the [`before_request()`](#flask.Blueprint.before_request "flask.Blueprint.before_request") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `cli` The Click command group for registering CLI commands for this object. The commands are available from the `flask` command once the application has been discovered and blueprints have been registered. `context_processor(f)` Registers a template context processor function. These functions run before rendering a template. The keys of the returned dict are added as variables available in the template. This is available on both app and blueprint objects. When used on an app, this is called for every rendered template. When used on a blueprint, this is called for templates rendered from the blueprint’s views. To register with a blueprint and affect every template, use [`Blueprint.app_context_processor()`](#flask.Blueprint.app_context_processor "flask.Blueprint.app_context_processor").
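A minimal sketch of a context processor (the injected name is illustrative):

```
from flask import Flask

app = Flask(__name__)

@app.context_processor
def inject_site_name():
    # "site_name" becomes available in every rendered template.
    return {"site_name": "Example"}
```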
Parameters: **f** (*T_template_context_processor*) – Return type: *T_template_context_processor* `delete(rule, **options)` Shortcut for [`route()`](#flask.Blueprint.route "flask.Blueprint.route") with `methods=["DELETE"]`. Changelog New in version 2.0. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_route*], *T_route*] `endpoint(endpoint)` Decorate a view function to register it for the given endpoint. Used if a rule is added without a `view_func` with [`add_url_rule()`](#flask.Blueprint.add_url_rule "flask.Blueprint.add_url_rule"). ``` app.add_url_rule("/ex", endpoint="example") @app.endpoint("example") def example(): ... ``` Parameters: **endpoint** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The endpoint name to associate with the view function. Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*F*], *F*] `error_handler_spec: dict[ft.AppOrBlueprintKey, dict[int | None, dict[type[Exception], ft.ErrorHandlerCallable]]]` A data structure of registered error handlers, in the format `{scope: {code: {class: handler}}}`. The `scope` key is the name of a blueprint the handlers are active for, or `None` for all requests. The `code` key is the HTTP status code for `HTTPException`, or `None` for other exceptions. The innermost dictionary maps exception classes to handler functions. To register an error handler, use the [`errorhandler()`](#flask.Blueprint.errorhandler "flask.Blueprint.errorhandler") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `errorhandler(code_or_exception)` Register a function to handle errors by code or exception class. A decorator that is used to register a function given an error code. Example: ``` @app.errorhandler(404) def page_not_found(error): return 'This page does not exist', 404 ``` You can also register handlers for arbitrary exceptions: ``` @app.errorhandler(DatabaseError) def special_exception_handler(error): return 'Database connection failed', 500 ``` This is available on both app and blueprint objects. When used on an app, this can handle errors from every request. When used on a blueprint, this can handle errors from requests that the blueprint handles. To register with a blueprint and affect every request, use [`Blueprint.app_errorhandler()`](#flask.Blueprint.app_errorhandler "flask.Blueprint.app_errorhandler"). Changelog New in version 0.7: Use [`register_error_handler()`](#flask.Blueprint.register_error_handler "flask.Blueprint.register_error_handler") instead of modifying [`error_handler_spec`](#flask.Blueprint.error_handler_spec "flask.Blueprint.error_handler_spec") directly, for application wide error handlers. New in version 0.7: One can now additionally also register custom exception types that do not necessarily have to be a subclass of the [`HTTPException`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.HTTPException "(in Werkzeug v2.3.x)") class. 
Parameters: **code_or_exception** ([type](https://docs.python.org/3/library/functions.html#type "(in Python v3.11)")*[*[Exception](https://docs.python.org/3/library/exceptions.html#Exception "(in Python v3.11)")*]* *|* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)")) – the code as integer for the handler, or an arbitrary exception Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_error_handler*], *T_error_handler*] `get(rule, **options)` Shortcut for [`route()`](#flask.Blueprint.route "flask.Blueprint.route") with `methods=["GET"]`. Changelog New in version 2.0. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_route*], *T_route*] `get_send_file_max_age(filename)` Used by [`send_file()`](#flask.send_file "flask.send_file") to determine the `max_age` cache value for a given file path if it wasn’t passed. By default, this returns [`SEND_FILE_MAX_AGE_DEFAULT`](../config/index#SEND_FILE_MAX_AGE_DEFAULT "SEND_FILE_MAX_AGE_DEFAULT") from the configuration of [`current_app`](#flask.current_app "flask.current_app"). This defaults to `None`, which tells the browser to use conditional requests instead of a timed cache, which is usually preferable. Changelog Changed in version 2.0: The default configuration is `None` instead of 12 hours. New in version 0.9. Parameters: **filename** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – Return type: [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)") | None `property has_static_folder: bool` `True` if [`static_folder`](#flask.Blueprint.static_folder "flask.Blueprint.static_folder") is set. Changelog New in version 0.5. `import_name` The name of the package or module that this object belongs to. Do not change this once it is set by the constructor. `property jinja_loader: FileSystemLoader | None` The Jinja loader for this object’s templates. By default this is a [`jinja2.loaders.FileSystemLoader`](https://jinja.palletsprojects.com/en/3.1.x/api/#jinja2.FileSystemLoader "(in Jinja v3.1.x)") pointing to [`template_folder`](#flask.Blueprint.template_folder "flask.Blueprint.template_folder"), if it is set. Changelog New in version 0.5. `make_setup_state(app, options, first_registration=False)` Creates an instance of [`BlueprintSetupState()`](#flask.blueprints.BlueprintSetupState "flask.blueprints.BlueprintSetupState") that is later passed to the register callback functions. Subclasses can override this to return a subclass of the setup state. Parameters: * **app** ([Flask](#flask.Flask "flask.Flask")) – * **options** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)")) – * **first_registration** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Return type: [BlueprintSetupState](#flask.blueprints.BlueprintSetupState "flask.blueprints.BlueprintSetupState") `open_resource(resource, mode='rb')` Open a resource file relative to [`root_path`](#flask.Blueprint.root_path "flask.Blueprint.root_path") for reading.
For example, if the file `schema.sql` is next to the file `app.py` where the `Flask` app is defined, it can be opened with:

```
with app.open_resource("schema.sql") as f:
    conn.executescript(f.read())
```

Parameters: * **resource** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Path to the resource relative to [`root_path`](#flask.Blueprint.root_path "flask.Blueprint.root_path"). * **mode** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Open the file in this mode. Only reading is supported; valid values are “r” (or “rt”) and “rb”. Return type: [IO](https://docs.python.org/3/library/typing.html#typing.IO "(in Python v3.11)") `patch(rule, **options)` Shortcut for [`route()`](#flask.Blueprint.route "flask.Blueprint.route") with `methods=["PATCH"]`. Changelog New in version 2.0. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_route*], *T_route*] `post(rule, **options)` Shortcut for [`route()`](#flask.Blueprint.route "flask.Blueprint.route") with `methods=["POST"]`. Changelog New in version 2.0. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_route*], *T_route*] `put(rule, **options)` Shortcut for [`route()`](#flask.Blueprint.route "flask.Blueprint.route") with `methods=["PUT"]`. Changelog New in version 2.0. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_route*], *T_route*] `record(func)` Registers a function that is called when the blueprint is registered on the application. This function is called with the state returned by the [`make_setup_state()`](#flask.Blueprint.make_setup_state "flask.Blueprint.make_setup_state") method as its argument. Parameters: **func** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")) – Return type: None `record_once(func)` Works like [`record()`](#flask.Blueprint.record "flask.Blueprint.record") but wraps the function in another function that ensures it is only called once. If the blueprint is registered a second time on the application, the function passed is not called. Parameters: **func** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")) – Return type: None `register(app, options)` Called by [`Flask.register_blueprint()`](#flask.Flask.register_blueprint "flask.Flask.register_blueprint") to register all views and callbacks registered on the blueprint with the application. Creates a [`BlueprintSetupState`](#flask.blueprints.BlueprintSetupState "flask.blueprints.BlueprintSetupState") and calls each [`record()`](#flask.Blueprint.record "flask.Blueprint.record") callback with it.
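As a sketch of the `record_once()` hook described above (the config key is hypothetical):

```
from flask import Blueprint, Flask

bp = Blueprint("example", __name__)

def on_register(state):
    # `state` is the BlueprintSetupState created by make_setup_state();
    # `state.app` is the application the blueprint is registered on.
    state.app.config.setdefault("EXAMPLE_FEATURE", True)

bp.record_once(on_register)

app = Flask(__name__)
app.register_blueprint(bp)  # triggers on_register exactly once
```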
Parameters: * **app** ([Flask](#flask.Flask "flask.Flask")) – The application this blueprint is being registered with. * **options** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)")) – Keyword arguments forwarded from [`register_blueprint()`](#flask.Flask.register_blueprint "flask.Flask.register_blueprint"). Return type: None Changed in version 2.3: Nested blueprints now correctly apply subdomains. Changelog Changed in version 2.1: Registering the same blueprint with the same name multiple times is an error. Changed in version 2.0.1: Nested blueprints are registered with their dotted name. This allows different blueprints with the same name to be nested at different locations. Changed in version 2.0.1: The `name` option can be used to change the (pre-dotted) name the blueprint is registered with. This allows the same blueprint to be registered multiple times with unique names for `url_for`. `register_blueprint(blueprint, **options)` Register a [`Blueprint`](#flask.Blueprint "flask.Blueprint") on this blueprint. Keyword arguments passed to this method will override the defaults set on the blueprint. Changelog Changed in version 2.0.1: The `name` option can be used to change the (pre-dotted) name the blueprint is registered with. This allows the same blueprint to be registered multiple times with unique names for `url_for`. New in version 2.0. Parameters: * **blueprint** ([Blueprint](#flask.Blueprint "flask.blueprints.Blueprint")) – * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: None `register_error_handler(code_or_exception, f)` A non-decorator alternative to the [`errorhandler()`](#flask.Blueprint.errorhandler "flask.Blueprint.errorhandler") decorator; it is more straightforward to use when registering a handler without decorator syntax. Changelog New in version 0.7. Parameters: * **code_or_exception** ([type](https://docs.python.org/3/library/functions.html#type "(in Python v3.11)")*[*[Exception](https://docs.python.org/3/library/exceptions.html#Exception "(in Python v3.11)")*]* *|* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)")) – * **f** (*ft.ErrorHandlerCallable*) – Return type: None `root_path` Absolute path to the package on the filesystem. Used to look up resources contained in the package. `route(rule, **options)` Decorate a view function to register it with the given URL rule and options. Calls [`add_url_rule()`](#flask.Blueprint.add_url_rule "flask.Blueprint.add_url_rule"), which has more details about the implementation.

```
@app.route("/")
def index():
    return "Hello, World!"
```

See [URL Route Registrations](#url-route-registrations). The endpoint name for the route defaults to the name of the view function if the `endpoint` parameter isn’t passed. The `methods` parameter defaults to `["GET"]`. `HEAD` and `OPTIONS` are added automatically. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The URL rule string. * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Extra options passed to the [`Rule`](https://werkzeug.palletsprojects.com/en/2.3.x/routing/#werkzeug.routing.Rule "(in Werkzeug v2.3.x)") object.
Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*T_route*], *T_route*] `send_static_file(filename)` The view function used to serve files from [`static_folder`](#flask.Blueprint.static_folder "flask.Blueprint.static_folder"). A route is automatically registered for this view at [`static_url_path`](#flask.Blueprint.static_url_path "flask.Blueprint.static_url_path") if [`static_folder`](#flask.Blueprint.static_folder "flask.Blueprint.static_folder") is set. Changelog New in version 0.5. Parameters: **filename** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Return type: [Response](#flask.Response "flask.Response") `property static_folder: str | None` The absolute path to the configured static folder. `None` if no static folder is set. `property static_url_path: str | None` The URL prefix that the static route will be accessible from. If it was not configured during init, it is derived from [`static_folder`](#flask.Blueprint.static_folder "flask.Blueprint.static_folder"). `teardown_app_request(f)` Like [`teardown_request()`](#flask.Blueprint.teardown_request "flask.Blueprint.teardown_request"), but after every request, not only those handled by the blueprint. Equivalent to [`Flask.teardown_request()`](#flask.Flask.teardown_request "flask.Flask.teardown_request"). Parameters: **f** (*T_teardown*) – Return type: *T_teardown* `teardown_request(f)` Register a function to be called when the request context is popped. Typically this happens at the end of each request, but contexts may be pushed manually as well during testing.

```
with app.test_request_context():
    ...
```

When the `with` block exits (or `ctx.pop()` is called), the teardown functions are called just before the request context is made inactive. If a teardown function is called because of an unhandled exception, it will be passed the error object. If an [`errorhandler()`](#flask.Blueprint.errorhandler "flask.Blueprint.errorhandler") is registered, it will handle the exception and the teardown will not receive it. Teardown functions must avoid raising exceptions. If they execute code that might fail, they must surround that code with a `try`/`except` block and log any errors. The return values of teardown functions are ignored. This is available on both app and blueprint objects. When used on an app, this executes after every request. When used on a blueprint, this executes after every request that the blueprint handles. To register with a blueprint and execute after every request, use [`Blueprint.teardown_app_request()`](#flask.Blueprint.teardown_app_request "flask.Blueprint.teardown_app_request"). Parameters: **f** (*T_teardown*) – Return type: *T_teardown* `teardown_request_funcs: dict[ft.AppOrBlueprintKey, list[ft.TeardownCallable]]` A data structure of functions to call at the end of each request even if an exception is raised, in the format `{scope: [functions]}`. The `scope` key is the name of a blueprint the functions are active for, or `None` for all requests. To register a function, use the [`teardown_request()`](#flask.Blueprint.teardown_request "flask.Blueprint.teardown_request") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `template_context_processors: dict[ft.AppOrBlueprintKey, list[ft.TemplateContextProcessorCallable]]` A data structure of functions to call to pass extra context values when rendering templates, in the format `{scope: [functions]}`.
The `scope` key is the name of a blueprint the functions are active for, or `None` for all requests. To register a function, use the [`context_processor()`](#flask.Blueprint.context_processor "flask.Blueprint.context_processor") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `template_folder` The path to the templates folder, relative to [`root_path`](#flask.Blueprint.root_path "flask.Blueprint.root_path"), to add to the template loader. `None` if templates should not be added. `url_default_functions: dict[ft.AppOrBlueprintKey, list[ft.URLDefaultCallable]]` A data structure of functions to call to modify the keyword arguments when generating URLs, in the format `{scope: [functions]}`. The `scope` key is the name of a blueprint the functions are active for, or `None` for all requests. To register a function, use the [`url_defaults()`](#flask.Blueprint.url_defaults "flask.Blueprint.url_defaults") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `url_defaults(f)` Callback function for URL defaults for all view functions of the application. It’s called with the endpoint and values and should update the values passed in place. This is available on both app and blueprint objects. When used on an app, this is called for every request. When used on a blueprint, this is called for requests that the blueprint handles. To register with a blueprint and affect every request, use [`Blueprint.app_url_defaults()`](#flask.Blueprint.app_url_defaults "flask.Blueprint.app_url_defaults"). Parameters: **f** (*T_url_defaults*) – Return type: *T_url_defaults* `url_value_preprocessor(f)` Register a URL value preprocessor function for all view functions in the application. These functions will be called before the [`before_request()`](#flask.Blueprint.before_request "flask.Blueprint.before_request") functions. The function can modify the values captured from the matched url before they are passed to the view. For example, this can be used to pop a common language code value and place it in `g` rather than pass it to every view. The function is passed the endpoint name and values dict. The return value is ignored. This is available on both app and blueprint objects. When used on an app, this is called for every request. When used on a blueprint, this is called for requests that the blueprint handles. To register with a blueprint and affect every request, use [`Blueprint.app_url_value_preprocessor()`](#flask.Blueprint.app_url_value_preprocessor "flask.Blueprint.app_url_value_preprocessor"). Parameters: **f** (*T_url_value_preprocessor*) – Return type: *T_url_value_preprocessor* `url_value_preprocessors: dict[ft.AppOrBlueprintKey, list[ft.URLValuePreprocessorCallable]]` A data structure of functions to call to modify the keyword arguments passed to the view function, in the format `{scope: [functions]}`. The `scope` key is the name of a blueprint the functions are active for, or `None` for all requests. To register a function, use the [`url_value_preprocessor()`](#flask.Blueprint.url_value_preprocessor "flask.Blueprint.url_value_preprocessor") decorator. This data structure is internal. It should not be modified directly and its format may change at any time. `view_functions: dict[str, t.Callable]` A dictionary mapping endpoint names to view functions. To register a view function, use the [`route()`](#flask.Blueprint.route "flask.Blueprint.route") decorator. 
This data structure is internal. It should not be modified directly and its format may change at any time. Incoming Request Data --------------------- `class flask.Request(environ, populate_request=True, shallow=False)` The request object used by default in Flask. Remembers the matched endpoint and view arguments. It is what ends up as [`request`](#flask.request "flask.request"). If you want to replace the request object used, you can subclass this and set [`request_class`](#flask.Flask.request_class "flask.Flask.request_class") to your subclass. The request object is a [`Request`](https://werkzeug.palletsprojects.com/en/2.3.x/wrappers/#werkzeug.wrappers.Request "(in Werkzeug v2.3.x)") subclass and provides all of the attributes Werkzeug defines plus a few Flask-specific ones. Parameters: * **environ** (*WSGIEnvironment*) – * **populate_request** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – * **shallow** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – `property accept_charsets: CharsetAccept` List of charsets this client supports as a [`CharsetAccept`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.CharsetAccept "(in Werkzeug v2.3.x)") object. `property accept_encodings: Accept` List of encodings this client accepts. In HTTP terms, encodings are compression encodings such as gzip. For charsets, see `accept_charsets`. `property accept_languages: LanguageAccept` List of languages this client accepts as a [`LanguageAccept`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.LanguageAccept "(in Werkzeug v2.3.x)") object. `property accept_mimetypes: MIMEAccept` List of mimetypes this client supports as a [`MIMEAccept`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.MIMEAccept "(in Werkzeug v2.3.x)") object. `access_control_request_headers` Sent with a preflight request to indicate which headers will be sent with the cross origin request. Set `access_control_allow_headers` on the response to indicate which headers are allowed. `access_control_request_method` Sent with a preflight request to indicate which method will be used for the cross origin request. Set `access_control_allow_methods` on the response to indicate which methods are allowed. `property access_route: list[str]` If a forwarded header exists, this is a list of all IP addresses from the client IP to the last proxy server. `classmethod application(f)` Decorate a function as a responder that accepts the request as the last argument. This works like the `responder()` decorator but the function is passed the request object as the last argument and the request object will be closed automatically:

```
@Request.application
def my_wsgi_app(request):
    return Response('Hello World!')
```

As of Werkzeug 0.14 HTTP exceptions are automatically caught and converted to responses instead of failing. Parameters: **f** (*t.Callable**[**[*[Request](#flask.Request "flask.Request")*]**,* *WSGIApplication**]*) – the WSGI callable to decorate Returns: a new WSGI callable Return type: WSGIApplication `property args: MultiDict[str, str]` The parsed URL parameters (the part in the URL after the question mark). By default an [`ImmutableMultiDict`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.ImmutableMultiDict "(in Werkzeug v2.3.x)") is returned from this function.
This can be changed by setting [`parameter_storage_class`](#flask.Request.parameter_storage_class "flask.Request.parameter_storage_class") to a different type. This might be necessary if the order of the form data is important. Changed in version 2.3: Invalid bytes remain percent encoded. `property authorization: Authorization | None` The `Authorization` header parsed into an `Authorization` object. `None` if the header is not present. Changed in version 2.3: `Authorization` is no longer a `dict`. The `token` attribute was added for auth schemes that use a token instead of parameters. `property base_url: str` Like [`url`](#flask.Request.url "flask.Request.url") but without the query string. `property blueprint: str | None` The registered name of the current blueprint. This will be `None` if the endpoint is not part of a blueprint, or if URL matching failed or has not been performed yet. This does not necessarily match the name the blueprint was created with. It may have been nested, or registered with a different name. `property blueprints: list[str]` The registered names of the current blueprint upwards through parent blueprints. This will be an empty list if there is no current blueprint, or if URL matching failed. Changelog New in version 2.0.1. `property cache_control: RequestCacheControl` A [`RequestCacheControl`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.RequestCacheControl "(in Werkzeug v2.3.x)") object for the incoming cache control headers. `property charset: str` The charset used to decode body, form, and cookie data. Defaults to UTF-8. Deprecated since version 2.3: Will be removed in Werkzeug 3.0. Request data must always be UTF-8. `close()` Closes associated resources of this request object. This closes all file handles explicitly. You can also use the request object in a with statement which will automatically close it. Changelog New in version 0.9. Return type: None `content_encoding` The Content-Encoding entity-header field is used as a modifier to the media-type. When present, its value indicates what additional content codings have been applied to the entity-body, and thus what decoding mechanisms must be applied in order to obtain the media-type referenced by the Content-Type header field. Changelog New in version 0.9. `property content_length: int | None` The Content-Length entity-header field indicates the size of the entity-body in bytes or, in the case of the HEAD method, the size of the entity-body that would have been sent had the request been a GET. `content_md5` The Content-MD5 entity-header field, as defined in RFC 1864, is an MD5 digest of the entity-body for the purpose of providing an end-to-end message integrity check (MIC) of the entity-body. (Note: a MIC is good for detecting accidental modification of the entity-body in transit, but is not proof against malicious attacks.) Changelog New in version 0.9. `content_type` The Content-Type entity-header field indicates the media type of the entity-body sent to the recipient or, in the case of the HEAD method, the media type that would have been sent had the request been a GET. `property cookies: ImmutableMultiDict[str, str]` A [`dict`](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)") with the contents of all cookies transmitted with the request. `property data: bytes` The raw data read from [`stream`](#flask.Request.stream "flask.Request.stream"). Will be empty if the request represents form data. 
To get the raw data even if it represents form data, use [`get_data()`](#flask.Request.get_data "flask.Request.get_data"). `date` The Date general-header field represents the date and time at which the message was originated, having the same semantics as orig-date in RFC 822. Changelog Changed in version 2.0: The datetime object is timezone-aware. `dict_storage_class` alias of [`ImmutableMultiDict`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.ImmutableMultiDict "(in Werkzeug v2.3.x)") `property encoding_errors: str` How errors when decoding bytes are handled. Defaults to “replace”. Deprecated since version 2.3: Will be removed in Werkzeug 3.0. `property endpoint: str | None` The endpoint that matched the request URL. This will be `None` if matching failed or has not been performed yet. This in combination with [`view_args`](#flask.Request.view_args "flask.Request.view_args") can be used to reconstruct the same URL or a modified URL. `environ: WSGIEnvironment` The WSGI environment containing HTTP headers and information from the WSGI server. `property files: ImmutableMultiDict[str, FileStorage]` [`MultiDict`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.MultiDict "(in Werkzeug v2.3.x)") object containing all uploaded files. Each key in [`files`](#flask.Request.files "flask.Request.files") is the name from the `<input type="file" name="">`. Each value in [`files`](#flask.Request.files "flask.Request.files") is a Werkzeug [`FileStorage`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.FileStorage "(in Werkzeug v2.3.x)") object. It basically behaves like a standard file object you know from Python, with the difference that it also has a [`save()`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.FileStorage.save "(in Werkzeug v2.3.x)") function that can store the file on the filesystem. Note that [`files`](#flask.Request.files "flask.Request.files") will only contain data if the request method was POST, PUT or PATCH and the `<form>` that posted to the request had `enctype="multipart/form-data"`. It will be empty otherwise. See the [`MultiDict`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.MultiDict "(in Werkzeug v2.3.x)") / [`FileStorage`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.FileStorage "(in Werkzeug v2.3.x)") documentation for more details about the used data structure. `property form: ImmutableMultiDict[str, str]` The form parameters. By default an [`ImmutableMultiDict`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.ImmutableMultiDict "(in Werkzeug v2.3.x)") is returned from this function. This can be changed by setting [`parameter_storage_class`](#flask.Request.parameter_storage_class "flask.Request.parameter_storage_class") to a different type. This might be necessary if the order of the form data is important. Please keep in mind that file uploads will not end up here, but instead in the [`files`](#flask.Request.files "flask.Request.files") attribute. Changelog Changed in version 0.9: Previous to Werkzeug 0.9 this would only contain form data for POST and PUT requests. 
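To illustrate the `form` and `files` attributes described above, a minimal sketch; the route, field names, and target path are hypothetical:

```
from flask import Flask, request

app = Flask(__name__)

@app.post("/upload")
def upload():
    # Text fields arrive in request.form; uploads arrive in request.files
    # (only for multipart/form-data POST, PUT, or PATCH requests).
    title = request.form.get("title", "untitled")
    document = request.files.get("document")
    if document is not None and document.filename:
        document.save("/tmp/" + document.filename)  # FileStorage.save()
        return {"title": title, "saved": True}
    return {"title": title, "saved": False}
```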
`form_data_parser_class` alias of [`FormDataParser`](https://werkzeug.palletsprojects.com/en/2.3.x/http/#werkzeug.formparser.FormDataParser "(in Werkzeug v2.3.x)") `classmethod from_values(*args, **kwargs)` Create a new request object based on the values provided. If `environ` is given, missing values are filled from there. This method is useful for small scripts when you need to simulate a request from a URL. Do not use this method for unit testing; there is a full-featured client object (`Client`) that supports multipart requests, cookies, and more. This accepts the same options as the [`EnvironBuilder`](https://werkzeug.palletsprojects.com/en/2.3.x/test/#werkzeug.test.EnvironBuilder "(in Werkzeug v2.3.x)"). Changelog Changed in version 0.5: This method now accepts the same arguments as [`EnvironBuilder`](https://werkzeug.palletsprojects.com/en/2.3.x/test/#werkzeug.test.EnvironBuilder "(in Werkzeug v2.3.x)"). Because of this the `environ` parameter is now called `environ_overrides`. Returns: request object Parameters: * **args** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Request](https://werkzeug.palletsprojects.com/en/2.3.x/wrappers/#werkzeug.wrappers.Request "(in Werkzeug v2.3.x)") `property full_path: str` Requested path, including the query string. `get_data(cache=True, as_text=False, parse_form_data=False)` This reads the buffered incoming data from the client into one bytes object. By default this is cached, but that behavior can be changed by setting `cache` to `False`. Usually it’s a bad idea to call this method without checking the content length first, as a client could send dozens of megabytes or more and cause memory problems on the server. Note that if the form data was already parsed, this method will not return anything, as form data parsing does not cache the data the way this method does. To implicitly invoke the form data parsing function, set `parse_form_data` to `True`. When this is done, the return value of this method will be an empty string if the form parser handles the data. This is generally not necessary because, if the whole data is cached (which is the default), the form parser will use the cached data to parse the form data. In any case, check the content length before calling this method to avoid exhausting server memory. If `as_text` is set to `True`, the return value will be a decoded string. Changelog New in version 0.9. Parameters: * **cache** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – * **as_text** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – * **parse_form_data** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Return type: [bytes](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.11)") | [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") `get_json(force=False, silent=False, cache=True)` Parse [`data`](#flask.Request.data "flask.Request.data") as JSON. If the mimetype does not indicate JSON (*application/json*, see [`is_json`](#flask.Request.is_json "flask.Request.is_json")), or parsing fails, [`on_json_loading_failed()`](#flask.Request.on_json_loading_failed "flask.Request.on_json_loading_failed") is called and its return value is used as the return value.
By default this raises a 415 Unsupported Media Type response. Parameters: * **force** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Ignore the mimetype and always try to parse JSON. * **silent** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Silence mimetype and parsing errors, and return `None` instead. * **cache** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Store the parsed JSON to return for subsequent calls. Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") | None Changed in version 2.3: Raise a 415 error instead of 400. Changelog Changed in version 2.1: Raise a 400 error if the content type is incorrect. `headers` The headers received with the request. `property host: str` The host name the request was made to, including the port if it’s non-standard. Validated with [`trusted_hosts`](#flask.Request.trusted_hosts "flask.Request.trusted_hosts"). `property host_url: str` The request URL scheme and host only. `property if_match: ETags` An object containing all the etags in the `If-Match` header. Return type: [`ETags`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.ETags "(in Werkzeug v2.3.x)") `property if_modified_since: datetime | None` The parsed `If-Modified-Since` header as a datetime object. Changelog Changed in version 2.0: The datetime object is timezone-aware. `property if_none_match: ETags` An object containing all the etags in the `If-None-Match` header. Return type: [`ETags`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.ETags "(in Werkzeug v2.3.x)") `property if_range: IfRange` The parsed `If-Range` header. Changelog Changed in version 2.0: `IfRange.date` is timezone-aware. New in version 0.7. `property if_unmodified_since: datetime | None` The parsed `If-Unmodified-Since` header as a datetime object. Changelog Changed in version 2.0: The datetime object is timezone-aware. `input_stream` The raw WSGI input stream, without any safety checks. This is dangerous to use. It does not guard against infinite streams or reading past [`content_length`](#flask.Request.content_length "flask.Request.content_length") or [`max_content_length`](#flask.Request.max_content_length "flask.Request.max_content_length"). Use [`stream`](#flask.Request.stream "flask.Request.stream") instead. `property is_json: bool` Check if the mimetype indicates JSON data, either *application/json* or *application/*+json*. `is_multiprocess` Boolean that is `True` if the application is served by a WSGI server that spawns multiple processes. `is_multithread` Boolean that is `True` if the application is served by a multithreaded WSGI server. `is_run_once` Boolean that is `True` if the application will be executed only once in a process lifetime. This is the case for CGI, for example, but it is not guaranteed that the execution only happens one time. `property is_secure: bool` `True` if the request was made with a secure protocol (HTTPS or WSS). `property json: Any | None` The parsed JSON data if [`mimetype`](#flask.Request.mimetype "flask.Request.mimetype") indicates JSON (*application/json*, see [`is_json`](#flask.Request.is_json "flask.Request.is_json")). Calls [`get_json()`](#flask.Request.get_json "flask.Request.get_json") with default arguments. If the request content type is not `application/json`, this will raise a 415 Unsupported Media Type error.
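A short sketch of `get_json()` with the `silent` flag; the route name is hypothetical:

```
from flask import Flask, request

app = Flask(__name__)

@app.post("/api/echo")
def echo():
    # silent=True suppresses mimetype and parsing errors and returns None
    # instead of raising a 415 Unsupported Media Type error.
    payload = request.get_json(silent=True)
    if payload is None:
        return {"error": "expected application/json"}, 415
    return {"received": payload}
```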
Changed in version 2.3: Raise a 415 error instead of 400. Changelog Changed in version 2.1: Raise a 400 error if the content type is incorrect. `list_storage_class` alias of [`ImmutableList`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.ImmutableList "(in Werkzeug v2.3.x)") `make_form_data_parser()` Creates the form data parser. Instantiates the [`form_data_parser_class`](#flask.Request.form_data_parser_class "flask.Request.form_data_parser_class") with some parameters. Changelog New in version 0.8. Return type: [FormDataParser](https://werkzeug.palletsprojects.com/en/2.3.x/http/#werkzeug.formparser.FormDataParser "(in Werkzeug v2.3.x)") `property max_content_length: int | None` Read-only view of the `MAX_CONTENT_LENGTH` config key. `max_form_memory_size: int | None = None` The maximum form field size. This is forwarded to the form data parsing function (`parse_form_data()`). When this is set, and the [`form`](#flask.Request.form "flask.Request.form") or [`files`](#flask.Request.files "flask.Request.files") attribute is accessed, and the post data held in memory is longer than the specified value, a [`RequestEntityTooLarge`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.RequestEntityTooLarge "(in Werkzeug v2.3.x)") exception is raised. Changelog New in version 0.5. `max_form_parts = 1000` The maximum number of multipart parts to parse, passed to [`form_data_parser_class`](#flask.Request.form_data_parser_class "flask.Request.form_data_parser_class"). Parsing form data with more than this many parts will raise `RequestEntityTooLarge`. Changelog New in version 2.2.3. `max_forwards` The Max-Forwards request-header field provides a mechanism with the TRACE and OPTIONS methods to limit the number of proxies or gateways that can forward the request to the next inbound server. `method` The method the request was made with, such as `GET`. `property mimetype: str` Like [`content_type`](#flask.Request.content_type "flask.Request.content_type"), but without parameters (e.g., without the charset) and always lowercase. For example, if the content type is `text/HTML; charset=utf-8` the mimetype would be `'text/html'`. `property mimetype_params: dict[str, str]` The mimetype parameters as dict. For example, if the content type is `text/html; charset=utf-8` the params would be `{'charset': 'utf-8'}`. `on_json_loading_failed(e)` Called if [`get_json()`](#flask.Request.get_json "flask.Request.get_json") fails and isn’t silenced. If this method returns a value, it is used as the return value for [`get_json()`](#flask.Request.get_json "flask.Request.get_json"). The default implementation raises [`BadRequest`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.BadRequest "(in Werkzeug v2.3.x)"). Parameters: **e** ([ValueError](https://docs.python.org/3/library/exceptions.html#ValueError "(in Python v3.11)") *|* *None*) – If parsing failed, this is the exception. It will be `None` if the content type wasn’t `application/json`. Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") Changed in version 2.3: Raise a 415 error instead of 400. `origin` The host that the request originated from. Set `access_control_allow_origin` on the response to indicate which origins are allowed.
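As a sketch of customizing `on_json_loading_failed()` via a `Request` subclass (the lenient fallback shown is an assumption of this example, not a Flask default):

```
from flask import Flask, Request

class LenientRequest(Request):
    def on_json_loading_failed(self, e):
        # Returning a value here makes get_json() return it instead of
        # raising; `e` is None if the mimetype wasn't JSON.
        return {}

app = Flask(__name__)
app.request_class = LenientRequest
```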
`parameter_storage_class` alias of [`ImmutableMultiDict`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.ImmutableMultiDict "(in Werkzeug v2.3.x)") `path` The path part of the URL after [`root_path`](#flask.Request.root_path "flask.Request.root_path"). This is the path used for routing within the application. `property pragma: HeaderSet` The Pragma general-header field is used to include implementation-specific directives that might apply to any recipient along the request/response chain. All pragma directives specify optional behavior from the viewpoint of the protocol; however, some systems MAY require that behavior be consistent with the directives. `query_string` The part of the URL after the “?”. This is the raw value; use [`args`](#flask.Request.args "flask.Request.args") for the parsed values. `property range: Range | None` The parsed `Range` header. Changelog New in version 0.7. Return type: [`Range`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.Range "(in Werkzeug v2.3.x)") `referrer` The Referer[sic] request-header field allows the client to specify, for the server’s benefit, the address (URI) of the resource from which the Request-URI was obtained (the “referrer”, although the header field is misspelled). `remote_addr` The address of the client sending the request. `remote_user` If the server supports user authentication, and the script is protected, this attribute contains the username the user has authenticated as. `root_path` The prefix that the application is mounted under, without a trailing slash. [`path`](#flask.Request.path "flask.Request.path") comes after this. `property root_url: str` The request URL scheme, host, and root path. This is the root that the application is accessed from. `routing_exception: Exception | None = None` If matching the URL failed, this is the exception that will be raised / was raised as part of the request handling. This is usually a [`NotFound`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.NotFound "(in Werkzeug v2.3.x)") exception or something similar. `scheme` The URL scheme of the protocol the request used, such as `https` or `wss`. `property script_root: str` Alias for `self.root_path`. `environ["SCRIPT_NAME"]` without a trailing slash. `server` The address of the server. `(host, port)`, `(path, None)` for Unix sockets, or `None` if not known. `shallow: bool` Set when creating the request object. If `True`, reading from the request body will cause a `RuntimeError`. Useful to prevent modifying the stream from middleware. `property stream: IO[bytes]` The WSGI input stream, with safety checks. This stream can only be consumed once. Use [`get_data()`](#flask.Request.get_data "flask.Request.get_data") to get the full data as bytes or text. The [`data`](#flask.Request.data "flask.Request.data") attribute will contain the full bytes only if they do not represent form data. The [`form`](#flask.Request.form "flask.Request.form") attribute will contain the parsed form data in that case. Unlike [`input_stream`](#flask.Request.input_stream "flask.Request.input_stream"), this stream guards against infinite streams or reading past [`content_length`](#flask.Request.content_length "flask.Request.content_length") or [`max_content_length`](#flask.Request.max_content_length "flask.Request.max_content_length"). If `max_content_length` is set, it can be enforced on streams if `wsgi.input_terminated` is set.
Otherwise, an empty stream is returned. If the limit is reached before the underlying stream is exhausted (such as a file that is too large, or an infinite stream), the remaining contents of the stream cannot be read safely. Depending on how the server handles this, clients may show a “connection reset” failure instead of seeing the 413 response. Changed in version 2.3: Check `max_content_length` preemptively and while reading. Changelog Changed in version 0.9: The stream is always set (but may be consumed) even if form parsing was accessed first. `trusted_hosts: list[str] | None = None` Valid host names when handling requests. By default all hosts are trusted, which means that whatever the client says the host is will be accepted. Because `Host` and `X-Forwarded-Host` headers can be set to any value by a malicious client, it is recommended to either set this property or implement similar validation in the proxy (if the application is being run behind one). Changelog New in version 0.9. `property url: str` The full request URL with the scheme, host, root path, path, and query string. `property url_charset: str` The charset to use when decoding percent-encoded bytes in [`args`](#flask.Request.args "flask.Request.args"). Defaults to the value of [`charset`](#flask.Request.charset "flask.Request.charset"), which defaults to UTF-8. Deprecated since version 2.3: Will be removed in Werkzeug 3.0. Percent-encoded bytes must always be UTF-8. Changelog New in version 0.6. `property url_root: str` Alias for [`root_url`](#flask.Request.root_url "flask.Request.root_url"). The URL with scheme, host, and root path. For example, `https://example.com/app/`. `url_rule: Rule | None = None` The internal URL rule that matched the request. This can be useful to inspect which methods are allowed for the URL from a before/after handler (`request.url_rule.methods`) etc. Though if the request’s method was invalid for the URL rule, the valid list is available in `routing_exception.valid_methods` instead (an attribute of the Werkzeug exception [`MethodNotAllowed`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.MethodNotAllowed "(in Werkzeug v2.3.x)")) because the request was never internally bound. Changelog New in version 0.6. `property user_agent: UserAgent` The user agent. Use `user_agent.string` to get the header value. Set [`user_agent_class`](#flask.Request.user_agent_class "flask.Request.user_agent_class") to a subclass of [`UserAgent`](https://werkzeug.palletsprojects.com/en/2.3.x/utils/#werkzeug.user_agent.UserAgent "(in Werkzeug v2.3.x)") to provide parsing for the other properties or other extended data. Changelog Changed in version 2.1: The built-in parser was removed. Set `user_agent_class` to a `UserAgent` subclass to parse data from the string. `user_agent_class` alias of [`UserAgent`](https://werkzeug.palletsprojects.com/en/2.3.x/utils/#werkzeug.user_agent.UserAgent "(in Werkzeug v2.3.x)") `property values: CombinedMultiDict[str, str]` A [`werkzeug.datastructures.CombinedMultiDict`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.CombinedMultiDict "(in Werkzeug v2.3.x)") that combines [`args`](#flask.Request.args "flask.Request.args") and [`form`](#flask.Request.form "flask.Request.form"). For GET requests, only `args` are present, not `form`. Changelog Changed in version 2.0: For GET requests, only `args` are present, not `form`. `view_args: dict[str, t.Any] | None = None` A dict of view arguments that matched the request. 
If an exception happened when matching, this will be `None`. `property want_form_data_parsed: bool` `True` if the request method carries content. By default this is true if a `Content-Type` is sent. Changelog New in version 0.8. `flask.request` To access incoming request data, you can use the global `request` object. Flask parses incoming request data for you and gives you access to it through that global object. Internally Flask makes sure that you always get the correct data for the active thread if you are in a multithreaded environment. This is a proxy. See [Notes On Proxies](../reqcontext/index#notes-on-proxies) for more information. The request object is an instance of a [`Request`](#flask.Request "flask.Request"). Response Objects ---------------- `class flask.Response(response=None, status=None, headers=None, mimetype=None, content_type=None, direct_passthrough=False)` The response object that is used by default in Flask. Works like the response object from Werkzeug but is set to have an HTML mimetype by default. Quite often you don’t have to create this object yourself because [`make_response()`](#flask.Flask.make_response "flask.Flask.make_response") will take care of that for you. If you want to replace the response object used you can subclass this and set [`response_class`](#flask.Flask.response_class "flask.Flask.response_class") to your subclass. Changelog Changed in version 1.0: JSON support is added to the response, like the request. This is useful when testing to get the test client response data as JSON. Changed in version 1.0: Added [`max_cookie_size`](#flask.Response.max_cookie_size "flask.Response.max_cookie_size"). Parameters: * **response** ([Iterable](https://docs.python.org/3/library/typing.html#typing.Iterable "(in Python v3.11)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")*]* *|* [Iterable](https://docs.python.org/3/library/typing.html#typing.Iterable "(in Python v3.11)")*[*[bytes](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.11)")*]*) – * **status** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)") *|* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *HTTPStatus* *|* *None*) – * **headers** ([Headers](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.Headers "(in Werkzeug v2.3.x)")) – * **mimetype** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – * **content_type** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – * **direct_passthrough** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – `accept_ranges` The `Accept-Ranges` header. Even though the name would indicate that multiple values are supported, it must be one string token only. The values `'bytes'` and `'none'` are common. Changelog New in version 0.7. `property access_control_allow_credentials: bool` Whether credentials can be shared by the browser to JavaScript code. As part of the preflight request it indicates whether credentials can be used on the cross origin request. `access_control_allow_headers` Which headers can be sent with the cross origin request. `access_control_allow_methods` Which methods can be used for the cross origin request. `access_control_allow_origin` The origin or ‘*’ for any origin that may make cross origin requests. 
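For illustration, a minimal sketch of constructing a `Response` directly instead of relying on `make_response()`; the route and payload are hypothetical:

```
from flask import Flask, Response

app = Flask(__name__)

@app.get("/report.csv")
def report():
    # Override the default text/html mimetype for a CSV payload.
    return Response("id,name\r\n1,example\r\n", mimetype="text/csv")
```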
`access_control_expose_headers` Which headers can be shared by the browser to JavaScript code. `access_control_max_age` The maximum age in seconds the access control settings can be cached for. `add_etag(overwrite=False, weak=False)` Add an etag for the current response if there is none yet. Changelog Changed in version 2.0: SHA-1 is used to generate the value. MD5 may not be available in some environments. Parameters: * **overwrite** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – * **weak** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Return type: None `age` The Age response-header field conveys the sender’s estimate of the amount of time since the response (or its revalidation) was generated at the origin server. Age values are non-negative decimal integers, representing time in seconds. `property allow: HeaderSet` The Allow entity-header field lists the set of methods supported by the resource identified by the Request-URI. The purpose of this field is strictly to inform the recipient of valid methods associated with the resource. An Allow header field MUST be present in a 405 (Method Not Allowed) response. `autocorrect_location_header = False` If a redirect `Location` header is a relative URL, make it an absolute URL, including scheme and domain. Changelog Changed in version 2.1: This is disabled by default, so responses will send relative redirects. New in version 0.8. `automatically_set_content_length = True` Should this response object automatically set the content-length header if possible? This is true by default. Changelog New in version 0.8. `property cache_control: ResponseCacheControl` The Cache-Control general-header field is used to specify directives that MUST be obeyed by all caching mechanisms along the request/response chain. `calculate_content_length()` Returns the content length if available or `None` otherwise. Return type: [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)") | None `call_on_close(func)` Adds a function to the internal list of functions that should be called as part of closing down the response. Since 0.7 this function also returns the function that was passed so that this can be used as a decorator. Changelog New in version 0.6. Parameters: **func** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")*[**[**]**,* [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")*]*) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[], [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")] `property charset: str` The charset used to encode body and cookie data. Defaults to UTF-8. Deprecated since version 2.3: Will be removed in Werkzeug 3.0. Response data must always be UTF-8. `close()` Close the wrapped response if possible. You can also use the object in a with statement which will automatically close it. Changelog New in version 0.9: Can now be used in a with statement. Return type: None `content_encoding` The Content-Encoding entity-header field is used as a modifier to the media-type. When present, its value indicates what additional content codings have been applied to the entity-body, and thus what decoding mechanisms must be applied in order to obtain the media-type referenced by the Content-Type header field. 
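A minimal sketch of `call_on_close()`, which, as noted above, returns the passed function since 0.7 and can therefore be used as a decorator (the cleanup action is hypothetical):

```
from flask import Flask, Response

app = Flask(__name__)

@app.get("/job")
def job():
    resp = Response("started")

    @resp.call_on_close
    def release_resources():
        # Runs when the response is closed, after the body has been sent.
        print("response closed, cleaning up")

    return resp
```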
`property content_language: HeaderSet` The Content-Language entity-header field describes the natural language(s) of the intended audience for the enclosed entity. Note that this might not be equivalent to all the languages used within the entity-body. `content_length` The Content-Length entity-header field indicates the size of the entity-body, in decimal number of OCTETs, sent to the recipient or, in the case of the HEAD method, the size of the entity-body that would have been sent had the request been a GET. `content_location` The Content-Location entity-header field MAY be used to supply the resource location for the entity enclosed in the message when that entity is accessible from a location separate from the requested resource’s URI. `content_md5` The Content-MD5 entity-header field, as defined in RFC 1864, is an MD5 digest of the entity-body for the purpose of providing an end-to-end message integrity check (MIC) of the entity-body. (Note: a MIC is good for detecting accidental modification of the entity-body in transit, but is not proof against malicious attacks.) `property content_range: ContentRange` The `Content-Range` header as a [`ContentRange`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.ContentRange "(in Werkzeug v2.3.x)") object. Available even if the header is not set. Changelog New in version 0.7. `property content_security_policy: ContentSecurityPolicy` The `Content-Security-Policy` header as a `ContentSecurityPolicy` object. Available even if the header is not set. The Content-Security-Policy header adds an additional layer of security to help detect and mitigate certain types of attacks. `property content_security_policy_report_only: ContentSecurityPolicy` The `Content-Security-Policy-Report-Only` header as a `ContentSecurityPolicy` object. Available even if the header is not set. The Content-Security-Policy-Report-Only header adds a CSP policy that is not enforced but is reported, thereby helping to detect certain types of attacks. `content_type` The Content-Type entity-header field indicates the media type of the entity-body sent to the recipient or, in the case of the HEAD method, the media type that would have been sent had the request been a GET. `cross_origin_embedder_policy` Prevents a document from loading any cross-origin resources that do not explicitly grant the document permission. Values must be a member of the `werkzeug.http.COEP` enum. `cross_origin_opener_policy` Allows control over sharing of browsing context group with cross-origin documents. Values must be a member of the `werkzeug.http.COOP` enum. `property data: bytes | str` A descriptor that calls [`get_data()`](#flask.Response.get_data "flask.Response.get_data") and [`set_data()`](#flask.Response.set_data "flask.Response.set_data"). `date` The Date general-header field represents the date and time at which the message was originated, having the same semantics as orig-date in RFC 822. Changelog Changed in version 2.0: The datetime object is timezone-aware. `default_mimetype: str | None = 'text/html'` The default mimetype if none is provided. `default_status = 200` The default status code if none is provided. `delete_cookie(key, path='/', domain=None, secure=False, httponly=False, samesite=None)` Delete a cookie. Fails silently if key doesn’t exist. Parameters: * **key** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – the key (name) of the cookie to be deleted.
* **path** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – if the cookie that should be deleted was limited to a path, the path has to be defined here. * **domain** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – if the cookie that should be deleted was limited to a domain, that domain has to be defined here. * **secure** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – If `True`, the cookie will only be available via HTTPS. * **httponly** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Disallow JavaScript access to the cookie. * **samesite** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – Limit the scope of the cookie to only be attached to requests that are “same-site”. Return type: None `direct_passthrough` Pass the response body directly through as the WSGI iterable. This can be used when the body is a binary file or other iterator of bytes, to skip some unnecessary checks. Use [`send_file()`](https://werkzeug.palletsprojects.com/en/2.3.x/utils/#werkzeug.utils.send_file "(in Werkzeug v2.3.x)") instead of setting this manually. `expires` The Expires entity-header field gives the date/time after which the response is considered stale. A stale cache entry may not normally be returned by a cache. Changelog Changed in version 2.0: The datetime object is timezone-aware. `classmethod force_type(response, environ=None)` Enforce that the WSGI response is a response object of the current type. Werkzeug will use the [`Response`](#flask.Response "flask.Response") internally in many situations like the exceptions. If you call `get_response()` on an exception you will get back a regular [`Response`](#flask.Response "flask.Response") object, even if you are using a custom subclass. This method can enforce a given response type, and it will also convert arbitrary WSGI callables into response objects if an environ is provided:

```
# convert a Werkzeug response object into an instance of the
# MyResponseClass subclass.
response = MyResponseClass.force_type(response)

# convert any WSGI application into a response object
response = MyResponseClass.force_type(response, environ)
```

This is especially useful if you want to post-process responses in the main dispatcher and use functionality provided by your subclass. Keep in mind that this will modify response objects in place if possible! Parameters: * **response** ([Response](#flask.Response "flask.Response")) – a response object or WSGI application. * **environ** (*WSGIEnvironment* *|* *None*) – a WSGI environment object. Returns: a response object. Return type: [Response](#flask.Response "flask.Response") `freeze()` Make the response object ready to be pickled. Does the following: * Buffer the response into a list, ignoring `implicit_sequence_conversion` and [`direct_passthrough`](#flask.Response.direct_passthrough "flask.Response.direct_passthrough"). * Set the `Content-Length` header. * Generate an `ETag` header if one is not already set. Changelog Changed in version 2.1: Removed the `no_etag` parameter. Changed in version 2.0: An `ETag` header is always added. Changed in version 0.6: The `Content-Length` header is set. Return type: None `classmethod from_app(app, environ, buffered=False)` Create a new response object from an application output. This works best if you pass it an application that returns a generator all the time.
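For instance, here is a minimal sketch of capturing a response from a plain WSGI callable (the `hello_app` function below is hypothetical, defined only for illustration):

```
from flask import Response
from werkzeug.test import EnvironBuilder

def hello_app(environ, start_response):
    # A tiny WSGI application, defined here only for the example.
    start_response("200 OK", [("Content-Type", "text/plain")])
    yield b"hello"

# Build a WSGI environ for a GET request to "/" and run the app against it.
environ = EnvironBuilder(path="/").get_environ()
resp = Response.from_app(hello_app, environ)
assert resp.get_data() == b"hello"
```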
Sometimes applications may use the `write()` callable returned by the `start_response` function. This tries to resolve such edge cases automatically. But if you don’t get the expected output, you should set `buffered` to `True`, which enforces buffering. Parameters: * **app** (*WSGIApplication*) – the WSGI application to execute. * **environ** (*WSGIEnvironment*) – the WSGI environment to execute against. * **buffered** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – set to `True` to enforce buffering. Returns: a response object. Return type: [Response](#flask.Response "flask.Response") `get_app_iter(environ)` Returns the application iterator for the given environ. Depending on the request method and the current status code, the return value might be an empty response rather than the response’s own body. If the request method is `HEAD` or the status code is in a range where the HTTP specification requires an empty response, an empty iterable is returned. Changelog New in version 0.6. Parameters: **environ** (*WSGIEnvironment*) – the WSGI environment of the request. Returns: a response iterable. Return type: t.Iterable[[bytes](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.11)")] `get_data(as_text=False)` The string representation of the response body. Whenever you call this method, the response iterable is encoded and flattened. This can lead to unwanted behavior if you stream big data. This behavior can be disabled by setting [`implicit_sequence_conversion`](#flask.Response.implicit_sequence_conversion "flask.Response.implicit_sequence_conversion") to `False`. If `as_text` is set to `True` the return value will be a decoded string. Changelog New in version 0.9. Parameters: **as_text** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Return type: [bytes](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.11)") | [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") `get_etag()` Return a tuple in the form `(etag, is_weak)`. If there is no ETag the return value is `(None, None)`. Return type: [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.11)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)"), [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")] | [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.11)")[None, None] `get_json(force=False, silent=False)` Parse [`data`](#flask.Response.data "flask.Response.data") as JSON. Useful during testing. If the mimetype does not indicate JSON (*application/json*, see [`is_json`](#flask.Response.is_json "flask.Response.is_json")), this returns `None`. Unlike [`Request.get_json()`](#flask.Request.get_json "flask.Request.get_json"), the result is not cached. Parameters: * **force** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Ignore the mimetype and always try to parse JSON. * **silent** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Silence parsing errors and return `None` instead. Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") | None `get_wsgi_headers(environ)` This is automatically called right before the response is started and returns headers modified for the given environment. It returns a copy of the headers from the response with some modifications applied if necessary.
For example, the location header (if present) is joined with the root URL of the environment. Also the content length is automatically set to zero here for certain status codes. Changelog Changed in version 0.6: Previously that function was called `fix_headers` and modified the response object in place. Also since 0.6, IRIs in location and content-location headers are handled properly. Also starting with 0.6, Werkzeug will attempt to set the content length if it is able to figure it out on its own. This is the case if all the strings in the response iterable are already encoded and the iterable is buffered. Parameters: **environ** (*WSGIEnvironment*) – the WSGI environment of the request. Returns: returns a new [`Headers`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.Headers "(in Werkzeug v2.3.x)") object. Return type: Headers `get_wsgi_response(environ)` Returns the final WSGI response as a tuple. The first item in the tuple is the application iterator, the second the status and the third the list of headers. The response returned is created specially for the given environment. For example if the request method in the WSGI environment is `'HEAD'` the response will be empty and only the headers and status code will be present. Changelog New in version 0.6. Parameters: **environ** (*WSGIEnvironment*) – the WSGI environment of the request. Returns: an `(app_iter, status, headers)` tuple. Return type: [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.11)")[t.Iterable[[bytes](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.11)")], [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)"), [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.11)")[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.11)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)"), [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")]]] `implicit_sequence_conversion = True` If set to `False`, accessing properties on the response object will not try to consume the response iterator and convert it into a list. Changelog New in version 0.6.2: That attribute was previously called `implicit_seqence_conversion`. (Notice the typo). If you did use this feature, you have to adapt your code to the name change. `property is_json: bool` Check if the mimetype indicates JSON data, either *application/json* or *application/*+json*. `property is_sequence: bool` If the iterator is buffered, this property will be `True`. A response object will consider an iterator to be buffered if the response attribute is a list or tuple. Changelog New in version 0.6. `property is_streamed: bool` If the response is streamed (the response is not an iterable with length information) this property is `True`. In this case streamed means that there is no information about the number of iterations. This is usually `True` if a generator is passed to the response object. This is useful for checking before applying some sort of post filtering that should not take place for streamed responses. `iter_encoded()` Iterate over the response, encoded with the encoding of the response. If the response object is invoked as a WSGI application the return value of this method is used as the application iterator unless [`direct_passthrough`](#flask.Response.direct_passthrough "flask.Response.direct_passthrough") was activated.
Return type: [Iterator](https://docs.python.org/3/library/typing.html#typing.Iterator "(in Python v3.11)")[[bytes](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.11)")] `property json: Any | None` The parsed JSON data if [`mimetype`](#flask.Response.mimetype "flask.Response.mimetype") indicates JSON (*application/json*, see [`is_json`](#flask.Response.is_json "flask.Response.is_json")). Calls [`get_json()`](#flask.Response.get_json "flask.Response.get_json") with default arguments. `last_modified` The Last-Modified entity-header field indicates the date and time at which the origin server believes the variant was last modified. Changelog Changed in version 2.0: The datetime object is timezone-aware. `location` The Location response-header field is used to redirect the recipient to a location other than the Request-URI for completion of the request or identification of a new resource. `make_conditional(request_or_environ, accept_ranges=False, complete_length=None)` Make the response conditional to the request. This method works best if an etag was defined for the response already. The `add_etag` method can be used to do that. If called without an etag, just the date header is set. This does nothing if the request method in the request or environ is anything but GET or HEAD. For optimal performance when handling range requests, it’s recommended that your response data object implements `seekable`, `seek` and `tell` methods as described by [`io.IOBase`](https://docs.python.org/3/library/io.html#io.IOBase "(in Python v3.11)"). Objects returned by `wrap_file()` automatically implement those methods. It does not remove the body of the response because that’s something the `__call__()` function does for us automatically. Returns self so that you can do `return resp.make_conditional(req)` but modifies the object in-place. Parameters: * **request_or_environ** (*WSGIEnvironment* *|* [Request](#flask.Request "flask.Request")) – a request object or WSGI environment to be used to make the response conditional against. * **accept_ranges** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") *|* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – This parameter dictates the value of the `Accept-Ranges` header. If `False` (default), the header is not set. If `True`, it will be set to `"bytes"`. If it’s a string, it will use this value. * **complete_length** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)") *|* *None*) – Will be used only in valid Range Requests. It will set the `Content-Range` complete length value and compute the `Content-Length` real value. This parameter is mandatory for successful Range Requests completion. Raises: [`RequestedRangeNotSatisfiable`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.RequestedRangeNotSatisfiable "(in Werkzeug v2.3.x)") if the `Range` header could not be parsed or satisfied. Return type: [Response](#flask.Response "flask.Response") Changelog Changed in version 2.0: Range processing is skipped if length is 0 instead of raising a 416 Range Not Satisfiable error. `make_sequence()` Converts the response iterator into a list. By default this happens automatically if required. If `implicit_sequence_conversion` is disabled, this method is not automatically called and some properties might raise exceptions. This also encodes all the items. Changelog New in version 0.6.
Return type: None `property max_cookie_size: int` Read-only view of the [`MAX_COOKIE_SIZE`](../config/index#MAX_COOKIE_SIZE "MAX_COOKIE_SIZE") config key. See [`max_cookie_size`](https://werkzeug.palletsprojects.com/en/2.3.x/wrappers/#werkzeug.wrappers.Response.max_cookie_size "(in Werkzeug v2.3.x)") in Werkzeug’s docs. `property mimetype: str | None` The mimetype (content type without charset etc.). `property mimetype_params: dict[str, str]` The mimetype parameters as a dict. For example, if the content type is `text/html; charset=utf-8` the params would be `{'charset': 'utf-8'}`. Changelog New in version 0.5. `response: t.Iterable[str] | t.Iterable[bytes]` The response body to send as the WSGI iterable. A list of strings or bytes represents a fixed-length response; any other iterable is a streaming response. Strings are encoded to bytes as UTF-8. Do not set this to a plain string or bytes; that would make sending the response very inefficient, as it would iterate one byte at a time. `property retry_after: datetime | None` The Retry-After response-header field can be used with a 503 (Service Unavailable) response to indicate how long the service is expected to be unavailable to the requesting client. Time in seconds until expiration or date. Changelog Changed in version 2.0: The datetime object is timezone-aware. `set_cookie(key, value='', max_age=None, expires=None, path='/', domain=None, secure=False, httponly=False, samesite=None)` Sets a cookie. A warning is raised if the size of the cookie header exceeds [`max_cookie_size`](#flask.Response.max_cookie_size "flask.Response.max_cookie_size"), but the header will still be set. Parameters: * **key** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – the key (name) of the cookie to be set. * **value** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – the value of the cookie. * **max_age** ([timedelta](https://docs.python.org/3/library/datetime.html#datetime.timedelta "(in Python v3.11)") *|* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)") *|* *None*) – should be a number of seconds, or `None` (default) if the cookie should last only as long as the client’s browser session. * **expires** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime "(in Python v3.11)") *|* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)") *|* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.11)") *|* *None*) – should be a `datetime` object or UNIX timestamp. * **path** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – limits the cookie to a given path; by default it will span the whole domain. * **domain** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – if you want to set a cross-domain cookie. For example, `domain=".example.com"` will set a cookie that is readable by the domain `www.example.com`, `foo.example.com` etc. Otherwise, a cookie will only be readable by the domain that set it. * **secure** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – If `True`, the cookie will only be available via HTTPS. * **httponly** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Disallow JavaScript access to the cookie.
* **samesite** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – Limit the scope of the cookie to only be attached to requests that are “same-site”. Return type: None `set_data(value)` Sets a new string as the response body. The value must be a string or bytes. If a string is set it’s encoded to the charset of the response (utf-8 by default). Changelog New in version 0.9. Parameters: **value** ([bytes](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.11)") *|* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Return type: None `set_etag(etag, weak=False)` Set the etag, and override the old one if there was one. Parameters: * **etag** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **weak** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Return type: None `property status: str` The HTTP status code as a string. `property status_code: int` The HTTP status code as a number. `property stream: ResponseStream` The response iterable as a write-only stream. `property vary: HeaderSet` The Vary field value indicates the set of request-header fields that fully determines, while the response is fresh, whether a cache is permitted to use the response to reply to a subsequent request without revalidation. `property www_authenticate: WWWAuthenticate` The `WWW-Authenticate` header parsed into a `WWWAuthenticate` object. Modifying the object will modify the header value. This header is not set by default. To set this header, assign an instance of `WWWAuthenticate` to this attribute.

```
response.www_authenticate = WWWAuthenticate(
    "basic", {"realm": "Authentication Required"}
)
```

Multiple values for this header can be sent to give the client multiple options. Assign a list to set multiple headers. However, modifying the items in the list will not automatically update the header values, and accessing this attribute will only ever return the first value. To unset this header, assign `None` or use `del`. Changed in version 2.3: This attribute can be assigned to in order to set the header. A list can be assigned to set multiple header values. Use `del` to unset the header. Changed in version 2.3: `WWWAuthenticate` is no longer a `dict`. The `token` attribute was added for auth challenges that use a token instead of parameters.

Sessions
--------

If you have set [`Flask.secret_key`](#flask.Flask.secret_key "flask.Flask.secret_key") (or configured it from [`SECRET_KEY`](../config/index#SECRET_KEY "SECRET_KEY")) you can use sessions in Flask applications. A session makes it possible to remember information from one request to another. The way Flask does this is by using a signed cookie. The user can look at the session contents, but can’t modify them unless they know the secret key, so make sure to set that to something complex and unguessable. To access the current session you can use the [`session`](#flask.session "flask.session") object: `class flask.session` The session object works pretty much like an ordinary dict, with the difference that it keeps track of modifications. This is a proxy. See [Notes On Proxies](../reqcontext/index#notes-on-proxies) for more information. The following attributes are interesting: `new` `True` if the session is new, `False` otherwise. `modified` `True` if the session object detected a modification.
Be advised that modifications on mutable structures are not picked up automatically; in that situation you have to explicitly set the attribute to `True` yourself. Here is an example:

```
# this change is not picked up because a mutable object (here
# a list) is changed.
session['objects'].append(42)
# so mark it as modified yourself
session.modified = True
```

`permanent` If set to `True` the session lives for [`permanent_session_lifetime`](#flask.Flask.permanent_session_lifetime "flask.Flask.permanent_session_lifetime") seconds. The default is 31 days. If set to `False` (which is the default) the session will be deleted when the user closes the browser.

Session Interface
-----------------

Changelog New in version 0.8. The session interface provides a simple way to replace the session implementation that Flask is using. `class flask.sessions.SessionInterface` The basic interface you have to implement in order to replace the default session interface which uses werkzeug’s securecookie implementation. The only methods you have to implement are [`open_session()`](#flask.sessions.SessionInterface.open_session "flask.sessions.SessionInterface.open_session") and [`save_session()`](#flask.sessions.SessionInterface.save_session "flask.sessions.SessionInterface.save_session"), the others have useful defaults which you don’t need to change. The session object returned by the [`open_session()`](#flask.sessions.SessionInterface.open_session "flask.sessions.SessionInterface.open_session") method has to provide a dictionary-like interface plus the properties and methods from the [`SessionMixin`](#flask.sessions.SessionMixin "flask.sessions.SessionMixin"). We recommend just subclassing a dict and adding that mixin:

```
class Session(dict, SessionMixin):
    pass
```

If [`open_session()`](#flask.sessions.SessionInterface.open_session "flask.sessions.SessionInterface.open_session") returns `None` Flask will call into [`make_null_session()`](#flask.sessions.SessionInterface.make_null_session "flask.sessions.SessionInterface.make_null_session") to create a session that acts as a replacement if the session support cannot work because some requirement is not fulfilled. The default [`NullSession`](#flask.sessions.NullSession "flask.sessions.NullSession") class that is created will complain that the secret key was not set. To replace the session interface on an application all you have to do is assign [`flask.Flask.session_interface`](#flask.Flask.session_interface "flask.Flask.session_interface"):

```
app = Flask(__name__)
app.session_interface = MySessionInterface()
```

Multiple requests with the same session may be sent and handled concurrently. When implementing a new session interface, consider whether reads or writes to the backing store must be synchronized. There is no guarantee on the order in which the session for each request is opened or saved; it will occur in the order that requests begin and end processing. Changelog New in version 0.8. `get_cookie_domain(app)` The value of the `Domain` parameter on the session cookie. If not set, browsers will only send the cookie to the exact domain it was set from. Otherwise, they will send it to any subdomain of the given value as well. Uses the [`SESSION_COOKIE_DOMAIN`](../config/index#SESSION_COOKIE_DOMAIN "SESSION_COOKIE_DOMAIN") config. Changed in version 2.3: Not set by default, does not fall back to `SERVER_NAME`.
Parameters: **app** ([Flask](#flask.Flask "flask.Flask")) – Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") | None `get_cookie_httponly(app)` Returns True if the session cookie should be httponly. This currently just returns the value of the `SESSION_COOKIE_HTTPONLY` config var. Parameters: **app** ([Flask](#flask.Flask "flask.Flask")) – Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") `get_cookie_name(app)` The name of the session cookie. Uses `app.config["SESSION_COOKIE_NAME"]`. Parameters: **app** ([Flask](#flask.Flask "flask.Flask")) – Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") `get_cookie_path(app)` Returns the path for which the cookie should be valid. The default implementation uses the value from the `SESSION_COOKIE_PATH` config var if it’s set, and falls back to `APPLICATION_ROOT` or uses `/` if it’s `None`. Parameters: **app** ([Flask](#flask.Flask "flask.Flask")) – Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") `get_cookie_samesite(app)` Return `'Strict'` or `'Lax'` if the cookie should use the `SameSite` attribute. This currently just returns the value of the [`SESSION_COOKIE_SAMESITE`](../config/index#SESSION_COOKIE_SAMESITE "SESSION_COOKIE_SAMESITE") setting. Parameters: **app** ([Flask](#flask.Flask "flask.Flask")) – Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") `get_cookie_secure(app)` Returns True if the cookie should be secure. This currently just returns the value of the `SESSION_COOKIE_SECURE` setting. Parameters: **app** ([Flask](#flask.Flask "flask.Flask")) – Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") `get_expiration_time(app, session)` A helper method that returns an expiration date for the session or `None` if the session is linked to the browser session. The default implementation returns now + the permanent session lifetime configured on the application. Parameters: * **app** ([Flask](#flask.Flask "flask.Flask")) – * **session** ([SessionMixin](#flask.sessions.SessionMixin "flask.sessions.SessionMixin")) – Return type: datetime | None `is_null_session(obj)` Checks if a given object is a null session. Null sessions are not asked to be saved. This checks if the object is an instance of [`null_session_class`](#flask.sessions.SessionInterface.null_session_class "flask.sessions.SessionInterface.null_session_class") by default. Parameters: **obj** ([object](https://docs.python.org/3/library/functions.html#object "(in Python v3.11)")) – Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") `make_null_session(app)` Creates a null session which acts as a replacement object if the real session support could not be loaded due to a configuration error. This mainly aids the user experience because the job of the null session is to still support lookups without complaining, while modifications are answered with a helpful error message explaining what failed. This creates an instance of [`null_session_class`](#flask.sessions.SessionInterface.null_session_class "flask.sessions.SessionInterface.null_session_class") by default.
Parameters: **app** ([Flask](#flask.Flask "flask.Flask")) – Return type: [NullSession](#flask.sessions.NullSession "flask.sessions.NullSession") `null_session_class` [`make_null_session()`](#flask.sessions.SessionInterface.make_null_session "flask.sessions.SessionInterface.make_null_session") will look here for the class that should be created when a null session is requested. Likewise the [`is_null_session()`](#flask.sessions.SessionInterface.is_null_session "flask.sessions.SessionInterface.is_null_session") method will perform a typecheck against this type. alias of [`NullSession`](#flask.sessions.NullSession "flask.sessions.NullSession") `open_session(app, request)` This is called at the beginning of each request, after pushing the request context, before matching the URL. This must return an object which implements a dictionary-like interface as well as the [`SessionMixin`](#flask.sessions.SessionMixin "flask.sessions.SessionMixin") interface. This will return `None` to indicate that loading failed in some way that is not immediately an error. The request context will fall back to using [`make_null_session()`](#flask.sessions.SessionInterface.make_null_session "flask.sessions.SessionInterface.make_null_session") in this case. Parameters: * **app** ([Flask](#flask.Flask "flask.Flask")) – * **request** ([Request](#flask.Request "flask.Request")) – Return type: [SessionMixin](#flask.sessions.SessionMixin "flask.sessions.SessionMixin") | None `pickle_based = False` A flag that indicates if the session interface is pickle-based. This can be used by Flask extensions to make a decision regarding how to deal with the session object. Changelog New in version 0.10. `save_session(app, session, response)` This is called at the end of each request, after generating a response, before removing the request context. It is skipped if [`is_null_session()`](#flask.sessions.SessionInterface.is_null_session "flask.sessions.SessionInterface.is_null_session") returns `True`. Parameters: * **app** ([Flask](#flask.Flask "flask.Flask")) – * **session** ([SessionMixin](#flask.sessions.SessionMixin "flask.sessions.SessionMixin")) – * **response** ([Response](#flask.Response "flask.Response")) – Return type: None `should_set_cookie(app, session)` Used by session backends to determine if a `Set-Cookie` header should be set for this session cookie for this response. If the session has been modified, the cookie is set. If the session is permanent and the `SESSION_REFRESH_EACH_REQUEST` config is true, the cookie is always set. This check is usually skipped if the session was deleted. Changelog New in version 0.11. Parameters: * **app** ([Flask](#flask.Flask "flask.Flask")) – * **session** ([SessionMixin](#flask.sessions.SessionMixin "flask.sessions.SessionMixin")) – Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") `class flask.sessions.SecureCookieSessionInterface` The default session interface that stores sessions in signed cookies through the `itsdangerous` module. `static digest_method(string=b'', *, usedforsecurity=True)` The hash function to use for the signature. The default is sha1. `key_derivation = 'hmac'` The name of the itsdangerous-supported key derivation. The default is hmac. `open_session(app, request)` This is called at the beginning of each request, after pushing the request context, before matching the URL.
This must return an object which implements a dictionary-like interface as well as the [`SessionMixin`](#flask.sessions.SessionMixin "flask.sessions.SessionMixin") interface. This will return `None` to indicate that loading failed in some way that is not immediately an error. The request context will fall back to using `make_null_session()` in this case. Parameters: * **app** ([Flask](#flask.Flask "flask.Flask")) – * **request** ([Request](#flask.Request "flask.Request")) – Return type: [SecureCookieSession](#flask.sessions.SecureCookieSession "flask.sessions.SecureCookieSession") | None `salt = 'cookie-session'` The salt that should be applied on top of the secret key for the signing of cookie-based sessions. `save_session(app, session, response)` This is called at the end of each request, after generating a response, before removing the request context. It is skipped if `is_null_session()` returns `True`. Parameters: * **app** ([Flask](#flask.Flask "flask.Flask")) – * **session** ([SessionMixin](#flask.sessions.SessionMixin "flask.sessions.SessionMixin")) – * **response** ([Response](#flask.Response "flask.Response")) – Return type: None `serializer = <flask.json.tag.TaggedJSONSerializer object>` A Python serializer for the payload. The default is a compact JSON-derived serializer with support for some extra Python types such as datetime objects or tuples. `session_class` alias of [`SecureCookieSession`](#flask.sessions.SecureCookieSession "flask.sessions.SecureCookieSession") `class flask.sessions.SecureCookieSession(initial=None)` Base class for sessions based on signed cookies. This session backend will set the [`modified`](#flask.sessions.SecureCookieSession.modified "flask.sessions.SecureCookieSession.modified") and [`accessed`](#flask.sessions.SecureCookieSession.accessed "flask.sessions.SecureCookieSession.accessed") attributes. It cannot reliably track whether a session is new (vs. empty), so `new` remains hard coded to `False`. Parameters: **initial** (*t.Any*) – `accessed = False` When data is read or written, this is set to `True`. Used by the session interface to add a `Vary: Cookie` header, which allows caching proxies to cache different pages for different users. `get(key, default=None)` Return the value for key if key is in the dictionary, else default. Parameters: * **key** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **default** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") *|* *None*) – Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") `modified = False` When data is changed, this is set to `True`. Only the session dictionary itself is tracked; if the session contains mutable data (for example a nested dict) then this must be set to `True` manually when modifying that data. The session cookie will only be written to the response if this is `True`. `setdefault(key, default=None)` Insert key with a value of default if key is not in the dictionary. Return the value for key if key is in the dictionary, else default. Parameters: * **key** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **default** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") *|* *None*) – Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") `class flask.sessions.NullSession(initial=None)` Class used to generate nicer error messages if sessions are not available. Will still allow read-only access to the empty session but fail on setting.
Parameters: **initial** (*t.Any*) – `clear() → None. Remove all items from D.` Parameters: * **args** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [NoReturn](https://docs.python.org/3/library/typing.html#typing.NoReturn "(in Python v3.11)") `pop(k[, d]) → v, remove specified key and return the corresponding value.` If the key is not found, return the default if given; otherwise, raise a KeyError. Parameters: * **args** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [NoReturn](https://docs.python.org/3/library/typing.html#typing.NoReturn "(in Python v3.11)") `popitem(*args, **kwargs)` Remove and return a (key, value) pair as a 2-tuple. Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty. Parameters: * **args** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [NoReturn](https://docs.python.org/3/library/typing.html#typing.NoReturn "(in Python v3.11)") `setdefault(*args, **kwargs)` Insert key with a value of default if key is not in the dictionary. Return the value for key if key is in the dictionary, else default. Parameters: * **args** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [NoReturn](https://docs.python.org/3/library/typing.html#typing.NoReturn "(in Python v3.11)") `update([E, ]**F) → None. Update D from dict/iterable E and F.` If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] Parameters: * **args** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [NoReturn](https://docs.python.org/3/library/typing.html#typing.NoReturn "(in Python v3.11)") `class flask.sessions.SessionMixin` Expands a basic dictionary with session attributes. `accessed = True` Some implementations can detect when session data is read or written and set this when that happens. The mixin default is hard coded to `True`. `modified = True` Some implementations can detect changes to the session and set this when that happens. The mixin default is hard coded to `True`. `property permanent: bool` This reflects the `'_permanent'` key in the dict. Notice The [`PERMANENT_SESSION_LIFETIME`](../config/index#PERMANENT_SESSION_LIFETIME "PERMANENT_SESSION_LIFETIME") config can be an integer or `timedelta`. The [`permanent_session_lifetime`](#flask.Flask.permanent_session_lifetime "flask.Flask.permanent_session_lifetime") attribute is always a `timedelta`. Test Client ----------- `class flask.testing.FlaskClient(*args, **kwargs)` Works like a regular Werkzeug test client but has knowledge about Flask’s contexts to defer the cleanup of the request context until the end of a `with` block. 
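A brief usage sketch (the `app` and its route are hypothetical, created only to show the flow):

```
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

# The test client issues requests against the app without running a server.
with app.test_client() as client:
    resp = client.get("/")
    assert resp.status_code == 200
    assert resp.get_data(as_text=True) == "hello"
```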
For general information about how to use this class refer to [`werkzeug.test.Client`](https://werkzeug.palletsprojects.com/en/2.3.x/test/#werkzeug.test.Client "(in Werkzeug v2.3.x)"). Changelog Changed in version 0.12: `app.test_client()` includes a preset default environment, which can be set after instantiation of the `app.test_client()` object in `client.environ_base`. Basic usage is outlined in the [Testing Flask Applications](../testing/index) chapter. Parameters: * **args** (*t.Any*) – * **kwargs** (*t.Any*) – `open(*args, buffered=False, follow_redirects=False, **kwargs)` Generate an environ dict from the given arguments, make a request to the application using it, and return the response. Parameters: * **args** (*t.Any*) – Passed to `EnvironBuilder` to create the environ for the request. If a single arg is passed, it can be an existing `EnvironBuilder` or an environ dict. * **buffered** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Convert the iterator returned by the app into a list. If the iterator has a `close()` method, it is called automatically. * **follow_redirects** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Make additional requests to follow HTTP redirects until a non-redirect status is returned. `TestResponse.history` lists the intermediate responses. * **kwargs** (*t.Any*) – Return type: TestResponse Changelog Changed in version 2.1: Removed the `as_tuple` parameter. Changed in version 2.0: The request input stream is closed when calling `response.close()`. Input streams for redirects are automatically closed. Changed in version 0.5: If a dict is provided as a file in the dict for the `data` parameter, the content type has to be called `content_type` instead of `mimetype`. This change was made for consistency with `werkzeug.FileWrapper`. Changed in version 0.5: Added the `follow_redirects` parameter. `session_transaction(*args, **kwargs)` When used in combination with a `with` statement this opens a session transaction. This can be used to modify the session that the test client uses. Once the `with` block is left the session is stored back.

```
with client.session_transaction() as session:
    session['value'] = 42
```

Internally this is implemented by going through a temporary test request context; since session handling could depend on request variables, this function accepts the same arguments as [`test_request_context()`](#flask.Flask.test_request_context "flask.Flask.test_request_context"), which are passed through directly. Parameters: * **args** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Generator](https://docs.python.org/3/library/typing.html#typing.Generator "(in Python v3.11)")[[SessionMixin](#flask.sessions.SessionMixin "flask.sessions.SessionMixin"), None, None]

Test CLI Runner
---------------

`class flask.testing.FlaskCliRunner(app, **kwargs)` A [`CliRunner`](https://click.palletsprojects.com/en/8.1.x/api/#click.testing.CliRunner "(in Click v8.1.x)") for testing a Flask app’s CLI commands. Typically created using [`test_cli_runner()`](#flask.Flask.test_cli_runner "flask.Flask.test_cli_runner"). See [Running Commands with the CLI Runner](../testing/index#testing-cli).
Parameters: * **app** ([Flask](#flask.Flask "flask.Flask")) – * **kwargs** (*t.Any*) – `invoke(cli=None, args=None, **kwargs)` Invokes a CLI command in an isolated environment. See [`CliRunner.invoke`](https://click.palletsprojects.com/en/8.1.x/api/#click.testing.CliRunner.invoke "(in Click v8.1.x)") for full method documentation. See [Running Commands with the CLI Runner](../testing/index#testing-cli) for examples. If the `obj` argument is not given, passes an instance of [`ScriptInfo`](#flask.cli.ScriptInfo "flask.cli.ScriptInfo") that knows how to load the Flask app being tested. Parameters: * **cli** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") *|* *None*) – Command object to invoke. Default is the app’s `cli` group. * **args** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") *|* *None*) – List of strings to invoke the command with. * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Returns: a [`Result`](https://click.palletsprojects.com/en/8.1.x/api/#click.testing.Result "(in Click v8.1.x)") object. Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") Application Globals ------------------- To share data that is valid for one request only from one function to another, a global variable is not good enough because it would break in threaded environments. Flask provides you with a special object that ensures it is only valid for the active request and that will return different values for each request. In a nutshell: it does the right thing, like it does for [`request`](#flask.request "flask.request") and [`session`](#flask.session "flask.session"). `flask.g` A namespace object that can store data during an [application context](../appcontext/index). This is an instance of [`Flask.app_ctx_globals_class`](#flask.Flask.app_ctx_globals_class "flask.Flask.app_ctx_globals_class"), which defaults to [`ctx._AppCtxGlobals`](#flask.ctx._AppCtxGlobals "flask.ctx._AppCtxGlobals"). This is a good place to store resources during a request. For example, a `before_request` function could load a user object from a session id, then set `g.user` to be used in the view function. This is a proxy. See [Notes On Proxies](../reqcontext/index#notes-on-proxies) for more information. Changelog Changed in version 0.10: Bound to the application context instead of the request context. `class flask.ctx._AppCtxGlobals` A plain object. Used as a namespace for storing data during an application context. Creating an app context automatically creates this object, which is made available as the `g` proxy. 'key' in g Check whether an attribute is present. Changelog New in version 0.10. iter(g) Return an iterator over the attribute names. Changelog New in version 0.10. `get(name, default=None)` Get an attribute by name, or a default value. Like [`dict.get()`](https://docs.python.org/3/library/stdtypes.html#dict.get "(in Python v3.11)"). Parameters: * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Name of attribute to get. * **default** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") *|* *None*) – Value to return if the attribute is not present. Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") Changelog New in version 0.10. `pop(name, default=<object object>)` Get and remove an attribute by name. 
Like [`dict.pop()`](https://docs.python.org/3/library/stdtypes.html#dict.pop "(in Python v3.11)"). Parameters: * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Name of attribute to pop. * **default** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Value to return if the attribute is not present, instead of raising a `KeyError`. Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") Changelog New in version 0.11. `setdefault(name, default=None)` Get the value of an attribute if it is present, otherwise set and return a default value. Like [`dict.setdefault()`](https://docs.python.org/3/library/stdtypes.html#dict.setdefault "(in Python v3.11)"). Parameters: * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Name of attribute to get. * **default** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") *|* *None*) – Value to set and return if the attribute is not present. Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") Changelog New in version 0.11.

Useful Functions and Classes
----------------------------

`flask.current_app` A proxy to the application handling the current request. This is useful to access the application without needing to import it, or if it can’t be imported, such as when using the application factory pattern or in blueprints and extensions. This is only available when an [application context](../appcontext/index) is pushed. This happens automatically during requests and CLI commands. It can be controlled manually with [`app_context()`](#flask.Flask.app_context "flask.Flask.app_context"). This is a proxy. See [Notes On Proxies](../reqcontext/index#notes-on-proxies) for more information. `flask.has_request_context()` If you have code that wants to test whether a request context is present, this function can be used. For instance, you may want to take advantage of request information if the request object is available, but fail silently if it is unavailable.

```
class User(db.Model):

    def __init__(self, username, remote_addr=None):
        self.username = username
        if remote_addr is None and has_request_context():
            remote_addr = request.remote_addr
        self.remote_addr = remote_addr
```

Alternatively you can also just test any of the context-bound objects (such as [`request`](#flask.request "flask.request") or [`g`](#flask.g "flask.g")) for truthiness:

```
class User(db.Model):

    def __init__(self, username, remote_addr=None):
        self.username = username
        if remote_addr is None and request:
            remote_addr = request.remote_addr
        self.remote_addr = remote_addr
```

Changelog New in version 0.7. Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") `flask.copy_current_request_context(f)` A helper function that decorates a function to retain the current request context. This is useful when working with greenlets. The moment the function is decorated a copy of the request context is created and then pushed when the function is called. The current session is also included in the copied request context. Example:

```
import gevent
from flask import copy_current_request_context

@app.route('/')
def index():
    @copy_current_request_context
    def do_some_work():
        # do some work here, it can access flask.request or
        # flask.session like you would otherwise in the view function.
        ...

    gevent.spawn(do_some_work)
    return 'Regular response'
```

Changelog New in version 0.10. Parameters: **f** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)") `flask.has_app_context()` Works like [`has_request_context()`](#flask.has_request_context "flask.has_request_context") but for the application context. You can also just do a boolean check on the [`current_app`](#flask.current_app "flask.current_app") object instead. Changelog New in version 0.9. Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") `flask.url_for(endpoint, *, _anchor=None, _method=None, _scheme=None, _external=None, **values)` Generate a URL to the given endpoint with the given values. This requires an active request or application context, and calls [`current_app.url_for()`](#flask.Flask.url_for "flask.Flask.url_for"). See that method for full documentation. Parameters: * **endpoint** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The endpoint name associated with the URL to generate. If this starts with a `.`, the current blueprint name (if any) will be used. * **_anchor** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – If given, append this as `#anchor` to the URL. * **_method** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – If given, generate the URL associated with this method for the endpoint. * **_scheme** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – If given, the URL will have this scheme if it is external. * **_external** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") *|* *None*) – If given, prefer the URL to be internal (False) or require it to be external (True). External URLs include the scheme and domain. When not in an active request, URLs are external by default. * **values** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Values to use for the variable parts of the URL rule. Unknown keys are appended as query string arguments, like `?a=b&c=d`. Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") Changelog Changed in version 2.2: Calls `current_app.url_for`, allowing an app to override the behavior. Changed in version 0.10: The `_scheme` parameter was added. Changed in version 0.9: The `_anchor` and `_method` parameters were added. Changed in version 0.9: Calls `app.handle_url_build_error` on build errors. `flask.abort(code, *args, **kwargs)` Raise an [`HTTPException`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.HTTPException "(in Werkzeug v2.3.x)") for the given status code. If [`current_app`](#flask.current_app "flask.current_app") is available, it will call its [`aborter`](#flask.Flask.aborter "flask.Flask.aborter") object, otherwise it will use [`werkzeug.exceptions.abort()`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.abort "(in Werkzeug v2.3.x)"). Parameters: * **code** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)") *|* *BaseResponse*) – The status code for the exception, which must be registered in `app.aborter`. * **args** (*t.Any*) – Passed to the exception. * **kwargs** (*t.Any*) – Passed to the exception.
Return type: t.NoReturn Changelog New in version 2.2: Calls `current_app.aborter` if available instead of always using Werkzeug’s default `abort`. `flask.redirect(location, code=302, Response=None)` Create a redirect response object. If [`current_app`](#flask.current_app "flask.current_app") is available, it will use its [`redirect()`](#flask.Flask.redirect "flask.Flask.redirect") method, otherwise it will use [`werkzeug.utils.redirect()`](https://werkzeug.palletsprojects.com/en/2.3.x/utils/#werkzeug.utils.redirect "(in Werkzeug v2.3.x)"). Parameters: * **location** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The URL to redirect to. * **code** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)")) – The status code for the redirect. * **Response** ([type](https://docs.python.org/3/library/functions.html#type "(in Python v3.11)")*[**BaseResponse**]* *|* *None*) – The response class to use. Not used when `current_app` is active, which uses `app.response_class`. Return type: BaseResponse Changelog New in version 2.2: Calls `current_app.redirect` if available instead of always using Werkzeug’s default `redirect`. `flask.make_response(*args)` Sometimes it is necessary to set additional headers in a view. Because views do not have to return response objects but can return a value that is converted into a response object by Flask itself, it becomes tricky to add headers to it. This function can be called instead of using a return and you will get a response object which you can use to attach headers. If a view looked like this and you wanted to add a new header:

```
def index():
    return render_template('index.html', foo=42)
```

You can now do something like this:

```
def index():
    response = make_response(render_template('index.html', foo=42))
    response.headers['X-Parachutes'] = 'parachutes are cool'
    return response
```

This function accepts the very same arguments you can return from a view function. This for example creates a response with a 404 error code:

```
response = make_response(render_template('not_found.html'), 404)
```

The other use case of this function is to force the return value of a view function into a response, which is helpful with view decorators:

```
response = make_response(view_function())
response.headers['X-Parachutes'] = 'parachutes are cool'
```

Internally this function does the following things: * if no arguments are passed, it creates a new response object * if one argument is passed, [`flask.Flask.make_response()`](#flask.Flask.make_response "flask.Flask.make_response") is invoked with it. * if more than one argument is passed, the arguments are passed to the [`flask.Flask.make_response()`](#flask.Flask.make_response "flask.Flask.make_response") function as a tuple. Changelog New in version 0.6. Parameters: **args** (*t.Any*) – Return type: [Response](#flask.Response "flask.Response") `flask.after_this_request(f)` Executes a function after this request. This is useful to modify response objects. The function is passed the response object and has to return the same or a new one. Example:

```
@app.route('/')
def index():
    @after_this_request
    def add_header(response):
        response.headers['X-Foo'] = 'Parachute'
        return response
    return 'Hello World!'
```

This is more useful if a function other than the view function wants to modify a response. For instance, think of a decorator that wants to add some headers without converting the return value into a response object. Changelog New in version 0.9.
Parameters: **f** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")*[**[**ResponseClass**]**,* *ResponseClass**]* *|* [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")*[**[**ResponseClass**]**,* [Awaitable](https://docs.python.org/3/library/typing.html#typing.Awaitable "(in Python v3.11)")*[**ResponseClass**]**]*) – Return type: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*ResponseClass*], *ResponseClass*] | [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")[[*ResponseClass*], [Awaitable](https://docs.python.org/3/library/typing.html#typing.Awaitable "(in Python v3.11)")[*ResponseClass*]] `flask.send_file(path_or_file, mimetype=None, as_attachment=False, download_name=None, conditional=True, etag=True, last_modified=None, max_age=None)` Send the contents of a file to the client. The first argument can be a file path or a file-like object. Paths are preferred in most cases because Werkzeug can manage the file and get extra information from the path. Passing a file-like object requires that the file is opened in binary mode, and is mostly useful when building a file in memory with [`io.BytesIO`](https://docs.python.org/3/library/io.html#io.BytesIO "(in Python v3.11)"). Never pass file paths provided by a user. The path is assumed to be trusted, so a user could craft a path to access a file you didn’t intend. Use [`send_from_directory()`](#flask.send_from_directory "flask.send_from_directory") to safely serve user-requested paths from within a directory. If the WSGI server sets a `file_wrapper` in `environ`, it is used, otherwise Werkzeug’s built-in wrapper is used. Alternatively, if the HTTP server supports `X-Sendfile`, configuring Flask with `USE_X_SENDFILE = True` will tell the server to send the given path, which is much more efficient than reading it in Python. Parameters: * **path_or_file** ([os.PathLike](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.11)") *|* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *t.BinaryIO*) – The path to the file to send, relative to the current working directory if a relative path is given. Alternatively, a file-like object opened in binary mode. Make sure the file pointer is seeked to the start of the data. * **mimetype** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – The MIME type to send for the file. If not provided, it will try to detect it from the file name. * **as_attachment** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Indicate to a browser that it should offer to save the file instead of displaying it. * **download_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – The default name browsers will use when saving the file. Defaults to the passed file name. * **conditional** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Enable conditional and range responses based on request headers. Requires passing a file path and `environ`. * **etag** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") *|* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Calculate an ETag for the file, which requires passing a file path. Can also be a string to use instead. 
* **last_modified** (*datetime* *|* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)") *|* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.11)") *|* *None*) – The last modified time to send for the file, in seconds. If not provided, it will try to detect it from the file path. * **max_age** (*None* *|* *(*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)") *|* *t.Callable**[**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None**]**,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)") *|* *None**]**)*) – How long the client should cache the file, in seconds. If set, `Cache-Control` will be `public`, otherwise it will be `no-cache` to prefer conditional caching. Return type: [Response](#flask.Response "flask.Response") Changelog Changed in version 2.0: `download_name` replaces the `attachment_filename` parameter. If `as_attachment=False`, it is passed with `Content-Disposition: inline` instead. Changed in version 2.0: `max_age` replaces the `cache_timeout` parameter. `conditional` is enabled and `max_age` is not set by default. Changed in version 2.0: `etag` replaces the `add_etags` parameter. It can be a string to use instead of generating one. Changed in version 2.0: Passing a file-like object that inherits from [`TextIOBase`](https://docs.python.org/3/library/io.html#io.TextIOBase "(in Python v3.11)") will raise a [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "(in Python v3.11)") rather than sending an empty file. New in version 2.0: Moved the implementation to Werkzeug. This is now a wrapper to pass some Flask-specific arguments. Changed in version 1.1: `filename` may be a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.11)") object. Changed in version 1.1: Passing a [`BytesIO`](https://docs.python.org/3/library/io.html#io.BytesIO "(in Python v3.11)") object supports range requests. Changed in version 1.0.3: Filenames are encoded with ASCII instead of Latin-1 for broader compatibility with WSGI servers. Changed in version 1.0: UTF-8 filenames as specified in [**RFC 2231**](https://datatracker.ietf.org/doc/html/rfc2231.html) are supported. Changed in version 0.12: The filename is no longer automatically inferred from file objects. If you want to use automatic MIME and etag support, pass a filename via `filename_or_fp` or `attachment_filename`. Changed in version 0.12: `attachment_filename` is preferred over `filename` for MIME detection. Changed in version 0.9: `cache_timeout` defaults to [`Flask.get_send_file_max_age()`](#flask.Flask.get_send_file_max_age "flask.Flask.get_send_file_max_age"). Changed in version 0.7: MIME guessing and etag support for file-like objects was deprecated because it was unreliable. Pass a filename if you are able to, otherwise attach an etag yourself. Changed in version 0.5: The `add_etags`, `cache_timeout` and `conditional` parameters were added. The default behavior is to add etags. New in version 0.2. `flask.send_from_directory(directory, path, **kwargs)` Send a file from within a directory using [`send_file()`](#flask.send_file "flask.send_file"). ``` @app.route("/uploads/<path:name>") def download_file(name): return send_from_directory( app.config['UPLOAD_FOLDER'], name, as_attachment=True ) ``` This is a secure way to serve files from a folder, such as static files or uploads. 
Uses [`safe_join()`](https://werkzeug.palletsprojects.com/en/2.3.x/utils/#werkzeug.security.safe_join "(in Werkzeug v2.3.x)") to ensure the path coming from the client is not maliciously crafted to point outside the specified directory. If the final path does not point to an existing regular file, raises a 404 [`NotFound`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.NotFound "(in Werkzeug v2.3.x)") error. Parameters: * **directory** ([os.PathLike](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.11)") *|* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The directory that `path` must be located under, relative to the current application’s root path. * **path** ([os.PathLike](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.11)") *|* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The path to the file to send, relative to `directory`. * **kwargs** (*t.Any*) – Arguments to pass to [`send_file()`](#flask.send_file "flask.send_file"). Return type: [Response](#flask.Response "flask.Response") Changelog Changed in version 2.0: `path` replaces the `filename` parameter. New in version 2.0: Moved the implementation to Werkzeug. This is now a wrapper to pass some Flask-specific arguments. New in version 0.5.

Message Flashing
----------------

`flask.flash(message, category='message')` Flashes a message to the next request. In order to remove the flashed message from the session and to display it to the user, the template has to call [`get_flashed_messages()`](#flask.get_flashed_messages "flask.get_flashed_messages"). Changelog Changed in version 0.3: `category` parameter added. Parameters: * **message** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – the message to be flashed. * **category** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – the category for the message. The following values are recommended: `'message'` for any kind of message, `'error'` for errors, `'info'` for information messages and `'warning'` for warnings. However, any kind of string can be used as a category. Return type: None

`flask.get_flashed_messages(with_categories=False, category_filter=())` Pulls all flashed messages from the session and returns them. Further calls in the same request to the function will return the same messages. By default just the messages are returned, but when `with_categories` is set to `True`, the return value will be a list of tuples in the form `(category, message)` instead. Filter the flashed messages to one or more categories by providing those categories in `category_filter`. This allows rendering categories in separate HTML blocks. The `with_categories` and `category_filter` arguments are distinct: * `with_categories` controls whether categories are returned with message text (`True` gives a tuple, whereas `False` gives just the message text). * `category_filter` filters the messages down to only those matching the provided categories. See [Message Flashing](../patterns/flashing/index) for examples. Changelog Changed in version 0.9: `category_filter` parameter added. Changed in version 0.3: `with_categories` parameter added. Parameters: * **with_categories** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – set to `True` to also receive categories.
* **category_filter** ([Iterable](https://docs.python.org/3/library/typing.html#typing.Iterable "(in Python v3.11)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")*]*) – filter of categories to limit return values. Only categories in the list will be returned. Return type: [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.11)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")] | [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.11)")[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.11)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)"), [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")]]

JSON Support
------------

Flask uses Python’s built-in [`json`](https://docs.python.org/3/library/json.html#module-json "(in Python v3.11)") module for handling JSON by default. The JSON implementation can be changed by assigning a different provider to [`flask.Flask.json_provider_class`](#flask.Flask.json_provider_class "flask.Flask.json_provider_class") or [`flask.Flask.json`](#flask.Flask.json "flask.Flask.json"). The functions provided by `flask.json` will use methods on `app.json` if an app context is active. Jinja’s `|tojson` filter is configured to use the app’s JSON provider. The filter marks the output with `|safe`. Use it to render data inside HTML `<script>` tags.

```
<script>
  const names = {{ names|tojson }};
  renderChart(names, {{ axis_data|tojson }});
</script>
```

`flask.json.jsonify(*args, **kwargs)` Serialize the given arguments as JSON, and return a [`Response`](#flask.Response "flask.Response") object with the `application/json` mimetype. A dict or list returned from a view will be converted to a JSON response automatically without needing to call this. This requires an active request or application context, and calls [`app.json.response()`](#flask.json.provider.JSONProvider.response "flask.json.provider.JSONProvider.response"). In debug mode, the output is formatted with indentation to make it easier to read. This may also be controlled by the provider. Either positional or keyword arguments can be given, not both. If no arguments are given, `None` is serialized. Parameters: * **args** (*t.Any*) – A single value to serialize, or multiple values to treat as a list to serialize. * **kwargs** (*t.Any*) – Treat as a dict to serialize. Return type: [Response](#flask.Response "flask.Response") Changelog Changed in version 2.2: Calls `current_app.json.response`, allowing an app to override the behavior. Changed in version 2.0.2: [`decimal.Decimal`](https://docs.python.org/3/library/decimal.html#decimal.Decimal "(in Python v3.11)") is supported by converting to a string. Changed in version 0.11: Added support for serializing top-level arrays. This was a security risk in ancient browsers. See [JSON Security](../security/index#security-json). New in version 0.2.

`flask.json.dumps(obj, **kwargs)` Serialize data as JSON. If [`current_app`](#flask.current_app "flask.current_app") is available, it will use its [`app.json.dumps()`](#flask.json.provider.JSONProvider.dumps "flask.json.provider.JSONProvider.dumps") method, otherwise it will use [`json.dumps()`](https://docs.python.org/3/library/json.html#json.dumps "(in Python v3.11)"). Parameters: * **obj** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – The data to serialize.
* **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Arguments passed to the `dumps` implementation. Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") Changed in version 2.3: The `app` parameter was removed. Changelog Changed in version 2.2: Calls `current_app.json.dumps`, allowing an app to override the behavior. Changed in version 2.0.2: [`decimal.Decimal`](https://docs.python.org/3/library/decimal.html#decimal.Decimal "(in Python v3.11)") is supported by converting to a string. Changed in version 2.0: `encoding` will be removed in Flask 2.1. Changed in version 1.0.3: `app` can be passed directly, rather than requiring an app context for configuration. `flask.json.dump(obj, fp, **kwargs)` Serialize data as JSON and write to a file. If [`current_app`](#flask.current_app "flask.current_app") is available, it will use its [`app.json.dump()`](#flask.json.provider.JSONProvider.dump "flask.json.provider.JSONProvider.dump") method, otherwise it will use [`json.dump()`](https://docs.python.org/3/library/json.html#json.dump "(in Python v3.11)"). Parameters: * **obj** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – The data to serialize. * **fp** ([IO](https://docs.python.org/3/library/typing.html#typing.IO "(in Python v3.11)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")*]*) – A file opened for writing text. Should use the UTF-8 encoding to be valid JSON. * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Arguments passed to the `dump` implementation. Return type: None Changed in version 2.3: The `app` parameter was removed. Changelog Changed in version 2.2: Calls `current_app.json.dump`, allowing an app to override the behavior. Changed in version 2.0: Writing to a binary file, and the `encoding` argument, will be removed in Flask 2.1. `flask.json.loads(s, **kwargs)` Deserialize data as JSON. If [`current_app`](#flask.current_app "flask.current_app") is available, it will use its [`app.json.loads()`](#flask.json.provider.JSONProvider.loads "flask.json.provider.JSONProvider.loads") method, otherwise it will use [`json.loads()`](https://docs.python.org/3/library/json.html#json.loads "(in Python v3.11)"). Parameters: * **s** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [bytes](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.11)")) – Text or UTF-8 bytes. * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Arguments passed to the `loads` implementation. Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") Changed in version 2.3: The `app` parameter was removed. Changelog Changed in version 2.2: Calls `current_app.json.loads`, allowing an app to override the behavior. Changed in version 2.0: `encoding` will be removed in Flask 2.1. The data must be a string or UTF-8 bytes. Changed in version 1.0.3: `app` can be passed directly, rather than requiring an app context for configuration. `flask.json.load(fp, **kwargs)` Deserialize data as JSON read from a file. 
If [`current_app`](#flask.current_app "flask.current_app") is available, it will use its [`app.json.load()`](#flask.json.provider.JSONProvider.load "flask.json.provider.JSONProvider.load") method, otherwise it will use [`json.load()`](https://docs.python.org/3/library/json.html#json.load "(in Python v3.11)"). Parameters: * **fp** ([IO](https://docs.python.org/3/library/typing.html#typing.IO "(in Python v3.11)")) – A file opened for reading text or UTF-8 bytes. * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Arguments passed to the `load` implementation. Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") Changed in version 2.3: The `app` parameter was removed. Changelog Changed in version 2.2: Calls `current_app.json.load`, allowing an app to override the behavior. Changed in version 2.2: The `app` parameter will be removed in Flask 2.3. Changed in version 2.0: `encoding` will be removed in Flask 2.1. The file must be text mode, or binary mode with UTF-8 bytes. `class flask.json.provider.JSONProvider(app)` A standard set of JSON operations for an application. Subclasses of this can be used to customize JSON behavior or use different JSON libraries. To implement a provider for a specific library, subclass this base class and implement at least [`dumps()`](#flask.json.provider.JSONProvider.dumps "flask.json.provider.JSONProvider.dumps") and [`loads()`](#flask.json.provider.JSONProvider.loads "flask.json.provider.JSONProvider.loads"). All other methods have default implementations. To use a different provider, either subclass `Flask` and set [`json_provider_class`](#flask.Flask.json_provider_class "flask.Flask.json_provider_class") to a provider class, or set [`app.json`](#flask.Flask.json "flask.Flask.json") to an instance of the class. Parameters: **app** ([Flask](#flask.Flask "flask.Flask")) – An application instance. This will be stored as a `weakref.proxy` on the `_app` attribute. Changelog New in version 2.2. `dumps(obj, **kwargs)` Serialize data as JSON. Parameters: * **obj** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – The data to serialize. * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – May be passed to the underlying JSON library. Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") `dump(obj, fp, **kwargs)` Serialize data as JSON and write to a file. Parameters: * **obj** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – The data to serialize. * **fp** ([IO](https://docs.python.org/3/library/typing.html#typing.IO "(in Python v3.11)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")*]*) – A file opened for writing text. Should use the UTF-8 encoding to be valid JSON. * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – May be passed to the underlying JSON library. Return type: None `loads(s, **kwargs)` Deserialize data as JSON. Parameters: * **s** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [bytes](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.11)")) – Text or UTF-8 bytes. * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – May be passed to the underlying JSON library. 
Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") `load(fp, **kwargs)` Deserialize data as JSON read from a file. Parameters: * **fp** ([IO](https://docs.python.org/3/library/typing.html#typing.IO "(in Python v3.11)")) – A file opened for reading text or UTF-8 bytes. * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – May be passed to the underlying JSON library. Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") `response(*args, **kwargs)` Serialize the given arguments as JSON, and return a [`Response`](#flask.Response "flask.Response") object with the `application/json` mimetype. The [`jsonify()`](#flask.json.jsonify "flask.json.jsonify") function calls this method for the current application. Either positional or keyword arguments can be given, not both. If no arguments are given, `None` is serialized. Parameters: * **args** (*t.Any*) – A single value to serialize, or multiple values to treat as a list to serialize. * **kwargs** (*t.Any*) – Treat as a dict to serialize. Return type: [Response](#flask.Response "flask.Response")

`class flask.json.provider.DefaultJSONProvider(app)` Provide JSON operations using Python’s built-in [`json`](https://docs.python.org/3/library/json.html#module-json "(in Python v3.11)") library. Serializes the following additional data types: * [`datetime.datetime`](https://docs.python.org/3/library/datetime.html#datetime.datetime "(in Python v3.11)") and [`datetime.date`](https://docs.python.org/3/library/datetime.html#datetime.date "(in Python v3.11)") are serialized to [**RFC 822**](https://datatracker.ietf.org/doc/html/rfc822.html) strings. This is the same as the HTTP date format. * [`uuid.UUID`](https://docs.python.org/3/library/uuid.html#uuid.UUID "(in Python v3.11)") is serialized to a string. * `dataclasses.dataclass` is passed to [`dataclasses.asdict()`](https://docs.python.org/3/library/dataclasses.html#dataclasses.asdict "(in Python v3.11)"). * `Markup` (or any object with a `__html__` method) will call the `__html__` method to get a string. Parameters: **app** ([Flask](#flask.Flask "flask.Flask")) – `static default(o)` Apply this function to any object that `json.dumps()` does not know how to serialize. It should return a valid JSON type or raise a `TypeError`. Parameters: **o** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") `ensure_ascii = True` Replace non-ASCII characters with escape sequences. This may be more compatible with some clients, but can be disabled for better performance and size. `sort_keys = True` Sort the keys in any serialized dicts. This may be useful for some caching situations, but can be disabled for better performance. When enabled, keys must all be strings; they are not converted before sorting. `compact: bool | None = None` If `True`, or `None` out of debug mode, the [`response()`](#flask.json.provider.DefaultJSONProvider.response "flask.json.provider.DefaultJSONProvider.response") output will not add indentation, newlines, or spaces. If `False`, or `None` in debug mode, it will use a non-compact representation. `mimetype = 'application/json'` The mimetype set in [`response()`](#flask.json.provider.DefaultJSONProvider.response "flask.json.provider.DefaultJSONProvider.response"). `dumps(obj, **kwargs)` Serialize data as JSON to a string.
Keyword arguments are passed to [`json.dumps()`](https://docs.python.org/3/library/json.html#json.dumps "(in Python v3.11)"). Sets some parameter defaults from the [`default`](#flask.json.provider.DefaultJSONProvider.default "flask.json.provider.DefaultJSONProvider.default"), [`ensure_ascii`](#flask.json.provider.DefaultJSONProvider.ensure_ascii "flask.json.provider.DefaultJSONProvider.ensure_ascii"), and [`sort_keys`](#flask.json.provider.DefaultJSONProvider.sort_keys "flask.json.provider.DefaultJSONProvider.sort_keys") attributes. Parameters: * **obj** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – The data to serialize. * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Passed to [`json.dumps()`](https://docs.python.org/3/library/json.html#json.dumps "(in Python v3.11)"). Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") `loads(s, **kwargs)` Deserialize data as JSON from a string or bytes. Parameters: * **s** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [bytes](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.11)")) – Text or UTF-8 bytes. * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Passed to [`json.loads()`](https://docs.python.org/3/library/json.html#json.loads "(in Python v3.11)"). Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") `response(*args, **kwargs)` Serialize the given arguments as JSON, and return a [`Response`](#flask.Response "flask.Response") object with it. The response mimetype will be “application/json” and can be changed with [`mimetype`](#flask.json.provider.DefaultJSONProvider.mimetype "flask.json.provider.DefaultJSONProvider.mimetype"). If [`compact`](#flask.json.provider.DefaultJSONProvider.compact "flask.json.provider.DefaultJSONProvider.compact") is `False` or debug mode is enabled, the output will be formatted to be easier to read. Either positional or keyword arguments can be given, not both. If no arguments are given, `None` is serialized. Parameters: * **args** (*t.Any*) – A single value to serialize, or multiple values to treat as a list to serialize. * **kwargs** (*t.Any*) – Treat as a dict to serialize. Return type: [Response](#flask.Response "flask.Response") ### Tagged JSON A compact representation for lossless serialization of non-standard JSON types. [`SecureCookieSessionInterface`](#flask.sessions.SecureCookieSessionInterface "flask.sessions.SecureCookieSessionInterface") uses this to serialize the session data, but it may be useful in other places. It can be extended to support other types. `class flask.json.tag.TaggedJSONSerializer` Serializer that uses a tag system to compactly represent objects that are not JSON types. Passed as the intermediate serializer to `itsdangerous.Serializer`. 
The following extra types are supported: * [`dict`](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)") * [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.11)") * [`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.11)") * `Markup` * [`UUID`](https://docs.python.org/3/library/uuid.html#uuid.UUID "(in Python v3.11)") * [`datetime`](https://docs.python.org/3/library/datetime.html#datetime.datetime "(in Python v3.11)") `default_tags = [<class 'flask.json.tag.TagDict'>, <class 'flask.json.tag.PassDict'>, <class 'flask.json.tag.TagTuple'>, <class 'flask.json.tag.PassList'>, <class 'flask.json.tag.TagBytes'>, <class 'flask.json.tag.TagMarkup'>, <class 'flask.json.tag.TagUUID'>, <class 'flask.json.tag.TagDateTime'>]` Tag classes to bind when creating the serializer. Other tags can be added later using [`register()`](#flask.json.tag.TaggedJSONSerializer.register "flask.json.tag.TaggedJSONSerializer.register"). `dumps(value)` Tag the value and dump it to a compact JSON string. Parameters: **value** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") `loads(value)` Load data from a JSON string and deserialize any tagged objects. Parameters: **value** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") `register(tag_class, force=False, index=None)` Register a new tag with this serializer. Parameters: * **tag_class** ([type](https://docs.python.org/3/library/functions.html#type "(in Python v3.11)")*[*[flask.json.tag.JSONTag](#flask.json.tag.JSONTag "flask.json.tag.JSONTag")*]*) – tag class to register. Will be instantiated with this serializer instance. * **force** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – overwrite an existing tag. If `False` (default), a [`KeyError`](https://docs.python.org/3/library/exceptions.html#KeyError "(in Python v3.11)") is raised. * **index** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.11)") *|* *None*) – index to insert the new tag in the tag order. Useful when the new tag is a special case of an existing tag. If `None` (default), the tag is appended to the end of the order. Raises: [**KeyError**](https://docs.python.org/3/library/exceptions.html#KeyError "(in Python v3.11)") – if the tag key is already registered and `force` is not `True`. Return type: None `tag(value)` Convert a value to a tagged representation if necessary. Parameters: **value** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)"), [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")] `untag(value)` Convert a tagged representation back to the original type.
Parameters: **value** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")*,* [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")*]*) – Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")

`class flask.json.tag.JSONTag(serializer)` Base class for defining type tags for [`TaggedJSONSerializer`](#flask.json.tag.TaggedJSONSerializer "flask.json.tag.TaggedJSONSerializer"). Parameters: **serializer** ([TaggedJSONSerializer](#flask.json.tag.TaggedJSONSerializer "flask.json.tag.TaggedJSONSerializer")) – `check(value)` Check if the given value should be tagged by this tag. Parameters: **value** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") `key: str | None = None` The tag to mark the serialized object with. If `None`, this tag is only used as an intermediate step during tagging. `tag(value)` Convert the value to a valid JSON type and add the tag structure around it. Parameters: **value** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") `to_json(value)` Convert the Python object to an object that is a valid JSON type. The tag will be added later. Parameters: **value** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)") `to_python(value)` Convert the JSON representation back to the correct type. The tag will already be removed. Parameters: **value** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")

Let’s see an example that adds support for [`OrderedDict`](https://docs.python.org/3/library/collections.html#collections.OrderedDict "(in Python v3.11)"). Dicts don’t have an order in JSON, so to handle this we will dump the items as a list of `[key, value]` pairs. Subclass [`JSONTag`](#flask.json.tag.JSONTag "flask.json.tag.JSONTag") and give it the new key `' od'` to identify the type. The session serializer processes dicts first, so insert the new tag at the front of the order since `OrderedDict` must be processed before `dict`.

```
from collections import OrderedDict

from flask.json.tag import JSONTag

class TagOrderedDict(JSONTag):
    __slots__ = ('serializer',)
    key = ' od'

    def check(self, value):
        return isinstance(value, OrderedDict)

    def to_json(self, value):
        return [[k, self.serializer.tag(v)] for k, v in value.items()]

    def to_python(self, value):
        return OrderedDict(value)

app.session_interface.serializer.register(TagOrderedDict, index=0)
```

Template Rendering
------------------

`flask.render_template(template_name_or_list, **context)` Render a template by name with the given context.
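For instance, a minimal usage sketch; the route, template name, and `name` context variable are illustrative assumptions, not part of the API:

```
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/hello/<name>')
def hello(name):
    # Looks up templates/hello.html and renders it with `name` in the context.
    return render_template('hello.html', name=name)
```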
Parameters: * **template_name_or_list** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [Template](https://jinja.palletsprojects.com/en/3.1.x/api/#jinja2.Template "(in Jinja v3.1.x)") *|* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.11)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [jinja2.environment.Template](https://jinja.palletsprojects.com/en/3.1.x/api/#jinja2.Template "(in Jinja v3.1.x)")*]*) – The name of the template to render. If a list is given, the first name to exist will be rendered. * **context** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – The variables to make available in the template. Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")

`flask.render_template_string(source, **context)` Render a template from the given source string with the given context. Parameters: * **source** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The source code of the template to render. * **context** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – The variables to make available in the template. Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")

`flask.stream_template(template_name_or_list, **context)` Render a template by name with the given context as a stream. This returns an iterator of strings, which can be used as a streaming response from a view. Parameters: * **template_name_or_list** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [Template](https://jinja.palletsprojects.com/en/3.1.x/api/#jinja2.Template "(in Jinja v3.1.x)") *|* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.11)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [jinja2.environment.Template](https://jinja.palletsprojects.com/en/3.1.x/api/#jinja2.Template "(in Jinja v3.1.x)")*]*) – The name of the template to render. If a list is given, the first name to exist will be rendered. * **context** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – The variables to make available in the template. Return type: [Iterator](https://docs.python.org/3/library/typing.html#typing.Iterator "(in Python v3.11)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")] Changelog New in version 2.2.

`flask.stream_template_string(source, **context)` Render a template from the given source string with the given context as a stream. This returns an iterator of strings, which can be used as a streaming response from a view. Parameters: * **source** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – The source code of the template to render. * **context** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – The variables to make available in the template. Return type: [Iterator](https://docs.python.org/3/library/typing.html#typing.Iterator "(in Python v3.11)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")] Changelog New in version 2.2.

`flask.get_template_attribute(template_name, attribute)` Loads a macro (or variable) that a template exports. This can be used to invoke a macro from within Python code.
If, for example, you have a template named `_cider.html` with the following contents:

```
{% macro hello(name) %}Hello {{ name }}!{% endmacro %}
```

You can access this from Python code like this:

```
hello = get_template_attribute('_cider.html', 'hello')
return hello('World')
```

Changelog New in version 0.2. Parameters: * **template_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – the name of the template * **attribute** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – the name of the variable or macro to access Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")

Configuration
-------------

`class flask.Config(root_path, defaults=None)` Works exactly like a dict but provides ways to fill it from files or special dictionaries. There are two common patterns to populate the config. Either you can fill the config from a config file:

```
app.config.from_pyfile('yourconfig.cfg')
```

Or alternatively you can define the configuration options in the module that calls [`from_object()`](#flask.Config.from_object "flask.Config.from_object") or provide an import path to a module that should be loaded. It is also possible to tell it to use the same module and, in that way, provide the configuration values just before the call:

```
DEBUG = True
SECRET_KEY = 'development key'
app.config.from_object(__name__)
```

In both cases (loading from any Python file or loading from modules), only uppercase keys are added to the config. This makes it possible to use lowercase values in the config file for temporary values that are not added to the config or to define the config keys in the same file that implements the application. Probably the most interesting way to load configurations is from an environment variable pointing to a file:

```
app.config.from_envvar('YOURAPPLICATION_SETTINGS')
```

In this case, before launching the application, you have to set this environment variable to the file you want to use. On Linux and OS X, use the `export` statement:

```
export YOURAPPLICATION_SETTINGS='/path/to/config/file'
```

On Windows, use `set` instead. Parameters: * **root_path** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [os.PathLike](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.11)")) – the path that files are read relative to. When the config object is created by the application, this is the application’s [`root_path`](#flask.Flask.root_path "flask.Flask.root_path"). * **defaults** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)") *|* *None*) – an optional dictionary of default values

`from_envvar(variable_name, silent=False)` Loads a configuration from an environment variable pointing to a configuration file. This is basically just a shortcut with nicer error messages for this line of code:

```
app.config.from_pyfile(os.environ['YOURAPPLICATION_SETTINGS'])
```

Parameters: * **variable_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – name of the environment variable * **silent** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – set to `True` if you want silent failure for missing files. Returns: `True` if the file was loaded successfully.
Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") `from_file(filename, load, silent=False, text=True)` Update the values in the config from a file that is loaded using the `load` parameter. The loaded data is passed to the [`from_mapping()`](#flask.Config.from_mapping "flask.Config.from_mapping") method. ``` import json app.config.from_file("config.json", load=json.load) import tomllib app.config.from_file("config.toml", load=tomllib.load, text=False) ``` Parameters: * **filename** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [PathLike](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.11)")) – The path to the data file. This can be an absolute path or relative to the config root path. * **load** (`Callable[[Reader], Mapping]` where `Reader` implements a `read` method.) – A callable that takes a file handle and returns a mapping of loaded data from the file. * **silent** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Ignore the file if it doesn’t exist. * **text** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Open the file in text or binary mode. Returns: `True` if the file was loaded successfully. Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") Changed in version 2.3: The `text` parameter was added. Changelog New in version 2.0. `from_mapping(mapping=None, **kwargs)` Updates the config like `update()` ignoring items with non-upper keys. Returns: Always returns `True`. Parameters: * **mapping** ([Mapping](https://docs.python.org/3/library/typing.html#typing.Mapping "(in Python v3.11)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")*,* [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")*]* *|* *None*) – * **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") Changelog New in version 0.11. `from_object(obj)` Updates the values from the given object. An object can be of one of the following two types: * a string: in this case the object with that name will be imported * an actual object reference: that object is used directly Objects are usually either modules or classes. [`from_object()`](#flask.Config.from_object "flask.Config.from_object") loads only the uppercase attributes of the module/class. A `dict` object will not work with [`from_object()`](#flask.Config.from_object "flask.Config.from_object") because the keys of a `dict` are not attributes of the `dict` class. Example of module-based configuration: ``` app.config.from_object('yourapplication.default_config') from yourapplication import default_config app.config.from_object(default_config) ``` Nothing is done to the object before loading. If the object is a class and has `@property` attributes, it needs to be instantiated before being passed to this method. You should not use this function to load the actual configuration but rather configuration defaults. The actual config should be loaded with [`from_pyfile()`](#flask.Config.from_pyfile "flask.Config.from_pyfile") and ideally from a location not within the package because the package might be installed system wide. 
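To illustrate that split, here is a hedged sketch; the module path and settings file location are hypothetical:

```
# yourapplication/default_config.py -- defaults shipped with the package
DEBUG = False
SECRET_KEY = 'change me in production'
```

```
from flask import Flask

app = Flask(__name__)
# Load the packaged defaults first with from_object()...
app.config.from_object('yourapplication.default_config')
# ...then let a deployment-specific file override them.
app.config.from_pyfile('/etc/yourapplication/settings.cfg', silent=True)
```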
See [Development / Production](../config/index#config-dev-prod) for an example of class-based configuration using [`from_object()`](#flask.Config.from_object "flask.Config.from_object"). Parameters: **obj** ([object](https://docs.python.org/3/library/functions.html#object "(in Python v3.11)") *|* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – an import name or object Return type: None `from_prefixed_env(prefix='FLASK', *, loads=<function loads>)` Load any environment variables that start with `FLASK_`, dropping the prefix from the env key for the config key. Values are passed through a loading function to attempt to convert them to more specific types than strings. Keys are loaded in [`sorted()`](https://docs.python.org/3/library/functions.html#sorted "(in Python v3.11)") order. The default loading function attempts to parse values as any valid JSON type, including dicts and lists. Specific items in nested dicts can be set by separating the keys with double underscores (`__`). If an intermediate key doesn’t exist, it will be initialized to an empty dict. Parameters: * **prefix** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – Load env vars that start with this prefix, separated with an underscore (`_`). * **loads** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")*[**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")*]**,* [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")*]*) – Pass each string value to this function and use the returned value as the config value. If any error is raised it is ignored and the value remains a string. The default is [`json.loads()`](#flask.json.loads "flask.json.loads"). Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") Changelog New in version 2.1. `from_pyfile(filename, silent=False)` Updates the values in the config from a Python file. This function behaves as if the file was imported as module with the [`from_object()`](#flask.Config.from_object "flask.Config.from_object") function. Parameters: * **filename** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [PathLike](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.11)")) – the filename of the config. This can either be an absolute filename or a filename relative to the root path. * **silent** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – set to `True` if you want silent failure for missing files. Returns: `True` if the file was loaded successfully. Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)") Changelog New in version 0.7: `silent` parameter. `get_namespace(namespace, lowercase=True, trim_namespace=True)` Returns a dictionary containing a subset of configuration options that match the specified namespace/prefix. Example usage: ``` app.config['IMAGE_STORE_TYPE'] = 'fs' app.config['IMAGE_STORE_PATH'] = '/var/app/images' app.config['IMAGE_STORE_BASE_URL'] = 'http://img.website.com' image_store_config = app.config.get_namespace('IMAGE_STORE_') ``` The resulting dictionary `image_store_config` would look like: ``` { 'type': 'fs', 'path': '/var/app/images', 'base_url': 'http://img.website.com' } ``` This is often useful when configuration options map directly to keyword arguments in functions or class constructors. 
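Continuing the example above, the namespace dict can be splatted straight into a constructor; the `ImageStore` class here is hypothetical:

```
# Assumes ImageStore(type=..., path=..., base_url=...) exists in your code.
image_store = ImageStore(**image_store_config)
```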
Parameters: * **namespace** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – a configuration namespace * **lowercase** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – a flag indicating if the keys of the resulting dictionary should be lowercase * **trim_namespace** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – a flag indicating if the keys of the resulting dictionary should not include the namespace Return type: [dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)"), [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")] Changelog New in version 0.11. Stream Helpers -------------- `flask.stream_with_context(generator_or_function)` Request contexts disappear when the response is started on the server. This is done for efficiency reasons and to make it less likely to encounter memory leaks with badly written WSGI middlewares. The downside is that if you are using streamed responses, the generator cannot access request bound information any more. This function however can help you keep the context around for longer: ``` from flask import stream_with_context, request, Response @app.route('/stream') def streamed_response(): @stream_with_context def generate(): yield 'Hello ' yield request.args['name'] yield '!' return Response(generate()) ``` Alternatively it can also be used around a specific generator: ``` from flask import stream_with_context, request, Response @app.route('/stream') def streamed_response(): def generate(): yield 'Hello ' yield request.args['name'] yield '!' return Response(stream_with_context(generate())) ``` Changelog New in version 0.9. Parameters: **generator_or_function** ([Iterator](https://docs.python.org/3/library/typing.html#typing.Iterator "(in Python v3.11)") *|* [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)")*[**[**...**]**,* [Iterator](https://docs.python.org/3/library/typing.html#typing.Iterator "(in Python v3.11)")*]*) – Return type: [Iterator](https://docs.python.org/3/library/typing.html#typing.Iterator "(in Python v3.11)") Useful Internals ---------------- `class flask.ctx.RequestContext(app, environ, request=None, session=None)` The request context contains per-request information. The Flask app creates and pushes it at the beginning of the request, then pops it at the end of the request. It will create the URL adapter and request object for the WSGI environment provided. Do not attempt to use this class directly, instead use [`test_request_context()`](#flask.Flask.test_request_context "flask.Flask.test_request_context") and [`request_context()`](#flask.Flask.request_context "flask.Flask.request_context") to create this object. When the request context is popped, it will evaluate all the functions registered on the application for teardown execution ([`teardown_request()`](#flask.Flask.teardown_request "flask.Flask.teardown_request")). The request context is automatically popped at the end of the request. When using the interactive debugger, the context will be restored so `request` is still accessible. Similarly, the test client can preserve the context after the request ends. However, teardown functions may already have closed some resources such as database connections. 
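A small sketch of the recommended route via `test_request_context()`; the path and assertion are illustrative:

```
from flask import Flask, request

app = Flask(__name__)

with app.test_request_context('/hello', method='GET'):
    # Inside the block a request context is pushed, so request-bound
    # globals such as `request` are usable.
    assert request.path == '/hello'
```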
Parameters: * **app** ([Flask](#flask.Flask "flask.Flask")) – * **environ** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.11)")) – * **request** ([Request](#flask.Request "flask.Request") *|* *None*) – * **session** ([SessionMixin](#flask.sessions.SessionMixin "flask.sessions.SessionMixin") *|* *None*) – `copy()` Creates a copy of this request context with the same request object. This can be used to move a request context to a different greenlet. Because the actual request object is the same, this cannot be used to move a request context to a different thread unless access to the request object is locked. Changelog Changed in version 1.1: The current session object is used instead of reloading the original data. This prevents `flask.session` from pointing to an out-of-date object. New in version 0.10. Return type: [RequestContext](#flask.ctx.RequestContext "flask.ctx.RequestContext") `match_request()` Can be overridden by a subclass to hook into the matching of the request. Return type: None `pop(exc=<object object>)` Pops the request context and unbinds it. This will also trigger the execution of functions registered by the [`teardown_request()`](#flask.Flask.teardown_request "flask.Flask.teardown_request") decorator. Changelog Changed in version 0.9: Added the `exc` argument. Parameters: **exc** ([BaseException](https://docs.python.org/3/library/exceptions.html#BaseException "(in Python v3.11)") *|* *None*) – Return type: None

`flask.globals.request_ctx` The current [`RequestContext`](#flask.ctx.RequestContext "flask.ctx.RequestContext"). If a request context is not active, accessing attributes on this proxy will raise a `RuntimeError`. This is an internal object that is essential to how Flask handles requests. Accessing this should not be needed in most cases. Most likely you want [`request`](#flask.request "flask.request") and [`session`](#flask.session "flask.session") instead.

`class flask.ctx.AppContext(app)` The app context contains application-specific information. An app context is created and pushed at the beginning of each request if one is not already active. An app context is also pushed when running CLI commands. Parameters: **app** ([Flask](#flask.Flask "flask.Flask")) – `pop(exc=<object object>)` Pops the app context. Parameters: **exc** ([BaseException](https://docs.python.org/3/library/exceptions.html#BaseException "(in Python v3.11)") *|* *None*) – Return type: None `push()` Binds the app context to the current context. Return type: None

`flask.globals.app_ctx` The current [`AppContext`](#flask.ctx.AppContext "flask.ctx.AppContext"). If an app context is not active, accessing attributes on this proxy will raise a `RuntimeError`. This is an internal object that is essential to how Flask handles requests. Accessing this should not be needed in most cases. Most likely you want [`current_app`](#flask.current_app "flask.current_app") and [`g`](#flask.g "flask.g") instead.

`class flask.blueprints.BlueprintSetupState(blueprint, app, options, first_registration)` Temporary holder object for registering a blueprint with the application. An instance of this class is created by the [`make_setup_state()`](#flask.Blueprint.make_setup_state "flask.Blueprint.make_setup_state") method and later passed to all register callback functions.
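As a rough sketch of how a register callback receives this state (the blueprint name and callback body are assumptions for illustration):

```
from flask import Blueprint

bp = Blueprint('example', __name__)

def on_register(state):
    # `state` is the BlueprintSetupState for this registration.
    print('registered with url_prefix:', state.url_prefix)

# Deferred until the blueprint is registered on an application.
bp.record(on_register)
```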
Parameters: * **blueprint** ([Blueprint](#flask.Blueprint "flask.blueprints.Blueprint")) – * **app** ([Flask](#flask.Flask "flask.Flask")) – * **options** (*t.Any*) – * **first_registration** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – `add_url_rule(rule, endpoint=None, view_func=None, **options)` A helper method to register a rule (and optionally a view function) to the application. The endpoint is automatically prefixed with the blueprint’s name. Parameters: * **rule** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) – * **endpoint** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – * **view_func** ([Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.11)") *|* *None*) – * **options** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – Return type: None `app` a reference to the current application `blueprint` a reference to the blueprint that created this setup state. `first_registration` because blueprints can be registered multiple times with the application, and not everything wants to be registered multiple times on it, this attribute can be used to figure out whether the blueprint has already been registered. `options` a dictionary with all options that were passed to the [`register_blueprint()`](#flask.Flask.register_blueprint "flask.Flask.register_blueprint") method. `subdomain` The subdomain that the blueprint should be active for, `None` otherwise. `url_defaults` A dictionary with URL defaults that is added to each and every URL that was defined with the blueprint. `url_prefix` The prefix that should be used for all URLs defined on the blueprint.

Signals
-------

Signals are provided by the [Blinker](https://blinker.readthedocs.io/) library. See [Signals](../signals/index) for an introduction.

`flask.template_rendered` This signal is sent when a template was successfully rendered. The signal is invoked with the instance of the template as `template` and the context as a dictionary (named `context`). Example subscriber:

```
def log_template_renders(sender, template, context, **extra):
    sender.logger.debug('Rendering template "%s" with context %s',
                        template.name or 'string template', context)

from flask import template_rendered
template_rendered.connect(log_template_renders, app)
```

`flask.before_render_template` This signal is sent before the template rendering process. The signal is invoked with the instance of the template as `template` and the context as a dictionary (named `context`). Example subscriber:

```
def log_template_renders(sender, template, context, **extra):
    sender.logger.debug('Rendering template "%s" with context %s',
                        template.name or 'string template', context)

from flask import before_render_template
before_render_template.connect(log_template_renders, app)
```

`flask.request_started` This signal is sent when the request context is set up, before any request processing happens. Because the request context is already bound, the subscriber can access the request with the standard global proxies such as [`request`](#flask.request "flask.request"). Example subscriber:

```
def log_request(sender, **extra):
    sender.logger.debug('Request context is set up')

from flask import request_started
request_started.connect(log_request, app)
```

`flask.request_finished` This signal is sent right before the response is sent to the client. It is passed the response to be sent named `response`.
Example subscriber:

```
def log_response(sender, response, **extra):
    sender.logger.debug('Request context is about to close down. '
                        'Response: %s', response)

from flask import request_finished
request_finished.connect(log_response, app)
```

`flask.got_request_exception` This signal is sent when an unhandled exception happens during request processing, including when debugging. The exception is passed to the subscriber as `exception`. This signal is not sent for [`HTTPException`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.HTTPException "(in Werkzeug v2.3.x)"), or other exceptions that have error handlers registered, unless the exception was raised from an error handler. This example shows how to do some extra logging if a theoretical `SecurityException` was raised:

```
from flask import got_request_exception

def log_security_exception(sender, exception, **extra):
    if not isinstance(exception, SecurityException):
        return

    security_logger.exception(
        f"SecurityException at {request.url!r}",
        exc_info=exception,
    )

got_request_exception.connect(log_security_exception, app)
```

`flask.request_tearing_down` This signal is sent when the request is tearing down. This is always called, even if an exception occurred. Currently functions listening to this signal are called after the regular teardown handlers, but this is not something you can rely on. Example subscriber:

```
def close_db_connection(sender, **extra):
    session.close()

from flask import request_tearing_down
request_tearing_down.connect(close_db_connection, app)
```

As of Flask 0.9, this will also be passed an `exc` keyword argument that has a reference to the exception that caused the teardown if there was one.

`flask.appcontext_tearing_down` This signal is sent when the app context is tearing down. This is always called, even if an exception occurred. Currently functions listening to this signal are called after the regular teardown handlers, but this is not something you can rely on. Example subscriber:

```
def close_db_connection(sender, **extra):
    session.close()

from flask import appcontext_tearing_down
appcontext_tearing_down.connect(close_db_connection, app)
```

This will also be passed an `exc` keyword argument that has a reference to the exception that caused the teardown if there was one.

`flask.appcontext_pushed` This signal is sent when an application context is pushed. The sender is the application. This is usually useful for unit tests in order to temporarily hook in information. For instance, it can be used to set a resource early onto the `g` object. Example usage:

```
from contextlib import contextmanager
from flask import appcontext_pushed, g

@contextmanager
def user_set(app, user):
    def handler(sender, **kwargs):
        g.user = user
    with appcontext_pushed.connected_to(handler, app):
        yield
```

And in the test code:

```
def test_user_me(self):
    with user_set(app, 'john'):
        c = app.test_client()
        resp = c.get('/users/me')
        assert resp.data == b'username=john'
```

Changelog New in version 0.10.

`flask.appcontext_popped` This signal is sent when an application context is popped. The sender is the application. This usually falls in line with the [`appcontext_tearing_down`](#flask.appcontext_tearing_down "flask.appcontext_tearing_down") signal. Changelog New in version 0.10.

`flask.message_flashed` This signal is sent when the application is flashing a message. The message is sent as the `message` keyword argument and the category as `category`.
Example subscriber:

```
recorded = []

def record(sender, message, category, **extra):
    recorded.append((message, category))

from flask import message_flashed
message_flashed.connect(record, app)
```

Changelog

New in version 0.10.

`signals.signals_available`

Deprecated since version 2.3: Will be removed in Flask 2.4. Signals are always available.

Class-Based Views
-----------------

Changelog

New in version 0.7.

`class flask.views.View`

Subclass this class and override [`dispatch_request()`](#flask.views.View.dispatch_request "flask.views.View.dispatch_request") to create a generic class-based view. Call [`as_view()`](#flask.views.View.as_view "flask.views.View.as_view") to create a view function that creates an instance of the class with the given arguments and calls its `dispatch_request` method with any URL variables. See [Class-based Views](../views/index) for a detailed guide.

```
class Hello(View):
    init_every_request = False

    def dispatch_request(self, name):
        return f"Hello, {name}!"

app.add_url_rule(
    "/hello/<name>", view_func=Hello.as_view("hello")
)
```

Set [`methods`](#flask.views.View.methods "flask.views.View.methods") on the class to change what methods the view accepts. Set [`decorators`](#flask.views.View.decorators "flask.views.View.decorators") on the class to apply a list of decorators to the generated view function. Decorators applied to the class itself will not be applied to the generated view function! Set [`init_every_request`](#flask.views.View.init_every_request "flask.views.View.init_every_request") to `False` for efficiency, unless you need to store request-global data on `self`.

`classmethod as_view(name, *class_args, **class_kwargs)`

Convert the class into a view function that can be registered for a route. By default, the generated view will create a new instance of the view class for every request and call its [`dispatch_request()`](#flask.views.View.dispatch_request "flask.views.View.dispatch_request") method. If the view class sets [`init_every_request`](#flask.views.View.init_every_request "flask.views.View.init_every_request") to `False`, the same instance will be used for every request. Except for `name`, all other arguments passed to this method are forwarded to the view class `__init__` method.

Changelog

Changed in version 2.2: Added the `init_every_request` class attribute.

Parameters:
* **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")) –
* **class_args** (*t.Any*) –
* **class_kwargs** (*t.Any*) –

Return type: ft.RouteCallable

`decorators: ClassVar[list[Callable]] = []`

A list of decorators to apply, in order, to the generated view function. Remember that `@decorator` syntax is applied bottom to top, so the first decorator in the list would be the bottom decorator.

Changelog

New in version 0.8.

`dispatch_request()`

The actual view function behavior. Subclasses must override this and return a valid response. Any variables from the URL rule are passed as keyword arguments.

Return type: ft.ResponseReturnValue

`init_every_request: ClassVar[bool] = True`

Create a new instance of this view class for every request by default. If a view subclass sets this to `False`, the same instance is used for every request. A single instance is more efficient, especially if complex setup is done during init. However, storing data on `self` is no longer safe across requests, and [`g`](#flask.g "flask.g") should be used instead.

Changelog

New in version 2.2.
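To illustrate what this attribute changes, here is a minimal hypothetical sketch (the `load_model` and `lookup_user` helpers are made up): with `init_every_request` set to `False`, immutable setup can live on `self`, while per-request state belongs on [`g`](#flask.g "flask.g").

```
from flask import g
from flask.views import View

class ModelView(View):
    init_every_request = False  # one shared instance serves all requests

    def __init__(self):
        # Safe: runs once; the loaded model is shared, read-only state.
        self.model = load_model()  # hypothetical expensive setup

    def dispatch_request(self):
        # Unsafe with a shared instance: self.user = ... (leaks across requests).
        # Request-scoped data goes on g instead:
        g.user = lookup_user()  # hypothetical per-request lookup
        return self.model.describe(g.user)
```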
`methods: ClassVar[Collection[str] | None] = None`

The methods this view is registered for. Uses the same default (`["GET", "HEAD", "OPTIONS"]`) as `route` and `add_url_rule` by default.

`provide_automatic_options: ClassVar[bool | None] = None`

Control whether the `OPTIONS` method is handled automatically. Uses the same default (`True`) as `route` and `add_url_rule` by default.

`class flask.views.MethodView`

Dispatches request methods to the corresponding instance methods. For example, if you implement a `get` method, it will be used to handle `GET` requests. This can be useful for defining a REST API. `methods` is automatically set based on the methods defined on the class. See [Class-based Views](../views/index) for a detailed guide.

```
class CounterAPI(MethodView):
    def get(self):
        return str(session.get("counter", 0))

    def post(self):
        session["counter"] = session.get("counter", 0) + 1
        return redirect(url_for("counter"))

app.add_url_rule(
    "/counter", view_func=CounterAPI.as_view("counter")
)
```

`dispatch_request(**kwargs)`

The actual view function behavior. Subclasses must override this and return a valid response. Any variables from the URL rule are passed as keyword arguments.

Parameters: **kwargs** (*t.Any*) –

Return type: ft.ResponseReturnValue

URL Route Registrations
-----------------------

Generally there are three ways to define rules for the routing system:

1. You can use the [`flask.Flask.route()`](#flask.Flask.route "flask.Flask.route") decorator.
2. You can use the [`flask.Flask.add_url_rule()`](#flask.Flask.add_url_rule "flask.Flask.add_url_rule") function.
3. You can directly access the underlying Werkzeug routing system, which is exposed as [`flask.Flask.url_map`](#flask.Flask.url_map "flask.Flask.url_map").

Variable parts in the route can be specified with angle brackets (`/user/<username>`). By default a variable part in the URL accepts any string without a slash, but a different converter can be specified as well by using `<converter:name>`. Variable parts are passed to the view function as keyword arguments. The following converters are available:

| Converter | Description |
| --- | --- |
| `string` | accepts any text without a slash (the default) |
| `int` | accepts integers |
| `float` | like `int` but for floating point values |
| `path` | like the default but also accepts slashes |
| `any` | matches one of the items provided |
| `uuid` | accepts UUID strings |

Custom converters can be defined using [`flask.Flask.url_map`](#flask.Flask.url_map "flask.Flask.url_map"). Here are some examples:

```
@app.route('/')
def index():
    pass

@app.route('/<username>')
def show_user(username):
    pass

@app.route('/post/<int:post_id>')
def show_post(post_id):
    pass
```

An important detail to keep in mind is how Flask deals with trailing slashes. The idea is to keep each URL unique, so the following rules apply:

1. If a rule ends with a slash and is requested without a slash by the user, the user is automatically redirected to the same page with a trailing slash attached.
2. If a rule does not end with a trailing slash and the user requests the page with a trailing slash, a 404 not found is raised.

This is consistent with how web servers deal with static files. This also makes it possible to use relative link targets safely.

You can also define multiple rules for the same function. They have to be unique however. Defaults can also be specified.
Here, for example, is a definition for a URL that accepts an optional page:

```
@app.route('/users/', defaults={'page': 1})
@app.route('/users/page/<int:page>')
def show_users(page):
    pass
```

This specifies that `/users/` will be the URL for page one and `/users/page/N` will be the URL for page `N`. If a URL contains a default value, it will be redirected to its simpler form with a 301 redirect. In the above example, `/users/page/1` will be redirected to `/users/`. If your route handles `GET` and `POST` requests, make sure the default route only handles `GET`, as redirects can’t preserve form data.

```
@app.route('/region/', defaults={'id': 1})
@app.route('/region/<int:id>', methods=['GET', 'POST'])
def region(id):
    pass
```

Here are the parameters that [`route()`](#flask.Flask.route "flask.Flask.route") and [`add_url_rule()`](#flask.Flask.add_url_rule "flask.Flask.add_url_rule") accept. The only difference is that with the route parameter the view function is defined with the decorator instead of the `view_func` parameter.

| Parameter | Description |
| --- | --- |
| `rule` | the URL rule as string |
| `endpoint` | the endpoint for the registered URL rule. Flask itself assumes that the name of the view function is the name of the endpoint if not explicitly stated. |
| `view_func` | the function to call when serving a request to the provided endpoint. If this is not provided, one can specify the function later by storing it in the [`view_functions`](#flask.Flask.view_functions "flask.Flask.view_functions") dictionary with the endpoint as key. |
| `defaults` | A dictionary with defaults for this rule. See the example above for how defaults work. |
| `subdomain` | specifies the rule for the subdomain in case subdomain matching is in use. If not specified the default subdomain is assumed. |
| `**options` | the options to be forwarded to the underlying [`Rule`](https://werkzeug.palletsprojects.com/en/2.3.x/routing/#werkzeug.routing.Rule "(in Werkzeug v2.3.x)") object. A change from Werkzeug is the handling of method options: `methods` is a list of methods this rule should be limited to (`GET`, `POST` etc.). By default a rule just listens for `GET` (and implicitly `HEAD`). Starting with Flask 0.6, `OPTIONS` is implicitly added and handled by the standard request handling. Options have to be specified as keyword arguments. |

View Function Options
---------------------

For internal usage the view functions can have some attributes attached to customize behavior the view function would normally not have control over. The following attributes can be provided optionally to either override some defaults of [`add_url_rule()`](#flask.Flask.add_url_rule "flask.Flask.add_url_rule") or general behavior:

* `__name__`: The name of a function is by default used as endpoint. If endpoint is provided explicitly, this value is used. Additionally this will be prefixed with the name of the blueprint by default, which cannot be customized from the function itself.
* `methods`: If methods are not provided when the URL rule is added, Flask will look on the view function object itself to see if a `methods` attribute exists. If it does, it will pull the information for the methods from there.
* `provide_automatic_options`: if this attribute is set, Flask will either force enable or disable the automatic implementation of the HTTP `OPTIONS` response. This can be useful when working with decorators that want to customize the `OPTIONS` response on a per-view basis.
* `required_methods`: if this attribute is set, Flask will always add these methods when registering a URL rule, even if the methods were explicitly overridden in the `route()` call.

Full example:

```
def index():
    if request.method == 'OPTIONS':
        # custom options handling here
        ...
    return 'Hello World!'

index.provide_automatic_options = False
index.methods = ['GET', 'OPTIONS']

app.add_url_rule('/', view_func=index)
```

Changelog

New in version 0.8: The `provide_automatic_options` functionality was added.

Command Line Interface
----------------------

`class flask.cli.FlaskGroup(add_default_commands=True, create_app=None, add_version_option=True, load_dotenv=True, set_debug_flag=True, **extra)`

Special subclass of the [`AppGroup`](#flask.cli.AppGroup "flask.cli.AppGroup") group that supports loading more commands from the configured Flask app. Normally a developer does not have to interface with this class, but there are some very advanced use cases for which it makes sense to create an instance of this. See [Custom Scripts](../cli/index#custom-scripts).

Parameters:
* **add_default_commands** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – if this is True then the default run and shell commands will be added.
* **add_version_option** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – adds the `--version` option.
* **create_app** (*t.Callable**[**...**,* [Flask](#flask.Flask "flask.Flask")*]* *|* *None*) – an optional callback that is passed the script info and returns the loaded app.
* **load_dotenv** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Load the nearest `.env` and `.flaskenv` files to set environment variables. Will also change the working directory to the directory containing the first file found.
* **set_debug_flag** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) – Set the app’s debug flag.
* **extra** (*t.Any*) –

Changelog

Changed in version 2.2: Added the `-A/--app`, `--debug/--no-debug`, `-e/--env-file` options.

Changed in version 2.2: An app context is pushed when running `app.cli` commands, so `@with_appcontext` is no longer required for those commands.

Changed in version 1.0: If installed, python-dotenv will be used to load environment variables from `.env` and `.flaskenv` files.

`get_command(ctx, name)`

Given a context and a command name, this returns a `Command` object if it exists or returns `None`.

`list_commands(ctx)`

Returns a list of subcommand names in the order they should appear.

`make_context(info_name, args, parent=None, **extra)`

This function, when given an info name and arguments, will kick off the parsing and create a new `Context`. It does not invoke the actual command callback, though. To quickly customize the context class used without overriding this method, set the `context_class` attribute.

Parameters:
* **info_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) – the info name for this invocation. Generally this is the most descriptive name for the script or command. For the toplevel script it’s usually the name of the script, for commands below it it’s the name of the command.
* **args** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.11)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")*]*) – the arguments to parse as list of strings.
* **parent** ([Context](https://click.palletsprojects.com/en/8.1.x/api/#click.Context "(in Click v8.1.x)") *|* *None*) – the parent context if available.
* **extra** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) – extra keyword arguments forwarded to the context constructor.

Return type: [Context](https://click.palletsprojects.com/en/8.1.x/api/#click.Context "(in Click v8.1.x)")

Changed in version 8.0: Added the `context_class` attribute.

`parse_args(ctx, args)`

Given a context and a list of arguments, this creates the parser and parses the arguments, then modifies the context as necessary. This is automatically invoked by [`make_context()`](#flask.cli.FlaskGroup.make_context "flask.cli.FlaskGroup.make_context").

Parameters:
* **ctx** ([Context](https://click.palletsprojects.com/en/8.1.x/api/#click.Context "(in Click v8.1.x)")) –
* **args** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.11)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")*]*) –

Return type: [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.11)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")]

`class flask.cli.AppGroup(name=None, commands=None, **attrs)`

This works similarly to a regular click [`Group`](https://click.palletsprojects.com/en/8.1.x/api/#click.Group "(in Click v8.1.x)"), but it changes the behavior of the [`command()`](#flask.cli.AppGroup.command "flask.cli.AppGroup.command") decorator so that it automatically wraps the functions in [`with_appcontext()`](#flask.cli.with_appcontext "flask.cli.with_appcontext"). Not to be confused with [`FlaskGroup`](#flask.cli.FlaskGroup "flask.cli.FlaskGroup").

Parameters:
* **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) –
* **commands** ([Dict](https://docs.python.org/3/library/typing.html#typing.Dict "(in Python v3.11)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)")*,* [Command](https://click.palletsprojects.com/en/8.1.x/api/#click.Command "(in Click v8.1.x)")*]* *|* [Sequence](https://docs.python.org/3/library/typing.html#typing.Sequence "(in Python v3.11)")*[*[Command](https://click.palletsprojects.com/en/8.1.x/api/#click.Command "(in Click v8.1.x)")*]* *|* *None*) –
* **attrs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) –

`command(*args, **kwargs)`

This works exactly like the method of the same name on a regular [`click.Group`](https://click.palletsprojects.com/en/8.1.x/api/#click.Group "(in Click v8.1.x)"), but it wraps callbacks in [`with_appcontext()`](#flask.cli.with_appcontext "flask.cli.with_appcontext") unless it’s disabled by passing `with_appcontext=False`.

`group(*args, **kwargs)`

This works exactly like the method of the same name on a regular [`click.Group`](https://click.palletsprojects.com/en/8.1.x/api/#click.Group "(in Click v8.1.x)"), but it defaults the group class to [`AppGroup`](#flask.cli.AppGroup "flask.cli.AppGroup").

`class flask.cli.ScriptInfo(app_import_path=None, create_app=None, set_debug_flag=True)`

Helper object to deal with Flask applications. This is usually not necessary to interface with, as it’s used internally in the dispatching to click. In future versions of Flask this object will most likely play a bigger role.
Typically it’s created automatically by the [`FlaskGroup`](#flask.cli.FlaskGroup "flask.cli.FlaskGroup"), but you can also manually create it and pass it onwards as a click object.

Parameters:
* **app_import_path** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* *None*) –
* **create_app** (*t.Callable**[**...**,* [Flask](#flask.Flask "flask.Flask")*]* *|* *None*) –
* **set_debug_flag** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")) –

`app_import_path`

Optionally the import path for the Flask application.

`create_app`

Optionally a function that is passed the script info to create the instance of the application.

`data: dict[t.Any, t.Any]`

A dictionary with arbitrary data that can be associated with this script info.

`load_app()`

Loads the Flask app (if not yet loaded) and returns it. Calling this multiple times will just result in the already loaded app being returned.

Return type: [Flask](#flask.Flask "flask.Flask")

`flask.cli.load_dotenv(path=None)`

Load “dotenv” files in order of precedence to set environment variables. If an env var is already set, it is not overwritten, so earlier files in the list are preferred over later files. This is a no-op if [python-dotenv](https://github.com/theskumar/python-dotenv#readme) is not installed.

Parameters: **path** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.11)") *|* [PathLike](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.11)") *|* *None*) – Load the file at this location instead of searching.

Returns: `True` if a file was loaded.

Return type: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.11)")

Changelog

Changed in version 2.0: The current directory is not changed to the location of the loaded file.

Changed in version 2.0: When loading the env files, set the default encoding to UTF-8.

Changed in version 1.1.0: Returns `False` when python-dotenv is not installed, or when the given path isn’t a file.

New in version 1.0.

`flask.cli.with_appcontext(f)`

Wraps a callback so that it’s guaranteed to be executed with the script’s application context. Custom commands (and their options) registered under `app.cli` or `blueprint.cli` will always have an app context available, so this decorator is not required in that case.

Changelog

Changed in version 2.2: The app context is active for subcommands as well as the decorated callback. The app context is always available to `app.cli` command and parameter callbacks.

`flask.cli.pass_script_info(f)`

Marks a function so that an instance of [`ScriptInfo`](#flask.cli.ScriptInfo "flask.cli.ScriptInfo") is passed as first argument to the click callback.

Parameters: **f** (*F*) –

Return type: *F*

`flask.cli.run_command = <Command run>`

Run a local development server. This server is for development purposes only. It does not provide the stability, security, or performance of production WSGI servers. The reloader and debugger are enabled by default with the `--debug` option.

Parameters:
* **args** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) –
* **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) –

Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")

`flask.cli.shell_command = <Command shell>`

Run an interactive Python shell in the context of a given Flask application.
The application will populate the default namespace of this shell according to its configuration. This is useful for executing small snippets of management code without having to manually configure the application.

Parameters:
* **args** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) –
* **kwargs** ([Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")) –

Return type: [Any](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.11)")

Application Factories
=====================

If you are already using packages and blueprints for your application ([Modular Applications with Blueprints](../../blueprints/index)), there are a couple of really nice ways to further improve the experience. A common pattern is creating the application object when the blueprint is imported. But if you move the creation of this object into a function, you can then create multiple instances of this app later.

So why would you want to do this?

1. Testing. You can have instances of the application with different settings to test every case.
2. Multiple instances. Imagine you want to run different versions of the same application. Of course you could have multiple instances with different configs set up in your webserver, but if you use factories, you can have multiple instances of the same application running in the same application process, which can be handy.

So how would you then actually implement that?

Basic Factories
---------------

The idea is to set up the application in a function. Like this:

```
def create_app(config_filename):
    app = Flask(__name__)
    app.config.from_pyfile(config_filename)

    from yourapplication.model import db
    db.init_app(app)

    from yourapplication.views.admin import admin
    from yourapplication.views.frontend import frontend
    app.register_blueprint(admin)
    app.register_blueprint(frontend)

    return app
```

The downside is that you cannot use the application object in the blueprints at import time. You can however use it from within a request. How do you get access to the application with the config? Use [`current_app`](../../api/index#flask.current_app "flask.current_app"):

```
from flask import current_app, Blueprint, render_template

admin = Blueprint('admin', __name__, url_prefix='/admin')

@admin.route('/')
def index():
    return render_template(current_app.config['INDEX_TEMPLATE'])
```

Here we look up the name of a template in the config.

Factories & Extensions
----------------------

It’s preferable to create your extensions and app factories so that the extension object does not initially get bound to the application. Using [Flask-SQLAlchemy](https://flask-sqlalchemy.palletsprojects.com/) as an example, you should not do something along these lines:

```
def create_app(config_filename):
    app = Flask(__name__)
    app.config.from_pyfile(config_filename)

    db = SQLAlchemy(app)
```

But, rather, in model.py (or equivalent):

```
db = SQLAlchemy()
```

and in your application.py (or equivalent):

```
def create_app(config_filename):
    app = Flask(__name__)
    app.config.from_pyfile(config_filename)

    from yourapplication.model import db
    db.init_app(app)
```

Using this design pattern, no application-specific state is stored on the extension object, so one extension object can be used for multiple apps.
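To make the multiple-apps point concrete, here is a minimal sketch of the pattern, assuming Flask-SQLAlchemy is installed (the config filenames are hypothetical):

```
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()  # created unbound, at import time

def create_app(config_filename):
    app = Flask(__name__)
    app.config.from_pyfile(config_filename)
    db.init_app(app)  # bind the shared extension object to this instance
    return app

# The same extension object now serves two independently configured apps.
app = create_app('production.cfg')
test_app = create_app('testing.cfg')
```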
For more information about the design of extensions refer to [Flask Extension Development](https://flask.palletsprojects.com/en/2.3.x/extensiondev/).

Using Applications
------------------

To run such an application, you can use the **flask** command:

```
$ flask --app hello run
```

Flask will automatically detect the factory if it is named `create_app` or `make_app` in `hello`. You can also pass arguments to the factory like this:

```
$ flask --app hello:create_app(local_auth=True) run
```

Then the `create_app` factory in `hello` is called with the keyword argument `local_auth=True`. See [Command Line Interface](../../cli/index) for more detail.

Factory Improvements
--------------------

The factory function above is not very clever, but you can improve it. The following changes are straightforward to implement:

1. Make it possible to pass in configuration values for unit tests so that you don’t have to create config files on the filesystem.
2. Call a function from a blueprint when the application is setting up, so that you have a place to modify attributes of the application (like hooking in before/after request handlers etc.).
3. Add in WSGI middlewares when the application is being created, if necessary.

Application Dispatching
=======================

Application dispatching is the process of combining multiple Flask applications on the WSGI level. You can combine not only Flask applications but any WSGI application. This would allow you to run a Django and a Flask application in the same interpreter side by side if you want. The usefulness of this depends on how the applications work internally.

The fundamental difference from [Large Applications as Packages](../packages/index) is that in this case you are running the same or different Flask applications that are entirely isolated from each other. They run different configurations and are dispatched on the WSGI level.

Working with this Document
--------------------------

Each of the techniques and examples below results in an `application` object that can be run with any WSGI server. For production, see [Deploying to Production](../../deploying/index). For development, Werkzeug provides a server through [`werkzeug.serving.run_simple()`](https://werkzeug.palletsprojects.com/en/2.3.x/serving/#werkzeug.serving.run_simple "(in Werkzeug v2.3.x)"):

```
from werkzeug.serving import run_simple
run_simple('localhost', 5000, application, use_reloader=True)
```

Note that [`run_simple`](https://werkzeug.palletsprojects.com/en/2.3.x/serving/#werkzeug.serving.run_simple "(in Werkzeug v2.3.x)") is not intended for use in production. Use a production WSGI server. See [Deploying to Production](../../deploying/index).

In order to use the interactive debugger, debugging must be enabled both on the application and the simple server. Here is the “hello world” example with debugging and [`run_simple`](https://werkzeug.palletsprojects.com/en/2.3.x/serving/#werkzeug.serving.run_simple "(in Werkzeug v2.3.x)"):

```
from flask import Flask
from werkzeug.serving import run_simple

app = Flask(__name__)
app.debug = True

@app.route('/')
def hello_world():
    return 'Hello World!'
if __name__ == '__main__':
    run_simple('localhost', 5000, app,
               use_reloader=True, use_debugger=True, use_evalex=True)
```

Combining Applications
----------------------

If you have entirely separated applications and you want them to work next to each other in the same Python interpreter process, you can take advantage of `werkzeug.middleware.dispatcher.DispatcherMiddleware`. The idea here is that each Flask application is a valid WSGI application and they are combined by the dispatcher middleware into a larger one that is dispatched based on prefix. For example you could have your main application run on `/` and your backend interface on `/backend`:

```
from werkzeug.middleware.dispatcher import DispatcherMiddleware
from frontend_app import application as frontend
from backend_app import application as backend

application = DispatcherMiddleware(frontend, {
    '/backend': backend
})
```

Dispatch by Subdomain
---------------------

Sometimes you might want to use multiple instances of the same application with different configurations. Assuming the application is created inside a function and you can call that function to instantiate it, that is really easy to implement. In order to develop your application to support creating new instances in functions, have a look at the [Application Factories](../appfactories/index) pattern.

A very common example would be creating applications per subdomain. For instance you configure your webserver to dispatch all requests for all subdomains to your application, and you then use the subdomain information to create user-specific instances. Once you have your server set up to listen on all subdomains, you can use a very simple WSGI application to do the dynamic application creation.

The perfect level for abstraction in that regard is the WSGI layer. You write your own WSGI application that looks at the request that comes in and delegates it to your Flask application. If that application does not exist yet, it is dynamically created and remembered:

```
from threading import Lock

class SubdomainDispatcher:
    def __init__(self, domain, create_app):
        self.domain = domain
        self.create_app = create_app
        self.lock = Lock()
        self.instances = {}

    def get_application(self, host):
        host = host.split(':')[0]
        assert host.endswith(self.domain), 'Configuration error'
        subdomain = host[:-len(self.domain)].rstrip('.')
        with self.lock:
            app = self.instances.get(subdomain)
            if app is None:
                app = self.create_app(subdomain)
                self.instances[subdomain] = app
            return app

    def __call__(self, environ, start_response):
        app = self.get_application(environ['HTTP_HOST'])
        return app(environ, start_response)
```

This dispatcher can then be used like this:

```
from myapplication import create_app, get_user_for_subdomain
from werkzeug.exceptions import NotFound

def make_app(subdomain):
    user = get_user_for_subdomain(subdomain)
    if user is None:
        # if there is no user for that subdomain we still have
        # to return a WSGI application that handles that request.
        # We can then just return the NotFound() exception as
        # an application which will render a default 404 page.
        # You might also redirect the user to the main page then
        return NotFound()

    # otherwise create the application for the specific user
    return create_app(user)

application = SubdomainDispatcher('example.com', make_app)
```

Dispatch by Path
----------------

Dispatching by a path on the URL is very similar.
Instead of looking at the `Host` header to figure out the subdomain, one simply looks at the request path up to the first slash:

```
from threading import Lock
from werkzeug.wsgi import pop_path_info, peek_path_info

class PathDispatcher:
    def __init__(self, default_app, create_app):
        self.default_app = default_app
        self.create_app = create_app
        self.lock = Lock()
        self.instances = {}

    def get_application(self, prefix):
        with self.lock:
            app = self.instances.get(prefix)
            if app is None:
                app = self.create_app(prefix)
                if app is not None:
                    self.instances[prefix] = app
            return app

    def __call__(self, environ, start_response):
        app = self.get_application(peek_path_info(environ))
        if app is not None:
            pop_path_info(environ)
        else:
            app = self.default_app
        return app(environ, start_response)
```

The big difference between this and the subdomain one is that this one falls back to another application if the creator function returns `None`:

```
from myapplication import create_app, default_app, get_user_for_prefix

def make_app(prefix):
    user = get_user_for_prefix(prefix)
    if user is not None:
        return create_app(user)

application = PathDispatcher(default_app, make_app)
```

Using URL Processors
====================

Changelog

New in version 0.7.

Flask 0.7 introduces the concept of URL processors. The idea is that you might have a bunch of resources with common parts in the URL that you don’t always explicitly want to provide. For instance you might have a bunch of URLs that have the language code in them, but you don’t want to have to handle it in every single function yourself.

URL processors are especially helpful when combined with blueprints. We will handle both application-specific URL processors here as well as blueprint-specific ones.

Internationalized Application URLs
----------------------------------

Consider an application like this:

```
from flask import Flask, g

app = Flask(__name__)

@app.route('/<lang_code>/')
def index(lang_code):
    g.lang_code = lang_code
    ...

@app.route('/<lang_code>/about')
def about(lang_code):
    g.lang_code = lang_code
    ...
```

This is an awful lot of repetition as you have to handle the language code setting on the [`g`](../../api/index#flask.g "flask.g") object yourself in every single function. Sure, a decorator could be used to simplify this, but if you want to generate URLs from one function to another you would still have to provide the language code explicitly, which can be annoying.

For the latter, this is where [`url_defaults()`](../../api/index#flask.Flask.url_defaults "flask.Flask.url_defaults") functions come in. They can automatically inject values into a call to [`url_for()`](../../api/index#flask.url_for "flask.url_for"). The code below checks if the language code is not yet in the dictionary of URL values and if the endpoint wants a value named `'lang_code'`:

```
@app.url_defaults
def add_language_code(endpoint, values):
    if 'lang_code' in values or not g.lang_code:
        return
    if app.url_map.is_endpoint_expecting(endpoint, 'lang_code'):
        values['lang_code'] = g.lang_code
```

The method [`is_endpoint_expecting()`](https://werkzeug.palletsprojects.com/en/2.3.x/routing/#werkzeug.routing.Map.is_endpoint_expecting "(in Werkzeug v2.3.x)") of the URL map can be used to figure out if it would make sense to provide a language code for the given endpoint.
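To see the effect, here is a small hypothetical sketch: with the `add_language_code` default in place and `g.lang_code` set for the request, `url_for()` no longer needs an explicit language code (the `products` view below is made up).

```
from flask import g, redirect, url_for

@app.route('/<lang_code>/products')
def products(lang_code):
    g.lang_code = lang_code
    # add_language_code() injects 'lang_code' automatically, so this
    # builds '/en/about' when the current lang_code is 'en':
    return redirect(url_for('about'))
```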
The reverse of that function are [`url_value_preprocessor()`](../../api/index#flask.Flask.url_value_preprocessor "flask.Flask.url_value_preprocessor")s. They are executed right after the request was matched and can execute code based on the URL values. The idea is that they pull information out of the values dictionary and put it somewhere else:

```
@app.url_value_preprocessor
def pull_lang_code(endpoint, values):
    g.lang_code = values.pop('lang_code', None)
```

That way you no longer have to do the `lang_code` assignment to [`g`](../../api/index#flask.g "flask.g") in every function. You can further improve that by writing your own decorator that prefixes URLs with the language code, but the more beautiful solution is using a blueprint. Once the `'lang_code'` is popped from the values dictionary, it will no longer be forwarded to the view function, reducing the code to this:

```
from flask import Flask, g

app = Flask(__name__)

@app.url_defaults
def add_language_code(endpoint, values):
    if 'lang_code' in values or not g.lang_code:
        return
    if app.url_map.is_endpoint_expecting(endpoint, 'lang_code'):
        values['lang_code'] = g.lang_code

@app.url_value_preprocessor
def pull_lang_code(endpoint, values):
    g.lang_code = values.pop('lang_code', None)

@app.route('/<lang_code>/')
def index():
    ...

@app.route('/<lang_code>/about')
def about():
    ...
```

Internationalized Blueprint URLs
--------------------------------

Because blueprints can automatically prefix all URLs with a common string, it’s easy to automatically do that for every function. Furthermore, blueprints can have per-blueprint URL processors, which removes a whole lot of logic from the [`url_defaults()`](../../api/index#flask.Flask.url_defaults "flask.Flask.url_defaults") function because it no longer has to check if the URL is really interested in a `'lang_code'` parameter:

```
from flask import Blueprint, g

bp = Blueprint('frontend', __name__, url_prefix='/<lang_code>')

@bp.url_defaults
def add_language_code(endpoint, values):
    values.setdefault('lang_code', g.lang_code)

@bp.url_value_preprocessor
def pull_lang_code(endpoint, values):
    g.lang_code = values.pop('lang_code')

@bp.route('/')
def index():
    ...

@bp.route('/about')
def about():
    ...
```

Using SQLite 3 with Flask
=========================

In Flask you can easily implement the opening of database connections on demand and closing them when the context dies (usually at the end of the request). Here is a simple example of how you can use SQLite 3 with Flask:

```
import sqlite3
from flask import g

DATABASE = '/path/to/database.db'

def get_db():
    db = getattr(g, '_database', None)
    if db is None:
        db = g._database = sqlite3.connect(DATABASE)
    return db

@app.teardown_appcontext
def close_connection(exception):
    db = getattr(g, '_database', None)
    if db is not None:
        db.close()
```

Now, to use the database, the application must either have an active application context (which is always true if there is a request in flight) or create an application context itself. At that point the `get_db` function can be used to get the current database connection. Whenever the context is destroyed, the database connection will be terminated.

Example:

```
@app.route('/')
def index():
    cur = get_db().cursor()
    ...
```

Note

Please keep in mind that the teardown request and appcontext functions are always executed, even if a before-request handler failed or was never executed.
Because of this we have to make sure here that the database is there before we close it.

Connect on Demand
-----------------

The upside of this approach (connecting on first use) is that this will only open the connection if truly necessary. If you want to use this code outside a request context, you can use it in a Python shell by opening the application context by hand:

```
with app.app_context():
    # now you can use get_db()
```

Easy Querying
-------------

Now in each request handling function you can access `get_db()` to get the current open database connection. To simplify working with SQLite, a row factory function is useful. It is executed for every result returned from the database to convert the result. For instance, in order to get dictionaries instead of tuples, this could be inserted into the `get_db` function we created above:

```
def make_dicts(cursor, row):
    return dict((cursor.description[idx][0], value)
                for idx, value in enumerate(row))

db.row_factory = make_dicts
```

This will make the sqlite3 module return dicts for this database connection, which are much nicer to deal with. Even more simply, we could place this in `get_db` instead:

```
db.row_factory = sqlite3.Row
```

This would use Row objects rather than dicts to return the results of queries. These behave like `namedtuple`s, so we can access them either by index or by key. For example, assuming we have a `sqlite3.Row` called `r` for a row with the columns `id`, `FirstName`, `LastName`, and `MiddleInitial`:

```
>>> # You can get values based on the row's name
>>> r['FirstName']  # John
>>> # Or, you can get them based on index
>>> r[1]  # John
>>> # Row objects are also iterable:
>>> for value in r:
...     print(value)
1
John
Doe
M
```

Additionally, it is a good idea to provide a query function that combines getting the cursor, executing and fetching the results:

```
def query_db(query, args=(), one=False):
    cur = get_db().execute(query, args)
    rv = cur.fetchall()
    cur.close()
    return (rv[0] if rv else None) if one else rv
```

This handy little function, in combination with a row factory, makes working with the database much more pleasant than it is by just using the raw cursor and connection objects. Here is how you can use it:

```
for user in query_db('select * from users'):
    print(user['username'], 'has the id', user['user_id'])
```

Or if you just want a single result:

```
user = query_db('select * from users where username = ?',
                [the_username], one=True)
if user is None:
    print('No such user')
else:
    print(the_username, 'has the id', user['user_id'])
```

To pass variable parts to the SQL statement, use a question mark in the statement and pass in the arguments as a list. Never directly add them to the SQL statement with string formatting, because this makes it possible to attack the application using [SQL Injections](https://en.wikipedia.org/wiki/SQL_injection).

Initial Schemas
---------------

Relational databases need schemas, so applications often ship a `schema.sql` file that creates the database. It’s a good idea to provide a function that creates the database based on that schema. This function can do that for you:

```
def init_db():
    with app.app_context():
        db = get_db()
        with app.open_resource('schema.sql', mode='r') as f:
            db.cursor().executescript(f.read())
        db.commit()
```

You can then create such a database from the Python shell:

```
>>> from yourapplication import init_db
>>> init_db()
```
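Putting the pieces together, here is a hedged end-to-end sketch, assuming `app`, `init_db`, and `query_db` from above are importable from `yourapplication`, and that `schema.sql` defines a `users` table with `username` and `user_id` columns:

```
from yourapplication import app, init_db, query_db

init_db()  # create the tables from schema.sql

with app.app_context():
    user = query_db('select * from users where username = ?',
                    ['admin'], one=True)
    print(user)  # None until a matching row has been inserted
```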
SQLAlchemy in Flask
===================

Many people prefer [SQLAlchemy](https://www.sqlalchemy.org/) for database access. In this case it’s encouraged to use a package instead of a module for your flask application and drop the models into a separate module ([Large Applications as Packages](../packages/index)). While that is not necessary, it makes a lot of sense.

There are four very common ways to use SQLAlchemy. I will outline each of them here:

Flask-SQLAlchemy Extension
--------------------------

Because SQLAlchemy is a common database abstraction layer and object relational mapper that requires a little bit of configuration effort, there is a Flask extension that handles that for you. This is recommended if you want to get started quickly. You can download [Flask-SQLAlchemy](https://flask-sqlalchemy.palletsprojects.com/) from [PyPI](https://pypi.org/project/Flask-SQLAlchemy/).

Declarative
-----------

The declarative extension in SQLAlchemy is the most recent method of using SQLAlchemy. It allows you to define tables and models in one go, similar to how Django works. In addition to the following text I recommend the official documentation on the [declarative](https://docs.sqlalchemy.org/en/latest/orm/extensions/declarative/) extension.

Here’s the example `database.py` module for your application:

```
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite:////tmp/test.db')
db_session = scoped_session(sessionmaker(autocommit=False,
                                         autoflush=False,
                                         bind=engine))
Base = declarative_base()
Base.query = db_session.query_property()

def init_db():
    # import all modules here that might define models so that
    # they will be registered properly on the metadata. Otherwise
    # you will have to import them first before calling init_db()
    import yourapplication.models
    Base.metadata.create_all(bind=engine)
```

To define your models, just subclass the `Base` class that was created by the code above. If you are wondering why we don’t have to care about threads here (like we did in the SQLite3 example above with the [`g`](../../api/index#flask.g "flask.g") object): that’s because SQLAlchemy does that for us already with the [`scoped_session`](https://docs.sqlalchemy.org/en/20/orm/contextual.html#sqlalchemy.orm.scoped_session "(in SQLAlchemy v2.0)").

To use SQLAlchemy in a declarative way with your application, you just have to put the following code into your application module.
Flask will automatically remove database sessions at the end of the request or when the application shuts down:

```
from yourapplication.database import db_session

@app.teardown_appcontext
def shutdown_session(exception=None):
    db_session.remove()
```

Here is an example model (put this into `models.py`, e.g.):

```
from sqlalchemy import Column, Integer, String
from yourapplication.database import Base

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True)
    email = Column(String(120), unique=True)

    def __init__(self, name=None, email=None):
        self.name = name
        self.email = email

    def __repr__(self):
        return f'<User {self.name!r}>'
```

To create the database you can use the `init_db` function:

```
>>> from yourapplication.database import init_db
>>> init_db()
```

You can insert entries into the database like this:

```
>>> from yourapplication.database import db_session
>>> from yourapplication.models import User
>>> u = User('admin', '<EMAIL>')
>>> db_session.add(u)
>>> db_session.commit()
```

Querying is simple as well:

```
>>> User.query.all()
[<User 'admin'>]
>>> User.query.filter(User.name == 'admin').first()
<User 'admin'>
```

Manual Object Relational Mapping
--------------------------------

Manual object relational mapping has a few upsides and a few downsides versus the declarative approach from above. The main difference is that you define tables and classes separately and map them together. It’s more flexible, but a little more to type. In general it works like the declarative approach, so make sure to also split up your application into multiple modules in a package.

Here is an example `database.py` module for your application:

```
from sqlalchemy import create_engine, MetaData
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite:////tmp/test.db')
metadata = MetaData()
db_session = scoped_session(sessionmaker(autocommit=False,
                                         autoflush=False,
                                         bind=engine))

def init_db():
    metadata.create_all(bind=engine)
```

As in the declarative approach, you need to close the session after each request or application context shutdown. Put this into your application module:

```
from yourapplication.database import db_session

@app.teardown_appcontext
def shutdown_session(exception=None):
    db_session.remove()
```

Here is an example table and model (put this into `models.py`):

```
from sqlalchemy import Table, Column, Integer, String
from sqlalchemy.orm import mapper
from yourapplication.database import metadata, db_session

class User(object):
    query = db_session.query_property()

    def __init__(self, name=None, email=None):
        self.name = name
        self.email = email

    def __repr__(self):
        return f'<User {self.name!r}>'

users = Table('users', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50), unique=True),
    Column('email', String(120), unique=True)
)
mapper(User, users)
```

Querying and inserting works exactly the same as in the example above.

SQL Abstraction Layer
---------------------

If you just want to use the database system (and SQL) abstraction layer, you basically only need the engine:

```
from sqlalchemy import create_engine, MetaData, Table

engine = create_engine('sqlite:////tmp/test.db')
metadata = MetaData(bind=engine)
```

Then you can either declare the tables in your code like in the examples above, or automatically load them:

```
from sqlalchemy import Table

users = Table('users', metadata, autoload=True)
```

To insert data you can use the `insert` method.
We have to get a connection first so that we can use a transaction:

```
>>> con = engine.connect()
>>> con.execute(users.insert(), name='admin', email='admin@localhost')
```

SQLAlchemy will automatically commit for us. To query your database, you use the engine directly or use a connection:

```
>>> users.select(users.c.id == 1).execute().first()
(1, 'admin', 'admin@localhost')
```

These results are also dict-like tuples:

```
>>> r = users.select(users.c.id == 1).execute().first()
>>> r['name']
'admin'
```

You can also pass strings of SQL statements to the `execute()` method:

```
>>> engine.execute('select * from users where id = :1', [1]).first()
(1, 'admin', 'admin@localhost')
```

For more information about SQLAlchemy, head over to the [website](https://www.sqlalchemy.org/).

Uploading Files
===============

Ah yes, the good old problem of file uploads. The basic idea of file uploads is actually quite simple. It basically works like this:

1. A `<form>` tag is marked with `enctype=multipart/form-data` and an `<input type=file>` is placed in that form.
2. The application accesses the file from the `files` dictionary on the request object.
3. Use the [`save()`](https://werkzeug.palletsprojects.com/en/2.3.x/datastructures/#werkzeug.datastructures.FileStorage.save "(in Werkzeug v2.3.x)") method of the file to save the file permanently somewhere on the filesystem.

A Gentle Introduction
---------------------

Let’s start with a very basic application that uploads a file to a specific upload folder and displays a file to the user. Let’s look at the bootstrapping code for our application:

```
import os
from flask import Flask, flash, request, redirect, url_for
from werkzeug.utils import secure_filename

UPLOAD_FOLDER = '/path/to/the/uploads'
ALLOWED_EXTENSIONS = {'txt', 'pdf', 'png', 'jpg', 'jpeg', 'gif'}

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
```

So first we need a couple of imports. Most should be straightforward; the `werkzeug.utils.secure_filename()` function is explained a little bit later. The `UPLOAD_FOLDER` is where we will store the uploaded files and the `ALLOWED_EXTENSIONS` is the set of allowed file extensions.

Why do we limit the extensions that are allowed? You probably don’t want your users to be able to upload everything there if the server is directly sending out the data to the client. That way you can make sure that users are not able to upload HTML files that would cause XSS problems (see [Cross-Site Scripting (XSS)](../../security/index#security-xss)). Also make sure to disallow `.php` files if the server executes them, but who has PHP installed on their server, right? :)

Next the functions that check if an extension is valid and that upload the file and redirect the user to the URL for the uploaded file:

```
def allowed_file(filename):
    return '.' in filename and \
           filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route('/', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        # check if the post request has the file part
        if 'file' not in request.files:
            flash('No file part')
            return redirect(request.url)
        file = request.files['file']
        # If the user does not select a file, the browser submits an
        # empty file without a filename.
        if file.filename == '':
            flash('No selected file')
            return redirect(request.url)
        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)
            file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
            return redirect(url_for('download_file', name=filename))
    return '''
    <!doctype html>
    <title>Upload new File</title>
    <h1>Upload new File</h1>
    <form method=post enctype=multipart/form-data>
      <input type=file name=file>
      <input type=submit value=Upload>
    </form>
    '''
```

So what does that [`secure_filename()`](https://werkzeug.palletsprojects.com/en/2.3.x/utils/#werkzeug.utils.secure_filename "(in Werkzeug v2.3.x)") function actually do? Now the problem is that there is that principle called “never trust user input”. This is also true for the filename of an uploaded file. All submitted form data can be forged, and filenames can be dangerous. For the moment just remember: always use that function to secure a filename before storing it directly on the filesystem.

Information for the Pros

So you’re interested in what that [`secure_filename()`](https://werkzeug.palletsprojects.com/en/2.3.x/utils/#werkzeug.utils.secure_filename "(in Werkzeug v2.3.x)") function does and what the problem is if you’re not using it? So just imagine someone would send the following information as `filename` to your application:

```
filename = "../../../../home/username/.bashrc"
```

Assuming the number of `../` is correct and you would join this with the `UPLOAD_FOLDER`, the user might have the ability to modify a file on the server’s filesystem that he or she should not modify. This does require some knowledge about what the application looks like, but trust me, hackers are patient :)

Now let’s look how that function works:

```
>>> secure_filename('../../../../home/username/.bashrc')
'home_username_.bashrc'
```

We want to be able to serve the uploaded files so they can be downloaded by users. We’ll define a `download_file` view to serve files in the upload folder by name. `url_for("download_file", name=name)` generates download URLs.

```
from flask import send_from_directory

@app.route('/uploads/<name>')
def download_file(name):
    return send_from_directory(app.config["UPLOAD_FOLDER"], name)
```

If you’re using middleware or the HTTP server to serve files, you can register the `download_file` endpoint as `build_only` so `url_for` will work without a view function.

```
app.add_url_rule(
    "/uploads/<name>", endpoint="download_file", build_only=True
)
```

Improving Uploads
-----------------

Changelog

New in version 0.6.

So how exactly does Flask handle uploads? Well, it will store them in the webserver’s memory if the files are reasonably small, otherwise in a temporary location (as returned by [`tempfile.gettempdir()`](https://docs.python.org/3/library/tempfile.html#tempfile.gettempdir "(in Python v3.11)")). But how do you specify the maximum file size after which an upload is aborted? By default Flask will happily accept file uploads with an unlimited amount of memory, but you can limit that by setting the `MAX_CONTENT_LENGTH` config key:

```
from flask import Flask

app = Flask(__name__)
app.config['MAX_CONTENT_LENGTH'] = 16 * 1000 * 1000
```

The code above will limit the maximum allowed payload to 16 megabytes. If a larger file is transmitted, Flask will raise a [`RequestEntityTooLarge`](https://werkzeug.palletsprojects.com/en/2.3.x/exceptions/#werkzeug.exceptions.RequestEntityTooLarge "(in Werkzeug v2.3.x)") exception.
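If you would rather return a friendlier message than the default 413 page, one possible sketch (building on the config above) is to register an error handler for that exception:

```
from flask import Flask
from werkzeug.exceptions import RequestEntityTooLarge

app = Flask(__name__)
app.config['MAX_CONTENT_LENGTH'] = 16 * 1000 * 1000

@app.errorhandler(RequestEntityTooLarge)
def handle_file_too_large(e):
    # Raised by Werkzeug before the view runs when the body is too big.
    return 'File is too large (16 MB maximum).', 413
```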
Connection Reset Issue

When using the local development server, you may get a connection reset error instead of a 413 response. You will get the correct status response when running the app with a production WSGI server.

This feature was added in Flask 0.6 but can be achieved in older versions as well by subclassing the request object. For more information on that consult the Werkzeug documentation on file handling.

Upload Progress Bars
--------------------

A while ago many developers had the idea to read the incoming file in small chunks and store the upload progress in the database to be able to poll the progress with JavaScript from the client. The client asks the server every 5 seconds how much it has transmitted, but this is something it should already know.

An Easier Solution
------------------

Now there are better solutions that work faster and are more reliable. There are JavaScript libraries like [jQuery](https://jquery.com/) that have form plugins to ease the construction of progress bars. Because the common pattern for file uploads exists almost unchanged in all applications dealing with uploads, there are also some Flask extensions that implement a full-fledged upload mechanism that allows controlling which file extensions are allowed to be uploaded.

Caching
=======

When your application runs slow, throw some caches in. Well, at least it’s the easiest way to speed up things. What does a cache do? Say you have a function that takes some time to complete, but the results would still be good enough if they were 5 minutes old. So then the idea is that you actually put the result of that calculation into a cache for some time.

Flask itself does not provide caching for you, but [Flask-Caching](https://flask-caching.readthedocs.io/en/latest/), an extension for Flask, does. Flask-Caching supports various backends, and it is even possible to develop your own caching backend.

View Decorators
===============

Python has a really interesting feature called function decorators. This allows some really neat things for web applications. Because each view in Flask is a function, decorators can be used to inject additional functionality to one or more functions. The [`route()`](../../api/index#flask.Flask.route "flask.Flask.route") decorator is the one you probably used already. But there are use cases for implementing your own decorator. For instance, imagine you have a view that should only be used by people that are logged in. If a user goes to the site and is not logged in, they should be redirected to the login page. This is a good example of a use case where a decorator is an excellent solution.

Login Required Decorator
------------------------

So let’s implement such a decorator. A decorator is a function that wraps and replaces another function. Since the original function is replaced, you need to remember to copy the original function’s information to the new function. Use [`functools.wraps()`](https://docs.python.org/3/library/functools.html#functools.wraps "(in Python v3.11)") to handle this for you.

This example assumes that the login page is called `'login'` and that the current user is stored in `g.user` and is `None` if there is no one logged in.
View Decorators
===============

Python has a really interesting feature called function decorators. This allows some really neat things for web applications. Because each view in Flask is a function, decorators can be used to inject additional functionality to one or more functions. The [`route()`](../../api/index#flask.Flask.route "flask.Flask.route") decorator is the one you probably used already. But there are use cases for implementing your own decorator. For instance, imagine you have a view that should only be used by people that are logged in. If a user goes to the site and is not logged in, they should be redirected to the login page. This is a good example of a use case where a decorator is an excellent solution.

Login Required Decorator
------------------------

So let's implement such a decorator. A decorator is a function that wraps and replaces another function. Since the original function is replaced, you need to remember to copy the original function's information to the new function. Use [`functools.wraps()`](https://docs.python.org/3/library/functools.html#functools.wraps "(in Python v3.11)") to handle this for you.

This example assumes that the login page is called `'login'` and that the current user is stored in `g.user` and is `None` if no one is logged in.

```
from functools import wraps
from flask import g, request, redirect, url_for

def login_required(f):
    @wraps(f)
    def decorated_function(*args, **kwargs):
        if g.user is None:
            return redirect(url_for('login', next=request.url))
        return f(*args, **kwargs)
    return decorated_function
```

To use the decorator, apply it as the innermost decorator to a view function. When applying further decorators, always remember that the [`route()`](../../api/index#flask.Flask.route "flask.Flask.route") decorator is the outermost.

```
@app.route('/secret_page')
@login_required
def secret_page():
    pass
```

Note

The `next` value will exist in `request.args` after a `GET` request for the login page. You'll have to pass it along when sending the `POST` request from the login form. You can do this with a hidden input tag, then retrieve it from `request.form` when logging the user in. Note the `name` attribute, without which the value would not be submitted.

```
<input type="hidden" name="next" value="{{ request.args.get('next', '') }}"/>
```

Caching Decorator
-----------------

Imagine you have a view function that does an expensive calculation and because of that you would like to cache the generated results for a certain amount of time. A decorator would be nice for that. We're assuming you have set up a cache as mentioned in [Caching](../caching/index).

Here is an example cache function. It generates the cache key from a specific prefix (actually a format string) and the current path of the request. Notice that we are using a function that first creates the decorator that then decorates the function. Sounds awful? Unfortunately it is a little bit more complex, but the code should still be straightforward to read.

The decorated function will then work as follows

1. get the unique cache key for the current request based on the current path.
2. get the value for that key from the cache. If the cache returned something we will return that value.
3. otherwise the original function is called and the return value is stored in the cache for the timeout provided (by default 5 minutes).

Here is the code:

```
from functools import wraps
from flask import request

def cached(timeout=5 * 60, key='view/{}'):
    def decorator(f):
        @wraps(f)
        def decorated_function(*args, **kwargs):
            cache_key = key.format(request.path)
            rv = cache.get(cache_key)
            if rv is not None:
                return rv
            rv = f(*args, **kwargs)
            cache.set(cache_key, rv, timeout=timeout)
            return rv
        return decorated_function
    return decorator
```

Notice that this assumes an instantiated `cache` object is available, see [Caching](../caching/index).
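As a usage sketch, the decorator is applied like any other, with `route()` staying outermost (the view and the `compute_report()` helper are hypothetical):

```
@app.route('/report')
@cached(timeout=10 * 60)  # cache the rendered page for ten minutes
def report():
    return render_template('report.html', data=compute_report())
```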
Templating Decorator
--------------------

A common pattern invented by the TurboGears guys a while back is a templating decorator. The idea of that decorator is that you return a dictionary with the values passed to the template from the view function and the template is automatically rendered. With that, the following three examples do exactly the same:

```
@app.route('/')
def index():
    return render_template('index.html', value=42)

@app.route('/')
@templated('index.html')
def index():
    return dict(value=42)

@app.route('/')
@templated()
def index():
    return dict(value=42)
```

As you can see, if no template name is provided it will use the endpoint of the URL map with dots converted to slashes + `'.html'`. Otherwise the provided template name is used. When the decorated function returns, the dictionary returned is passed to the template rendering function. If `None` is returned, an empty dictionary is assumed; if something other than a dictionary is returned, we return it from the function unchanged. That way you can still use the redirect function or return simple strings.

Here is the code for that decorator:

```
from functools import wraps
from flask import request, render_template

def templated(template=None):
    def decorator(f):
        @wraps(f)
        def decorated_function(*args, **kwargs):
            template_name = template
            if template_name is None:
                template_name = f"{request.endpoint.replace('.', '/')}.html"
            ctx = f(*args, **kwargs)
            if ctx is None:
                ctx = {}
            elif not isinstance(ctx, dict):
                return ctx
            return render_template(template_name, **ctx)
        return decorated_function
    return decorator
```

Endpoint Decorator
------------------

When you want to use the werkzeug routing system for more flexibility you need to map the endpoint as defined in the [`Rule`](https://werkzeug.palletsprojects.com/en/2.3.x/routing/#werkzeug.routing.Rule "(in Werkzeug v2.3.x)") to a view function. This is possible with this decorator. For example:

```
from flask import Flask
from werkzeug.routing import Rule

app = Flask(__name__)
app.url_map.add(Rule('/', endpoint='index'))

@app.endpoint('index')
def my_index():
    return "Hello world"
```

Form Validation with WTForms
============================

When you have to work with form data submitted by a browser view, code quickly becomes very hard to read. There are libraries out there designed to make this process easier to manage. One of them is [WTForms](https://wtforms.readthedocs.io/), which we will handle here. If you find yourself in the situation of having many forms, you might want to give it a try.

When you are working with WTForms you have to define your forms as classes first. I recommend breaking up the application into multiple modules ([Large Applications as Packages](../packages/index)) for that and adding a separate module for the forms.

Getting the most out of WTForms with an Extension

The [Flask-WTF](https://flask-wtf.readthedocs.io/) extension expands on this pattern and adds a few little helpers that make working with forms and Flask more fun. You can get it from [PyPI](https://pypi.org/project/Flask-WTF/).

The Forms
---------

This is an example form for a typical registration page:

```
from wtforms import Form, BooleanField, StringField, PasswordField, validators

class RegistrationForm(Form):
    username = StringField('Username', [validators.Length(min=4, max=25)])
    email = StringField('Email Address', [validators.Length(min=6, max=35)])
    password = PasswordField('New Password', [
        validators.DataRequired(),
        validators.EqualTo('confirm', message='Passwords must match')
    ])
    confirm = PasswordField('Repeat Password')
    accept_tos = BooleanField('I accept the TOS', [validators.DataRequired()])
```

In the View
-----------

In the view function, the usage of this form looks like this:

```
@app.route('/register', methods=['GET', 'POST'])
def register():
    form = RegistrationForm(request.form)
    if request.method == 'POST' and form.validate():
        user = User(form.username.data, form.email.data,
                    form.password.data)
        db_session.add(user)
        flash('Thanks for registering')
        return redirect(url_for('login'))
    return render_template('register.html', form=form)
```

Notice we're implying that the view is using SQLAlchemy here ([SQLAlchemy in Flask](../sqlalchemy/index)), but that's not a requirement, of course. Adapt the code as necessary.
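If the same form were driven by query-string data instead, you would bind it to `request.args`, as item 1 in the list below describes. A hedged sketch with a hypothetical `SearchForm` and search helper:

```
from flask import request, render_template
from wtforms import Form, StringField, validators

class SearchForm(Form):
    q = StringField('Query', [validators.Length(min=1, max=100)])

@app.route('/search')
def search():
    form = SearchForm(request.args)  # GET data lives in args, not form
    if form.validate():
        results = do_search(form.q.data)  # hypothetical search helper
        return render_template('results.html', results=results)
    return render_template('search.html', form=form)
```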
Things to remember:

1. create the form from the request `form` value if the data is submitted via the HTTP `POST` method and `args` if the data is submitted as `GET`.
2. to validate the data, call the `validate()` method, which will return `True` if the data validates, `False` otherwise.
3. to access individual values from the form, access `form.<NAME>.data`.

Forms in Templates
------------------

Now to the template side. When you pass the form to the templates, you can easily render them there. Look at the following example template to see how easy this is. WTForms does half the form generation for us already. To make it even nicer, we can write a macro that renders a field with label and a list of errors if there are any.

Here's an example `_formhelpers.html` template with such a macro:

```
{% macro render_field(field) %}
  <dt>{{ field.label }}
  <dd>{{ field(**kwargs)|safe }}
  {% if field.errors %}
    <ul class=errors>
    {% for error in field.errors %}
      <li>{{ error }}</li>
    {% endfor %}
    </ul>
  {% endif %}
  </dd>
{% endmacro %}
```

This macro accepts a couple of keyword arguments that are forwarded to WTForms's field function, which renders the field for us. The keyword arguments will be inserted as HTML attributes. So, for example, you can call `render_field(form.username, class='username')` to add a class to the input element. Note that WTForms returns standard Python strings, so we have to tell Jinja2 that this data is already HTML-escaped with the `|safe` filter.

Here is the `register.html` template for the function we used above, which takes advantage of the `_formhelpers.html` template:

```
{% from "_formhelpers.html" import render_field %}
<form method=post>
  <dl>
    {{ render_field(form.username) }}
    {{ render_field(form.email) }}
    {{ render_field(form.password) }}
    {{ render_field(form.confirm) }}
    {{ render_field(form.accept_tos) }}
  </dl>
  <p><input type=submit value=Register>
</form>
```

For more information about WTForms, head over to the [WTForms website](https://wtforms.readthedocs.io/).

Template Inheritance
====================

The most powerful part of Jinja is template inheritance. Template inheritance allows you to build a base "skeleton" template that contains all the common elements of your site and defines **blocks** that child templates can override. Sounds complicated but is very basic. It's easiest to understand by starting with an example.

Base Template
-------------

This template, which we'll call `layout.html`, defines a simple HTML skeleton document that you might use for a simple two-column page. It's the job of "child" templates to fill the empty blocks with content:

```
<!doctype html>
<html>
  <head>
    {% block head %}
    <link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
    <title>{% block title %}{% endblock %} - My Webpage</title>
    {% endblock %}
  </head>
  <body>
    <div id="content">{% block content %}{% endblock %}</div>
    <div id="footer">
      {% block footer %}
      &copy; Copyright 2010 by <a href="http://domain.invalid/">you</a>.
      {% endblock %}
    </div>
  </body>
</html>
```

In this example, the `{% block %}` tags define four blocks that child templates can fill in. All the `block` tag does is tell the template engine that a child template may override those portions of the template.
Child Template
--------------

A child template might look like this:

```
{% extends "layout.html" %}
{% block title %}Index{% endblock %}
{% block head %}
  {{ super() }}
  <style type="text/css">
    .important { color: #336699; }
  </style>
{% endblock %}
{% block content %}
  <h1>Index</h1>
  <p class="important">
    Welcome on my awesome homepage.
{% endblock %}
```

The `{% extends %}` tag is the key here. It tells the template engine that this template "extends" another template. When the template system evaluates this template, it first locates the parent. The extends tag must be the first tag in the template. To render the contents of a block defined in the parent template, use `{{ super() }}`.

Message Flashing
================

Good applications and user interfaces are all about feedback. If the user does not get enough feedback they will probably end up hating the application. Flask provides a really simple way to give feedback to a user with the flashing system. The flashing system basically makes it possible to record a message at the end of a request and access it on the next request, and only the next request. This is usually combined with a layout template that renders the messages.

Note that browsers and sometimes web servers enforce a limit on cookie sizes. This means that flashing messages that are too large for session cookies causes message flashing to fail silently.

Simple Flashing
---------------

So here is a full example:

```
from flask import Flask, flash, redirect, render_template, \
     request, url_for

app = Flask(__name__)
app.secret_key = b'_<KEY>'

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/login', methods=['GET', 'POST'])
def login():
    error = None
    if request.method == 'POST':
        if request.form['username'] != 'admin' or \
                request.form['password'] != 'secret':
            error = 'Invalid credentials'
        else:
            flash('You were successfully logged in')
            return redirect(url_for('index'))
    return render_template('login.html', error=error)
```

And here is the `layout.html` template which does the magic:

```
<!doctype html>
<title>My Application</title>
{% with messages = get_flashed_messages() %}
  {% if messages %}
    <ul class=flashes>
    {% for message in messages %}
      <li>{{ message }}</li>
    {% endfor %}
    </ul>
  {% endif %}
{% endwith %}
{% block body %}{% endblock %}
```

Here is the `index.html` template which inherits from `layout.html`:

```
{% extends "layout.html" %}
{% block body %}
  <h1>Overview</h1>
  <p>Do you want to <a href="{{ url_for('login') }}">log in?</a>
{% endblock %}
```

And here is the `login.html` template which also inherits from `layout.html`:

```
{% extends "layout.html" %}
{% block body %}
  <h1>Login</h1>
  {% if error %}
    <p class=error><strong>Error:</strong> {{ error }}
  {% endif %}
  <form method=post>
    <dl>
      <dt>Username:
      <dd><input type=text name=username value="{{ request.form.username }}">
      <dt>Password:
      <dd><input type=password name=password>
    </dl>
    <p><input type=submit value=Login>
  </form>
{% endblock %}
```

Flashing With Categories
------------------------

Changelog

New in version 0.3.

It is also possible to provide categories when flashing a message. The default category if nothing is provided is `'message'`. Alternative categories can be used to give the user better feedback. For example error messages could be displayed with a red background.
To flash a message with a different category, just use the second argument to the [`flash()`](../../api/index#flask.flash "flask.flash") function:

```
flash('Invalid password provided', 'error')
```

Inside the template you then have to tell the [`get_flashed_messages()`](../../api/index#flask.get_flashed_messages "flask.get_flashed_messages") function to also return the categories. The loop looks slightly different in that situation:

```
{% with messages = get_flashed_messages(with_categories=true) %}
  {% if messages %}
    <ul class=flashes>
    {% for category, message in messages %}
      <li class="{{ category }}">{{ message }}</li>
    {% endfor %}
    </ul>
  {% endif %}
{% endwith %}
```

This is just one example of how to render these flashed messages. One might also use the category to add a prefix such as `<strong>Error:</strong>` to the message.

Filtering Flash Messages
------------------------

Changelog

New in version 0.9.

Optionally you can pass a list of categories which filters the results of [`get_flashed_messages()`](../../api/index#flask.get_flashed_messages "flask.get_flashed_messages"). This is useful if you wish to render each category in a separate block.

```
{% with errors = get_flashed_messages(category_filter=["error"]) %}
{% if errors %}
<div class="alert-message block-message error">
  <a class="close" href="#">×</a>
  <ul>
    {%- for msg in errors %}
    <li>{{ msg }}</li>
    {% endfor -%}
  </ul>
</div>
{% endif %}
{% endwith %}
```

JavaScript, fetch, and JSON
===========================

You may want to make your HTML page dynamic, by changing data without reloading the entire page. Instead of submitting an HTML `<form>` and performing a redirect to re-render the template, you can add [JavaScript](https://developer.mozilla.org/Web/JavaScript) that calls [`fetch()`](https://developer.mozilla.org/Web/API/Fetch_API) and replaces content on the page.

[`fetch()`](https://developer.mozilla.org/Web/API/Fetch_API) is the modern, built-in JavaScript solution to making requests from a page. You may have heard of other "AJAX" methods and libraries, such as [`XMLHttpRequest()`](https://developer.mozilla.org/Web/API/XMLHttpRequest) or [jQuery](https://jquery.com/). These are no longer needed in modern browsers, although you may choose to use them or another library depending on your application's requirements. These docs will only focus on built-in JavaScript features.

Rendering Templates
-------------------

It is important to understand the difference between templates and JavaScript. Templates are rendered on the server, before the response is sent to the user's browser. JavaScript runs in the user's browser, after the template is rendered and sent. Therefore, it is impossible to use JavaScript to affect how the Jinja template is rendered, but it is possible to render data into the JavaScript that will run.

To provide data to JavaScript when rendering the template, use the [`tojson()`](https://jinja.palletsprojects.com/en/3.1.x/templates/#jinja-filters.tojson "(in Jinja v3.1.x)") filter in a `<script>` block. This will convert the data to a valid JavaScript object, and ensure that any unsafe HTML characters are rendered safely. If you do not use the `tojson` filter, you will get a `SyntaxError` in the browser console.
```
data = generate_report()
return render_template("report.html", chart_data=data)
```

```
<script>
    const chart_data = {{ chart_data|tojson }}
    chartLib.makeChart(chart_data)
</script>
```

A less common pattern is to add the data to a `data-` attribute on an HTML tag. In this case, you must use single quotes around the value, not double quotes, otherwise you will produce invalid or unsafe HTML.

```
<div data-chart='{{ chart_data|tojson }}'></div>
```

Generating URLs
---------------

The other way to get data from the server to JavaScript is to make a request for it. First, you need to know the URL to request.

The simplest way to generate URLs is to continue to use [`url_for()`](../../api/index#flask.url_for "flask.url_for") when rendering the template. For example:

```
const user_url = {{ url_for("user", id=current_user.id)|tojson }}
fetch(user_url).then(...)
```

However, you might need to generate a URL based on information you only know in JavaScript. As discussed above, JavaScript runs in the user's browser, not as part of the template rendering, so you can't use `url_for` at that point.

In this case, you need to know the "root URL" under which your application is served. In simple setups, this is `/`, but it might also be something else, like `https://example.com/myapp/`.

A simple way to tell your JavaScript code about this root is to set it as a global variable when rendering the template. Then you can use it when generating URLs from JavaScript.

```
const SCRIPT_ROOT = {{ request.script_root|tojson }}
let user_id = ...  // do something to get a user id from the page
let user_url = `${SCRIPT_ROOT}/user/${user_id}`
fetch(user_url).then(...)
```

Making a Request with `fetch`
-----------------------------

[`fetch()`](https://developer.mozilla.org/Web/API/Fetch_API) takes two arguments, a URL and an object with other options, and returns a [`Promise`](https://developer.mozilla.org/Web/JavaScript/Reference/Global_Objects/Promise). We won't cover all the available options, and will only use `then()` on the promise, not other callbacks or `await` syntax. Read the linked MDN docs for more information about those features.

By default, the GET method is used. If the response contains JSON, it can be used with a `then()` callback chain.

```
const room_url = {{ url_for("room_detail", id=room.id)|tojson }}
fetch(room_url)
    .then(response => response.json())
    .then(data => {
        // data is a parsed JSON object
    })
```

To send data, use a data method such as POST, and pass the `body` option. The most common types for data are form data or JSON data.

To send form data, pass a populated [`FormData`](https://developer.mozilla.org/en-US/docs/Web/API/FormData) object. This uses the same format as an HTML form, and would be accessed with `request.form` in a Flask view.

```
let data = new FormData()
data.append("name", "Flask Room")
data.append("description", "Talk about Flask here.")

fetch(room_url, {
    "method": "POST",
    "body": data,
}).then(...)
```

In general, prefer sending request data as form data, as would be used when submitting an HTML form. JSON can represent more complex data, but unless you need that it's better to stick with the simpler format. When sending JSON data, the `Content-Type: application/json` header must be sent as well, otherwise Flask will return a 400 error.

```
let data = {
    "name": "Flask Room",
    "description": "Talk about Flask here.",
}

fetch(room_url, {
    "method": "POST",
    "headers": {"Content-Type": "application/json"},
    "body": JSON.stringify(data),
}).then(...)
```

Following Redirects
-------------------

A response might be a redirect, for example if you logged in with JavaScript instead of a traditional HTML form, and your view returned a redirect instead of JSON. JavaScript requests do follow redirects, but they don't change the page. If you want to make the page change you can inspect the response and apply the redirect manually.

```
fetch("/login", {"body": ...}).then(
    response => {
        if (response.redirected) {
            window.location = response.url
        } else {
            showLoginError()
        }
    }
)
```

Replacing Content
-----------------

A response might be new HTML, either a new section of the page to add or replace, or an entirely new page. In general, if you're returning the entire page, it would be better to handle that with a redirect as shown in the previous section. The following example shows how to replace a `<div>` with the HTML returned by a request.

```
<div id="geology-fact">
    {% include "geology_fact.html" %}
</div>
<script>
    const geology_url = {{ url_for("geology_fact")|tojson }}
    const geology_div = document.getElementById("geology-fact")
    fetch(geology_url)
        .then(response => response.text())
        .then(text => geology_div.innerHTML = text)
</script>
```

Return JSON from Views
----------------------

To return a JSON object from your API view, you can directly return a dict from the view. It will be serialized to JSON automatically.

```
@app.route("/user/<int:id>")
def user_detail(id):
    user = User.query.get_or_404(id)
    return {
        "username": user.username,
        "email": user.email,
        "picture": url_for("static", filename=f"users/{id}/profile.png"),
    }
```

If you want to return another JSON type, use the [`jsonify()`](../../api/index#flask.json.jsonify "flask.json.jsonify") function, which creates a response object with the given data serialized to JSON.

```
from flask import jsonify

@app.route("/users")
def user_list():
    users = User.query.order_by(User.name).all()
    return jsonify([u.to_json() for u in users])
```

It is usually not a good idea to return file data in a JSON response. JSON cannot represent binary data directly, so it must be base64 encoded, which can be slow, takes more bandwidth to send, and is not as easy to cache. Instead, serve files using one view, and generate a URL to the desired file to include in the JSON. Then the client can make a separate request to get the linked resource after getting the JSON.

Receiving JSON in Views
-----------------------

Use the [`json`](../../api/index#flask.Request.json "flask.Request.json") property of the [`request`](../../api/index#flask.request "flask.request") object to decode the request's body as JSON. If the body is not valid JSON, or the `Content-Type` header is not set to `application/json`, a 400 Bad Request error will be raised.

```
from flask import request

@app.post("/user/<int:id>")
def user_update(id):
    user = User.query.get_or_404(id)
    user.update_from_json(request.json)
    db.session.commit()
    return user.to_json()
```
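For completeness, the `FormData` request shown earlier in this page arrives on the Flask side exactly like an HTML form post and is read from `request.form`. A hedged sketch (the `Room` model and `to_json()` helper are hypothetical, mirroring the SQLAlchemy-style examples above):

```
from flask import request

@app.post("/room")
def create_room():
    # FormData sent by fetch() is parsed like a normal form submission.
    room = Room(
        name=request.form["name"],
        description=request.form["description"],
    )
    db.session.add(room)
    db.session.commit()
    return room.to_json()
```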
Lazily Loading Views
====================

Flask is usually used with the decorators. Decorators are simple and you have the URL right next to the function that is called for that specific URL. However, there is a downside to this approach: it means all your code that uses decorators has to be imported upfront or Flask will never actually find your function. This can be a problem if your application has to import quickly. It might have to do that on systems like Google's App Engine or other systems. So if you suddenly notice that your application outgrows this approach, you can fall back to a centralized URL mapping.

The system that enables having a central URL map is the [`add_url_rule()`](../../api/index#flask.Flask.add_url_rule "flask.Flask.add_url_rule") function. Instead of using decorators, you have a file that sets up the application with all URLs.

Converting to Centralized URL Map
---------------------------------

Imagine the current application looks somewhat like this:

```
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    pass

@app.route('/user/<username>')
def user(username):
    pass
```

Then, with the centralized approach you would have one file with the views (`views.py`) but without any decorator:

```
def index():
    pass

def user(username):
    pass
```

And then a file that sets up an application which maps the functions to URLs:

```
from flask import Flask
from yourapplication import views

app = Flask(__name__)
app.add_url_rule('/', view_func=views.index)
app.add_url_rule('/user/<username>', view_func=views.user)
```

Loading Late
------------

So far we only split up the views and the routing, but the module is still loaded upfront. The trick is to actually load the view function as needed. This can be accomplished with a helper class that behaves just like a function but internally imports the real function on first use:

```
from werkzeug.utils import import_string, cached_property

class LazyView(object):

    def __init__(self, import_name):
        self.__module__, self.__name__ = import_name.rsplit('.', 1)
        self.import_name = import_name

    @cached_property
    def view(self):
        return import_string(self.import_name)

    def __call__(self, *args, **kwargs):
        return self.view(*args, **kwargs)
```

What's important here is that `__module__` and `__name__` are properly set. This is used by Flask internally to figure out how to name the URL rules in case you don't provide a name for the rule yourself.

Then you can define your central place to combine the views like this:

```
from flask import Flask
from yourapplication.helpers import LazyView

app = Flask(__name__)
app.add_url_rule('/',
                 view_func=LazyView('yourapplication.views.index'))
app.add_url_rule('/user/<username>',
                 view_func=LazyView('yourapplication.views.user'))
```

You can further optimize this in terms of the number of keystrokes needed by writing a function that calls into [`add_url_rule()`](../../api/index#flask.Flask.add_url_rule "flask.Flask.add_url_rule"), prefixing a string with the project name and a dot, and wrapping `view_func` in a `LazyView` as needed.

```
def url(import_name, url_rules=[], **options):
    view = LazyView(f"yourapplication.{import_name}")
    for url_rule in url_rules:
        app.add_url_rule(url_rule, view_func=view, **options)

# add a single route to the index view
url('views.index', ['/'])

# add two routes to a single function endpoint
url_rules = ['/user/', '/user/<username>']
url('views.user', url_rules)
```

One thing to keep in mind is that before and after request handlers have to be in a file that is imported upfront to work properly on the first request. The same goes for any kind of remaining decorator.

MongoDB with MongoEngine
========================

Using a document database like MongoDB is a common alternative to relational SQL databases.
This pattern shows how to use [MongoEngine](http://mongoengine.org), a document mapper library, to integrate with MongoDB. A running MongoDB server and [Flask-MongoEngine](https://flask-mongoengine.readthedocs.io) are required.

```
pip install flask-mongoengine
```

Configuration
-------------

Basic setup can be done by defining `MONGODB_SETTINGS` on `app.config` and creating a `MongoEngine` instance.

```
from flask import Flask
from flask_mongoengine import MongoEngine

app = Flask(__name__)
app.config['MONGODB_SETTINGS'] = {
    "db": "myapp",
}
db = MongoEngine(app)
```

Mapping Documents
-----------------

To declare a model that represents a Mongo document, create a class that inherits from `Document` and declare each of the fields.

```
import mongoengine as me

class Movie(me.Document):
    title = me.StringField(required=True)
    year = me.IntField()
    rated = me.StringField()
    director = me.StringField()
    actors = me.ListField()
```

If the document has nested fields, use `EmbeddedDocument` to define the fields of the embedded document and `EmbeddedDocumentField` to declare it on the parent document.

```
class Imdb(me.EmbeddedDocument):
    imdb_id = me.StringField()
    rating = me.DecimalField()
    votes = me.IntField()

class Movie(me.Document):
    ...
    imdb = me.EmbeddedDocumentField(Imdb)
```

Creating Data
-------------

Instantiate your document class with keyword arguments for the fields. You can also assign values to the field attributes after instantiation. Then call `doc.save()`.

```
bttf = Movie(title="Back To The Future", year=1985)
bttf.actors = [
    "<NAME>",
    "<NAME>"
]
bttf.imdb = Imdb(imdb_id="tt0088763", rating=8.5)
bttf.save()
```

Queries
-------

Use the class `objects` attribute to make queries. A keyword argument looks for an equal value on the field.

```
bttf = Movie.objects(title="Back To The Future").get_or_404()
```

Query operators may be used by concatenating them with the field name using a double-underscore. `objects`, and queries returned by calling it, are iterable.

```
some_theron_movie = Movie.objects(actors__in=["<NAME>"]).first()

for recents in Movie.objects(year__gte=2017):
    print(recents.title)
```
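Queries can also modify or remove documents. A hedged sketch using MongoEngine's documented `update_one` operator syntax and `delete()` method:

```
# Set a field on one matching document without loading it first.
Movie.objects(title="Back To The Future").update_one(set__rated="PG")

# Load a document, then remove it.
bttf = Movie.objects(title="Back To The Future").first()
if bttf is not None:
    bttf.delete()
```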
Documentation
-------------

There are many more ways to define and query documents with MongoEngine. For more information, check out the [official documentation](http://mongoengine.org).

Flask-MongoEngine adds helpful utilities on top of MongoEngine. Check out their [documentation](https://flask-mongoengine.readthedocs.io) as well.

Adding a favicon
================

A "favicon" is an icon used by browsers for tabs and bookmarks. This helps to distinguish your website and to give it a unique brand. A common question is how to add a favicon to a Flask application. First, of course, you need an icon. It should be 16 × 16 pixels and in the ICO file format. This is not a requirement but a de-facto standard supported by all relevant browsers. Put the icon in your static directory as `favicon.ico`.

Now, to get browsers to find your icon, the correct way is to add a link tag in your HTML. So, for example:

```
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
```

That's all you need for most browsers; however, some really old ones do not support this standard. The old de-facto standard is to serve this file, with this name, at the website root. If your application is not mounted at the root path of the domain you either need to configure the web server to serve the icon at the root, or, if you can't do that, you're out of luck. If however your application is the root you can simply route a redirect:

```
app.add_url_rule('/favicon.ico',
                 redirect_to=url_for('static', filename='favicon.ico'))
```

If you want to save the extra redirect request you can also write a view using [`send_from_directory()`](../../api/index#flask.send_from_directory "flask.send_from_directory"):

```
import os
from flask import send_from_directory

@app.route('/favicon.ico')
def favicon():
    return send_from_directory(os.path.join(app.root_path, 'static'),
                               'favicon.ico', mimetype='image/vnd.microsoft.icon')
```

We can leave out the explicit mimetype and it will be guessed, but we may as well specify it to avoid the extra guessing, as it will always be the same.

The above will serve the icon via your application and if possible it's better to configure your dedicated web server to serve it; refer to the web server's documentation.

See also
--------

* The [Favicon](https://en.wikipedia.org/wiki/Favicon) article on Wikipedia

Streaming Contents
==================

Sometimes you want to send an enormous amount of data to the client, much more than you want to keep in memory. When you are generating the data on the fly, though, how do you send that back to the client without the roundtrip to the filesystem? The answer is by using generators and direct responses.

Basic Usage
-----------

This is a basic view function that generates a lot of CSV data on the fly. The trick is to have an inner function that uses a generator to generate data and to then invoke that function and pass it to a response object:

```
@app.route('/large.csv')
def generate_large_csv():
    def generate():
        for row in iter_all_rows():
            yield f"{','.join(row)}\n"
    return generate(), {"Content-Type": "text/csv"}
```

Each `yield` expression is directly sent to the browser. Note though that some WSGI middlewares might break streaming, so be careful there in debug environments with profilers and other things you might have enabled.

Streaming from Templates
------------------------

The Jinja2 template engine supports rendering a template piece by piece, returning an iterator of strings. Flask provides the [`stream_template()`](../../api/index#flask.stream_template "flask.stream_template") and [`stream_template_string()`](../../api/index#flask.stream_template_string "flask.stream_template_string") functions to make this easier to use.

```
from flask import stream_template

@app.get("/timeline")
def timeline():
    return stream_template("timeline.html")
```

The parts yielded by the render stream tend to match statement blocks in the template.
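`stream_template()` accepts template context just like `render_template()`, which is where streaming pays off: the template can iterate over a generator while the response is being sent. A hedged sketch with a hypothetical row generator:

```
from flask import stream_template

def generate_rows():
    # Hypothetical generator; rows are produced lazily, not all at once.
    for i in range(10000):
        yield {"id": i}

@app.get("/timeline/rows")
def timeline_rows():
    # The template loops over `rows` as the response streams.
    return stream_template("timeline.html", rows=generate_rows())
```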
Streaming with Context
----------------------

The [`request`](../../api/index#flask.request "flask.request") will not be active while the generator is running, because the view has already returned at that point. If you try to access `request`, you'll get a `RuntimeError`. If your generator function relies on data in `request`, use the [`stream_with_context()`](../../api/index#flask.stream_with_context "flask.stream_with_context") wrapper. This will keep the request context active during the generator.

```
from flask import stream_with_context, request
from markupsafe import escape

@app.route('/stream')
def streamed_response():
    def generate():
        yield '<p>Hello '
        yield escape(request.args['name'])
        yield '!</p>'
    return stream_with_context(generate())
```

It can also be used as a decorator.

```
@stream_with_context
def generate():
    ...

return generate()
```

The [`stream_template()`](../../api/index#flask.stream_template "flask.stream_template") and [`stream_template_string()`](../../api/index#flask.stream_template_string "flask.stream_template_string") functions automatically use [`stream_with_context()`](../../api/index#flask.stream_with_context "flask.stream_with_context") if a request is active.

Deferred Request Callbacks
==========================

One of the design principles of Flask is that response objects are created and passed down a chain of potential callbacks that can modify them or replace them. When the request handling starts, there is no response object yet. It is created as necessary either by a view function or by some other component in the system.

What happens if you want to modify the response at a point where the response does not exist yet? A common example for that would be a [`before_request()`](../../api/index#flask.Flask.before_request "flask.Flask.before_request") callback that wants to set a cookie on the response object.

One way is to avoid the situation. Very often that is possible. For instance you can try to move that logic into an [`after_request()`](../../api/index#flask.Flask.after_request "flask.Flask.after_request") callback instead. However, sometimes moving code there makes it more complicated or awkward to reason about.

As an alternative, you can use [`after_this_request()`](../../api/index#flask.after_this_request "flask.after_this_request") to register callbacks that will execute after only the current request. This way you can defer code execution from anywhere in the application, based on the current request.

At any time during a request, we can register a function to be called at the end of the request. For example you can remember the current language of the user in a cookie in a [`before_request()`](../../api/index#flask.Flask.before_request "flask.Flask.before_request") callback:

```
from flask import g, request, after_this_request

@app.before_request
def detect_user_language():
    language = request.cookies.get('user_lang')

    if language is None:
        language = guess_language_from_request()

        # when the response exists, set a cookie with the language
        @after_this_request
        def remember_language(response):
            response.set_cookie('user_lang', language)
            return response

    g.language = language
```

Adding HTTP Method Overrides
============================

Some HTTP proxies do not support arbitrary HTTP methods or newer HTTP methods (such as PATCH). In that case it's possible to "proxy" HTTP methods through another HTTP method in total violation of the protocol. The way this works is by letting the client do an HTTP POST request and set the `X-HTTP-Method-Override` header. Then the method is replaced with the header value before being passed to Flask.
This can be accomplished with an HTTP middleware:

```
class HTTPMethodOverrideMiddleware(object):
    allowed_methods = frozenset([
        'GET', 'HEAD', 'POST', 'DELETE', 'PUT', 'PATCH', 'OPTIONS'
    ])
    bodyless_methods = frozenset(['GET', 'HEAD', 'OPTIONS', 'DELETE'])

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        method = environ.get('HTTP_X_HTTP_METHOD_OVERRIDE', '').upper()
        if method in self.allowed_methods:
            environ['REQUEST_METHOD'] = method
        if method in self.bodyless_methods:
            environ['CONTENT_LENGTH'] = '0'
        return self.app(environ, start_response)
```

To use this with Flask, wrap the app object with the middleware:

```
from flask import Flask

app = Flask(__name__)
app.wsgi_app = HTTPMethodOverrideMiddleware(app.wsgi_app)
```

Request Content Checksums
=========================

Various pieces of code can consume the request data and preprocess it. For instance JSON data ends up on the request object already read and processed, and form data ends up there as well but goes through a different code path. This seems inconvenient when you want to calculate the checksum of the incoming request data, which is sometimes necessary for some APIs. Fortunately this is very simple to change by wrapping the input stream.

The following example calculates the SHA1 checksum of the incoming data as it gets read and stores it in the WSGI environment:

```
import hashlib

class ChecksumCalcStream(object):

    def __init__(self, stream):
        self._stream = stream
        self._hash = hashlib.sha1()

    def read(self, bytes):
        rv = self._stream.read(bytes)
        self._hash.update(rv)
        return rv

    def readline(self, size_hint):
        rv = self._stream.readline(size_hint)
        self._hash.update(rv)
        return rv

def generate_checksum(request):
    env = request.environ
    stream = ChecksumCalcStream(env['wsgi.input'])
    env['wsgi.input'] = stream
    return stream._hash
```

To use this, all you need to do is to hook the calculating stream in before the request starts consuming data (e.g. be careful accessing `request.form` or anything of that nature; `before_request` handlers, for instance, should be careful not to access it).

Example usage:

```
@app.route('/special-api', methods=['POST'])
def special_api():
    hash = generate_checksum(request)
    # Accessing this parses the input stream
    files = request.files
    # At this point the hash is fully constructed.
    checksum = hash.hexdigest()
    return f"Hash was: {checksum}"
```

Single-Page Applications
========================

Flask can be used to serve Single-Page Applications (SPA) by placing static files produced by your frontend framework in a subfolder inside of your project. You will also need to create a catch-all endpoint that routes all requests to your SPA. The following example demonstrates how to serve an SPA along with an API:

```
from flask import Flask, jsonify

app = Flask(__name__, static_folder='app', static_url_path="/app")

@app.route("/heartbeat")
def heartbeat():
    return jsonify({"status": "healthy"})

@app.route('/', defaults={'path': ''})
@app.route('/<path:path>')
def catch_all(path):
    return app.send_static_file("index.html")
```
Using async and await
=====================

Changelog

New in version 2.0.

Routes, error handlers, before request, after request, and teardown functions can all be coroutine functions if Flask is installed with the `async` extra (`pip install flask[async]`). This allows views to be defined with `async def` and use `await`.

```
@app.route("/get-data")
async def get_data():
    data = await async_db_query(...)
    return jsonify(data)
```

Pluggable class-based views also support handlers that are implemented as coroutines. This applies to the [`dispatch_request()`](../api/index#flask.views.View.dispatch_request "flask.views.View.dispatch_request") method in views that inherit from the [`flask.views.View`](../api/index#flask.views.View "flask.views.View") class, as well as all the HTTP method handlers in views that inherit from the [`flask.views.MethodView`](../api/index#flask.views.MethodView "flask.views.MethodView") class.

Using `async` on Windows on Python 3.8

Python 3.8 has a bug related to asyncio on Windows. If you encounter something like `ValueError: set_wakeup_fd only works in main thread`, please upgrade to Python 3.9.

Using `async` with greenlet

When using gevent or eventlet to serve an application or patch the runtime, greenlet>=1.0 is required. When using PyPy, PyPy>=7.3.7 is required.

Performance
-----------

Async functions require an event loop to run. Flask, as a WSGI application, uses one worker to handle one request/response cycle. When a request comes in to an async view, Flask will start an event loop in a thread, run the view function there, then return the result.

Each request still ties up one worker, even for async views. The upside is that you can run async code within a view, for example to make multiple concurrent database queries, HTTP requests to an external API, etc. However, the number of requests your application can handle at one time will remain the same.

**Async is not inherently faster than sync code.** Async is beneficial when performing concurrent IO-bound tasks, but will probably not improve CPU-bound tasks. Traditional Flask views will still be appropriate for most use cases, but Flask's async support enables writing and using code that wasn't possible natively before.

Background tasks
----------------

Async functions will run in an event loop until they complete, at which stage the event loop will stop. This means any additional spawned tasks that haven't completed when the async function completes will be cancelled. Therefore you cannot spawn background tasks, for example via `asyncio.create_task`.

If you wish to use background tasks it is best to use a task queue to trigger background work, rather than spawn tasks in a view function. With that in mind you can spawn asyncio tasks by serving Flask with an ASGI server and utilising the asgiref WsgiToAsgi adapter as described in [ASGI](../deploying/asgi/index). This works as the adapter creates an event loop that runs continually.
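Tying the Performance section's point together: within a single async view you can fan out concurrent IO-bound work with `asyncio.gather`. A minimal sketch with hypothetical coroutines:

```
import asyncio

@app.route("/dashboard")
async def dashboard():
    # Both coroutines run concurrently within this one request.
    user, posts = await asyncio.gather(
        fetch_user_async(),   # hypothetical async database query
        fetch_posts_async(),  # hypothetical async HTTP call
    )
    return {"user": user, "posts": posts}
```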
When to use Quart instead
-------------------------

Flask's async support is less performant than async-first frameworks due to the way it is implemented. If you have a mainly async codebase it would make sense to consider [Quart](https://github.com/pallets/quart). Quart is a reimplementation of Flask based on the [ASGI](https://asgi.readthedocs.io/en/latest/) standard instead of WSGI. This allows it to handle many concurrent requests, long running requests, and websockets without requiring multiple worker processes or threads.

It has also already been possible to run Flask with Gevent or Eventlet to get many of the benefits of async request handling. These libraries patch low-level Python functions to accomplish this, whereas `async`/`await` and ASGI use standard, modern Python capabilities. Deciding whether you should use Flask, Quart, or something else is ultimately up to understanding the specific needs of your project.

Extensions
----------

Flask extensions predating Flask's async support do not expect async views. If they provide decorators to add functionality to views, those will probably not work with async views because they will not await the function or be awaitable. Other functions they provide will not be awaitable either and will probably be blocking if called within an async view.

Extension authors can support async functions by utilising the [`flask.Flask.ensure_sync()`](../api/index#flask.Flask.ensure_sync "flask.Flask.ensure_sync") method. For example, if the extension provides a view function decorator, add `ensure_sync` before calling the decorated function:

```
from functools import wraps
from flask import current_app

def extension(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        ...  # Extension logic
        return current_app.ensure_sync(func)(*args, **kwargs)
    return wrapper
```

Check the changelog of the extension you want to use to see if they've implemented async support, or make a feature request or PR to them.

Other event loops
-----------------

At the moment Flask only supports [`asyncio`](https://docs.python.org/3/library/asyncio.html#module-asyncio "(in Python v3.11)"). It's possible to override [`flask.Flask.ensure_sync()`](../api/index#flask.Flask.ensure_sync "flask.Flask.ensure_sync") to change how async functions are wrapped to use a different library.

Security Considerations
=======================

Web applications usually face all kinds of security problems and it's very hard to get everything right. Flask tries to solve a few of these things for you, but there are a couple more you have to take care of yourself.

Cross-Site Scripting (XSS)
--------------------------

Cross-site scripting is the concept of injecting arbitrary HTML (and with it JavaScript) into the context of a website. To remedy this, developers have to properly escape text so that it cannot include arbitrary HTML tags. For more information on that have a look at the Wikipedia article on [Cross-Site Scripting](https://en.wikipedia.org/wiki/Cross-site_scripting).

Flask configures Jinja2 to automatically escape all values unless explicitly told otherwise. This should rule out all XSS problems caused in templates, but there are still other places where you have to be careful:

* generating HTML without the help of Jinja2
* calling `Markup` on data submitted by users
* sending out HTML from uploaded files; never do that, use the `Content-Disposition: attachment` header to prevent that problem
* sending out text files from uploaded files; some browsers use content-type guessing based on the first few bytes, so users could trick a browser into executing HTML

Another thing that is very important is unquoted attributes. While Jinja2 can protect you from XSS issues by escaping HTML, there is one thing it cannot protect you from: XSS by attribute injection.
To counter this possible attack vector, be sure to always quote your attributes with either double or single quotes when using Jinja expressions in them:

```
<input value="{{ value }}">
```

Why is this necessary? Because if you were not doing that, an attacker could easily inject custom JavaScript handlers. For example an attacker could inject this piece of HTML+JavaScript:

```
onmouseover=alert(document.cookie)
```

When the user would then move the mouse over the input, the cookie would be presented to the user in an alert window. But instead of showing the cookie to the user, a good attacker might also execute any other JavaScript code. In combination with CSS injections the attacker might even make the element fill out the entire page so that the user would just have to have the mouse anywhere on the page to trigger the attack.

There is one class of XSS issues that Jinja's escaping does not protect against. The `a` tag's `href` attribute can contain a `javascript:` URI, which the browser will execute when clicked if not secured properly.

```
<a href="{{ value }}">click here</a>
<a href="javascript:alert('unsafe');">click here</a>
```

To prevent this, you'll need to set the [Content Security Policy (CSP)](#security-csp) response header.

Cross-Site Request Forgery (CSRF)
---------------------------------

Another big problem is CSRF. This is a very complex topic and I won't outline it here in detail, just mention what it is and how to theoretically prevent it.

If your authentication information is stored in cookies, you have implicit state management. The state of "being logged in" is controlled by a cookie, and that cookie is sent with each request to a page. Unfortunately that includes requests triggered by 3rd party sites. If you don't keep that in mind, some people might be able to trick your application's users with social engineering to do stupid things without them knowing.

Say you have a specific URL that, when you send `POST` requests to it, will delete a user's profile (say `http://example.com/user/delete`). If an attacker now creates a page that sends a post request to that page with some JavaScript, they just have to trick some users into loading that page and their profiles will end up being deleted.

Imagine you were to run Facebook with millions of concurrent users and someone would send out links to images of little kittens. When users would go to that page, their profiles would get deleted while they are looking at images of fluffy cats.

How can you prevent that? Basically for each request that modifies content on the server you would have to use a one-time token, store that in the cookie **and** also transmit it with the form data. After receiving the data on the server again, you would then have to compare the two tokens and ensure they are equal.

Why does Flask not do that for you? The ideal place for this to happen is the form validation framework, which does not exist in Flask.
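To make the token dance above concrete, here is a minimal hedged sketch of the classic approach (in practice an extension such as Flask-WTF's `CSRFProtect` handles this for you; all names here are illustrative):

```
import secrets
from flask import abort, request, session

@app.before_request
def csrf_protect():
    if request.method == "POST":
        # Pop so each token can only be used once.
        token = session.pop('_csrf_token', None)
        if not token or token != request.form.get('_csrf_token'):
            abort(400)

def generate_csrf_token():
    if '_csrf_token' not in session:
        session['_csrf_token'] = secrets.token_hex(16)
    return session['_csrf_token']

# Expose the token to templates, e.g.
# <input type="hidden" name="_csrf_token" value="{{ csrf_token() }}">
app.jinja_env.globals['csrf_token'] = generate_csrf_token
```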
JSON Security
-------------

In Flask 0.10 and lower, `jsonify()` did not serialize top-level arrays to JSON. This was because of a security vulnerability in ECMAScript 4. ECMAScript 5 closed this vulnerability, so only extremely old browsers are still vulnerable. All of these browsers have [other more serious vulnerabilities](https://github.com/pallets/flask/issues/248#issuecomment-59934857), so this behavior was changed and `jsonify()` now supports serializing arrays.

Security Headers
----------------

Browsers recognize various response headers in order to control security. We recommend reviewing each of the headers below for use in your application. The [Flask-Talisman](https://github.com/GoogleCloudPlatform/flask-talisman) extension can be used to manage HTTPS and the security headers for you.

### HTTP Strict Transport Security (HSTS)

Tells the browser to convert all HTTP requests to HTTPS, preventing man-in-the-middle (MITM) attacks.

```
response.headers['Strict-Transport-Security'] = 'max-age=31536000; includeSubDomains'
```

* <https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security>

### Content Security Policy (CSP)

Tell the browser where it can load various types of resource from. This header should be used whenever possible, but requires some work to define the correct policy for your site. A very strict policy would be:

```
response.headers['Content-Security-Policy'] = "default-src 'self'"
```

* <https://csp.withgoogle.com/docs/index.html>
* <https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy>

### X-Content-Type-Options

Forces the browser to honor the response content type instead of trying to detect it, which can be abused to generate a cross-site scripting (XSS) attack.

```
response.headers['X-Content-Type-Options'] = 'nosniff'
```

* <https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options>

### X-Frame-Options

Prevents external sites from embedding your site in an `iframe`. This prevents a class of attacks where clicks in the outer frame can be translated invisibly to clicks on your page's elements. This is also known as "clickjacking".

```
response.headers['X-Frame-Options'] = 'SAMEORIGIN'
```

* <https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options>

### Set-Cookie options

These options can be added to a `Set-Cookie` header to improve their security. Flask has configuration options to set these on the session cookie. They can be set on other cookies too.

* `Secure` limits cookies to HTTPS traffic only.
* `HttpOnly` protects the contents of cookies from being read with JavaScript.
* `SameSite` restricts how cookies are sent with requests from external sites. Can be set to `'Lax'` (recommended) or `'Strict'`. `Lax` prevents sending cookies with CSRF-prone requests from external sites, such as submitting a form. `Strict` prevents sending cookies with all external requests, including following regular links.

```
app.config.update(
    SESSION_COOKIE_SECURE=True,
    SESSION_COOKIE_HTTPONLY=True,
    SESSION_COOKIE_SAMESITE='Lax',
)

response.set_cookie('username', 'flask', secure=True, httponly=True, samesite='Lax')
```

Specifying `Expires` or `Max-Age` options will remove the cookie after the given time, or the current time plus the age, respectively. If neither option is set, the cookie will be removed when the browser is closed.

```
# cookie expires after 10 minutes
response.set_cookie('snakes', '3', max_age=600)
```

For the session cookie, if [`session.permanent`](../api/index#flask.session.permanent "flask.session.permanent") is set, then [`PERMANENT_SESSION_LIFETIME`](../config/index#PERMANENT_SESSION_LIFETIME "PERMANENT_SESSION_LIFETIME") is used to set the expiration. Flask's default cookie implementation validates that the cryptographic signature is not older than this value. Lowering this value may help mitigate replay attacks, where intercepted cookies can be sent at a later time.

```
app.config.update(
    PERMANENT_SESSION_LIFETIME=600
)

@app.route('/login', methods=['POST'])
def login():
    ...
    session.clear()
    session['user_id'] = user.id
    session.permanent = True
    ...
```

Use `itsdangerous.TimedSerializer` to sign and validate other cookie values (or any values that need secure signatures).

* <https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies>
* <https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie>

### HTTP Public Key Pinning (HPKP)

This tells the browser to authenticate with the server using only the specific certificate key to prevent MITM attacks.

Warning

Be careful when enabling this, as it is very difficult to undo if you set up or upgrade your key incorrectly.

* <https://developer.mozilla.org/en-US/docs/Web/HTTP/Public_Key_Pinning>

Copy/Paste to Terminal
----------------------

Hidden characters such as the backspace character (`\b`, `^H`) can cause text to render differently in HTML than how it is interpreted if [pasted into a terminal](https://security.stackexchange.com/q/39118).

For example, `import y\bose\bm\bi\bt\be\b` renders as `import yosemite` in HTML, but the backspaces are applied when pasted into a terminal, and it becomes `import os`. If you expect users to copy and paste untrusted code from your site, such as from comments posted by users on a technical blog, consider applying extra filtering, such as replacing all `\b` characters.

```
body = body.replace("\b", "")
```

Most modern terminals will warn about and remove hidden characters when pasting, so this isn't strictly necessary. It's also possible to craft dangerous commands in other ways that aren't possible to filter. Depending on your site's use case, it may be good to show a warning about copying code in general.

Changes
=======

Version 2.3.3
-------------

Unreleased

* Python 3.12 compatibility.
* Require Werkzeug >= 2.3.6.
* Refactor how an app's root and instance paths are determined. [#5160](https://github.com/pallets/flask/issues/5160)

Version 2.3.2
-------------

Released 2023-05-01

* Set `Vary: Cookie` header when the session is accessed, modified, or refreshed.
* Update Werkzeug requirement to >=2.3.3 to apply recent bug fixes.

Version 2.3.1
-------------

Released 2023-04-25

* Restore deprecated `from flask import Markup`. [#5084](https://github.com/pallets/flask/issues/5084)

Version 2.3.0
-------------

Released 2023-04-25

* Drop support for Python 3.7. [#5072](https://github.com/pallets/flask/pull/5072)
* Update minimum requirements to the latest versions: Werkzeug>=2.3.0, Jinja2>=3.1.2, itsdangerous>=2.1.2, click>=8.1.3.
* Remove previously deprecated code. [#4995](https://github.com/pallets/flask/pull/4995)
  + The `push` and `pop` methods of the deprecated `_app_ctx_stack` and `_request_ctx_stack` objects are removed. `top` still exists to give extensions more time to update, but it will be removed.
  + The `FLASK_ENV` environment variable, `ENV` config key, and `app.env` property are removed.
  + The `session_cookie_name`, `send_file_max_age_default`, `use_x_sendfile`, `propagate_exceptions`, and `templates_auto_reload` properties on `app` are removed.
  + The `JSON_AS_ASCII`, `JSON_SORT_KEYS`, `JSONIFY_MIMETYPE`, and `JSONIFY_PRETTYPRINT_REGULAR` config keys are removed.
  + The `app.before_first_request` and `bp.before_app_first_request` decorators are removed.
  + `json_encoder` and `json_decoder` attributes on app and blueprint, and the corresponding `json.JSONEncoder` and `JSONDecoder` classes, are removed.
+ The `json.htmlsafe_dumps` and `htmlsafe_dump` functions are removed. + Calling setup methods on blueprints after registration is an error instead of a warning. [#4997](https://github.com/pallets/flask/pull/4997) * Importing `escape` and `Markup` from `flask` is deprecated. Import them directly from `markupsafe` instead. [#4996](https://github.com/pallets/flask/pull/4996) * The `app.got_first_request` property is deprecated. [#4997](https://github.com/pallets/flask/pull/4997) * The `locked_cached_property` decorator is deprecated. Use a lock inside the decorated function if locking is needed. [#4993](https://github.com/pallets/flask/issues/4993) * Signals are always available. `blinker>=1.6.2` is a required dependency. The `signals_available` attribute is deprecated. [#5056](https://github.com/pallets/flask/issues/5056) * Signals support `async` subscriber functions. [#5049](https://github.com/pallets/flask/pull/5049) * Remove uses of locks that could cause requests to block each other very briefly. [#4993](https://github.com/pallets/flask/issues/4993) * Use modern packaging metadata with `pyproject.toml` instead of `setup.cfg`. [#4947](https://github.com/pallets/flask/pull/4947) * Ensure subdomains are applied with nested blueprints. [#4834](https://github.com/pallets/flask/issues/4834) * `config.from_file` can use `text=False` to indicate that the parser wants a binary file instead. [#4989](https://github.com/pallets/flask/issues/4989) * If a blueprint is created with an empty name it raises a `ValueError`. [#5010](https://github.com/pallets/flask/issues/5010) * `SESSION_COOKIE_DOMAIN` does not fall back to `SERVER_NAME`. The default is not to set the domain, which modern browsers interpret as an exact match rather than a subdomain match. Warnings about `localhost` and IP addresses are also removed. [#5051](https://github.com/pallets/flask/issues/5051) * The `routes` command shows each rule’s `subdomain` or `host` when domain matching is in use. [#5004](https://github.com/pallets/flask/issues/5004) * Use postponed evaluation of annotations. [#5071](https://github.com/pallets/flask/pull/5071) Version 2.2.5 ------------- Released 2023-05-02 * Update for compatibility with Werkzeug 2.3.3. * Set `Vary: Cookie` header when the session is accessed, modified, or refreshed. Version 2.2.4 ------------- Released 2023-04-25 * Update for compatibility with Werkzeug 2.3. Version 2.2.3 ------------- Released 2023-02-15 * Autoescape is enabled by default for `.svg` template files. [#4831](https://github.com/pallets/flask/issues/4831) * Fix the type of `template_folder` to accept `pathlib.Path`. [#4892](https://github.com/pallets/flask/issues/4892) * Add `--debug` option to the `flask run` command. [#4777](https://github.com/pallets/flask/issues/4777) Version 2.2.2 ------------- Released 2022-08-08 * Update Werkzeug dependency to >= 2.2.2. This includes fixes related to the new faster router, header parsing, and the development server. [#4754](https://github.com/pallets/flask/pull/4754) * Fix the default value for `app.env` to be `"production"`. This attribute remains deprecated. [#4740](https://github.com/pallets/flask/issues/4740) Version 2.2.1 ------------- Released 2022-08-03 * Setting or accessing `json_encoder` or `json_decoder` raises a deprecation warning. [#4732](https://github.com/pallets/flask/issues/4732) Version 2.2.0 ------------- Released 2022-08-01 * Remove previously deprecated code. 
[#4667](https://github.com/pallets/flask/pull/4667)
  + Old names for some `send_file` parameters have been removed. `download_name` replaces `attachment_filename`, `max_age` replaces `cache_timeout`, and `etag` replaces `add_etags`. Additionally, `path` replaces `filename` in `send_from_directory`.
  + The `RequestContext.g` property returning `AppContext.g` is removed.
* Update Werkzeug dependency to >= 2.2.
* The app and request contexts are managed using Python context vars directly rather than Werkzeug’s `LocalStack`. This should result in better performance and memory use. [#4682](https://github.com/pallets/flask/pull/4682)
  + Extension maintainers, be aware that `_app_ctx_stack.top` and `_request_ctx_stack.top` are deprecated. Store data on `g` instead using a unique prefix, like `g._extension_name_attr`.
* The `FLASK_ENV` environment variable and `app.env` attribute are deprecated, removing the distinction between development and debug mode. Debug mode should be controlled directly using the `--debug` option or `app.run(debug=True)`. [#4714](https://github.com/pallets/flask/issues/4714)
* Some attributes that proxied config keys on `app` are deprecated: `session_cookie_name`, `send_file_max_age_default`, `use_x_sendfile`, `propagate_exceptions`, and `templates_auto_reload`. Use the relevant config keys instead. [#4716](https://github.com/pallets/flask/issues/4716)
* Add new customization points to the `Flask` app object for many previously global behaviors.
  + `flask.url_for` will call `app.url_for`. [#4568](https://github.com/pallets/flask/issues/4568)
  + `flask.abort` will call `app.aborter`. `Flask.aborter_class` and `Flask.make_aborter` can be used to customize this aborter. [#4567](https://github.com/pallets/flask/issues/4567)
  + `flask.redirect` will call `app.redirect`. [#4569](https://github.com/pallets/flask/issues/4569)
  + `flask.json` is an instance of `JSONProvider`. A different provider can be set to use a different JSON library. `flask.jsonify` will call `app.json.response`, other functions in `flask.json` will call corresponding functions in `app.json`. [#4692](https://github.com/pallets/flask/pull/4692)
* JSON configuration is moved to attributes on the default `app.json` provider. `JSON_AS_ASCII`, `JSON_SORT_KEYS`, `JSONIFY_MIMETYPE`, and `JSONIFY_PRETTYPRINT_REGULAR` are deprecated. [#4692](https://github.com/pallets/flask/pull/4692)
* Setting custom `json_encoder` and `json_decoder` classes on the app or a blueprint, and the corresponding `json.JSONEncoder` and `JSONDecoder` classes, are deprecated. JSON behavior can now be overridden using the `app.json` provider interface. [#4692](https://github.com/pallets/flask/pull/4692)
* `json.htmlsafe_dumps` and `json.htmlsafe_dump` are deprecated; the function is built into Jinja now. [#4692](https://github.com/pallets/flask/pull/4692)
* Refactor `register_error_handler` to consolidate error checking. Rewrite some error messages to be more consistent. [#4559](https://github.com/pallets/flask/issues/4559)
* Using Blueprint decorators and functions intended for setup after registering the blueprint will show a warning. In the next version, this will become an error just like the application setup methods. [#4571](https://github.com/pallets/flask/issues/4571)
* `before_first_request` is deprecated. Run setup code when creating the application instead. [#4605](https://github.com/pallets/flask/issues/4605)
* Added the `View.init_every_request` class attribute.
If a view subclass sets this to `False`, the view will not create a new instance on every request. [#2520](https://github.com/pallets/flask/issues/2520). * A `flask.cli.FlaskGroup` Click group can be nested as a sub-command in a custom CLI. [#3263](https://github.com/pallets/flask/issues/3263) * Add `--app` and `--debug` options to the `flask` CLI, instead of requiring that they are set through environment variables. [#2836](https://github.com/pallets/flask/issues/2836) * Add `--env-file` option to the `flask` CLI. This allows specifying a dotenv file to load in addition to `.env` and `.flaskenv`. [#3108](https://github.com/pallets/flask/issues/3108) * It is no longer required to decorate custom CLI commands on `app.cli` or `blueprint.cli` with `@with_appcontext`, an app context will already be active at that point. [#2410](https://github.com/pallets/flask/issues/2410) * `SessionInterface.get_expiration_time` uses a timezone-aware value. [#4645](https://github.com/pallets/flask/pull/4645) * View functions can return generators directly instead of wrapping them in a `Response`. [#4629](https://github.com/pallets/flask/pull/4629) * Add `stream_template` and `stream_template_string` functions to render a template as a stream of pieces. [#4629](https://github.com/pallets/flask/pull/4629) * A new implementation of context preservation during debugging and testing. [#4666](https://github.com/pallets/flask/pull/4666) + `request`, `g`, and other context-locals point to the correct data when running code in the interactive debugger console. [#2836](https://github.com/pallets/flask/issues/2836) + Teardown functions are always run at the end of the request, even if the context is preserved. They are also run after the preserved context is popped. + `stream_with_context` preserves context separately from a `with client` block. It will be cleaned up when `response.get_data()` or `response.close()` is called. * Allow returning a list from a view function, to convert it to a JSON response like a dict is. [#4672](https://github.com/pallets/flask/issues/4672) * When type checking, allow `TypedDict` to be returned from view functions. [#4695](https://github.com/pallets/flask/pull/4695) * Remove the `--eager-loading/--lazy-loading` options from the `flask run` command. The app is always eager loaded the first time, then lazily loaded in the reloader. The reloader always prints errors immediately but continues serving. Remove the internal `DispatchingApp` middleware used by the previous implementation. [#4715](https://github.com/pallets/flask/issues/4715) Version 2.1.3 ------------- Released 2022-07-13 * Inline some optional imports that are only used for certain CLI commands. [#4606](https://github.com/pallets/flask/pull/4606) * Relax type annotation for `after_request` functions. [#4600](https://github.com/pallets/flask/issues/4600) * `instance_path` for namespace packages uses the path closest to the imported submodule. [#4610](https://github.com/pallets/flask/issues/4610) * Clearer error message when `render_template` and `render_template_string` are used outside an application context. [#4693](https://github.com/pallets/flask/pull/4693) Version 2.1.2 ------------- Released 2022-04-28 * Fix type annotation for `json.loads`, it accepts str or bytes. [#4519](https://github.com/pallets/flask/issues/4519) * The `--cert` and `--key` options on `flask run` can be given in either order. 
[#4459](https://github.com/pallets/flask/issues/4459)

Version 2.1.1
-------------

Released 2022-03-30

* Set the minimum required version of importlib_metadata to 3.6.0, which is required on Python < 3.10. [#4502](https://github.com/pallets/flask/issues/4502)

Version 2.1.0
-------------

Released 2022-03-28

* Drop support for Python 3.6. [#4335](https://github.com/pallets/flask/pull/4335)
* Update Click dependency to >= 8.0. [#4008](https://github.com/pallets/flask/pull/4008)
* Remove previously deprecated code. [#4337](https://github.com/pallets/flask/pull/4337)
  + The CLI does not pass `script_info` to app factory functions.
  + `config.from_json` is replaced by `config.from_file(name, load=json.load)`.
  + `json` functions no longer take an `encoding` parameter.
  + `safe_join` is removed, use `werkzeug.utils.safe_join` instead.
  + `total_seconds` is removed, use `timedelta.total_seconds` instead.
  + The same blueprint cannot be registered with the same name. Use `name=` when registering to specify a unique name.
  + The test client’s `as_tuple` parameter is removed. Use `response.request.environ` instead. [#4417](https://github.com/pallets/flask/pull/4417)
* Some parameters in `send_file` and `send_from_directory` were renamed in 2.0. The deprecation period for the old names is extended to 2.2. Be sure to test with deprecation warnings visible.
  + `attachment_filename` is renamed to `download_name`.
  + `cache_timeout` is renamed to `max_age`.
  + `add_etags` is renamed to `etag`.
  + `filename` is renamed to `path`.
* The `RequestContext.g` property is deprecated. Use `g` directly or `AppContext.g` instead. [#3898](https://github.com/pallets/flask/issues/3898)
* `copy_current_request_context` can decorate async functions. [#4303](https://github.com/pallets/flask/pull/4303)
* The CLI uses `importlib.metadata` instead of `setuptools` to load command entry points. [#4419](https://github.com/pallets/flask/issues/4419)
* Overriding `FlaskClient.open` will not cause an error on redirect. [#3396](https://github.com/pallets/flask/issues/3396)
* Add an `--exclude-patterns` option to the `flask run` CLI command to specify patterns that will be ignored by the reloader. [#4188](https://github.com/pallets/flask/issues/4188)
* When using lazy loading (the default with the debugger), the Click context from the `flask run` command remains available in the loader thread. [#4460](https://github.com/pallets/flask/issues/4460)
* Deleting the session cookie uses the `httponly` flag. [#4485](https://github.com/pallets/flask/issues/4485)
* Relax typing for `errorhandler` to allow the user to use more precise types and decorate the same function multiple times. [#4095](https://github.com/pallets/flask/issues/4095), [#4295](https://github.com/pallets/flask/issues/4295), [#4297](https://github.com/pallets/flask/issues/4297)
* Fix typing for `__exit__` methods for better compatibility with `ExitStack`. [#4474](https://github.com/pallets/flask/issues/4474)
* From Werkzeug, for redirect responses the `Location` header URL will remain relative, and exclude the scheme and domain, by default. [#4496](https://github.com/pallets/flask/pull/4496)
* Add `Config.from_prefixed_env()` to load config values from environment variables that start with `FLASK_` or another prefix. This parses values as JSON by default, and allows setting keys in nested dicts.
[#4479](https://github.com/pallets/flask/pull/4479) Version 2.0.3 ------------- Released 2022-02-14 * The test client’s `as_tuple` parameter is deprecated and will be removed in Werkzeug 2.1. It is now also deprecated in Flask, to be removed in Flask 2.1, while remaining compatible with both in 2.0.x. Use `response.request.environ` instead. [#4341](https://github.com/pallets/flask/pull/4341) * Fix type annotation for `errorhandler` decorator. [#4295](https://github.com/pallets/flask/issues/4295) * Revert a change to the CLI that caused it to hide `ImportError` tracebacks when importing the application. [#4307](https://github.com/pallets/flask/issues/4307) * `app.json_encoder` and `json_decoder` are only passed to `dumps` and `loads` if they have custom behavior. This improves performance, mainly on PyPy. [#4349](https://github.com/pallets/flask/issues/4349) * Clearer error message when `after_this_request` is used outside a request context. [#4333](https://github.com/pallets/flask/issues/4333) Version 2.0.2 ------------- Released 2021-10-04 * Fix type annotation for `teardown_*` methods. [#4093](https://github.com/pallets/flask/issues/4093) * Fix type annotation for `before_request` and `before_app_request` decorators. [#4104](https://github.com/pallets/flask/issues/4104) * Fixed the issue where typing requires template global decorators to accept functions with no arguments. [#4098](https://github.com/pallets/flask/issues/4098) * Support View and MethodView instances with async handlers. [#4112](https://github.com/pallets/flask/issues/4112) * Enhance typing of `app.errorhandler` decorator. [#4095](https://github.com/pallets/flask/issues/4095) * Fix registering a blueprint twice with differing names. [#4124](https://github.com/pallets/flask/issues/4124) * Fix the type of `static_folder` to accept `pathlib.Path`. [#4150](https://github.com/pallets/flask/issues/4150) * `jsonify` handles `decimal.Decimal` by encoding to `str`. [#4157](https://github.com/pallets/flask/issues/4157) * Correctly handle raising deferred errors in CLI lazy loading. [#4096](https://github.com/pallets/flask/issues/4096) * The CLI loader handles `**kwargs` in a `create_app` function. [#4170](https://github.com/pallets/flask/issues/4170) * Fix the order of `before_request` and other callbacks that trigger before the view returns. They are called from the app down to the closest nested blueprint. [#4229](https://github.com/pallets/flask/issues/4229) Version 2.0.1 ------------- Released 2021-05-21 * Re-add the `filename` parameter in `send_from_directory`. The `filename` parameter has been renamed to `path`, the old name is deprecated. [#4019](https://github.com/pallets/flask/pull/4019) * Mark top-level names as exported so type checking understands imports in user projects. [#4024](https://github.com/pallets/flask/issues/4024) * Fix type annotation for `g` and inform mypy that it is a namespace object that has arbitrary attributes. [#4020](https://github.com/pallets/flask/issues/4020) * Fix some types that weren’t available in Python 3.6.0. [#4040](https://github.com/pallets/flask/issues/4040) * Improve typing for `send_file`, `send_from_directory`, and `get_send_file_max_age`. [#4044](https://github.com/pallets/flask/issues/4044), [#4026](https://github.com/pallets/flask/pull/4026) * Show an error when a blueprint name contains a dot. The `.` has special meaning, it is used to separate (nested) blueprint names and the endpoint name. 
[#4041](https://github.com/pallets/flask/issues/4041)
* Combine URL prefixes when nesting blueprints that were created with a `url_prefix` value. [#4037](https://github.com/pallets/flask/issues/4037)
* Revert a change to the order that URL matching was done. The URL is again matched after the session is loaded, so the session is available in custom URL converters. [#4053](https://github.com/pallets/flask/issues/4053)
* Re-add deprecated `Config.from_json`, which was accidentally removed early. [#4078](https://github.com/pallets/flask/issues/4078)
* Improve typing for some functions using `Callable` in their type signatures, focusing on decorator factories. [#4060](https://github.com/pallets/flask/issues/4060)
* Nested blueprints are registered with their dotted name. This allows different blueprints with the same name to be nested at different locations. [#4069](https://github.com/pallets/flask/issues/4069)
* `register_blueprint` takes a `name` option to change the (pre-dotted) name the blueprint is registered with. This allows the same blueprint to be registered multiple times with unique names for `url_for`. Registering the same blueprint with the same name multiple times is deprecated. [#1091](https://github.com/pallets/flask/issues/1091)
* Improve typing for `stream_with_context`. [#4052](https://github.com/pallets/flask/issues/4052)

Version 2.0.0
-------------

Released 2021-05-11

* Drop support for Python 2 and 3.5.
* Bump minimum versions of other Pallets projects: Werkzeug >= 2, Jinja2 >= 3, MarkupSafe >= 2, ItsDangerous >= 2, Click >= 8. Be sure to check the change logs for each project. For better compatibility with other applications (e.g. Celery) that still require Click 7, there is no hard dependency on Click 8 yet, but using Click 7 will trigger a DeprecationWarning and Flask 2.1 will depend on Click 8.
* JSON support no longer uses simplejson. To use another JSON module, override `app.json_encoder` and `json_decoder`. [#3555](https://github.com/pallets/flask/issues/3555)
* The `encoding` option to JSON functions is deprecated. [#3562](https://github.com/pallets/flask/pull/3562)
* Passing `script_info` to app factory functions is deprecated. This was not portable outside the `flask` command. Use `click.get_current_context().obj` if it’s needed. [#3552](https://github.com/pallets/flask/issues/3552)
* The CLI shows better error messages when the app failed to load when looking up commands. [#2741](https://github.com/pallets/flask/issues/2741)
* Add `SessionInterface.get_cookie_name` to allow setting the session cookie name dynamically. [#3369](https://github.com/pallets/flask/pull/3369)
* Add `Config.from_file` to load config using arbitrary file loaders, such as `toml.load` or `json.load`. `Config.from_json` is deprecated in favor of this. [#3398](https://github.com/pallets/flask/pull/3398)
* The `flask run` command will only defer errors on reload. Errors present during the initial call will cause the server to exit with the traceback immediately. [#3431](https://github.com/pallets/flask/issues/3431)
* `send_file` raises a `ValueError` when passed an `io` object in text mode. Previously, it would respond with 200 OK and an empty file. [#3358](https://github.com/pallets/flask/issues/3358)
* When using ad-hoc certificates, check for the cryptography library instead of PyOpenSSL. [#3492](https://github.com/pallets/flask/pull/3492)
* When specifying a factory function with `FLASK_APP`, keyword arguments can be passed.
[#3553](https://github.com/pallets/flask/issues/3553)
* When loading a `.env` or `.flaskenv` file, the current working directory is no longer changed to the location of the file. [#3560](https://github.com/pallets/flask/pull/3560)
* When returning a `(response, headers)` tuple from a view, the headers replace rather than extend existing headers on the response. For example, this allows setting the `Content-Type` for `jsonify()`. Use `response.headers.extend()` if extending is desired. [#3628](https://github.com/pallets/flask/issues/3628)
* The `Scaffold` class provides a common API for the `Flask` and `Blueprint` classes. `Blueprint` information is stored in attributes just like `Flask`, rather than opaque lambda functions. This is intended to improve consistency and maintainability. [#3215](https://github.com/pallets/flask/issues/3215)
* Include `samesite` and `secure` options when removing the session cookie. [#3726](https://github.com/pallets/flask/pull/3726)
* Support passing a `pathlib.Path` to `static_folder`. [#3579](https://github.com/pallets/flask/pull/3579)
* `send_file` and `send_from_directory` are wrappers around the implementations in `werkzeug.utils`. [#3828](https://github.com/pallets/flask/pull/3828)
* Some `send_file` parameters have been renamed; the old names are deprecated. `attachment_filename` is renamed to `download_name`. `cache_timeout` is renamed to `max_age`. `add_etags` is renamed to `etag`. [#3828](https://github.com/pallets/flask/pull/3828), [#3883](https://github.com/pallets/flask/pull/3883)
* `send_file` passes `download_name` even if `as_attachment=False` by using `Content-Disposition: inline`. [#3828](https://github.com/pallets/flask/pull/3828)
* `send_file` sets `conditional=True` and `max_age=None` by default. `Cache-Control` is set to `no-cache` if `max_age` is not set, otherwise `public`. This tells browsers to validate conditional requests instead of using a timed cache. [#3828](https://github.com/pallets/flask/pull/3828)
* `helpers.safe_join` is deprecated. Use `werkzeug.utils.safe_join` instead. [#3828](https://github.com/pallets/flask/pull/3828)
* The request context does route matching before opening the session. This could allow a session interface to change behavior based on `request.endpoint`. [#3776](https://github.com/pallets/flask/issues/3776)
* Use Jinja’s implementation of the `|tojson` filter. [#3881](https://github.com/pallets/flask/issues/3881)
* Add route decorators for common HTTP methods. For example, `@app.post("/login")` is a shortcut for `@app.route("/login", methods=["POST"])`. [#3907](https://github.com/pallets/flask/pull/3907)
* Support async views, error handlers, before and after request, and teardown functions. [#3412](https://github.com/pallets/flask/pull/3412)
* Support nesting blueprints. [#593](https://github.com/pallets/flask/issues/593), [#1548](https://github.com/pallets/flask/issues/1548), [#3923](https://github.com/pallets/flask/pull/3923)
* Set the default encoding to “UTF-8” when loading `.env` and `.flaskenv` files to allow the use of non-ASCII characters. [#3931](https://github.com/pallets/flask/issues/3931)
* `flask shell` sets up tab and history completion like the default `python` shell if `readline` is installed. [#3941](https://github.com/pallets/flask/issues/3941)
* `helpers.total_seconds()` is deprecated. Use `timedelta.total_seconds()` instead. [#3962](https://github.com/pallets/flask/pull/3962)
* Add type hinting. [#3973](https://github.com/pallets/flask/pull/3973).
Version 1.1.4
-------------

Released 2021-05-13

* Update `static_folder` to use `_compat.fspath` instead of `os.fspath` to continue supporting Python < 3.6. [#4050](https://github.com/pallets/flask/issues/4050)

Version 1.1.3
-------------

Released 2021-05-13

* Set maximum versions of Werkzeug, Jinja, Click, and ItsDangerous. [#4043](https://github.com/pallets/flask/issues/4043)
* Re-add support for passing a `pathlib.Path` for `static_folder`. [#3579](https://github.com/pallets/flask/pull/3579)

Version 1.1.2
-------------

Released 2020-04-03

* Work around an issue when running the `flask` command with an external debugger on Windows. [#3297](https://github.com/pallets/flask/issues/3297)
* The static route will not catch all URLs if the `Flask` `static_folder` argument ends with a slash. [#3452](https://github.com/pallets/flask/issues/3452)

Version 1.1.1
-------------

Released 2019-07-08

* The `flask.json_available` flag was added back for compatibility with some extensions. It will raise a deprecation warning when used, and will be removed in version 2.0.0. [#3288](https://github.com/pallets/flask/issues/3288)

Version 1.1.0
-------------

Released 2019-07-04

* Bump minimum Werkzeug version to >= 0.15.
* Drop support for Python 3.4.
* Error handlers for `InternalServerError` or `500` will always be passed an instance of `InternalServerError`. If they are invoked due to an unhandled exception, that original exception is now available as `e.original_exception` rather than being passed directly to the handler. The same is true if the handler is for the base `HTTPException`. This makes error handler behavior more consistent. [#3266](https://github.com/pallets/flask/pull/3266)
  + `Flask.finalize_request` is called for all unhandled exceptions even if there is no `500` error handler.
* `Flask.logger` takes the same name as `Flask.name` (the value passed as `Flask(import_name)`). This reverts 1.0’s behavior of always logging to `"flask.app"`, in order to support multiple apps in the same process. A warning will be shown if old configuration is detected that needs to be moved. [#2866](https://github.com/pallets/flask/issues/2866)
* `RequestContext.copy` includes the current session object in the request context copy. This prevents `session` pointing to an out-of-date object. [#2935](https://github.com/pallets/flask/issues/2935)
* Using the built-in RequestContext, unprintable Unicode characters in the Host header will result in an HTTP 400 response and not HTTP 500 as previously. [#2994](https://github.com/pallets/flask/pull/2994)
* `send_file` supports `PathLike` objects as described in [**PEP 519**](https://peps.python.org/pep-0519/), to support `pathlib` in Python 3. [#3059](https://github.com/pallets/flask/pull/3059)
* `send_file` supports `BytesIO` partial content. [#2957](https://github.com/pallets/flask/issues/2957)
* `open_resource` accepts the “rt” file mode. This still does the same thing as “r”. [#3163](https://github.com/pallets/flask/issues/3163)
* The `MethodView.methods` attribute set in a base class is used by subclasses. [#3138](https://github.com/pallets/flask/issues/3138)
* `Flask.jinja_options` is a `dict` instead of an `ImmutableDict` to allow easier configuration. Changes must still be made before creating the environment. [#3190](https://github.com/pallets/flask/pull/3190)
* Flask’s `JSONMixin` for the request and response wrappers was moved into Werkzeug. Use Werkzeug’s version with Flask-specific support. This bumps the Werkzeug dependency to >= 0.15.
[#3125](https://github.com/pallets/flask/issues/3125)
* The `flask` command entry point is simplified to take advantage of Werkzeug 0.15’s better reloader support. This bumps the Werkzeug dependency to >= 0.15. [#3022](https://github.com/pallets/flask/issues/3022)
* Support `static_url_path` that ends with a forward slash. [#3134](https://github.com/pallets/flask/issues/3134)
* Support empty `static_folder` without requiring setting an empty `static_url_path` as well. [#3124](https://github.com/pallets/flask/pull/3124)
* `jsonify` supports `dataclass` objects. [#3195](https://github.com/pallets/flask/pull/3195)
* Allow customizing the `Flask.url_map_class` used for routing. [#3069](https://github.com/pallets/flask/pull/3069)
* The development server port can be set to 0, which tells the OS to pick an available port. [#2926](https://github.com/pallets/flask/issues/2926)
* The return value from `cli.load_dotenv` is more consistent with the documentation. It will return `False` if python-dotenv is not installed, or if the given path isn’t a file. [#2937](https://github.com/pallets/flask/issues/2937)
* Signaling support has a stub for the `connect_via` method when the Blinker library is not installed. [#3208](https://github.com/pallets/flask/pull/3208)
* Add an `--extra-files` option to the `flask run` CLI command to specify extra files that will trigger the reloader on change. [#2897](https://github.com/pallets/flask/issues/2897)
* Allow returning a dictionary from a view function. Similar to how returning a string will produce a `text/html` response, returning a dict will call `jsonify` to produce an `application/json` response. [#3111](https://github.com/pallets/flask/pull/3111)
* Blueprints have a `cli` Click group like `app.cli`. CLI commands registered with a blueprint will be available as a group under the `flask` command. [#1357](https://github.com/pallets/flask/issues/1357).
* When using the test client as a context manager (`with client:`), all preserved request contexts are popped when the block exits, ensuring nested contexts are cleaned up correctly. [#3157](https://github.com/pallets/flask/pull/3157)
* Show a better error message when the view return type is not supported. [#3214](https://github.com/pallets/flask/issues/3214)
* `flask.testing.make_test_environ_builder()` has been deprecated in favour of a new class `flask.testing.EnvironBuilder`. [#3232](https://github.com/pallets/flask/pull/3232)
* The `flask run` command no longer fails if Python is not built with SSL support. Using the `--cert` option will show an appropriate error message. [#3211](https://github.com/pallets/flask/issues/3211)
* URL matching now occurs after the request context is pushed, rather than when it’s created. This allows custom URL converters to access the app and request contexts, such as to query a database for an id. [#3088](https://github.com/pallets/flask/issues/3088)

Version 1.0.4
-------------

Released 2019-07-04

* The key information for `BadRequestKeyError` is no longer cleared outside debug mode, so error handlers can still access it. This requires upgrading to Werkzeug 0.15.5. [#3249](https://github.com/pallets/flask/issues/3249)
* `send_file` URL-quotes the “:” and “/” characters for more compatible UTF-8 filename support in some browsers. [#3074](https://github.com/pallets/flask/issues/3074)
* Fixes for [**PEP 451**](https://peps.python.org/pep-0451/) import loaders and pytest 5.x.
[#3275](https://github.com/pallets/flask/issues/3275)
* Show message about dotenv on stderr instead of stdout. [#3285](https://github.com/pallets/flask/issues/3285)

Version 1.0.3
-------------

Released 2019-05-17

* `send_file` encodes filenames as ASCII instead of Latin-1 (ISO-8859-1). This fixes compatibility with Gunicorn, which is stricter about header encodings than [**PEP 3333**](https://peps.python.org/pep-3333/). [#2766](https://github.com/pallets/flask/issues/2766)
* Allow custom CLIs using `FlaskGroup` to set the debug flag without it always being overwritten based on environment variables. [#2765](https://github.com/pallets/flask/pull/2765)
* `flask --version` outputs Werkzeug’s version and simplifies the Python version. [#2825](https://github.com/pallets/flask/pull/2825)
* `send_file` handles an `attachment_filename` that is a native Python 2 string (bytes) with UTF-8 coded bytes. [#2933](https://github.com/pallets/flask/issues/2933)
* A catch-all error handler registered for `HTTPException` will not handle `RoutingException`, which is used internally during routing. This fixes the unexpected behavior that had been introduced in 1.0. [#2986](https://github.com/pallets/flask/pull/2986)
* Passing the `json` argument to `app.test_client` does not push/pop an extra app context. [#2900](https://github.com/pallets/flask/issues/2900)

Version 1.0.2
-------------

Released 2018-05-02

* Fix more backwards compatibility issues with merging slashes between a blueprint prefix and route. [#2748](https://github.com/pallets/flask/pull/2748)
* Fix error with `flask routes` command when there are no routes. [#2751](https://github.com/pallets/flask/issues/2751)

Version 1.0.1
-------------

Released 2018-04-29

* Fix registering partials (with no `__name__`) as view functions. [#2730](https://github.com/pallets/flask/pull/2730)
* Don’t treat lists returned from view functions the same as tuples. Only tuples are interpreted as response data. [#2736](https://github.com/pallets/flask/issues/2736)
* Extra slashes between a blueprint’s `url_prefix` and a route URL are merged. This fixes some backwards compatibility issues with the change in 1.0. [#2731](https://github.com/pallets/flask/issues/2731), [#2742](https://github.com/pallets/flask/issues/2742)
* Only trap `BadRequestKeyError` errors in debug mode, not all `BadRequest` errors. This allows `abort(400)` to continue working as expected. [#2735](https://github.com/pallets/flask/issues/2735)
* The `FLASK_SKIP_DOTENV` environment variable can be set to `1` to skip automatically loading dotenv files. [#2722](https://github.com/pallets/flask/issues/2722)

Version 1.0
-----------

Released 2018-04-26

* Python 2.6 and 3.3 are no longer supported.
* Bump minimum dependency versions to the latest stable versions: Werkzeug >= 0.14, Jinja >= 2.10, itsdangerous >= 0.24, Click >= 5.1. [#2586](https://github.com/pallets/flask/issues/2586)
* Skip `app.run` when a Flask application is run from the command line. This avoids some behavior that was confusing to debug.
* Change the default for `JSONIFY_PRETTYPRINT_REGULAR` to `False`. `json.jsonify` returns a compact format by default, and an indented format in debug mode. [#2193](https://github.com/pallets/flask/pull/2193)
* `Flask.__init__` accepts the `host_matching` argument and sets it on `Flask.url_map`. [#1559](https://github.com/pallets/flask/issues/1559)
* `Flask.__init__` accepts the `static_host` argument and passes it as the `host` argument when defining the static route.
[#1559](https://github.com/pallets/flask/issues/1559) * `send_file` supports Unicode in `attachment_filename`. [#2223](https://github.com/pallets/flask/pull/2223) * Pass `_scheme` argument from `url_for` to `Flask.handle_url_build_error`. [#2017](https://github.com/pallets/flask/pull/2017) * `Flask.add_url_rule` accepts the `provide_automatic_options` argument to disable adding the `OPTIONS` method. [#1489](https://github.com/pallets/flask/pull/1489) * `MethodView` subclasses inherit method handlers from base classes. [#1936](https://github.com/pallets/flask/pull/1936) * Errors caused while opening the session at the beginning of the request are handled by the app’s error handlers. [#2254](https://github.com/pallets/flask/pull/2254) * Blueprints gained `Blueprint.json_encoder` and `Blueprint.json_decoder` attributes to override the app’s encoder and decoder. [#1898](https://github.com/pallets/flask/pull/1898) * `Flask.make_response` raises `TypeError` instead of `ValueError` for bad response types. The error messages have been improved to describe why the type is invalid. [#2256](https://github.com/pallets/flask/pull/2256) * Add `routes` CLI command to output routes registered on the application. [#2259](https://github.com/pallets/flask/pull/2259) * Show warning when session cookie domain is a bare hostname or an IP address, as these may not behave properly in some browsers, such as Chrome. [#2282](https://github.com/pallets/flask/pull/2282) * Allow IP address as exact session cookie domain. [#2282](https://github.com/pallets/flask/pull/2282) * `SESSION_COOKIE_DOMAIN` is set if it is detected through `SERVER_NAME`. [#2282](https://github.com/pallets/flask/pull/2282) * Auto-detect zero-argument app factory called `create_app` or `make_app` from `FLASK_APP`. [#2297](https://github.com/pallets/flask/pull/2297) * Factory functions are not required to take a `script_info` parameter to work with the `flask` command. If they take a single parameter or a parameter named `script_info`, the `ScriptInfo` object will be passed. [#2319](https://github.com/pallets/flask/pull/2319) * `FLASK_APP` can be set to an app factory, with arguments if needed, for example `FLASK_APP=myproject.app:create_app('dev')`. [#2326](https://github.com/pallets/flask/pull/2326) * `FLASK_APP` can point to local packages that are not installed in editable mode, although `pip install -e` is still preferred. [#2414](https://github.com/pallets/flask/pull/2414) * The `View` class attribute `View.provide_automatic_options` is set in `View.as_view`, to be detected by `Flask.add_url_rule`. [#2316](https://github.com/pallets/flask/pull/2316) * Error handling will try handlers registered for `blueprint, code`, `app, code`, `blueprint, exception`, `app, exception`. [#2314](https://github.com/pallets/flask/pull/2314) * `Cookie` is added to the response’s `Vary` header if the session is accessed at all during the request (and not deleted). [#2288](https://github.com/pallets/flask/pull/2288) * `Flask.test_request_context` accepts `subdomain` and `url_scheme` arguments for use when building the base URL. [#1621](https://github.com/pallets/flask/pull/1621) * Set `APPLICATION_ROOT` to `'/'` by default. This was already the implicit default when it was set to `None`. * `TRAP_BAD_REQUEST_ERRORS` is enabled by default in debug mode. `BadRequestKeyError` has a message with the bad key in debug mode instead of the generic bad request message. 
[#2348](https://github.com/pallets/flask/pull/2348)
* Allow registering new tags with `TaggedJSONSerializer` to support storing other types in the session cookie. [#2352](https://github.com/pallets/flask/pull/2352)
* Only open the session if the request has not been pushed onto the context stack yet. This allows `stream_with_context` generators to access the same session that the containing view uses. [#2354](https://github.com/pallets/flask/pull/2354)
* Add `json` keyword argument for the test client request methods. This will dump the given object as JSON and set the appropriate content type. [#2358](https://github.com/pallets/flask/pull/2358)
* Extract JSON handling to a mixin applied to both the `Request` and `Response` classes. This adds the `Response.is_json` and `Response.get_json` methods to the response to make testing JSON responses much easier. [#2358](https://github.com/pallets/flask/pull/2358)
* Removed error handler caching because it caused unexpected results for some exception inheritance hierarchies. Register handlers explicitly for each exception if you want to avoid traversing the MRO. [#2362](https://github.com/pallets/flask/pull/2362)
* Fix incorrect JSON encoding of aware, non-UTC datetimes. [#2374](https://github.com/pallets/flask/pull/2374)
* Template auto reloading will honor debug mode even if `Flask.jinja_env` was already accessed. [#2373](https://github.com/pallets/flask/pull/2373)
* The following old deprecated code was removed. [#2385](https://github.com/pallets/flask/issues/2385)
  + `flask.ext` - import extensions directly by their name instead of through the `flask.ext` namespace. For example, `import flask.ext.sqlalchemy` becomes `import flask_sqlalchemy`.
  + `Flask.init_jinja_globals` - extend `Flask.create_jinja_environment` instead.
  + `Flask.error_handlers` - tracked by `Flask.error_handler_spec`, use `Flask.errorhandler` to register handlers.
  + `Flask.request_globals_class` - use `Flask.app_ctx_globals_class` instead.
  + `Flask.static_path` - use `Flask.static_url_path` instead.
  + `Request.module` - use `Request.blueprint` instead.
* The `Request.json` property is no longer deprecated. [#1421](https://github.com/pallets/flask/issues/1421)
* Support passing an `EnvironBuilder` or `dict` to `test_client.open`. [#2412](https://github.com/pallets/flask/pull/2412)
* The `flask` command and `Flask.run` will load environment variables from `.env` and `.flaskenv` files if python-dotenv is installed. [#2416](https://github.com/pallets/flask/pull/2416)
* When passing a full URL to the test client, the scheme in the URL is used instead of `PREFERRED_URL_SCHEME`. [#2430](https://github.com/pallets/flask/pull/2430)
* `Flask.logger` has been simplified. `LOGGER_NAME` and `LOGGER_HANDLER_POLICY` config was removed. The logger is always named `flask.app`. The level is only set on first access; it doesn’t check `Flask.debug` each time. Only one format is used, not different ones depending on `Flask.debug`. No handlers are removed, and a handler is only added if no handlers are already configured. [#2436](https://github.com/pallets/flask/pull/2436)
* Blueprint view function names may not contain dots. [#2450](https://github.com/pallets/flask/pull/2450)
* Fix a `ValueError` caused by invalid `Range` requests in some cases. [#2526](https://github.com/pallets/flask/issues/2526)
* The development server uses threads by default. [#2529](https://github.com/pallets/flask/pull/2529)
* Loading config files with `silent=True` will ignore `ENOTDIR` errors.
[#2581](https://github.com/pallets/flask/pull/2581)
* Pass `--cert` and `--key` options to `flask run` to run the development server over HTTPS. [#2606](https://github.com/pallets/flask/pull/2606)
* Added `SESSION_COOKIE_SAMESITE` to control the `SameSite` attribute on the session cookie. [#2607](https://github.com/pallets/flask/pull/2607)
* Added `Flask.test_cli_runner` to create a Click runner that can invoke Flask CLI commands for testing. [#2636](https://github.com/pallets/flask/pull/2636)
* Subdomain matching is disabled by default and setting `SERVER_NAME` does not implicitly enable it. It can be enabled by passing `subdomain_matching=True` to the `Flask` constructor. [#2635](https://github.com/pallets/flask/pull/2635)
* A single trailing slash is stripped from the blueprint `url_prefix` when it is registered with the app. [#2629](https://github.com/pallets/flask/pull/2629)
* `Request.get_json` doesn’t cache the result if parsing fails when `silent` is true. [#2651](https://github.com/pallets/flask/issues/2651)
* `Request.get_json` no longer accepts arbitrary encodings. Incoming JSON should be encoded using UTF-8 per [**RFC 8259**](https://datatracker.ietf.org/doc/html/rfc8259.html), but Flask will autodetect UTF-8, -16, or -32. [#2691](https://github.com/pallets/flask/pull/2691)
* Added `MAX_COOKIE_SIZE` and `Response.max_cookie_size` to control when Werkzeug warns about large cookies that browsers may ignore. [#2693](https://github.com/pallets/flask/pull/2693)
* Updated documentation theme to make docs look better in small windows. [#2709](https://github.com/pallets/flask/pull/2709)
* Rewrote the tutorial docs and example project to take a more structured approach to help new users avoid common pitfalls. [#2676](https://github.com/pallets/flask/pull/2676)

Version 0.12.5
--------------

Released 2020-02-10

* Pin Werkzeug to < 1.0.0. [#3497](https://github.com/pallets/flask/issues/3497)

Version 0.12.4
--------------

Released 2018-04-29

* Repackage 0.12.3 to fix package layout issue. [#2728](https://github.com/pallets/flask/issues/2728)

Version 0.12.3
--------------

Released 2018-04-26

* `Request.get_json` no longer accepts arbitrary encodings. Incoming JSON should be encoded using UTF-8 per [**RFC 8259**](https://datatracker.ietf.org/doc/html/rfc8259.html), but Flask will autodetect UTF-8, -16, or -32. [#2692](https://github.com/pallets/flask/issues/2692)
* Fix a Python warning about imports when using `python -m flask`. [#2666](https://github.com/pallets/flask/issues/2666)
* Fix a `ValueError` caused by invalid `Range` requests in some cases.

Version 0.12.2
--------------

Released 2017-05-16

* Fix a bug in `safe_join` on Windows.

Version 0.12.1
--------------

Released 2017-03-31

* Prevent `flask run` from showing a `NoAppException` when an `ImportError` occurs within the imported application module.
* Fix encoding behavior of `app.config.from_pyfile` for Python 3. [#2118](https://github.com/pallets/flask/issues/2118)
* Use the `SERVER_NAME` config, if it is present, as default values for `app.run`. [#2109](https://github.com/pallets/flask/issues/2109), [#2152](https://github.com/pallets/flask/pull/2152)
* Call `ctx.auto_pop` with the exception object instead of `None`, in the event that a `BaseException` such as `KeyboardInterrupt` is raised in a request handler.

Version 0.12
------------

Released 2016-12-21, codename Punsch

* The CLI command now responds to `--version`.
* Mimetype guessing and ETag generation for file-like objects in `send_file` have been removed.
[#104](https://github.com/pallets/flask/issues/104), [#1849](https://github.com/pallets/flask/pull/1849)
* Mimetype guessing in `send_file` now fails loudly and doesn’t fall back to `application/octet-stream`. [#1988](https://github.com/pallets/flask/pull/1988)
* Make `flask.safe_join` able to join multiple paths like `os.path.join`. [#1730](https://github.com/pallets/flask/pull/1730)
* Revert a behavior change that made the dev server crash instead of returning an Internal Server Error. [#2006](https://github.com/pallets/flask/pull/2006)
* Correctly invoke response handlers for both regular request dispatching as well as error handlers.
* Disable logger propagation by default for the app logger.
* Add support for range requests in `send_file`.
* `app.test_client` includes a preset default environment, which can now be directly set, instead of per `client.get`.
* Fix crash when running under PyPy3. [#1814](https://github.com/pallets/flask/pull/1814)

Version 0.11.1
--------------

Released 2016-06-07

* Fixed a bug that prevented `FLASK_APP=foobar/__init__.py` from working. [#1872](https://github.com/pallets/flask/pull/1872)

Version 0.11
------------

Released 2016-05-29, codename Absinthe

* Added support for serializing top-level arrays to `jsonify`. This introduces a security risk in ancient browsers.
* Added the `before_render_template` signal.
* Added `**kwargs` to `Flask.test_client` to support passing additional keyword arguments to the constructor of `Flask.test_client_class`.
* Added the `SESSION_REFRESH_EACH_REQUEST` config key that controls the set-cookie behavior. If set to `True`, a permanent session will be refreshed on each request and get its lifetime extended; if set to `False`, it will only be modified if the session is actually modified. Non-permanent sessions are not affected by this and will always expire if the browser window closes.
* Made Flask support custom JSON mimetypes for incoming data.
* Added support for returning tuples in the form `(response, headers)` from a view function.
* Added `Config.from_json`.
* Added `Flask.config_class`.
* Added `Config.get_namespace`.
* Templates are no longer automatically reloaded outside of debug mode. This can be configured with the new `TEMPLATES_AUTO_RELOAD` config key.
* Added a workaround for a limitation in Python 3.3’s namespace loader.
* Added support for explicit root paths when using Python 3.3’s namespace packages.
* Added `flask` and the `flask.cli` module to start the local debug server through the Click CLI system. This is recommended over the old `flask.run()` method as it works faster and more reliably due to a different design, and also replaces `Flask-Script`.
* Error handlers that match specific classes are now checked first, thereby allowing catching exceptions that are subclasses of HTTP exceptions (in `werkzeug.exceptions`). This makes it possible for an extension author to create exceptions that will by default result in the HTTP error of their choosing, but may be caught with a custom error handler if desired.
* Added `Config.from_mapping`.
* Flask will now log by default even if debug is disabled. The log format is now hardcoded but the default log handling can be disabled through the `LOGGER_HANDLER_POLICY` configuration key.
* Removed deprecated module functionality.
* Added the `EXPLAIN_TEMPLATE_LOADING` config flag which when enabled will instruct Flask to explain how it locates templates. This should help users debug when the wrong templates are loaded.
* Enforce blueprint handling in the order they were registered for template loading.
* Ported test suite to py.test.
* Deprecated `request.json` in favour of `request.get_json()`.
* Add “pretty” and “compressed” separator definitions in the `jsonify()` method. Reduces JSON response size when `JSONIFY_PRETTYPRINT_REGULAR=False` by removing unnecessary white space included by default after separators.
* JSON responses are now terminated with a newline character, because it is a convention that UNIX text files end with a newline and some clients don’t deal well when this newline is missing. [#1262](https://github.com/pallets/flask/pull/1262)
* The automatically provided `OPTIONS` method is now correctly disabled if the user registered an overriding rule with the lowercase-version `options`. [#1288](https://github.com/pallets/flask/issues/1288)
* `flask.json.jsonify` now supports the `datetime.date` type. [#1326](https://github.com/pallets/flask/pull/1326)
* Don’t leak exception info of already caught exceptions to context teardown handlers. [#1393](https://github.com/pallets/flask/pull/1393)
* Allow custom Jinja environment subclasses. [#1422](https://github.com/pallets/flask/pull/1422)
* Updated extension dev guidelines.
* `flask.g` now has `pop()` and `setdefault` methods.
* Turn on autoescape for `flask.templating.render_template_string` by default. [#1515](https://github.com/pallets/flask/pull/1515)
* `flask.ext` is now deprecated. [#1484](https://github.com/pallets/flask/pull/1484)
* `send_from_directory` now raises `BadRequest` if the filename is invalid on the server OS. [#1763](https://github.com/pallets/flask/pull/1763)
* Added the `JSONIFY_MIMETYPE` configuration variable. [#1728](https://github.com/pallets/flask/pull/1728)
* Exceptions during teardown handling will no longer leave bad application contexts lingering around.
* Fixed broken `test_appcontext_signals()` test case.
* Raise an `AttributeError` in `helpers.find_package` with a useful message explaining why it is raised when a [**PEP 302**](https://peps.python.org/pep-0302/) import hook is used without an `is_package()` method.
* Fixed an issue causing exceptions raised before entering a request or app context to be passed to teardown handlers.
* Fixed an issue with query parameters getting removed from requests in the test client when absolute URLs were requested.
* Made `@before_first_request` into a decorator as intended.
* Fixed an etags bug when sending file streams with a name.
* Fixed `send_from_directory` not expanding to the application root path correctly.
* Changed logic of before first request handlers to flip the flag after invoking. This will allow some uses that are potentially dangerous but should probably be permitted.
* Fixed Python 3 bug when a handler from `app.url_build_error_handlers` reraises the `BuildError`.

Version 0.10.1
--------------

Released 2013-06-14

* Fixed an issue where `|tojson` was not quoting single quotes which made the filter not work properly in HTML attributes. Now it’s possible to use that filter in single quoted attributes. This should make using that filter with angular.js easier.
* Added support for byte strings back to the session system. This broke compatibility with the common case of people putting binary data for token verification into the session.
* Fixed an issue where registering the same method twice for the same endpoint would trigger an exception incorrectly.

Version 0.10
------------

Released 2013-06-13, codename Limoncello

* Changed default cookie serialization format from pickle to JSON to limit the impact an attacker can have if the secret key leaks.
* Added `template_test` methods in addition to the already existing `template_filter` method family.
* Added `template_global` methods in addition to the already existing `template_filter` method family.
* Set the `Content-Length` header for `X-Sendfile`.
* The `tojson` filter now does not escape script blocks in HTML5 parsers.
* `tojson` used in templates is now safe by default. This was made possible by the different escaping behavior.
* Flask will now raise an error if you attempt to register a new function on an already used endpoint.
* Added wrapper module around simplejson and added default serialization of datetime objects. This allows much easier customization of how JSON is handled by Flask or any Flask extension.
* Removed deprecated internal `flask.session` module alias. Use `flask.sessions` instead to get the session module. This is not to be confused with `flask.session`, the session proxy.
* Templates can now be rendered without request context. The behavior is slightly different as the `request`, `session` and `g` objects will not be available and blueprint context processors are not called.
* The config object is now available to the template as a real global and not through a context processor, which makes it available even in imported templates by default.
* Added an option to generate non-ascii encoded JSON, which should result in fewer bytes being transmitted over the network. It’s disabled by default to not cause confusion with existing libraries that might expect `flask.json.dumps` to return bytes by default.
* `flask.g` is now stored on the app context instead of the request context.
* `flask.g` gained a `get()` method for not erroring out on non-existing items.
* `flask.g` can now be used with the `in` operator to see what’s defined, and it is now iterable and will yield all attributes stored.
* `flask.Flask.request_globals_class` got renamed to `flask.Flask.app_ctx_globals_class`, which is a better name for what it does since 0.10.
* `request`, `session` and `g` are now also added as proxies to the template context, which makes them available in imported templates. One has to be very careful with those though because usage outside of macros might cause caching.
* Flask will no longer invoke the wrong error handlers if a proxy exception is passed through.
* Added a workaround for Chrome’s cookies in localhost not working as intended with domain names.
* Changed logic for picking defaults for cookie values from sessions to work better with Google Chrome.
* Added `message_flashed` signal that simplifies flashing testing.
* Added support for copying of request contexts for better working with greenlets.
* Removed custom JSON HTTP exception subclasses. If you were relying on them you can reintroduce them again yourself trivially. Using them however is strongly discouraged as the interface was flawed.
* Python requirements changed: requiring Python 2.6 or 2.7 now to prepare for the Python 3.3 port.
* Changed how the teardown system is informed about exceptions. This is now more reliable in case something handles an exception halfway through the error handling process.
* Request context preservation in debug mode now keeps the exception information around, which means that teardown handlers are able to distinguish error from success cases.
* Added the `JSONIFY_PRETTYPRINT_REGULAR` configuration variable.
* Flask now orders JSON keys by default to not trash HTTP caches due to different hash seeds between different workers.
* Added `appcontext_pushed` and `appcontext_popped` signals.
* The builtin run method now takes the `SERVER_NAME` into account when picking the default port to run on.
* Added `flask.request.get_json()` as a replacement for the old `flask.request.json` property.

Version 0.9
-----------

Released 2012-07-01, codename Campari

* `Request.on_json_loading_failed` now returns a JSON formatted response by default.
* The `url_for` function now can generate anchors to the generated links.
* The `url_for` function now can also explicitly generate URL rules specific to a given HTTP method.
* Logger now only returns the debug log setting if it was not set explicitly.
* Unregister a circular dependency between the WSGI environment and the request object when shutting down the request. This means that `environ['werkzeug.request']` will be `None` after the response was returned to the WSGI server, but has the advantage that the garbage collector is not needed on CPython to tear down the request unless the user created circular dependencies themselves.
* Session is now stored after callbacks so that if the session payload is stored in the session you can still modify it in an after-request callback.
* The `Flask` class will avoid importing the provided import name if it can (the required first parameter), to benefit tools which build Flask instances programmatically. The Flask class will fall back to using import on systems with custom module hooks, e.g. Google App Engine, or when the import name is inside a zip archive (usually an egg) prior to Python 2.7.
* Blueprints now have a decorator to add custom template filters application-wide, `Blueprint.app_template_filter`.
* The Flask and Blueprint classes now have a non-decorator method for adding custom template filters application-wide, `Flask.add_template_filter` and `Blueprint.add_app_template_filter`.
* The `get_flashed_messages` function now allows rendering flashed message categories in separate blocks, through a `category_filter` argument.
* The `Flask.run` method now accepts `None` for `host` and `port` arguments, using default values when `None`. This allows for calling run using configuration values, e.g. `app.run(app.config.get('MYHOST'), app.config.get('MYPORT'))`, with proper behavior whether or not a config file is provided.
* The `render_template` method now accepts either an iterable of template names or a single template name. Previously, it only accepted a single template name. On an iterable, the first template found is rendered.
* Added `Flask.app_context` which works very similarly to the request context but only provides access to the current application. This also adds support for URL generation without an active request context.
* View functions can now return a tuple with the first instance being an instance of `Response`. This allows for returning `jsonify(error="error msg"), 400` from a view function.
* `Flask` and `Blueprint` now provide a `get_send_file_max_age` hook for subclasses to override behavior of serving static files from Flask when using `Flask.send_static_file` (used for the default static file handler) and `helpers.send_file`. This hook is provided a filename, which for example allows changing cache controls by file extension. The default max-age for `send_file` and static files can be configured through a new `SEND_FILE_MAX_AGE_DEFAULT` configuration variable, which is used in the default `get_send_file_max_age` implementation.
* Fixed an assumption in sessions implementation which could break message flashing on sessions implementations which use external storage.
* Changed the behavior of tuple return values from functions. They are no longer arguments to the response object, they now have a defined meaning.
* Added `Flask.request_globals_class` to allow a specific class to be used on creation of the `g` instance of each request.
* Added `required_methods` attribute to view functions to force-add methods on registration.
* Added `flask.after_this_request`.
* Added `flask.stream_with_context` and the ability to push contexts multiple times without producing unexpected behavior.

Version 0.8.1
-------------

Released 2012-07-01

* Fixed an issue where the undocumented `flask.session` module did not work properly on Python 2.5. It should not be used, but did cause some problems for package managers.

Version 0.8
-----------

Released 2011-09-29, codename Rakija

* Refactored session support into a session interface so that the implementation of the sessions can be changed without having to override the Flask class.
* Empty session cookies are now deleted properly automatically.
* View functions can now opt out of getting the automatic OPTIONS implementation.
* HTTP exceptions and Bad Request errors can now be trapped so that they show up normally in the traceback.
* Flask in debug mode now detects some common problems and tries to warn you about them.
* Flask in debug mode will now complain with an assertion error if a view was attached after the first request was handled. This gives earlier feedback when users forget to import view code ahead of time.
* Added the ability to register callbacks that are only triggered once at the beginning of the first request with `Flask.before_first_request`.
* Malformed JSON data will now trigger a bad request HTTP exception instead of a value error, which usually would result in a 500 internal server error if not handled. This is a backwards incompatible change.
* Applications now not only have a root path where the resources and modules are located but also an instance path, which is the designated place to drop files that are modified at runtime (uploads etc.). The instance path is conceptually per-instance and outside version control, so it’s the perfect place to put configuration files etc.
* Added the `APPLICATION_ROOT` configuration variable.
* Implemented `TestClient.session_transaction` to easily modify sessions from the test environment.
* Refactored test client internally. The `APPLICATION_ROOT` configuration variable as well as `SERVER_NAME` are now properly used by the test client as defaults.
* Added `View.decorators` to support simpler decorating of pluggable (class-based) views.
* Fixed an issue where the test client, if used with the `with` statement, did not trigger the execution of the teardown handlers.
* Added finer control over the session cookie parameters.
* HEAD requests to a method view now automatically dispatch to the `get` method if no handler was implemented.
* Implemented the virtual `flask.ext` package to import extensions from.
* The context preservation on exceptions is now an integral component of Flask itself and no longer of the test client. This cleaned up some internal logic and lowers the odds of runaway request contexts in unittests.
* Fixed the Jinja2 environment’s `list_templates` method not returning the correct names when blueprints or modules were involved.

Version 0.7.2
-------------

Released 2011-07-06

* Fixed an issue with URL processors not properly working on blueprints.
Version 0.7.1
-------------

Released 2011-06-29

* Added missing future import that broke 2.5 compatibility.
* Fixed an infinite redirect issue with blueprints.

Version 0.7
-----------

Released 2011-06-28, codename Grappa

* Added `Flask.make_default_options_response`, which can be used by subclasses to alter the default behavior for `OPTIONS` responses.
* Unbound locals now raise a proper `RuntimeError` instead of an `AttributeError`.
* Mimetype guessing and etag support based on file objects is now deprecated for `send_file` because it was unreliable. Pass filenames instead, or attach your own etags and provide a proper mimetype by hand.
* Static file handling for modules now requires the name of the static folder to be supplied explicitly. The previous autodetection was not reliable and caused issues on Google’s App Engine. Until 1.0 the old behavior will continue to work but issue dependency warnings.
* Fixed a problem that prevented Flask from running on Jython.
* Added a `PROPAGATE_EXCEPTIONS` configuration variable that can be used to flip the setting of exception propagation, which previously was linked to `DEBUG` alone and is now linked to either `DEBUG` or `TESTING`.
* Flask no longer internally depends on rules being added through the `add_url_rule` function and can now also accept regular werkzeug rules added to the url map.
* Added an `endpoint` method to the flask application object which allows one to register a callback to an arbitrary endpoint with a decorator.
* Use Last-Modified for static file sending instead of Date, which was incorrectly introduced in 0.6.
* Added `create_jinja_loader` to override the loader creation process.
* Implemented a silent flag for `config.from_pyfile`.
* Added `teardown_request` decorator, for functions that should run at the end of a request regardless of whether an exception occurred. The behavior for `after_request` was also changed: it is no longer executed when an exception is raised.
* Implemented `has_request_context`.
* Deprecated `init_jinja_globals`. Override the `Flask.create_jinja_environment` method instead to achieve the same functionality.
* Added `safe_join`.
* The automatic JSON request data unpacking now looks at the charset mimetype parameter.
* Don’t modify the session on `get_flashed_messages` if there are no messages in the session.
* `before_request` handlers are now able to abort requests with errors.
* It is now possible to define user exception handlers. That way you can provide custom error messages from a central hub for certain errors that might occur during request processing (for instance database connection errors, timeouts from remote resources etc.).
* Blueprints can provide blueprint specific error handlers.
* Implemented generic class-based views.

Version 0.6.1
-------------

Released 2010-12-31

* Fixed an issue where the default `OPTIONS` response was not exposing all valid methods in the `Allow` header.
* Jinja2 template loading syntax now allows “./” in front of a template load path. Previously this caused issues with module setups.
* Fixed an issue where the subdomain setting for modules was ignored for the static folder.
* Fixed a security problem that allowed clients to download arbitrary files if the host server was a Windows-based operating system and the client used backslashes to escape the directory the files were exposed from.

Version 0.6
-----------

Released 2010-07-27, codename Whisky

* After request functions are now called in reverse order of registration.
* OPTIONS is now automatically implemented by Flask unless the application explicitly adds ‘OPTIONS’ as a method to the URL rule. In this case no automatic OPTIONS handling kicks in.
* Static rules are now in place even if there is no static folder for the module. This was implemented to aid GAE, which will remove the static folder if it’s part of a mapping in the .yml file.
* `Flask.config` is now available in the templates as `config`.
* Context processors will no longer override values passed directly to the render function.
* Added the ability to limit the incoming request data with the new `MAX_CONTENT_LENGTH` configuration value.
* The endpoint for the `Module.add_url_rule` method is now optional, to be consistent with the function of the same name on the application object.
* Added a `make_response` function that simplifies creating response object instances in views.
* Added signalling support based on blinker. This feature is currently optional and supposed to be used by extensions and applications. If you want to use it, make sure to have `blinker` installed.
* Refactored the way URL adapters are created. This process is now fully customizable with the `Flask.create_url_adapter` method.
* Modules can now register for a subdomain instead of just a URL prefix. This makes it possible to bind a whole module to a configurable subdomain.

Version 0.5.2
-------------

Released 2010-07-15

* Fixed another issue with loading templates from directories when modules were used.

Version 0.5.1
-------------

Released 2010-07-06

* Fixed an issue with template loading from directories when modules were used.

Version 0.5
-----------

Released 2010-07-06, codename Calvados

* Fixed a bug with subdomains that was caused by the inability to specify the server name. The server name can now be set with the `SERVER_NAME` config key. This key is now also used to set the session cookie cross-subdomain wide.
* Autoescaping is no longer active for all templates. Instead it is only active for `.html`, `.htm`, `.xml` and `.xhtml`. Inside templates this behavior can be changed with the `autoescape` tag.
* Refactored Flask internally. It now consists of more than a single file.
* `send_file` now emits etags and has the ability to do conditional responses builtin.
* (temporarily) dropped support for zipped applications. This was a rarely used feature and led to some confusing behavior.
* Added support for per-package template and static-file directories.
* Removed support for `create_jinja_loader`, which is no longer used in 0.5 due to the improved module support.
* Added a helper function to expose files from any directory.

Version 0.4
-----------

Released 2010-06-18, codename Rakia

* Added the ability to register application wide error handlers from modules.
* `Flask.after_request` handlers are now also invoked if the request dies with an exception and an error handling page kicks in.
* The test client now has the ability to preserve the request context a little longer. This can also be used to trigger custom requests that do not pop the request stack, for testing.
* Because the Python standard library caches loggers, the name of the logger is configurable now to better support unittests.
* Added `TESTING` switch that can activate unittesting helpers.
* The logger switches to `DEBUG` mode now if debug is enabled.

Version 0.3.1
-------------

Released 2010-05-28

* Fixed an error reporting bug with `Config.from_envvar`.
* Removed some unused code.
* Releases no longer include development leftover files (.git folder for themes, built documentation in zip and pdf file and some .pyc files).

Version 0.3
-----------

Released 2010-05-28, codename Schnaps

* Added support for categories for flashed messages.
* The application now configures a `logging.Handler` and will log request handling exceptions to that logger when not in debug mode. This makes it possible to receive mails on server errors for example.
* Added support for context binding that does not require the use of the with statement for playing in the console.
* The request context is now available within the with statement, making it possible to further push the request context or pop it.
* Added support for configurations.

Version 0.2
-----------

Released 2010-05-12, codename Jägermeister

* Various bugfixes.
* Integrated JSON support.
* Added `get_template_attribute` helper function.
* `Flask.add_url_rule` can now also register a view function.
* Refactored internal request dispatching.
* Server listens on 127.0.0.1 by default now to fix issues with Chrome.
* Added external URL support.
* Added support for `send_file`.
* Module support and internal request handling refactoring to better support pluggable applications.
* Sessions can be set to be permanent now on a per-session basis.
* Better error reporting on missing secret keys.
* Added support for Google App Engine.

Version 0.1
-----------

Released 2010-04-16

* First public preview release.

Background Tasks with Celery
============================

If your application has a long-running task, such as processing some uploaded data or sending email, you don’t want to wait for it to finish during a request. Instead, use a task queue to send the necessary data to another process that will run the task in the background while the request returns immediately.

[Celery](https://celery.readthedocs.io) is a powerful task queue that can be used for simple background tasks as well as complex multi-stage programs and schedules. This guide will show you how to configure Celery using Flask. Read Celery’s [First Steps with Celery](https://celery.readthedocs.io/en/latest/getting-started/first-steps-with-celery.html) guide to learn how to use Celery itself.

The Flask repository contains [an example](https://github.com/pallets/flask/tree/main/examples/celery) based on the information on this page, which also shows how to use JavaScript to submit tasks and poll for progress and results.

Install
-------

Install Celery from PyPI, for example using pip:

```
$ pip install celery
```

Integrate Celery with Flask
---------------------------

You can use Celery without any integration with Flask, but it’s convenient to configure it through Flask’s config, and to let tasks access the Flask application.

Celery uses similar ideas to Flask, with a `Celery` app object that has configuration and registers tasks. While creating a Flask app, use the following code to create and configure a Celery app as well.
```
from celery import Celery, Task
from flask import Flask


def celery_init_app(app: Flask) -> Celery:
    class FlaskTask(Task):
        def __call__(self, *args: object, **kwargs: object) -> object:
            # Run every task with the Flask app context active.
            with app.app_context():
                return self.run(*args, **kwargs)

    celery_app = Celery(app.name, task_cls=FlaskTask)
    celery_app.config_from_object(app.config["CELERY"])
    celery_app.set_default()
    app.extensions["celery"] = celery_app
    return celery_app
```

This creates and returns a `Celery` app object. Celery [configuration](https://celery.readthedocs.io/en/stable/userguide/configuration.html) is taken from the `CELERY` key in the Flask configuration. The Celery app is set as the default, so that it is seen during each request. The `Task` subclass automatically runs task functions with a Flask app context active, so that services like your database connections are available.

Here’s a basic `example.py` that configures Celery to use Redis for communication. We enable a result backend, but ignore results by default. This allows us to store results only for tasks where we care about the result.

```
from flask import Flask

app = Flask(__name__)
app.config.from_mapping(
    CELERY=dict(
        broker_url="redis://localhost",
        result_backend="redis://localhost",
        task_ignore_result=True,
    ),
)
celery_app = celery_init_app(app)
```

Point the `celery worker` command at this and it will find the `celery_app` object.

```
$ celery -A example worker --loglevel INFO
```

You can also run the `celery beat` command to run tasks on a schedule. See Celery’s docs for more information about defining schedules.

```
$ celery -A example beat --loglevel INFO
```

Application Factory
-------------------

When using the Flask application factory pattern, call the `celery_init_app` function inside the factory. It sets `app.extensions["celery"]` to the Celery app object, which can be used to get the Celery app from the Flask app returned by the factory.

```
def create_app() -> Flask:
    app = Flask(__name__)
    app.config.from_mapping(
        CELERY=dict(
            broker_url="redis://localhost",
            result_backend="redis://localhost",
            task_ignore_result=True,
        ),
    )
    app.config.from_prefixed_env()
    celery_init_app(app)
    return app
```

To use `celery` commands, Celery needs an app object, but that’s no longer directly available. Create a `make_celery.py` file that calls the Flask app factory and gets the Celery app from the returned Flask app.

```
from example import create_app

flask_app = create_app()
celery_app = flask_app.extensions["celery"]
```

Point the `celery` command to this file.

```
$ celery -A make_celery worker --loglevel INFO
$ celery -A make_celery beat --loglevel INFO
```

Defining Tasks
--------------

Using `@celery_app.task` to decorate task functions requires access to the `celery_app` object, which won’t be available when using the factory pattern. It also means that the decorated tasks are tied to the specific Flask and Celery app instances, which could be an issue during testing if you change configuration for a test.

Instead, use Celery’s `@shared_task` decorator. This creates task objects that will access whatever the “current app” is, which is a similar concept to Flask’s blueprints and app context. This is why we called `celery_app.set_default()` above.

Here’s an example task that adds two numbers together and returns the result.

```
from celery import shared_task


@shared_task(ignore_result=False)
def add_together(a: int, b: int) -> int:
    return a + b
```

Earlier, we configured Celery to ignore task results by default. Since we want to know the return value of this task, we set `ignore_result=False`. On the other hand, a task that didn’t need a result, such as sending an email, wouldn’t set this.
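For contrast, a fire-and-forget variant might look like the sketch below (a hypothetical `send_welcome_email` task, not part of the example project): it simply omits `ignore_result=False`, so the `task_ignore_result=True` default applies and no result is stored.

```
from celery import shared_task


@shared_task
def send_welcome_email(address: str) -> None:
    # Fire-and-forget: with task_ignore_result=True in the config,
    # no result is written to the backend for this task.
    print(f"would send a welcome email to {address}")
```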
Calling Tasks
-------------

The decorated function becomes a task object with methods to call it in the background. The simplest way is to use the `delay(*args, **kwargs)` method. See Celery’s docs for more methods.

A Celery worker must be running to run the task. Starting a worker is shown in the previous sections.

```
from flask import request


@app.post("/add")
def start_add() -> dict[str, object]:
    a = request.form.get("a", type=int)
    b = request.form.get("b", type=int)
    result = add_together.delay(a, b)
    return {"result_id": result.id}
```

The route doesn’t get the task’s result immediately. That would defeat the purpose by blocking the response. Instead, we return the running task’s result id, which we can use later to get the result.

Getting Results
---------------

To fetch the result of the task we started above, we’ll add another route that takes the result id we returned before. We return whether the task is finished (ready), whether it finished successfully, and what the return value (or error) was if it is finished.

```
from celery.result import AsyncResult


@app.get("/result/<id>")
def task_result(id: str) -> dict[str, object]:
    result = AsyncResult(id)
    return {
        "ready": result.ready(),
        "successful": result.successful(),
        "value": result.result if result.ready() else None,
    }
```

Now you can start the task using the first route, then poll for the result using the second route. This keeps the Flask request workers from being blocked waiting for tasks to finish.

The Flask repository contains [an example](https://github.com/pallets/flask/tree/main/examples/celery) using JavaScript to submit tasks and poll for progress and results.

Passing Data to Tasks
---------------------

The “add” task above took two integers as arguments. To pass arguments to tasks, Celery has to serialize them to a format that it can pass to other processes. Therefore, passing complex objects is not recommended. For example, it would be impossible to pass a SQLAlchemy model object, since that object is probably not serializable and is tied to the session that queried it.

Pass the minimal amount of data necessary to fetch or recreate any complex data within the task. Consider a task that will run when the logged-in user asks for an archive of their data. The Flask request knows the logged-in user, and has the user object queried from the database. It got that by querying the database for a given id, so the task can do the same thing. Pass the user’s id rather than the user object.

```
@shared_task
def generate_user_archive(user_id: str) -> None:
    user = db.session.get(User, user_id)
    ...


generate_user_archive.delay(current_user.id)
```

Subclassing Flask
=================

The [`Flask`](../../api/index#flask.Flask "flask.Flask") class is designed for subclassing.
For example, you may want to override how request parameters are handled to preserve their order:

```
from flask import Flask, Request
from werkzeug.datastructures import ImmutableOrderedMultiDict


class MyRequest(Request):
    """Request subclass to override request parameter storage"""

    parameter_storage_class = ImmutableOrderedMultiDict


class MyFlask(Flask):
    """Flask subclass using the custom request class"""

    request_class = MyRequest
```

This is the recommended approach for overriding or augmenting Flask’s internal functionality.

Design Decisions in Flask
=========================

If you are curious why Flask does certain things the way it does and not differently, this section is for you. This should give you an idea about some of the design decisions that may appear arbitrary and surprising at first, especially in direct comparison with other frameworks.

The Explicit Application Object
-------------------------------

A Python web application based on WSGI has to have one central callable object that implements the actual application. In Flask this is an instance of the [`Flask`](../api/index#flask.Flask "flask.Flask") class. Each Flask application has to create an instance of this class itself and pass it the name of the module, but why can’t Flask do that itself?

Without such an explicit application object, the following code:

```
from flask import Flask

app = Flask(__name__)


@app.route('/')
def index():
    return 'Hello World!'
```

would look like this instead:

```
from hypothetical_flask import route


@route('/')
def index():
    return 'Hello World!'
```

There are three major reasons for this. The most important one is that implicit application objects require that there may only be one instance at a time. There are ways to fake multiple applications with a single application object, like maintaining a stack of applications, but this causes some problems I won’t outline here in detail. Now the question is: when does a microframework need more than one application at the same time? A good example of this is unit testing. When you want to test something it can be very helpful to create a minimal application to test specific behavior. When the application object is deleted, everything it allocated will be freed again.

Another thing that becomes possible when you have an explicit object lying around in your code is that you can subclass the base class ([`Flask`](../api/index#flask.Flask "flask.Flask")) to alter specific behavior. This would not be possible without hacks if the object were created ahead of time for you, based on a class that is not exposed to you.

But there is another very important reason why Flask depends on an explicit instantiation of that class: the package name. Whenever you create a Flask instance you usually pass it `__name__` as the package name. Flask depends on that information to properly load resources relative to your module. With Python’s outstanding support for reflection it can then access the package to figure out where the templates and static files are stored (see [`open_resource()`](../api/index#flask.Flask.open_resource "flask.Flask.open_resource")). Now obviously there are frameworks around that do not need any configuration and will still be able to load templates relative to your application module. But they have to use the current working directory for that, which is a very unreliable way to determine where the application is.
The current working directory is process-wide, and if you are running multiple applications in one process (which could happen in a webserver without you knowing) the paths will be off. Worse: many webservers do not set the working directory to the directory of your application but to the document root, which does not have to be the same folder.

The third reason is “explicit is better than implicit”. That object is your WSGI application, you don’t have to remember anything else. If you want to apply a WSGI middleware, just wrap it and you’re done (though there are better ways to do that so that you do not lose the reference to the application object: [`wsgi_app()`](../api/index#flask.Flask.wsgi_app "flask.Flask.wsgi_app")).

Furthermore this design makes it possible to use a factory function to create the application, which is very helpful for unit testing and similar things ([Application Factories](../patterns/appfactories/index)).

The Routing System
------------------

Flask uses the Werkzeug routing system, which was designed to automatically order routes by complexity. This means that you can declare routes in arbitrary order and they will still work as expected. This is a requirement if you want to properly implement decorator based routing, since decorators could be fired in undefined order when the application is split into multiple modules.

Another design decision with the Werkzeug routing system is that routes in Werkzeug try to ensure that URLs are unique. Werkzeug will go quite far with that, in that it will automatically redirect to a canonical URL if a route is ambiguous.

One Template Engine
-------------------

Flask decides on one template engine: Jinja2. Why doesn’t Flask have a pluggable template engine interface? You can obviously use a different template engine, but Flask will still configure Jinja2 for you. While that limitation that Jinja2 is *always* configured will probably go away, the decision to bundle one template engine and use that will not.

Template engines are like programming languages and each of those engines has a certain understanding about how things work. On the surface they all work the same: you tell the engine to evaluate a template with a set of variables and take the return value as a string. But that’s about where the similarities end.

Jinja2 for example has an extensive filter system, a certain way to do template inheritance, support for reusable blocks (macros) that can be used from inside templates and also from Python code, supports iterative template rendering, configurable syntax and more. On the other hand an engine like Genshi is based on XML stream evaluation, template inheritance by taking the availability of XPath into account and more. Mako on the other hand treats templates similar to Python modules.

When it comes to connecting a template engine with an application or framework there is more than just rendering templates. For instance, Flask uses Jinja2’s extensive autoescaping support. Also it provides ways to access macros from Jinja2 templates. A template abstraction layer that would not take the unique features of the template engines away is a science on its own and a too large undertaking for a microframework like Flask.

Furthermore extensions can then easily depend on one template language being present. You can easily use your own templating language, but an extension could still depend on Jinja itself.

What does “micro” mean?
-----------------------
“Micro” does not mean that your whole web application has to fit into a single Python file (although it certainly can), nor does it mean that Flask is lacking in functionality. The “micro” in microframework means Flask aims to keep the core simple but extensible. Flask won’t make many decisions for you, such as what database to use. Those decisions that it does make, such as what templating engine to use, are easy to change. Everything else is up to you, so that Flask can be everything you need and nothing you don’t.

By default, Flask does not include a database abstraction layer, form validation or anything else where different libraries already exist that can handle that. Instead, Flask supports extensions to add such functionality to your application as if it was implemented in Flask itself. Numerous extensions provide database integration, form validation, upload handling, various open authentication technologies, and more. Flask may be “micro”, but it’s ready for production use on a variety of needs.

Why does Flask call itself a microframework and yet it depends on two libraries (namely Werkzeug and Jinja2)? Why shouldn’t it? If we look over to the Ruby side of web development, there we have a protocol very similar to WSGI. It’s called Rack there, but besides that it looks very much like a WSGI rendition for Ruby. But nearly all applications in Ruby land do not work with Rack directly, but on top of a library with the same name. This Rack library has two equivalents in Python: WebOb (formerly Paste) and Werkzeug. Paste is still around, but from my understanding it’s sort of deprecated in favour of WebOb. The development of WebOb and Werkzeug started side by side with similar ideas in mind: be a good implementation of WSGI for other applications to take advantage of.

Flask is a framework that takes advantage of the work already done by Werkzeug to properly interface with WSGI (which can be a complex task at times). Thanks to recent developments in the Python package infrastructure, packages with dependencies are no longer an issue and there are very few reasons against having libraries that depend on others.

Thread Locals
-------------

Flask uses thread local objects (context local objects in fact, they support greenlet contexts as well) for request, session and an extra object you can put your own things on ([`g`](../api/index#flask.g "flask.g")). Why is that, and isn’t that a bad idea?

Yes, it is usually not such a bright idea to use thread locals. They cause trouble for servers that are not based on the concept of threads and make large applications harder to maintain. However Flask is just not designed for large applications or asynchronous servers. Flask wants to make it quick and easy to write a traditional web application.

Async/await and ASGI support
----------------------------

Flask supports `async` coroutines for view functions by executing the coroutine on a separate thread instead of using an event loop on the main thread as an async-first (ASGI) framework would. This is necessary for Flask to remain backwards compatible with extensions and code built before `async` was introduced into Python. This compromise introduces a performance cost compared with the ASGI frameworks, due to the overhead of the threads.

Because Flask’s code is so tied to WSGI, it’s not clear if it’s possible to make the `Flask` class support ASGI and WSGI at the same time. Work is currently being done in Werkzeug to work with ASGI, which may eventually enable support in Flask as well. See [Using async and await](../async-await/index) for more discussion.
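For concreteness, an async view looks like the sketch below (a minimal example, assuming Flask 2.0+ installed with the async extra, i.e. `pip install "flask[async]"`; the route and the sleep are illustrative only):

```
import asyncio

from flask import Flask

app = Flask(__name__)


@app.get("/slow")
async def slow() -> dict[str, str]:
    # Flask runs this coroutine to completion on a separate thread;
    # the WSGI worker still handles one request at a time.
    await asyncio.sleep(1)
    return {"status": "done"}
```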
What Flask is, What Flask is Not
--------------------------------

Flask will never have a database layer. It will not have a form library or anything else in that direction. Flask itself just bridges to Werkzeug to implement a proper WSGI application and to Jinja2 to handle templating. It also binds to a few common standard library packages such as logging. Everything else is up for extensions.

Why is this the case? Because people have different preferences and requirements and Flask could not meet those if it forced any of this into the core. The majority of web applications will need a template engine of some sort. However not every application needs a SQL database.

As your codebase grows, you are free to make the design decisions appropriate for your project. Flask will continue to provide a very simple glue layer to the best that Python has to offer. You can implement advanced patterns in SQLAlchemy or another database tool, introduce non-relational data persistence as appropriate, and take advantage of framework-agnostic tools built for WSGI, the Python web interface.

The idea of Flask is to build a good foundation for all applications. Everything else is up to you or extensions.

Deploying to Production
=======================

After developing your application, you’ll want to make it available publicly to other users. When you’re developing locally, you’re probably using the built-in development server, debugger, and reloader. These should not be used in production. Instead, you should use a dedicated WSGI server or hosting platform, some of which will be described here.

“Production” means “not development”, which applies whether you’re serving your application publicly to millions of users or privately / locally to a single user.

**Do not use the development server when deploying to production. It is intended for use only during local development. It is not designed to be particularly secure, stable, or efficient.**

Self-Hosted Options
-------------------

Flask is a WSGI *application*. A WSGI *server* is used to run the application, converting incoming HTTP requests to the standard WSGI environ, and converting outgoing WSGI responses to HTTP responses.

The primary goal of these docs is to familiarize you with the concepts involved in running a WSGI application using a production WSGI server and HTTP server. There are many WSGI servers and HTTP servers, with many configuration possibilities. The pages below discuss the most common servers, and show the basics of running each one. The next section discusses platforms that can manage this for you.

* [Gunicorn](gunicorn/index)
* [Waitress](waitress/index)
* [mod_wsgi](mod_wsgi/index)
* [uWSGI](uwsgi/index)
* [gevent](gevent/index)
* [eventlet](eventlet/index)
* [ASGI](asgi/index)

WSGI servers have HTTP servers built-in. However, a dedicated HTTP server may be safer, more efficient, or more capable. Putting an HTTP server in front of the WSGI server is called a “reverse proxy.”

* [Tell Flask it is Behind a Proxy](proxy_fix/index)
* [nginx](nginx/index)
* [Apache httpd](apache-httpd/index)

This list is not exhaustive, and you should evaluate these and other servers based on your application’s needs. Different servers will have different capabilities, configuration, and support.
Hosting Platforms
-----------------

There are many services available for hosting web applications without needing to maintain your own server, networking, domain, etc. Some services may have a free tier up to a certain time or bandwidth. Many of these services use one of the WSGI servers described above, or a similar interface. The links below are for some of the most common platforms, which have instructions for Flask, WSGI, or Python.

* [PythonAnywhere](https://help.pythonanywhere.com/pages/Flask/)
* [Google App Engine](https://cloud.google.com/appengine/docs/standard/python3/building-app)
* [Google Cloud Run](https://cloud.google.com/run/docs/quickstarts/build-and-deploy/deploy-python-service)
* [AWS Elastic Beanstalk](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-flask.html)
* [Microsoft Azure](https://docs.microsoft.com/en-us/azure/app-service/quickstart-python)

This list is not exhaustive, and you should evaluate these and other services based on your application’s needs. Different services will have different capabilities, configuration, pricing, and support.

You’ll probably need to [Tell Flask it is Behind a Proxy](proxy_fix/index) when using most hosting platforms.

Gunicorn
========

[Gunicorn](https://gunicorn.org/) is a pure Python WSGI server with simple configuration and multiple worker implementations for performance tuning.

* It tends to integrate easily with hosting platforms.
* It does not support Windows (but does run on WSL).
* It is easy to install as it does not require additional dependencies or compilation.
* It has built-in async worker support using gevent or eventlet.

This page outlines the basics of running Gunicorn. Be sure to read its [documentation](https://docs.gunicorn.org/) and use `gunicorn --help` to understand what features are available.

Installing
----------

Gunicorn is easy to install, as it does not require external dependencies or compilation. It runs on Windows only under WSL.

Create a virtualenv, install your application, then install `gunicorn`.

```
$ cd hello-app
$ python -m venv .venv
$ . .venv/bin/activate
$ pip install .  # install your application
$ pip install gunicorn
```

Running
-------

The only required argument to Gunicorn tells it how to load your Flask application. The syntax is `{module_import}:{app_variable}`. `module_import` is the dotted import name to the module with your application. `app_variable` is the variable with the application. It can also be a function call (with any arguments) if you’re using the app factory pattern.

```
# equivalent to 'from hello import app'
$ gunicorn -w 4 'hello:app'

# equivalent to 'from hello import create_app; create_app()'
$ gunicorn -w 4 'hello:create_app()'

Starting gunicorn 20.1.0
Listening at: http://127.0.0.1:8000 (x)
Using worker: sync
Booting worker with pid: x
Booting worker with pid: x
Booting worker with pid: x
Booting worker with pid: x
```

The `-w` option specifies the number of processes to run; a starting value could be `CPU * 2`. The default is only 1 worker, which is probably not what you want for the default worker type.

Logs for each request aren’t shown by default, only worker info and errors are shown. To show access logs on stdout, use the `--access-logfile=-` option.
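Putting those options together, a sketch of a typical invocation (assuming the `hello:app` layout from above) might be:

```
$ gunicorn -w 4 --access-logfile=- 'hello:app'
```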
Binding Externally
------------------

Gunicorn should not be run as root because it would cause your application code to run as root, which is not secure. However, this means it will not be possible to bind to port 80 or 443. Instead, a reverse proxy such as [nginx](../nginx/index) or [Apache httpd](../apache-httpd/index) should be used in front of Gunicorn.

You can bind to all external IPs on a non-privileged port using the `-b 0.0.0.0` option. Don’t do this when using a reverse proxy setup, otherwise it will be possible to bypass the proxy.

```
$ gunicorn -w 4 -b 0.0.0.0 'hello:create_app()'

Listening at: http://0.0.0.0:8000 (x)
```

`0.0.0.0` is not a valid address to navigate to, you’d use a specific IP address in your browser.

Async with gevent or eventlet
-----------------------------

The default sync worker is appropriate for many use cases. If you need asynchronous support, Gunicorn provides workers using either [gevent](https://www.gevent.org/) or [eventlet](https://eventlet.net/). This is not the same as Python’s `async/await`, or the ASGI server spec. You must actually use gevent/eventlet in your own code to see any benefit to using the workers.

When using either gevent or eventlet, greenlet>=1.0 is required, otherwise context locals such as `request` will not work as expected. When using PyPy, PyPy>=7.3.7 is required.

To use gevent:

```
$ gunicorn -k gevent 'hello:create_app()'

Starting gunicorn 20.1.0
Listening at: http://127.0.0.1:8000 (x)
Using worker: gevent
Booting worker with pid: x
```

To use eventlet:

```
$ gunicorn -k eventlet 'hello:create_app()'

Starting gunicorn 20.1.0
Listening at: http://127.0.0.1:8000 (x)
Using worker: eventlet
Booting worker with pid: x
```

gevent
======

Prefer using [Gunicorn](../gunicorn/index) or [uWSGI](../uwsgi/index) with gevent workers rather than using [gevent](https://www.gevent.org/) directly. Gunicorn and uWSGI provide much more configurable and production-tested servers.

[gevent](https://www.gevent.org/) allows writing asynchronous, coroutine-based code that looks like standard synchronous Python. It uses [greenlet](https://greenlet.readthedocs.io/en/latest/) to enable task switching without writing `async/await` or using `asyncio`. [eventlet](../eventlet/index) is another library that does the same thing. Certain dependencies you have, or other considerations, may affect which of the two you choose to use.

gevent provides a WSGI server that can handle many connections at once instead of one per worker process. You must actually use gevent in your own code to see any benefit to using the server.

Installing
----------

When using gevent, greenlet>=1.0 is required, otherwise context locals such as `request` will not work as expected. When using PyPy, PyPy>=7.3.7 is required.

Create a virtualenv, install your application, then install `gevent`.

```
$ cd hello-app
$ python -m venv .venv
$ . .venv/bin/activate
$ pip install .  # install your application
$ pip install gevent
```

Running
-------

To use gevent to serve your application, write a script that imports its `WSGIServer`, as well as your app or app factory.

`wsgi.py`

```
from gevent.pywsgi import WSGIServer
from hello import create_app

app = create_app()
http_server = WSGIServer(("127.0.0.1", 8000), app)
http_server.serve_forever()
```

```
$ python wsgi.py
```

No output is shown when the server starts.
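As a sketch of what “using gevent in your own code” can mean in practice, cooperative I/O usually starts with monkey-patching the standard library before any other imports (the placement shown here is illustrative; see gevent’s documentation for the caveats):

```
# wsgi.py (variant of the script above): monkey-patching must run
# before anything else imports the modules being patched.
from gevent import monkey

monkey.patch_all()

from gevent.pywsgi import WSGIServer  # noqa: E402
from hello import create_app  # noqa: E402

app = create_app()
WSGIServer(("127.0.0.1", 8000), app).serve_forever()
```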
Binding Externally
------------------

gevent should not be run as root because it would cause your application code to run as root, which is not secure. However, this means it will not be possible to bind to port 80 or 443. Instead, a reverse proxy such as [nginx](../nginx/index) or [Apache httpd](../apache-httpd/index) should be used in front of gevent.

You can bind to all external IPs on a non-privileged port by using `0.0.0.0` in the server arguments shown in the previous section. Don’t do this when using a reverse proxy setup, otherwise it will be possible to bypass the proxy.

`0.0.0.0` is not a valid address to navigate to, you’d use a specific IP address in your browser.

ASGI
====

If you’d like to use an ASGI server you will need to utilise WSGI to ASGI middleware. The asgiref [WsgiToAsgi](https://github.com/django/asgiref#wsgi-to-asgi-adapter) adapter is recommended as it integrates with the event loop used for Flask’s [Using async and await](../../async-await/index#async-await) support. You can use the adapter by wrapping the Flask app,

```
from asgiref.wsgi import WsgiToAsgi
from flask import Flask

app = Flask(__name__)

...

asgi_app = WsgiToAsgi(app)
```

and then serving the `asgi_app` with the ASGI server, e.g. using [Hypercorn](https://gitlab.com/pgjones/hypercorn),

```
$ hypercorn module:asgi_app
```

Waitress
========

[Waitress](https://docs.pylonsproject.org/projects/waitress/) is a pure Python WSGI server.

* It is easy to configure.
* It supports Windows directly.
* It is easy to install as it does not require additional dependencies or compilation.
* It does not support streaming requests; full request data is always buffered.
* It uses a single process with multiple thread workers.

This page outlines the basics of running Waitress. Be sure to read its documentation and `waitress-serve --help` to understand what features are available.

Installing
----------

Create a virtualenv, install your application, then install `waitress`.

```
$ cd hello-app
$ python -m venv .venv
$ . .venv/bin/activate
$ pip install .  # install your application
$ pip install waitress
```

Running
-------

The only required argument to `waitress-serve` tells it how to load your Flask application. The syntax is `{module}:{app}`. `module` is the dotted import name to the module with your application. `app` is the variable with the application. If you’re using the app factory pattern, use `--call {module}:{factory}` instead.

```
# equivalent to 'from hello import app'
$ waitress-serve --host 127.0.0.1 hello:app

# equivalent to 'from hello import create_app; create_app()'
$ waitress-serve --host 127.0.0.1 --call hello:create_app

Serving on http://127.0.0.1:8080
```

The `--host` option binds the server to local `127.0.0.1` only.

Logs for each request aren’t shown, only errors are shown. Logging can be configured through the Python interface instead of the command line.

Binding Externally
------------------

Waitress should not be run as root because it would cause your application code to run as root, which is not secure. However, this means it will not be possible to bind to port 80 or 443. Instead, a reverse proxy such as [nginx](../nginx/index) or [Apache httpd](../apache-httpd/index) should be used in front of Waitress.
You can bind to all external IPs on a non-privileged port by not specifying the `--host` option. Don’t do this when using a reverse proxy setup, otherwise it will be possible to bypass the proxy.

`0.0.0.0` is not a valid address to navigate to, you’d use a specific IP address in your browser.

mod_wsgi
========

[mod_wsgi](https://modwsgi.readthedocs.io/) is a WSGI server integrated with the [Apache httpd](https://httpd.apache.org/) server. The modern [mod_wsgi-express](https://pypi.org/project/mod-wsgi/) command makes it easy to configure and start the server without needing to write Apache httpd configuration.

* Tightly integrated with Apache httpd.
* Supports Windows directly.
* Requires a compiler and the Apache development headers to install.
* Does not require a reverse proxy setup.

This page outlines the basics of running mod_wsgi-express, not the more complex installation and configuration with httpd. Be sure to read the [mod_wsgi-express](https://pypi.org/project/mod-wsgi/), [mod_wsgi](https://modwsgi.readthedocs.io/), and [Apache httpd](https://httpd.apache.org/) documentation to understand what features are available.

Installing
----------

Installing mod_wsgi requires a compiler and the Apache server and development headers installed. You will get an error if they are not. How to install them depends on the OS and package manager that you use.

Create a virtualenv, install your application, then install `mod_wsgi`.

```
$ cd hello-app
$ python -m venv .venv
$ . .venv/bin/activate
$ pip install .  # install your application
$ pip install mod_wsgi
```

Running
-------

The only argument to `mod_wsgi-express` specifies a script containing your Flask application, which must be called `application`. You can write a small script to import your app with this name, or to create it if using the app factory pattern.

`wsgi.py`

```
from hello import app

application = app
```

`wsgi.py`

```
from hello import create_app

application = create_app()
```

Now run the `mod_wsgi-express start-server` command.

```
$ mod_wsgi-express start-server wsgi.py --processes 4
```

The `--processes` option specifies the number of worker processes to run; a starting value could be `CPU * 2`.

Logs for each request aren’t shown in the terminal. If an error occurs, its information is written to the error log file shown when starting the server.

Binding Externally
------------------

Unlike the other WSGI servers in these docs, mod_wsgi can be run as root to bind to privileged ports like 80 and 443. However, it must be configured to drop permissions to a different user and group for the worker processes.

For example, if you created a `hello` user and group, you should install your virtualenv and application as that user, then tell mod_wsgi to drop to that user after starting.

```
$ sudo /home/hello/.venv/bin/mod_wsgi-express start-server \
    /home/hello/wsgi.py \
    --user hello --group hello --port 80 --processes 4
```

eventlet
========

Prefer using [Gunicorn](../gunicorn/index) with eventlet workers rather than using [eventlet](https://eventlet.net/) directly. Gunicorn provides a much more configurable and production-tested server.

[eventlet](https://eventlet.net/) allows writing asynchronous, coroutine-based code that looks like standard synchronous Python.
It uses [greenlet](https://greenlet.readthedocs.io/en/latest/) to enable task switching without writing `async/await` or using `asyncio`. [gevent](../gevent/index) is another library that does the same thing. Certain dependencies you have, or other considerations, may affect which of the two you choose to use.

eventlet provides a WSGI server that can handle many connections at once instead of one per worker process. You must actually use eventlet in your own code to see any benefit to using the server.

Installing
----------

When using eventlet, greenlet>=1.0 is required, otherwise context locals such as `request` will not work as expected. When using PyPy, PyPy>=7.3.7 is required.

Create a virtualenv, install your application, then install `eventlet`.

```
$ cd hello-app
$ python -m venv .venv
$ . .venv/bin/activate
$ pip install .  # install your application
$ pip install eventlet
```

Running
-------

To use eventlet to serve your application, write a script that imports its `wsgi.server`, as well as your app or app factory.

`wsgi.py`

```
import eventlet
from eventlet import wsgi
from hello import create_app

app = create_app()
wsgi.server(eventlet.listen(("127.0.0.1", 8000)), app)
```

```
$ python wsgi.py
(x) wsgi starting up on http://127.0.0.1:8000
```

Binding Externally
------------------

eventlet should not be run as root because it would cause your application code to run as root, which is not secure. However, this means it will not be possible to bind to port 80 or 443. Instead, a reverse proxy such as [nginx](../nginx/index) or [Apache httpd](../apache-httpd/index) should be used in front of eventlet.

You can bind to all external IPs on a non-privileged port by using `0.0.0.0` in the server arguments shown in the previous section. Don’t do this when using a reverse proxy setup, otherwise it will be possible to bypass the proxy.

`0.0.0.0` is not a valid address to navigate to, you’d use a specific IP address in your browser.

Tell Flask it is Behind a Proxy
===============================

When using a reverse proxy, or many Python hosting platforms, the proxy will intercept and forward all external requests to the local WSGI server.

From the WSGI server and Flask application’s perspectives, requests are now coming from the HTTP server to the local address, rather than from the remote address to the external server address.

HTTP servers should set `X-Forwarded-` headers to pass on the real values to the application. The application can then be told to trust and use those values by wrapping it with the [X-Forwarded-For Proxy Fix](https://werkzeug.palletsprojects.com/en/2.3.x/middleware/proxy_fix/ "(in Werkzeug v2.3.x)") middleware provided by Werkzeug.

This middleware should only be used if the application is actually behind a proxy, and should be configured with the number of proxies that are chained in front of it. Not all proxies set all the headers. Since incoming headers can be faked, you must set how many proxies are setting each header so the middleware knows what to trust.

```
from werkzeug.middleware.proxy_fix import ProxyFix

app.wsgi_app = ProxyFix(
    app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_prefix=1
)
```

Remember, only apply this middleware if you are behind a proxy, and set the correct number of proxies that set each header. It can be a security issue if you get this configuration wrong.
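To make the effect concrete, here is a minimal sketch (assuming exactly one trusted proxy in front of the app; the `/ip` route is illustrative only). With the middleware applied, `request.remote_addr` reflects the client address taken from `X-Forwarded-For` rather than the proxy’s own address:

```
from flask import Flask, request
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
app.wsgi_app = ProxyFix(
    app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_prefix=1
)


@app.get("/ip")
def ip() -> str:
    # With one trusted proxy, this is the value from X-Forwarded-For.
    return request.remote_addr or "unknown"
```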
uWSGI
=====

[uWSGI](https://uwsgi-docs.readthedocs.io/en/latest/) is a fast, compiled server suite with extensive configuration and capabilities beyond a basic server.

* It can be very performant due to being a compiled program.
* It is complex to configure beyond the basic application, and has so many options that it can be difficult for beginners to understand.
* It does not support Windows (but does run on WSL).
* It requires a compiler to install in some cases.

This page outlines the basics of running uWSGI. Be sure to read its documentation to understand what features are available.

Installing
----------

uWSGI has multiple ways to install it. The most straightforward is to install the `pyuwsgi` package, which provides precompiled wheels for common platforms. However, it does not provide SSL support, which can be provided with a reverse proxy instead.

Create a virtualenv, install your application, then install `pyuwsgi`.

```
$ cd hello-app
$ python -m venv .venv
$ . .venv/bin/activate
$ pip install .  # install your application
$ pip install pyuwsgi
```

If you have a compiler available, you can install the `uwsgi` package instead. Or install the `pyuwsgi` package from sdist instead of wheel. Either method will include SSL support.

```
$ pip install uwsgi

# or
$ pip install --no-binary pyuwsgi pyuwsgi
```

Running
-------

The most basic way to run uWSGI is to tell it to start an HTTP server and import your application.

```
$ uwsgi --http 127.0.0.1:8000 --master -p 4 -w hello:app

*** Starting uWSGI 2.0.20 (64bit) on [x] ***
*** Operational MODE: preforking ***
mounting hello:app on /
spawned uWSGI master process (pid: x)
spawned uWSGI worker 1 (pid: x, cores: 1)
spawned uWSGI worker 2 (pid: x, cores: 1)
spawned uWSGI worker 3 (pid: x, cores: 1)
spawned uWSGI worker 4 (pid: x, cores: 1)
spawned uWSGI http 1 (pid: x)
```

If you’re using the app factory pattern, you’ll need to create a small Python file to create the app, then point uWSGI at that.

`wsgi.py`

```
from hello import create_app

app = create_app()
```

```
$ uwsgi --http 127.0.0.1:8000 --master -p 4 -w wsgi:app
```

The `--http` option starts an HTTP server at 127.0.0.1 port 8000. The `--master` option specifies the standard worker manager. The `-p` option starts 4 worker processes; a starting value could be `CPU * 2`. The `-w` option tells uWSGI how to import your application.

Binding Externally
------------------

uWSGI should not be run as root with the configuration shown in this doc because it would cause your application code to run as root, which is not secure. However, this means it will not be possible to bind to port 80 or 443. Instead, a reverse proxy such as [nginx](../nginx/index) or [Apache httpd](../apache-httpd/index) should be used in front of uWSGI. It is possible to run uWSGI as root securely, but that is beyond the scope of this doc.

uWSGI has optimized integration with [Nginx uWSGI](https://uwsgi-docs.readthedocs.io/en/latest/Nginx.html) and [Apache mod_proxy_uwsgi](https://uwsgi-docs.readthedocs.io/en/latest/Apache.html#mod-proxy-uwsgi), and possibly other servers, instead of using a standard HTTP proxy. That configuration is beyond the scope of this doc, see the links for more information.

You can bind to all external IPs on a non-privileged port using the `--http 0.0.0.0:8000` option. Don’t do this when using a reverse proxy setup, otherwise it will be possible to bypass the proxy.
```
$ uwsgi --http 0.0.0.0:8000 --master -p 4 -w wsgi:app
```

`0.0.0.0` is not a valid address to navigate to, you’d use a specific IP address in your browser.

Async with gevent
-----------------

The default sync worker is appropriate for many use cases. If you need asynchronous support, uWSGI provides a [gevent](https://www.gevent.org/) worker. This is not the same as Python’s `async/await`, or the ASGI server spec. You must actually use gevent in your own code to see any benefit to using the worker.

When using gevent, greenlet>=1.0 is required, otherwise context locals such as `request` will not work as expected. When using PyPy, PyPy>=7.3.7 is required.

```
$ uwsgi --http 127.0.0.1:8000 --master --gevent 100 -w wsgi:app

*** Starting uWSGI 2.0.20 (64bit) on [x] ***
*** Operational MODE: async ***
mounting hello:app on /
spawned uWSGI master process (pid: x)
spawned uWSGI worker 1 (pid: x, cores: 100)
spawned uWSGI http 1 (pid: x)
*** running gevent loop engine [addr:x] ***
```

Apache httpd
============

[Apache httpd](https://httpd.apache.org/) is a fast, production level HTTP server. When serving your application with one of the WSGI servers listed in [Deploying to Production](../index), it is often good or necessary to put a dedicated HTTP server in front of it. This “reverse proxy” can handle incoming requests, TLS, and other security and performance concerns better than the WSGI server.

httpd can be installed using your system package manager, or a pre-built executable for Windows. Installing and running httpd itself is outside the scope of this doc. This page outlines the basics of configuring httpd to proxy your application. Be sure to read its documentation to understand what features are available.

Domain Name
-----------

Acquiring and configuring a domain name is outside the scope of this doc. In general, you will buy a domain name from a registrar, pay for server space with a hosting provider, and then point your registrar at the hosting provider’s name servers.

To simulate this, you can also edit your `hosts` file, located at `/etc/hosts` on Linux. Add a line that associates a name with the local IP. Modern Linux systems may be configured to treat any domain name that ends with `.localhost` like this without adding it to the `hosts` file.

`/etc/hosts`

```
127.0.0.1 hello.localhost
```

Configuration
-------------

The httpd configuration is located at `/etc/httpd/conf/httpd.conf` on Linux. It may be different depending on your operating system. Check the docs and look for `httpd.conf`.

Remove or comment out any existing `DocumentRoot` directive. Add the config lines below. We’ll assume the WSGI server is listening locally at `http://127.0.0.1:8000`.

`/etc/httpd/conf/httpd.conf`

```
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
ProxyPass / http://127.0.0.1:8000/
RequestHeader set X-Forwarded-Proto http
RequestHeader set X-Forwarded-Prefix /
```

The `LoadModule` lines might already exist. If so, make sure they are uncommented instead of adding them manually.

Then [Tell Flask it is Behind a Proxy](../proxy_fix/index) so that your application uses the `X-Forwarded` headers. `X-Forwarded-For` and `X-Forwarded-Host` are automatically set by `ProxyPass`.
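One detail worth noting: the `RequestHeader` directive comes from httpd’s `mod_headers` module. Many distributions enable it by default, but if yours does not, you would also need a line like the following (module path shown in the same layout as the `LoadModule` lines above; adjust for your system):

```
LoadModule headers_module modules/mod_headers.so
```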
nginx
=====

[nginx](https://nginx.org/) is a fast, production level HTTP server. When serving your application with one of the WSGI servers listed in [Deploying to Production](../index), it is often good or necessary to put a dedicated HTTP server in front of it. This "reverse proxy" can handle incoming requests, TLS, and other security and performance concerns better than the WSGI server.

Nginx can be installed using your system package manager, or a pre-built executable for Windows. Installing and running Nginx itself is outside the scope of this doc. This page outlines the basics of configuring Nginx to proxy your application. Be sure to read its documentation to understand what features are available.

Domain Name
-----------

Acquiring and configuring a domain name is outside the scope of this doc. In general, you will buy a domain name from a registrar, pay for server space with a hosting provider, and then point your registrar at the hosting provider's name servers.

To simulate this, you can also edit your `hosts` file, located at `/etc/hosts` on Linux. Add a line that associates a name with the local IP. Modern Linux systems may be configured to treat any domain name that ends with `.localhost` like this without adding it to the `hosts` file.

`/etc/hosts`

```
127.0.0.1 hello.localhost
```

Configuration
-------------

The nginx configuration is located at `/etc/nginx/nginx.conf` on Linux. It may be different depending on your operating system. Check the docs and look for `nginx.conf`.

Remove or comment out any existing `server` section. Add a `server` section and use the `proxy_pass` directive to point to the address the WSGI server is listening on. We'll assume the WSGI server is listening locally at `http://127.0.0.1:8000`.

`/etc/nginx/nginx.conf`

```
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Prefix /;
    }
}
```

Then [Tell Flask it is Behind a Proxy](../proxy_fix/index) so that your application uses these headers.
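To verify the whole chain (client, reverse proxy, WSGI server, Flask), a small diagnostic view can be handy. This is a hypothetical `/info` route for illustration only, not part of the documented example app:

```
from flask import Flask, request

app = Flask(__name__)

@app.route("/info")
def info():
    # With ProxyFix applied, these reflect the original client request
    # (scheme, host, and address) rather than the proxy's connection.
    return {
        "scheme": request.scheme,
        "host": request.host,
        "remote_addr": request.remote_addr,
    }
```

Requesting `http://hello.localhost/info` through the proxy should report the external host, while hitting `http://127.0.0.1:8000/info` directly shows the unproxied values.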
Package ‘AzureRMR’
September 21, 2023

Title Interface to 'Azure Resource Manager'
Version 2.4.4
Description A lightweight but powerful R interface to the 'Azure Resource Manager' REST API. The package exposes a comprehensive class framework and related tools for creating, updating and deleting 'Azure' resource groups, resources and templates. While 'AzureRMR' can be used to manage any 'Azure' service, it can also be extended by other packages to provide extra functionality for specific services. Part of the 'AzureR' family of packages.
URL https://github.com/Azure/AzureRMR https://github.com/Azure/AzureR
BugReports https://github.com/Azure/AzureRMR/issues
License MIT + file LICENSE
VignetteBuilder knitr
Depends R (>= 3.3)
Imports AzureGraph (>= 1.2.0), AzureAuth (>= 1.2.1), utils, parallel, httr (>= 1.3), jsonlite, R6, uuid
Suggests knitr, rmarkdown, testthat, httpuv, AzureStor
RoxygenNote 7.2.1
NeedsCompilation no
Author <NAME> [aut, cre], Microsoft [cph]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-09-21 08:50:02 UTC

R topics documented: az_resource, az_resource_group, az_rm, az_role_assignment, az_role_definition, az_subscription, az_template, build_template_definition, call_azure_rm, create_azure_login, init_pool, is_azure_login, is_url, lock, rbac

az_resource    Azure resource class

Description
Class representing a generic Azure resource.

Format
An R6 object of class az_resource.

Methods
• new(...): Initialize a new resource object. See 'Initialization' for more details.
• delete(confirm=TRUE, wait=FALSE): Delete this resource, after a confirmation check. Optionally wait for the delete to finish.
• update(...): Update this resource on the host.
• sync_fields(): Synchronise the R object with the resource it represents in Azure. Returns the properties$provisioningState field, so you can query this programmatically to check if a resource has finished provisioning. Not all resource types require explicit provisioning, in which case this method will return NULL.
• set_api_version(api_version, stable_only=TRUE): Set the API version to use when interacting with the host. If api_version is not supplied, use the latest version available, either the latest stable version (if stable_only=TRUE) or the latest preview version (if stable_only=FALSE).
• get_api_version(): Get the current API version.
• get_subresource(type, name): Get a sub-resource of this resource. See 'Sub-resources' below.
• create_subresource(type, name, ...): Create a sub-resource of this resource.
• delete_subresource(type, name, confirm=TRUE): Delete a sub-resource of this resource.
• do_operation(...): Carry out an operation. See 'Operations' for more details.
• set_tags(..., keep_existing=TRUE): Set the tags on this resource. The tags can be either names or name-value pairs. To delete a tag, set it to NULL.
• get_tags(): Get the tags on this resource.
• create_lock(name, level): Create a management lock on this resource.
• get_lock(name): Returns a management lock object.
• delete_lock(name): Deletes a management lock object.
• list_locks(): List all locks that apply to this resource. Note this includes locks created at the subscription or resource group level.
• add_role_assignment(name, ...): Adds a new role assignment. See 'Role-based access control' below.
• get_role_assignment(id): Retrieves an existing role assignment.
• remove_role_assignment(id): Removes an existing role assignment.
• list_role_assignments(): Lists role assignments. • get_role_definition(id): Retrieves an existing role definition. • list_role_definitions() Lists role definitions. Initialization There are multiple ways to initialize a new resource object. The new() method can retrieve an exist- ing resource, deploy/create a new resource, or create an empty/null object (without communicating with the host), based on the arguments you supply. All of these initialization options have the following arguments in common. 1. token: An OAuth 2.0 token, as generated by get_azure_token. 2. subscription: The subscription ID. 3. api_version: Optionally, the API version to use when interacting with the host. By default, this is NULL in which case the latest API version will be used. 4. A set of identifying arguments: • resource_group: The resource group containing the resource. • id: The full ID of the resource. This is a string of the form /subscriptions/{uuid}/resourceGroups/{resourc • provider: The provider of the resource, eg Microsoft.Compute. • path: The path to the resource, eg virtualMachines. • type: The combination of provider and path, eg Microsoft.Compute/virtualMachines. • name: The name of the resource instance, eg myWindowsVM. Providing id will fill in the values for all the other identifying arguments. Similarly, providing type will fill in the values for provider and path. Unless you provide id, you must also provide name. The default behaviour for new() is to retrieve an existing resource, which occurs if you supply only the arguments listed above. If you also supply an argument deployed_properties=NULL, this will create a null object. If you supply any other (named) arguments, new() will create a new object on the host, with the supplied arguments as parameters. Generally, the easiest way to initialize an object is via the get_resource, create_resource or list_resources methods of the az_resource_group class, which will handle all the gory details automatically. Operations The do_operation() method allows you to carry out arbitrary operations on the resource. It takes the following arguments: • op: The operation in question, which will be appended to the URL path of the request. • options: A named list giving the URL query parameters. • ...: Other named arguments passed to call_azure_rm, and then to the appropriate call in httr. In particular, use body to supply the body of a PUT, POST or PATCH request. • http_verb: The HTTP verb as a string, one of GET, PUT, POST, DELETE, HEAD or PATCH. Consult the Azure documentation for your resource to find out what operations are supported. Sub-resources Some resource types can have sub-resources: objects exposed by Resource Manager that make up a part of their parent’s functionality. For example, a storage account (type Microsoft.Storage/storageAccounts) provides the blob storage service, which can be accessed via Resource Manager as a sub-resource of type Microsoft.Storage/storageAccounts/blobServices/default. To retrieve an existing sub-resource, use the get_subresource() method. You do not need to include the parent resource’s type and name. For example, if res is a resource for a storage account, and you want to retrieve the sub-resource for the blob container "myblobs", call res$get_subresource(type="blobServices/default/containers", name="myblobs") Notice that the storage account’s resource type and name are omitted from the get_subresource arguments. 
Similarly, to create a new subresource, call the create_subresource() method with the same naming convention, passing any required fields as named arguments; and to delete it, call delete_subresource(). Role-based access control AzureRMR implements a subset of the full RBAC functionality within Azure Active Directory. You can retrieve role definitions and add and remove role assignments, at the subscription, resource group and resource levels. See rbac for more information. See Also az_resource_group, call_azure_rm, call_azure_url, Resources API reference For role-based access control methods, see rbac For management locks, see lock Examples ## Not run: # recommended way to retrieve a resource: via a resource group object # storage account: stor <- resgroup$get_resource(type="Microsoft.Storage/storageAccounts", name="mystorage") # virtual machine: vm <- resgroup$get_resource(type="Microsoft.Compute/virtualMachines", name="myvm") ## carry out operations on a resource # storage account: get access keys stor$do_operation("listKeys", http_verb="POST") # virtual machine: run a script vm$do_operation("runCommand", body=list( commandId="RunShellScript", # RunPowerShellScript for Windows script=as.list("ifconfig > /tmp/ifconfig.out") ), encode="json", http_verb="POST") ## retrieve properties # storage account: endpoint URIs stor$properties$primaryEndpoints$file stor$properties$primaryEndpoints$blob # virtual machine: hardware profile vm$properties$hardwareProfile ## update a resource: resizing a VM properties <- list(hardwareProfile=list(vmSize="Standard_DS3_v2")) vm$do_operation(http_verb="PATCH", body=list(properties=properties), encode="json") # sync with Azure: useful to track resource creation/update status vm$sync_fields() ## subresource: create a public blob container stor$create_subresource(type="blobservices/default/containers", name="mycontainer", properties=list(publicAccess="container")) ## delete a subresource and resource stor$delete_subresource(type="blobservices/default/containers", name="mycontainer") stor$delete() ## End(Not run) az_resource_group Azure resource group class Description Class representing an Azure resource group. Format An R6 object of class az_resource_group. Methods • new(token, subscription, id, ...): Initialize a resource group object. See ’Initialization’ for more details. • delete(confirm=TRUE): Delete this resource group, after a confirmation check. This is asyn- chronous: while the method returns immediately, the delete operation continues on the host in the background. For resource groups containing a large number of deployed resources, this may take some time to complete. • sync_fields(): Synchronise the R object with the resource group it represents in Azure. • list_templates(filter, top): List deployed templates in this resource group. filter and top are optional arguments to filter the results; see the Azure documentation for more details. If top is specified, the returned list will have a maximum of this many items. • get_template(name): Return an object representing an existing template. • deploy_template(...): Deploy a new template. See ’Templates’ for more details. By default, AzureRMR will set the createdBy tag on a newly-deployed template to the value AzureR/AzureRMR. • delete_template(name, confirm=TRUE, free_resources=FALSE): Delete a deployed tem- plate, and optionally free any resources that were created. • get_resource(...): Return an object representing an existing resource. See ’Resources’ for more details. 
• create_resource(...): Create a new resource. By default, AzureRMR will set the createdBy tag on a newly-created resource to the value AzureR/AzureRMR. • delete_resource(..., confirm=TRUE, wait=FALSE): Delete an existing resource. Option- ally wait for the delete to finish. • resource_exists(...): Check if a resource exists. • list_resources(filter, expand, top): Return a list of resource group objects for this subscription. filter, expand and top are optional arguments to filter the results; see the Azure documentation for more details. If top is specified, the returned list will have a maxi- mum of this many items. • do_operation(...): Carry out an operation. See ’Operations’ for more details. • set_tags(..., keep_existing=TRUE): Set the tags on this resource group. The tags can be either names or name-value pairs. To delete a tag, set it to NULL. • get_tags(): Get the tags on this resource group. • create_lock(name, level): Create a management lock on this resource group (which will propagate to all resources within it). • get_lock(name): Returns a management lock object. • delete_lock(name): Deletes a management lock object. • list_locks(): List all locks that apply to this resource group. Note this includes locks created at the subscription level, and for any resources within the resource group. • add_role_assignment(name, ...): Adds a new role assignment. See ’Role-based access control’ below. • get_role_assignment(id): Retrieves an existing role assignment. • remove_role_assignment(id): Removes an existing role assignment. • list_role_assignments(): Lists role assignments. • get_role_definition(id): Retrieves an existing role definition. • list_role_definitions() Lists role definitions. Initialization Initializing a new object of this class can either retrieve an existing resource group, or create a new resource group on the host. Generally, the easiest way to create a resource group object is via the get_resource_group, create_resource_group or list_resource_groups methods of the az_subscription class, which handle this automatically. To create a resource group object in isolation, supply (at least) an Oauth 2.0 token of class AzureTo- ken, the subscription ID, and the resource group name. If this object refers to a new resource group, supply the location as well (use the list_locations method of the az_subscription class for possible locations). You can also pass any optional parameters for the resource group as named arguments to new(). Templates To deploy a new template, pass the following arguments to deploy_template(): • name: The name of the deployment. • template: The template to deploy. This can be provided in a number of ways: 1. A nested list of name-value pairs representing the parsed JSON 2. The name of a template file 3. A vector of strings containing unparsed JSON 4. A URL from which the template can be downloaded • parameters: The parameters for the template. This can be provided using any of the same methods as the template argument. • wait: Optionally, whether or not to wait until the deployment is complete before returning. Defaults to FALSE. Retrieving or deleting a deployed template requires only the name of the deployment. Resources There are a number of arguments to get_resource(), create_resource() and delete_resource() that serve to identify the specific resource in question: • id: The full ID of the resource, including subscription ID and resource group. • provider: The provider of the resource, eg Microsoft.Compute. 
• path: The full path to the resource, eg virtualMachines. • type: The combination of provider and path, eg Microsoft.Compute/virtualMachines. • name: The name of the resource instance, eg myWindowsVM. Providing the id argument will fill in the values for all the other arguments. Similarly, providing the type argument will fill in the values for provider and path. Unless you provide id, you must also provide name. To create/deploy a new resource, specify any extra parameters that the provider needs as named arguments to create_resource(). Like deploy_template(), create_resource() also takes an optional wait argument that specifies whether to wait until resource creation is complete before returning. Operations The do_operation() method allows you to carry out arbitrary operations on the resource group. It takes the following arguments: • op: The operation in question, which will be appended to the URL path of the request. • options: A named list giving the URL query parameters. • ...: Other named arguments passed to call_azure_rm, and then to the appropriate call in httr. In particular, use body to supply the body of a PUT, POST or PATCH request, and api_version to set the API version. • http_verb: The HTTP verb as a string, one of GET, PUT, POST, DELETE, HEAD or PATCH. Consult the Azure documentation for what operations are supported. Role-based access control AzureRMR implements a subset of the full RBAC functionality within Azure Active Directory. You can retrieve role definitions and add and remove role assignments, at the subscription, resource group and resource levels. See rbac for more information. See Also az_subscription, az_template, az_resource, Azure resource group overview, Resources API refer- ence, Template API reference For role-based access control methods, see rbac For management locks, see lock Examples ## Not run: # recommended way to retrieve a resource group object rg <- get_azure_login("myaadtenant")$ get_subscription("subscription_id")$ get_resource_group("rgname") # list resources & templates in this resource group rg$list_resources() rg$list_templates() # get a resource (virtual machine) rg$get_resource(type="Microsoft.Compute/virtualMachines", name="myvm") # create a resource (storage account) rg$create_resource(type="Microsoft.Storage/storageAccounts", name="mystorage", kind="StorageV2", sku=list(name="Standard_LRS")) # delete a resource rg$delete_resource(type="Microsoft.Storage/storageAccounts", name="mystorage") # deploy a template rg$deploy_template("tplname", template="template.json", parameters="parameters.json") # deploy a template with parameters inline rg$deploy_template("mydeployment", template="template.json", parameters=list(parm1="foo", parm2="bar")) # delete a template and free resources rg$delete_template("tplname", free_resources=TRUE) # delete the resource group itself rg$delete() ## End(Not run) az_rm Azure Resource Manager Description Base class for interacting with Azure Resource Manager. Format An R6 object of class az_rm. Methods • new(tenant, app, ...): Initialize a new ARM connection with the given credentials. See ’Authentication‘ for more details. • list_subscriptions(): Returns a list of objects, one for each subscription associated with this app ID. • get_subscription(id): Returns an object representing a subscription. • get_subscription_by_name(name): Returns the subscription with the given name (as op- posed to a GUID). • do_operation(...): Carry out an operation. See ’Operations’ for more details. 
Authentication The recommended way to authenticate with ARM is via the get_azure_login function, which creates a new instance of this class. To authenticate with the az_rm class directly, provide the following arguments to the new method: • tenant: Your tenant ID. This can be a name ("myaadtenant"), a fully qualified domain name ("myaadtenant.onmicrosoft.com" or "mycompanyname.com"), or a GUID. • app: The client/app ID to use to authenticate with Azure Active Directory. The default is to login interactively using the Azure CLI cross-platform app, but it’s recommended to supply your own app credentials if possible. • password: if auth_type == "client_credentials", the app secret; if auth_type == "resource_owner", your account password. • username: if auth_type == "resource_owner", your username. • certificate: If ‘auth_type == "client_credentials", a certificate to authenticate with. This is a more secure alternative to using an app secret. • auth_type: The OAuth authentication method to use, one of "client_credentials", "autho- rization_code", "device_code" or "resource_owner". See get_azure_token for how the default method is chosen, along with some caveats. • version: The Azure Active Directory version to use for authenticating. • host: your ARM host. Defaults to https://management.azure.com/. Change this if you are using a government or private cloud. • aad_host: Azure Active Directory host for authentication. Defaults to https://login.microsoftonline.com/. Change this if you are using a government or private cloud. • ...: Further arguments to pass to get_azure_token. • scopes: The Azure Service Management scopes (permissions) to obtain for this login. Only for version=2. • token: Optionally, an OAuth 2.0 token, of class AzureToken. This allows you to reuse the authentication details for an existing session. If supplied, all other arguments will be ignored. Operations The do_operation() method allows you to carry out arbitrary operations on the Resource Manager endpoint. It takes the following arguments: • op: The operation in question, which will be appended to the URL path of the request. • options: A named list giving the URL query parameters. • ...: Other named arguments passed to call_azure_rm, and then to the appropriate call in httr. In particular, use body to supply the body of a PUT, POST or PATCH request, and api_version to set the API version. • http_verb: The HTTP verb as a string, one of GET, PUT, POST, DELETE, HEAD or PATCH. Consult the Azure documentation for what operations are supported. See Also create_azure_login, get_azure_login Azure Resource Manager overview, REST API reference Examples ## Not run: # start a new Resource Manager session az <- az_rm$new(tenant="myaadtenant.onmicrosoft.com", app="app_id", password="password") # authenticate with credentials in a file az <- az_rm$new(config_file="creds.json") # authenticate with device code az <- az_rm$new(tenant="myaadtenant.onmicrosoft.com", app="app_id", auth_type="device_code") # retrieve a list of subscription objects az$list_subscriptions() # a specific subscription az$get_subscription("subscription_id") ## End(Not run) az_role_assignment Azure role assignment class Description Azure role assignment class Format An R6 object of class az_role_assignment. Fields • id: The full resource ID for this role assignment. • type: The resource type for a role assignment. Always Microsoft.Authorization/roleAssignments. • name: A GUID that identifies this role assignment. 
• role_name: The role definition name (in text), eg "Contributor". • properties: Properties for the role definition. • token: An OAuth token, obtained via get_azure_token. Methods • remove(confirm=TRUE): Removes this role assignment. Initialization The recommended way to create new instances of this class is via the add_role_assignment and get_role_assignment methods for subscription, resource group and resource objects. Technically role assignments and role definitions are Azure resources, and could be implemented as subclasses of az_resource. AzureRMR treats them as distinct, due to limited RBAC functionality currently supported. See Also add_role_assignment, get_role_assignment, get_role_definition, az_role_definition Overview of role-based access control az_role_definition Azure role definition class Description Azure role definition class Format An R6 object of class az_role_definition. Fields • id: The full resource ID for this role definition. • type: The resource type for a role definition. Always Microsoft.Authorization/roleDefinitions. • name: A GUID that identifies this role definition. • properties: Properties for the role definition. Methods This class has no methods. Initialization The recommended way to create new instances of this class is via the get_role_definition method for subscription, resource group and resource objects. Technically role assignments and role definitions are Azure resources, and could be implemented as subclasses of az_resource. AzureRMR treats them as distinct, due to limited RBAC functionality currently supported. In particular, role definitions are read-only: you can retrieve a definition, but not modify it, nor create new definitions. See Also get_role_definition, get_role_assignment, az_role_assignment Overview of role-based access control az_subscription Azure subscription class Description Class representing an Azure subscription. Format An R6 object of class az_subscription. Methods • new(token, id, ...): Initialize a subscription object. • list_resource_groups(filter, top): Return a list of resource group objects for this sub- scription. filter and top are optional arguments to filter the results; see the Azure documen- tation for more details. If top is specified, the returned list will have a maximum of this many items. • get_resource_group(name): Return an object representing an existing resource group. • create_resource_group(name, location): Create a new resource group in the specified region/location, and return an object representing it. By default, AzureRMR will set the createdBy tag on a newly-created resource group to the value AzureR/AzureRMR. • delete_resource_group(name, confirm=TRUE): Delete a resource group, after asking for confirmation. • resource_group_exists(name): Check if a resource group exists. • list_resources(filter, expand, top): List all resources deployed under this subscrip- tion. filter, expand and top are optional arguments to filter the results; see the Azure documentation for more details. If top is specified, the returned list will have a maximum of this many items. • list_locations(info=c("partial", "all")): List locations available. The default info="partial" returns a subset of the information about each location; set info="all" to return everything. • get_provider_api_version(provider, type, which=1, stable_only=TRUE): Get the cur- rent API version for the given resource provider and type. If no resource type is supplied, returns a vector of API versions, one for each resource type for the given provider. 
If neither provider nor type is supplied, returns the API versions for all resources and providers. Set stable_only=FALSE to allow preview APIs to be returned. Set which to a number > 1 to return an API other than the most recent. • do_operation(...): Carry out an operation. See ’Operations’ for more details. • create_lock(name, level): Create a management lock on this subscription (which will propagate to all resources within it). • get_lock(name): Returns a management lock object. • delete_lock(name): Deletes a management lock object. • list_locks(): List all locks that exist in this subscription. • add_role_assignment(name, ...): Adds a new role assignment. See ’Role-based access control’ below. • get_role_assignment(id): Retrieves an existing role assignment. • remove_role_assignment(id): Removes an existing role assignment. • list_role_assignments(): Lists role assignments. • get_role_definition(id): Retrieves an existing role definition. • list_role_definitions() Lists role definitions. • get_tags() Get the tags on this subscription. Details Generally, the easiest way to create a subscription object is via the get_subscription or list_subscriptions methods of the az_rm class. To create a subscription object in isolation, call the new() method and supply an Oauth 2.0 token of class AzureToken, along with the ID of the subscription. Operations The do_operation() method allows you to carry out arbitrary operations on the subscription. It takes the following arguments: • op: The operation in question, which will be appended to the URL path of the request. • options: A named list giving the URL query parameters. • ...: Other named arguments passed to call_azure_rm, and then to the appropriate call in httr. In particular, use body to supply the body of a PUT, POST or PATCH request, and api_version to set the API version. • http_verb: The HTTP verb as a string, one of GET, PUT, POST, DELETE, HEAD or PATCH. Consult the Azure documentation for what operations are supported. Role-based access control AzureRMR implements a subset of the full RBAC functionality within Azure Active Directory. You can retrieve role definitions and add and remove role assignments, at the subscription, resource group and resource levels. See rbac for more information. See Also Azure Resource Manager overview For role-based access control methods, see rbac For management locks, see lock Examples ## Not run: # recommended way to retrieve a subscription object sub <- get_azure_login("myaadtenant")$ get_subscription("subscription_id") # retrieve list of resource group objects under this subscription sub$list_resource_groups() # get a resource group sub$get_resource_group("rgname") # check if a resource group exists, and if not, create it rg_exists <- sub$resource_group_exists("rgname") if(!rg_exists) sub$create_resource_group("rgname", location="australiaeast") # delete a resource group sub$delete_resource_group("rgname") # get provider API versions for some resource types sub$get_provider_api_version("Microsoft.Compute", "virtualMachines") sub$get_provider_api_version("Microsoft.Storage", "storageAccounts") ## End(Not run) az_template Azure template class Description Class representing an Azure deployment template. Format An R6 object of class az_template. Methods • new(token, subscription, resource_group, name, ...): Initialize a new template ob- ject. See ’Initialization’ for more details. • check(): Check the deployment status of the template; throw an error if the template has been deleted. 
• cancel(free_resources=FALSE): Cancel an in-progress deployment. Optionally free any resources that have already been created. • delete(confirm=TRUE, free_resources=FALSE): Delete a deployed template, after a con- firmation check. Optionally free any resources that were created. If the template was deployed in Complete mode (its resource group is exclusive to its use), the latter process will delete the entire resource group. Otherwise resources are deleted in the order given by the template’s output resources list; in this case, some may be left behind if the ordering is incompatible with dependencies. • list_resources(): Returns a list of Azure resource objects that were created by the tem- plate. This returns top-level resources only, not those that represent functionality provided by another resource. • get_tags(): Returns the tags for the deployment template (note: this is not the same as the tags applied to resources that are deployed). Initialization Initializing a new object of this class can either retrieve an existing template, or deploy a new template on the host. Generally, the easiest way to create a template object is via the get_template, deploy_template or list_templates methods of the az_resource_group class, which handle the details automatically. To initialize an object that refers to an existing deployment, supply the following arguments to new(): • token: An OAuth 2.0 token, as generated by get_azure_token. • subscription: The subscription ID. • resource_group: The resource group. • name: The deployment name‘. If you also supply the following arguments to new(), a new template will be deployed: • template: The template to deploy. This can be provided in a number of ways: 1. A nested list of R objects, which will be converted to JSON via jsonlite::toJSON 2. A vector of strings containing unparsed JSON 3. The name of a template file 4. A URL from which the host can download the template • parameters: The parameters for the template. This can be provided using any of the same methods as the template argument. • wait: Optionally, whether to wait until the deployment is complete. Defaults to FALSE, in which case the method will return immediately. You can use the build_template_definition and build_template_parameters helper func- tions to construct the inputs for deploying a template. These can take as inputs R lists, JSON text strings, or file connections, and can also be extended by other packages. See Also az_resource_group, az_resource, build_template_definition, build_template_parameters Template overview, Template API reference Examples ## Not run: # recommended way to deploy a template: via a resource group object tpl <- resgroup$deploy_template("mydeployment", template="template.json", parameters="parameters.json") # retrieve list of created resource objects tpl$list_resources() # delete template (will not touch resources) tpl$delete() # delete template and free resources tpl$delete(free_resources=TRUE) ## End(Not run) build_template_definition Build the JSON for a template and its parameters Description Build the JSON for a template and its parameters Usage build_template_definition(...) ## Default S3 method: build_template_definition(parameters = named_list(), variables = named_list(), functions = list(), resources = list(), outputs = named_list(), schema = "2019-04-01", version = "1.0.0.0", api_profile = NULL, ...) build_template_parameters(...) ## Default S3 method: build_template_parameters(...) Arguments ... 
For build_template_parameters, named arguments giving the values of each template parameter. For build_template_definition, further arguments passed to class methods. parameters For build_template_definition, the parameter names and types for the tem- plate. See ’Details’ below. variables Internal variables used by the template. functions User-defined functions used by the template. resources List of resources that the template should deploy. outputs The template outputs. schema, version, api_profile Less commonly used arguments that can be used to customise the template. See the guide to template syntax on Microsoft Docs, linked below. Details build_template_definition is used to generate a template from its components. The main ar- guments are parameters, variables, functions, resources and outputs. Each of these can be specified in various ways: • As character strings containing unparsed JSON text. • As an R list of (nested) objects, which will be converted to JSON via jsonlite::toJSON. • A connection pointing to a JSON file or object. • For the parameters argument, this can also be a character vector containing the types of each parameter. build_template_parameters is for creating the list of parameters to be passed along with the template. Its arguments should all be named, and contain either the JSON text or an R list giving the parsed JSON. Both of these are generics and can be extended by other packages to handle specific deployment scenarios, eg virtual machines. Value The JSON text for the template definition and its parameters. See Also az_template, jsonlite::toJSON Guide to template syntax Examples # dummy example # note that 'resources' arg should be a _list_ of resources build_template_definition(resources=list(list(name="resource here"))) # specifying parameters as a list build_template_definition(parameters=list(par1=list(type="string")), resources=list(list(name="resource here"))) # specifying parameters as a vector build_template_definition(parameters=c(par1="string"), resources=list(list(name="resource here"))) # a user-defined function build_template_definition( parameters=c(name="string"), functions=list( list( namespace="mynamespace", members=list( prefixedName=list( parameters=list( list(name="name", type="string") ), output=list( type="string", value="[concat('AzureR', parameters('name'))]" ) ) ) ) ) ) # realistic example: storage account build_template_definition( parameters=c( name="string", location="string", sku="string" ), variables=list( id="[resourceId('Microsoft.Storage/storageAccounts', parameters('name'))]" ), resources=list( list( name="[parameters('name')]", location="[parameters('location')]", type="Microsoft.Storage/storageAccounts", apiVersion="2018-07-01", sku=list( name="[parameters('sku')]" ), kind="Storage" ) ), outputs=list( storageId="[variables('id')]" ) ) # providing JSON text as input build_template_definition( parameters=c(name="string", location="string", sku="string"), resources='[ { "name": "[parameters(\'name\')]", "location": "[parameters(\'location\')]", "type": "Microsoft.Storage/storageAccounts", "apiVersion": "2018-07-01", "sku": { "name": "[parameters(\'sku\')]" }, "kind": "Storage" } ]' ) # parameter values build_template_parameters(name="mystorageacct", location="westus", sku="Standard_LRS") build_template_parameters( param='{ "name": "myname", "properties": { "prop1": 42, "prop2": "hello" } }' ) param_json <- '{ "name": "myname", "properties": { "prop1": 42, "prop2": "hello" } }' 
build_template_parameters(param=textConnection(param_json)) ## Not run: # reading JSON definitions from files build_template_definition( parameters=file("parameter_def.json"), resources=file("resource_def.json") ) build_template_parameters(name="myres_name", complex_type=file("myres_params.json")) ## End(Not run) call_azure_rm Call the Azure Resource Manager REST API Description Call the Azure Resource Manager REST API Usage call_azure_rm(token, subscription, operation, ..., options = list(), api_version = getOption("azure_api_version")) call_azure_url(token, url, ..., body = NULL, encode = "json", http_verb = c("GET", "DELETE", "PUT", "POST", "HEAD", "PATCH"), http_status_handler = c("stop", "warn", "message", "pass"), auto_refresh = TRUE) Arguments token An Azure OAuth token, of class AzureToken. subscription For call_azure_rm, a subscription ID. operation The operation to perform, which will form part of the URL path. ... Other arguments passed to lower-level code, ultimately to the appropriate func- tions in httr. options A named list giving the URL query parameters. api_version The API version to use, which will form part of the URL sent to the host. url A complete URL to send to the host. body The body of the request, for PUT/POST/PATCH. encode The encoding (really content-type) for the request body. The default value "json" means to serialize a list body into a JSON object. If you pass an already- serialized JSON object as the body, set encode to "raw". http_verb The HTTP verb as a string, one of GET, PUT, POST, DELETE, HEAD or PATCH. http_status_handler How to handle in R the HTTP status code of a response. "stop", "warn" or "message" will call the appropriate handlers in httr, while "pass" ignores the status code. auto_refresh Whether to refresh/renew the OAuth token if it is no longer valid. Details These functions form the low-level interface between R and Azure. call_azure_rm builds a URL from its arguments and passes it to call_azure_url. Authentication is handled automatically. Value If http_status_handler is one of "stop", "warn" or "message", the status code of the response is checked. If an error is not thrown, the parsed content of the response is returned with the status code attached as the "status" attribute. If http_status_handler is "pass", the entire response is returned without modification. See Also httr::GET, httr::PUT, httr::POST, httr::DELETE, httr::stop_for_status, httr::content create_azure_login Login to Azure Resource Manager Description Login to Azure Resource Manager Usage create_azure_login(tenant = "common", app = .az_cli_app_id, password = NULL, username = NULL, certificate = NULL, auth_type = NULL, version = 2, host = "https://management.azure.com/", aad_host = "https://login.microsoftonline.com/", scopes = ".default", config_file = NULL, token = NULL, graph_host = "https://graph.microsoft.com/", ...) get_azure_login(tenant = "common", selection = NULL, app = NULL, scopes = NULL, auth_type = NULL, refresh = TRUE) delete_azure_login(tenant = "common", confirm = TRUE) list_azure_logins() Arguments tenant The Azure Active Directory tenant for which to obtain a login client. Can be a name ("myaadtenant"), a fully qualified domain name ("myaadtenant.onmicrosoft.com" or "mycompanyname.com"), or a GUID. The default is to login via the "com- mon" tenant, which will infer your actual tenant from your credentials. app The client/app ID to use to authenticate with Azure Active Directory. 
The de- fault is to login interactively using the Azure CLI cross-platform app, but you can supply your own app credentials as well. password If auth_type == "client_credentials", the app secret; if auth_type == "resource_owner", your account password. username If auth_type == "resource_owner", your username. certificate If ‘auth_type == "client_credentials", a certificate to authenticate with. This is a more secure alternative to using an app secret. auth_type The OAuth authentication method to use, one of "client_credentials", "autho- rization_code", "device_code" or "resource_owner". If NULL, this is chosen based on the presence of the username and password arguments. version The Azure Active Directory version to use for authenticating. host Your ARM host. Defaults to https://management.azure.com/. Change this if you are using a government or private cloud. aad_host Azure Active Directory host for authentication. Defaults to https://login.microsoftonline.com/. Change this if you are using a government or private cloud. scopes The Azure Service Management scopes (permissions) to obtain for this login. Only for version=2. config_file Optionally, a JSON file containing any of the arguments listed above. Argu- ments supplied in this file take priority over those supplied on the command line. You can also use the output from the Azure CLI az ad sp create-for-rbac command. token Optionally, an OAuth 2.0 token, of class AzureToken. This allows you to reuse the authentication details for an existing session. If supplied, the other argu- ments above to create_azure_login will be ignored. graph_host The Microsoft Graph endpoint. See ’Microsoft Graph integration’ below. ... For create_azure_login, other arguments passed to get_azure_token. selection For get_azure_login, if you have multiple logins for a given tenant, which one to use. This can be a number, or the input MD5 hash of the token used for the login. If not supplied, get_azure_login will print a menu and ask you to choose a login. refresh For get_azure_login, whether to refresh the authentication token on loading the client. confirm For delete_azure_login, whether to ask for confirmation before deleting. Details create_azure_login creates a login client to authenticate with Azure Resource Manager (ARM), using the supplied arguments. The Azure Active Directory (AAD) authentication token is obtained using get_azure_token, which automatically caches and reuses tokens for subsequent sessions. Note that credentials are only cached if you allowed AzureRMR to create a data directory at package startup. create_azure_login() without any arguments is roughly equivalent to the Azure CLI command az login. get_azure_login returns a login client by retrieving previously saved credentials. It searches for saved credentials according to the supplied tenant; if multiple logins are found, it will prompt for you to choose one. One difference between create_azure_login and get_azure_login is the former will delete any previously saved credentials that match the arguments it was given. You can use this to force AzureRMR to remove obsolete tokens that may be lying around. Value For get_azure_login and create_azure_login, an object of class az_rm, representing the ARM login client. For list_azure_logins, a (possibly nested) list of such objects. If the AzureRMR data directory for saving credentials does not exist, get_azure_login will throw an error. 
Microsoft Graph integration

If the AzureGraph package is installed and the graph_host argument is not NULL, create_azure_login will also create a login client for Microsoft Graph with the same credentials. This is to facilitate working with registered apps and service principals, eg when managing roles and permissions. Some Azure services also require creating service principals as part of creating a resource (eg Azure Kubernetes Service), and keeping the Graph credentials consistent with ARM helps ensure nothing breaks.

Linux DSVM note

If you are using a Linux Data Science Virtual Machine in Azure, you may have problems running create_azure_login() (ie, without any arguments). In this case, try create_azure_login(auth_type="device_code").

See Also
az_rm, AzureAuth::get_azure_token for more details on authentication methods, AzureGraph::create_graph_login for the corresponding function to create a Microsoft Graph login client
Azure Resource Manager overview, REST API reference
Authentication in Azure Active Directory
Azure CLI documentation

Examples
## Not run:
# without any arguments, this will create a client using your AAD credentials
az <- create_azure_login()

# retrieve the login in subsequent sessions
az <- get_azure_login()

# this will create a Resource Manager client for the AAD tenant 'myaadtenant.onmicrosoft.com',
# using the client_credentials method
az <- create_azure_login("myaadtenant", app="app_id", password="password")

# you can also login using credentials in a json file
az <- create_azure_login(config_file="~/creds.json")

## End(Not run)

init_pool    Manage parallel Azure connections

Description
Manage parallel Azure connections.

Usage
init_pool(size = 10, restart = FALSE, ...)
delete_pool()
pool_exists()
pool_size()
pool_export(...)
pool_lapply(...)
pool_sapply(...)
pool_map(...)
pool_call(...)
pool_evalq(...)

Arguments
• size: For init_pool, the number of background R processes to create. Limit this if you are low on memory.
• restart: For init_pool, whether to terminate an already running pool first.
• ...: Other arguments passed on to functions in the parallel package. See below.

Details
AzureRMR provides the ability to parallelise communicating with Azure by utilizing a pool of R processes in the background. This often leads to major speedups in scenarios like downloading large numbers of small files, or working with a cluster of virtual machines. This functionality is intended for use by packages that extend AzureRMR (and was originally implemented as part of the AzureStor package), but can also be called directly by the end-user.

A small API consisting of the following functions is currently provided for managing the pool. They pass their arguments down to the corresponding functions in the parallel package.
• init_pool initialises the pool, creating it if necessary. The pool is created by calling parallel::makeCluster with the pool size and any additional arguments. If init_pool is called and the current pool is smaller than size, it is resized.
• delete_pool shuts down the background processes and deletes the pool.
• pool_exists checks for the existence of the pool, returning a TRUE/FALSE value.
• pool_size returns the size of the pool, or zero if the pool does not exist.
• pool_export exports variables to the pool nodes. It calls parallel::clusterExport with the given arguments.
• pool_lapply, pool_sapply and pool_map carry out work on the pool. They call parallel::parLapply, parallel::parSapply and parallel::clusterMap with the given arguments.
• pool_call and pool_evalq execute code on the pool nodes. They call parallel::clusterCall and parallel::clusterEvalQ with the given arguments. The pool is persistent for the session or until terminated by delete_pool. You should initialise the pool by calling init_pool before running any code on it. This restores the original state of the pool nodes by removing any objects that may be in memory, and resetting the working directory to the master working directory. See Also parallel::makeCluster, parallel::clusterCall, parallel::parLapply Examples ## Not run: init_pool() pool_size() x <- 42 pool_export("x") pool_sapply(1:5, function(i) i + x) init_pool() # error: x no longer exists on nodes try(pool_sapply(1:5, function(i) i + x)) delete_pool() ## End(Not run) is_azure_login Informational functions Description These functions return whether the object is of the corresponding AzureRMR class. Usage is_azure_login(object) is_subscription(object) is_resource_group(object) is_resource(object) is_template(object) is_role_definition(object) is_role_assignment(object) Arguments object An R object. Value A boolean. is_url Miscellaneous utility functions Description Miscellaneous utility functions Usage is_url(x, https_only = FALSE) get_paged_list(lst, token, next_link_name = "nextLink", value_name = "value") Arguments x For is_url, An R object. https_only For is_url, whether to allow only HTTPS URLs. lst A named list of objects. token For get_paged_list, an Azure OAuth token, of class AzureToken. next_link_name, value_name For get_paged_list, the names of the next link and value components in the lst argument. The default values are correct for Resource Manager. Details get_paged_list reconstructs a complete list of objects from a paged response. Many Resource Manager list operations will return paged output, that is, the response contains a subset of all items, along with a URL to query to retrieve the next subset. get_paged_list retrieves each subset and returns all items in a single list. Value For get_paged_list, a list. For is_url, whether the object appears to be a URL (is character of length 1, and starts with the string "http"). Optionally, restricts the check to HTTPS URLs only. lock Management locks Description Create, retrieve and delete locks. These are methods for the az_subscription, az_resource_group and az_resource classes. Usage create_lock(name, level = c("cannotdelete", "readonly"), notes = "") get_lock(name) delete_lock(name) list_locks() Arguments • name: The name of a lock. • level: The level of protection that the lock provides. • notes: An optional character string to describe the lock. Details Management locks in Resource Manager can be assigned at the subscription, resource group, or resource level. They serve to protect a resource against unwanted changes. A lock can either protect against deletion (level="cannotdelete") or against modification of any kind (level="readonly"). Locks assigned at parent scopes also apply to lower ones, recursively. The most restrictive lock in the inheritance takes precedence. To modify/delete a resource, any existing locks for its subscription and resource group must also be removed. Note if you logged in via a custom service principal, it must have "Owner" or "User Access Admin- istrator" access to manage locks. Value The create_lock and get_lock methods return a lock object, which is itself an Azure resource. The list_locks method returns a list of such objects. The delete_lock method returns NULL on a successful delete. 
See Also
rbac
Overview of management locks

Examples
## Not run:
az <- get_azure_login("myaadtenant")
sub <- az$get_subscription("subscription_id")
rg <- sub$get_resource_group("rgname")
res <- rg$get_resource(type="provider_type", name="resname")

sub$create_lock("lock1", "cannotdelete")
rg$create_lock("lock2", "cannotdelete")

# error! resource is locked
res$delete()

# subscription level
rg$delete_lock("lock2")
sub$delete_lock("lock1")

# now it works
res$delete()

## End(Not run)

rbac    Role-based access control (RBAC)

Description
Basic methods for RBAC: manage role assignments and retrieve role definitions. These are methods for the az_subscription, az_resource_group and az_resource classes.

Usage
add_role_assignment(principal, role, scope = NULL)
get_role_assignment(id)
remove_role_assignment(id, confirm = TRUE)
list_role_assignments(filter = "atScope()", as_data_frame = TRUE)
get_role_definition(id)
list_role_definitions(filter = NULL, as_data_frame = TRUE)

Arguments
• principal: For add_role_assignment, the principal for which to assign a role. This can be a GUID, or an object of class az_user, az_app or az_service_principal (from the AzureGraph package).
• role: For add_role_assignment, the role to assign the principal. This can be a GUID, a string giving the role name (eg "Contributor"), or an object of class az_role_definition.
• scope: For add_role_assignment, an optional scope for the assignment.
• id: A role ID. For get_role_assignment and remove_role_assignment, this is a role assignment GUID. For get_role_definition, this can be a role definition GUID or a role name.
• confirm: For remove_role_assignment, whether to ask for confirmation before removing the role assignment.
• filter: For list_role_assignments and list_role_definitions, an optional filter condition to limit the returned roles.
• as_data_frame: For list_role_assignments and list_role_definitions, whether to return a data frame or a list of objects. See 'Value' below.

Details
AzureRMR implements a subset of the full RBAC functionality within Azure Active Directory. You can retrieve role definitions and add and remove role assignments, at the subscription, resource group and resource levels.

Value
The add_role_assignment and get_role_assignment methods return an object of class az_role_assignment. This is a simple R6 class, with one method: remove to remove the assignment. The list_role_assignments method returns a list of az_role_assignment objects if the as_data_frame argument is FALSE. If this is TRUE, it instead returns a data frame containing the most broadly useful fields for each assigned role: the role assignment ID, the principal, and the role name.

The get_role_definition method returns an object of class az_role_definition. This is a plain-old-data R6 class (no methods), which can be used as input for creating role assignments (see the examples below). The list_role_definitions method returns a list of az_role_definition if the as_data_frame argument is FALSE.
If this is TRUE, it instead returns a data frame containing the most broadly useful fields for each role definition: the definition ID and role name.

See Also
az_rm, az_role_definition, az_role_assignment
Overview of role-based access control

Examples
## Not run:
az <- get_azure_login("myaadtenant")
sub <- az$get_subscription("subscription_id")
rg <- sub$get_resource_group("rgname")
res <- rg$get_resource(type="provider_type", name="resname")

sub$list_role_definitions()
sub$list_role_assignments()
sub$get_role_definition("Contributor")

# get an app using the AzureGraph package
app <- get_graph_login("myaadtenant")$get_app("app_id")

# subscription level
asn1 <- sub$add_role_assignment(app, "Reader")

# resource group level
asn2 <- rg$add_role_assignment(app, "Contributor")

# resource level
asn3 <- res$add_role_assignment(app, "Owner")

res$remove_role_assignment(asn3$id)
rg$remove_role_assignment(asn2$id)
sub$remove_role_assignment(asn1$id)

## End(Not run)
Crate wast === A crate for low-level parsing of the WebAssembly text formats: WAT and WAST. This crate is intended to be a low-level detail of the `wat` crate, providing a low-level parsing API for parsing WebAssembly text format structures. The API provided by this crate is very similar to `syn` and provides the ability to write customized parsers which may be an extension to the core WebAssembly text format. For more documentation see the `parser` module. High-level Overview --- This crate provides a few major pieces of functionality * `lexer` - this is a raw lexer for the wasm text format. This is not customizable, but if you’d like to iterate over raw tokens this is the module for you. You likely won’t use this much. * `parser` - this is the workhorse of this crate. The `parser` module provides the `Parse` trait primarily and utilities around working with a `Parser` to parse streams of tokens. * `Module` - this contains an Abstract Syntax Tree (AST) of the WebAssembly Text format (WAT) as well as the unofficial WAST format. This also has a `Module::encode` method to emit a module in its binary form. Stability and WebAssembly Features --- This crate provides support for many in-progress WebAssembly features such as reference types, multi-value, etc. Be sure to check out the documentation of the `wast` crate for policy information on crate stability vs WebAssembly Features. The tl;dr; version is that this crate will issue semver-non-breaking releases which will break the parsing of the text format. This crate, unlike `wast`, is expected to have numerous Rust public API changes, all of which will be accompanied with a semver-breaking release. Compile-time Cargo features --- This crate has a `wasm-module` feature which is turned on by default which includes all necessary support to parse full WebAssembly modules. If you don’t need this (for example you’re parsing your own s-expression format) then this feature can be disabled. Modules --- * annotationCommon annotations used to parse WebAssembly text files. * componentTypes and support for parsing the component model text format. * coreTypes and support for parsing the core wasm text format. * kwCommon keyword used to parse WebAssembly text files. * lexerDefinition of a lexer for the WebAssembly text format. * parserTraits for parsing the WebAssembly Text format * tokenCommon tokens that implement the `Parse` trait which are otherwise not associated specifically with the wasm text format per se (useful in other contexts too perhaps). Macros --- * annotationA macro, like `custom_keyword`, to create a type which can be used to parse/peek annotation directives. * custom_keywordA macro to create a custom keyword parser. * custom_reservedA macro for defining custom reserved symbols. Structs --- * ErrorA convenience error type to tie together all the detailed errors produced by this crate. * WastA parsed representation of a `*.wast` file. * WastInvoke * WastThread Enums --- * QuoteWat * WastArg * WastDirectiveThe different kinds of directives found in a `*.wast` file. * WastExecute * WastRet * WatA `*.wat` file parser, or a parser for one parenthesized module. Crate wast === A crate for low-level parsing of the WebAssembly text formats: WAT and WAST. This crate is intended to be a low-level detail of the `wat` crate, providing a low-level parsing API for parsing WebAssembly text format structures. 
Module wast::parser
===
Traits for parsing the WebAssembly Text format.

This module contains the traits, abstractions, and utilities needed to define custom parsers for WebAssembly text format items. This module exposes a recursive descent parsing strategy and centers around the `Parse` trait for defining new fragments of WebAssembly text syntax.
The top-level `parse` function can be used to fully parse AST fragments:

```
use wast::Wat;
use wast::parser::{self, ParseBuffer};

let wat = "(module (func))";
let buf = ParseBuffer::new(wat)?;
let module = parser::parse::<Wat>(&buf)?;
```

and you can also define your own new syntax with the `Parse` trait:

```
use wast::kw;
use wast::core::{Import, Func};
use wast::parser::{Parser, Parse, Result};

// Fields of a WebAssembly module which only allow imports and functions,
// and all imports must come before all the functions
struct OnlyImportsAndFunctions<'a> {
    imports: Vec<Import<'a>>,
    functions: Vec<Func<'a>>,
}

impl<'a> Parse<'a> for OnlyImportsAndFunctions<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        // While the second token is `import` (the first is `(`, so we care
        // about the second) we parse an `ast::ModuleImport` inside of
        // parentheses. The `parens` function here ensures that what we
        // parse inside of it is surrounded by `(` and `)`.
        let mut imports = Vec::new();
        while parser.peek2::<kw::import>()? {
            let import = parser.parens(|p| p.parse())?;
            imports.push(import);
        }

        // Afterwards we assume everything else is a function. Note that
        // `parse` here is a generic function and type inference figures out
        // that we're parsing functions here and imports above.
        let mut functions = Vec::new();
        while !parser.is_empty() {
            let func = parser.parens(|p| p.parse())?;
            functions.push(func);
        }

        Ok(OnlyImportsAndFunctions { imports, functions })
    }
}
```

This module is heavily inspired by `syn` so you can likely also draw inspiration from the excellent examples in the `syn` crate.

Structs
---
* `Cursor` - An immutable cursor into a list of tokens.
* `Lookahead1` - A helpful structure to perform a lookahead of one token to determine what to parse.
* `ParseBuffer` - A low-level buffer of tokens which represents a completely lexed file.
* `Parser` - An in-progress parser for the tokens of a WebAssembly text file.

Traits
---
* `Parse` - A trait for parsing a fragment of syntax in a recursive descent fashion.
* `Peek` - A trait for types which can be used to "peek" to see if they're the next token in an input stream of `Parser`.

Functions
---
* `parse` - A top-level convenience parsing function that parses a `T` from `buf` and requires that all tokens in `buf` are consumed.

Type Aliases
---
* `Result` - A convenience type definition for `Result` where the error is hardwired to `Error`.

Module wast::lexer
===
Definition of a lexer for the WebAssembly text format.

This module provides a `Lexer` type which is an iterator over the raw tokens of a WebAssembly text file. A `Lexer` accounts for every single byte in a WebAssembly text file, returning tokens even for comments and whitespace. Typically you'll ignore comments and whitespace, however.

If you'd like to iterate over the tokens in a file you can do so via:

```
use wast::lexer::Lexer;

let wat = "(module (func $foo))";
for token in Lexer::new(wat).iter(0) {
    println!("{:?}", token?);
}
```

Note that you'll typically not use this module but will rather use `ParseBuffer` instead.

Structs
---
* `Integer` - A fully parsed integer from a source string with a payload ready to parse into an integral type.
* `IntegerKind` - Description of the parsed integer from the source.
* `Lexer` - A structure used to lex the s-expression syntax of WAT files.
* `Token` - A single token parsed from a `Lexer`.

Enums
---
* `Float` - Possible parsed float values.
* `FloatKind` - Description of a parsed float from the source.
* `LexError` - Errors that can be generated while lexing.
* `SignToken` - A sign token for an integer.
* `TokenKind` - Classification of what was parsed from the input stream.

Trait wast::parser::Parse
===
```
pub trait Parse<'a>: Sized {
    // Required method
    fn parse(parser: Parser<'a>) -> Result<Self>;
}
```

A trait for parsing a fragment of syntax in a recursive descent fashion.

The `Parse` trait is the main abstraction you'll be working with when defining custom parsers or custom syntax for your WebAssembly text format (or when using the official format items). Almost all items in the `core` module implement the `Parse` trait, and you'll commonly use this with:

* The top-level `parse` function to parse an entire input.
* The intermediate `Parser::parse` function to parse an item out of an input stream and then parse remaining items.

Implementations of `Parse` take a `Parser` as input and will mutate the parser as they parse syntax. Once a token is consumed it cannot be "un-consumed". Utilities such as `Parser::peek` and `Parser::lookahead1` can be used to determine what to parse next.

### When to parse `(` and `)`?

Conventionally types are not responsible for parsing their own `(` and `)` tokens which surround the type. For example WebAssembly imports look like:

```
(import "foo" "bar" (func (type 0)))
```

but the `Import` type parser looks like:

```
impl<'a> Parse<'a> for Import<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        parser.parse::<kw::import>()?;
        // ...
    }
}
```

It is assumed here that the `(` and `)` tokens which surround an `import` statement in the WebAssembly text format are parsed by the parent item parsing `Import`.

Note that this is just a convention, so it's not necessarily required for all types. It's recommended that your types stick to this convention where possible to avoid nested calls to `Parser::parens` or accidentally trying to parse too many parentheses.

Examples
---
Let's say you want to define your own WebAssembly text format which only contains imports and functions. You also require all imports to be listed before all functions. An example `Parse` implementation might look like:

```
use wast::core::{Import, Func};
use wast::kw;
use wast::parser::{Parser, Parse, Result};

// Fields of a WebAssembly module which only allow imports and functions,
// and all imports must come before all the functions
struct OnlyImportsAndFunctions<'a> {
    imports: Vec<Import<'a>>,
    functions: Vec<Func<'a>>,
}

impl<'a> Parse<'a> for OnlyImportsAndFunctions<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        // While the second token is `import` (the first is `(`, so we care
        // about the second) we parse an `ast::ModuleImport` inside of
        // parentheses. The `parens` function here ensures that what we
        // parse inside of it is surrounded by `(` and `)`.
        let mut imports = Vec::new();
        while parser.peek2::<kw::import>()? {
            let import = parser.parens(|p| p.parse())?;
            imports.push(import);
        }

        // Afterwards we assume everything else is a function. Note that
        // `parse` here is a generic function and type inference figures out
        // that we're parsing functions here and imports above.
        let mut functions = Vec::new();
        while !parser.is_empty() {
            let func = parser.parens(|p| p.parse())?;
            functions.push(func);
        }

        Ok(OnlyImportsAndFunctions { imports, functions })
    }
}
```

Required Methods
---
#### fn parse(parser: Parser<'a>) -> Result<Self>

Attempts to parse `Self` from `parser`, returning an error if it could not be parsed.

This method will mutate the state of `parser` after attempting to parse an instance of `Self`.
If an error happens then it is likely fatal and there is no guarantee of how many tokens have been consumed from `parser`.

As recommended in the documentation of `Parse`, implementations of this function should not start out by parsing `(` and `)` tokens, but rather parents calling recursive parsers should parse the `(` and `)` tokens for their child item that's being parsed.

##### Errors

This function will return an error if `Self` could not be parsed. Note that creating an `Error` is not exactly a cheap operation, so an `Error` is typically fatal and propagated all the way back to the top parse call site.

Implementations on Foreign Types
---
Each of the following implementations provides the required `fn parse(parser: Parser<'a>) -> Result<Self>`:

* `impl<'a, T> Parse<'a> for Box<T> where T: Parse<'a>`
* `impl<'a> Parse<'a>` for the integer types `i8`, `i16`, `i32`, `i64`, `u8`, `u16`, `u32`, `u64`, and for the span-carrying pairs `(i8, Span)`, `(i16, Span)`, `(i32, Span)`, `(i64, Span)`, `(u8, Span)`, `(u16, Span)`, `(u32, Span)`, `(u64, Span)`
* `impl Parse<'_> for String` and `impl<'a> Parse<'a> for &'a str`
* `impl<'a> Parse<'a> for &'a [u8]`
* `impl<'a> Parse<'a> for Option<NameAnnotation<'a>>` and `impl<'a, T: Peek + Parse<'a>> Parse<'a> for Option<T>`
* `impl<'a> Parse<'a>` for `Vec<ComponentExport<'a>>`, `Vec<CanonOpt<'a>>`, `Vec<CoreInstanceExport<'a>>`, `Vec<ComponentTypeDecl<'a>>`, `Vec<CoreInstantiationArg<'a>>`, `Vec<InstantiationArg<'a>>`, `Vec<ModuleTypeDecl<'a>>`, `Vec<InstanceTypeDecl<'a>>`

Implementors
---
`Parse` is implemented (as `impl<'a> Parse<'a> for ...`) by essentially every AST type in this crate, including: `CanonOpt<'a>`, `ComponentExportAliasKind`, `ComponentExportKind<'a>`, `ComponentExternName<'a>`,
`ComponentField<'a>`, `ComponentOuterAliasKind`, `ComponentTypeDecl<'a>`, `ComponentValType<'a>`, `CoreFuncKind<'a>`, `CoreInstanceKind<'a>`, `CoreInstantiationArgKind<'a>`, `CoreTypeDef<'a>`, `FuncKind<'a>`, `InstanceKind<'a>`, `InstanceTypeDecl<'a>`, `InstantiationArgKind<'a>`, `ModuleTypeDecl<'a>`, `PrimitiveValType`, `Refinement<'a>`, `TypeBounds<'a>`, `wast::component::TypeDef<'a>`, `WastVal<'a>`, `wast::core::Custom<'a>`, `CustomPlace`, `CustomPlaceAnchor`, `DataVal<'a>`, `ExportKind`, `HeapType<'a>`, `Instruction<'a>`, `MemoryType`, `ModuleField<'a>`, `StorageType<'a>`, `TagType<'a>`, `wast::core::TypeDef<'a>`, `V128Const`, `V128Pattern`, `ValType<'a>`, `WastArgCore<'a>`, `WastRetCore<'a>`, `QuoteWat<'a>`, `WastArg<'a>`, `WastDirective<'a>`, `WastExecute<'a>`, `WastRet<'a>`, `Wat<'a>`, `Index<'a>`, the annotation types `custom`, `dylink_0`, `name`, `producers`, `Alias<'a>`, `CanonLift<'a>`, `CanonLower<'a>`, `CanonResourceDrop<'a>`, `CanonResourceNew<'a>`, `CanonResourceRep<'a>`, `CanonicalFunc<'a>`, `Component<'a>`, `ComponentExport<'a>`, `ComponentExportType<'a>`, `ComponentFunctionParam<'a>`, `ComponentFunctionResult<'a>`, `ComponentFunctionType<'a>`, `ComponentImport<'a>`, `ComponentType<'a>`, `ComponentValTypeUse<'a>`, `CoreFunc<'a>`, `CoreInstance<'a>`, `CoreInstanceExport<'a>`, `CoreInstantiationArg<'a>`, `CoreModule<'a>`, `CoreType<'a>`, `wast::component::Custom<'a>`, `Enum<'a>`, `Flags<'a>`, `wast::component::Func<'a>`, `InlineComponentValType<'a>`, `wast::component::InlineExport<'a>`, `wast::component::InlineImport<'a>`, `Instance<'a>`, `InstanceType<'a>`, `InstantiationArg<'a>`, `wast::component::ItemSig<'a>`, `ItemSigNoName<'a>`, `List<'a>`, `ModuleType<'a>`, `NestedComponent<'a>`, `OptionType<'a>`, `Record<'a>`, `RecordField<'a>`, `ResourceType<'a>`, `ResultType<'a>`, `Start<'a>`, `Tuple<'a>`, `Variant<'a>`, `VariantCase<'a>`, `ArrayCopy<'a>`, `ArrayFill<'a>`, `ArrayInit<'a>`, `ArrayNewData<'a>`, `ArrayNewElem<'a>`, `ArrayNewFixed<'a>`, `ArrayType<'a>`, `BlockType<'a>`, `BrOnCast<'a>`, `BrOnCastFail<'a>`, `BrTableIndices<'a>`, `CallIndirect<'a>`, `Data<'a>`, `Dylink0<'a>`, `Elem<'a>`, `Export<'a>`, `ExportType<'a>`, `Expression<'a>`, `wast::core::Func<'a>`, `FuncBindType<'a>`, `FunctionType<'a>`, `FunctionTypeNoNames<'a>`, `Global<'a>`, `GlobalType<'a>`, `I8x16Shuffle`, `Import<'a>`, `wast::core::InlineExport<'a>`, `wast::core::InlineImport<'a>`, `wast::core::ItemSig<'a>`, `LaneArg`, `LetType<'a>`, `Limits64`, `Limits`, `LocalParser<'a>`, `Memory<'a>`, `MemoryArg<'a>`, `MemoryCopy<'a>`, `MemoryInit<'a>`, `Module<'a>`, `Producers<'a>`, `RawCustomSection<'a>`, `Rec<'a>`, `RefCast<'a>`, `RefTest<'a>`, `RefType<'a>`, `SelectTypes<'a>`, `StructAccess<'a>`, `StructType<'a>`, `Table<'a>`, `TableArg<'a>`, `TableCopy<'a>`, `TableInit<'a>`, `TableType<'a>`, `Tag<'a>`, `TryTable<'a>`, `Type<'a>`, and every keyword struct in the `kw` module: `after`, `alias`, `any`, `anyfunc`, `anyref`, `arg`, `array`, `arrayref`, `assert_exception`, `assert_exhaustion`, `assert_invalid`, `assert_malformed`, `assert_return`, `assert_trap`, `assert_unlinkable`, `before`, `binary`, `block`, `bool_`, `borrow`, `canon`, `case`, `catch`, `catch_all`, `catch_all_ref`, `catch_ref`, `char`, `code`, `component`, `core`, `data`, `declare`, `delegate`, `do`, `dtor`, `elem`, `else`, `end`, `enum_`, `eq`, `eqref`, `error`, `exn`, `exnref`, `export`, `export_info`, `extern`, `externref`, `f32`, `f32x4`, `f64`, `f64x2`, `false_`, `field`, `final`, `first`, `flags`, `float32`, `float64`, `func`, `funcref`, `get`, `global`, `i8`, `i8x16`, `i16`, `i16x8`, `i31`, `i31ref`, `i32`, `i32x4`, `i64`, `i64x2`, `if`, `import`, `import_info`, `instance`, `instantiate`, `interface`, `invoke`, `item`, `language`, `last`, `lift`, `list`, `local`, `loop`, `lower`, `mem_info`, `memory`, `module`, `modulecode`, `mut`, `nan_arithmetic`, `nan_canonical`, `needed`, `noextern`, `nofunc`, `none`, `null`, `nullexternref`, `nullfuncref`, `nullref`, `offset`, `option`, `outer`, `own`, `param`, `parent`, `passive`, `post_return`, `processed_by`, `quote`, `realloc`, `rec`, `record`, `ref`, `ref_func`, `ref_null`, `refines`, `register`, `rep`, `resource`, `resource_drop`, `resource_new`, `resource_rep`, `result`, `s8`, `s16`, `s32`, `s64`, `sdk`,
`shared`, `start`, `string`, `string_latin1_utf16`, `string_utf8`, `string_utf16`, `struct`, `structref`, `sub`, `table`, `tag`, `then`, `thread`, `true_`, `try`, `tuple`, `type`, `u8`, `u16`, `u32`, `u64`, `v128`, `value`, `variant`, `wait`, `with`; plus `Wast<'a>`, `WastInvoke<'a>`, `WastThread<'a>`, `Float32`, `Float64`, `Id<'a>`, `NameAnnotation<'a>`, and the generic implementations `IndexOrCoreRef<'a, K> where K: Parse<'a> + Default`, `IndexOrRef<'a, K> where K: Parse<'a> + Default`, `CoreItemRef<'a, K: Parse<'a>>`, `wast::component::ItemRef<'a, K: Parse<'a>>`, `wast::token::ItemRef<'a, K: Parse<'a>>`, `NanPattern<T> where T: Parse<'a>`, `ComponentTypeUse<'a, T: Parse<'a>>`, `CoreTypeUse<'a, T: Parse<'a>>`, `TypeUse<'a, T: Peek + Parse<'a>>`, and `InlineExportAlias<'a, const CORE: bool>`.

Struct wast::parser::Parser
===
```
pub struct Parser<'a> { /* private fields */ }
```

An in-progress parser for the tokens of a WebAssembly text file.

A `Parser` is the argument to the `Parse` trait and is how the input stream is interacted with to parse new items. Cloning or copying a `Parser` refers to the same underlying stream of tokens to parse; you cannot clone a `Parser` to get two independent streams. For more information about a `Parser` see its methods.

Implementations
---
### impl<'a> Parser<'a>

#### pub fn is_empty(self) -> bool

Returns whether there are no more tokens to parse from this `Parser`.

This indicates that either we've reached the end of the input, or we're a sub-`Parser` inside of a parenthesized expression and we've hit the `)` token.

Note that if `false` is returned there *may* be more comments. Comments and whitespace are not considered for whether this parser is empty.

#### pub fn parse<T: Parse<'a>>(self) -> Result<T>

Parses a `T` from this `Parser`.

This method has a trivial definition (it simply calls `T::parse`) but is here for syntactic purposes. This is what you'll call 99% of the time in a `Parse` implementation in order to parse sub-items.

Typically you always want to use `?` with the result of this method; you should not handle errors yourself to decide what else to parse. To handle branches in parsing, use `Parser::peek`.

##### Examples

A good example of using `parse` is to see how the `TableType` type is parsed in this crate.
A `TableType` is defined in the official specification as `tabletype` and is defined as:

```
tabletype ::= lim:limits et:reftype
```

so to parse a `TableType` we recursively need to parse a `Limits` and a `RefType`:

```
struct TableType<'a> {
    limits: Limits,
    elem: RefType<'a>,
}

impl<'a> Parse<'a> for TableType<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        // parse the `lim` then `et` in sequence
        Ok(TableType {
            limits: parser.parse()?,
            elem: parser.parse()?,
        })
    }
}
```

#### pub fn peek<T: Peek>(self) -> Result<bool>

Performs a cheap test to see whether the current token in this stream is `T`.

This method can be used to efficiently determine what next to parse. The `Peek` trait is defined for types which can be used to test if they're the next item in the input stream.

Nothing is actually parsed in this method, nor does this mutate the state of this `Parser`. Instead, this simply performs a check.

This method is frequently combined with the `Parser::lookahead1` method to automatically produce nice error messages if some tokens aren't found.

##### Examples

For an example of using the `peek` method let's take a look at parsing the `Limits` type. This is defined in the official spec as:

```
limits ::= n:u32
         | n:u32 m:u32
```

which means that it's either one `u32` token or two, so we need to know whether to consume two tokens or one:

```
struct Limits {
    min: u32,
    max: Option<u32>,
}

impl<'a> Parse<'a> for Limits {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        // Always parse the first number...
        let min = parser.parse()?;

        // ... and then test if there's a second number before parsing
        let max = if parser.peek::<u32>()? {
            Some(parser.parse()?)
        } else {
            None
        };

        Ok(Limits { min, max })
    }
}
```

#### pub fn peek2<T: Peek>(self) -> Result<bool>

Same as the `Parser::peek` method, except it checks the next token, not the current token.

#### pub fn peek3<T: Peek>(self) -> Result<bool>

Same as the `Parser::peek2` method, except it checks one token further ahead.

#### pub fn lookahead1(self) -> Lookahead1<'a>

A helper structure to perform a sequence of `peek` operations and, if they all fail, produce a nice error message.

This method purely exists for conveniently producing error messages and provides no functionality that `Parser::peek` doesn't already give. The `Lookahead1` structure has one main method, `Lookahead1::peek`, which is the same method as `Parser::peek`. The difference is that the `Lookahead1::error` method needs no arguments.

##### Examples

Let's look at the parsing of `Index`. This type is either a `u32` or an `Id` and is used in name resolution primarily. The official grammar for an index is:

```
idx ::= x:u32
      | v:id
```

Which is to say that an index is either a `u32` or an `Id`. When parsing an `Index` we can do:

```
enum Index<'a> {
    Num(u32),
    Id(Id<'a>),
}

impl<'a> Parse<'a> for Index<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        let mut l = parser.lookahead1();
        if l.peek::<Id>()? {
            Ok(Index::Id(parser.parse()?))
        } else if l.peek::<u32>()? {
            Ok(Index::Num(parser.parse()?))
        } else {
            // produces error message of `expected identifier or u32`
            Err(l.error())
        }
    }
}
```

#### pub fn parens<T>(self, f: impl FnOnce(Parser<'a>) -> Result<T>) -> Result<T>

Parse an item surrounded by parentheses.

WebAssembly's text format is all based on s-expressions, so naturally you're going to want to parse a lot of parenthesized things! As noted in the documentation of `Parse` you typically don't parse your own surrounding `(` and `)` tokens, but the parser above you parsed them for you.
This is the method that the parser above you uses.

This method will parse a `(` token, and then call `f` on a sub-parser which, when finished, asserts that a `)` token is the next token. This requires that `f` consumes all tokens leading up to the paired `)`.

Usage will often simply be `parser.parens(|p| p.parse())?` to automatically parse a type within parentheses, but you can, as always, go crazy and do whatever you'd like too.

##### Examples

A good example of this is to see how a `Module` is parsed. This isn't the exact definition, but it's close enough!

```
struct Module<'a> {
    fields: Vec<ModuleField<'a>>,
}

impl<'a> Parse<'a> for Module<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        // Modules start out with a `module` keyword
        parser.parse::<kw::module>()?;

        // And then everything else is `(field ...)`, so while we've got
        // items left we continuously parse parenthesized items.
        let mut fields = Vec::new();
        while !parser.is_empty() {
            fields.push(parser.parens(|p| p.parse())?);
        }
        Ok(Module { fields })
    }
}
```

#### pub fn parens_depth(&self) -> usize

Return the depth of nested parens we've parsed so far.

This is a low-level method that is only useful for implementing recursion limits in custom parsers.

#### pub fn step<F, T>(self, f: F) -> Result<T> where F: FnOnce(Cursor<'a>) -> Result<(T, Cursor<'a>)>

A low-level parsing method you probably won't use.

This is used to implement parsing of the most primitive types in the `core` module. You probably don't want to use this, but probably want to use something like `Parser::parse` or `Parser::parens`.

#### pub fn error(self, msg: impl Display) -> Error

Creates an error whose line/column information is pointing at the current token.

This is used to produce human-readable error messages which point to the right location in the input stream, and the `msg` here is arbitrary text used to associate with the error and indicate why it was generated.

#### pub fn error_at(self, span: Span, msg: impl Display) -> Error

Creates an error whose line/column information is pointing at the given span.

#### pub fn cur_span(&self) -> Span

Returns the span of the current token.

#### pub fn prev_span(&self) -> Span

Returns the span of the previous token.

#### pub fn register_annotation<'b>(self, annotation: &'b str) -> impl Drop + 'b where 'a: 'b

Registers a new known annotation with this parser to allow parsing annotations with this name.

WebAssembly annotations are a proposal for the text format which allows decorating the text format with custom structured information. By default all annotations are ignored when parsing, but the whole purpose of them is to sometimes parse them!

To support parsing text annotations this method is used to allow annotations and their tokens to *not* be skipped. Once an annotation is registered with this method, then while the return value has not been dropped (e.g. the scope of where this function is called) annotations with the name `annotation` will be parsed as part of the token stream and not implicitly skipped.

##### Skipping annotations

The behavior of skipping unknown/unregistered annotations can be somewhat subtle and surprising, so if you're interested in parsing annotations it's important to understand where this method must be called.

Generally when parsing tokens you'll be bottoming out in various `Cursor` methods. These are all documented as advancing the stream as much as possible to the next token, skipping "irrelevant stuff" like comments, whitespace, etc.
The `Cursor` methods will also skip unknown annotations. This means that if you parse *any* token, it will skip over any number of annotations that are unknown at all times.

To parse an annotation you must, before parsing any token of the annotation, register the annotation via this method. This includes the beginning `(` token, which is otherwise skipped if the annotation isn't registered. Typically parsers parse the *contents* of an s-expression, so this means that the outer parser of an s-expression must register the custom annotation name, rather than the inner parser.

##### Return

This function returns an RAII guard which, when dropped, will unregister the `annotation` given. Parsing `annotation` is only supported while the returned value is still alive, and once dropped the parser will go back to skipping annotations with the name `annotation`.

##### Example

Let's see an example of how the `@name` annotation is parsed for modules to get an idea of how this works:

```
struct Module<'a> {
    name: Option<NameAnnotation<'a>>,
}

impl<'a> Parse<'a> for Module<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        // Modules start out with a `module` keyword
        parser.parse::<kw::module>()?;

        // Next may be `(@name "foo")`. Typically this annotation would be
        // skipped, but we don't want it skipped, so we register it.
        // Note that the parse implementation of
        // `Option<NameAnnotation>` is the one that consumes the
        // parentheses here.
        let _r = parser.register_annotation("name");
        let name = parser.parse()?;

        // ... and normally you'd otherwise parse module fields here ...

        Ok(Module { name })
    }
}
```

Another example is how we parse the `@custom` annotation. Note that this is parsed as part of `ModuleField`, so note how the annotation is registered *before* we parse the parentheses of the annotation.

```
struct Module<'a> {
    fields: Vec<ModuleField<'a>>,
}

impl<'a> Parse<'a> for Module<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        // Modules start out with a `module` keyword
        parser.parse::<kw::module>()?;

        // Register the `@custom` annotation *first* before we start
        // parsing fields, because each field is contained in
        // parentheses and to parse the parentheses of an annotation we
        // have to know not to skip it.
        let _r = parser.register_annotation("custom");

        let mut fields = Vec::new();
        while !parser.is_empty() {
            fields.push(parser.parens(|p| p.parse())?);
        }
        Ok(Module { fields })
    }
}

enum ModuleField<'a> {
    Custom(Custom<'a>),
    // ...
}

impl<'a> Parse<'a> for ModuleField<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        // Note that because we have previously registered the `@custom`
        // annotation with the parser we know that `peek` methods like
        // this, working on the annotation token, are able to return
        // `true`.
        if parser.peek::<annotation::custom>()? {
            return Ok(ModuleField::Custom(parser.parse()?));
        }

        // .. typically we'd parse other module fields here...

        Err(parser.error("unknown module field"))
    }
}
```

Trait Implementations
---
* `impl<'a> Clone for Parser<'a>` - `fn clone(&self) -> Parser<'a>` returns a copy of the value; `fn clone_from(&mut self, source: &Self)` performs copy-assignment from `source`.

Auto Trait Implementations
---
`Parser<'a>` is `Unpin` but is `!RefUnwindSafe`, `!Send`, `!Sync`, and `!UnwindSafe`.

Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`) apply.
Struct wast::core::Module
===
```
pub struct Module<'a> {
    pub span: Span,
    pub id: Option<Id<'a>>,
    pub name: Option<NameAnnotation<'a>>,
    pub kind: ModuleKind<'a>,
}
```

A parsed WebAssembly core module.

Fields
---
* `span: Span` - Where this `module` was defined.
* `id: Option<Id<'a>>` - An optional identifier this module is known by.
* `name: Option<NameAnnotation<'a>>` - An optional `@name` annotation for this module.
* `kind: ModuleKind<'a>` - What kind of module this was parsed as.

Implementations
---
### impl<'a> Module<'a>

#### pub fn resolve(&mut self) -> Result<Names<'a>, Error>

Performs a name resolution pass on this `Module`, resolving all symbolic names to indices.

The WAT format contains a number of shorthands to make it easier to write, such as inline exports, inline imports, inline type definitions, etc. Additionally it allows using symbolic names such as `$foo` instead of using indices. This function will postprocess the AST to remove all of this syntactic sugar, preparing it for binary emission. This is where expansion and name resolution happens.

This function will mutate the AST of this `Module` and replace all `Index` arguments with `Index::Num`. This will also expand inline exports/imports listed on fields and handle various other shorthands of the text format.

If successful, the AST will have been modified to be ready for binary encoding. A `Names` structure is also returned so if you'd like to do your own name lookups on the result you can do so as well.

##### Errors

If an error happens during resolution, such as a name resolution error or items found in the wrong order, then an error is returned.

#### pub fn encode(&mut self) -> Result<Vec<u8>, Error>

Encodes this `Module` to its binary form.

This function will take the textual representation in `Module` and perform all steps necessary to convert it to a binary WebAssembly module, suitable for writing to a `*.wasm` file. This function may internally modify the `Module`, for example:

* Name resolution is performed to ensure that `Index::Id` isn't present anywhere in the AST.
* Inline shorthands such as imports/exports/types are all expanded to be dedicated fields of the module.
* Module fields may be shuffled around to preserve index ordering from expansions.

After all of this expansion has happened the module will be converted to its binary form and returned as a `Vec<u8>`. This is then suitable to hand off to other wasm runtimes and such.
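As a rough sketch of how `encode` is typically reached: parse a `Wat` from text, then encode the module inside it. The match below assumes the `Wat` enum has `Module` and `Component` variants, which is true of recent `wast` releases but worth verifying against your version.

```
use wast::Wat;
use wast::parser::{self, ParseBuffer};

fn wat_to_wasm(source: &str) -> Result<Vec<u8>, wast::Error> {
    let buf = ParseBuffer::new(source)?;
    // Assumption: `Wat` exposes `Module`/`Component` variants; adjust
    // the match to the variants in your `wast` version.
    match parser::parse::<Wat>(&buf)? {
        // `encode` resolves names and expands inline shorthands
        // internally, which is why the binding must be mutable.
        Wat::Module(mut module) => module.encode(),
        Wat::Component(_) => unimplemented!("component encoding not shown here"),
    }
}
```

Calling `resolve` separately is only needed if you want the returned `Names` for your own lookups; per the docs above, `encode` performs resolution and expansion on its own.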
##### Errors

This function can return an error for name resolution errors and other expansion-related errors.

Trait Implementations
---
* `impl<'a> Debug for Module<'a>` - `fn fmt(&self, f: &mut Formatter<'_>) -> Result` formats the value using the given formatter.

Auto Trait Implementations
---
`Module<'a>` is `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`, and the standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`) apply.

Module wast::annotation
===
Common annotations used to parse WebAssembly text files.

Structs
---
* `custom`
* `dylink_0`
* `name`
* `producers`

Module wast::component
===
Types and support for parsing the component model text format.

Structs
---
* `Alias` - An alias to a component item.
* `CanonLift` - Information relating to lifting a core function.
* `CanonLower` - Information relating to lowering a component function.
* `CanonResourceDrop` - Information relating to the `resource.drop` intrinsic.
* `CanonResourceNew` - Information relating to the `resource.new` intrinsic.
* `CanonResourceRep` - Information relating to the `resource.rep` intrinsic.
* `CanonicalFunc` - A WebAssembly canonical function to be inserted into a component.
* `Component` - A parsed WebAssembly component module.
* `ComponentExport` - An entry in a WebAssembly component's export section.
* `ComponentExportType` - The type of an exported item from a component or instance type.
* `ComponentFunctionParam` - A parameter of a `ComponentFunctionType`.
* `ComponentFunctionResult` - A result of a `ComponentFunctionType`.
* `ComponentFunctionType` - A component function type with parameters and result.
* `ComponentImport` - An `import` statement and entry in a WebAssembly component.
* `ComponentType` - A type definition for a component type.
* `ComponentValTypeUse` - A value type declaration used for values in import signatures.
* `CoreFunc` - A declared core function.
* `CoreInstance` - A core instance defined by instantiation or exporting core items.
* `CoreInstanceExport` - An exported item as part of a core instance.
* `CoreInstantiationArg` - An argument to instantiate a core module.
* `CoreItemRef` - Parses core item references.
* `CoreModule` - A core WebAssembly module to be created as part of a component.
* `CoreType` - A core type declaration.
* `Custom` - A custom section within a component.
* `Enum` - An enum type.
* `Flags` - A flags type.
* `Func` - A declared component function.
* `IndexOrCoreRef` - Convenience structure to parse `$f` or `(item $f)`.
* `IndexOrRef` - Convenience structure to parse `$f` or `(item $f)`.
* `InlineComponentValType` - An inline-only component value type.
* `InlineExport` - A listing of inline `(export "foo" <url>)` statements on a WebAssembly component item in its textual format.
* `InlineExportAlias` - An inline alias for component exported items.
* `InlineImport` - A listing of an inline `(import "foo")` statement.
* `Instance` - A component instance defined by instantiation or exporting items.
* `InstanceType` - A type definition for an instance type.
* `InstantiationArg` - An argument to instantiate a component.
* `ItemRef` - Parses component item references.
* `ItemSig` - An item signature for imported items.
* `ItemSigNoName` - An item signature for imported items.
* `List` - A list type.
* `ModuleType` - A type definition for a core module.
* `NestedComponent` - A nested WebAssembly component.
* `OptionType` - An optional type.
* `Record` - A record defined type.
* `RecordField` - A record type field.
* `ResourceType` - A type definition for a resource type.
* `ResultType` - A result type.
* `Start` - A function to call at instantiation time.
* `Tuple` - A tuple type.
* `Type` - A type declaration in a component.
* `Variant` - A variant defined type.
* `VariantCase` - A case of a variant type.

Enums
---
* `AliasTarget` - The target of a component alias.
* `CanonOpt` - Canonical ABI options.
* `CanonicalFuncKind` - Possible ways to define a canonical function in the text format.
* `ComponentDefinedType`
* `ComponentExportAliasKind` - Represents the kind of instance export alias.
* `ComponentExportKind` - The kind of exported item.
* `ComponentExternName` - The different ways an import can be named.
* `ComponentField` - A listing of all possible fields that can make up a WebAssembly component.
* `ComponentKind` - The different kinds of ways to define a component.
* `ComponentOuterAliasKind` - Represents the kind of outer alias.
* `ComponentTypeDecl` - A declaration of a component type.
* `ComponentTypeUse` - A reference to a type defined in this component.
* `ComponentValType` - A component value type.
* `CoreFuncKind` - Represents the kind of core functions.
* `CoreInstanceKind` - The kinds of core instances in the text format.
* `CoreInstantiationArgKind` - The kind of core instantiation argument.
* `CoreModuleKind` - Possible ways to define a core module in the text format.
* `CoreTypeDef` - Represents a core type definition.
* `CoreTypeUse` - A reference to a core type defined in this component.
* `FuncKind` - Represents the kind of component functions.
* `InstanceKind` - The kinds of instances in the text format.
* `InstanceTypeDecl` - A declaration of an instance type.
* `InstantiationArgKind` - The kind of instantiation argument.
* `ItemSigKind` - The kind of signatures for imported items.
* `ModuleTypeDecl` - The declarations of a `ModuleType`.
* `NestedComponentKind` - The different kinds of ways to define a nested component.
* `PrimitiveValType` - A primitive value type.
* `Refinement` - A refinement for a variant case.
* `TypeBounds` - Represents the bounds applied to types being imported.
* `TypeDef` - A definition of a component type.
* `WastVal` - Expression that can be used inside of `invoke` expressions for core wasm functions.

Module wast::core
===
Types and support for parsing the core wasm text format.

Structs
---
* `ArrayCopy` - Extra data associated with the `array.copy` instruction.
* `ArrayFill` - Extra data associated with the `array.fill` instruction.
* `ArrayInit` - Extra data associated with the `array.init_[data/elem]` instruction.
* `ArrayNewData` - Extra data associated with the `array.new_data` instruction.
* `ArrayNewElem` - Extra data associated with the `array.new_elem` instruction.
* `ArrayNewFixed` - Extra data associated with the `array.new_fixed` instruction.
* `ArrayType` - An array type with fields.
* `BlockType` - Extra information associated with block-related instructions.
* `BrOnCast` - Extra data associated with the `br_on_cast` instruction.
* `BrOnCastFail` - Extra data associated with the `br_on_cast_fail` instruction.
* `BrTableIndices` - Extra information associated with the `br_table` instruction.
* `CallIndirect` - Extra data associated with the `call_indirect` instruction.
* `Data` - A `data` directive in a WebAssembly module.
* `Dylink0` - A `dylink.0` custom section.
* `Elem` - An `elem` segment in a WebAssembly module.
* `Export` - An entry in a WebAssembly module's export section.
* `ExportType` - The type of an exported item from a module or instance.
* `Expression` - An expression, or a list of instructions, in the WebAssembly text format.
* `Func` - A WebAssembly function to be inserted into a module.
* `FuncBindType` - Extra information associated with the `func.bind` instruction.
* `FunctionType` - A function type with parameters and results.
* `FunctionTypeNoNames` - A function type with parameters and results.
* `Global` - A WebAssembly global in a module.
* `GlobalType` - Type for a `global` in a wasm module.
* `I8x16Shuffle` - Lanes being shuffled in the `i8x16.shuffle` instruction.
* `Import` - An `import` statement and entry in a WebAssembly module.
* `InlineExport` - A listing of inline `(export "foo")` statements on a WebAssembly item in its textual format.
* `InlineImport` - A listing of an inline `(import "foo")` statement.
* `ItemSig`
* `LaneArg` - Payload for lane-related instructions. Unsigned with no `+` prefix.
* `LetType` - Extra information associated with the `let` instruction.
* `Limits` - Min/max limits used for tables/memories.
* `Limits64` - Min/max limits used for 64-bit memories.
* `LoadOrStoreLane` - Extra data associated with the `loadN_lane` and `storeN_lane` instructions.
* `Local` - A local for a `func` or `let` instruction.
* `LocalParser` - Parser for the `local` instruction.
* `MemArg` - Payload for memory-related instructions indicating offset/alignment of memory accesses.
* `Memory` - A defined WebAssembly memory instance inside of a module.
* `MemoryArg` - Extra data associated with unary memory instructions.
* `MemoryCopy` - Extra data associated with the `memory.copy` instruction.
* `MemoryInit` - Extra data associated with the `memory.init` instruction.
* `Module` - A parsed WebAssembly core module.
* `Names` - Representation of the results of name resolution for a module.
* `Producers` - A producers custom section.
* `RawCustomSection` - A wasm custom section within a module.
* `Rec` - A recursion group declaration in a module.
* `RefCast` - Extra data associated with the `ref.cast` instruction.
* `RefTest` - Extra data associated with the `ref.test` instruction.
* `RefType` - A reference type in a wasm module.
* `SelectTypes` - Payload of the `select` instructions.
* `StructAccess` - Extra data associated with the `struct.get/set` instructions.
* `StructField` - A field of a struct type.
* `StructType` - A struct type with fields.
* `Table` - A WebAssembly `table` directive in a module.
* `TableArg` - Extra data associated with unary table instructions.
* `TableCopy` - Extra data associated with the `table.copy` instruction.
* `TableInit` - Extra data associated with the `table.init` instruction.
* `TableType` - Configuration for a table of a wasm module.
* `Tag` - A WebAssembly tag directive, part of the exception handling proposal.
* `TryTable`
* `TryTableCatch`
* `TryTableCatchAll`
* `Type` - A type declaration in a module.
* `TypeUse` - A reference to a type defined in this module.

Enums
---
* `Custom` - A custom section within a wasm module.
* `CustomPlace` - Possible locations to place a custom section within a module.
* `CustomPlaceAnchor` - Known sections that custom sections can be placed relative to.
* `DataKind` - Different kinds of data segments, either passive or active.
* `DataVal` - Different ways the value of a data segment can be defined.
* `Dylink0Subsection` - Possible subsections of the `dylink.0` custom section.
* `ElemKind` - Different ways to define an element segment in a module.
* `ElemPayload` - Different ways to define the element segment payload in a module.
* `ExportKind` - Different kinds of elements that can be exported from a WebAssembly module, contained in an `Export`.
* `FuncKind` - Possible ways to define a function in the text format.
* `GlobalKind` - Different kinds of globals that can be defined in a module.
* `HeapType` - A heap type for a reference type.
* `Instruction` - A listing of all WebAssembly instructions that can be in a module that this crate currently parses.
* `ItemKind`
* `MemoryKind` - Different syntactical ways a memory can be defined in a module.
* `MemoryType` - Configuration for a memory of a wasm module.
* `ModuleField` - A listing of all possible fields that can make up a WebAssembly module.
* `ModuleKind` - The different kinds of ways to define a module.
* `NanPattern` - Either a NaN pattern (`nan:canonical`, `nan:arithmetic`) or a value of type `T`.
* `StorageType` - The types of values that may be used in a struct or array.
* `TableKind` - Different ways to textually define a table.
* `TagKind` - Different kinds of tags that can be defined in a module.
* `TagType` - Listing of various types of tags that can be defined in a wasm module.
* `TryTableCatchKind`
* `TypeDef` - A definition of a type.
* `V128Const` - Different ways to specify a `v128.const` instruction.
* `V128Pattern` - A version of `V128Const` that allows `NanPattern`s.
* `ValType` - The value types for a wasm module.
* `WastArgCore` - Expression that can be used inside of `invoke` expressions for core wasm functions.
* `WastRetCore` - Expressions that can be used inside of `assert_return` to validate the return value of a core wasm function.

Module wast::kw
===
Common keywords used to parse WebAssembly text files.

Structs
---
`after`, `alias`, `any`, `anyfunc`, `anyref`, `arg`, `array`, `arrayref`, `assert_exception`, `assert_exhaustion`, `assert_invalid`, `assert_malformed`, `assert_return`, `assert_trap`, `assert_unlinkable`, `before`, `binary`, `block`, `bool_`, `borrow`, `canon`, `case`, `catch`, `catch_all`, `catch_all_ref`, `catch_ref`, `char`, `code`, `component`, `core`, `data`, `declare`, `delegate`, `do`, `dtor`, `elem`, `else`, `end`, `enum_`, `eq`, `eqref`, `error`, `exn`, `exnref`, `export`, `export_info`, `extern`, `externref`, `f32`, `f32x4`, `f64`, `f64x2`, `false_`, `field`, `final`, `first`, `flags`, `float32`, `float64`, `func`, `funcref`, `get`, `global`, `i8`, `i8x16`, `i16`, `i16x8`, `i31`, `i31ref`, `i32`, `i32x4`, `i64`, `i64x2`, `if`, `import`, `import_info`, `instance`, `instantiate`, `interface`, `invoke`, `item`, `language`, `last`, `lift`, `list`, `local`, `loop`, `lower`, `mem_info`, `memory`, `module`, `modulecode`, `mut`, `nan_arithmetic`, `nan_canonical`, `needed`, `noextern`, `nofunc`, `none`, `null`, `nullexternref`, `nullfuncref`, `nullref`, `offset`, `option`, `outer`, `own`, `param`, `parent`, `passive`, `post_return`, `processed_by`, `quote`, `realloc`, `rec`, `record`, `ref`, `ref_func`, `ref_null`, `refines`, `register`, `rep`, `resource`, `resource_drop`, `resource_new`, `resource_rep`, `result`, `s8`, `s16`, `s32`, `s64`, `sdk`, `shared`, `start`, `string`, `string_latin1_utf16`, `string_utf8`, `string_utf16`, `struct`, `structref`, `sub`, `table`, `tag`, `then`, `thread`, `true_`, `try`, `tuple`, `type`, `u8`, `u16`, `u32`, `u64`, `v128`, `value`, `variant`, `wait`, `with`

Module wast::token
===
Common tokens that implement the `Parse` trait which are otherwise not associated specifically with the wasm text format per se (useful in other contexts too perhaps).
Structs
---
* `Float32` - A parsed floating-point type.
* `Float64` - A parsed floating-point type.
* `Id` - An identifier in a WebAssembly module, prefixed by `$` in the textual format.
* `ItemRef` - Parses `(func $foo)`.
* `LParen` - A convenience type to use with `Parser::peek` to see if the next token is an s-expression.
* `NameAnnotation` - An `@name` annotation in source, currently of the form `@name "foo"`.
* `Span` - A position in the original source stream, used to render errors.

Enums
---
* `Index` - A reference to another item in a wasm module.

Macro wast::annotation
===
```
macro_rules! annotation {
    ($name:ident) => { ... };
    ($name:ident = $annotation:expr) => { ... };
}
```

A macro, like `custom_keyword`, to create a type which can be used to parse/peek annotation directives.

Note that when you're parsing custom annotations it can be somewhat tricky due to the nature that most of them are skipped. You'll want to be sure to consult the documentation of `Parser::register_annotation` when using this macro.

Examples
---
To see an example of how to use this macro, let's invent our own syntax for the producers section which looks like:

```
(@producer "wat" "1.0.2")
```

Here, for simplicity, we'll assume everything is a `processed-by` directive, but you could get much more fancy with this as well.

```
// First we define the custom annotation keyword we're using, and by
// convention we define it in an `annotation` module.
mod annotation {
    wast::annotation!(producer);
}

struct Producer<'a> {
    name: &'a str,
    version: &'a str,
}

impl<'a> Parse<'a> for Producer<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        // Remember that parsers conventionally parse the *interior* of an
        // s-expression, so we parse our `@producer` annotation and then we
        // parse the payload of our annotation.
        parser.parse::<annotation::producer>()?;
        Ok(Producer {
            name: parser.parse()?,
            version: parser.parse()?,
        })
    }
}
```

Note though that this is only half of the parser for annotations. The other half is calling the `register_annotation` method at the right time to ensure the parser doesn't automatically skip our `@producer` directive. Note that we *can't* call it inside the `Parse for Producer` definition because that's too late and the annotation would already have been skipped. Instead we'll need to call it from a higher-level parser before the parentheses have been parsed, like so:

```
struct Module<'a> {
    fields: Vec<ModuleField<'a>>,
}

impl<'a> Parse<'a> for Module<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        // .. parse module header here ...

        // register our custom `@producer` annotation before we start
        // parsing the parentheses of each field
        let _r = parser.register_annotation("producer");

        let mut fields = Vec::new();
        while !parser.is_empty() {
            fields.push(parser.parens(|p| p.parse())?);
        }
        Ok(Module { fields })
    }
}

enum ModuleField<'a> {
    Producer(Producer<'a>),
    // ...
}

impl<'a> Parse<'a> for ModuleField<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        // and here `peek` works and our delegated parsing works because the
        // annotation has been registered.
        if parser.peek::<annotation::producer>()? {
            return Ok(ModuleField::Producer(parser.parse()?));
        }

        // .. typically we'd parse other module fields here...

        Err(parser.error("unknown module field"))
    }
}
```

Macro wast::custom_keyword
===
```
macro_rules! custom_keyword {
    ($name:ident) => { ... };
    ($name:ident = $kw:expr) => { ... };
}
```

A macro to create a custom keyword parser.
This macro is invoked in one of two forms:

```
// keyword derived from the Rust identifier:
wast::custom_keyword!(foo);

// or an explicitly specified string representation of the keyword:
wast::custom_keyword!(my_keyword = "the-wasm-keyword");
```

This can then be used to parse custom keywords for custom items, such as:

```
use wast::parser::{Parser, Result, Parse};

struct InlineModule<'a> {
    inline_text: &'a str,
}

mod kw {
    wast::custom_keyword!(inline);
}

// Parse an inline string module of the form:
//
// (inline "(module (func))")
impl<'a> Parse<'a> for InlineModule<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        parser.parse::<kw::inline>()?;
        Ok(InlineModule {
            inline_text: parser.parse()?,
        })
    }
}
```

Note that the keyword name can only start with a lower-case letter, i.e. 'a'..'z'.

Macro wast::custom_reserved
===

```
macro_rules! custom_reserved {
    ($name:ident) => { ... };
    ($name:ident = $rsv:expr) => { ... };
}
```

A macro for defining custom reserved symbols.

This is like `custom_keyword!` but for reserved symbols (`Token::Reserved`) instead of keywords (`Token::Keyword`).

```
use wast::parser::{Parser, Result, Parse};

// Define a custom reserved symbol, the "spaceship" operator: `<=>`.
wast::custom_reserved!(spaceship = "<=>");

/// A "three-way comparison" like `(<=> a b)` that returns -1 if `a` is less
/// than `b`, 0 if they're equal, and 1 if `a` is greater than `b`.
struct ThreeWayComparison<'a> {
    lhs: wast::core::Expression<'a>,
    rhs: wast::core::Expression<'a>,
}

impl<'a> Parse<'a> for ThreeWayComparison<'a> {
    fn parse(parser: Parser<'a>) -> Result<Self> {
        parser.parse::<spaceship>()?;
        let lhs = parser.parse()?;
        let rhs = parser.parse()?;
        Ok(ThreeWayComparison { lhs, rhs })
    }
}
```

Struct wast::Error
===

```
pub struct Error { /* private fields */ }
```

A convenience error type to tie together all the detailed errors produced by this crate.

This type can be created from a `LexError`. This also contains storage for file/text information so a nice error can be rendered along the same lines of rustc's own error messages (minus the color).

This type is typically suitable for use in public APIs for consumers of this crate.

Implementations
---

### impl Error

#### pub fn new(span: Span, message: String) -> Error

Creates a new error with the given `message` which is targeted at the given `span`.

Note that you'll want to ensure that `set_text` or `set_path` is called on the resulting error to improve the rendering of the error message.

#### pub fn span(&self) -> Span

Returns the `Span` for this error.

#### pub fn set_text(&mut self, contents: &str)

To provide a more useful error this function can be used to extract relevant textual information about this error into the error itself. The `contents` here should be the full text of the original file being parsed, and this will extract a sub-slice as necessary to render in the `Display` implementation later on.

#### pub fn set_path(&mut self, path: &Path)

To provide a more useful error this function can be used to set the file name that this error is associated with. The `path` here will be stored in this error and later rendered in the `Display` implementation.

#### pub fn lex_error(&self) -> Option<&LexError>

Returns the underlying `LexError`, if any, that describes this error.

#### pub fn message(&self) -> String

Returns the underlying message, if any, that describes this error.
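A hedged sketch of how these pieces fit together in practice (assuming the `wast` crate as a dependency; the malformed input is just for illustration): parse a broken module, attach the source text to the resulting `Error`, and let `Display` render a rustc-style diagnostic.

```
use wast::parser::{self, ParseBuffer};
use wast::Wat;

fn main() {
    let source = "(module (func $broken";
    match ParseBuffer::new(source) {
        Ok(buf) => {
            if let Err(mut err) = parser::parse::<Wat>(&buf) {
                // Attach the original text so `Display` can render a snippet
                // pointing at the offending span.
                err.set_text(source);
                eprintln!("{}", err);
            }
        }
        // Lexing itself can also fail; that error renders the same way.
        Err(err) => eprintln!("{}", err),
    }
}
```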
Trait Implementations
---

### impl Debug for Error

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Display for Error

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl std::error::Error for Error

#### fn source(&self) -> Option<&(dyn Error + 'static)>

The lower-level source of this error, if any.

Auto Trait Implementations
---

### impl RefUnwindSafe for Error
### impl Send for Error
### impl Sync for Error
### impl Unpin for Error
### impl UnwindSafe for Error

Struct wast::Wast
===

```
pub struct Wast<'a> {
    pub directives: Vec<WastDirective<'a>>,
}
```

A parsed representation of a `*.wast` file.

WAST files are not officially specified but are used in the official test suite to write official spec tests for wasm. This type represents a parsed `*.wast` file which parses a list of directives in a file.

Fields
---

`directives: Vec<WastDirective<'a>>`

Trait Implementations
---

### impl<'a> Debug for Wast<'a>

Auto Trait Implementations
---

### impl<'a> RefUnwindSafe for Wast<'a>
### impl<'a> Send for Wast<'a>
### impl<'a> Sync for Wast<'a>
### impl<'a> Unpin for Wast<'a>
### impl<'a> UnwindSafe for Wast<'a>
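As a hedged illustration of how `Wast` is typically consumed (the helper name `count_assert_returns` is ours, not the crate's): parse a script and walk its `directives` field.

```
use wast::parser::{self, ParseBuffer};
use wast::{Wast, WastDirective};

// Count the `assert_return` directives in a textual `*.wast` test script.
fn count_assert_returns(script: &str) -> wast::parser::Result<usize> {
    let buf = ParseBuffer::new(script)?;
    let wast = parser::parse::<Wast>(&buf)?;
    Ok(wast
        .directives
        .iter()
        .filter(|directive| matches!(directive, WastDirective::AssertReturn { .. }))
        .count())
}
```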
Struct wast::WastInvoke
===

```
pub struct WastInvoke<'a> {
    pub span: Span,
    pub module: Option<Id<'a>>,
    pub name: &'a str,
    pub args: Vec<WastArg<'a>>,
}
```

Fields
---

`span: Span`
`module: Option<Id<'a>>`
`name: &'a str`
`args: Vec<WastArg<'a>>`

Trait Implementations
---

### impl<'a> Debug for WastInvoke<'a>

Auto Trait Implementations
---

### impl<'a> RefUnwindSafe for WastInvoke<'a>
### impl<'a> Send for WastInvoke<'a>
### impl<'a> Sync for WastInvoke<'a>
### impl<'a> Unpin for WastInvoke<'a>
### impl<'a> UnwindSafe for WastInvoke<'a>

Struct wast::WastThread
===

```
pub struct WastThread<'a> {
    pub span: Span,
    pub name: Id<'a>,
    pub shared_module: Option<Id<'a>>,
    pub directives: Vec<WastDirective<'a>>,
}
```

Fields
---

`span: Span`
`name: Id<'a>`
`shared_module: Option<Id<'a>>`
`directives: Vec<WastDirective<'a>>`

Trait Implementations
---

### impl<'a> Debug for WastThread<'a>

Auto Trait Implementations
---

### impl<'a> RefUnwindSafe for WastThread<'a>
### impl<'a> Send for WastThread<'a>
### impl<'a> Sync for WastThread<'a>
### impl<'a> Unpin for WastThread<'a>
### impl<'a> UnwindSafe for WastThread<'a>
Enum wast::QuoteWat
===

```
pub enum QuoteWat<'a> {
    Wat(Wat<'a>),
    QuoteModule(Span, Vec<(Span, &'a [u8])>),
    QuoteComponent(Span, Vec<(Span, &'a [u8])>),
}
```

Variants
---

### Wat(Wat<'a>)
### QuoteModule(Span, Vec<(Span, &'a [u8])>)
### QuoteComponent(Span, Vec<(Span, &'a [u8])>)

Implementations
---

### impl QuoteWat<'_>

#### pub fn encode(&mut self) -> Result<Vec<u8>, Error>

Encodes this module to bytes, either by encoding the module directly or parsing the contents and then encoding it.

Trait Implementations
---

### impl<'a> Debug for QuoteWat<'a>

Auto Trait Implementations
---

### impl<'a> RefUnwindSafe for QuoteWat<'a>
### impl<'a> Send for QuoteWat<'a>
### impl<'a> Sync for QuoteWat<'a>
### impl<'a> Unpin for QuoteWat<'a>
### impl<'a> UnwindSafe for QuoteWat<'a>

Enum wast::WastArg
===

```
pub enum WastArg<'a> {
    Core(WastArgCore<'a>),
    Component(WastVal<'a>),
}
```

Variants
---

### Core(WastArgCore<'a>)
### Component(WastVal<'a>)

Trait Implementations
---

### impl<'a> Debug for WastArg<'a>

Auto Trait Implementations
---

### impl<'a> RefUnwindSafe for WastArg<'a>
### impl<'a> Send for WastArg<'a>
### impl<'a> Sync for WastArg<'a>
### impl<'a> Unpin for WastArg<'a>
### impl<'a> UnwindSafe for WastArg<'a>
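Tying `QuoteWat::encode` to the directive walk shown earlier, a hedged sketch (the helper `encode_modules` is illustrative) that lowers every module in a script to its binary encoding:

```
use wast::parser::{self, ParseBuffer};
use wast::{Wast, WastDirective};

// Collect the binary encoding of each module directive in a script.
fn encode_modules(script: &str) -> wast::parser::Result<Vec<Vec<u8>>> {
    let buf = ParseBuffer::new(script)?;
    let wast = parser::parse::<Wast>(&buf)?;
    let mut encoded = Vec::new();
    for directive in wast.directives {
        if let WastDirective::Wat(mut module) = directive {
            // `encode` parses quoted text first if necessary, then emits bytes.
            encoded.push(module.encode()?);
        }
    }
    Ok(encoded)
}
```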
Enum wast::WastDirective
===

```
pub enum WastDirective<'a> {
    Wat(QuoteWat<'a>),
    AssertMalformed {
        span: Span,
        module: QuoteWat<'a>,
        message: &'a str,
    },
    AssertInvalid {
        span: Span,
        module: QuoteWat<'a>,
        message: &'a str,
    },
    Register {
        span: Span,
        name: &'a str,
        module: Option<Id<'a>>,
    },
    Invoke(WastInvoke<'a>),
    AssertTrap {
        span: Span,
        exec: WastExecute<'a>,
        message: &'a str,
    },
    AssertReturn {
        span: Span,
        exec: WastExecute<'a>,
        results: Vec<WastRet<'a>>,
    },
    AssertExhaustion {
        span: Span,
        call: WastInvoke<'a>,
        message: &'a str,
    },
    AssertUnlinkable {
        span: Span,
        module: Wat<'a>,
        message: &'a str,
    },
    AssertException {
        span: Span,
        exec: WastExecute<'a>,
    },
    Thread(WastThread<'a>),
    Wait {
        span: Span,
        thread: Id<'a>,
    },
}
```

The different kinds of directives found in a `*.wast` file.

It's not entirely clear to me what all of these are per se; mostly they're only interesting to test harnesses.

Variants
---

### Wat(QuoteWat<'a>)

### AssertMalformed

#### Fields

`span: Span`
`module: QuoteWat<'a>`
`message: &'a str`

### AssertInvalid

#### Fields

`span: Span`
`module: QuoteWat<'a>`
`message: &'a str`

### Register

#### Fields

`span: Span`
`name: &'a str`
`module: Option<Id<'a>>`

### Invoke(WastInvoke<'a>)

### AssertTrap

#### Fields

`span: Span`
`exec: WastExecute<'a>`
`message: &'a str`

### AssertReturn

#### Fields

`span: Span`
`exec: WastExecute<'a>`
`results: Vec<WastRet<'a>>`

### AssertExhaustion

#### Fields

`span: Span`
`call: WastInvoke<'a>`
`message: &'a str`

### AssertUnlinkable

#### Fields

`span: Span`
`module: Wat<'a>`
`message: &'a str`

### AssertException

#### Fields

`span: Span`
`exec: WastExecute<'a>`

### Thread(WastThread<'a>)

### Wait

#### Fields

`span: Span`
`thread: Id<'a>`

Implementations
---

### impl WastDirective<'_>

#### pub fn span(&self) -> Span

Returns the location in the source at which this directive was defined.

Trait Implementations
---

### impl<'a> Debug for WastDirective<'a>

Auto Trait Implementations
---

### impl<'a> RefUnwindSafe for WastDirective<'a>
### impl<'a> Send for WastDirective<'a>
### impl<'a> Sync for WastDirective<'a>
### impl<'a> Unpin for WastDirective<'a>
### impl<'a> UnwindSafe for WastDirective<'a>
Enum wast::WastExecute
===

```
pub enum WastExecute<'a> {
    Invoke(WastInvoke<'a>),
    Wat(Wat<'a>),
    Get {
        module: Option<Id<'a>>,
        global: &'a str,
    },
}
```

Variants
---

### Invoke(WastInvoke<'a>)
### Wat(Wat<'a>)
### Get

#### Fields

`module: Option<Id<'a>>`
`global: &'a str`

Trait Implementations
---

### impl<'a> Debug for WastExecute<'a>

Auto Trait Implementations
---

### impl<'a> RefUnwindSafe for WastExecute<'a>
### impl<'a> Send for WastExecute<'a>
### impl<'a> Sync for WastExecute<'a>
### impl<'a> Unpin for WastExecute<'a>
### impl<'a> UnwindSafe for WastExecute<'a>

Enum wast::WastRet
===

```
pub enum WastRet<'a> {
    Core(WastRetCore<'a>),
    Component(WastVal<'a>),
}
```

Variants
---

### Core(WastRetCore<'a>)
### Component(WastVal<'a>)

Trait Implementations
---

### impl<'a> Debug for WastRet<'a>

Auto Trait Implementations
---

### impl<'a> RefUnwindSafe for WastRet<'a>
### impl<'a> Send for WastRet<'a>
### impl<'a> Sync for WastRet<'a>
### impl<'a> Unpin for WastRet<'a>
### impl<'a> UnwindSafe for WastRet<'a>
Enum wast::Wat
===

```
pub enum Wat<'a> {
    Module(Module<'a>),
    Component(Component<'a>),
}
```

A `*.wat` file parser, or a parser for one parenthesized module. This is the top-level type which you'll frequently parse when working with this crate. A `*.wat` file is either one `module` s-expression or a sequence of s-expressions that are module fields.

Variants
---

### Module(Module<'a>)
### Component(Component<'a>)

Implementations
---

### impl Wat<'_>

#### pub fn encode(&mut self) -> Result<Vec<u8>, Error>

Encodes this `Wat` to binary form. This calls either `Module::encode` or `Component::encode`.

Trait Implementations
---

### impl<'a> Debug for Wat<'a>

Auto Trait Implementations
---

### impl<'a> RefUnwindSafe for Wat<'a>
### impl<'a> Send for Wat<'a>
### impl<'a> Sync for Wat<'a>
### impl<'a> Unpin for Wat<'a>
### impl<'a> UnwindSafe for Wat<'a>
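The classic use of `Wat` is the text-to-binary round trip; a minimal hedged sketch (the helper name `wat_to_wasm` is ours):

```
use wast::parser::{self, ParseBuffer};
use wast::Wat;

// WebAssembly text in, binary bytes out.
fn wat_to_wasm(source: &str) -> wast::parser::Result<Vec<u8>> {
    let buf = ParseBuffer::new(source)?;
    let mut wat = parser::parse::<Wat>(&buf)?;
    wat.encode()
}
```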
siblings
hex
Erlang
Siblings    [Kantox ❤ OSS](https://kantox.com/)  [Test](https://github.com/am-kantox/siblings/actions?query=workflow%3ATest)  [Dialyzer](https://github.com/am-kantox/siblings/actions?query=workflow%3ADialyzer)
===

**The partitioned dynamic supervision of FSM-backed workers.**

[usage](#usage)
Usage
---

[`Siblings`](Siblings.html) is a library to painlessly manage many uniform processes, each having a lifecycle *and* an *FSM* behind it.

Consider a service that polls market rates from several different sources, allowing semi-automated trading based on predefined conditions. For each bid, a process is to be spawned, polling the external resources. Once the bid condition is met, the bid gets traded.

With [`Siblings`](Siblings.html), one should implement the [`Siblings.Worker.perform/3`](Siblings.Worker.html#c:perform/3) callback, doing the actual work and returning either `:ok` if no action should be taken, or `{:transition, event, payload}` to initiate the *FSM* transition.

When the *FSM* gets exhausted (reaches its end state), both the performing process *and* the *FSM* itself shut down.

*FSM* instances leverage the [`Finitomata`](https://hexdocs.pm/finitomata) library, which should be used alone if no recurrent `perform` is to be accomplished *or* if the instances are not uniform.

Typical code for a [`Siblings.Worker`](Siblings.Worker.html) implementation would be as follows

```
defmodule MyApp.Worker do
  @fsm """
  born --> |reject| rejected
  born --> |bid| traded
  """

  use Finitomata, @fsm

  def on_transition(:born, :reject, _nil, payload) do
    perform_rejection(payload)
    {:ok, :rejected, payload}
  end

  def on_transition(:born, :bid, _nil, payload) do
    perform_bidding(payload)
    {:ok, :traded, payload}
  end

  @behaviour Siblings.Worker

  @impl Siblings.Worker
  def perform(state, id, payload)

  def perform(:born, id, payload) do
    cond do
      time_to_bid?() -> {:transition, :bid, nil}
      stale?() -> {:transition, :reject, nil}
      true -> :noop
    end
  end

  def perform(:rejected, id, _payload) do
    Logger.info("The bid #{id} was rejected")
    {:transition, :__end__, nil}
  end

  def perform(:traded, id, _payload) do
    Logger.info("The bid #{id} was traded")
    {:transition, :__end__, nil}
  end
end
```

Now it can be used as shown below

```
{:ok, pid} = Siblings.start_link()
Siblings.start_child(MyApp.Worker, "Bid1", %{}, interval: 1_000)
Siblings.start_child(MyApp.Worker, "Bid2", %{}, interval: 1_000)
...
```

The above would spawn two processes, checking the conditions once per second (`interval`), and manipulating the underlying *FSM* to walk through the bids' lifecycles.

A worker's interval might be reset with `GenServer.cast(pid, {:reset, interval})`, and a message might be sent to it with `GenServer.call(pid, {:message, message})`. For the latter to work, the optional callback `on_call/2` must be implemented.

*Sidenote:* Normally, the [`Siblings`](Siblings.html) supervisor would be put into the supervision tree of the target application, as sketched below.
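A hedged sketch of that sidenote (the application and supervisor module names are illustrative, not part of the library):

```
defmodule MyApp.Application do
  use Application

  @impl Application
  def start(_type, _args) do
    children = [
      # `Siblings` with default options; it exposes `child_spec/1`,
      # so a bare module works here. Pass options as needed.
      Siblings
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```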
[installation](#installation)
Installation
---

```
def deps do
  [
    {:siblings, "~> 0.1"}
  ]
end
```

[changelog](#changelog)
Changelog
---

* `0.11.3` — OTP26 ready
* `0.11.2` — [FIX] wrong specs for `start_link/1` and `child_spec/1`
* `0.11.1` — upgraded to [`Finitomata`](https://hexdocs.pm/finitomata/0.14.3/Finitomata.html) (`v0.11.0`)
* `0.11.0` — throttler → generic + on perform
* `0.10.3` — accept `{(any() -> :ok), timeout}` as `die_with_children`, write-only `InternalState`
* `0.10.2` — accept `(any() -> :ok)` as `die_with_children` option as a callback
* `0.10.0` — `die_with_children: boolean()` option
* `0.8.2` — updated with last `finitomata` compiler
* `0.7.0` — `Siblings.state/{0,1,2,3}` + update to `Finitomata 0.7`
* `0.5.1` — allow `{:reschedule, non_neg_integer()}` return from `perform/3`
* `0.5.0` — use *FSM* for the `Sibling.Lookup`
* `0.4.3` — accept `hibernate?:` boolean parameter in call to [`Siblings.start_child/4`](Siblings.html#start_child/4) to hibernate children
* `0.4.2` — accept `workers:` in call to [`Siblings.child_spec/1`](Siblings.html#child_spec/1) to statically initialize [`Siblings`](Siblings.html)
* `0.4.1` — [BUG] many named [`Siblings`](Siblings.html) instances
* `0.4.0` — `Siblings.{multi_call/2, multi_transition/3}`
* `0.3.3` — `Siblings.{state/1, payload/2}`
* `0.3.2` — `Siblings.{call/3, reset/3, transition/4}`
* `0.3.1` — retrieve children as both `map` and `list`
* `0.3.0` — `GenServer.cast(pid, {:reset, interval})` and `GenServer.call(pid, {:message, message})`
* `0.2.0` — Fast `Worker` lookup
* `0.1.0` — Initial MVP

[documentation](#documentation)
[Documentation](https://hexdocs.pm/siblings)
---

[API Reference](api-reference.html)

Siblings
===

Boilerplate to effectively handle many long-lived entities of the same shape, driven by FSM.

[`Siblings`](Siblings.html#content) is a library to painlessly manage many uniform processes, each having a lifecycle *and* an *FSM* behind it. The usage walkthrough above covers the worker implementation in detail; the summary below lists the public API.
[Link to this section](#summary)
Summary
===

[Types](#types)
---

[start_options()](#t:start_options/0)

[worker()](#t:worker/0)

[Functions](#functions)
---

[call(name \\ default_fqn(), id, message)](#call/3)

Performs a [`GenServer.call/3`](https://hexdocs.pm/elixir/GenServer.html#call/3) on the named worker.

[child_spec(init_arg)](#child_spec/1)

Returns the [child spec](https://hexdocs.pm/elixir/Supervisor.html#t:child_spec/0) for the named or unnamed [`Siblings`](Siblings.html#content) process.

[multi_call(name \\ default_fqn(), message)](#multi_call/2)

Performs a [`GenServer.call/3`](https://hexdocs.pm/elixir/GenServer.html#call/3) on all the workers.

[multi_transition(name \\ default_fqn(), event, payload)](#multi_transition/3)

Initiates the transition of all the workers.

[payload(name \\ default_fqn(), id)](#payload/2)

Returns the payload of the *FSM* behind the named worker.

[reset(name \\ default_fqn(), id, interval)](#reset/3)

Resets the named worker's interval.

[start_child(worker, id, payload, opts \\ [])](#start_child/4)

Starts the supervised child under the [`PartitionSupervisor`](https://hexdocs.pm/elixir/PartitionSupervisor.html).

[start_link(opts \\ [])](#start_link/1)

Starts the supervision subtree, holding the [`PartitionSupervisor`](https://hexdocs.pm/elixir/PartitionSupervisor.html).

[state(request \\ :instance, id \\ nil, name \\ default_fqn())](#state/3)

Returns the state of the [`Siblings`](Siblings.html#content) instance itself, of the named worker, or of the named worker's underlying *FSM*, depending on the first argument.

[states(name \\ default_fqn())](#states/1)

Returns the states of all the workers as a map.

[transition(name \\ default_fqn(), id, event, payload)](#transition/4)

Initiates the transition of the named worker.
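A hedged tour of this API (worker module, ids, and payloads are illustrative; the `:fsm` request atom for `state/3` is assumed from the description above, and all `name` arguments fall back to `default_fqn()`):

```
# Start a worker, then poke at it through the `Siblings` facade.
Siblings.start_child(MyApp.Worker, "Bid1", %{price: 42}, interval: 1_000)

# Payload and state of the FSM behind the named worker.
Siblings.payload("Bid1")
Siblings.state(:fsm, "Bid1")

# Push the worker through a transition explicitly.
Siblings.transition("Bid1", :bid, nil)
```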
[Link to this section](#types)
Types
===

[Link to this section](#functions)
Functions
===

Siblings.InternalWorker
===

The internal process to manage a [`Siblings.Worker`](Siblings.Worker.html)'s subsequent runs along with its *FSM*.

[Link to this section](#summary)
Summary
===

[Types](#types)
---

[options()](#t:options/0)

Allowed options in a call to `start_link/4`.

[Functions](#functions)
---

[child_spec(init_arg)](#child_spec/1)

Returns a specification to start this module under a supervisor.

[Link to this section](#types)
Types
===

[Link to this section](#functions)
Functions
===

Siblings.Worker behaviour
===

The worker for the single sibling process.

[Link to this section](#summary)
Summary
===

[Types](#types)
---

[call_result()](#t:call_result/0)

Value returned from the `on_call/2` callback.

[id()](#t:id/0)

Identifier of the worker process.

[message()](#t:message/0)

Message to be sent to the worker process.

[payload()](#t:payload/0)

Payload associated with the worker.

[Callbacks](#callbacks)
---

[finitomata()](#c:finitomata/0)

The [`Finitomata`](https://hexdocs.pm/finitomata/0.14.3/Finitomata.html) FSM implementation module.

[on_call(message, t)](#c:on_call/2)

The handler for the routed message from `Siblings.InternalWorker.handle_call({:message, any()})`.

[on_init(pid)](#c:on_init/1)

The function to re-initialize the FSM after a crash.

[perform(state, id, payload)](#c:perform/3)

The callback to be implemented in each and every worker.

[Link to this section](#types)
Types
===

[Link to this section](#callbacks)
Callbacks
===

Siblings.InternalWorker.State
===

The state of the worker.

[Link to this section](#summary)
Summary
===

[Types](#types)
---

[t()](#t:t/0)

[Link to this section](#types)
Types
===

Siblings.Lookup
===

The instance of an *FSM* backed by [`Finitomata`](https://hexdocs.pm/finitomata/0.14.3/Finitomata.html).

[fsm-representation](#module-fsm-representation)
FSM representation
---

```
graph TD
  idle --> |initialize| ready
  ready --> |start_child| ready
  ready --> |delete_child| ready
  ready --> |terminate| terminated
```

[Link to this section](#summary)
Summary
===

[Types](#types)
---

[state()](#t:state/0)

Kind of event which might be sent to initiate the transition.

[Functions](#functions)
---

[__config__(key)](#__config__/1)

Getter for the internal compiled-in *FSM* information.

[all(name \\ Siblings.default_fqn())](#all/1)

Returns all the workers running under this [`Siblings`](Siblings.html) instance as a map `%{Siblings.Worker.id() => pid()}`.

[child_spec(arg)](#child_spec/1)

Returns a specification to start this module under a supervisor.

[config(key)](#config/1)

A convenience macro to allow using states in guards; returns a compile-time list of states for [`Siblings.Lookup`](Siblings.Lookup.html#content).

[del(name \\ Siblings.default_fqn(), id)](#del/2)

Removes the reference to the naturally terminated child from the workers map through the `:delete_child` transition, with all the respective callbacks.

[get(name \\ Siblings.default_fqn(), id, default \\ nil)](#get/3)

Returns the `pid` of the single dynamically supervised worker by its `id`.

[put(name \\ Siblings.default_fqn(), worker)](#put/2)

Initiates the `:start_child` transition, with all the respective callbacks, to add a new child to the supervised list.

[start_link(payload)](#start_link/1)

Starts an *FSM* alone, with the given `name` and `payload`.
[Link to this section](#types)
Types
===

[Link to this section](#functions)
Functions
===

Siblings.Test.Callable
===

The instance of an *FSM* backed by [`Finitomata`](https://hexdocs.pm/finitomata/0.14.3/Finitomata.html).

[fsm-representation](#module-fsm-representation)
FSM representation
---

```
graph TD
  s1 --> |to_s2| s2
  s2 --> |to_s3| s3
```

[Link to this section](#summary)
Summary
===

[Types](#types)
---

[state()](#t:state/0)

Kind of event which might be sent to initiate the transition.

[Functions](#functions)
---

[__config__(key)](#__config__/1)

Getter for the internal compiled-in *FSM* information.

[child_spec(init_arg)](#child_spec/1)

Returns a specification to start this module under a supervisor.

[config(key)](#config/1)

A convenience macro to allow using states in guards; returns a compile-time list of states for [`Siblings.Test.Callable`](Siblings.Test.Callable.html#content).

[start_link(payload)](#start_link/1)

Starts an *FSM* alone, with the given `name` and `payload`.

[Link to this section](#types)
Types
===

[Link to this section](#functions)
Functions
===

Siblings.Test.FSM
===

The instance of an *FSM* backed by [`Finitomata`](https://hexdocs.pm/finitomata/0.14.3/Finitomata.html).

[fsm-representation](#module-fsm-representation)
FSM representation
---

```
graph TD
  s1 --> |to_s2| s2
  s1 --> |to_s3| s3
```

[Link to this section](#summary)
Summary
===

[Types](#types)
---

[state()](#t:state/0)

Kind of event which might be sent to initiate the transition.

[Functions](#functions)
---

[__config__(key)](#__config__/1)

Getter for the internal compiled-in *FSM* information.

[child_spec(init_arg)](#child_spec/1)

Returns a specification to start this module under a supervisor.

[config(key)](#config/1)

A convenience macro to allow using states in guards; returns a compile-time list of states for [`Siblings.Test.FSM`](Siblings.Test.FSM.html#content).

[start_link(payload)](#start_link/1)

Starts an *FSM* alone, with the given `name` and `payload`.

[Link to this section](#types)
Types
===

[Link to this section](#functions)
Functions
===

Siblings.Test.NoPerform
===

The instance of an *FSM* backed by [`Finitomata`](https://hexdocs.pm/finitomata/0.14.3/Finitomata.html).

[fsm-representation](#module-fsm-representation)
FSM representation
---

```
graph TD
  s1 --> |to_s2| s2
  s2 --> |in_s2| s2
  s2 --> |to_s3| s3
```

[Link to this section](#summary)
Summary
===

[Types](#types)
---

[state()](#t:state/0)

Kind of event which might be sent to initiate the transition.

[Functions](#functions)
---

[__config__(key)](#__config__/1)

Getter for the internal compiled-in *FSM* information.

[child_spec(init_arg)](#child_spec/1)

Returns a specification to start this module under a supervisor.

[config(key)](#config/1)

A convenience macro to allow using states in guards; returns a compile-time list of states for [`Siblings.Test.NoPerform`](Siblings.Test.NoPerform.html#content).

[start_link(payload)](#start_link/1)

Starts an *FSM* alone, with the given `name` and `payload`.

[Link to this section](#types)
Types
===

[Link to this section](#functions)
Functions
===

Siblings.Test.WorkerFSM
===

The instance of an *FSM* backed by [`Finitomata`](https://hexdocs.pm/finitomata/0.14.3/Finitomata.html).

[fsm-representation](#module-fsm-representation)
FSM representation
---

```
graph TD
  s1 --> |to_s2| s2
  s2 --> |in_s2| s2
  s2 --> |to_s3| s3
```

[Link to this section](#summary)
Summary
===

[Types](#types)
---

[state()](#t:state/0)

Kind of event which might be sent to initiate the transition.
[Functions](#functions)
---

[__config__(key)](#__config__/1)

Getter for the internal compiled-in *FSM* information.

[child_spec(init_arg)](#child_spec/1)

Returns a specification to start this module under a supervisor.

[config(key)](#config/1)

A convenience macro to allow using states in guards; returns a compile-time list of states for [`Siblings.Test.WorkerFSM`](Siblings.Test.WorkerFSM.html#content).

[start_link(payload)](#start_link/1)

Starts an *FSM* alone, with the given `name` and `payload`.

[Link to this section](#types)
Types
===

[Link to this section](#functions)
Functions
===

Siblings.Throttler
===

The internal definition of the call to throttle.

[`Siblings.Throttler.call/3`](#call/3) is a blocking call similar to [`GenServer.call/3`](https://hexdocs.pm/elixir/GenServer.html#call/3), but served by the underlying [`GenStage`](https://hexdocs.pm/gen_stage/1.2.1/GenStage.html) producer-consumer pair.

Although this implementation of throttling based on [`GenStage`](https://hexdocs.pm/gen_stage/1.2.1/GenStage.html) is provided mostly for internal needs, it is generic enough to be used anywhere. Use the childspec `{Siblings.Throttler, name: name, initial: [], max_demand: 3, interval: 1_000}` to start a throttling process, and [`Siblings.Throttler.call/3`](#call/3) to perform throttled synchronous calls from different processes, as sketched below.

[Link to this section](#summary)
Summary
===

[Types](#types)
---

[t()](#t:t/0)

The *in/out* parameter for calls to [`Siblings.Throttler.call/3`](#call/3).

[throttlee()](#t:throttlee/0)

The simplified *in* parameter for calls to [`Siblings.Throttler.call/3`](#call/3).

[Functions](#functions)
---

[call(name \\ Siblings.default_fqn(), request, timeout \\ :infinity)](#call/3)

Synchronously executes the function, using throttling based on [`GenStage`](https://hexdocs.pm/gen_stage/1.2.1/GenStage.html).

[child_spec(init_arg)](#child_spec/1)

Returns a specification to start this module under a supervisor.

[start_link(opts)](#start_link/1)

Starts the throttler with the underlying producer-consumer stages.

[Link to this section](#types)
Types
===

[Link to this section](#functions)
Functions
===
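A hedged sketch of the above (the throttler name is illustrative, and a zero-arity fun is assumed here as the `throttlee()` request shape purely for illustration; consult the type docs for the exact contract):

```
# Start a throttler under a supervisor using the childspec shape quoted above.
children = [
  {Siblings.Throttler, name: MyThrottler, initial: [], max_demand: 3, interval: 1_000}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

# A throttled synchronous call from any process.
Siblings.Throttler.call(MyThrottler, fn -> :expensive_work end)
```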
@types/joi-phone-number
npm
JavaScript
[Installation](#installation)
===

> `npm install --save @types/joi-phone-number`

[Summary](#summary)
===

This package contains type definitions for joi-phone-number (<https://github.com/Salesflare/joi-phone-number>).

[Details](#details)
===

Files were exported from <https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/joi-phone-number>.

[index.d.ts](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/joi-phone-number/index.d.ts)
---

```
/// <reference types="node" />

import { Extension, Reference, Root, StringSchema } from "joi";

declare module "joi" {
    interface PhoneNumberOptions {
        defaultCountry?: string[] | string | Reference | undefined;
        strict?: boolean | Reference | undefined;
        format?: "e164" | "international" | "national" | "rfc3966" | Reference | undefined;
    }

    interface StringSchema {
        phoneNumber(options?: PhoneNumberOptions): this;
    }
}

interface StringExtension extends Extension {
    type: "string";
    base: StringSchema;
}

declare function JoiStringFactory(joi: Root): StringExtension;

export = JoiStringFactory;
```

### [Additional Details](#additional-details)

* Last updated: Wed, 18 Oct 2023 01:17:35 GMT
* Dependencies: [@types/node](https://npmjs.com/package/@types/node), [joi](https://npmjs.com/package/joi)

[Credits](#credits)
===

These definitions were written by [<NAME>](https://github.com/NurMarvin), and [<NAME>](https://github.com/jlismore).
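A hedged usage sketch built on these typings (the country and format values are illustrative): the `declare module "joi"` augmentation above is what makes `.phoneNumber()` type-check on string schemas once the extension is applied.

```
import * as Joi from "joi";
import joiPhoneNumber = require("joi-phone-number");

// Extend the base Joi root with the phone-number rule.
const myJoi: Joi.Root = Joi.extend(joiPhoneNumber);

const schema = myJoi.string().phoneNumber({
  defaultCountry: "US",
  format: "e164",
});

console.log(schema.validate("415 555 2671"));
```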
AWSiOSSDKv2
cocoapods
Objective-C
awsiossdkv2
===

Introduction
---

The AWS iOS SDK provides a library for developing iOS applications that interact with Amazon Web Services (AWS). This documentation provides an overview of how to install and use the AWS iOS SDK to build powerful and scalable iOS applications.

Features
---

* High-level APIs for easy integration with AWS services
* Support for a wide range of AWS services, including Amazon S3, Amazon EC2, Amazon DynamoDB, Amazon SES, and more
* Ability to handle asynchronous operations and errors gracefully
* Securely manage credentials using AWS Key Management Service
* Integration with Amazon Cognito for user authentication and authorization

Getting Started
---

### Step 1: Installation

To get started, install the AWS iOS SDK using CocoaPods. Add the following line to your project's Podfile, then run `pod install`:

`pod 'AWSiOSSDKv2'`

### Step 2: Configuring AWS Credentials

Before you can start using the AWS iOS SDK, you need to configure your AWS credentials. Follow these steps:

1. Create an AWS account if you don't have one already.
2. Go to the AWS Management Console and navigate to the IAM service.
3. Create a new IAM user or use an existing one.
4. Generate access keys for the IAM user.
5. Set the access key and secret access key in your app's code using the SDK's `AWSStaticCredentialsProvider` class (see the sketch at the end of this page).

### Step 3: Using AWS Services

Once you have installed the SDK and configured your credentials, you can start using AWS services in your iOS app. The AWS iOS SDK provides high-level APIs for interacting with various AWS services. Consult the documentation for each individual service for more details.

API Documentation
---

For detailed information on each service's API and available methods, refer to the official API documentation provided by AWS:

### Amazon S3

API documentation: <https://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html>

### Amazon EC2

API documentation: <https://docs.aws.amazon.com/AWSEC2/latest/APIReference/making-api-requests.html>

### Amazon DynamoDB

API documentation: <https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/Welcome.html>

### Amazon SES

API documentation: <https://docs.aws.amazon.com/ses/latest/DeveloperGuide/>

Conclusion
---

The AWS iOS SDK offers a powerful and flexible toolset for developing iOS applications that integrate with AWS services. By following the steps outlined in this documentation, you can quickly get started building scalable and highly functional iOS apps using AWS.
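Tying steps 1 and 2 together, a hedged Objective-C sketch (keys and region are placeholders; verify the initializer names against the SDK version you install):

```
#import <AWSCore/AWSCore.h>

// Configure a default service configuration with static credentials.
// NOTE: embedding long-lived keys in app code is shown only because the
// steps above describe it; prefer Amazon Cognito in production.
AWSStaticCredentialsProvider *credentialsProvider =
    [[AWSStaticCredentialsProvider alloc] initWithAccessKey:@"YOUR_ACCESS_KEY"
                                                  secretKey:@"YOUR_SECRET_KEY"];

AWSServiceConfiguration *configuration =
    [[AWSServiceConfiguration alloc] initWithRegion:AWSRegionUSEast1
                                credentialsProvider:credentialsProvider];

[AWSServiceManager defaultServiceManager].defaultServiceConfiguration = configuration;
```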
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights
go
Go
README [¶](#section-readme)
---

### Azure Time Series Insights Module for Go

[![PkgGoDev](https://pkg.go.dev/badge/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights)

The `armtimeseriesinsights` module provides operations for working with Azure Time Series Insights.

[Source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights)

### Getting started

#### Prerequisites

* an [Azure subscription](https://azure.microsoft.com/free/)
* Go 1.18 or above (You can download and install the latest version of Go from [here](https://go.dev/doc/install). It will replace the existing Go on your machine. If you want to install multiple Go versions on the same machine, refer to this [doc](https://go.dev/doc/manage-install).)

#### Install the package

This project uses [Go modules](https://github.com/golang/go/wiki/Modules) for versioning and dependency management.

Install the Azure Time Series Insights module:

```
go get github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights
```

#### Authorization

When creating a client, you will need to provide a credential for authenticating with Azure Time Series Insights. The `azidentity` module provides facilities for various ways of authenticating with Azure, including client/secret, certificate, managed identity, and more.

```
cred, err := azidentity.NewDefaultAzureCredential(nil)
```

For more information on authentication, please see the documentation for `azidentity` at [pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity).

#### Client Factory

The Azure Time Series Insights module consists of one or more clients. We provide a client factory which can be used to create any client in this module.

```
clientFactory, err := armtimeseriesinsights.NewClientFactory(<subscription ID>, cred, nil)
```

You can use `ClientOptions` in package `github.com/Azure/azure-sdk-for-go/sdk/azcore/arm` to set the endpoint to connect with public and sovereign clouds as well as Azure Stack. For more information, please see the documentation for `azcore` at [pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore).

```
options := arm.ClientOptions {
    ClientOptions: azcore.ClientOptions {
        Cloud: cloud.AzureChina,
    },
}
clientFactory, err := armtimeseriesinsights.NewClientFactory(<subscription ID>, cred, &options)
```

#### Clients

A client groups a set of related APIs, providing access to its functionality. Create one or more clients to access the APIs you require using the client factory.

```
client := clientFactory.NewEventSourcesClient()
```
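Putting the pieces above together, a hedged end-to-end sketch (the subscription ID is a placeholder, and the response is only bound, not inspected; the exact response fields follow the `EnvironmentsClientListBySubscriptionResponse` type in the index below):

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	// Authenticate via the default credential chain (env vars, CLI, etc.).
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	// Build the client factory for the target subscription.
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription ID>", cred, nil)
	if err != nil {
		log.Fatal(err)
	}
	// List all Time Series Insights environments in the subscription.
	client := clientFactory.NewEnvironmentsClient()
	resp, err := client.ListBySubscription(context.Background(), nil)
	if err != nil {
		log.Fatal(err)
	}
	_ = resp // inspect the returned environment list as needed
}
```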
#### Provide Feedback

If you encounter bugs or have suggestions, please [open an issue](https://github.com/Azure/azure-sdk-for-go/issues) and assign the `Time Series Insights` label.

### Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit <https://cla.microsoft.com>.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [<EMAIL>](mailto:<EMAIL>) with any additional questions or comments.

Documentation [¶](#section-documentation)
---

### Index [¶](#pkg-index)

* [type AccessPoliciesClient](#AccessPoliciesClient) * + [func NewAccessPoliciesClient(subscriptionID string, credential azcore.TokenCredential, ...) (*AccessPoliciesClient, error)](#NewAccessPoliciesClient) * + [func (client *AccessPoliciesClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, environmentName string, ...) (AccessPoliciesClientCreateOrUpdateResponse, error)](#AccessPoliciesClient.CreateOrUpdate) + [func (client *AccessPoliciesClient) Delete(ctx context.Context, resourceGroupName string, environmentName string, ...) (AccessPoliciesClientDeleteResponse, error)](#AccessPoliciesClient.Delete) + [func (client *AccessPoliciesClient) Get(ctx context.Context, resourceGroupName string, environmentName string, ...) (AccessPoliciesClientGetResponse, error)](#AccessPoliciesClient.Get) + [func (client *AccessPoliciesClient) ListByEnvironment(ctx context.Context, resourceGroupName string, environmentName string, ...) (AccessPoliciesClientListByEnvironmentResponse, error)](#AccessPoliciesClient.ListByEnvironment) + [func (client *AccessPoliciesClient) Update(ctx context.Context, resourceGroupName string, environmentName string, ...) (AccessPoliciesClientUpdateResponse, error)](#AccessPoliciesClient.Update) * [type AccessPoliciesClientCreateOrUpdateOptions](#AccessPoliciesClientCreateOrUpdateOptions) * [type AccessPoliciesClientCreateOrUpdateResponse](#AccessPoliciesClientCreateOrUpdateResponse) * [type AccessPoliciesClientDeleteOptions](#AccessPoliciesClientDeleteOptions) * [type AccessPoliciesClientDeleteResponse](#AccessPoliciesClientDeleteResponse) * [type AccessPoliciesClientGetOptions](#AccessPoliciesClientGetOptions) * [type AccessPoliciesClientGetResponse](#AccessPoliciesClientGetResponse) * [type AccessPoliciesClientListByEnvironmentOptions](#AccessPoliciesClientListByEnvironmentOptions) * [type AccessPoliciesClientListByEnvironmentResponse](#AccessPoliciesClientListByEnvironmentResponse) * [type AccessPoliciesClientUpdateOptions](#AccessPoliciesClientUpdateOptions) * [type AccessPoliciesClientUpdateResponse](#AccessPoliciesClientUpdateResponse) * [type AccessPolicyCreateOrUpdateParameters](#AccessPolicyCreateOrUpdateParameters) * + [func (a AccessPolicyCreateOrUpdateParameters) MarshalJSON() ([]byte, error)](#AccessPolicyCreateOrUpdateParameters.MarshalJSON) + [func (a *AccessPolicyCreateOrUpdateParameters) UnmarshalJSON(data []byte) error](#AccessPolicyCreateOrUpdateParameters.UnmarshalJSON) * [type AccessPolicyListResponse](#AccessPolicyListResponse) * + [func (a AccessPolicyListResponse) MarshalJSON() ([]byte, error)](#AccessPolicyListResponse.MarshalJSON) + [func (a *AccessPolicyListResponse) UnmarshalJSON(data []byte) error](#AccessPolicyListResponse.UnmarshalJSON) * [type AccessPolicyMutableProperties](#AccessPolicyMutableProperties) * + [func (a AccessPolicyMutableProperties) MarshalJSON() ([]byte, error)](#AccessPolicyMutableProperties.MarshalJSON) + [func (a *AccessPolicyMutableProperties) UnmarshalJSON(data []byte) error](#AccessPolicyMutableProperties.UnmarshalJSON) *
* [type AccessPolicyResource](#AccessPolicyResource)
  + [func (a AccessPolicyResource) MarshalJSON() ([]byte, error)](#AccessPolicyResource.MarshalJSON)
  + [func (a *AccessPolicyResource) UnmarshalJSON(data []byte) error](#AccessPolicyResource.UnmarshalJSON)
* [type AccessPolicyResourceProperties](#AccessPolicyResourceProperties)
  + [func (a AccessPolicyResourceProperties) MarshalJSON() ([]byte, error)](#AccessPolicyResourceProperties.MarshalJSON)
  + [func (a *AccessPolicyResourceProperties) UnmarshalJSON(data []byte) error](#AccessPolicyResourceProperties.UnmarshalJSON)
* [type AccessPolicyRole](#AccessPolicyRole)
  + [func PossibleAccessPolicyRoleValues() []AccessPolicyRole](#PossibleAccessPolicyRoleValues)
* [type AccessPolicyUpdateParameters](#AccessPolicyUpdateParameters)
  + [func (a AccessPolicyUpdateParameters) MarshalJSON() ([]byte, error)](#AccessPolicyUpdateParameters.MarshalJSON)
  + [func (a *AccessPolicyUpdateParameters) UnmarshalJSON(data []byte) error](#AccessPolicyUpdateParameters.UnmarshalJSON)
* [type AzureEventSourceProperties](#AzureEventSourceProperties)
  + [func (a AzureEventSourceProperties) MarshalJSON() ([]byte, error)](#AzureEventSourceProperties.MarshalJSON)
  + [func (a *AzureEventSourceProperties) UnmarshalJSON(data []byte) error](#AzureEventSourceProperties.UnmarshalJSON)
* [type ClientFactory](#ClientFactory)
  + [func NewClientFactory(subscriptionID string, credential azcore.TokenCredential, ...) (*ClientFactory, error)](#NewClientFactory)
  + [func (c *ClientFactory) NewAccessPoliciesClient() *AccessPoliciesClient](#ClientFactory.NewAccessPoliciesClient)
  + [func (c *ClientFactory) NewEnvironmentsClient() *EnvironmentsClient](#ClientFactory.NewEnvironmentsClient)
  + [func (c *ClientFactory) NewEventSourcesClient() *EventSourcesClient](#ClientFactory.NewEventSourcesClient)
  + [func (c *ClientFactory) NewOperationsClient() *OperationsClient](#ClientFactory.NewOperationsClient)
  + [func (c *ClientFactory) NewReferenceDataSetsClient() *ReferenceDataSetsClient](#ClientFactory.NewReferenceDataSetsClient)
* [type CreateOrUpdateTrackedResourceProperties](#CreateOrUpdateTrackedResourceProperties)
  + [func (c CreateOrUpdateTrackedResourceProperties) MarshalJSON() ([]byte, error)](#CreateOrUpdateTrackedResourceProperties.MarshalJSON)
  + [func (c *CreateOrUpdateTrackedResourceProperties) UnmarshalJSON(data []byte) error](#CreateOrUpdateTrackedResourceProperties.UnmarshalJSON)
* [type DataStringComparisonBehavior](#DataStringComparisonBehavior)
  + [func PossibleDataStringComparisonBehaviorValues() []DataStringComparisonBehavior](#PossibleDataStringComparisonBehaviorValues)
* [type Dimension](#Dimension)
  + [func (d Dimension) MarshalJSON() ([]byte, error)](#Dimension.MarshalJSON)
  + [func (d *Dimension) UnmarshalJSON(data []byte) error](#Dimension.UnmarshalJSON)
* [type EnvironmentCreateOrUpdateParameters](#EnvironmentCreateOrUpdateParameters)
  + [func (e *EnvironmentCreateOrUpdateParameters) GetEnvironmentCreateOrUpdateParameters() *EnvironmentCreateOrUpdateParameters](#EnvironmentCreateOrUpdateParameters.GetEnvironmentCreateOrUpdateParameters)
  + [func (e EnvironmentCreateOrUpdateParameters) MarshalJSON() ([]byte, error)](#EnvironmentCreateOrUpdateParameters.MarshalJSON)
  + [func (e *EnvironmentCreateOrUpdateParameters) UnmarshalJSON(data []byte) error](#EnvironmentCreateOrUpdateParameters.UnmarshalJSON)
* [type EnvironmentCreateOrUpdateParametersClassification](#EnvironmentCreateOrUpdateParametersClassification)
* [type EnvironmentKind](#EnvironmentKind)
  + [func PossibleEnvironmentKindValues() []EnvironmentKind](#PossibleEnvironmentKindValues)
* [type EnvironmentListResponse](#EnvironmentListResponse)
  + [func (e EnvironmentListResponse) MarshalJSON() ([]byte, error)](#EnvironmentListResponse.MarshalJSON)
  + [func (e *EnvironmentListResponse) UnmarshalJSON(data []byte) error](#EnvironmentListResponse.UnmarshalJSON)
* [type EnvironmentResource](#EnvironmentResource)
  + [func (e *EnvironmentResource) GetEnvironmentResource() *EnvironmentResource](#EnvironmentResource.GetEnvironmentResource)
  + [func (e EnvironmentResource) MarshalJSON() ([]byte, error)](#EnvironmentResource.MarshalJSON)
  + [func (e *EnvironmentResource) UnmarshalJSON(data []byte) error](#EnvironmentResource.UnmarshalJSON)
* [type EnvironmentResourceClassification](#EnvironmentResourceClassification)
* [type EnvironmentResourceKind](#EnvironmentResourceKind)
  + [func PossibleEnvironmentResourceKindValues() []EnvironmentResourceKind](#PossibleEnvironmentResourceKindValues)
* [type EnvironmentResourceProperties](#EnvironmentResourceProperties)
  + [func (e EnvironmentResourceProperties) MarshalJSON() ([]byte, error)](#EnvironmentResourceProperties.MarshalJSON)
  + [func (e *EnvironmentResourceProperties) UnmarshalJSON(data []byte) error](#EnvironmentResourceProperties.UnmarshalJSON)
* [type EnvironmentStateDetails](#EnvironmentStateDetails)
  + [func (e EnvironmentStateDetails) MarshalJSON() ([]byte, error)](#EnvironmentStateDetails.MarshalJSON)
  + [func (e *EnvironmentStateDetails) UnmarshalJSON(data []byte) error](#EnvironmentStateDetails.UnmarshalJSON)
* [type EnvironmentStatus](#EnvironmentStatus)
  + [func (e EnvironmentStatus) MarshalJSON() ([]byte, error)](#EnvironmentStatus.MarshalJSON)
  + [func (e *EnvironmentStatus) UnmarshalJSON(data []byte) error](#EnvironmentStatus.UnmarshalJSON)
* [type EnvironmentUpdateParameters](#EnvironmentUpdateParameters)
  + [func (e *EnvironmentUpdateParameters) GetEnvironmentUpdateParameters() *EnvironmentUpdateParameters](#EnvironmentUpdateParameters.GetEnvironmentUpdateParameters)
  + [func (e EnvironmentUpdateParameters) MarshalJSON() ([]byte, error)](#EnvironmentUpdateParameters.MarshalJSON)
  + [func (e *EnvironmentUpdateParameters) UnmarshalJSON(data []byte) error](#EnvironmentUpdateParameters.UnmarshalJSON)
* [type EnvironmentUpdateParametersClassification](#EnvironmentUpdateParametersClassification)
* [type EnvironmentsClient](#EnvironmentsClient)
  + [func NewEnvironmentsClient(subscriptionID string, credential azcore.TokenCredential, ...) (*EnvironmentsClient, error)](#NewEnvironmentsClient)
  + [func (client *EnvironmentsClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, environmentName string, ...) (*runtime.Poller[EnvironmentsClientCreateOrUpdateResponse], error)](#EnvironmentsClient.BeginCreateOrUpdate)
  + [func (client *EnvironmentsClient) BeginUpdate(ctx context.Context, resourceGroupName string, environmentName string, ...) (*runtime.Poller[EnvironmentsClientUpdateResponse], error)](#EnvironmentsClient.BeginUpdate)
  + [func (client *EnvironmentsClient) Delete(ctx context.Context, resourceGroupName string, environmentName string, ...) (EnvironmentsClientDeleteResponse, error)](#EnvironmentsClient.Delete)
  + [func (client *EnvironmentsClient) Get(ctx context.Context, resourceGroupName string, environmentName string, ...) (EnvironmentsClientGetResponse, error)](#EnvironmentsClient.Get)
  + [func (client *EnvironmentsClient) ListByResourceGroup(ctx context.Context, resourceGroupName string, ...) (EnvironmentsClientListByResourceGroupResponse, error)](#EnvironmentsClient.ListByResourceGroup)
  + [func (client *EnvironmentsClient) ListBySubscription(ctx context.Context, options *EnvironmentsClientListBySubscriptionOptions) (EnvironmentsClientListBySubscriptionResponse, error)](#EnvironmentsClient.ListBySubscription)
* [type EnvironmentsClientBeginCreateOrUpdateOptions](#EnvironmentsClientBeginCreateOrUpdateOptions)
* [type EnvironmentsClientBeginUpdateOptions](#EnvironmentsClientBeginUpdateOptions)
* [type EnvironmentsClientCreateOrUpdateResponse](#EnvironmentsClientCreateOrUpdateResponse)
  + [func (e *EnvironmentsClientCreateOrUpdateResponse) UnmarshalJSON(data []byte) error](#EnvironmentsClientCreateOrUpdateResponse.UnmarshalJSON)
* [type EnvironmentsClientDeleteOptions](#EnvironmentsClientDeleteOptions)
* [type EnvironmentsClientDeleteResponse](#EnvironmentsClientDeleteResponse)
* [type EnvironmentsClientGetOptions](#EnvironmentsClientGetOptions)
* [type EnvironmentsClientGetResponse](#EnvironmentsClientGetResponse)
  + [func (e *EnvironmentsClientGetResponse) UnmarshalJSON(data []byte) error](#EnvironmentsClientGetResponse.UnmarshalJSON)
* [type EnvironmentsClientListByResourceGroupOptions](#EnvironmentsClientListByResourceGroupOptions)
* [type EnvironmentsClientListByResourceGroupResponse](#EnvironmentsClientListByResourceGroupResponse)
* [type EnvironmentsClientListBySubscriptionOptions](#EnvironmentsClientListBySubscriptionOptions)
* [type EnvironmentsClientListBySubscriptionResponse](#EnvironmentsClientListBySubscriptionResponse)
* [type EnvironmentsClientUpdateResponse](#EnvironmentsClientUpdateResponse)
  + [func (e *EnvironmentsClientUpdateResponse) UnmarshalJSON(data []byte) error](#EnvironmentsClientUpdateResponse.UnmarshalJSON)
* [type EventHubEventSourceCommonProperties](#EventHubEventSourceCommonProperties)
  + [func (e EventHubEventSourceCommonProperties) MarshalJSON() ([]byte, error)](#EventHubEventSourceCommonProperties.MarshalJSON)
  + [func (e *EventHubEventSourceCommonProperties) UnmarshalJSON(data []byte) error](#EventHubEventSourceCommonProperties.UnmarshalJSON)
* [type EventHubEventSourceCreateOrUpdateParameters](#EventHubEventSourceCreateOrUpdateParameters)
  + [func (e *EventHubEventSourceCreateOrUpdateParameters) GetEventSourceCreateOrUpdateParameters() *EventSourceCreateOrUpdateParameters](#EventHubEventSourceCreateOrUpdateParameters.GetEventSourceCreateOrUpdateParameters)
  + [func (e EventHubEventSourceCreateOrUpdateParameters) MarshalJSON() ([]byte, error)](#EventHubEventSourceCreateOrUpdateParameters.MarshalJSON)
  + [func (e *EventHubEventSourceCreateOrUpdateParameters) UnmarshalJSON(data []byte) error](#EventHubEventSourceCreateOrUpdateParameters.UnmarshalJSON)
* [type EventHubEventSourceCreationProperties](#EventHubEventSourceCreationProperties)
  + [func (e EventHubEventSourceCreationProperties) MarshalJSON() ([]byte, error)](#EventHubEventSourceCreationProperties.MarshalJSON)
  + [func (e *EventHubEventSourceCreationProperties) UnmarshalJSON(data []byte) error](#EventHubEventSourceCreationProperties.UnmarshalJSON)
* [type EventHubEventSourceMutableProperties](#EventHubEventSourceMutableProperties)
  + [func (e EventHubEventSourceMutableProperties) MarshalJSON() ([]byte, error)](#EventHubEventSourceMutableProperties.MarshalJSON)
  + [func (e *EventHubEventSourceMutableProperties) UnmarshalJSON(data []byte) error](#EventHubEventSourceMutableProperties.UnmarshalJSON)
* [type EventHubEventSourceResource](#EventHubEventSourceResource)
  + [func (e *EventHubEventSourceResource) GetEventSourceResource() *EventSourceResource](#EventHubEventSourceResource.GetEventSourceResource)
  + [func (e EventHubEventSourceResource) MarshalJSON() ([]byte, error)](#EventHubEventSourceResource.MarshalJSON)
  + [func (e *EventHubEventSourceResource) UnmarshalJSON(data []byte) error](#EventHubEventSourceResource.UnmarshalJSON)
* [type EventHubEventSourceResourceProperties](#EventHubEventSourceResourceProperties)
  + [func (e EventHubEventSourceResourceProperties) MarshalJSON() ([]byte, error)](#EventHubEventSourceResourceProperties.MarshalJSON)
  + [func (e *EventHubEventSourceResourceProperties) UnmarshalJSON(data []byte) error](#EventHubEventSourceResourceProperties.UnmarshalJSON)
* [type EventHubEventSourceUpdateParameters](#EventHubEventSourceUpdateParameters)
  + [func (e *EventHubEventSourceUpdateParameters) GetEventSourceUpdateParameters() *EventSourceUpdateParameters](#EventHubEventSourceUpdateParameters.GetEventSourceUpdateParameters)
  + [func (e EventHubEventSourceUpdateParameters) MarshalJSON() ([]byte, error)](#EventHubEventSourceUpdateParameters.MarshalJSON)
  + [func (e *EventHubEventSourceUpdateParameters) UnmarshalJSON(data []byte) error](#EventHubEventSourceUpdateParameters.UnmarshalJSON)
* [type EventSourceCommonProperties](#EventSourceCommonProperties)
  + [func (e EventSourceCommonProperties) MarshalJSON() ([]byte, error)](#EventSourceCommonProperties.MarshalJSON)
  + [func (e *EventSourceCommonProperties) UnmarshalJSON(data []byte) error](#EventSourceCommonProperties.UnmarshalJSON)
* [type EventSourceCreateOrUpdateParameters](#EventSourceCreateOrUpdateParameters)
  + [func (e *EventSourceCreateOrUpdateParameters) GetEventSourceCreateOrUpdateParameters() *EventSourceCreateOrUpdateParameters](#EventSourceCreateOrUpdateParameters.GetEventSourceCreateOrUpdateParameters)
  + [func (e EventSourceCreateOrUpdateParameters) MarshalJSON() ([]byte, error)](#EventSourceCreateOrUpdateParameters.MarshalJSON)
  + [func (e *EventSourceCreateOrUpdateParameters) UnmarshalJSON(data []byte) error](#EventSourceCreateOrUpdateParameters.UnmarshalJSON)
* [type EventSourceCreateOrUpdateParametersClassification](#EventSourceCreateOrUpdateParametersClassification)
* [type EventSourceKind](#EventSourceKind)
  + [func PossibleEventSourceKindValues() []EventSourceKind](#PossibleEventSourceKindValues)
* [type EventSourceListResponse](#EventSourceListResponse)
  + [func (e EventSourceListResponse) MarshalJSON() ([]byte, error)](#EventSourceListResponse.MarshalJSON)
  + [func (e *EventSourceListResponse) UnmarshalJSON(data []byte) error](#EventSourceListResponse.UnmarshalJSON)
* [type EventSourceMutableProperties](#EventSourceMutableProperties)
  + [func (e EventSourceMutableProperties) MarshalJSON() ([]byte, error)](#EventSourceMutableProperties.MarshalJSON)
  + [func (e *EventSourceMutableProperties) UnmarshalJSON(data []byte) error](#EventSourceMutableProperties.UnmarshalJSON)
* [type EventSourceResource](#EventSourceResource)
  + [func (e *EventSourceResource) GetEventSourceResource() *EventSourceResource](#EventSourceResource.GetEventSourceResource)
  + [func (e EventSourceResource) MarshalJSON() ([]byte, error)](#EventSourceResource.MarshalJSON)
  + [func (e *EventSourceResource) UnmarshalJSON(data []byte) error](#EventSourceResource.UnmarshalJSON)
* [type EventSourceResourceClassification](#EventSourceResourceClassification)
* [type EventSourceResourceKind](#EventSourceResourceKind)
  + [func PossibleEventSourceResourceKindValues() []EventSourceResourceKind](#PossibleEventSourceResourceKindValues)
* [type EventSourceUpdateParameters](#EventSourceUpdateParameters)
  + [func (e *EventSourceUpdateParameters) GetEventSourceUpdateParameters() *EventSourceUpdateParameters](#EventSourceUpdateParameters.GetEventSourceUpdateParameters)
  + [func (e EventSourceUpdateParameters) MarshalJSON() ([]byte, error)](#EventSourceUpdateParameters.MarshalJSON)
  + [func (e *EventSourceUpdateParameters) UnmarshalJSON(data []byte) error](#EventSourceUpdateParameters.UnmarshalJSON)
* [type EventSourceUpdateParametersClassification](#EventSourceUpdateParametersClassification)
* [type EventSourcesClient](#EventSourcesClient)
  + [func NewEventSourcesClient(subscriptionID string, credential azcore.TokenCredential, ...) (*EventSourcesClient, error)](#NewEventSourcesClient)
  + [func (client *EventSourcesClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, environmentName string, ...) (EventSourcesClientCreateOrUpdateResponse, error)](#EventSourcesClient.CreateOrUpdate)
  + [func (client *EventSourcesClient) Delete(ctx context.Context, resourceGroupName string, environmentName string, ...) (EventSourcesClientDeleteResponse, error)](#EventSourcesClient.Delete)
  + [func (client *EventSourcesClient) Get(ctx context.Context, resourceGroupName string, environmentName string, ...) (EventSourcesClientGetResponse, error)](#EventSourcesClient.Get)
  + [func (client *EventSourcesClient) ListByEnvironment(ctx context.Context, resourceGroupName string, environmentName string, ...) (EventSourcesClientListByEnvironmentResponse, error)](#EventSourcesClient.ListByEnvironment)
  + [func (client *EventSourcesClient) Update(ctx context.Context, resourceGroupName string, environmentName string, ...) (EventSourcesClientUpdateResponse, error)](#EventSourcesClient.Update)
* [type EventSourcesClientCreateOrUpdateOptions](#EventSourcesClientCreateOrUpdateOptions)
* [type EventSourcesClientCreateOrUpdateResponse](#EventSourcesClientCreateOrUpdateResponse)
  + [func (e *EventSourcesClientCreateOrUpdateResponse) UnmarshalJSON(data []byte) error](#EventSourcesClientCreateOrUpdateResponse.UnmarshalJSON)
* [type EventSourcesClientDeleteOptions](#EventSourcesClientDeleteOptions)
* [type EventSourcesClientDeleteResponse](#EventSourcesClientDeleteResponse)
* [type EventSourcesClientGetOptions](#EventSourcesClientGetOptions)
* [type EventSourcesClientGetResponse](#EventSourcesClientGetResponse)
  + [func (e *EventSourcesClientGetResponse) UnmarshalJSON(data []byte) error](#EventSourcesClientGetResponse.UnmarshalJSON)
* [type EventSourcesClientListByEnvironmentOptions](#EventSourcesClientListByEnvironmentOptions)
* [type EventSourcesClientListByEnvironmentResponse](#EventSourcesClientListByEnvironmentResponse)
* [type EventSourcesClientUpdateOptions](#EventSourcesClientUpdateOptions)
* [type EventSourcesClientUpdateResponse](#EventSourcesClientUpdateResponse)
  + [func (e *EventSourcesClientUpdateResponse) UnmarshalJSON(data []byte) error](#EventSourcesClientUpdateResponse.UnmarshalJSON)
* [type Gen1EnvironmentCreateOrUpdateParameters](#Gen1EnvironmentCreateOrUpdateParameters)
  + [func (g *Gen1EnvironmentCreateOrUpdateParameters) GetEnvironmentCreateOrUpdateParameters() *EnvironmentCreateOrUpdateParameters](#Gen1EnvironmentCreateOrUpdateParameters.GetEnvironmentCreateOrUpdateParameters)
  + [func (g Gen1EnvironmentCreateOrUpdateParameters) MarshalJSON() ([]byte, error)](#Gen1EnvironmentCreateOrUpdateParameters.MarshalJSON)
  + [func (g *Gen1EnvironmentCreateOrUpdateParameters) UnmarshalJSON(data []byte) error](#Gen1EnvironmentCreateOrUpdateParameters.UnmarshalJSON)
* [type Gen1EnvironmentCreationProperties](#Gen1EnvironmentCreationProperties)
  + [func (g Gen1EnvironmentCreationProperties) MarshalJSON() ([]byte, error)](#Gen1EnvironmentCreationProperties.MarshalJSON)
  + [func (g *Gen1EnvironmentCreationProperties) UnmarshalJSON(data []byte) error](#Gen1EnvironmentCreationProperties.UnmarshalJSON)
* [type Gen1EnvironmentMutableProperties](#Gen1EnvironmentMutableProperties)
  + [func (g Gen1EnvironmentMutableProperties) MarshalJSON() ([]byte, error)](#Gen1EnvironmentMutableProperties.MarshalJSON)
  + [func (g *Gen1EnvironmentMutableProperties) UnmarshalJSON(data []byte) error](#Gen1EnvironmentMutableProperties.UnmarshalJSON)
* [type Gen1EnvironmentResource](#Gen1EnvironmentResource)
  + [func (g *Gen1EnvironmentResource) GetEnvironmentResource() *EnvironmentResource](#Gen1EnvironmentResource.GetEnvironmentResource)
  + [func (g Gen1EnvironmentResource) MarshalJSON() ([]byte, error)](#Gen1EnvironmentResource.MarshalJSON)
  + [func (g *Gen1EnvironmentResource) UnmarshalJSON(data []byte) error](#Gen1EnvironmentResource.UnmarshalJSON)
* [type Gen1EnvironmentResourceProperties](#Gen1EnvironmentResourceProperties)
  + [func (g Gen1EnvironmentResourceProperties) MarshalJSON() ([]byte, error)](#Gen1EnvironmentResourceProperties.MarshalJSON)
  + [func (g *Gen1EnvironmentResourceProperties) UnmarshalJSON(data []byte) error](#Gen1EnvironmentResourceProperties.UnmarshalJSON)
* [type Gen1EnvironmentUpdateParameters](#Gen1EnvironmentUpdateParameters)
  + [func (g *Gen1EnvironmentUpdateParameters) GetEnvironmentUpdateParameters() *EnvironmentUpdateParameters](#Gen1EnvironmentUpdateParameters.GetEnvironmentUpdateParameters)
  + [func (g Gen1EnvironmentUpdateParameters) MarshalJSON() ([]byte, error)](#Gen1EnvironmentUpdateParameters.MarshalJSON)
  + [func (g *Gen1EnvironmentUpdateParameters) UnmarshalJSON(data []byte) error](#Gen1EnvironmentUpdateParameters.UnmarshalJSON)
* [type Gen2EnvironmentCreateOrUpdateParameters](#Gen2EnvironmentCreateOrUpdateParameters)
  + [func (g *Gen2EnvironmentCreateOrUpdateParameters) GetEnvironmentCreateOrUpdateParameters() *EnvironmentCreateOrUpdateParameters](#Gen2EnvironmentCreateOrUpdateParameters.GetEnvironmentCreateOrUpdateParameters)
  + [func (g Gen2EnvironmentCreateOrUpdateParameters) MarshalJSON() ([]byte, error)](#Gen2EnvironmentCreateOrUpdateParameters.MarshalJSON)
  + [func (g *Gen2EnvironmentCreateOrUpdateParameters) UnmarshalJSON(data []byte) error](#Gen2EnvironmentCreateOrUpdateParameters.UnmarshalJSON)
* [type Gen2EnvironmentCreationProperties](#Gen2EnvironmentCreationProperties)
  + [func (g Gen2EnvironmentCreationProperties) MarshalJSON() ([]byte, error)](#Gen2EnvironmentCreationProperties.MarshalJSON)
  + [func (g *Gen2EnvironmentCreationProperties) UnmarshalJSON(data []byte) error](#Gen2EnvironmentCreationProperties.UnmarshalJSON)
* [type Gen2EnvironmentMutableProperties](#Gen2EnvironmentMutableProperties)
  + [func (g Gen2EnvironmentMutableProperties) MarshalJSON() ([]byte, error)](#Gen2EnvironmentMutableProperties.MarshalJSON)
  + [func (g *Gen2EnvironmentMutableProperties) UnmarshalJSON(data []byte) error](#Gen2EnvironmentMutableProperties.UnmarshalJSON)
* [type Gen2EnvironmentResource](#Gen2EnvironmentResource)
  + [func (g *Gen2EnvironmentResource) GetEnvironmentResource() *EnvironmentResource](#Gen2EnvironmentResource.GetEnvironmentResource)
  + [func (g Gen2EnvironmentResource) MarshalJSON() ([]byte, error)](#Gen2EnvironmentResource.MarshalJSON)
  + [func (g *Gen2EnvironmentResource) UnmarshalJSON(data []byte) error](#Gen2EnvironmentResource.UnmarshalJSON)
* [type Gen2EnvironmentResourceProperties](#Gen2EnvironmentResourceProperties)
  + [func (g Gen2EnvironmentResourceProperties) MarshalJSON() ([]byte, error)](#Gen2EnvironmentResourceProperties.MarshalJSON)
  + [func (g *Gen2EnvironmentResourceProperties) UnmarshalJSON(data []byte) error](#Gen2EnvironmentResourceProperties.UnmarshalJSON)
* [type Gen2EnvironmentUpdateParameters](#Gen2EnvironmentUpdateParameters)
  + [func (g *Gen2EnvironmentUpdateParameters) GetEnvironmentUpdateParameters() *EnvironmentUpdateParameters](#Gen2EnvironmentUpdateParameters.GetEnvironmentUpdateParameters)
  + [func (g Gen2EnvironmentUpdateParameters) MarshalJSON() ([]byte, error)](#Gen2EnvironmentUpdateParameters.MarshalJSON)
  + [func (g *Gen2EnvironmentUpdateParameters) UnmarshalJSON(data []byte) error](#Gen2EnvironmentUpdateParameters.UnmarshalJSON)
* [type Gen2StorageConfigurationInput](#Gen2StorageConfigurationInput)
  + [func (g Gen2StorageConfigurationInput) MarshalJSON() ([]byte, error)](#Gen2StorageConfigurationInput.MarshalJSON)
  + [func (g *Gen2StorageConfigurationInput) UnmarshalJSON(data []byte) error](#Gen2StorageConfigurationInput.UnmarshalJSON)
* [type Gen2StorageConfigurationMutableProperties](#Gen2StorageConfigurationMutableProperties)
  + [func (g Gen2StorageConfigurationMutableProperties) MarshalJSON() ([]byte, error)](#Gen2StorageConfigurationMutableProperties.MarshalJSON)
  + [func (g *Gen2StorageConfigurationMutableProperties) UnmarshalJSON(data []byte) error](#Gen2StorageConfigurationMutableProperties.UnmarshalJSON)
* [type Gen2StorageConfigurationOutput](#Gen2StorageConfigurationOutput)
  + [func (g Gen2StorageConfigurationOutput) MarshalJSON() ([]byte, error)](#Gen2StorageConfigurationOutput.MarshalJSON)
  + [func (g *Gen2StorageConfigurationOutput) UnmarshalJSON(data []byte) error](#Gen2StorageConfigurationOutput.UnmarshalJSON)
* [type IngressEnvironmentStatus](#IngressEnvironmentStatus)
  + [func (i IngressEnvironmentStatus) MarshalJSON() ([]byte, error)](#IngressEnvironmentStatus.MarshalJSON)
  + [func (i *IngressEnvironmentStatus) UnmarshalJSON(data []byte) error](#IngressEnvironmentStatus.UnmarshalJSON)
* [type IngressStartAtProperties](#IngressStartAtProperties)
  + [func (i IngressStartAtProperties) MarshalJSON() ([]byte, error)](#IngressStartAtProperties.MarshalJSON)
  + [func (i *IngressStartAtProperties) UnmarshalJSON(data []byte) error](#IngressStartAtProperties.UnmarshalJSON)
* [type IngressStartAtType](#IngressStartAtType)
  + [func PossibleIngressStartAtTypeValues() []IngressStartAtType](#PossibleIngressStartAtTypeValues)
* [type IngressState](#IngressState)
  + [func PossibleIngressStateValues() []IngressState](#PossibleIngressStateValues)
* [type IoTHubEventSourceCommonProperties](#IoTHubEventSourceCommonProperties)
  + [func (i IoTHubEventSourceCommonProperties) MarshalJSON() ([]byte, error)](#IoTHubEventSourceCommonProperties.MarshalJSON)
  + [func (i *IoTHubEventSourceCommonProperties) UnmarshalJSON(data []byte) error](#IoTHubEventSourceCommonProperties.UnmarshalJSON)
* [type IoTHubEventSourceCreateOrUpdateParameters](#IoTHubEventSourceCreateOrUpdateParameters)
  + [func (i *IoTHubEventSourceCreateOrUpdateParameters) GetEventSourceCreateOrUpdateParameters() *EventSourceCreateOrUpdateParameters](#IoTHubEventSourceCreateOrUpdateParameters.GetEventSourceCreateOrUpdateParameters)
  + [func (i IoTHubEventSourceCreateOrUpdateParameters) MarshalJSON() ([]byte, error)](#IoTHubEventSourceCreateOrUpdateParameters.MarshalJSON)
  + [func (i *IoTHubEventSourceCreateOrUpdateParameters) UnmarshalJSON(data []byte) error](#IoTHubEventSourceCreateOrUpdateParameters.UnmarshalJSON)
* [type IoTHubEventSourceCreationProperties](#IoTHubEventSourceCreationProperties)
  + [func (i IoTHubEventSourceCreationProperties) MarshalJSON() ([]byte, error)](#IoTHubEventSourceCreationProperties.MarshalJSON)
  + [func (i *IoTHubEventSourceCreationProperties) UnmarshalJSON(data []byte) error](#IoTHubEventSourceCreationProperties.UnmarshalJSON)
* [type IoTHubEventSourceMutableProperties](#IoTHubEventSourceMutableProperties)
  + [func (i IoTHubEventSourceMutableProperties) MarshalJSON() ([]byte, error)](#IoTHubEventSourceMutableProperties.MarshalJSON)
  + [func (i *IoTHubEventSourceMutableProperties) UnmarshalJSON(data []byte) error](#IoTHubEventSourceMutableProperties.UnmarshalJSON)
* [type IoTHubEventSourceResource](#IoTHubEventSourceResource)
  + [func (i *IoTHubEventSourceResource) GetEventSourceResource() *EventSourceResource](#IoTHubEventSourceResource.GetEventSourceResource)
  + [func (i IoTHubEventSourceResource) MarshalJSON() ([]byte, error)](#IoTHubEventSourceResource.MarshalJSON)
  + [func (i *IoTHubEventSourceResource) UnmarshalJSON(data []byte) error](#IoTHubEventSourceResource.UnmarshalJSON)
* [type IoTHubEventSourceResourceProperties](#IoTHubEventSourceResourceProperties)
  + [func (i IoTHubEventSourceResourceProperties) MarshalJSON() ([]byte, error)](#IoTHubEventSourceResourceProperties.MarshalJSON)
  + [func (i *IoTHubEventSourceResourceProperties) UnmarshalJSON(data []byte) error](#IoTHubEventSourceResourceProperties.UnmarshalJSON)
* [type IoTHubEventSourceUpdateParameters](#IoTHubEventSourceUpdateParameters)
  + [func (i *IoTHubEventSourceUpdateParameters) GetEventSourceUpdateParameters() *EventSourceUpdateParameters](#IoTHubEventSourceUpdateParameters.GetEventSourceUpdateParameters)
  + [func (i IoTHubEventSourceUpdateParameters) MarshalJSON() ([]byte, error)](#IoTHubEventSourceUpdateParameters.MarshalJSON)
  + [func (i *IoTHubEventSourceUpdateParameters) UnmarshalJSON(data []byte) error](#IoTHubEventSourceUpdateParameters.UnmarshalJSON)
* [type LocalTimestamp](#LocalTimestamp)
  + [func (l LocalTimestamp) MarshalJSON() ([]byte, error)](#LocalTimestamp.MarshalJSON)
  + [func (l *LocalTimestamp) UnmarshalJSON(data []byte) error](#LocalTimestamp.UnmarshalJSON)
* [type LocalTimestampFormat](#LocalTimestampFormat)
  + [func PossibleLocalTimestampFormatValues() []LocalTimestampFormat](#PossibleLocalTimestampFormatValues)
* [type LocalTimestampTimeZoneOffset](#LocalTimestampTimeZoneOffset)
  + [func (l LocalTimestampTimeZoneOffset) MarshalJSON() ([]byte, error)](#LocalTimestampTimeZoneOffset.MarshalJSON)
  + [func (l *LocalTimestampTimeZoneOffset) UnmarshalJSON(data []byte) error](#LocalTimestampTimeZoneOffset.UnmarshalJSON)
* [type LogSpecification](#LogSpecification)
  + [func (l LogSpecification) MarshalJSON() ([]byte, error)](#LogSpecification.MarshalJSON)
  + [func (l *LogSpecification) UnmarshalJSON(data []byte) error](#LogSpecification.UnmarshalJSON)
* [type MetricAvailability](#MetricAvailability)
  + [func (m MetricAvailability) MarshalJSON() ([]byte, error)](#MetricAvailability.MarshalJSON)
  + [func (m *MetricAvailability) UnmarshalJSON(data []byte) error](#MetricAvailability.UnmarshalJSON)
* [type MetricSpecification](#MetricSpecification)
  + [func (m MetricSpecification) MarshalJSON() ([]byte, error)](#MetricSpecification.MarshalJSON)
  + [func (m *MetricSpecification) UnmarshalJSON(data []byte) error](#MetricSpecification.UnmarshalJSON)
* [type Operation](#Operation)
  + [func (o Operation) MarshalJSON() ([]byte, error)](#Operation.MarshalJSON)
  + [func (o *Operation) UnmarshalJSON(data []byte) error](#Operation.UnmarshalJSON)
* [type OperationDisplay](#OperationDisplay)
  + [func (o OperationDisplay) MarshalJSON() ([]byte, error)](#OperationDisplay.MarshalJSON)
  + [func (o *OperationDisplay) UnmarshalJSON(data []byte) error](#OperationDisplay.UnmarshalJSON)
* [type OperationListResult](#OperationListResult)
  + [func (o OperationListResult) MarshalJSON() ([]byte, error)](#OperationListResult.MarshalJSON)
  + [func (o *OperationListResult) UnmarshalJSON(data []byte) error](#OperationListResult.UnmarshalJSON)
* [type OperationProperties](#OperationProperties)
  + [func (o OperationProperties) MarshalJSON() ([]byte, error)](#OperationProperties.MarshalJSON)
  + [func (o *OperationProperties) UnmarshalJSON(data []byte) error](#OperationProperties.UnmarshalJSON)
* [type OperationsClient](#OperationsClient)
  + [func NewOperationsClient(credential azcore.TokenCredential, options *arm.ClientOptions) (*OperationsClient, error)](#NewOperationsClient)
  + [func (client *OperationsClient) NewListPager(options *OperationsClientListOptions) *runtime.Pager[OperationsClientListResponse]](#OperationsClient.NewListPager)
* [type OperationsClientListOptions](#OperationsClientListOptions)
* [type OperationsClientListResponse](#OperationsClientListResponse)
* [type PropertyType](#PropertyType)
  + [func PossiblePropertyTypeValues() []PropertyType](#PossiblePropertyTypeValues)
* [type ProvisioningState](#ProvisioningState)
  + [func PossibleProvisioningStateValues() []ProvisioningState](#PossibleProvisioningStateValues)
* [type ReferenceDataKeyPropertyType](#ReferenceDataKeyPropertyType)
  + [func PossibleReferenceDataKeyPropertyTypeValues() []ReferenceDataKeyPropertyType](#PossibleReferenceDataKeyPropertyTypeValues)
* [type ReferenceDataSetCreateOrUpdateParameters](#ReferenceDataSetCreateOrUpdateParameters)
  + [func (r ReferenceDataSetCreateOrUpdateParameters) MarshalJSON() ([]byte, error)](#ReferenceDataSetCreateOrUpdateParameters.MarshalJSON)
  + [func (r *ReferenceDataSetCreateOrUpdateParameters) UnmarshalJSON(data []byte) error](#ReferenceDataSetCreateOrUpdateParameters.UnmarshalJSON)
* [type ReferenceDataSetCreationProperties](#ReferenceDataSetCreationProperties)
  + [func (r ReferenceDataSetCreationProperties) MarshalJSON() ([]byte, error)](#ReferenceDataSetCreationProperties.MarshalJSON)
  + [func (r *ReferenceDataSetCreationProperties) UnmarshalJSON(data []byte) error](#ReferenceDataSetCreationProperties.UnmarshalJSON)
* [type ReferenceDataSetKeyProperty](#ReferenceDataSetKeyProperty)
  + [func (r ReferenceDataSetKeyProperty) MarshalJSON() ([]byte, error)](#ReferenceDataSetKeyProperty.MarshalJSON)
  + [func (r *ReferenceDataSetKeyProperty) UnmarshalJSON(data []byte) error](#ReferenceDataSetKeyProperty.UnmarshalJSON)
* [type ReferenceDataSetListResponse](#ReferenceDataSetListResponse)
  + [func (r ReferenceDataSetListResponse) MarshalJSON() ([]byte, error)](#ReferenceDataSetListResponse.MarshalJSON)
  + [func (r *ReferenceDataSetListResponse) UnmarshalJSON(data []byte) error](#ReferenceDataSetListResponse.UnmarshalJSON)
* [type ReferenceDataSetResource](#ReferenceDataSetResource)
  + [func (r ReferenceDataSetResource) MarshalJSON() ([]byte, error)](#ReferenceDataSetResource.MarshalJSON)
  + [func (r *ReferenceDataSetResource) UnmarshalJSON(data []byte) error](#ReferenceDataSetResource.UnmarshalJSON)
* [type ReferenceDataSetResourceProperties](#ReferenceDataSetResourceProperties)
  + [func (r ReferenceDataSetResourceProperties) MarshalJSON() ([]byte, error)](#ReferenceDataSetResourceProperties.MarshalJSON)
  + [func (r *ReferenceDataSetResourceProperties) UnmarshalJSON(data []byte) error](#ReferenceDataSetResourceProperties.UnmarshalJSON)
* [type ReferenceDataSetUpdateParameters](#ReferenceDataSetUpdateParameters)
  + [func (r ReferenceDataSetUpdateParameters) MarshalJSON() ([]byte, error)](#ReferenceDataSetUpdateParameters.MarshalJSON)
  + [func (r *ReferenceDataSetUpdateParameters) UnmarshalJSON(data []byte) error](#ReferenceDataSetUpdateParameters.UnmarshalJSON)
* [type ReferenceDataSetsClient](#ReferenceDataSetsClient)
  + [func NewReferenceDataSetsClient(subscriptionID string, credential azcore.TokenCredential, ...) (*ReferenceDataSetsClient, error)](#NewReferenceDataSetsClient)
  + [func (client *ReferenceDataSetsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, environmentName string, ...) (ReferenceDataSetsClientCreateOrUpdateResponse, error)](#ReferenceDataSetsClient.CreateOrUpdate)
  + [func (client *ReferenceDataSetsClient) Delete(ctx context.Context, resourceGroupName string, environmentName string, ...) (ReferenceDataSetsClientDeleteResponse, error)](#ReferenceDataSetsClient.Delete)
  + [func (client *ReferenceDataSetsClient) Get(ctx context.Context, resourceGroupName string, environmentName string, ...) (ReferenceDataSetsClientGetResponse, error)](#ReferenceDataSetsClient.Get)
  + [func (client *ReferenceDataSetsClient) ListByEnvironment(ctx context.Context, resourceGroupName string, environmentName string, ...) (ReferenceDataSetsClientListByEnvironmentResponse, error)](#ReferenceDataSetsClient.ListByEnvironment)
  + [func (client *ReferenceDataSetsClient) Update(ctx context.Context, resourceGroupName string, environmentName string, ...) (ReferenceDataSetsClientUpdateResponse, error)](#ReferenceDataSetsClient.Update)
* [type ReferenceDataSetsClientCreateOrUpdateOptions](#ReferenceDataSetsClientCreateOrUpdateOptions)
* [type ReferenceDataSetsClientCreateOrUpdateResponse](#ReferenceDataSetsClientCreateOrUpdateResponse)
* [type ReferenceDataSetsClientDeleteOptions](#ReferenceDataSetsClientDeleteOptions)
* [type ReferenceDataSetsClientDeleteResponse](#ReferenceDataSetsClientDeleteResponse)
* [type ReferenceDataSetsClientGetOptions](#ReferenceDataSetsClientGetOptions)
* [type ReferenceDataSetsClientGetResponse](#ReferenceDataSetsClientGetResponse)
* [type ReferenceDataSetsClientListByEnvironmentOptions](#ReferenceDataSetsClientListByEnvironmentOptions)
* [type ReferenceDataSetsClientListByEnvironmentResponse](#ReferenceDataSetsClientListByEnvironmentResponse)
* [type ReferenceDataSetsClientUpdateOptions](#ReferenceDataSetsClientUpdateOptions)
* [type ReferenceDataSetsClientUpdateResponse](#ReferenceDataSetsClientUpdateResponse)
* [type Resource](#Resource)
  + [func (r Resource) MarshalJSON() ([]byte, error)](#Resource.MarshalJSON)
  + [func (r *Resource) UnmarshalJSON(data []byte) error](#Resource.UnmarshalJSON)
* [type ResourceProperties](#ResourceProperties)
  + [func (r ResourceProperties) MarshalJSON() ([]byte, error)](#ResourceProperties.MarshalJSON)
  + [func (r *ResourceProperties) UnmarshalJSON(data []byte) error](#ResourceProperties.UnmarshalJSON)
* [type SKU](#SKU)
  + [func (s SKU) MarshalJSON() ([]byte, error)](#SKU.MarshalJSON)
  + [func (s *SKU) UnmarshalJSON(data []byte) error](#SKU.UnmarshalJSON)
* [type SKUName](#SKUName)
  + [func PossibleSKUNameValues() []SKUName](#PossibleSKUNameValues)
* [type ServiceSpecification](#ServiceSpecification)
  + [func (s ServiceSpecification) MarshalJSON() ([]byte, error)](#ServiceSpecification.MarshalJSON)
  + [func (s *ServiceSpecification) UnmarshalJSON(data []byte) error](#ServiceSpecification.UnmarshalJSON)
* [type StorageLimitExceededBehavior](#StorageLimitExceededBehavior)
  + [func PossibleStorageLimitExceededBehaviorValues() []StorageLimitExceededBehavior](#PossibleStorageLimitExceededBehaviorValues)
* [type TimeSeriesIDProperty](#TimeSeriesIDProperty)
  + [func (t TimeSeriesIDProperty) MarshalJSON() ([]byte, error)](#TimeSeriesIDProperty.MarshalJSON)
  + [func (t *TimeSeriesIDProperty) UnmarshalJSON(data []byte) error](#TimeSeriesIDProperty.UnmarshalJSON)
* [type TrackedResource](#TrackedResource)
  + [func (t TrackedResource) MarshalJSON() ([]byte, error)](#TrackedResource.MarshalJSON)
  + [func (t *TrackedResource) UnmarshalJSON(data []byte) error](#TrackedResource.UnmarshalJSON)
* [type WarmStorageEnvironmentStatus](#WarmStorageEnvironmentStatus)
  + [func (w WarmStorageEnvironmentStatus) MarshalJSON() ([]byte, error)](#WarmStorageEnvironmentStatus.MarshalJSON)
  + [func (w *WarmStorageEnvironmentStatus) UnmarshalJSON(data []byte) error](#WarmStorageEnvironmentStatus.UnmarshalJSON)
* [type WarmStoragePropertiesState](#WarmStoragePropertiesState)
  + [func PossibleWarmStoragePropertiesStateValues() []WarmStoragePropertiesState](#PossibleWarmStoragePropertiesStateValues)
* [type WarmStoragePropertiesUsage](#WarmStoragePropertiesUsage)
  + [func (w WarmStoragePropertiesUsage) MarshalJSON() ([]byte, error)](#WarmStoragePropertiesUsage.MarshalJSON)
  + [func (w *WarmStoragePropertiesUsage) UnmarshalJSON(data []byte) error](#WarmStoragePropertiesUsage.UnmarshalJSON)
* [type WarmStoragePropertiesUsageStateDetails](#WarmStoragePropertiesUsageStateDetails)
  + [func (w WarmStoragePropertiesUsageStateDetails) MarshalJSON() ([]byte, error)](#WarmStoragePropertiesUsageStateDetails.MarshalJSON)
  + [func (w *WarmStoragePropertiesUsageStateDetails) UnmarshalJSON(data []byte) error](#WarmStoragePropertiesUsageStateDetails.UnmarshalJSON)
* [type WarmStoreConfigurationProperties](#WarmStoreConfigurationProperties)
  + [func (w WarmStoreConfigurationProperties) MarshalJSON() ([]byte, error)](#WarmStoreConfigurationProperties.MarshalJSON)
  + [func (w *WarmStoreConfigurationProperties) UnmarshalJSON(data []byte) error](#WarmStoreConfigurationProperties.UnmarshalJSON)

#### Examples [¶](#pkg-examples)

* [AccessPoliciesClient.CreateOrUpdate](#example-AccessPoliciesClient.CreateOrUpdate)
* [AccessPoliciesClient.Delete](#example-AccessPoliciesClient.Delete)
* [AccessPoliciesClient.Get](#example-AccessPoliciesClient.Get)
* [AccessPoliciesClient.ListByEnvironment](#example-AccessPoliciesClient.ListByEnvironment)
* [AccessPoliciesClient.Update](#example-AccessPoliciesClient.Update)
* [EnvironmentsClient.BeginCreateOrUpdate](#example-EnvironmentsClient.BeginCreateOrUpdate)
* [EnvironmentsClient.BeginUpdate](#example-EnvironmentsClient.BeginUpdate)
* [EnvironmentsClient.Delete](#example-EnvironmentsClient.Delete)
* [EnvironmentsClient.Get](#example-EnvironmentsClient.Get)
* [EnvironmentsClient.ListByResourceGroup](#example-EnvironmentsClient.ListByResourceGroup)
* [EnvironmentsClient.ListBySubscription](#example-EnvironmentsClient.ListBySubscription)
* [EventSourcesClient.CreateOrUpdate (CreateEventHubEventSource)](#example-EventSourcesClient.CreateOrUpdate-CreateEventHubEventSource)
* [EventSourcesClient.CreateOrUpdate (EventSourcesCreateEventHubWithCustomEnquedTime)](#example-EventSourcesClient.CreateOrUpdate-EventSourcesCreateEventHubWithCustomEnquedTime)
* [EventSourcesClient.Delete](#example-EventSourcesClient.Delete)
* [EventSourcesClient.Get](#example-EventSourcesClient.Get)
* [EventSourcesClient.ListByEnvironment](#example-EventSourcesClient.ListByEnvironment)
* [EventSourcesClient.Update](#example-EventSourcesClient.Update)
* [OperationsClient.NewListPager](#example-OperationsClient.NewListPager)
* [ReferenceDataSetsClient.CreateOrUpdate](#example-ReferenceDataSetsClient.CreateOrUpdate)
* [ReferenceDataSetsClient.Delete](#example-ReferenceDataSetsClient.Delete)
* [ReferenceDataSetsClient.Get](#example-ReferenceDataSetsClient.Get)
* [ReferenceDataSetsClient.ListByEnvironment](#example-ReferenceDataSetsClient.ListByEnvironment)
* [ReferenceDataSetsClient.Update](#example-ReferenceDataSetsClient.Update)

### Constants [¶](#pkg-constants)

This section is empty.

### Variables [¶](#pkg-variables)

This section is empty.

### Functions [¶](#pkg-functions)

This section is empty.

### Types [¶](#pkg-types)

#### type [AccessPoliciesClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/accesspolicies_client.go#L26) [¶](#AccessPoliciesClient)

```
type AccessPoliciesClient struct {
	// contains filtered or unexported fields
}
```

AccessPoliciesClient contains the methods for the AccessPolicies group. Don't use this type directly, use NewAccessPoliciesClient() instead.
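For orientation, here is a minimal sketch of constructing the client directly with NewAccessPoliciesClient (documented just below); the generated examples in this package construct clients through the ClientFactory instead, and `"<subscription-id>"` is a placeholder value:

```
package main

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	// Any azcore.TokenCredential works; DefaultAzureCredential is the usual choice.
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	// "<subscription-id>" is a placeholder; pass nil options to accept the defaults.
	client, err := armtimeseriesinsights.NewAccessPoliciesClient("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	_ = client // ready for CreateOrUpdate, Get, Update, Delete, ListByEnvironment
}
```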
#### func [NewAccessPoliciesClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/accesspolicies_client.go#L35) [¶](#NewAccessPoliciesClient)

```
func NewAccessPoliciesClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[AccessPoliciesClient](#AccessPoliciesClient), [error](/builtin#error))
```

NewAccessPoliciesClient creates a new instance of AccessPoliciesClient with the specified values.

* subscriptionID - Azure Subscription ID.
* credential - used to authorize requests. Usually a credential from azidentity.
* options - pass nil to accept the default values.

#### func (*AccessPoliciesClient) [CreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/accesspolicies_client.go#L57) [¶](#AccessPoliciesClient.CreateOrUpdate)

```
func (client *[AccessPoliciesClient](#AccessPoliciesClient)) CreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), accessPolicyName [string](/builtin#string), parameters [AccessPolicyCreateOrUpdateParameters](#AccessPolicyCreateOrUpdateParameters), options *[AccessPoliciesClientCreateOrUpdateOptions](#AccessPoliciesClientCreateOrUpdateOptions)) ([AccessPoliciesClientCreateOrUpdateResponse](#AccessPoliciesClientCreateOrUpdateResponse), [error](/builtin#error))
```

CreateOrUpdate - Create or update an access policy in the specified environment. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* accessPolicyName - Name of the access policy.
* parameters - Parameters for creating an access policy.
* options - AccessPoliciesClientCreateOrUpdateOptions contains the optional parameters for the AccessPoliciesClient.CreateOrUpdate method.
Example [¶](#example-AccessPoliciesClient.CreateOrUpdate)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/AccessPoliciesCreate.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewAccessPoliciesClient().CreateOrUpdate(ctx, "rg1", "env1", "ap1", armtimeseriesinsights.AccessPolicyCreateOrUpdateParameters{
		Properties: &armtimeseriesinsights.AccessPolicyResourceProperties{
			Description:       to.Ptr("some description"),
			PrincipalObjectID: to.Ptr("aGuid"),
			Roles: []*armtimeseriesinsights.AccessPolicyRole{
				to.Ptr(armtimeseriesinsights.AccessPolicyRoleReader)},
		},
	}, nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use the response here. We use the blank identifier just for demo purposes.
	_ = res
	// If the HTTP response code is 200 as defined in the example definition, the response
	// structure would look as follows. Note that all the values in the output are fake
	// values, for demo purposes only.
	// res.AccessPolicyResource = armtimeseriesinsights.AccessPolicyResource{
	// 	Name: to.Ptr("ap1"),
	// 	Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments/AccessPolicies"),
	// 	ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1/accessPolicies/ap1"),
	// 	Properties: &armtimeseriesinsights.AccessPolicyResourceProperties{
	// 		Description: to.Ptr("some description"),
	// 		PrincipalObjectID: to.Ptr("aGuid"),
	// 		Roles: []*armtimeseriesinsights.AccessPolicyRole{
	// 			to.Ptr(armtimeseriesinsights.AccessPolicyRoleReader)},
	// 	},
	// }
}
```

#### func (*AccessPoliciesClient) [Delete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/accesspolicies_client.go#L119) [¶](#AccessPoliciesClient.Delete)

```
func (client *[AccessPoliciesClient](#AccessPoliciesClient)) Delete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), accessPolicyName [string](/builtin#string), options *[AccessPoliciesClientDeleteOptions](#AccessPoliciesClientDeleteOptions)) ([AccessPoliciesClientDeleteResponse](#AccessPoliciesClientDeleteResponse), [error](/builtin#error))
```

Delete - Deletes the access policy with the specified name in the specified subscription, resource group, and environment. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* accessPolicyName - The name of the Time Series Insights access policy associated with the specified environment.
* options - AccessPoliciesClientDeleteOptions contains the optional parameters for the AccessPoliciesClient.Delete method.

Example [¶](#example-AccessPoliciesClient.Delete)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/AccessPoliciesDelete.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	_, err = clientFactory.NewAccessPoliciesClient().Delete(ctx, "rg1", "env1", "ap1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
}
```

#### func (*AccessPoliciesClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/accesspolicies_client.go#L172) [¶](#AccessPoliciesClient.Get)

```
func (client *[AccessPoliciesClient](#AccessPoliciesClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), accessPolicyName [string](/builtin#string), options *[AccessPoliciesClientGetOptions](#AccessPoliciesClientGetOptions)) ([AccessPoliciesClientGetResponse](#AccessPoliciesClientGetResponse), [error](/builtin#error))
```

Get - Gets the access policy with the specified name in the specified environment. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* accessPolicyName - The name of the Time Series Insights access policy associated with the specified environment.
* options - AccessPoliciesClientGetOptions contains the optional parameters for the AccessPoliciesClient.Get method.

Example [¶](#example-AccessPoliciesClient.Get)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/AccessPoliciesGet.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewAccessPoliciesClient().Get(ctx, "rg1", "env1", "ap1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use the response here. We use the blank identifier just for demo purposes.
	_ = res
	// If the HTTP response code is 200 as defined in the example definition, the response
	// structure would look as follows. Note that all the values in the output are fake
	// values, for demo purposes only.
	// res.AccessPolicyResource = armtimeseriesinsights.AccessPolicyResource{
	// 	Name: to.Ptr("ap1"),
	// 	Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments/AccessPolicies"),
	// 	ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1/accessPolicies/ap1"),
	// 	Properties: &armtimeseriesinsights.AccessPolicyResourceProperties{
	// 		Description: to.Ptr("some description"),
	// 		PrincipalObjectID: to.Ptr("aGuid"),
	// 		Roles: []*armtimeseriesinsights.AccessPolicyRole{
	// 			to.Ptr(armtimeseriesinsights.AccessPolicyRoleReader)},
	// 	},
	// }
}
```

#### func (*AccessPoliciesClient) [ListByEnvironment](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/accesspolicies_client.go#L234) [¶](#AccessPoliciesClient.ListByEnvironment)

```
func (client *[AccessPoliciesClient](#AccessPoliciesClient)) ListByEnvironment(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), options *[AccessPoliciesClientListByEnvironmentOptions](#AccessPoliciesClientListByEnvironmentOptions)) ([AccessPoliciesClientListByEnvironmentResponse](#AccessPoliciesClientListByEnvironmentResponse), [error](/builtin#error))
```

ListByEnvironment - Lists all the available access policies associated with the environment. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* options - AccessPoliciesClientListByEnvironmentOptions contains the optional parameters for the AccessPoliciesClient.ListByEnvironment method.

Example [¶](#example-AccessPoliciesClient.ListByEnvironment)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/AccessPoliciesListByEnvironment.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewAccessPoliciesClient().ListByEnvironment(ctx, "rg1", "env1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use the response here. We use the blank identifier just for demo purposes.
	_ = res
	// If the HTTP response code is 200 as defined in the example definition, the response
	// structure would look as follows. Note that all the values in the output are fake
	// values, for demo purposes only.
	// res.AccessPolicyListResponse = armtimeseriesinsights.AccessPolicyListResponse{
	// 	Value: []*armtimeseriesinsights.AccessPolicyResource{
	// 		{
	// 			Name: to.Ptr("ap1"),
	// 			Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments/AccessPolicies"),
	// 			ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1/accessPolicies/ap1"),
	// 			Properties: &armtimeseriesinsights.AccessPolicyResourceProperties{
	// 				Description: to.Ptr("some description"),
	// 				PrincipalObjectID: to.Ptr("aGuid"),
	// 				Roles: []*armtimeseriesinsights.AccessPolicyRole{
	// 					to.Ptr(armtimeseriesinsights.AccessPolicyRoleReader)},
	// 			},
	// 		}},
	// }
}
```

#### func (*AccessPoliciesClient) [Update](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/accesspolicies_client.go#L293) [¶](#AccessPoliciesClient.Update)

```
func (client *[AccessPoliciesClient](#AccessPoliciesClient)) Update(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), accessPolicyName [string](/builtin#string), accessPolicyUpdateParameters [AccessPolicyUpdateParameters](#AccessPolicyUpdateParameters), options *[AccessPoliciesClientUpdateOptions](#AccessPoliciesClientUpdateOptions)) ([AccessPoliciesClientUpdateResponse](#AccessPoliciesClientUpdateResponse), [error](/builtin#error))
```

Update - Updates the access policy with the specified name in the specified subscription, resource group, and environment. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* accessPolicyName - The name of the Time Series Insights access policy associated with the specified environment.
* accessPolicyUpdateParameters - Request object that contains the updated information for the access policy.
* options - AccessPoliciesClientUpdateOptions contains the optional parameters for the AccessPoliciesClient.Update method.
Example [¶](#example-AccessPoliciesClient.Update)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/AccessPoliciesPatchRoles.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewAccessPoliciesClient().Update(ctx, "rg1", "env1", "ap1", armtimeseriesinsights.AccessPolicyUpdateParameters{
		Properties: &armtimeseriesinsights.AccessPolicyMutableProperties{
			Roles: []*armtimeseriesinsights.AccessPolicyRole{
				to.Ptr(armtimeseriesinsights.AccessPolicyRoleReader),
				to.Ptr(armtimeseriesinsights.AccessPolicyRoleContributor)},
		},
	}, nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use the response here. We use the blank identifier just for demo purposes.
	_ = res
	// If the HTTP response code is 200 as defined in the example definition, the response
	// structure would look as follows. Note that all the values in the output are fake
	// values, for demo purposes only.
	// res.AccessPolicyResource = armtimeseriesinsights.AccessPolicyResource{
	// 	Name: to.Ptr("ap1"),
	// 	Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments/AccessPolicies"),
	// 	ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1/accessPolicies/ap1"),
	// 	Properties: &armtimeseriesinsights.AccessPolicyResourceProperties{
	// 		Description: to.Ptr("some description"),
	// 		PrincipalObjectID: to.Ptr("aGuid"),
	// 		Roles: []*armtimeseriesinsights.AccessPolicyRole{
	// 			to.Ptr(armtimeseriesinsights.AccessPolicyRoleReader)},
	// 	},
	// }
}
```

#### type [AccessPoliciesClientCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L16) [¶](#AccessPoliciesClientCreateOrUpdateOptions)

```
type AccessPoliciesClientCreateOrUpdateOptions struct {
}
```

AccessPoliciesClientCreateOrUpdateOptions contains the optional parameters for the AccessPoliciesClient.CreateOrUpdate method.

#### type [AccessPoliciesClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L13) [¶](#AccessPoliciesClientCreateOrUpdateResponse)

```
type AccessPoliciesClientCreateOrUpdateResponse struct {
	[AccessPolicyResource](#AccessPolicyResource)
}
```

AccessPoliciesClientCreateOrUpdateResponse contains the response from method AccessPoliciesClient.CreateOrUpdate.
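Because the response struct embeds AccessPolicyResource, the resource's fields are promoted onto the response. A minimal sketch of reading them, written as a hypothetical helper (assuming `res` came back from a successful CreateOrUpdate call):

```
package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

// printCreated is a hypothetical helper; the embedded AccessPolicyResource
// promotes Name, ID, Type, and Properties onto the response struct, and every
// model field is a pointer, so each one is nil-checked before dereferencing.
func printCreated(res armtimeseriesinsights.AccessPoliciesClientCreateOrUpdateResponse) {
	if res.Name != nil {
		fmt.Printf("created access policy %q\n", *res.Name)
	}
	if res.Properties != nil {
		for _, role := range res.Properties.Roles {
			if role != nil {
				fmt.Printf("  role: %s\n", *role)
			}
		}
	}
}
```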
#### type [AccessPoliciesClientDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L21) [¶](#AccessPoliciesClientDeleteOptions)

```
type AccessPoliciesClientDeleteOptions struct {
}
```

AccessPoliciesClientDeleteOptions contains the optional parameters for the AccessPoliciesClient.Delete method.

#### type [AccessPoliciesClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L18) [¶](#AccessPoliciesClientDeleteResponse)

```
type AccessPoliciesClientDeleteResponse struct {
}
```

AccessPoliciesClientDeleteResponse contains the response from method AccessPoliciesClient.Delete.

#### type [AccessPoliciesClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L26) [¶](#AccessPoliciesClientGetOptions)

```
type AccessPoliciesClientGetOptions struct {
}
```

AccessPoliciesClientGetOptions contains the optional parameters for the AccessPoliciesClient.Get method.

#### type [AccessPoliciesClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L23) [¶](#AccessPoliciesClientGetResponse)

```
type AccessPoliciesClientGetResponse struct {
	[AccessPolicyResource](#AccessPolicyResource)
}
```

AccessPoliciesClientGetResponse contains the response from method AccessPoliciesClient.Get.

#### type [AccessPoliciesClientListByEnvironmentOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L32) [¶](#AccessPoliciesClientListByEnvironmentOptions)

```
type AccessPoliciesClientListByEnvironmentOptions struct {
}
```

AccessPoliciesClientListByEnvironmentOptions contains the optional parameters for the AccessPoliciesClient.ListByEnvironment method.

#### type [AccessPoliciesClientListByEnvironmentResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L28) [¶](#AccessPoliciesClientListByEnvironmentResponse)

```
type AccessPoliciesClientListByEnvironmentResponse struct {
	[AccessPolicyListResponse](#AccessPolicyListResponse)
}
```

AccessPoliciesClientListByEnvironmentResponse contains the response from method AccessPoliciesClient.ListByEnvironment.

#### type [AccessPoliciesClientUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L37) [¶](#AccessPoliciesClientUpdateOptions)

```
type AccessPoliciesClientUpdateOptions struct {
}
```

AccessPoliciesClientUpdateOptions contains the optional parameters for the AccessPoliciesClient.Update method.
#### type [AccessPoliciesClientUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L33) [¶](#AccessPoliciesClientUpdateResponse)

```
type AccessPoliciesClientUpdateResponse struct {
	[AccessPolicyResource](#AccessPolicyResource)
}
```

AccessPoliciesClientUpdateResponse contains the response from method AccessPoliciesClient.Update.

#### type [AccessPolicyCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L41) [¶](#AccessPolicyCreateOrUpdateParameters)

```
type AccessPolicyCreateOrUpdateParameters struct {
	// REQUIRED
	Properties *[AccessPolicyResourceProperties](#AccessPolicyResourceProperties) `json:"properties,omitempty"`
}
```

#### func (AccessPolicyCreateOrUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L20) [¶](#AccessPolicyCreateOrUpdateParameters.MarshalJSON) added in v1.1.0

```
func (a [AccessPolicyCreateOrUpdateParameters](#AccessPolicyCreateOrUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type AccessPolicyCreateOrUpdateParameters.

#### func (*AccessPolicyCreateOrUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L27) [¶](#AccessPolicyCreateOrUpdateParameters.UnmarshalJSON) added in v1.1.0

```
func (a *[AccessPolicyCreateOrUpdateParameters](#AccessPolicyCreateOrUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type AccessPolicyCreateOrUpdateParameters.

#### type [AccessPolicyListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L47) [¶](#AccessPolicyListResponse)

```
type AccessPolicyListResponse struct {
	// Result of the List access policies operation.
	Value []*[AccessPolicyResource](#AccessPolicyResource) `json:"value,omitempty"`
}
```

AccessPolicyListResponse - The response of the List access policies operation.

#### func (AccessPolicyListResponse) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L47) [¶](#AccessPolicyListResponse.MarshalJSON) added in v1.1.0

```
func (a [AccessPolicyListResponse](#AccessPolicyListResponse)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type AccessPolicyListResponse.
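Since these models implement json.Marshaller and json.Unmarshaller, they round-trip through the standard library's encoding/json. A minimal sketch, serializing the same parameters used in the CreateOrUpdate example above:

```
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	params := armtimeseriesinsights.AccessPolicyCreateOrUpdateParameters{
		Properties: &armtimeseriesinsights.AccessPolicyResourceProperties{
			Description:       to.Ptr("some description"),
			PrincipalObjectID: to.Ptr("aGuid"),
			Roles: []*armtimeseriesinsights.AccessPolicyRole{
				to.Ptr(armtimeseriesinsights.AccessPolicyRoleReader),
			},
		},
	}
	// json.Marshal picks up the type's custom MarshalJSON implementation.
	b, err := json.Marshal(params)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(b))
}
```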
#### func (*AccessPolicyListResponse) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L54) [¶](#AccessPolicyListResponse.UnmarshalJSON) added in v1.1.0 
```
func (a *[AccessPolicyListResponse](#AccessPolicyListResponse)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type AccessPolicyListResponse. 
#### type [AccessPolicyMutableProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L53) [¶](#AccessPolicyMutableProperties) 
```
type AccessPolicyMutableProperties struct {
	// A description of the access policy.
	Description *[string](/builtin#string) `json:"description,omitempty"`

	// The list of roles the principal is assigned on the environment.
	Roles []*[AccessPolicyRole](#AccessPolicyRole) `json:"roles,omitempty"`
}
```
AccessPolicyMutableProperties - An object that represents a set of mutable access policy resource properties. 
#### func (AccessPolicyMutableProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L74) [¶](#AccessPolicyMutableProperties.MarshalJSON) 
```
func (a [AccessPolicyMutableProperties](#AccessPolicyMutableProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type AccessPolicyMutableProperties. 
#### func (*AccessPolicyMutableProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L82) [¶](#AccessPolicyMutableProperties.UnmarshalJSON) added in v1.1.0 
```
func (a *[AccessPolicyMutableProperties](#AccessPolicyMutableProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type AccessPolicyMutableProperties. 
#### type [AccessPolicyResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L64) [¶](#AccessPolicyResource) 
```
type AccessPolicyResource struct {
	Properties *[AccessPolicyResourceProperties](#AccessPolicyResourceProperties) `json:"properties,omitempty"`

	// READ-ONLY; Resource Id
	ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"`

	// READ-ONLY; Resource name
	Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"`

	// READ-ONLY; Resource type
	Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"`
}
```
AccessPolicyResource - An access policy is used to grant users and applications access to the environment. Roles are assigned to service principals in Azure Active Directory. These roles define the actions the principal can perform through the Time Series Insights data plane APIs.
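The `MarshalJSON`/`UnmarshalJSON` methods documented around this type mean the models round-trip through the standard `encoding/json` package directly. A minimal sketch; the payload values are invented for illustration:

```
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	// A hand-written payload in the wire shape shown above; all values are fake.
	payload := []byte(`{
		"id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/environments/env1/accessPolicies/ap1",
		"name": "ap1",
		"type": "Microsoft.TimeSeriesInsights/environments/accessPolicies",
		"properties": {"description": "reader policy", "roles": ["Reader"]}
	}`)

	var resource armtimeseriesinsights.AccessPolicyResource
	// json.Unmarshal dispatches to the UnmarshalJSON method documented below.
	if err := json.Unmarshal(payload, &resource); err != nil {
		log.Fatalf("failed to unmarshal: %v", err)
	}
	fmt.Println(*resource.Name, *resource.Properties.Roles[0])

	// Marshalling goes through MarshalJSON and omits unset (nil) fields.
	out, err := json.Marshal(resource)
	if err != nil {
		log.Fatalf("failed to marshal: %v", err)
	}
	fmt.Println(string(out))
}
```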
#### func (AccessPolicyResource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L105) [¶](#AccessPolicyResource.MarshalJSON) added in v1.1.0 
```
func (a [AccessPolicyResource](#AccessPolicyResource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type AccessPolicyResource. 
#### func (*AccessPolicyResource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L115) [¶](#AccessPolicyResource.UnmarshalJSON) added in v1.1.0 
```
func (a *[AccessPolicyResource](#AccessPolicyResource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type AccessPolicyResource. 
#### type [AccessPolicyResourceProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L77) [¶](#AccessPolicyResourceProperties) 
```
type AccessPolicyResourceProperties struct {
	// A description of the access policy.
	Description *[string](/builtin#string) `json:"description,omitempty"`

	// The objectId of the principal in Azure Active Directory.
	PrincipalObjectID *[string](/builtin#string) `json:"principalObjectId,omitempty"`

	// The list of roles the principal is assigned on the environment.
	Roles []*[AccessPolicyRole](#AccessPolicyRole) `json:"roles,omitempty"`
}
```
#### func (AccessPolicyResourceProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L144) [¶](#AccessPolicyResourceProperties.MarshalJSON) 
```
func (a [AccessPolicyResourceProperties](#AccessPolicyResourceProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type AccessPolicyResourceProperties. 
#### func (*AccessPolicyResourceProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L153) [¶](#AccessPolicyResourceProperties.UnmarshalJSON) added in v1.1.0 
```
func (a *[AccessPolicyResourceProperties](#AccessPolicyResourceProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type AccessPolicyResourceProperties. 
#### type [AccessPolicyRole](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L18) [¶](#AccessPolicyRole) 
```
type AccessPolicyRole [string](/builtin#string)
```
AccessPolicyRole - A role defining the data plane operations that a principal can perform on a Time Series Insights client.
``` const ( AccessPolicyRoleContributor [AccessPolicyRole](#AccessPolicyRole) = "Contributor" AccessPolicyRoleReader [AccessPolicyRole](#AccessPolicyRole) = "Reader" ) ``` #### func [PossibleAccessPolicyRoleValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L26) [¶](#PossibleAccessPolicyRoleValues) ``` func PossibleAccessPolicyRoleValues() [][AccessPolicyRole](#AccessPolicyRole) ``` PossibleAccessPolicyRoleValues returns the possible values for the AccessPolicyRole const type. #### type [AccessPolicyUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L88) [¶](#AccessPolicyUpdateParameters) ``` type AccessPolicyUpdateParameters struct { // An object that represents a set of mutable access policy resource properties. Properties *[AccessPolicyMutableProperties](#AccessPolicyMutableProperties) `json:"properties,omitempty"` } ``` #### func (AccessPolicyUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L179) [¶](#AccessPolicyUpdateParameters.MarshalJSON) ``` func (a [AccessPolicyUpdateParameters](#AccessPolicyUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type AccessPolicyUpdateParameters. #### func (*AccessPolicyUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L186) [¶](#AccessPolicyUpdateParameters.UnmarshalJSON) added in v1.1.0 ``` func (a *[AccessPolicyUpdateParameters](#AccessPolicyUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type AccessPolicyUpdateParameters. #### type [AzureEventSourceProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L94) [¶](#AzureEventSourceProperties) ``` type AzureEventSourceProperties struct { // REQUIRED; The resource id of the event source in Azure Resource Manager. EventSourceResourceID *[string](/builtin#string) `json:"eventSourceResourceId,omitempty"` // An object that contains the details about the starting point in time to ingest events. IngressStartAt *[IngressStartAtProperties](#IngressStartAtProperties) `json:"ingressStartAt,omitempty"` // An object that represents the local timestamp property. It contains the format of local timestamp that needs to be used // and the corresponding timezone offset information. If a value isn't specified // for localTimestamp, or if null, then the local timestamp will not be ingressed with the events. LocalTimestamp *[LocalTimestamp](#LocalTimestamp) `json:"localTimestamp,omitempty"` // The event property that will be used as the event source's timestamp. If a value isn't specified for timestampPropertyName, // or if null or empty-string is specified, the event creation time will be // used. 
TimestampPropertyName *[string](/builtin#string) `json:"timestampPropertyName,omitempty"` // READ-ONLY; The time the resource was created. CreationTime *[time](/time).[Time](/time#Time) `json:"creationTime,omitempty" azure:"ro"` // READ-ONLY; Provisioning state of the resource. ProvisioningState *[ProvisioningState](#ProvisioningState) `json:"provisioningState,omitempty" azure:"ro"` } ``` AzureEventSourceProperties - Properties of an event source that reads events from an event broker in Azure. #### func (AzureEventSourceProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L206) [¶](#AzureEventSourceProperties.MarshalJSON) ``` func (a [AzureEventSourceProperties](#AzureEventSourceProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type AzureEventSourceProperties. #### func (*AzureEventSourceProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L218) [¶](#AzureEventSourceProperties.UnmarshalJSON) ``` func (a *[AzureEventSourceProperties](#AzureEventSourceProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type AzureEventSourceProperties. #### type [ClientFactory](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/client_factory.go#L19) [¶](#ClientFactory) added in v1.1.0 ``` type ClientFactory struct { // contains filtered or unexported fields } ``` ClientFactory is a client factory used to create any client in this module. Don't use this type directly, use NewClientFactory instead. #### func [NewClientFactory](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/client_factory.go#L30) [¶](#NewClientFactory) added in v1.1.0 ``` func NewClientFactory(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[ClientFactory](#ClientFactory), [error](/builtin#error)) ``` NewClientFactory creates a new instance of ClientFactory with the specified values. The parameter values will be propagated to any client created from this factory. * subscriptionID - Azure Subscription ID. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
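One factory can mint every client in the module, so the subscription ID and credential are wired up exactly once. A small sketch using only the factory methods listed below:

```
package main

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	// subscriptionID and credential are propagated to every client the
	// factory creates; pass nil options to accept the defaults.
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	environments := clientFactory.NewEnvironmentsClient()
	eventSources := clientFactory.NewEventSourcesClient()
	accessPolicies := clientFactory.NewAccessPoliciesClient()
	_, _, _ = environments, eventSources, accessPolicies
}
```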
#### func (*ClientFactory) [NewAccessPoliciesClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/client_factory.go#L61) [¶](#ClientFactory.NewAccessPoliciesClient) added in v1.1.0 ``` func (c *[ClientFactory](#ClientFactory)) NewAccessPoliciesClient() *[AccessPoliciesClient](#AccessPoliciesClient) ``` #### func (*ClientFactory) [NewEnvironmentsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/client_factory.go#L46) [¶](#ClientFactory.NewEnvironmentsClient) added in v1.1.0 ``` func (c *[ClientFactory](#ClientFactory)) NewEnvironmentsClient() *[EnvironmentsClient](#EnvironmentsClient) ``` #### func (*ClientFactory) [NewEventSourcesClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/client_factory.go#L51) [¶](#ClientFactory.NewEventSourcesClient) added in v1.1.0 ``` func (c *[ClientFactory](#ClientFactory)) NewEventSourcesClient() *[EventSourcesClient](#EventSourcesClient) ``` #### func (*ClientFactory) [NewOperationsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/client_factory.go#L41) [¶](#ClientFactory.NewOperationsClient) added in v1.1.0 ``` func (c *[ClientFactory](#ClientFactory)) NewOperationsClient() *[OperationsClient](#OperationsClient) ``` #### func (*ClientFactory) [NewReferenceDataSetsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/client_factory.go#L56) [¶](#ClientFactory.NewReferenceDataSetsClient) added in v1.1.0 ``` func (c *[ClientFactory](#ClientFactory)) NewReferenceDataSetsClient() *[ReferenceDataSetsClient](#ReferenceDataSetsClient) ``` #### type [CreateOrUpdateTrackedResourceProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L119) [¶](#CreateOrUpdateTrackedResourceProperties) ``` type CreateOrUpdateTrackedResourceProperties struct { // REQUIRED; The location of the resource. Location *[string](/builtin#string) `json:"location,omitempty"` // Key-value pairs of additional properties for the resource. Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` } ``` CreateOrUpdateTrackedResourceProperties - Properties required to create any resource tracked by Azure Resource Manager. #### func (CreateOrUpdateTrackedResourceProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L253) [¶](#CreateOrUpdateTrackedResourceProperties.MarshalJSON) ``` func (c [CreateOrUpdateTrackedResourceProperties](#CreateOrUpdateTrackedResourceProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type CreateOrUpdateTrackedResourceProperties. 
#### func (*CreateOrUpdateTrackedResourceProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L261) [¶](#CreateOrUpdateTrackedResourceProperties.UnmarshalJSON) added in v1.1.0 
```
func (c *[CreateOrUpdateTrackedResourceProperties](#CreateOrUpdateTrackedResourceProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type CreateOrUpdateTrackedResourceProperties. 
#### type [DataStringComparisonBehavior](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L37) [¶](#DataStringComparisonBehavior) 
```
type DataStringComparisonBehavior [string](/builtin#string)
```
DataStringComparisonBehavior - The reference data set key comparison behavior can be set using this property. By default, the value is 'Ordinal' - which means case sensitive key comparison will be performed while joining reference data with events or while adding new reference data. When 'OrdinalIgnoreCase' is set, case insensitive comparison will be used. 
```
const (
	DataStringComparisonBehaviorOrdinal           [DataStringComparisonBehavior](#DataStringComparisonBehavior) = "Ordinal"
	DataStringComparisonBehaviorOrdinalIgnoreCase [DataStringComparisonBehavior](#DataStringComparisonBehavior) = "OrdinalIgnoreCase"
)
```
#### func [PossibleDataStringComparisonBehaviorValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L45) [¶](#PossibleDataStringComparisonBehaviorValues) 
```
func PossibleDataStringComparisonBehaviorValues() [][DataStringComparisonBehavior](#DataStringComparisonBehavior)
```
PossibleDataStringComparisonBehaviorValues returns the possible values for the DataStringComparisonBehavior const type. 
#### type [Dimension](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L128) [¶](#Dimension) 
```
type Dimension struct {
	// Display name of dimension.
	DisplayName *[string](/builtin#string) `json:"displayName,omitempty"`

	// Name of dimension.
	Name *[string](/builtin#string) `json:"name,omitempty"`
}
```
Dimension of blobs, which may be blob type or access tier. 
#### func (Dimension) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L284) [¶](#Dimension.MarshalJSON) added in v1.1.0 
```
func (d [Dimension](#Dimension)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type Dimension. 
#### func (*Dimension) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L292) [¶](#Dimension.UnmarshalJSON) added in v1.1.0 
```
func (d *[Dimension](#Dimension)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type Dimension.
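The `Possible*Values` helpers, such as `PossibleDataStringComparisonBehaviorValues` above, are convenient for validating input before it reaches the service. A minimal sketch:

```
package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

// isValidComparisonBehavior reports whether s names a known
// DataStringComparisonBehavior constant ("Ordinal" or "OrdinalIgnoreCase").
func isValidComparisonBehavior(s string) bool {
	for _, v := range armtimeseriesinsights.PossibleDataStringComparisonBehaviorValues() {
		if string(v) == s {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isValidComparisonBehavior("OrdinalIgnoreCase")) // true
	fmt.Println(isValidComparisonBehavior("CaseInsensitive"))   // false
}
```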
#### type [EnvironmentCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L146) [¶](#EnvironmentCreateOrUpdateParameters) ``` type EnvironmentCreateOrUpdateParameters struct { // REQUIRED; The kind of the environment. Kind *[EnvironmentKind](#EnvironmentKind) `json:"kind,omitempty"` // REQUIRED; The location of the resource. Location *[string](/builtin#string) `json:"location,omitempty"` // REQUIRED; The sku determines the type of environment, either Gen1 (S1 or S2) or Gen2 (L1). For Gen1 environments the sku // determines the capacity of the environment, the ingress rate, and the billing rate. SKU *[SKU](#SKU) `json:"sku,omitempty"` // Key-value pairs of additional properties for the resource. Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` } ``` EnvironmentCreateOrUpdateParameters - Parameters supplied to the CreateOrUpdate Environment operation. #### func (*EnvironmentCreateOrUpdateParameters) [GetEnvironmentCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L163) [¶](#EnvironmentCreateOrUpdateParameters.GetEnvironmentCreateOrUpdateParameters) ``` func (e *[EnvironmentCreateOrUpdateParameters](#EnvironmentCreateOrUpdateParameters)) GetEnvironmentCreateOrUpdateParameters() *[EnvironmentCreateOrUpdateParameters](#EnvironmentCreateOrUpdateParameters) ``` GetEnvironmentCreateOrUpdateParameters implements the EnvironmentCreateOrUpdateParametersClassification interface for type EnvironmentCreateOrUpdateParameters. #### func (EnvironmentCreateOrUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L315) [¶](#EnvironmentCreateOrUpdateParameters.MarshalJSON) ``` func (e [EnvironmentCreateOrUpdateParameters](#EnvironmentCreateOrUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EnvironmentCreateOrUpdateParameters. #### func (*EnvironmentCreateOrUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L325) [¶](#EnvironmentCreateOrUpdateParameters.UnmarshalJSON) added in v1.1.0 ``` func (e *[EnvironmentCreateOrUpdateParameters](#EnvironmentCreateOrUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EnvironmentCreateOrUpdateParameters. #### type [EnvironmentCreateOrUpdateParametersClassification](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L140) [¶](#EnvironmentCreateOrUpdateParametersClassification) ``` type EnvironmentCreateOrUpdateParametersClassification interface { // GetEnvironmentCreateOrUpdateParameters returns the EnvironmentCreateOrUpdateParameters content of the underlying type. 
GetEnvironmentCreateOrUpdateParameters() *[EnvironmentCreateOrUpdateParameters](#EnvironmentCreateOrUpdateParameters) } ``` EnvironmentCreateOrUpdateParametersClassification provides polymorphic access to related types. Call the interface's GetEnvironmentCreateOrUpdateParameters() method to access the common type. Use a type switch to determine the concrete type. The possible types are: - *EnvironmentCreateOrUpdateParameters, *Gen1EnvironmentCreateOrUpdateParameters, *Gen2EnvironmentCreateOrUpdateParameters #### type [EnvironmentKind](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L53) [¶](#EnvironmentKind) ``` type EnvironmentKind [string](/builtin#string) ``` EnvironmentKind - The kind of the environment. ``` const ( EnvironmentKindGen1 [EnvironmentKind](#EnvironmentKind) = "Gen1" EnvironmentKindGen2 [EnvironmentKind](#EnvironmentKind) = "Gen2" ) ``` #### func [PossibleEnvironmentKindValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L61) [¶](#PossibleEnvironmentKindValues) ``` func PossibleEnvironmentKindValues() [][EnvironmentKind](#EnvironmentKind) ``` PossibleEnvironmentKindValues returns the possible values for the EnvironmentKind const type. #### type [EnvironmentListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L168) [¶](#EnvironmentListResponse) ``` type EnvironmentListResponse struct { // Result of the List Environments operation. Value [][EnvironmentResourceClassification](#EnvironmentResourceClassification) `json:"value,omitempty"` } ``` EnvironmentListResponse - The response of the List Environments operation. #### func (EnvironmentListResponse) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L354) [¶](#EnvironmentListResponse.MarshalJSON) added in v1.1.0 ``` func (e [EnvironmentListResponse](#EnvironmentListResponse)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EnvironmentListResponse. #### func (*EnvironmentListResponse) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L361) [¶](#EnvironmentListResponse.UnmarshalJSON) ``` func (e *[EnvironmentListResponse](#EnvironmentListResponse)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EnvironmentListResponse. #### type [EnvironmentResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L184) [¶](#EnvironmentResource) ``` type EnvironmentResource struct { // REQUIRED; The kind of the environment. 
Kind *[EnvironmentResourceKind](#EnvironmentResourceKind) `json:"kind,omitempty"` // REQUIRED; Resource location Location *[string](/builtin#string) `json:"location,omitempty"` // REQUIRED; The sku determines the type of environment, either Gen1 (S1 or S2) or Gen2 (L1). For Gen1 environments the sku // determines the capacity of the environment, the ingress rate, and the billing rate. SKU *[SKU](#SKU) `json:"sku,omitempty"` // Resource tags Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` // READ-ONLY; Resource Id ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"` // READ-ONLY; Resource name Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"` // READ-ONLY; Resource type Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"` } ``` EnvironmentResource - An environment is a set of time-series data available for query, and is the top level Azure Time Series Insights resource. #### func (*EnvironmentResource) [GetEnvironmentResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L209) [¶](#EnvironmentResource.GetEnvironmentResource) ``` func (e *[EnvironmentResource](#EnvironmentResource)) GetEnvironmentResource() *[EnvironmentResource](#EnvironmentResource) ``` GetEnvironmentResource implements the EnvironmentResourceClassification interface for type EnvironmentResource. #### func (EnvironmentResource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L381) [¶](#EnvironmentResource.MarshalJSON) added in v1.1.0 ``` func (e [EnvironmentResource](#EnvironmentResource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EnvironmentResource. #### func (*EnvironmentResource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L394) [¶](#EnvironmentResource.UnmarshalJSON) added in v1.1.0 ``` func (e *[EnvironmentResource](#EnvironmentResource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EnvironmentResource. #### type [EnvironmentResourceClassification](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L177) [¶](#EnvironmentResourceClassification) ``` type EnvironmentResourceClassification interface { // GetEnvironmentResource returns the EnvironmentResource content of the underlying type. GetEnvironmentResource() *[EnvironmentResource](#EnvironmentResource) } ``` EnvironmentResourceClassification provides polymorphic access to related types. Call the interface's GetEnvironmentResource() method to access the common type. Use a type switch to determine the concrete type. 
The possible types are: - *EnvironmentResource, *Gen1EnvironmentResource, *Gen2EnvironmentResource #### type [EnvironmentResourceKind](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L69) [¶](#EnvironmentResourceKind) ``` type EnvironmentResourceKind [string](/builtin#string) ``` EnvironmentResourceKind - The kind of the environment. ``` const ( EnvironmentResourceKindGen1 [EnvironmentResourceKind](#EnvironmentResourceKind) = "Gen1" EnvironmentResourceKindGen2 [EnvironmentResourceKind](#EnvironmentResourceKind) = "Gen2" ) ``` #### func [PossibleEnvironmentResourceKindValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L77) [¶](#PossibleEnvironmentResourceKindValues) ``` func PossibleEnvironmentResourceKindValues() [][EnvironmentResourceKind](#EnvironmentResourceKind) ``` PossibleEnvironmentResourceKindValues returns the possible values for the EnvironmentResourceKind const type. #### type [EnvironmentResourceProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L212) [¶](#EnvironmentResourceProperties) ``` type EnvironmentResourceProperties struct { // READ-ONLY; The time the resource was created. CreationTime *[time](/time).[Time](/time#Time) `json:"creationTime,omitempty" azure:"ro"` // READ-ONLY; The fully qualified domain name used to access the environment data, e.g. to query the environment's events // or upload reference data for the environment. DataAccessFqdn *[string](/builtin#string) `json:"dataAccessFqdn,omitempty" azure:"ro"` // READ-ONLY; An id used to access the environment data, e.g. to query the environment's events or upload reference data for // the environment. DataAccessID *[string](/builtin#string) `json:"dataAccessId,omitempty" azure:"ro"` // READ-ONLY; Provisioning state of the resource. ProvisioningState *[ProvisioningState](#ProvisioningState) `json:"provisioningState,omitempty" azure:"ro"` // READ-ONLY; An object that represents the status of the environment, and its internal state in the Time Series Insights // service. Status *[EnvironmentStatus](#EnvironmentStatus) `json:"status,omitempty" azure:"ro"` } ``` EnvironmentResourceProperties - Properties of the environment. #### func (EnvironmentResourceProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L432) [¶](#EnvironmentResourceProperties.MarshalJSON) ``` func (e [EnvironmentResourceProperties](#EnvironmentResourceProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EnvironmentResourceProperties. 
#### func (*EnvironmentResourceProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L443) [¶](#EnvironmentResourceProperties.UnmarshalJSON) ``` func (e *[EnvironmentResourceProperties](#EnvironmentResourceProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EnvironmentResourceProperties. #### type [EnvironmentStateDetails](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L233) [¶](#EnvironmentStateDetails) ``` type EnvironmentStateDetails struct { // Contains the code that represents the reason of an environment being in a particular state. Can be used to programmatically // handle specific cases. Code *[string](/builtin#string) `json:"code,omitempty"` // A message that describes the state in detail. Message *[string](/builtin#string) `json:"message,omitempty"` } ``` EnvironmentStateDetails - An object that contains the details about an environment's state. #### func (EnvironmentStateDetails) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L475) [¶](#EnvironmentStateDetails.MarshalJSON) added in v1.1.0 ``` func (e [EnvironmentStateDetails](#EnvironmentStateDetails)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EnvironmentStateDetails. #### func (*EnvironmentStateDetails) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L483) [¶](#EnvironmentStateDetails.UnmarshalJSON) added in v1.1.0 ``` func (e *[EnvironmentStateDetails](#EnvironmentStateDetails)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EnvironmentStateDetails. #### type [EnvironmentStatus](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L244) [¶](#EnvironmentStatus) ``` type EnvironmentStatus struct { // READ-ONLY; An object that represents the status of ingress on an environment. Ingress *[IngressEnvironmentStatus](#IngressEnvironmentStatus) `json:"ingress,omitempty" azure:"ro"` // READ-ONLY; An object that represents the status of warm storage on an environment. WarmStorage *[WarmStorageEnvironmentStatus](#WarmStorageEnvironmentStatus) `json:"warmStorage,omitempty" azure:"ro"` } ``` EnvironmentStatus - An object that represents the status of the environment, and its internal state in the Time Series Insights service. 
#### func (EnvironmentStatus) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L506) [¶](#EnvironmentStatus.MarshalJSON) added in v1.1.0 ``` func (e [EnvironmentStatus](#EnvironmentStatus)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EnvironmentStatus. #### func (*EnvironmentStatus) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L514) [¶](#EnvironmentStatus.UnmarshalJSON) added in v1.1.0 ``` func (e *[EnvironmentStatus](#EnvironmentStatus)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EnvironmentStatus. #### type [EnvironmentUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L262) [¶](#EnvironmentUpdateParameters) ``` type EnvironmentUpdateParameters struct { // REQUIRED; The kind of the environment. Kind *[EnvironmentKind](#EnvironmentKind) `json:"kind,omitempty"` // Key-value pairs of additional properties for the environment. Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` } ``` EnvironmentUpdateParameters - Parameters supplied to the Update Environment operation. #### func (*EnvironmentUpdateParameters) [GetEnvironmentUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L271) [¶](#EnvironmentUpdateParameters.GetEnvironmentUpdateParameters) ``` func (e *[EnvironmentUpdateParameters](#EnvironmentUpdateParameters)) GetEnvironmentUpdateParameters() *[EnvironmentUpdateParameters](#EnvironmentUpdateParameters) ``` GetEnvironmentUpdateParameters implements the EnvironmentUpdateParametersClassification interface for type EnvironmentUpdateParameters. #### func (EnvironmentUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L537) [¶](#EnvironmentUpdateParameters.MarshalJSON) ``` func (e [EnvironmentUpdateParameters](#EnvironmentUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EnvironmentUpdateParameters. #### func (*EnvironmentUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L545) [¶](#EnvironmentUpdateParameters.UnmarshalJSON) added in v1.1.0 ``` func (e *[EnvironmentUpdateParameters](#EnvironmentUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EnvironmentUpdateParameters. 
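The `*Classification` interfaces in this package (for example `EnvironmentResourceClassification`, documented above) are typically narrowed with a type switch, as their documentation suggests. A minimal sketch; it assumes the concrete Gen1/Gen2 types expose the same flattened `Name` field that the examples further down show for `Gen1EnvironmentResource`:

```
package main

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

// describe narrows the polymorphic payload to its concrete generation.
// Callers must pass a non-nil value; this sketch omits nil checks.
func describe(env armtimeseriesinsights.EnvironmentResourceClassification) {
	switch e := env.(type) {
	case *armtimeseriesinsights.Gen1EnvironmentResource:
		log.Printf("Gen1 environment: %s", *e.Name)
	case *armtimeseriesinsights.Gen2EnvironmentResource:
		log.Printf("Gen2 environment: %s", *e.Name)
	default:
		// Fall back to the common fields available on every environment.
		log.Printf("environment: %s", *e.GetEnvironmentResource().Name)
	}
}

func main() {
	name := "env1"
	describe(&armtimeseriesinsights.Gen1EnvironmentResource{Name: &name})
}
```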
#### type [EnvironmentUpdateParametersClassification](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L256) [¶](#EnvironmentUpdateParametersClassification) ``` type EnvironmentUpdateParametersClassification interface { // GetEnvironmentUpdateParameters returns the EnvironmentUpdateParameters content of the underlying type. GetEnvironmentUpdateParameters() *[EnvironmentUpdateParameters](#EnvironmentUpdateParameters) } ``` EnvironmentUpdateParametersClassification provides polymorphic access to related types. Call the interface's GetEnvironmentUpdateParameters() method to access the common type. Use a type switch to determine the concrete type. The possible types are: - *EnvironmentUpdateParameters, *Gen1EnvironmentUpdateParameters, *Gen2EnvironmentUpdateParameters #### type [EnvironmentsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/environments_client.go#L26) [¶](#EnvironmentsClient) ``` type EnvironmentsClient struct { // contains filtered or unexported fields } ``` EnvironmentsClient contains the methods for the Environments group. Don't use this type directly, use NewEnvironmentsClient() instead. #### func [NewEnvironmentsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/environments_client.go#L35) [¶](#NewEnvironmentsClient) ``` func NewEnvironmentsClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[EnvironmentsClient](#EnvironmentsClient), [error](/builtin#error)) ``` NewEnvironmentsClient creates a new instance of EnvironmentsClient with the specified values. * subscriptionID - Azure Subscription ID. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. #### func (*EnvironmentsClient) [BeginCreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/environments_client.go#L56) [¶](#EnvironmentsClient.BeginCreateOrUpdate) ``` func (client *[EnvironmentsClient](#EnvironmentsClient)) BeginCreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), parameters [EnvironmentCreateOrUpdateParametersClassification](#EnvironmentCreateOrUpdateParametersClassification), options *[EnvironmentsClientBeginCreateOrUpdateOptions](#EnvironmentsClientBeginCreateOrUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[EnvironmentsClientCreateOrUpdateResponse](#EnvironmentsClientCreateOrUpdateResponse)], [error](/builtin#error)) ``` BeginCreateOrUpdate - Create or update an environment in the specified subscription and resource group. If the operation fails it returns an *azcore.ResponseError type. 
Generated from API version 2020-05-15 * resourceGroupName - Name of an Azure Resource group. * environmentName - Name of the environment * parameters - Parameters for creating an environment resource. * options - EnvironmentsClientBeginCreateOrUpdateOptions contains the optional parameters for the EnvironmentsClient.BeginCreateOrUpdate method. Example [¶](#example-EnvironmentsClient.BeginCreateOrUpdate) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/EnvironmentsCreate.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewEnvironmentsClient().BeginCreateOrUpdate(ctx, "rg1", "env1", &armtimeseriesinsights.Gen1EnvironmentCreateOrUpdateParameters{ Location: to.Ptr("West US"), Kind: to.Ptr(armtimeseriesinsights.EnvironmentKindGen1), SKU: &armtimeseriesinsights.SKU{ Name: to.Ptr(armtimeseriesinsights.SKUNameS1), Capacity: to.Ptr[int32](1), }, Properties: &armtimeseriesinsights.Gen1EnvironmentCreationProperties{ DataRetentionTime: to.Ptr("P31D"), PartitionKeyProperties: []*armtimeseriesinsights.TimeSeriesIDProperty{ { Name: to.Ptr("DeviceId1"), Type: to.Ptr(armtimeseriesinsights.PropertyTypeString), }}, }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res = armtimeseriesinsights.EnvironmentsClientCreateOrUpdateResponse{
	// 	EnvironmentResourceClassification: &armtimeseriesinsights.Gen1EnvironmentResource{
	// 		Name: to.Ptr("env1"),
	// 		Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments"),
	// 		ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1"),
	// 		Location: to.Ptr("West US"),
	// 		Tags: map[string]*string{
	// 		},
	// 		Kind: to.Ptr(armtimeseriesinsights.EnvironmentResourceKindGen1),
	// 		SKU: &armtimeseriesinsights.SKU{
	// 			Name: to.Ptr(armtimeseriesinsights.SKUNameS1),
	// 			Capacity: to.Ptr[int32](1),
	// 		},
	// 		Properties: &armtimeseriesinsights.Gen1EnvironmentResourceProperties{
	// 			CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
	// 			ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
	// 			DataRetentionTime: to.Ptr("P31D"),
	// 		},
	// 	},
	// }
}
```
#### func (*EnvironmentsClient) [BeginUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/environments_client.go#L332) [¶](#EnvironmentsClient.BeginUpdate) 
```
func (client *[EnvironmentsClient](#EnvironmentsClient)) BeginUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), environmentUpdateParameters [EnvironmentUpdateParametersClassification](#EnvironmentUpdateParametersClassification), options *[EnvironmentsClientBeginUpdateOptions](#EnvironmentsClientBeginUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[EnvironmentsClientUpdateResponse](#EnvironmentsClientUpdateResponse)], [error](/builtin#error))
```
BeginUpdate - Updates the environment with the specified name in the specified subscription and resource group. If the operation fails it returns an *azcore.ResponseError type. 
Generated from API version 2020-05-15 
* resourceGroupName - Name of an Azure Resource group. 
* environmentName - The name of the Time Series Insights environment associated with the specified resource group. 
* environmentUpdateParameters - Request object that contains the updated information for the environment. 
* options - EnvironmentsClientBeginUpdateOptions contains the optional parameters for the EnvironmentsClient.BeginUpdate method.
Example [¶](#example-EnvironmentsClient.BeginUpdate) 
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/EnvironmentsPatchTags.json> 
```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	poller, err := clientFactory.NewEnvironmentsClient().BeginUpdate(ctx, "rg1", "env1", &armtimeseriesinsights.EnvironmentUpdateParameters{
		Tags: map[string]*string{
			"someTag": to.Ptr("someTagValue"),
		},
	}, nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	res, err := poller.PollUntilDone(ctx, nil)
	if err != nil {
		log.Fatalf("failed to pull the result: %v", err)
	}
	// You could use response here. We use blank identifier for just demo purposes.
	_ = res
	// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
	// res = armtimeseriesinsights.EnvironmentsClientUpdateResponse{
	// 	EnvironmentResourceClassification: &armtimeseriesinsights.Gen1EnvironmentResource{
	// 		Name: to.Ptr("env1"),
	// 		Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments"),
	// 		ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1"),
	// 		Location: to.Ptr("West US"),
	// 		Tags: map[string]*string{
	// 			"someTag": to.Ptr("someTagValue"),
	// 		},
	// 		Kind: to.Ptr(armtimeseriesinsights.EnvironmentResourceKindGen1),
	// 		SKU: &armtimeseriesinsights.SKU{
	// 			Name: to.Ptr(armtimeseriesinsights.SKUNameS1),
	// 			Capacity: to.Ptr[int32](10),
	// 		},
	// 		Properties: &armtimeseriesinsights.Gen1EnvironmentResourceProperties{
	// 			CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
	// 			ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
	// 			DataRetentionTime: to.Ptr("P31D"),
	// 		},
	// 	},
	// }
}
```
#### func (*EnvironmentsClient) [Delete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/environments_client.go#L120) [¶](#EnvironmentsClient.Delete) 
```
func (client *[EnvironmentsClient](#EnvironmentsClient)) Delete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), options *[EnvironmentsClientDeleteOptions](#EnvironmentsClientDeleteOptions)) ([EnvironmentsClientDeleteResponse](#EnvironmentsClientDeleteResponse), [error](/builtin#error))
```
Delete - Deletes the environment with the specified name in the specified subscription and resource group. If the operation fails it returns an *azcore.ResponseError type. 
Generated from API version 2020-05-15 
* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group. 
* options - EnvironmentsClientDeleteOptions contains the optional parameters for the EnvironmentsClient.Delete method. 
Example [¶](#example-EnvironmentsClient.Delete) 
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/EnvironmentsDelete.json> 
```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	_, err = clientFactory.NewEnvironmentsClient().Delete(ctx, "rg1", "env1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
}
```
#### func (*EnvironmentsClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/environments_client.go#L168) [¶](#EnvironmentsClient.Get) 
```
func (client *[EnvironmentsClient](#EnvironmentsClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), options *[EnvironmentsClientGetOptions](#EnvironmentsClientGetOptions)) ([EnvironmentsClientGetResponse](#EnvironmentsClientGetResponse), [error](/builtin#error))
```
Get - Gets the environment with the specified name in the specified subscription and resource group. If the operation fails it returns an *azcore.ResponseError type. 
Generated from API version 2020-05-15 
* resourceGroupName - Name of an Azure Resource group. 
* environmentName - The name of the Time Series Insights environment associated with the specified resource group. 
* options - EnvironmentsClientGetOptions contains the optional parameters for the EnvironmentsClient.Get method. 
Example [¶](#example-EnvironmentsClient.Get) 
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/EnvironmentsGet.json> 
```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewEnvironmentsClient().Get(ctx, "rg1", "env1", &armtimeseriesinsights.EnvironmentsClientGetOptions{Expand: nil})
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use response here. We use blank identifier for just demo purposes.
_ = res
	// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
	// res = armtimeseriesinsights.EnvironmentsClientGetResponse{
	// 	EnvironmentResourceClassification: &armtimeseriesinsights.Gen1EnvironmentResource{
	// 		Name: to.Ptr("env1"),
	// 		Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments"),
	// 		ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1"),
	// 		Location: to.Ptr("West US"),
	// 		Tags: map[string]*string{
	// 		},
	// 		Kind: to.Ptr(armtimeseriesinsights.EnvironmentResourceKindGen1),
	// 		SKU: &armtimeseriesinsights.SKU{
	// 			Name: to.Ptr(armtimeseriesinsights.SKUNameS1),
	// 			Capacity: to.Ptr[int32](1),
	// 		},
	// 		Properties: &armtimeseriesinsights.Gen1EnvironmentResourceProperties{
	// 			CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
	// 			ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
	// 			DataRetentionTime: to.Ptr("P31D"),
	// 			PartitionKeyProperties: []*armtimeseriesinsights.TimeSeriesIDProperty{
	// 				{
	// 					Name: to.Ptr("DeviceId1"),
	// 					Type: to.Ptr(armtimeseriesinsights.PropertyTypeString),
	// 			}},
	// 		},
	// 	},
	// }
}
```
#### func (*EnvironmentsClient) [ListByResourceGroup](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/environments_client.go#L229) [¶](#EnvironmentsClient.ListByResourceGroup) 
```
func (client *[EnvironmentsClient](#EnvironmentsClient)) ListByResourceGroup(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), options *[EnvironmentsClientListByResourceGroupOptions](#EnvironmentsClientListByResourceGroupOptions)) ([EnvironmentsClientListByResourceGroupResponse](#EnvironmentsClientListByResourceGroupResponse), [error](/builtin#error))
```
ListByResourceGroup - Lists all the available environments associated with the subscription and within the specified resource group. If the operation fails it returns an *azcore.ResponseError type. 
Generated from API version 2020-05-15 
* resourceGroupName - Name of an Azure Resource group. 
* options - EnvironmentsClientListByResourceGroupOptions contains the optional parameters for the EnvironmentsClient.ListByResourceGroup method.
Example [¶](#example-EnvironmentsClient.ListByResourceGroup)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/EnvironmentsListByResourceGroup.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewEnvironmentsClient().ListByResourceGroup(ctx, "rg1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use response here. We use blank identifier for just demo purposes.
	_ = res
	// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
	// res.EnvironmentListResponse = armtimeseriesinsights.EnvironmentListResponse{
	// Value: []armtimeseriesinsights.EnvironmentResourceClassification{
	// &armtimeseriesinsights.Gen1EnvironmentResource{
	// Name: to.Ptr("env1"),
	// Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments"),
	// ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1"),
	// Location: to.Ptr("West US"),
	// Tags: map[string]*string{
	// },
	// Kind: to.Ptr(armtimeseriesinsights.EnvironmentResourceKindGen1),
	// SKU: &armtimeseriesinsights.SKU{
	// Name: to.Ptr(armtimeseriesinsights.SKUNameS1),
	// Capacity: to.Ptr[int32](1),
	// },
	// Properties: &armtimeseriesinsights.Gen1EnvironmentResourceProperties{
	// CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
	// ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
	// DataRetentionTime: to.Ptr("P31D"),
	// },
	// }},
	// }
}
```

#### func (*EnvironmentsClient) [ListBySubscription](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/environments_client.go#L281) [¶](#EnvironmentsClient.ListBySubscription)

```
func (client *[EnvironmentsClient](#EnvironmentsClient)) ListBySubscription(ctx [context](/context).[Context](/context#Context), options *[EnvironmentsClientListBySubscriptionOptions](#EnvironmentsClientListBySubscriptionOptions)) ([EnvironmentsClientListBySubscriptionResponse](#EnvironmentsClientListBySubscriptionResponse), [error](/builtin#error))
```

ListBySubscription - Lists all the available environments within a subscription, irrespective of the resource groups. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* options - EnvironmentsClientListBySubscriptionOptions contains the optional parameters for the EnvironmentsClient.ListBySubscription method.
Example [¶](#example-EnvironmentsClient.ListBySubscription)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/EnvironmentsListBySubscription.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewEnvironmentsClient().ListBySubscription(ctx, nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use response here. We use blank identifier for just demo purposes.
	_ = res
	// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
	// res.EnvironmentListResponse = armtimeseriesinsights.EnvironmentListResponse{
	// Value: []armtimeseriesinsights.EnvironmentResourceClassification{
	// &armtimeseriesinsights.Gen1EnvironmentResource{
	// Name: to.Ptr("env1"),
	// Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments"),
	// ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1"),
	// Location: to.Ptr("West US"),
	// Tags: map[string]*string{
	// },
	// Kind: to.Ptr(armtimeseriesinsights.EnvironmentResourceKindGen1),
	// SKU: &armtimeseriesinsights.SKU{
	// Name: to.Ptr(armtimeseriesinsights.SKUNameS1),
	// Capacity: to.Ptr[int32](1),
	// },
	// Properties: &armtimeseriesinsights.Gen1EnvironmentResourceProperties{
	// CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
	// ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
	// DataRetentionTime: to.Ptr("P31D"),
	// },
	// }},
	// }
}
```

#### type [EnvironmentsClientBeginCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L277) [¶](#EnvironmentsClientBeginCreateOrUpdateOptions)

```
type EnvironmentsClientBeginCreateOrUpdateOptions struct {
	// Resumes the LRO from the provided token.
	ResumeToken [string](/builtin#string)
}
```

EnvironmentsClientBeginCreateOrUpdateOptions contains the optional parameters for the EnvironmentsClient.BeginCreateOrUpdate method.

#### type [EnvironmentsClientBeginUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L283) [¶](#EnvironmentsClientBeginUpdateOptions)

```
type EnvironmentsClientBeginUpdateOptions struct {
	// Resumes the LRO from the provided token.
	ResumeToken [string](/builtin#string)
}
```

EnvironmentsClientBeginUpdateOptions contains the optional parameters for the EnvironmentsClient.BeginUpdate method.
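Both Begin* option types above expose a ResumeToken, which lets a long-running operation survive a process restart: capture the token from the first poller, persist it, then pass it back later to rehydrate the poller. Below is a minimal sketch of that round trip, reusing the rg1/env1 placeholders from the examples above; the Gen1 parameter values are illustrative only and mirror the create example earlier on this page.

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	client := clientFactory.NewEnvironmentsClient()
	// Illustrative Gen1 create/update parameters; adjust to your environment.
	params := &armtimeseriesinsights.Gen1EnvironmentCreateOrUpdateParameters{
		Location: to.Ptr("West US"),
		Kind:     to.Ptr(armtimeseriesinsights.EnvironmentKindGen1),
		SKU: &armtimeseriesinsights.SKU{
			Name:     to.Ptr(armtimeseriesinsights.SKUNameS1),
			Capacity: to.Ptr[int32](1),
		},
		Properties: &armtimeseriesinsights.Gen1EnvironmentCreationProperties{
			DataRetentionTime: to.Ptr("P31D"),
		},
	}
	poller, err := client.BeginCreateOrUpdate(ctx, "rg1", "env1", params, nil)
	if err != nil {
		log.Fatalf("failed to start the operation: %v", err)
	}
	// Capture a resume token while the operation is still in flight and persist it somewhere durable.
	token, err := poller.ResumeToken()
	if err != nil {
		log.Fatalf("failed to get a resume token: %v", err)
	}
	// Later, possibly in a new process: rebuild the poller from the token and wait for completion.
	resumed, err := client.BeginCreateOrUpdate(ctx, "rg1", "env1", params,
		&armtimeseriesinsights.EnvironmentsClientBeginCreateOrUpdateOptions{ResumeToken: token})
	if err != nil {
		log.Fatalf("failed to resume the operation: %v", err)
	}
	if _, err := resumed.PollUntilDone(ctx, nil); err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
}
```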
#### type [EnvironmentsClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L38) [¶](#EnvironmentsClientCreateOrUpdateResponse) ``` type EnvironmentsClientCreateOrUpdateResponse struct { [EnvironmentResourceClassification](#EnvironmentResourceClassification) } ``` EnvironmentsClientCreateOrUpdateResponse contains the response from method EnvironmentsClient.BeginCreateOrUpdate. #### func (*EnvironmentsClientCreateOrUpdateResponse) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L43) [¶](#EnvironmentsClientCreateOrUpdateResponse.UnmarshalJSON) ``` func (e *[EnvironmentsClientCreateOrUpdateResponse](#EnvironmentsClientCreateOrUpdateResponse)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EnvironmentsClientCreateOrUpdateResponse. #### type [EnvironmentsClientDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L289) [¶](#EnvironmentsClientDeleteOptions) ``` type EnvironmentsClientDeleteOptions struct { } ``` EnvironmentsClientDeleteOptions contains the optional parameters for the EnvironmentsClient.Delete method. #### type [EnvironmentsClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L53) [¶](#EnvironmentsClientDeleteResponse) ``` type EnvironmentsClientDeleteResponse struct { } ``` EnvironmentsClientDeleteResponse contains the response from method EnvironmentsClient.Delete. #### type [EnvironmentsClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L294) [¶](#EnvironmentsClientGetOptions) ``` type EnvironmentsClientGetOptions struct { // Setting $expand=status will include the status of the internal services of the environment in the Time Series Insights // service. Expand *[string](/builtin#string) } ``` EnvironmentsClientGetOptions contains the optional parameters for the EnvironmentsClient.Get method. #### type [EnvironmentsClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L58) [¶](#EnvironmentsClientGetResponse) ``` type EnvironmentsClientGetResponse struct { [EnvironmentResourceClassification](#EnvironmentResourceClassification) } ``` EnvironmentsClientGetResponse contains the response from method EnvironmentsClient.Get. 
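The Expand option above maps to the $expand query parameter, and EnvironmentsClientGetResponse embeds the polymorphic EnvironmentResourceClassification, so a type switch recovers the concrete environment resource. A short sketch combining the two, again with the rg1/env1 placeholders and assuming the environment is one of the two kinds in this API version (Gen1 or Gen2):

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	// Request the internal service status alongside the resource ($expand=status).
	res, err := clientFactory.NewEnvironmentsClient().Get(ctx, "rg1", "env1",
		&armtimeseriesinsights.EnvironmentsClientGetOptions{Expand: to.Ptr("status")})
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// The response embeds an interface; switch on the concrete type for kind-specific properties.
	switch env := res.EnvironmentResourceClassification.(type) {
	case *armtimeseriesinsights.Gen1EnvironmentResource:
		log.Printf("Gen1 environment %s retains data for %s", *env.Name, *env.Properties.DataRetentionTime)
	case *armtimeseriesinsights.Gen2EnvironmentResource:
		log.Printf("Gen2 environment %s", *env.Name)
	default:
		log.Printf("unexpected environment kind")
	}
}
```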
#### func (*EnvironmentsClientGetResponse) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L63) [¶](#EnvironmentsClientGetResponse.UnmarshalJSON) ``` func (e *[EnvironmentsClientGetResponse](#EnvironmentsClientGetResponse)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EnvironmentsClientGetResponse. #### type [EnvironmentsClientListByResourceGroupOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L302) [¶](#EnvironmentsClientListByResourceGroupOptions) ``` type EnvironmentsClientListByResourceGroupOptions struct { } ``` EnvironmentsClientListByResourceGroupOptions contains the optional parameters for the EnvironmentsClient.ListByResourceGroup method. #### type [EnvironmentsClientListByResourceGroupResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L73) [¶](#EnvironmentsClientListByResourceGroupResponse) ``` type EnvironmentsClientListByResourceGroupResponse struct { [EnvironmentListResponse](#EnvironmentListResponse) } ``` EnvironmentsClientListByResourceGroupResponse contains the response from method EnvironmentsClient.ListByResourceGroup. #### type [EnvironmentsClientListBySubscriptionOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L308) [¶](#EnvironmentsClientListBySubscriptionOptions) ``` type EnvironmentsClientListBySubscriptionOptions struct { } ``` EnvironmentsClientListBySubscriptionOptions contains the optional parameters for the EnvironmentsClient.ListBySubscription method. #### type [EnvironmentsClientListBySubscriptionResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L78) [¶](#EnvironmentsClientListBySubscriptionResponse) ``` type EnvironmentsClientListBySubscriptionResponse struct { [EnvironmentListResponse](#EnvironmentListResponse) } ``` EnvironmentsClientListBySubscriptionResponse contains the response from method EnvironmentsClient.ListBySubscription. #### type [EnvironmentsClientUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L83) [¶](#EnvironmentsClientUpdateResponse) ``` type EnvironmentsClientUpdateResponse struct { [EnvironmentResourceClassification](#EnvironmentResourceClassification) } ``` EnvironmentsClientUpdateResponse contains the response from method EnvironmentsClient.BeginUpdate. 
#### func (*EnvironmentsClientUpdateResponse) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L88) [¶](#EnvironmentsClientUpdateResponse.UnmarshalJSON) ``` func (e *[EnvironmentsClientUpdateResponse](#EnvironmentsClientUpdateResponse)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EnvironmentsClientUpdateResponse. #### type [EventHubEventSourceCommonProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L313) [¶](#EventHubEventSourceCommonProperties) ``` type EventHubEventSourceCommonProperties struct { // REQUIRED; The name of the event hub's consumer group that holds the partitions from which events will be read. ConsumerGroupName *[string](/builtin#string) `json:"consumerGroupName,omitempty"` // REQUIRED; The name of the event hub. EventHubName *[string](/builtin#string) `json:"eventHubName,omitempty"` // REQUIRED; The resource id of the event source in Azure Resource Manager. EventSourceResourceID *[string](/builtin#string) `json:"eventSourceResourceId,omitempty"` // REQUIRED; The name of the SAS key that grants the Time Series Insights service access to the event hub. The shared access // policies for this key must grant 'Listen' permissions to the event hub. KeyName *[string](/builtin#string) `json:"keyName,omitempty"` // REQUIRED; The name of the service bus that contains the event hub. ServiceBusNamespace *[string](/builtin#string) `json:"serviceBusNamespace,omitempty"` // An object that contains the details about the starting point in time to ingest events. IngressStartAt *[IngressStartAtProperties](#IngressStartAtProperties) `json:"ingressStartAt,omitempty"` // An object that represents the local timestamp property. It contains the format of local timestamp that needs to be used // and the corresponding timezone offset information. If a value isn't specified // for localTimestamp, or if null, then the local timestamp will not be ingressed with the events. LocalTimestamp *[LocalTimestamp](#LocalTimestamp) `json:"localTimestamp,omitempty"` // The event property that will be used as the event source's timestamp. If a value isn't specified for timestampPropertyName, // or if null or empty-string is specified, the event creation time will be // used. TimestampPropertyName *[string](/builtin#string) `json:"timestampPropertyName,omitempty"` // READ-ONLY; The time the resource was created. CreationTime *[time](/time).[Time](/time#Time) `json:"creationTime,omitempty" azure:"ro"` // READ-ONLY; Provisioning state of the resource. ProvisioningState *[ProvisioningState](#ProvisioningState) `json:"provisioningState,omitempty" azure:"ro"` } ``` EventHubEventSourceCommonProperties - Properties of the EventHub event source. 
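The IngressStartAt field above selects where in the event stream ingestion begins. A minimal sketch of the three variants follows; the EarliestAvailable and CustomEnqueuedTime constants appear in the CreateOrUpdate examples further down this page, while IngressStartAtTypeEventSourceCreationTime is assumed here from the API's enum, so verify it against the IngressStartAtType docs.

```
package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	// Start from the oldest event still retained by the event hub or IoT hub.
	earliest := &armtimeseriesinsights.IngressStartAtProperties{
		Type: to.Ptr(armtimeseriesinsights.IngressStartAtTypeEarliestAvailable),
	}
	// Start from the moment the event source resource is created.
	fromCreation := &armtimeseriesinsights.IngressStartAtProperties{
		Type: to.Ptr(armtimeseriesinsights.IngressStartAtTypeEventSourceCreationTime),
	}
	// Start from an explicit enqueued time; Time is an RFC3339 timestamp string.
	custom := &armtimeseriesinsights.IngressStartAtProperties{
		Type: to.Ptr(armtimeseriesinsights.IngressStartAtTypeCustomEnqueuedTime),
		Time: to.Ptr("2017-04-01T19:20:33.2288820Z"),
	}
	fmt.Println(*earliest.Type, *fromCreation.Type, *custom.Type)
}
```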
#### func (EventHubEventSourceCommonProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L568) [¶](#EventHubEventSourceCommonProperties.MarshalJSON) ``` func (e [EventHubEventSourceCommonProperties](#EventHubEventSourceCommonProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EventHubEventSourceCommonProperties. #### func (*EventHubEventSourceCommonProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L584) [¶](#EventHubEventSourceCommonProperties.UnmarshalJSON) ``` func (e *[EventHubEventSourceCommonProperties](#EventHubEventSourceCommonProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventHubEventSourceCommonProperties. #### type [EventHubEventSourceCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L352) [¶](#EventHubEventSourceCreateOrUpdateParameters) ``` type EventHubEventSourceCreateOrUpdateParameters struct { // REQUIRED; The kind of the event source. Kind *[EventSourceKind](#EventSourceKind) `json:"kind,omitempty"` // REQUIRED; The location of the resource. Location *[string](/builtin#string) `json:"location,omitempty"` // REQUIRED; Properties of the EventHub event source that are required on create or update requests. Properties *[EventHubEventSourceCreationProperties](#EventHubEventSourceCreationProperties) `json:"properties,omitempty"` // An object that represents the local timestamp property. It contains the format of local timestamp that needs to be used // and the corresponding timezone offset information. If a value isn't specified // for localTimestamp, or if null, then the local timestamp will not be ingressed with the events. LocalTimestamp *[LocalTimestamp](#LocalTimestamp) `json:"localTimestamp,omitempty"` // Key-value pairs of additional properties for the resource. Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` } ``` EventHubEventSourceCreateOrUpdateParameters - Parameters supplied to the Create or Update Event Source operation for an EventHub event source. #### func (*EventHubEventSourceCreateOrUpdateParameters) [GetEventSourceCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L373) [¶](#EventHubEventSourceCreateOrUpdateParameters.GetEventSourceCreateOrUpdateParameters) ``` func (e *[EventHubEventSourceCreateOrUpdateParameters](#EventHubEventSourceCreateOrUpdateParameters)) GetEventSourceCreateOrUpdateParameters() *[EventSourceCreateOrUpdateParameters](#EventSourceCreateOrUpdateParameters) ``` GetEventSourceCreateOrUpdateParameters implements the EventSourceCreateOrUpdateParametersClassification interface for type EventHubEventSourceCreateOrUpdateParameters. 
#### func (EventHubEventSourceCreateOrUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L631) [¶](#EventHubEventSourceCreateOrUpdateParameters.MarshalJSON) ``` func (e [EventHubEventSourceCreateOrUpdateParameters](#EventHubEventSourceCreateOrUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EventHubEventSourceCreateOrUpdateParameters. #### func (*EventHubEventSourceCreateOrUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L642) [¶](#EventHubEventSourceCreateOrUpdateParameters.UnmarshalJSON) ``` func (e *[EventHubEventSourceCreateOrUpdateParameters](#EventHubEventSourceCreateOrUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventHubEventSourceCreateOrUpdateParameters. #### type [EventHubEventSourceCreationProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L383) [¶](#EventHubEventSourceCreationProperties) ``` type EventHubEventSourceCreationProperties struct { // REQUIRED; The name of the event hub's consumer group that holds the partitions from which events will be read. ConsumerGroupName *[string](/builtin#string) `json:"consumerGroupName,omitempty"` // REQUIRED; The name of the event hub. EventHubName *[string](/builtin#string) `json:"eventHubName,omitempty"` // REQUIRED; The resource id of the event source in Azure Resource Manager. EventSourceResourceID *[string](/builtin#string) `json:"eventSourceResourceId,omitempty"` // REQUIRED; The name of the SAS key that grants the Time Series Insights service access to the event hub. The shared access // policies for this key must grant 'Listen' permissions to the event hub. KeyName *[string](/builtin#string) `json:"keyName,omitempty"` // REQUIRED; The name of the service bus that contains the event hub. ServiceBusNamespace *[string](/builtin#string) `json:"serviceBusNamespace,omitempty"` // REQUIRED; The value of the shared access key that grants the Time Series Insights service read access to the event hub. // This property is not shown in event source responses. SharedAccessKey *[string](/builtin#string) `json:"sharedAccessKey,omitempty"` // An object that contains the details about the starting point in time to ingest events. IngressStartAt *[IngressStartAtProperties](#IngressStartAtProperties) `json:"ingressStartAt,omitempty"` // An object that represents the local timestamp property. It contains the format of local timestamp that needs to be used // and the corresponding timezone offset information. If a value isn't specified // for localTimestamp, or if null, then the local timestamp will not be ingressed with the events. LocalTimestamp *[LocalTimestamp](#LocalTimestamp) `json:"localTimestamp,omitempty"` // The event property that will be used as the event source's timestamp. If a value isn't specified for timestampPropertyName, // or if null or empty-string is specified, the event creation time will be // used. 
TimestampPropertyName *[string](/builtin#string) `json:"timestampPropertyName,omitempty"` // READ-ONLY; The time the resource was created. CreationTime *[time](/time).[Time](/time#Time) `json:"creationTime,omitempty" azure:"ro"` // READ-ONLY; Provisioning state of the resource. ProvisioningState *[ProvisioningState](#ProvisioningState) `json:"provisioningState,omitempty" azure:"ro"` } ``` EventHubEventSourceCreationProperties - Properties of the EventHub event source that are required on create or update requests. #### func (EventHubEventSourceCreationProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L674) [¶](#EventHubEventSourceCreationProperties.MarshalJSON) ``` func (e [EventHubEventSourceCreationProperties](#EventHubEventSourceCreationProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EventHubEventSourceCreationProperties. #### func (*EventHubEventSourceCreationProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L691) [¶](#EventHubEventSourceCreationProperties.UnmarshalJSON) ``` func (e *[EventHubEventSourceCreationProperties](#EventHubEventSourceCreationProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventHubEventSourceCreationProperties. #### type [EventHubEventSourceMutableProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L425) [¶](#EventHubEventSourceMutableProperties) ``` type EventHubEventSourceMutableProperties struct { // The value of the shared access key that grants the Time Series Insights service read access to the event hub. This property // is not shown in event source responses. SharedAccessKey *[string](/builtin#string) `json:"sharedAccessKey,omitempty"` // The event property that will be used as the event source's timestamp. If a value isn't specified for timestampPropertyName, // or if null or empty-string is specified, the event creation time will be // used. TimestampPropertyName *[string](/builtin#string) `json:"timestampPropertyName,omitempty"` } ``` EventHubEventSourceMutableProperties - An object that represents a set of mutable EventHub event source resource properties. #### func (EventHubEventSourceMutableProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L741) [¶](#EventHubEventSourceMutableProperties.MarshalJSON) added in v1.1.0 ``` func (e [EventHubEventSourceMutableProperties](#EventHubEventSourceMutableProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EventHubEventSourceMutableProperties. 
#### func (*EventHubEventSourceMutableProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L749) [¶](#EventHubEventSourceMutableProperties.UnmarshalJSON) added in v1.1.0 ``` func (e *[EventHubEventSourceMutableProperties](#EventHubEventSourceMutableProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventHubEventSourceMutableProperties. #### type [EventHubEventSourceResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L437) [¶](#EventHubEventSourceResource) ``` type EventHubEventSourceResource struct { // REQUIRED; The kind of the event source. Kind *[EventSourceResourceKind](#EventSourceResourceKind) `json:"kind,omitempty"` // REQUIRED; Resource location Location *[string](/builtin#string) `json:"location,omitempty"` // REQUIRED; Properties of the EventHub event source resource. Properties *[EventHubEventSourceResourceProperties](#EventHubEventSourceResourceProperties) `json:"properties,omitempty"` // Resource tags Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` // READ-ONLY; Resource Id ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"` // READ-ONLY; Resource name Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"` // READ-ONLY; Resource type Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"` } ``` EventHubEventSourceResource - An event source that receives its data from an Azure EventHub. #### func (*EventHubEventSourceResource) [GetEventSourceResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L461) [¶](#EventHubEventSourceResource.GetEventSourceResource) ``` func (e *[EventHubEventSourceResource](#EventHubEventSourceResource)) GetEventSourceResource() *[EventSourceResource](#EventSourceResource) ``` GetEventSourceResource implements the EventSourceResourceClassification interface for type EventHubEventSourceResource. #### func (EventHubEventSourceResource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L772) [¶](#EventHubEventSourceResource.MarshalJSON) added in v1.1.0 ``` func (e [EventHubEventSourceResource](#EventHubEventSourceResource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EventHubEventSourceResource. #### func (*EventHubEventSourceResource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L785) [¶](#EventHubEventSourceResource.UnmarshalJSON) ``` func (e *[EventHubEventSourceResource](#EventHubEventSourceResource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventHubEventSourceResource. 
#### type [EventHubEventSourceResourceProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L473) [¶](#EventHubEventSourceResourceProperties) ``` type EventHubEventSourceResourceProperties struct { // REQUIRED; The name of the event hub's consumer group that holds the partitions from which events will be read. ConsumerGroupName *[string](/builtin#string) `json:"consumerGroupName,omitempty"` // REQUIRED; The name of the event hub. EventHubName *[string](/builtin#string) `json:"eventHubName,omitempty"` // REQUIRED; The resource id of the event source in Azure Resource Manager. EventSourceResourceID *[string](/builtin#string) `json:"eventSourceResourceId,omitempty"` // REQUIRED; The name of the SAS key that grants the Time Series Insights service access to the event hub. The shared access // policies for this key must grant 'Listen' permissions to the event hub. KeyName *[string](/builtin#string) `json:"keyName,omitempty"` // REQUIRED; The name of the service bus that contains the event hub. ServiceBusNamespace *[string](/builtin#string) `json:"serviceBusNamespace,omitempty"` // An object that contains the details about the starting point in time to ingest events. IngressStartAt *[IngressStartAtProperties](#IngressStartAtProperties) `json:"ingressStartAt,omitempty"` // An object that represents the local timestamp property. It contains the format of local timestamp that needs to be used // and the corresponding timezone offset information. If a value isn't specified // for localTimestamp, or if null, then the local timestamp will not be ingressed with the events. LocalTimestamp *[LocalTimestamp](#LocalTimestamp) `json:"localTimestamp,omitempty"` // The event property that will be used as the event source's timestamp. If a value isn't specified for timestampPropertyName, // or if null or empty-string is specified, the event creation time will be // used. TimestampPropertyName *[string](/builtin#string) `json:"timestampPropertyName,omitempty"` // READ-ONLY; The time the resource was created. CreationTime *[time](/time).[Time](/time#Time) `json:"creationTime,omitempty" azure:"ro"` // READ-ONLY; Provisioning state of the resource. ProvisioningState *[ProvisioningState](#ProvisioningState) `json:"provisioningState,omitempty" azure:"ro"` } ``` EventHubEventSourceResourceProperties - Properties of the EventHub event source resource. #### func (EventHubEventSourceResourceProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L823) [¶](#EventHubEventSourceResourceProperties.MarshalJSON) ``` func (e [EventHubEventSourceResourceProperties](#EventHubEventSourceResourceProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EventHubEventSourceResourceProperties. 
#### func (*EventHubEventSourceResourceProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L839) [¶](#EventHubEventSourceResourceProperties.UnmarshalJSON) ``` func (e *[EventHubEventSourceResourceProperties](#EventHubEventSourceResourceProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventHubEventSourceResourceProperties. #### type [EventHubEventSourceUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L512) [¶](#EventHubEventSourceUpdateParameters) ``` type EventHubEventSourceUpdateParameters struct { // REQUIRED; The kind of the event source. Kind *[EventSourceKind](#EventSourceKind) `json:"kind,omitempty"` // Properties of the EventHub event source. Properties *[EventHubEventSourceMutableProperties](#EventHubEventSourceMutableProperties) `json:"properties,omitempty"` // Key-value pairs of additional properties for the event source. Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` } ``` EventHubEventSourceUpdateParameters - Parameters supplied to the Update Event Source operation to update an EventHub event source. #### func (*EventHubEventSourceUpdateParameters) [GetEventSourceUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L524) [¶](#EventHubEventSourceUpdateParameters.GetEventSourceUpdateParameters) ``` func (e *[EventHubEventSourceUpdateParameters](#EventHubEventSourceUpdateParameters)) GetEventSourceUpdateParameters() *[EventSourceUpdateParameters](#EventSourceUpdateParameters) ``` GetEventSourceUpdateParameters implements the EventSourceUpdateParametersClassification interface for type EventHubEventSourceUpdateParameters. #### func (EventHubEventSourceUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L886) [¶](#EventHubEventSourceUpdateParameters.MarshalJSON) ``` func (e [EventHubEventSourceUpdateParameters](#EventHubEventSourceUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EventHubEventSourceUpdateParameters. #### func (*EventHubEventSourceUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L895) [¶](#EventHubEventSourceUpdateParameters.UnmarshalJSON) ``` func (e *[EventHubEventSourceUpdateParameters](#EventHubEventSourceUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventHubEventSourceUpdateParameters. 
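The mutable-properties split above is what keeps updates narrow: only SharedAccessKey and TimestampPropertyName can change on an existing EventHub event source. Here is a sketch of a SAS key rotation built from these types. The EventSourcesClient.Update call and its exact signature are an assumption inferred from the surrounding docs rather than shown on this page, so check the EventSourcesClient method list before relying on it.

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	// Rotate the SAS key on an existing EventHub event source; all other properties stay untouched.
	params := &armtimeseriesinsights.EventHubEventSourceUpdateParameters{
		Kind: to.Ptr(armtimeseriesinsights.EventSourceKindMicrosoftEventHub),
		Properties: &armtimeseriesinsights.EventHubEventSourceMutableProperties{
			SharedAccessKey: to.Ptr("<new-secret-value>"),
		},
	}
	// Assumed: EventSourcesClient exposes an Update method taking the polymorphic update parameters.
	_, err = clientFactory.NewEventSourcesClient().Update(ctx, "rg1", "env1", "es1", params, nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
}
```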
#### type [EventSourceCommonProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L532) [¶](#EventSourceCommonProperties) ``` type EventSourceCommonProperties struct { // An object that contains the details about the starting point in time to ingest events. IngressStartAt *[IngressStartAtProperties](#IngressStartAtProperties) `json:"ingressStartAt,omitempty"` // An object that represents the local timestamp property. It contains the format of local timestamp that needs to be used // and the corresponding timezone offset information. If a value isn't specified // for localTimestamp, or if null, then the local timestamp will not be ingressed with the events. LocalTimestamp *[LocalTimestamp](#LocalTimestamp) `json:"localTimestamp,omitempty"` // The event property that will be used as the event source's timestamp. If a value isn't specified for timestampPropertyName, // or if null or empty-string is specified, the event creation time will be // used. TimestampPropertyName *[string](/builtin#string) `json:"timestampPropertyName,omitempty"` // READ-ONLY; The time the resource was created. CreationTime *[time](/time).[Time](/time#Time) `json:"creationTime,omitempty" azure:"ro"` // READ-ONLY; Provisioning state of the resource. ProvisioningState *[ProvisioningState](#ProvisioningState) `json:"provisioningState,omitempty" azure:"ro"` } ``` EventSourceCommonProperties - Properties of the event source. #### func (EventSourceCommonProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L921) [¶](#EventSourceCommonProperties.MarshalJSON) ``` func (e [EventSourceCommonProperties](#EventSourceCommonProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EventSourceCommonProperties. #### func (*EventSourceCommonProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L932) [¶](#EventSourceCommonProperties.UnmarshalJSON) ``` func (e *[EventSourceCommonProperties](#EventSourceCommonProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventSourceCommonProperties. #### type [EventSourceCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L563) [¶](#EventSourceCreateOrUpdateParameters) ``` type EventSourceCreateOrUpdateParameters struct { // REQUIRED; The kind of the event source. Kind *[EventSourceKind](#EventSourceKind) `json:"kind,omitempty"` // REQUIRED; The location of the resource. Location *[string](/builtin#string) `json:"location,omitempty"` // An object that represents the local timestamp property. It contains the format of local timestamp that needs to be used // and the corresponding timezone offset information. If a value isn't specified // for localTimestamp, or if null, then the local timestamp will not be ingressed with the events. 
LocalTimestamp *[LocalTimestamp](#LocalTimestamp) `json:"localTimestamp,omitempty"` // Key-value pairs of additional properties for the resource. Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` } ``` EventSourceCreateOrUpdateParameters - Parameters supplied to the Create or Update Event Source operation. #### func (*EventSourceCreateOrUpdateParameters) [GetEventSourceCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L581) [¶](#EventSourceCreateOrUpdateParameters.GetEventSourceCreateOrUpdateParameters) ``` func (e *[EventSourceCreateOrUpdateParameters](#EventSourceCreateOrUpdateParameters)) GetEventSourceCreateOrUpdateParameters() *[EventSourceCreateOrUpdateParameters](#EventSourceCreateOrUpdateParameters) ``` GetEventSourceCreateOrUpdateParameters implements the EventSourceCreateOrUpdateParametersClassification interface for type EventSourceCreateOrUpdateParameters. #### func (EventSourceCreateOrUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L964) [¶](#EventSourceCreateOrUpdateParameters.MarshalJSON) ``` func (e [EventSourceCreateOrUpdateParameters](#EventSourceCreateOrUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EventSourceCreateOrUpdateParameters. #### func (*EventSourceCreateOrUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L974) [¶](#EventSourceCreateOrUpdateParameters.UnmarshalJSON) added in v1.1.0 ``` func (e *[EventSourceCreateOrUpdateParameters](#EventSourceCreateOrUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventSourceCreateOrUpdateParameters. #### type [EventSourceCreateOrUpdateParametersClassification](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L557) [¶](#EventSourceCreateOrUpdateParametersClassification) ``` type EventSourceCreateOrUpdateParametersClassification interface { // GetEventSourceCreateOrUpdateParameters returns the EventSourceCreateOrUpdateParameters content of the underlying type. GetEventSourceCreateOrUpdateParameters() *[EventSourceCreateOrUpdateParameters](#EventSourceCreateOrUpdateParameters) } ``` EventSourceCreateOrUpdateParametersClassification provides polymorphic access to related types. Call the interface's GetEventSourceCreateOrUpdateParameters() method to access the common type. Use a type switch to determine the concrete type. 
The possible types are: - *EventHubEventSourceCreateOrUpdateParameters, *EventSourceCreateOrUpdateParameters, *IoTHubEventSourceCreateOrUpdateParameters #### type [EventSourceKind](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L85) [¶](#EventSourceKind) ``` type EventSourceKind [string](/builtin#string) ``` EventSourceKind - The kind of the event source. ``` const ( EventSourceKindMicrosoftEventHub [EventSourceKind](#EventSourceKind) = "Microsoft.EventHub" EventSourceKindMicrosoftIoTHub [EventSourceKind](#EventSourceKind) = "Microsoft.IoTHub" ) ``` #### func [PossibleEventSourceKindValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L93) [¶](#PossibleEventSourceKindValues) ``` func PossibleEventSourceKindValues() [][EventSourceKind](#EventSourceKind) ``` PossibleEventSourceKindValues returns the possible values for the EventSourceKind const type. #### type [EventSourceListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L586) [¶](#EventSourceListResponse) ``` type EventSourceListResponse struct { // Result of the List EventSources operation. Value [][EventSourceResourceClassification](#EventSourceResourceClassification) `json:"value,omitempty"` } ``` EventSourceListResponse - The response of the List EventSources operation. #### func (EventSourceListResponse) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1003) [¶](#EventSourceListResponse.MarshalJSON) added in v1.1.0 ``` func (e [EventSourceListResponse](#EventSourceListResponse)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EventSourceListResponse. #### func (*EventSourceListResponse) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1010) [¶](#EventSourceListResponse.UnmarshalJSON) ``` func (e *[EventSourceListResponse](#EventSourceListResponse)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventSourceListResponse. #### type [EventSourceMutableProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L592) [¶](#EventSourceMutableProperties) ``` type EventSourceMutableProperties struct { // The event property that will be used as the event source's timestamp. If a value isn't specified for timestampPropertyName, // or if null or empty-string is specified, the event creation time will be // used. TimestampPropertyName *[string](/builtin#string) `json:"timestampPropertyName,omitempty"` } ``` EventSourceMutableProperties - An object that represents a set of mutable event source resource properties. 
#### func (EventSourceMutableProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1030) [¶](#EventSourceMutableProperties.MarshalJSON) added in v1.1.0 ``` func (e [EventSourceMutableProperties](#EventSourceMutableProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EventSourceMutableProperties. #### func (*EventSourceMutableProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1037) [¶](#EventSourceMutableProperties.UnmarshalJSON) added in v1.1.0 ``` func (e *[EventSourceMutableProperties](#EventSourceMutableProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventSourceMutableProperties. #### type [EventSourceResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L611) [¶](#EventSourceResource) ``` type EventSourceResource struct { // REQUIRED; The kind of the event source. Kind *[EventSourceResourceKind](#EventSourceResourceKind) `json:"kind,omitempty"` // REQUIRED; Resource location Location *[string](/builtin#string) `json:"location,omitempty"` // Resource tags Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` // READ-ONLY; Resource Id ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"` // READ-ONLY; Resource name Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"` // READ-ONLY; Resource type Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"` } ``` EventSourceResource - An environment receives data from one or more event sources. Each event source has associated connection info that allows the Time Series Insights ingress pipeline to connect to and pull data from the event source #### func (*EventSourceResource) [GetEventSourceResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L632) [¶](#EventSourceResource.GetEventSourceResource) ``` func (e *[EventSourceResource](#EventSourceResource)) GetEventSourceResource() *[EventSourceResource](#EventSourceResource) ``` GetEventSourceResource implements the EventSourceResourceClassification interface for type EventSourceResource. #### func (EventSourceResource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1057) [¶](#EventSourceResource.MarshalJSON) added in v1.1.0 ``` func (e [EventSourceResource](#EventSourceResource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EventSourceResource. 
#### func (*EventSourceResource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1069) [¶](#EventSourceResource.UnmarshalJSON) added in v1.1.0 ``` func (e *[EventSourceResource](#EventSourceResource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventSourceResource. #### type [EventSourceResourceClassification](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L603) [¶](#EventSourceResourceClassification) ``` type EventSourceResourceClassification interface { // GetEventSourceResource returns the EventSourceResource content of the underlying type. GetEventSourceResource() *[EventSourceResource](#EventSourceResource) } ``` EventSourceResourceClassification provides polymorphic access to related types. Call the interface's GetEventSourceResource() method to access the common type. Use a type switch to determine the concrete type. The possible types are: - *EventHubEventSourceResource, *EventSourceResource, *IoTHubEventSourceResource #### type [EventSourceResourceKind](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L101) [¶](#EventSourceResourceKind) ``` type EventSourceResourceKind [string](/builtin#string) ``` EventSourceResourceKind - The kind of the event source. ``` const ( EventSourceResourceKindMicrosoftEventHub [EventSourceResourceKind](#EventSourceResourceKind) = "Microsoft.EventHub" EventSourceResourceKindMicrosoftIoTHub [EventSourceResourceKind](#EventSourceResourceKind) = "Microsoft.IoTHub" ) ``` #### func [PossibleEventSourceResourceKindValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L109) [¶](#PossibleEventSourceResourceKindValues) ``` func PossibleEventSourceResourceKindValues() [][EventSourceResourceKind](#EventSourceResourceKind) ``` PossibleEventSourceResourceKindValues returns the possible values for the EventSourceResourceKind const type. #### type [EventSourceUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L644) [¶](#EventSourceUpdateParameters) ``` type EventSourceUpdateParameters struct { // REQUIRED; The kind of the event source. Kind *[EventSourceKind](#EventSourceKind) `json:"kind,omitempty"` // Key-value pairs of additional properties for the event source. Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` } ``` EventSourceUpdateParameters - Parameters supplied to the Update Event Source operation. 
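To see the classification interfaces above in action on the read path: listing event sources yields a slice of EventSourceResourceClassification values, and a type switch, as the docs suggest, separates EventHub from IoTHub sources. A sketch, assuming the EventSourcesClient.ListByEnvironment method that backs EventSourceListResponse:

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	// Assumed from the API surface: ListByEnvironment returns an EventSourceListResponse.
	res, err := clientFactory.NewEventSourcesClient().ListByEnvironment(ctx, "rg1", "env1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	for _, es := range res.Value {
		switch src := es.(type) {
		case *armtimeseriesinsights.EventHubEventSourceResource:
			log.Printf("event hub source %s reads hub %s", *src.Name, *src.Properties.EventHubName)
		case *armtimeseriesinsights.IoTHubEventSourceResource:
			log.Printf("iot hub source %s", *src.Name)
		default:
			// Fall back to the common type exposed by the classification interface.
			log.Printf("event source %s of kind %s",
				*src.GetEventSourceResource().Name, *src.GetEventSourceResource().Kind)
		}
	}
}
```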
#### func (*EventSourceUpdateParameters) [GetEventSourceUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L653) [¶](#EventSourceUpdateParameters.GetEventSourceUpdateParameters) ``` func (e *[EventSourceUpdateParameters](#EventSourceUpdateParameters)) GetEventSourceUpdateParameters() *[EventSourceUpdateParameters](#EventSourceUpdateParameters) ``` GetEventSourceUpdateParameters implements the EventSourceUpdateParametersClassification interface for type EventSourceUpdateParameters. #### func (EventSourceUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1104) [¶](#EventSourceUpdateParameters.MarshalJSON) ``` func (e [EventSourceUpdateParameters](#EventSourceUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EventSourceUpdateParameters. #### func (*EventSourceUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1112) [¶](#EventSourceUpdateParameters.UnmarshalJSON) added in v1.1.0 ``` func (e *[EventSourceUpdateParameters](#EventSourceUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventSourceUpdateParameters. #### type [EventSourceUpdateParametersClassification](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L638) [¶](#EventSourceUpdateParametersClassification) ``` type EventSourceUpdateParametersClassification interface { // GetEventSourceUpdateParameters returns the EventSourceUpdateParameters content of the underlying type. GetEventSourceUpdateParameters() *[EventSourceUpdateParameters](#EventSourceUpdateParameters) } ``` EventSourceUpdateParametersClassification provides polymorphic access to related types. Call the interface's GetEventSourceUpdateParameters() method to access the common type. Use a type switch to determine the concrete type. The possible types are: - *EventHubEventSourceUpdateParameters, *EventSourceUpdateParameters, *IoTHubEventSourceUpdateParameters #### type [EventSourcesClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/eventsources_client.go#L26) [¶](#EventSourcesClient) ``` type EventSourcesClient struct { // contains filtered or unexported fields } ``` EventSourcesClient contains the methods for the EventSources group. Don't use this type directly, use NewEventSourcesClient() instead. 
#### func [NewEventSourcesClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/eventsources_client.go#L35) [¶](#NewEventSourcesClient) ``` func NewEventSourcesClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[EventSourcesClient](#EventSourcesClient), [error](/builtin#error)) ``` NewEventSourcesClient creates a new instance of EventSourcesClient with the specified values. * subscriptionID - Azure Subscription ID. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. #### func (*EventSourcesClient) [CreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/eventsources_client.go#L57) [¶](#EventSourcesClient.CreateOrUpdate) ``` func (client *[EventSourcesClient](#EventSourcesClient)) CreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), eventSourceName [string](/builtin#string), parameters [EventSourceCreateOrUpdateParametersClassification](#EventSourceCreateOrUpdateParametersClassification), options *[EventSourcesClientCreateOrUpdateOptions](#EventSourcesClientCreateOrUpdateOptions)) ([EventSourcesClientCreateOrUpdateResponse](#EventSourcesClientCreateOrUpdateResponse), [error](/builtin#error)) ``` CreateOrUpdate - Create or update an event source under the specified environment. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2020-05-15 * resourceGroupName - Name of an Azure Resource group. * environmentName - The name of the Time Series Insights environment associated with the specified resource group. * eventSourceName - Name of the event source. * parameters - Parameters for creating an event source resource. * options - EventSourcesClientCreateOrUpdateOptions contains the optional parameters for the EventSourcesClient.CreateOrUpdate method. 
Example (CreateEventHubEventSource) [¶](#example-EventSourcesClient.CreateOrUpdate-CreateEventHubEventSource) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/EventSourcesCreateEventHub.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewEventSourcesClient().CreateOrUpdate(ctx, "rg1", "env1", "es1", &armtimeseriesinsights.EventHubEventSourceCreateOrUpdateParameters{ Location: to.Ptr("West US"), Kind: to.Ptr(armtimeseriesinsights.EventSourceKindMicrosoftEventHub), Properties: &armtimeseriesinsights.EventHubEventSourceCreationProperties{ IngressStartAt: &armtimeseriesinsights.IngressStartAtProperties{ Type: to.Ptr(armtimeseriesinsights.IngressStartAtTypeEarliestAvailable), }, LocalTimestamp: &armtimeseriesinsights.LocalTimestamp{ Format: to.Ptr(armtimeseriesinsights.LocalTimestampFormat("TimeSpan")), TimeZoneOffset: &armtimeseriesinsights.LocalTimestampTimeZoneOffset{ PropertyName: to.Ptr("someEventPropertyName"), }, }, TimestampPropertyName: to.Ptr("someTimestampProperty"), EventSourceResourceID: to.Ptr("somePathInArm"), ConsumerGroupName: to.Ptr("cgn"), EventHubName: to.Ptr("ehn"), KeyName: to.Ptr("managementKey"), ServiceBusNamespace: to.Ptr("sbn"), SharedAccessKey: to.Ptr("someSecretvalue"), }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res = armtimeseriesinsights.EventSourcesClientCreateOrUpdateResponse{
// 	EventSourceResourceClassification: &armtimeseriesinsights.EventHubEventSourceResource{
// 		Name: to.Ptr("es1"),
// 		Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments/EventSources"),
// 		ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1/eventSources/es1"),
// 		Location: to.Ptr("West US"),
// 		Tags: map[string]*string{
// 		},
// 		Kind: to.Ptr(armtimeseriesinsights.EventSourceResourceKindMicrosoftEventHub),
// 		Properties: &armtimeseriesinsights.EventHubEventSourceResourceProperties{
// 			CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
// 			ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
// 			IngressStartAt: &armtimeseriesinsights.IngressStartAtProperties{
// 				Type: to.Ptr(armtimeseriesinsights.IngressStartAtTypeEarliestAvailable),
// 			},
// 			LocalTimestamp: &armtimeseriesinsights.LocalTimestamp{
// 				Format: to.Ptr(armtimeseriesinsights.LocalTimestampFormat("TimeSpan")),
// 				TimeZoneOffset: &armtimeseriesinsights.LocalTimestampTimeZoneOffset{
// 					PropertyName: to.Ptr("someEventPropertyName"),
// 				},
// 			},
// 			EventSourceResourceID: to.Ptr("somePathInArm"),
// 			ConsumerGroupName: to.Ptr("cgn"),
// 			EventHubName: to.Ptr("ehn"),
// 			KeyName: to.Ptr("managementKey"),
// 			ServiceBusNamespace: to.Ptr("sbn"),
// 		},
// 	},
// }
}
```

```
Output:
```

Example (EventSourcesCreateEventHubWithCustomEnquedTime) [¶](#example-EventSourcesClient.CreateOrUpdate-EventSourcesCreateEventHubWithCustomEnquedTime)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/EventSourcesCreateEventHubWithCustomEnquedTime.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewEventSourcesClient().CreateOrUpdate(ctx, "rg1", "env1", "es1", &armtimeseriesinsights.EventHubEventSourceCreateOrUpdateParameters{
		Location: to.Ptr("West US"),
		Kind:     to.Ptr(armtimeseriesinsights.EventSourceKindMicrosoftEventHub),
		Properties: &armtimeseriesinsights.EventHubEventSourceCreationProperties{
			IngressStartAt: &armtimeseriesinsights.IngressStartAtProperties{
				Type: to.Ptr(armtimeseriesinsights.IngressStartAtTypeCustomEnqueuedTime),
				Time: to.Ptr("2017-04-01T19:20:33.2288820Z"),
			},
			TimestampPropertyName: to.Ptr("someTimestampProperty"),
			EventSourceResourceID: to.Ptr("somePathInArm"),
			ConsumerGroupName:     to.Ptr("cgn"),
			EventHubName:          to.Ptr("ehn"),
			KeyName:               to.Ptr("managementKey"),
			ServiceBusNamespace:   to.Ptr("sbn"),
			SharedAccessKey:       to.Ptr("someSecretvalue"),
		},
	}, nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use response here. We use blank identifier for just demo purposes.
	_ = res
	// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
	// res = armtimeseriesinsights.EventSourcesClientCreateOrUpdateResponse{
	// 	EventSourceResourceClassification: &armtimeseriesinsights.EventHubEventSourceResource{
	// 		Name: to.Ptr("es1"),
	// 		Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments/EventSources"),
	// 		ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1/eventSources/es1"),
	// 		Location: to.Ptr("West US"),
	// 		Tags: map[string]*string{
	// 		},
	// 		Kind: to.Ptr(armtimeseriesinsights.EventSourceResourceKindMicrosoftEventHub),
	// 		Properties: &armtimeseriesinsights.EventHubEventSourceResourceProperties{
	// 			CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
	// 			ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
	// 			IngressStartAt: &armtimeseriesinsights.IngressStartAtProperties{
	// 				Type: to.Ptr(armtimeseriesinsights.IngressStartAtTypeCustomEnqueuedTime),
	// 				Time: to.Ptr("2017-04-01T19:20:33.2288820Z"),
	// 			},
	// 			EventSourceResourceID: to.Ptr("somePathInArm"),
	// 			ConsumerGroupName: to.Ptr("cgn"),
	// 			EventHubName: to.Ptr("ehn"),
	// 			KeyName: to.Ptr("managementKey"),
	// 			ServiceBusNamespace: to.Ptr("sbn"),
	// 		},
	// 	},
	// }
}
```

```
Output:
```

#### func (*EventSourcesClient) [Delete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/eventsources_client.go#L119) [¶](#EventSourcesClient.Delete)

```
func (client *[EventSourcesClient](#EventSourcesClient)) Delete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), eventSourceName [string](/builtin#string), options *[EventSourcesClientDeleteOptions](#EventSourcesClientDeleteOptions)) ([EventSourcesClientDeleteResponse](#EventSourcesClientDeleteResponse), [error](/builtin#error))
```

Delete - Deletes the event source with the specified name in the specified subscription, resource group, and environment. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* eventSourceName - The name of the Time Series Insights event source associated with the specified environment.
* options - EventSourcesClientDeleteOptions contains the optional parameters for the EventSourcesClient.Delete method.
Example [¶](#example-EventSourcesClient.Delete)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/EventSourcesDelete.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	_, err = clientFactory.NewEventSourcesClient().Delete(ctx, "rg1", "env1", "es1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
}
```

```
Output:
```

#### func (*EventSourcesClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/eventsources_client.go#L172) [¶](#EventSourcesClient.Get)

```
func (client *[EventSourcesClient](#EventSourcesClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), eventSourceName [string](/builtin#string), options *[EventSourcesClientGetOptions](#EventSourcesClientGetOptions)) ([EventSourcesClientGetResponse](#EventSourcesClientGetResponse), [error](/builtin#error))
```

Get - Gets the event source with the specified name in the specified environment. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* eventSourceName - The name of the Time Series Insights event source associated with the specified environment.
* options - EventSourcesClientGetOptions contains the optional parameters for the EventSourcesClient.Get method.

Example [¶](#example-EventSourcesClient.Get)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/EventSourcesGetEventHub.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewEventSourcesClient().Get(ctx, "rg1", "env1", "es1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use response here. We use blank identifier for just demo purposes.
	_ = res
	// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
	// res = armtimeseriesinsights.EventSourcesClientGetResponse{
	// 	EventSourceResourceClassification: &armtimeseriesinsights.EventHubEventSourceResource{
	// 		Name: to.Ptr("es1"),
	// 		Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments/EventSources"),
	// 		ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1/eventSources/es1"),
	// 		Location: to.Ptr("West US"),
	// 		Tags: map[string]*string{
	// 		},
	// 		Kind: to.Ptr(armtimeseriesinsights.EventSourceResourceKindMicrosoftEventHub),
	// 		Properties: &armtimeseriesinsights.EventHubEventSourceResourceProperties{
	// 			CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
	// 			ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
	// 			IngressStartAt: &armtimeseriesinsights.IngressStartAtProperties{
	// 				Type: to.Ptr(armtimeseriesinsights.IngressStartAtTypeEarliestAvailable),
	// 			},
	// 			LocalTimestamp: &armtimeseriesinsights.LocalTimestamp{
	// 				Format: to.Ptr(armtimeseriesinsights.LocalTimestampFormat("TimeSpan")),
	// 				TimeZoneOffset: &armtimeseriesinsights.LocalTimestampTimeZoneOffset{
	// 					PropertyName: to.Ptr("someEventPropertyName"),
	// 				},
	// 			},
	// 			EventSourceResourceID: to.Ptr("somePathInArm"),
	// 			ConsumerGroupName: to.Ptr("cgn"),
	// 			EventHubName: to.Ptr("ehn"),
	// 			KeyName: to.Ptr("managementKey"),
	// 			ServiceBusNamespace: to.Ptr("sbn"),
	// 		},
	// 	},
	// }
}
```

```
Output:
```

#### func (*EventSourcesClient) [ListByEnvironment](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/eventsources_client.go#L235) [¶](#EventSourcesClient.ListByEnvironment)

```
func (client *[EventSourcesClient](#EventSourcesClient)) ListByEnvironment(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), options *[EventSourcesClientListByEnvironmentOptions](#EventSourcesClientListByEnvironmentOptions)) ([EventSourcesClientListByEnvironmentResponse](#EventSourcesClientListByEnvironmentResponse), [error](/builtin#error))
```

ListByEnvironment - Lists all the available event sources associated with the subscription and within the specified resource group and environment. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* options - EventSourcesClientListByEnvironmentOptions contains the optional parameters for the EventSourcesClient.ListByEnvironment method.
Example [¶](#example-EventSourcesClient.ListByEnvironment)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/EventSourcesListByEnvironment.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewEventSourcesClient().ListByEnvironment(ctx, "rg1", "env1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use response here. We use blank identifier for just demo purposes.
	_ = res
	// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
	// res.EventSourceListResponse = armtimeseriesinsights.EventSourceListResponse{
	// 	Value: []armtimeseriesinsights.EventSourceResourceClassification{
	// 		&armtimeseriesinsights.EventHubEventSourceResource{
	// 			Name: to.Ptr("es1"),
	// 			Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments/EventSources"),
	// 			ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1/eventSources/es1"),
	// 			Location: to.Ptr("West US"),
	// 			Tags: map[string]*string{
	// 			},
	// 			Kind: to.Ptr(armtimeseriesinsights.EventSourceResourceKindMicrosoftEventHub),
	// 			Properties: &armtimeseriesinsights.EventHubEventSourceResourceProperties{
	// 				CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
	// 				ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
	// 				IngressStartAt: &armtimeseriesinsights.IngressStartAtProperties{
	// 					Type: to.Ptr(armtimeseriesinsights.IngressStartAtTypeEarliestAvailable),
	// 				},
	// 				LocalTimestamp: &armtimeseriesinsights.LocalTimestamp{
	// 					Format: to.Ptr(armtimeseriesinsights.LocalTimestampFormat("TimeSpan")),
	// 					TimeZoneOffset: &armtimeseriesinsights.LocalTimestampTimeZoneOffset{
	// 						PropertyName: to.Ptr("someEventPropertyName"),
	// 					},
	// 				},
	// 				EventSourceResourceID: to.Ptr("somePathInArm"),
	// 				ConsumerGroupName: to.Ptr("cgn"),
	// 				EventHubName: to.Ptr("ehn"),
	// 				KeyName: to.Ptr("managementKey"),
	// 				ServiceBusNamespace: to.Ptr("sbn"),
	// 			},
	// 	}},
	// }
}
```

```
Output:
```

#### func (*EventSourcesClient) [Update](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/eventsources_client.go#L294) [¶](#EventSourcesClient.Update)

```
func (client *[EventSourcesClient](#EventSourcesClient)) Update(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), eventSourceName [string](/builtin#string), eventSourceUpdateParameters [EventSourceUpdateParametersClassification](#EventSourceUpdateParametersClassification), options *[EventSourcesClientUpdateOptions](#EventSourcesClientUpdateOptions)) ([EventSourcesClientUpdateResponse](#EventSourcesClientUpdateResponse), [error](/builtin#error))
```

Update - Updates the event source with the specified name in the specified subscription, resource group, and environment. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* eventSourceName - The name of the Time Series Insights event source associated with the specified environment.
* eventSourceUpdateParameters - Request object that contains the updated information for the event source.
* options - EventSourcesClientUpdateOptions contains the optional parameters for the EventSourcesClient.Update method.

Example [¶](#example-EventSourcesClient.Update)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/EventSourcesPatchTags.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewEventSourcesClient().Update(ctx, "rg1", "env1", "es1", &armtimeseriesinsights.EventSourceUpdateParameters{
		Tags: map[string]*string{
			"someKey": to.Ptr("someValue"),
		},
	}, nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use response here. We use blank identifier for just demo purposes.
	_ = res
	// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
	// res = armtimeseriesinsights.EventSourcesClientUpdateResponse{
	// 	EventSourceResourceClassification: &armtimeseriesinsights.EventHubEventSourceResource{
	// 		Name: to.Ptr("es1"),
	// 		Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments/EventSources"),
	// 		ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1/eventSources/es1"),
	// 		Location: to.Ptr("West US"),
	// 		Tags: map[string]*string{
	// 			"someKey": to.Ptr("someValue"),
	// 		},
	// 		Kind: to.Ptr(armtimeseriesinsights.EventSourceResourceKindMicrosoftEventHub),
	// 		Properties: &armtimeseriesinsights.EventHubEventSourceResourceProperties{
	// 			CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
	// 			ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
	// 			IngressStartAt: &armtimeseriesinsights.IngressStartAtProperties{
	// 				Type: to.Ptr(armtimeseriesinsights.IngressStartAtTypeEarliestAvailable),
	// 			},
	// 			LocalTimestamp: &armtimeseriesinsights.LocalTimestamp{
	// 				Format: to.Ptr(armtimeseriesinsights.LocalTimestampFormat("TimeSpan")),
	// 				TimeZoneOffset: &armtimeseriesinsights.LocalTimestampTimeZoneOffset{
	// 					PropertyName: to.Ptr("someEventPropertyName"),
	// 				},
	// 			},
	// 			TimestampPropertyName: to.Ptr("someOtherTimestampProperty"),
	// 			EventSourceResourceID: to.Ptr("somePathInArm"),
	// 			ConsumerGroupName: to.Ptr("cgn"),
	// 			EventHubName: to.Ptr("ehn"),
	// 			KeyName: to.Ptr("managementKey"),
	// 			ServiceBusNamespace: to.Ptr("sbn"),
	// 		},
	// 	},
	// }
}
```

```
Output:
```

#### type [EventSourcesClientCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L658) [¶](#EventSourcesClientCreateOrUpdateOptions)

```
type EventSourcesClientCreateOrUpdateOptions struct {
}
```

EventSourcesClientCreateOrUpdateOptions contains the optional parameters for the EventSourcesClient.CreateOrUpdate method.

#### type [EventSourcesClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L98) [¶](#EventSourcesClientCreateOrUpdateResponse)

```
type EventSourcesClientCreateOrUpdateResponse struct {
	[EventSourceResourceClassification](#EventSourceResourceClassification)
}
```

EventSourcesClientCreateOrUpdateResponse contains the response from method EventSourcesClient.CreateOrUpdate.

#### func (*EventSourcesClientCreateOrUpdateResponse) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L103) [¶](#EventSourcesClientCreateOrUpdateResponse.UnmarshalJSON)

```
func (e *[EventSourcesClientCreateOrUpdateResponse](#EventSourcesClientCreateOrUpdateResponse)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type EventSourcesClientCreateOrUpdateResponse.
#### type [EventSourcesClientDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L663) [¶](#EventSourcesClientDeleteOptions) ``` type EventSourcesClientDeleteOptions struct { } ``` EventSourcesClientDeleteOptions contains the optional parameters for the EventSourcesClient.Delete method. #### type [EventSourcesClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L113) [¶](#EventSourcesClientDeleteResponse) ``` type EventSourcesClientDeleteResponse struct { } ``` EventSourcesClientDeleteResponse contains the response from method EventSourcesClient.Delete. #### type [EventSourcesClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L668) [¶](#EventSourcesClientGetOptions) ``` type EventSourcesClientGetOptions struct { } ``` EventSourcesClientGetOptions contains the optional parameters for the EventSourcesClient.Get method. #### type [EventSourcesClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L118) [¶](#EventSourcesClientGetResponse) ``` type EventSourcesClientGetResponse struct { [EventSourceResourceClassification](#EventSourceResourceClassification) } ``` EventSourcesClientGetResponse contains the response from method EventSourcesClient.Get. #### func (*EventSourcesClientGetResponse) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L123) [¶](#EventSourcesClientGetResponse.UnmarshalJSON) ``` func (e *[EventSourcesClientGetResponse](#EventSourcesClientGetResponse)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventSourcesClientGetResponse. #### type [EventSourcesClientListByEnvironmentOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L674) [¶](#EventSourcesClientListByEnvironmentOptions) ``` type EventSourcesClientListByEnvironmentOptions struct { } ``` EventSourcesClientListByEnvironmentOptions contains the optional parameters for the EventSourcesClient.ListByEnvironment method. #### type [EventSourcesClientListByEnvironmentResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L133) [¶](#EventSourcesClientListByEnvironmentResponse) ``` type EventSourcesClientListByEnvironmentResponse struct { [EventSourceListResponse](#EventSourceListResponse) } ``` EventSourcesClientListByEnvironmentResponse contains the response from method EventSourcesClient.ListByEnvironment. 
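The response types above embed EventSourceResourceClassification, so callers typically type-switch on the result to reach kind-specific fields. A minimal sketch, reusing the placeholder resource names from the examples above; the IoTHubEventSourceResource case and the non-nil fields are assumptions consistent with the Event Hub output shown earlier rather than guarantees.

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewEventSourcesClient().Get(context.Background(), "rg1", "env1", "es1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// The response embeds EventSourceResourceClassification; a type switch
	// recovers the kind-specific resource. Fields are assumed non-nil for brevity.
	switch resource := res.EventSourceResourceClassification.(type) {
	case *armtimeseriesinsights.EventHubEventSourceResource:
		fmt.Println("Event Hub event source, consumer group:", *resource.Properties.ConsumerGroupName)
	case *armtimeseriesinsights.IoTHubEventSourceResource:
		fmt.Println("IoT Hub event source:", *resource.Name)
	default:
		fmt.Println("unrecognized event source kind")
	}
}
```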
#### type [EventSourcesClientUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L679) [¶](#EventSourcesClientUpdateOptions) ``` type EventSourcesClientUpdateOptions struct { } ``` EventSourcesClientUpdateOptions contains the optional parameters for the EventSourcesClient.Update method. #### type [EventSourcesClientUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L138) [¶](#EventSourcesClientUpdateResponse) ``` type EventSourcesClientUpdateResponse struct { [EventSourceResourceClassification](#EventSourceResourceClassification) } ``` EventSourcesClientUpdateResponse contains the response from method EventSourcesClient.Update. #### func (*EventSourcesClientUpdateResponse) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L143) [¶](#EventSourcesClientUpdateResponse.UnmarshalJSON) ``` func (e *[EventSourcesClientUpdateResponse](#EventSourcesClientUpdateResponse)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EventSourcesClientUpdateResponse. #### type [Gen1EnvironmentCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L685) [¶](#Gen1EnvironmentCreateOrUpdateParameters) ``` type Gen1EnvironmentCreateOrUpdateParameters struct { // REQUIRED; The kind of the environment. Kind *[EnvironmentKind](#EnvironmentKind) `json:"kind,omitempty"` // REQUIRED; The location of the resource. Location *[string](/builtin#string) `json:"location,omitempty"` // REQUIRED; Properties used to create a Gen1 environment. Properties *[Gen1EnvironmentCreationProperties](#Gen1EnvironmentCreationProperties) `json:"properties,omitempty"` // REQUIRED; The sku determines the type of environment, either Gen1 (S1 or S2) or Gen2 (L1). For Gen1 environments the sku // determines the capacity of the environment, the ingress rate, and the billing rate. SKU *[SKU](#SKU) `json:"sku,omitempty"` // Key-value pairs of additional properties for the resource. Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` } ``` Gen1EnvironmentCreateOrUpdateParameters - Parameters supplied to the Create or Update Environment operation for a Gen1 environment. #### func (*Gen1EnvironmentCreateOrUpdateParameters) [GetEnvironmentCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L705) [¶](#Gen1EnvironmentCreateOrUpdateParameters.GetEnvironmentCreateOrUpdateParameters) ``` func (g *[Gen1EnvironmentCreateOrUpdateParameters](#Gen1EnvironmentCreateOrUpdateParameters)) GetEnvironmentCreateOrUpdateParameters() *[EnvironmentCreateOrUpdateParameters](#EnvironmentCreateOrUpdateParameters) ``` GetEnvironmentCreateOrUpdateParameters implements the EnvironmentCreateOrUpdateParametersClassification interface for type Gen1EnvironmentCreateOrUpdateParameters. 
#### func (Gen1EnvironmentCreateOrUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1135) [¶](#Gen1EnvironmentCreateOrUpdateParameters.MarshalJSON) ``` func (g [Gen1EnvironmentCreateOrUpdateParameters](#Gen1EnvironmentCreateOrUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen1EnvironmentCreateOrUpdateParameters. #### func (*Gen1EnvironmentCreateOrUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1146) [¶](#Gen1EnvironmentCreateOrUpdateParameters.UnmarshalJSON) ``` func (g *[Gen1EnvironmentCreateOrUpdateParameters](#Gen1EnvironmentCreateOrUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen1EnvironmentCreateOrUpdateParameters. #### type [Gen1EnvironmentCreationProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L715) [¶](#Gen1EnvironmentCreationProperties) ``` type Gen1EnvironmentCreationProperties struct { // REQUIRED; ISO8601 timespan specifying the minimum number of days the environment's events will be available for query. DataRetentionTime *[string](/builtin#string) `json:"dataRetentionTime,omitempty"` // The list of event properties which will be used to partition data in the environment. Currently, only a single partition // key property is supported. PartitionKeyProperties []*[TimeSeriesIDProperty](#TimeSeriesIDProperty) `json:"partitionKeyProperties,omitempty"` // The behavior the Time Series Insights service should take when the environment's capacity has been exceeded. If "PauseIngress" // is specified, new events will not be read from the event source. If // "PurgeOldData" is specified, new events will continue to be read and old events will be deleted from the environment. The // default behavior is PurgeOldData. StorageLimitExceededBehavior *[StorageLimitExceededBehavior](#StorageLimitExceededBehavior) `json:"storageLimitExceededBehavior,omitempty"` } ``` Gen1EnvironmentCreationProperties - Properties used to create a Gen1 environment. #### func (Gen1EnvironmentCreationProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1178) [¶](#Gen1EnvironmentCreationProperties.MarshalJSON) ``` func (g [Gen1EnvironmentCreationProperties](#Gen1EnvironmentCreationProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen1EnvironmentCreationProperties. 
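For orientation, here is a hand-written sketch (not a generated API example) that builds Gen1 create-or-update parameters and marshals them with the MarshalJSON implementation documented above. The 30-day retention, "West US" location, and S1 SKU are placeholder values; the EnvironmentKindGen1, SKUNameS1, and StorageLimitExceededBehaviorPurgeOldData constant names are assumed to follow this package's usual naming pattern.

```
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	params := armtimeseriesinsights.Gen1EnvironmentCreateOrUpdateParameters{
		Kind:     to.Ptr(armtimeseriesinsights.EnvironmentKindGen1),
		Location: to.Ptr("West US"),
		SKU: &armtimeseriesinsights.SKU{
			Name:     to.Ptr(armtimeseriesinsights.SKUNameS1),
			Capacity: to.Ptr[int32](1),
		},
		Properties: &armtimeseriesinsights.Gen1EnvironmentCreationProperties{
			// ISO8601 timespan: keep events queryable for at least 30 days.
			DataRetentionTime: to.Ptr("P30D"),
			// Purge old data (the documented default) rather than pausing ingress.
			StorageLimitExceededBehavior: to.Ptr(armtimeseriesinsights.StorageLimitExceededBehaviorPurgeOldData),
		},
	}
	// json.Marshal uses the MarshalJSON implementation documented above,
	// producing the request-body shape for the create-or-update operation.
	body, err := json.Marshal(params)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}
```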
#### func (*Gen1EnvironmentCreationProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1187) [¶](#Gen1EnvironmentCreationProperties.UnmarshalJSON) added in v1.1.0 ``` func (g *[Gen1EnvironmentCreationProperties](#Gen1EnvironmentCreationProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen1EnvironmentCreationProperties. #### type [Gen1EnvironmentMutableProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L731) [¶](#Gen1EnvironmentMutableProperties) ``` type Gen1EnvironmentMutableProperties struct { // ISO8601 timespan specifying the minimum number of days the environment's events will be available for query. DataRetentionTime *[string](/builtin#string) `json:"dataRetentionTime,omitempty"` // The behavior the Time Series Insights service should take when the environment's capacity has been exceeded. If "PauseIngress" // is specified, new events will not be read from the event source. If // "PurgeOldData" is specified, new events will continue to be read and old events will be deleted from the environment. The // default behavior is PurgeOldData. StorageLimitExceededBehavior *[StorageLimitExceededBehavior](#StorageLimitExceededBehavior) `json:"storageLimitExceededBehavior,omitempty"` } ``` Gen1EnvironmentMutableProperties - An object that represents a set of mutable Gen1 environment resource properties. #### func (Gen1EnvironmentMutableProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1213) [¶](#Gen1EnvironmentMutableProperties.MarshalJSON) added in v1.1.0 ``` func (g [Gen1EnvironmentMutableProperties](#Gen1EnvironmentMutableProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen1EnvironmentMutableProperties. #### func (*Gen1EnvironmentMutableProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1221) [¶](#Gen1EnvironmentMutableProperties.UnmarshalJSON) added in v1.1.0 ``` func (g *[Gen1EnvironmentMutableProperties](#Gen1EnvironmentMutableProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen1EnvironmentMutableProperties. #### type [Gen1EnvironmentResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L744) [¶](#Gen1EnvironmentResource) ``` type Gen1EnvironmentResource struct { // REQUIRED; The kind of the environment. Kind *[EnvironmentResourceKind](#EnvironmentResourceKind) `json:"kind,omitempty"` // REQUIRED; Resource location Location *[string](/builtin#string) `json:"location,omitempty"` // REQUIRED; Properties of the Gen1 environment. 
Properties *[Gen1EnvironmentResourceProperties](#Gen1EnvironmentResourceProperties) `json:"properties,omitempty"` // REQUIRED; The sku determines the type of environment, either Gen1 (S1 or S2) or Gen2 (L1). For Gen1 environments the sku // determines the capacity of the environment, the ingress rate, and the billing rate. SKU *[SKU](#SKU) `json:"sku,omitempty"` // Resource tags Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` // READ-ONLY; Resource Id ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"` // READ-ONLY; Resource name Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"` // READ-ONLY; Resource type Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"` } ``` Gen1EnvironmentResource - An environment is a set of time-series data available for query, and is the top level Azure Time Series Insights resource. Gen1 environments have data retention limits. #### func (*Gen1EnvironmentResource) [GetEnvironmentResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L772) [¶](#Gen1EnvironmentResource.GetEnvironmentResource) ``` func (g *[Gen1EnvironmentResource](#Gen1EnvironmentResource)) GetEnvironmentResource() *[EnvironmentResource](#EnvironmentResource) ``` GetEnvironmentResource implements the EnvironmentResourceClassification interface for type Gen1EnvironmentResource. #### func (Gen1EnvironmentResource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1244) [¶](#Gen1EnvironmentResource.MarshalJSON) added in v1.1.0 ``` func (g [Gen1EnvironmentResource](#Gen1EnvironmentResource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen1EnvironmentResource. #### func (*Gen1EnvironmentResource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1258) [¶](#Gen1EnvironmentResource.UnmarshalJSON) ``` func (g *[Gen1EnvironmentResource](#Gen1EnvironmentResource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen1EnvironmentResource. #### type [Gen1EnvironmentResourceProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L785) [¶](#Gen1EnvironmentResourceProperties) ``` type Gen1EnvironmentResourceProperties struct { // REQUIRED; ISO8601 timespan specifying the minimum number of days the environment's events will be available for query. DataRetentionTime *[string](/builtin#string) `json:"dataRetentionTime,omitempty"` // The list of event properties which will be used to partition data in the environment. Currently, only a single partition // key property is supported. PartitionKeyProperties []*[TimeSeriesIDProperty](#TimeSeriesIDProperty) `json:"partitionKeyProperties,omitempty"` // The behavior the Time Series Insights service should take when the environment's capacity has been exceeded. 
If "PauseIngress" // is specified, new events will not be read from the event source. If // "PurgeOldData" is specified, new events will continue to be read and old events will be deleted from the environment. The // default behavior is PurgeOldData. StorageLimitExceededBehavior *[StorageLimitExceededBehavior](#StorageLimitExceededBehavior) `json:"storageLimitExceededBehavior,omitempty"` // READ-ONLY; The time the resource was created. CreationTime *[time](/time).[Time](/time#Time) `json:"creationTime,omitempty" azure:"ro"` // READ-ONLY; The fully qualified domain name used to access the environment data, e.g. to query the environment's events // or upload reference data for the environment. DataAccessFqdn *[string](/builtin#string) `json:"dataAccessFqdn,omitempty" azure:"ro"` // READ-ONLY; An id used to access the environment data, e.g. to query the environment's events or upload reference data for // the environment. DataAccessID *[string](/builtin#string) `json:"dataAccessId,omitempty" azure:"ro"` // READ-ONLY; Provisioning state of the resource. ProvisioningState *[ProvisioningState](#ProvisioningState) `json:"provisioningState,omitempty" azure:"ro"` // READ-ONLY; An object that represents the status of the environment, and its internal state in the Time Series Insights // service. Status *[EnvironmentStatus](#EnvironmentStatus) `json:"status,omitempty" azure:"ro"` } ``` Gen1EnvironmentResourceProperties - Properties of the Gen1 environment. #### func (Gen1EnvironmentResourceProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1299) [¶](#Gen1EnvironmentResourceProperties.MarshalJSON) ``` func (g [Gen1EnvironmentResourceProperties](#Gen1EnvironmentResourceProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen1EnvironmentResourceProperties. #### func (*Gen1EnvironmentResourceProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1313) [¶](#Gen1EnvironmentResourceProperties.UnmarshalJSON) ``` func (g *[Gen1EnvironmentResourceProperties](#Gen1EnvironmentResourceProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen1EnvironmentResourceProperties. #### type [Gen1EnvironmentUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L819) [¶](#Gen1EnvironmentUpdateParameters) ``` type Gen1EnvironmentUpdateParameters struct { // REQUIRED; The kind of the environment. Kind *[EnvironmentKind](#EnvironmentKind) `json:"kind,omitempty"` // Properties of the Gen1 environment. Properties *[Gen1EnvironmentMutableProperties](#Gen1EnvironmentMutableProperties) `json:"properties,omitempty"` // The sku of the environment. SKU *[SKU](#SKU) `json:"sku,omitempty"` // Key-value pairs of additional properties for the environment. Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` } ``` Gen1EnvironmentUpdateParameters - Parameters supplied to the Update Environment operation to update a Gen1 environment. 
#### func (*Gen1EnvironmentUpdateParameters) [GetEnvironmentUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L834) [¶](#Gen1EnvironmentUpdateParameters.GetEnvironmentUpdateParameters) ``` func (g *[Gen1EnvironmentUpdateParameters](#Gen1EnvironmentUpdateParameters)) GetEnvironmentUpdateParameters() *[EnvironmentUpdateParameters](#EnvironmentUpdateParameters) ``` GetEnvironmentUpdateParameters implements the EnvironmentUpdateParametersClassification interface for type Gen1EnvironmentUpdateParameters. #### func (Gen1EnvironmentUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1354) [¶](#Gen1EnvironmentUpdateParameters.MarshalJSON) ``` func (g [Gen1EnvironmentUpdateParameters](#Gen1EnvironmentUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen1EnvironmentUpdateParameters. #### func (*Gen1EnvironmentUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1364) [¶](#Gen1EnvironmentUpdateParameters.UnmarshalJSON) ``` func (g *[Gen1EnvironmentUpdateParameters](#Gen1EnvironmentUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen1EnvironmentUpdateParameters. #### type [Gen2EnvironmentCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L843) [¶](#Gen2EnvironmentCreateOrUpdateParameters) ``` type Gen2EnvironmentCreateOrUpdateParameters struct { // REQUIRED; The kind of the environment. Kind *[EnvironmentKind](#EnvironmentKind) `json:"kind,omitempty"` // REQUIRED; The location of the resource. Location *[string](/builtin#string) `json:"location,omitempty"` // REQUIRED; Properties used to create a Gen2 environment. Properties *[Gen2EnvironmentCreationProperties](#Gen2EnvironmentCreationProperties) `json:"properties,omitempty"` // REQUIRED; The sku determines the type of environment, either Gen1 (S1 or S2) or Gen2 (L1). For Gen1 environments the sku // determines the capacity of the environment, the ingress rate, and the billing rate. SKU *[SKU](#SKU) `json:"sku,omitempty"` // Key-value pairs of additional properties for the resource. Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` } ``` Gen2EnvironmentCreateOrUpdateParameters - Parameters supplied to the Create or Update Environment operation for a Gen2 environment. 
#### func (*Gen2EnvironmentCreateOrUpdateParameters) [GetEnvironmentCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L863) [¶](#Gen2EnvironmentCreateOrUpdateParameters.GetEnvironmentCreateOrUpdateParameters) ``` func (g *[Gen2EnvironmentCreateOrUpdateParameters](#Gen2EnvironmentCreateOrUpdateParameters)) GetEnvironmentCreateOrUpdateParameters() *[EnvironmentCreateOrUpdateParameters](#EnvironmentCreateOrUpdateParameters) ``` GetEnvironmentCreateOrUpdateParameters implements the EnvironmentCreateOrUpdateParametersClassification interface for type Gen2EnvironmentCreateOrUpdateParameters. #### func (Gen2EnvironmentCreateOrUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1393) [¶](#Gen2EnvironmentCreateOrUpdateParameters.MarshalJSON) ``` func (g [Gen2EnvironmentCreateOrUpdateParameters](#Gen2EnvironmentCreateOrUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen2EnvironmentCreateOrUpdateParameters. #### func (*Gen2EnvironmentCreateOrUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1404) [¶](#Gen2EnvironmentCreateOrUpdateParameters.UnmarshalJSON) ``` func (g *[Gen2EnvironmentCreateOrUpdateParameters](#Gen2EnvironmentCreateOrUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen2EnvironmentCreateOrUpdateParameters. #### type [Gen2EnvironmentCreationProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L873) [¶](#Gen2EnvironmentCreationProperties) ``` type Gen2EnvironmentCreationProperties struct { // REQUIRED; The storage configuration provides the connection details that allows the Time Series Insights service to connect // to the customer storage account that is used to store the environment's data. StorageConfiguration *[Gen2StorageConfigurationInput](#Gen2StorageConfigurationInput) `json:"storageConfiguration,omitempty"` // REQUIRED; The list of event properties which will be used to define the environment's time series id. TimeSeriesIDProperties []*[TimeSeriesIDProperty](#TimeSeriesIDProperty) `json:"timeSeriesIdProperties,omitempty"` // The warm store configuration provides the details to create a warm store cache that will retain a copy of the environment's // data available for faster query. WarmStoreConfiguration *[WarmStoreConfigurationProperties](#WarmStoreConfigurationProperties) `json:"warmStoreConfiguration,omitempty"` } ``` Gen2EnvironmentCreationProperties - Properties used to create a Gen2 environment. 
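A hand-written sketch of Gen2 creation parameters under stated assumptions: the ManagementKey field name on Gen2StorageConfigurationInput, the PropertyTypeString and SKUNameL1 constants, and the warm store DataRetention field are taken from the usual shape of this package rather than from this page, so verify them against the full model listing. The account name, key placeholder, and time series id property are illustrative.

```
package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	params := &armtimeseriesinsights.Gen2EnvironmentCreateOrUpdateParameters{
		Kind:     to.Ptr(armtimeseriesinsights.EnvironmentKindGen2),
		Location: to.Ptr("West US"),
		SKU: &armtimeseriesinsights.SKU{
			Name:     to.Ptr(armtimeseriesinsights.SKUNameL1),
			Capacity: to.Ptr[int32](1),
		},
		Properties: &armtimeseriesinsights.Gen2EnvironmentCreationProperties{
			// REQUIRED: the customer storage account that holds the environment's data.
			StorageConfiguration: &armtimeseriesinsights.Gen2StorageConfigurationInput{
				AccountName:   to.Ptr("mystorageaccount"),
				ManagementKey: to.Ptr("<storage-account-management-key>"),
			},
			// REQUIRED: the event properties that define the environment's time series id.
			TimeSeriesIDProperties: []*armtimeseriesinsights.TimeSeriesIDProperty{{
				Name: to.Ptr("deviceId"),
				Type: to.Ptr(armtimeseriesinsights.PropertyTypeString),
			}},
			// Optional warm store: cache recent data for faster queries.
			WarmStoreConfiguration: &armtimeseriesinsights.WarmStoreConfigurationProperties{
				DataRetention: to.Ptr("P7D"),
			},
		},
	}

	// The classification interface exposes the fields shared by Gen1 and Gen2.
	common := params.GetEnvironmentCreateOrUpdateParameters()
	fmt.Println("location:", *common.Location)
}
```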
#### func (Gen2EnvironmentCreationProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1436) [¶](#Gen2EnvironmentCreationProperties.MarshalJSON) ``` func (g [Gen2EnvironmentCreationProperties](#Gen2EnvironmentCreationProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen2EnvironmentCreationProperties. #### func (*Gen2EnvironmentCreationProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1445) [¶](#Gen2EnvironmentCreationProperties.UnmarshalJSON) added in v1.1.0 ``` func (g *[Gen2EnvironmentCreationProperties](#Gen2EnvironmentCreationProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen2EnvironmentCreationProperties. #### type [Gen2EnvironmentMutableProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L887) [¶](#Gen2EnvironmentMutableProperties) ``` type Gen2EnvironmentMutableProperties struct { // The storage configuration provides the connection details that allows the Time Series Insights service to connect to the // customer storage account that is used to store the environment's data. StorageConfiguration *[Gen2StorageConfigurationMutableProperties](#Gen2StorageConfigurationMutableProperties) `json:"storageConfiguration,omitempty"` // The warm store configuration provides the details to create a warm store cache that will retain a copy of the environment's // data available for faster query. WarmStoreConfiguration *[WarmStoreConfigurationProperties](#WarmStoreConfigurationProperties) `json:"warmStoreConfiguration,omitempty"` } ``` Gen2EnvironmentMutableProperties - An object that represents a set of mutable Gen2 environment resource properties. #### func (Gen2EnvironmentMutableProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1471) [¶](#Gen2EnvironmentMutableProperties.MarshalJSON) added in v1.1.0 ``` func (g [Gen2EnvironmentMutableProperties](#Gen2EnvironmentMutableProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen2EnvironmentMutableProperties. #### func (*Gen2EnvironmentMutableProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1479) [¶](#Gen2EnvironmentMutableProperties.UnmarshalJSON) added in v1.1.0 ``` func (g *[Gen2EnvironmentMutableProperties](#Gen2EnvironmentMutableProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen2EnvironmentMutableProperties. 
#### type [Gen2EnvironmentResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L899) [¶](#Gen2EnvironmentResource) ``` type Gen2EnvironmentResource struct { // REQUIRED; The kind of the environment. Kind *[EnvironmentResourceKind](#EnvironmentResourceKind) `json:"kind,omitempty"` // REQUIRED; Resource location Location *[string](/builtin#string) `json:"location,omitempty"` // REQUIRED; Properties of the Gen2 environment. Properties *[Gen2EnvironmentResourceProperties](#Gen2EnvironmentResourceProperties) `json:"properties,omitempty"` // REQUIRED; The sku determines the type of environment, either Gen1 (S1 or S2) or Gen2 (L1). For Gen1 environments the sku // determines the capacity of the environment, the ingress rate, and the billing rate. SKU *[SKU](#SKU) `json:"sku,omitempty"` // Resource tags Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` // READ-ONLY; Resource Id ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"` // READ-ONLY; Resource name Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"` // READ-ONLY; Resource type Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"` } ``` Gen2EnvironmentResource - An environment is a set of time-series data available for query, and is the top level Azure Time Series Insights resource. Gen2 environments do not have set data retention limits. #### func (*Gen2EnvironmentResource) [GetEnvironmentResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L927) [¶](#Gen2EnvironmentResource.GetEnvironmentResource) ``` func (g *[Gen2EnvironmentResource](#Gen2EnvironmentResource)) GetEnvironmentResource() *[EnvironmentResource](#EnvironmentResource) ``` GetEnvironmentResource implements the EnvironmentResourceClassification interface for type Gen2EnvironmentResource. #### func (Gen2EnvironmentResource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1502) [¶](#Gen2EnvironmentResource.MarshalJSON) added in v1.1.0 ``` func (g [Gen2EnvironmentResource](#Gen2EnvironmentResource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen2EnvironmentResource. #### func (*Gen2EnvironmentResource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1516) [¶](#Gen2EnvironmentResource.UnmarshalJSON) ``` func (g *[Gen2EnvironmentResource](#Gen2EnvironmentResource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen2EnvironmentResource. 
#### type [Gen2EnvironmentResourceProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L940) [¶](#Gen2EnvironmentResourceProperties) ``` type Gen2EnvironmentResourceProperties struct { // REQUIRED; The storage configuration provides the connection details that allows the Time Series Insights service to connect // to the customer storage account that is used to store the environment's data. StorageConfiguration *[Gen2StorageConfigurationOutput](#Gen2StorageConfigurationOutput) `json:"storageConfiguration,omitempty"` // REQUIRED; The list of event properties which will be used to define the environment's time series id. TimeSeriesIDProperties []*[TimeSeriesIDProperty](#TimeSeriesIDProperty) `json:"timeSeriesIdProperties,omitempty"` // The warm store configuration provides the details to create a warm store cache that will retain a copy of the environment's // data available for faster query. WarmStoreConfiguration *[WarmStoreConfigurationProperties](#WarmStoreConfigurationProperties) `json:"warmStoreConfiguration,omitempty"` // READ-ONLY; The time the resource was created. CreationTime *[time](/time).[Time](/time#Time) `json:"creationTime,omitempty" azure:"ro"` // READ-ONLY; The fully qualified domain name used to access the environment data, e.g. to query the environment's events // or upload reference data for the environment. DataAccessFqdn *[string](/builtin#string) `json:"dataAccessFqdn,omitempty" azure:"ro"` // READ-ONLY; An id used to access the environment data, e.g. to query the environment's events or upload reference data for // the environment. DataAccessID *[string](/builtin#string) `json:"dataAccessId,omitempty" azure:"ro"` // READ-ONLY; Provisioning state of the resource. ProvisioningState *[ProvisioningState](#ProvisioningState) `json:"provisioningState,omitempty" azure:"ro"` // READ-ONLY; An object that represents the status of the environment, and its internal state in the Time Series Insights // service. Status *[EnvironmentStatus](#EnvironmentStatus) `json:"status,omitempty" azure:"ro"` } ``` Gen2EnvironmentResourceProperties - Properties of the Gen2 environment. #### func (Gen2EnvironmentResourceProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1557) [¶](#Gen2EnvironmentResourceProperties.MarshalJSON) ``` func (g [Gen2EnvironmentResourceProperties](#Gen2EnvironmentResourceProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen2EnvironmentResourceProperties. #### func (*Gen2EnvironmentResourceProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1571) [¶](#Gen2EnvironmentResourceProperties.UnmarshalJSON) ``` func (g *[Gen2EnvironmentResourceProperties](#Gen2EnvironmentResourceProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen2EnvironmentResourceProperties. 
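Because UnmarshalJSON is implemented, decoding a response body is a plain encoding/json call, and the READ-ONLY creationTime lands in a time.Time. A minimal sketch with a trimmed, made-up payload; real bodies come from the environments client:

```
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	// Hypothetical fragment of a GET response body.
	payload := []byte(`{
		"dataAccessFqdn": "11111111-2222-3333-4444-555555555555.env.timeseries.azure.com",
		"dataAccessId": "11111111-2222-3333-4444-555555555555",
		"creationTime": "2023-01-02T03:04:05Z",
		"provisioningState": "Succeeded"
	}`)

	var props armtimeseriesinsights.Gen2EnvironmentResourceProperties
	// json.Unmarshal dispatches to UnmarshalJSON above, which parses
	// the RFC3339 creationTime into a *time.Time.
	if err := json.Unmarshal(payload, &props); err != nil {
		log.Fatal(err)
	}
	fmt.Println(*props.DataAccessFqdn, props.CreationTime.UTC())
}
```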
#### type [Gen2EnvironmentUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L972) [¶](#Gen2EnvironmentUpdateParameters) ``` type Gen2EnvironmentUpdateParameters struct { // REQUIRED; The kind of the environment. Kind *[EnvironmentKind](#EnvironmentKind) `json:"kind,omitempty"` // Properties of the Gen2 environment. Properties *[Gen2EnvironmentMutableProperties](#Gen2EnvironmentMutableProperties) `json:"properties,omitempty"` // Key-value pairs of additional properties for the environment. Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` } ``` Gen2EnvironmentUpdateParameters - Parameters supplied to the Update Environment operation to update a Gen2 environment. #### func (*Gen2EnvironmentUpdateParameters) [GetEnvironmentUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L984) [¶](#Gen2EnvironmentUpdateParameters.GetEnvironmentUpdateParameters) ``` func (g *[Gen2EnvironmentUpdateParameters](#Gen2EnvironmentUpdateParameters)) GetEnvironmentUpdateParameters() *[EnvironmentUpdateParameters](#EnvironmentUpdateParameters) ``` GetEnvironmentUpdateParameters implements the EnvironmentUpdateParametersClassification interface for type Gen2EnvironmentUpdateParameters. #### func (Gen2EnvironmentUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1612) [¶](#Gen2EnvironmentUpdateParameters.MarshalJSON) ``` func (g [Gen2EnvironmentUpdateParameters](#Gen2EnvironmentUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen2EnvironmentUpdateParameters. #### func (*Gen2EnvironmentUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1621) [¶](#Gen2EnvironmentUpdateParameters.UnmarshalJSON) ``` func (g *[Gen2EnvironmentUpdateParameters](#Gen2EnvironmentUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen2EnvironmentUpdateParameters. #### type [Gen2StorageConfigurationInput](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L993) [¶](#Gen2StorageConfigurationInput) ``` type Gen2StorageConfigurationInput struct { // REQUIRED; The name of the storage account that will hold the environment's Gen2 data. AccountName *[string](/builtin#string) `json:"accountName,omitempty"` // REQUIRED; The value of the management key that grants the Time Series Insights service write access to the storage account. // This property is not shown in environment responses. 
ManagementKey *[string](/builtin#string) `json:"managementKey,omitempty"` } ``` Gen2StorageConfigurationInput - The storage configuration provides the connection details that allows the Time Series Insights service to connect to the customer storage account that is used to store the environment's data. #### func (Gen2StorageConfigurationInput) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1647) [¶](#Gen2StorageConfigurationInput.MarshalJSON) added in v1.1.0 ``` func (g [Gen2StorageConfigurationInput](#Gen2StorageConfigurationInput)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen2StorageConfigurationInput. #### func (*Gen2StorageConfigurationInput) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1655) [¶](#Gen2StorageConfigurationInput.UnmarshalJSON) added in v1.1.0 ``` func (g *[Gen2StorageConfigurationInput](#Gen2StorageConfigurationInput)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen2StorageConfigurationInput. #### type [Gen2StorageConfigurationMutableProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1004) [¶](#Gen2StorageConfigurationMutableProperties) ``` type Gen2StorageConfigurationMutableProperties struct { // REQUIRED; The value of the management key that grants the Time Series Insights service write access to the storage account. // This property is not shown in environment responses. ManagementKey *[string](/builtin#string) `json:"managementKey,omitempty"` } ``` Gen2StorageConfigurationMutableProperties - The storage configuration provides the connection details that allows the Time Series Insights service to connect to the customer storage account that is used to store the environment's data. #### func (Gen2StorageConfigurationMutableProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1678) [¶](#Gen2StorageConfigurationMutableProperties.MarshalJSON) added in v1.1.0 ``` func (g [Gen2StorageConfigurationMutableProperties](#Gen2StorageConfigurationMutableProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen2StorageConfigurationMutableProperties. #### func (*Gen2StorageConfigurationMutableProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1685) [¶](#Gen2StorageConfigurationMutableProperties.UnmarshalJSON) added in v1.1.0 ``` func (g *[Gen2StorageConfigurationMutableProperties](#Gen2StorageConfigurationMutableProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen2StorageConfigurationMutableProperties. 
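Since managementKey is the only mutable field of the Gen2 storage configuration, rotating the storage key amounts to sending a Gen2EnvironmentUpdateParameters whose properties carry a fresh Gen2StorageConfigurationMutableProperties. A minimal sketch that only inspects the wire payload rather than calling a client; it assumes the EnvironmentKindGen2 constant documented with EnvironmentKind, and the key value is a placeholder:

```
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	params := armtimeseriesinsights.Gen2EnvironmentUpdateParameters{
		Kind: to.Ptr(armtimeseriesinsights.EnvironmentKindGen2), // assumed constant for the "Gen2" kind
		Properties: &armtimeseriesinsights.Gen2EnvironmentMutableProperties{
			StorageConfiguration: &armtimeseriesinsights.Gen2StorageConfigurationMutableProperties{
				ManagementKey: to.Ptr("<new-storage-account-key>"), // write-only on the wire
			},
		},
	}

	// MarshalJSON injects the "kind" discriminator into the payload.
	body, err := json.Marshal(params)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}
```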
#### type [Gen2StorageConfigurationOutput](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1012) [¶](#Gen2StorageConfigurationOutput) ``` type Gen2StorageConfigurationOutput struct { // REQUIRED; The name of the storage account that will hold the environment's Gen2 data. AccountName *[string](/builtin#string) `json:"accountName,omitempty"` } ``` Gen2StorageConfigurationOutput - The storage configuration provides the non-secret connection details about the customer storage account that is used to store the environment's data. #### func (Gen2StorageConfigurationOutput) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1705) [¶](#Gen2StorageConfigurationOutput.MarshalJSON) added in v1.1.0 ``` func (g [Gen2StorageConfigurationOutput](#Gen2StorageConfigurationOutput)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Gen2StorageConfigurationOutput. #### func (*Gen2StorageConfigurationOutput) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1712) [¶](#Gen2StorageConfigurationOutput.UnmarshalJSON) added in v1.1.0 ``` func (g *[Gen2StorageConfigurationOutput](#Gen2StorageConfigurationOutput)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Gen2StorageConfigurationOutput. #### type [IngressEnvironmentStatus](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1018) [¶](#IngressEnvironmentStatus) ``` type IngressEnvironmentStatus struct { // This string represents the state of ingress operations on an environment. It can be "Disabled", "Ready", "Running", "Paused" // or "Unknown" State *[IngressState](#IngressState) `json:"state,omitempty"` // READ-ONLY; An object that contains the details about an environment's state. StateDetails *[EnvironmentStateDetails](#EnvironmentStateDetails) `json:"stateDetails,omitempty" azure:"ro"` } ``` IngressEnvironmentStatus - An object that represents the status of ingress on an environment. #### func (IngressEnvironmentStatus) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1732) [¶](#IngressEnvironmentStatus.MarshalJSON) added in v1.1.0 ``` func (i [IngressEnvironmentStatus](#IngressEnvironmentStatus)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type IngressEnvironmentStatus. 
#### func (*IngressEnvironmentStatus) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1740) [¶](#IngressEnvironmentStatus.UnmarshalJSON) added in v1.1.0

```
func (i *[IngressEnvironmentStatus](#IngressEnvironmentStatus)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type IngressEnvironmentStatus.

#### type [IngressStartAtProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1028) [¶](#IngressStartAtProperties)

```
type IngressStartAtProperties struct {
	// ISO8601 UTC datetime with seconds precision (milliseconds are optional), specifying the date and time that will be the
	// starting point for Events to be consumed.
	Time *[string](/builtin#string) `json:"time,omitempty"`

	// The type of the ingressStartAt. It can be "EarliestAvailable", "EventSourceCreationTime", or "CustomEnqueuedTime".
	Type *[IngressStartAtType](#IngressStartAtType) `json:"type,omitempty"`
}
```

IngressStartAtProperties - An object that contains the details about the starting point in time to ingest events.

#### func (IngressStartAtProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1763) [¶](#IngressStartAtProperties.MarshalJSON) added in v1.1.0

```
func (i [IngressStartAtProperties](#IngressStartAtProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type IngressStartAtProperties.

#### func (*IngressStartAtProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1771) [¶](#IngressStartAtProperties.UnmarshalJSON) added in v1.1.0

```
func (i *[IngressStartAtProperties](#IngressStartAtProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type IngressStartAtProperties.

#### type [IngressStartAtType](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L117) [¶](#IngressStartAtType)

```
type IngressStartAtType [string](/builtin#string)
```

IngressStartAtType - The type of the ingressStartAt. It can be "EarliestAvailable", "EventSourceCreationTime", or "CustomEnqueuedTime".
``` const ( IngressStartAtTypeCustomEnqueuedTime [IngressStartAtType](#IngressStartAtType) = "CustomEnqueuedTime" IngressStartAtTypeEarliestAvailable [IngressStartAtType](#IngressStartAtType) = "EarliestAvailable" IngressStartAtTypeEventSourceCreationTime [IngressStartAtType](#IngressStartAtType) = "EventSourceCreationTime" ) ``` #### func [PossibleIngressStartAtTypeValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L126) [¶](#PossibleIngressStartAtTypeValues) ``` func PossibleIngressStartAtTypeValues() [][IngressStartAtType](#IngressStartAtType) ``` PossibleIngressStartAtTypeValues returns the possible values for the IngressStartAtType const type. #### type [IngressState](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L136) [¶](#IngressState) ``` type IngressState [string](/builtin#string) ``` IngressState - This string represents the state of ingress operations on an environment. It can be "Disabled", "Ready", "Running", "Paused" or "Unknown" ``` const ( IngressStateDisabled [IngressState](#IngressState) = "Disabled" IngressStatePaused [IngressState](#IngressState) = "Paused" IngressStateReady [IngressState](#IngressState) = "Ready" IngressStateRunning [IngressState](#IngressState) = "Running" IngressStateUnknown [IngressState](#IngressState) = "Unknown" ) ``` #### func [PossibleIngressStateValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L147) [¶](#PossibleIngressStateValues) ``` func PossibleIngressStateValues() [][IngressState](#IngressState) ``` PossibleIngressStateValues returns the possible values for the IngressState const type. #### type [IoTHubEventSourceCommonProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1038) [¶](#IoTHubEventSourceCommonProperties) ``` type IoTHubEventSourceCommonProperties struct { // REQUIRED; The name of the iot hub's consumer group that holds the partitions from which events will be read. ConsumerGroupName *[string](/builtin#string) `json:"consumerGroupName,omitempty"` // REQUIRED; The resource id of the event source in Azure Resource Manager. EventSourceResourceID *[string](/builtin#string) `json:"eventSourceResourceId,omitempty"` // REQUIRED; The name of the iot hub. IotHubName *[string](/builtin#string) `json:"iotHubName,omitempty"` // REQUIRED; The name of the Shared Access Policy key that grants the Time Series Insights service access to the iot hub. // This shared access policy key must grant 'service connect' permissions to the iot hub. KeyName *[string](/builtin#string) `json:"keyName,omitempty"` // An object that contains the details about the starting point in time to ingest events. IngressStartAt *[IngressStartAtProperties](#IngressStartAtProperties) `json:"ingressStartAt,omitempty"` // An object that represents the local timestamp property. It contains the format of local timestamp that needs to be used // and the corresponding timezone offset information. 
If a value isn't specified // for localTimestamp, or if null, then the local timestamp will not be ingressed with the events. LocalTimestamp *[LocalTimestamp](#LocalTimestamp) `json:"localTimestamp,omitempty"` // The event property that will be used as the event source's timestamp. If a value isn't specified for timestampPropertyName, // or if null or empty-string is specified, the event creation time will be // used. TimestampPropertyName *[string](/builtin#string) `json:"timestampPropertyName,omitempty"` // READ-ONLY; The time the resource was created. CreationTime *[time](/time).[Time](/time#Time) `json:"creationTime,omitempty" azure:"ro"` // READ-ONLY; Provisioning state of the resource. ProvisioningState *[ProvisioningState](#ProvisioningState) `json:"provisioningState,omitempty" azure:"ro"` } ``` IoTHubEventSourceCommonProperties - Properties of the IoTHub event source. #### func (IoTHubEventSourceCommonProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1794) [¶](#IoTHubEventSourceCommonProperties.MarshalJSON) ``` func (i [IoTHubEventSourceCommonProperties](#IoTHubEventSourceCommonProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type IoTHubEventSourceCommonProperties. #### func (*IoTHubEventSourceCommonProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1809) [¶](#IoTHubEventSourceCommonProperties.UnmarshalJSON) ``` func (i *[IoTHubEventSourceCommonProperties](#IoTHubEventSourceCommonProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type IoTHubEventSourceCommonProperties. #### type [IoTHubEventSourceCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1074) [¶](#IoTHubEventSourceCreateOrUpdateParameters) ``` type IoTHubEventSourceCreateOrUpdateParameters struct { // REQUIRED; The kind of the event source. Kind *[EventSourceKind](#EventSourceKind) `json:"kind,omitempty"` // REQUIRED; The location of the resource. Location *[string](/builtin#string) `json:"location,omitempty"` // REQUIRED; Properties of the IoTHub event source that are required on create or update requests. Properties *[IoTHubEventSourceCreationProperties](#IoTHubEventSourceCreationProperties) `json:"properties,omitempty"` // An object that represents the local timestamp property. It contains the format of local timestamp that needs to be used // and the corresponding timezone offset information. If a value isn't specified // for localTimestamp, or if null, then the local timestamp will not be ingressed with the events. LocalTimestamp *[LocalTimestamp](#LocalTimestamp) `json:"localTimestamp,omitempty"` // Key-value pairs of additional properties for the resource. Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` } ``` IoTHubEventSourceCreateOrUpdateParameters - Parameters supplied to the Create or Update Event Source operation for an IoTHub event source. 
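Putting the pieces together, a create-or-update request needs the kind discriminator, a location, and the creation properties described next. A minimal sketch that builds the parameters and prints the payload instead of calling a client; the EventSourceKindMicrosoftIoTHub constant is assumed from the EventSourceKind values documented elsewhere on this page, and the resource IDs and key are placeholders:

```
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	params := armtimeseriesinsights.IoTHubEventSourceCreateOrUpdateParameters{
		Kind:     to.Ptr(armtimeseriesinsights.EventSourceKindMicrosoftIoTHub), // assumed constant for "Microsoft.IoTHub"
		Location: to.Ptr("West US"),
		Properties: &armtimeseriesinsights.IoTHubEventSourceCreationProperties{
			ConsumerGroupName:     to.Ptr("tsi-consumer-group"),
			EventSourceResourceID: to.Ptr("/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Devices/IotHubs/<hub>"),
			IotHubName:            to.Ptr("<hub>"),
			KeyName:               to.Ptr("service"),             // policy must grant 'service connect'
			SharedAccessKey:       to.Ptr("<shared-access-key>"), // write-only; not echoed in responses
			IngressStartAt: &armtimeseriesinsights.IngressStartAtProperties{
				Type: to.Ptr(armtimeseriesinsights.IngressStartAtTypeEarliestAvailable),
			},
		},
	}

	body, err := json.Marshal(params)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}
```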
#### func (*IoTHubEventSourceCreateOrUpdateParameters) [GetEventSourceCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1095) [¶](#IoTHubEventSourceCreateOrUpdateParameters.GetEventSourceCreateOrUpdateParameters) ``` func (i *[IoTHubEventSourceCreateOrUpdateParameters](#IoTHubEventSourceCreateOrUpdateParameters)) GetEventSourceCreateOrUpdateParameters() *[EventSourceCreateOrUpdateParameters](#EventSourceCreateOrUpdateParameters) ``` GetEventSourceCreateOrUpdateParameters implements the EventSourceCreateOrUpdateParametersClassification interface for type IoTHubEventSourceCreateOrUpdateParameters. #### func (IoTHubEventSourceCreateOrUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1853) [¶](#IoTHubEventSourceCreateOrUpdateParameters.MarshalJSON) ``` func (i [IoTHubEventSourceCreateOrUpdateParameters](#IoTHubEventSourceCreateOrUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type IoTHubEventSourceCreateOrUpdateParameters. #### func (*IoTHubEventSourceCreateOrUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1864) [¶](#IoTHubEventSourceCreateOrUpdateParameters.UnmarshalJSON) ``` func (i *[IoTHubEventSourceCreateOrUpdateParameters](#IoTHubEventSourceCreateOrUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type IoTHubEventSourceCreateOrUpdateParameters. #### type [IoTHubEventSourceCreationProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1105) [¶](#IoTHubEventSourceCreationProperties) ``` type IoTHubEventSourceCreationProperties struct { // REQUIRED; The name of the iot hub's consumer group that holds the partitions from which events will be read. ConsumerGroupName *[string](/builtin#string) `json:"consumerGroupName,omitempty"` // REQUIRED; The resource id of the event source in Azure Resource Manager. EventSourceResourceID *[string](/builtin#string) `json:"eventSourceResourceId,omitempty"` // REQUIRED; The name of the iot hub. IotHubName *[string](/builtin#string) `json:"iotHubName,omitempty"` // REQUIRED; The name of the Shared Access Policy key that grants the Time Series Insights service access to the iot hub. // This shared access policy key must grant 'service connect' permissions to the iot hub. KeyName *[string](/builtin#string) `json:"keyName,omitempty"` // REQUIRED; The value of the Shared Access Policy key that grants the Time Series Insights service read access to the iot // hub. This property is not shown in event source responses. SharedAccessKey *[string](/builtin#string) `json:"sharedAccessKey,omitempty"` // An object that contains the details about the starting point in time to ingest events. 
IngressStartAt *[IngressStartAtProperties](#IngressStartAtProperties) `json:"ingressStartAt,omitempty"` // An object that represents the local timestamp property. It contains the format of local timestamp that needs to be used // and the corresponding timezone offset information. If a value isn't specified // for localTimestamp, or if null, then the local timestamp will not be ingressed with the events. LocalTimestamp *[LocalTimestamp](#LocalTimestamp) `json:"localTimestamp,omitempty"` // The event property that will be used as the event source's timestamp. If a value isn't specified for timestampPropertyName, // or if null or empty-string is specified, the event creation time will be // used. TimestampPropertyName *[string](/builtin#string) `json:"timestampPropertyName,omitempty"` // READ-ONLY; The time the resource was created. CreationTime *[time](/time).[Time](/time#Time) `json:"creationTime,omitempty" azure:"ro"` // READ-ONLY; Provisioning state of the resource. ProvisioningState *[ProvisioningState](#ProvisioningState) `json:"provisioningState,omitempty" azure:"ro"` } ``` IoTHubEventSourceCreationProperties - Properties of the IoTHub event source that are required on create or update requests. #### func (IoTHubEventSourceCreationProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1896) [¶](#IoTHubEventSourceCreationProperties.MarshalJSON) ``` func (i [IoTHubEventSourceCreationProperties](#IoTHubEventSourceCreationProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type IoTHubEventSourceCreationProperties. #### func (*IoTHubEventSourceCreationProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1912) [¶](#IoTHubEventSourceCreationProperties.UnmarshalJSON) ``` func (i *[IoTHubEventSourceCreationProperties](#IoTHubEventSourceCreationProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type IoTHubEventSourceCreationProperties. #### type [IoTHubEventSourceMutableProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1144) [¶](#IoTHubEventSourceMutableProperties) ``` type IoTHubEventSourceMutableProperties struct { // The value of the shared access key that grants the Time Series Insights service read access to the iot hub. This property // is not shown in event source responses. SharedAccessKey *[string](/builtin#string) `json:"sharedAccessKey,omitempty"` // The event property that will be used as the event source's timestamp. If a value isn't specified for timestampPropertyName, // or if null or empty-string is specified, the event creation time will be // used. TimestampPropertyName *[string](/builtin#string) `json:"timestampPropertyName,omitempty"` } ``` IoTHubEventSourceMutableProperties - An object that represents a set of mutable IoTHub event source resource properties. 
#### func (IoTHubEventSourceMutableProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1959) [¶](#IoTHubEventSourceMutableProperties.MarshalJSON) added in v1.1.0 ``` func (i [IoTHubEventSourceMutableProperties](#IoTHubEventSourceMutableProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type IoTHubEventSourceMutableProperties. #### func (*IoTHubEventSourceMutableProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1967) [¶](#IoTHubEventSourceMutableProperties.UnmarshalJSON) added in v1.1.0 ``` func (i *[IoTHubEventSourceMutableProperties](#IoTHubEventSourceMutableProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type IoTHubEventSourceMutableProperties. #### type [IoTHubEventSourceResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1156) [¶](#IoTHubEventSourceResource) ``` type IoTHubEventSourceResource struct { // REQUIRED; The kind of the event source. Kind *[EventSourceResourceKind](#EventSourceResourceKind) `json:"kind,omitempty"` // REQUIRED; Resource location Location *[string](/builtin#string) `json:"location,omitempty"` // REQUIRED; Properties of the IoTHub event source resource. Properties *[IoTHubEventSourceResourceProperties](#IoTHubEventSourceResourceProperties) `json:"properties,omitempty"` // Resource tags Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` // READ-ONLY; Resource Id ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"` // READ-ONLY; Resource name Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"` // READ-ONLY; Resource type Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"` } ``` IoTHubEventSourceResource - An event source that receives its data from an Azure IoTHub. #### func (*IoTHubEventSourceResource) [GetEventSourceResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1180) [¶](#IoTHubEventSourceResource.GetEventSourceResource) ``` func (i *[IoTHubEventSourceResource](#IoTHubEventSourceResource)) GetEventSourceResource() *[EventSourceResource](#EventSourceResource) ``` GetEventSourceResource implements the EventSourceResourceClassification interface for type IoTHubEventSourceResource. #### func (IoTHubEventSourceResource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L1990) [¶](#IoTHubEventSourceResource.MarshalJSON) added in v1.1.0 ``` func (i [IoTHubEventSourceResource](#IoTHubEventSourceResource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type IoTHubEventSourceResource. 
#### func (*IoTHubEventSourceResource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2003) [¶](#IoTHubEventSourceResource.UnmarshalJSON) ``` func (i *[IoTHubEventSourceResource](#IoTHubEventSourceResource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type IoTHubEventSourceResource. #### type [IoTHubEventSourceResourceProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1192) [¶](#IoTHubEventSourceResourceProperties) ``` type IoTHubEventSourceResourceProperties struct { // REQUIRED; The name of the iot hub's consumer group that holds the partitions from which events will be read. ConsumerGroupName *[string](/builtin#string) `json:"consumerGroupName,omitempty"` // REQUIRED; The resource id of the event source in Azure Resource Manager. EventSourceResourceID *[string](/builtin#string) `json:"eventSourceResourceId,omitempty"` // REQUIRED; The name of the iot hub. IotHubName *[string](/builtin#string) `json:"iotHubName,omitempty"` // REQUIRED; The name of the Shared Access Policy key that grants the Time Series Insights service access to the iot hub. // This shared access policy key must grant 'service connect' permissions to the iot hub. KeyName *[string](/builtin#string) `json:"keyName,omitempty"` // An object that contains the details about the starting point in time to ingest events. IngressStartAt *[IngressStartAtProperties](#IngressStartAtProperties) `json:"ingressStartAt,omitempty"` // An object that represents the local timestamp property. It contains the format of local timestamp that needs to be used // and the corresponding timezone offset information. If a value isn't specified // for localTimestamp, or if null, then the local timestamp will not be ingressed with the events. LocalTimestamp *[LocalTimestamp](#LocalTimestamp) `json:"localTimestamp,omitempty"` // The event property that will be used as the event source's timestamp. If a value isn't specified for timestampPropertyName, // or if null or empty-string is specified, the event creation time will be // used. TimestampPropertyName *[string](/builtin#string) `json:"timestampPropertyName,omitempty"` // READ-ONLY; The time the resource was created. CreationTime *[time](/time).[Time](/time#Time) `json:"creationTime,omitempty" azure:"ro"` // READ-ONLY; Provisioning state of the resource. ProvisioningState *[ProvisioningState](#ProvisioningState) `json:"provisioningState,omitempty" azure:"ro"` } ``` IoTHubEventSourceResourceProperties - Properties of the IoTHub event source resource. #### func (IoTHubEventSourceResourceProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2041) [¶](#IoTHubEventSourceResourceProperties.MarshalJSON) ``` func (i [IoTHubEventSourceResourceProperties](#IoTHubEventSourceResourceProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type IoTHubEventSourceResourceProperties. 
#### func (*IoTHubEventSourceResourceProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2056) [¶](#IoTHubEventSourceResourceProperties.UnmarshalJSON) ``` func (i *[IoTHubEventSourceResourceProperties](#IoTHubEventSourceResourceProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type IoTHubEventSourceResourceProperties. #### type [IoTHubEventSourceUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1228) [¶](#IoTHubEventSourceUpdateParameters) ``` type IoTHubEventSourceUpdateParameters struct { // REQUIRED; The kind of the event source. Kind *[EventSourceKind](#EventSourceKind) `json:"kind,omitempty"` // Properties of the IoTHub event source. Properties *[IoTHubEventSourceMutableProperties](#IoTHubEventSourceMutableProperties) `json:"properties,omitempty"` // Key-value pairs of additional properties for the event source. Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` } ``` IoTHubEventSourceUpdateParameters - Parameters supplied to the Update Event Source operation to update an IoTHub event source. #### func (*IoTHubEventSourceUpdateParameters) [GetEventSourceUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1240) [¶](#IoTHubEventSourceUpdateParameters.GetEventSourceUpdateParameters) ``` func (i *[IoTHubEventSourceUpdateParameters](#IoTHubEventSourceUpdateParameters)) GetEventSourceUpdateParameters() *[EventSourceUpdateParameters](#EventSourceUpdateParameters) ``` GetEventSourceUpdateParameters implements the EventSourceUpdateParametersClassification interface for type IoTHubEventSourceUpdateParameters. #### func (IoTHubEventSourceUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2100) [¶](#IoTHubEventSourceUpdateParameters.MarshalJSON) ``` func (i [IoTHubEventSourceUpdateParameters](#IoTHubEventSourceUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type IoTHubEventSourceUpdateParameters. #### func (*IoTHubEventSourceUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2109) [¶](#IoTHubEventSourceUpdateParameters.UnmarshalJSON) ``` func (i *[IoTHubEventSourceUpdateParameters](#IoTHubEventSourceUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type IoTHubEventSourceUpdateParameters. 
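An update only needs the kind discriminator plus whichever mutable fields changed. A minimal sketch rotating the shared access key and repointing the timestamp property; EventSourceKindMicrosoftIoTHub is assumed as above, and the key is a placeholder:

```
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	// Patch only what changed: the secret and the timestamp property.
	params := armtimeseriesinsights.IoTHubEventSourceUpdateParameters{
		Kind: to.Ptr(armtimeseriesinsights.EventSourceKindMicrosoftIoTHub), // assumed constant
		Properties: &armtimeseriesinsights.IoTHubEventSourceMutableProperties{
			SharedAccessKey:       to.Ptr("<rotated-key>"),
			TimestampPropertyName: to.Ptr("deviceTimestamp"),
		},
	}

	body, err := json.Marshal(params)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}
```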
#### type [LocalTimestamp](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1250) [¶](#LocalTimestamp)

```
type LocalTimestamp struct {
	// An enum that represents the format of the local timestamp property that needs to be set.
	Format *[LocalTimestampFormat](#LocalTimestampFormat) `json:"format,omitempty"`

	// An object that represents the offset information for the local timestamp format specified. Should not be specified for
	// LocalTimestampFormat - Embedded.
	TimeZoneOffset *[LocalTimestampTimeZoneOffset](#LocalTimestampTimeZoneOffset) `json:"timeZoneOffset,omitempty"`
}
```

LocalTimestamp - An object that represents the local timestamp property. It contains the format of local timestamp that needs to be used and the corresponding timezone offset information. If a value isn't specified for localTimestamp, or if null, then the local timestamp will not be ingressed with the events.

#### func (LocalTimestamp) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2135) [¶](#LocalTimestamp.MarshalJSON) added in v1.1.0

```
func (l [LocalTimestamp](#LocalTimestamp)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type LocalTimestamp.

#### func (*LocalTimestamp) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2143) [¶](#LocalTimestamp.UnmarshalJSON) added in v1.1.0

```
func (l *[LocalTimestamp](#LocalTimestamp)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type LocalTimestamp.

#### type [LocalTimestampFormat](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L158) [¶](#LocalTimestampFormat)

```
type LocalTimestampFormat [string](/builtin#string)
```

LocalTimestampFormat - An enum that represents the format of the local timestamp property that needs to be set.

```
const (
	LocalTimestampFormatEmbedded [LocalTimestampFormat](#LocalTimestampFormat) = "Embedded"
)
```

#### func [PossibleLocalTimestampFormatValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L165) [¶](#PossibleLocalTimestampFormatValues)

```
func PossibleLocalTimestampFormatValues() [][LocalTimestampFormat](#LocalTimestampFormat)
```

PossibleLocalTimestampFormatValues returns the possible values for the LocalTimestampFormat const type.

#### type [LocalTimestampTimeZoneOffset](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1261) [¶](#LocalTimestampTimeZoneOffset)

```
type LocalTimestampTimeZoneOffset struct {
	// The event property that will contain the offset information to calculate the local timestamp. When the LocalTimestampFormat
	// is Iana, the property name will contain the name of the column which contains the IANA Time Zone Name (eg: America/Los_Angeles).
	// When LocalTimestampFormat is Timespan, it contains the name of the property which contains values representing the offset
	// (eg: P1D or 1.00:00:00).
	PropertyName *[string](/builtin#string) `json:"propertyName,omitempty"`
}
```

LocalTimestampTimeZoneOffset - An object that represents the offset information for the local timestamp format specified. Should not be specified for LocalTimestampFormat - Embedded.

#### func (LocalTimestampTimeZoneOffset) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2166) [¶](#LocalTimestampTimeZoneOffset.MarshalJSON) added in v1.1.0

```
func (l [LocalTimestampTimeZoneOffset](#LocalTimestampTimeZoneOffset)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type LocalTimestampTimeZoneOffset.

#### func (*LocalTimestampTimeZoneOffset) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2173) [¶](#LocalTimestampTimeZoneOffset.UnmarshalJSON) added in v1.1.0

```
func (l *[LocalTimestampTimeZoneOffset](#LocalTimestampTimeZoneOffset)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type LocalTimestampTimeZoneOffset.

#### type [LogSpecification](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1270) [¶](#LogSpecification)

```
type LogSpecification struct {
	// Log display name.
	DisplayName *[string](/builtin#string) `json:"displayName,omitempty"`

	// Log name.
	Name *[string](/builtin#string) `json:"name,omitempty"`
}
```

LogSpecification - The specification of an Azure Monitoring log.

#### func (LogSpecification) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2193) [¶](#LogSpecification.MarshalJSON) added in v1.1.0

```
func (l [LogSpecification](#LogSpecification)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type LogSpecification.

#### func (*LogSpecification) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2201) [¶](#LogSpecification.UnmarshalJSON) added in v1.1.0

```
func (l *[LogSpecification](#LogSpecification)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type LogSpecification.
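Returning to the LocalTimestamp type above: in this API version Embedded is the only LocalTimestampFormat constant, and per the field documentation TimeZoneOffset must stay unset when the format is Embedded. A minimal sketch of the resulting payload:

```
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	// Embedded format: the offset is carried inside the event itself,
	// so TimeZoneOffset is intentionally left nil.
	lt := armtimeseriesinsights.LocalTimestamp{
		Format: to.Ptr(armtimeseriesinsights.LocalTimestampFormatEmbedded),
	}

	body, err := json.Marshal(lt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body)) // {"format":"Embedded"}
}
```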
#### type [MetricAvailability](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1279) [¶](#MetricAvailability)

```
type MetricAvailability struct {
	BlobDuration *[string](/builtin#string) `json:"blobDuration,omitempty"`
	TimeGrain    *[string](/builtin#string) `json:"timeGrain,omitempty"`
}
```

MetricAvailability - Retention policy of a resource metric.

#### func (MetricAvailability) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2224) [¶](#MetricAvailability.MarshalJSON) added in v1.1.0

```
func (m [MetricAvailability](#MetricAvailability)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type MetricAvailability.

#### func (*MetricAvailability) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2232) [¶](#MetricAvailability.UnmarshalJSON) added in v1.1.0

```
func (m *[MetricAvailability](#MetricAvailability)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type MetricAvailability.

#### type [MetricSpecification](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1285) [¶](#MetricSpecification)

```
type MetricSpecification struct {
	// Aggregation type could be Average.
	AggregationType *[string](/builtin#string) `json:"aggregationType,omitempty"`

	// Retention policies of a resource metric.
	Availabilities []*[MetricAvailability](#MetricAvailability) `json:"availabilities,omitempty"`

	// The category this metric specification belongs to, could be Capacity.
	Category *[string](/builtin#string) `json:"category,omitempty"`

	// Dimensions of blobs, including blob type and access tier.
	Dimensions []*[Dimension](#Dimension) `json:"dimensions,omitempty"`

	// Display description of metric specification.
	DisplayDescription *[string](/builtin#string) `json:"displayDescription,omitempty"`

	// Display name of metric specification.
	DisplayName *[string](/builtin#string) `json:"displayName,omitempty"`

	// Name of metric specification.
	Name *[string](/builtin#string) `json:"name,omitempty"`

	// Account Resource Id.
	ResourceIDDimensionNameOverride *[string](/builtin#string) `json:"resourceIdDimensionNameOverride,omitempty"`

	// Unit could be Bytes or Count.
	Unit *[string](/builtin#string) `json:"unit,omitempty"`
}
```

MetricSpecification - Metric specification of operation.

#### func (MetricSpecification) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2255) [¶](#MetricSpecification.MarshalJSON) added in v1.1.0

```
func (m [MetricSpecification](#MetricSpecification)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type MetricSpecification.
#### func (*MetricSpecification) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2270) [¶](#MetricSpecification.UnmarshalJSON) added in v1.1.0

```
func (m *[MetricSpecification](#MetricSpecification)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type MetricSpecification.

#### type [Operation](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1315) [¶](#Operation)

```
type Operation struct {
	// Properties of the operation, including metric specifications.
	OperationProperties *[OperationProperties](#OperationProperties) `json:"properties,omitempty"`

	// The intended executor of the operation.
	Origin *[string](/builtin#string) `json:"origin,omitempty"`

	// READ-ONLY; Contains the localized display information for this particular operation / action.
	Display *[OperationDisplay](#OperationDisplay) `json:"display,omitempty" azure:"ro"`

	// READ-ONLY; The name of the operation being performed on this particular object.
	Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"`
}
```

Operation - A Time Series Insights REST API operation.

#### func (Operation) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2314) [¶](#Operation.MarshalJSON) added in v1.1.0

```
func (o [Operation](#Operation)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type Operation.

#### func (*Operation) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2324) [¶](#Operation.UnmarshalJSON) added in v1.1.0

```
func (o *[Operation](#Operation)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type Operation.

#### type [OperationDisplay](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1330) [¶](#OperationDisplay)

```
type OperationDisplay struct {
	// READ-ONLY; The localized friendly description for the operation.
	Description *[string](/builtin#string) `json:"description,omitempty" azure:"ro"`

	// READ-ONLY; The localized friendly name for the operation.
	Operation *[string](/builtin#string) `json:"operation,omitempty" azure:"ro"`

	// READ-ONLY; The localized friendly form of the resource provider name.
	Provider *[string](/builtin#string) `json:"provider,omitempty" azure:"ro"`

	// READ-ONLY; The localized friendly form of the resource type related to this action/operation.
	Resource *[string](/builtin#string) `json:"resource,omitempty" azure:"ro"`
}
```

OperationDisplay - Contains the localized display information for this particular operation / action.
#### func (OperationDisplay) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2353) [¶](#OperationDisplay.MarshalJSON) added in v1.1.0

```
func (o [OperationDisplay](#OperationDisplay)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type OperationDisplay.

#### func (*OperationDisplay) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2363) [¶](#OperationDisplay.UnmarshalJSON) added in v1.1.0

```
func (o *[OperationDisplay](#OperationDisplay)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type OperationDisplay.

#### type [OperationListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1346) [¶](#OperationListResult)

```
type OperationListResult struct {
	// READ-ONLY; URL to get the next set of operation list results if there are any.
	NextLink *[string](/builtin#string) `json:"nextLink,omitempty" azure:"ro"`

	// READ-ONLY; List of Time Series Insights operations supported by the Microsoft.TimeSeriesInsights resource provider.
	Value []*[Operation](#Operation) `json:"value,omitempty" azure:"ro"`
}
```

OperationListResult - Result of the request to list Time Series Insights operations. It contains a list of operations and a URL link to get the next set of results.

#### func (OperationListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2392) [¶](#OperationListResult.MarshalJSON) added in v1.1.0

```
func (o [OperationListResult](#OperationListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type OperationListResult.

#### func (*OperationListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2400) [¶](#OperationListResult.UnmarshalJSON) added in v1.1.0

```
func (o *[OperationListResult](#OperationListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type OperationListResult.

#### type [OperationProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1355) [¶](#OperationProperties)

```
type OperationProperties struct {
	// One property of the operation, including metric specifications.
	ServiceSpecification *[ServiceSpecification](#ServiceSpecification) `json:"serviceSpecification,omitempty"`
}
```

OperationProperties - Properties of the operation, including metric specifications.
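Tying Operation, OperationProperties, and ServiceSpecification together: metric metadata hangs off an operation only when OperationProperties is populated, so walking a list result needs nil checks at each hop. A minimal sketch with a hypothetical helper; the ServiceSpecification.MetricSpecifications field is taken from the pager example shown below:

```
package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

// printMetricNames walks one page of operation list results and prints any
// metric specifications attached via OperationProperties.
func printMetricNames(result armtimeseriesinsights.OperationListResult) {
	for _, op := range result.Value {
		if op.OperationProperties == nil || op.OperationProperties.ServiceSpecification == nil {
			continue // most operations carry no metric metadata
		}
		for _, spec := range op.OperationProperties.ServiceSpecification.MetricSpecifications {
			fmt.Printf("%s exposes metric %s (%s)\n", *op.Name, *spec.Name, *spec.Unit)
		}
	}
}

func main() {
	// A made-up single-entry page; real pages come from OperationsClient.NewListPager.
	printMetricNames(armtimeseriesinsights.OperationListResult{
		Value: []*armtimeseriesinsights.Operation{{
			Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/read"),
		}},
	})
}
```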
#### func (OperationProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2423) added in v1.1.0

```
func (o OperationProperties) MarshalJSON() ([]byte, error)
```

MarshalJSON implements the json.Marshaller interface for type OperationProperties.

#### func (*OperationProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2430) added in v1.1.0

```
func (o *OperationProperties) UnmarshalJSON(data []byte) error
```

UnmarshalJSON implements the json.Unmarshaller interface for type OperationProperties.

#### type [OperationsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/operations_client.go#L23)

```
type OperationsClient struct {
	// contains filtered or unexported fields
}
```

OperationsClient contains the methods for the Operations group. Don't use this type directly, use NewOperationsClient() instead.

#### func [NewOperationsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/operations_client.go#L30)

```
func NewOperationsClient(credential azcore.TokenCredential, options *arm.ClientOptions) (*OperationsClient, error)
```

NewOperationsClient creates a new instance of OperationsClient with the specified values.

* credential - used to authorize requests. Usually a credential from azidentity.
* options - pass nil to accept the default values.

#### func (*OperationsClient) [NewListPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/operations_client.go#L45)

```
func (client *OperationsClient) NewListPager(options *OperationsClientListOptions) *runtime.Pager[OperationsClientListResponse]
```

NewListPager - Lists all of the available Time Series Insights related operations.

Generated from API version 2020-05-15

* options - OperationsClientListOptions contains the optional parameters for the OperationsClient.NewListPager method.
Example

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/Operation_List.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	pager := clientFactory.NewOperationsClient().NewListPager(nil)
	for pager.More() {
		page, err := pager.NextPage(ctx)
		if err != nil {
			log.Fatalf("failed to advance page: %v", err)
		}
		for _, v := range page.Value {
			// You could use page here. We use blank identifier for just demo purposes.
			_ = v
		}
		// If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
		// page.OperationListResult = armtimeseriesinsights.OperationListResult{
		//   Value: []*armtimeseriesinsights.Operation{
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/register/action"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Registers the subscription for the Time Series Insights resource provider and enables the creation of Time Series Insights environments."),
		//         Operation: to.Ptr("Registers the Time Series Insights Resource Provider"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Time Series Insights Resource Provider"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/providers/Microsoft.Insights/metricDefinitions/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Gets the available metrics for environments"),
		//         Operation: to.Ptr("Read environments metric definitions"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("The metrics definition of environments"),
		//       },
		//       Origin: to.Ptr("system"),
		//       OperationProperties: &armtimeseriesinsights.OperationProperties{
		//         ServiceSpecification: &armtimeseriesinsights.ServiceSpecification{
		//           MetricSpecifications: []*armtimeseriesinsights.MetricSpecification{
		//             {
		//               Name: to.Ptr("IngressReceivedMessages"),
		//               AggregationType: to.Ptr("Total"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Count of messages read from all Event hub or IoT hub event sources"),
		//               DisplayName: to.Ptr("Ingress Received Messages"),
		//               Unit: to.Ptr("Count"),
		//             },
		//             {
		//               Name: to.Ptr("IngressReceivedInvalidMessages"),
		//               AggregationType: to.Ptr("Total"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Count of invalid messages read from all Event hub or IoT hub event sources"),
		//               DisplayName: to.Ptr("Ingress Received Invalid Messages"),
		//               Unit: to.Ptr("Count"),
		//             },
		//             {
		//               Name: to.Ptr("IngressReceivedBytes"),
		//               AggregationType: to.Ptr("Total"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Count of bytes read from all event sources"),
		//               DisplayName: to.Ptr("Ingress Received Bytes"),
		//               Unit: to.Ptr("Bytes"),
		//             },
		//             {
		//               Name: to.Ptr("IngressStoredBytes"),
		//               AggregationType: to.Ptr("Total"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Total size of events successfully processed and available for query"),
		//               DisplayName: to.Ptr("Ingress Stored Bytes"),
		//               Unit: to.Ptr("Bytes"),
		//             },
		//             {
		//               Name: to.Ptr("IngressStoredEvents"),
		//               AggregationType: to.Ptr("Total"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Count of flattened events successfully processed and available for query"),
		//               DisplayName: to.Ptr("Ingress Stored Events"),
		//               Unit: to.Ptr("Count"),
		//             },
		//             {
		//               Name: to.Ptr("IngressReceivedMessagesTimeLag"),
		//               AggregationType: to.Ptr("Maximum"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Difference between the time that the message is enqueued in the event source and the time it is processed in Ingress"),
		//               DisplayName: to.Ptr("Ingress Received Messages Time Lag"),
		//               Unit: to.Ptr("Seconds"),
		//             },
		//             {
		//               Name: to.Ptr("IngressReceivedMessagesCountLag"),
		//               AggregationType: to.Ptr("Average"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Difference between the sequence number of last enqueued message in the event source partition and sequence number of messages being processed in Ingress"),
		//               DisplayName: to.Ptr("Ingress Received Messages Count Lag"),
		//               Unit: to.Ptr("Count"),
		//             },
		//             {
		//               Name: to.Ptr("WarmStorageMaxProperties"),
		//               AggregationType: to.Ptr("Maximum"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Maximum number of properties used allowed by the environment for S1/S2 SKU and maximum number of properties allowed by Warm Store for PAYG SKU"),
		//               DisplayName: to.Ptr("Warm Storage Max Properties"),
		//               Unit: to.Ptr("Count"),
		//             },
		//             {
		//               Name: to.Ptr("WarmStorageUsedProperties"),
		//               AggregationType: to.Ptr("Maximum"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Number of properties used by the environment for S1/S2 SKU and number of properties used by Warm Store for PAYG SKU"),
		//               DisplayName: to.Ptr("Warm Storage Used Properties "),
		//               Unit: to.Ptr("Count"),
		//           }},
		//         },
		//       },
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/providers/Microsoft.Insights/diagnosticSettings/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Gets the diagnostic setting for the resource"),
		//         Operation: to.Ptr("Read diagnostic setting."),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("environments"),
		//       },
		//       Origin: to.Ptr("system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/providers/Microsoft.Insights/diagnosticSettings/write"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Creates or updates the diagnostic setting for the resource"),
		//         Operation: to.Ptr("Write diagnostic setting."),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("environments"),
		//       },
		//       Origin: to.Ptr("system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/providers/Microsoft.Insights/logDefinitions/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Gets the available logs for environments"),
		//         Operation: to.Ptr("Read environments log definitions"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("environments"),
		//       },
		//       Origin: to.Ptr("system"),
		//       OperationProperties: &armtimeseriesinsights.OperationProperties{
		//         ServiceSpecification: &armtimeseriesinsights.ServiceSpecification{
		//           LogSpecifications: []*armtimeseriesinsights.LogSpecification{
		//             {
		//               Name: to.Ptr("Ingress"),
		//               DisplayName: to.Ptr("Ingress"),
		//             },
		//             {
		//               Name: to.Ptr("Management"),
		//               DisplayName: to.Ptr("Management"),
		//           }},
		//         },
		//       },
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/eventsources/providers/Microsoft.Insights/logDefinitions/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Gets the available logs for the event source"),
		//         Operation: to.Ptr("Read event source log definitions"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Event Source"),
		//       },
		//       Origin: to.Ptr("system"),
		//       OperationProperties: &armtimeseriesinsights.OperationProperties{
		//         ServiceSpecification: &armtimeseriesinsights.ServiceSpecification{
		//           LogSpecifications: []*armtimeseriesinsights.LogSpecification{
		//             {
		//               Name: to.Ptr("Ingress"),
		//               DisplayName: to.Ptr("Ingress"),
		//             },
		//             {
		//               Name: to.Ptr("Management"),
		//               DisplayName: to.Ptr("Management"),
		//           }},
		//         },
		//       },
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/eventsources/providers/Microsoft.Insights/metricDefinitions/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Gets the available metrics for eventsources"),
		//         Operation: to.Ptr("Read eventsources metric definitions"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("The metrics definition of environments/eventsources"),
		//       },
		//       Origin: to.Ptr("system"),
		//       OperationProperties: &armtimeseriesinsights.OperationProperties{
		//         ServiceSpecification: &armtimeseriesinsights.ServiceSpecification{
		//           MetricSpecifications: []*armtimeseriesinsights.MetricSpecification{
		//             {
		//               Name: to.Ptr("IngressReceivedMessages"),
		//               AggregationType: to.Ptr("Total"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Count of messages read from the event source"),
		//               DisplayName: to.Ptr("Ingress Received Messages"),
		//               Unit: to.Ptr("Count"),
		//             },
		//             {
		//               Name: to.Ptr("IngressReceivedInvalidMessages"),
		//               AggregationType: to.Ptr("Total"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Count of invalid messages read from the event source"),
		//               DisplayName: to.Ptr("Ingress Received Invalid Messages"),
		//               Unit: to.Ptr("Count"),
		//             },
		//             {
		//               Name: to.Ptr("IngressReceivedBytes"),
		//               AggregationType: to.Ptr("Total"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Count of bytes read from the event source"),
		//               DisplayName: to.Ptr("Ingress Received Bytes"),
		//               Unit: to.Ptr("Bytes"),
		//             },
		//             {
		//               Name: to.Ptr("IngressStoredBytes"),
		//               AggregationType: to.Ptr("Total"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Total size of events successfully processed and available for query"),
		//               DisplayName: to.Ptr("Ingress Stored Bytes"),
		//               Unit: to.Ptr("Bytes"),
		//             },
		//             {
		//               Name: to.Ptr("IngressStoredEvents"),
		//               AggregationType: to.Ptr("Total"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Count of flattened events successfully processed and available for query"),
		//               DisplayName: to.Ptr("Ingress Stored Events"),
		//               Unit: to.Ptr("Count"),
		//             },
		//             {
		//               Name: to.Ptr("IngressReceivedMessagesTimeLag"),
		//               AggregationType: to.Ptr("Maximum"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Difference between the time that the message is enqueued in the event source and the time it is processed in Ingress"),
		//               DisplayName: to.Ptr("Ingress Received Messages Time Lag"),
		//               Unit: to.Ptr("Seconds"),
		//             },
		//             {
		//               Name: to.Ptr("IngressReceivedMessagesCountLag"),
		//               AggregationType: to.Ptr("Average"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Difference between the sequence number of last enqueued message in the event source partition and sequence number of messages being processed in Ingress"),
		//               DisplayName: to.Ptr("Ingress Received Messages Count Lag"),
		//               Unit: to.Ptr("Count"),
		//             },
		//             {
		//               Name: to.Ptr("WarmStorageMaxProperties"),
		//               AggregationType: to.Ptr("Maximum"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Maximum number of properties used allowed by the environment for S1/S2 SKU and maximum number of properties allowed by Warm Store for PAYG SKU"),
		//               DisplayName: to.Ptr("Warm Storage Max Properties"),
		//               Unit: to.Ptr("Count"),
		//             },
		//             {
		//               Name: to.Ptr("WarmStorageUsedProperties"),
		//               AggregationType: to.Ptr("Maximum"),
		//               Availabilities: []*armtimeseriesinsights.MetricAvailability{
		//                 {
		//                   BlobDuration: to.Ptr("PT1H"),
		//                   TimeGrain: to.Ptr("PT1M"),
		//               }},
		//               DisplayDescription: to.Ptr("Number of properties used by the environment for S1/S2 SKU and number of properties used by Warm Store for PAYG SKU"),
		//               DisplayName: to.Ptr("Warm Storage Used Properties "),
		//               Unit: to.Ptr("Count"),
		//           }},
		//         },
		//       },
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/eventsources/providers/Microsoft.Insights/diagnosticSettings/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Gets the diagnostic setting for the resource"),
		//         Operation: to.Ptr("Read diagnostic setting."),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("environments/eventsources"),
		//       },
		//       Origin: to.Ptr("system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/eventsources/providers/Microsoft.Insights/diagnosticSettings/write"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Creates or updates the diagnostic setting for the resource"),
		//         Operation: to.Ptr("Write diagnostic setting."),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("environments/eventsources"),
		//       },
		//       Origin: to.Ptr("system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Get the properties of an environment."),
		//         Operation: to.Ptr("Read Environment"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Environment"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/write"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Creates a new environment, or updates an existing environment."),
		//         Operation: to.Ptr("Create or Update Environment"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Environment"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/delete"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Deletes the environment."),
		//         Operation: to.Ptr("Delete Environment"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Environment"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/status/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Get the status of the environment, state of its associated operations like ingress."),
		//         Operation: to.Ptr("Read Environment status"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Environment"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/eventsources/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Get the properties of an event source."),
		//         Operation: to.Ptr("Read Event Source"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Event Source"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/eventsources/write"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Creates a new event source for an environment, or updates an existing event source."),
		//         Operation: to.Ptr("Create or Update Event Source"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Event Source"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/eventsources/delete"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Deletes the event source."),
		//         Operation: to.Ptr("Delete Event Source"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Event Source"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/referencedatasets/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Get the properties of a reference data set."),
		//         Operation: to.Ptr("Read Reference Data Set"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Reference Data Set"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/referencedatasets/write"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Creates a new reference data set for an environment, or updates an existing reference data set."),
		//         Operation: to.Ptr("Create or Update Reference Data Set"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Reference Data Set"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/referencedatasets/delete"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Deletes the reference data set."),
		//         Operation: to.Ptr("Delete Reference Data Set"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Reference Data Set"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/accesspolicies/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Get the properties of an access policy."),
		//         Operation: to.Ptr("Read Access Policy"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Access Policy"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/accesspolicies/write"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Creates a new access policy for an environment, or updates an existing access policy."),
		//         Operation: to.Ptr("Create or Update Access Policy"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Access Policy"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/accesspolicies/delete"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Deletes the access policy."),
		//         Operation: to.Ptr("Delete Access Policy"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Access Policy"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/privateEndpointConnectionProxies/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Get the properties of a private endpoint connection proxy."),
		//         Operation: to.Ptr("Read private endpoint connection proxy"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Private endpoint connection proxy"),
		//       },
		//       Origin: to.Ptr("system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/privateEndpointConnectionProxies/write"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Creates a new private endpoint connection proxy for an environment, or updates an existing connection proxy."),
		//         Operation: to.Ptr("Create or Update private endpoint connection proxy"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Private endpoint connection proxy"),
		//       },
		//       Origin: to.Ptr("system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/privateEndpointConnectionProxies/delete"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Deletes the private endpoint connection proxy."),
		//         Operation: to.Ptr("Delete the private endpoint connection proxy"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Private endpoint connection proxy"),
		//       },
		//       Origin: to.Ptr("system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/privateEndpointConnectionProxies/validate/action"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Validate the private endpoint connection proxy object before creation."),
		//         Operation: to.Ptr("Validate the private endpoint connection proxy."),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Private endpoint connection proxy"),
		//       },
		//       Origin: to.Ptr("system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/privateEndpointConnectionProxies/operationresults/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Validate the private endpoint connection proxy operation status."),
		//         Operation: to.Ptr("Get private endpoint connection proxy operation status."),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Private endpoint connection proxy"),
		//       },
		//       Origin: to.Ptr("system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/privateendpointConnections/read"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Get the properties of a private endpoint connection."),
		//         Operation: to.Ptr("Read private endpoint connection."),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Private endpoint connection."),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/privateendpointConnections/write"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Creates a new private endpoint connection for an environment, or updates an existing connection."),
		//         Operation: to.Ptr("Create or Update private endpoint connection."),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Private endpoint connection."),
		//       },
		//       Origin: to.Ptr("user,system"),
		//     },
		//     {
		//       Name: to.Ptr("Microsoft.TimeSeriesInsights/environments/privateendpointConnections/delete"),
		//       Display: &armtimeseriesinsights.OperationDisplay{
		//         Description: to.Ptr("Deletes the private endpoint connection."),
		//         Operation: to.Ptr("Delete the private endpoint connection"),
		//         Provider: to.Ptr("Microsoft Time Series Insights"),
		//         Resource: to.Ptr("Private endpoint connection"),
		//       },
		//       Origin: to.Ptr("user,system"),
		//   }},
		// }
	}
}
```

```
Output:
```

#### type [OperationsClientListOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1361)

```
type OperationsClientListOptions struct {
}
```

OperationsClientListOptions contains the optional parameters for the OperationsClient.NewListPager method.

#### type [OperationsClientListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L153)

```
type OperationsClientListResponse struct {
	OperationListResult
}
```

OperationsClientListResponse contains the response from method OperationsClient.NewListPager.
#### type [PropertyType](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L172)

```
type PropertyType string
```

PropertyType - The type of the property.

```
const (
	PropertyTypeString PropertyType = "String"
)
```

#### func [PossiblePropertyTypeValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L179)

```
func PossiblePropertyTypeValues() []PropertyType
```

PossiblePropertyTypeValues returns the possible values for the PropertyType const type.

#### type [ProvisioningState](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L186)

```
type ProvisioningState string
```

ProvisioningState - Provisioning state of the resource.

```
const (
	ProvisioningStateAccepted  ProvisioningState = "Accepted"
	ProvisioningStateCreating  ProvisioningState = "Creating"
	ProvisioningStateDeleting  ProvisioningState = "Deleting"
	ProvisioningStateFailed    ProvisioningState = "Failed"
	ProvisioningStateSucceeded ProvisioningState = "Succeeded"
	ProvisioningStateUpdating  ProvisioningState = "Updating"
)
```

#### func [PossibleProvisioningStateValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L198)

```
func PossibleProvisioningStateValues() []ProvisioningState
```

PossibleProvisioningStateValues returns the possible values for the ProvisioningState const type.

#### type [ReferenceDataKeyPropertyType](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L210)

```
type ReferenceDataKeyPropertyType string
```

ReferenceDataKeyPropertyType - The type of the key property.

```
const (
	ReferenceDataKeyPropertyTypeBool     ReferenceDataKeyPropertyType = "Bool"
	ReferenceDataKeyPropertyTypeDateTime ReferenceDataKeyPropertyType = "DateTime"
	ReferenceDataKeyPropertyTypeDouble   ReferenceDataKeyPropertyType = "Double"
	ReferenceDataKeyPropertyTypeString   ReferenceDataKeyPropertyType = "String"
)
```

#### func [PossibleReferenceDataKeyPropertyTypeValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L220)

```
func PossibleReferenceDataKeyPropertyTypeValues() []ReferenceDataKeyPropertyType
```

PossibleReferenceDataKeyPropertyTypeValues returns the possible values for the ReferenceDataKeyPropertyType const type.
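The Possible*Values helpers are handy for validating a raw string before converting it to the typed constant. A small sketch using PossibleProvisioningStateValues; the helper function name here is invented for illustration:

```
package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

// isKnownProvisioningState reports whether raw matches one of the
// ProvisioningState constants defined by the package.
func isKnownProvisioningState(raw string) bool {
	for _, s := range armtimeseriesinsights.PossibleProvisioningStateValues() {
		if string(s) == raw {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isKnownProvisioningState("Succeeded")) // true
	fmt.Println(isKnownProvisioningState("Paused"))    // false: not a defined state
}
```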
#### type [ReferenceDataSetCreateOrUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1365)

```
type ReferenceDataSetCreateOrUpdateParameters struct {
	// REQUIRED; The location of the resource.
	Location *string `json:"location,omitempty"`

	// REQUIRED; Properties used to create a reference data set.
	Properties *ReferenceDataSetCreationProperties `json:"properties,omitempty"`

	// Key-value pairs of additional properties for the resource.
	Tags map[string]*string `json:"tags,omitempty"`
}
```

#### func (ReferenceDataSetCreateOrUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2450)

```
func (r ReferenceDataSetCreateOrUpdateParameters) MarshalJSON() ([]byte, error)
```

MarshalJSON implements the json.Marshaller interface for type ReferenceDataSetCreateOrUpdateParameters.

#### func (*ReferenceDataSetCreateOrUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2459) added in v1.1.0

```
func (r *ReferenceDataSetCreateOrUpdateParameters) UnmarshalJSON(data []byte) error
```

UnmarshalJSON implements the json.Unmarshaller interface for type ReferenceDataSetCreateOrUpdateParameters.

#### type [ReferenceDataSetCreationProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1377)

```
type ReferenceDataSetCreationProperties struct {
	// REQUIRED; The list of key properties for the reference data set.
	KeyProperties []*ReferenceDataSetKeyProperty `json:"keyProperties,omitempty"`

	// The reference data set key comparison behavior can be set using this property. By default, the value is 'Ordinal' - which
	// means case sensitive key comparison will be performed while joining reference
	// data with events or while adding new reference data. When 'OrdinalIgnoreCase' is set, case insensitive comparison will
	// be used.
	DataStringComparisonBehavior *DataStringComparisonBehavior `json:"dataStringComparisonBehavior,omitempty"`
}
```

ReferenceDataSetCreationProperties - Properties used to create a reference data set.
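For illustration, a sketch of assembling ReferenceDataSetCreationProperties with case-insensitive key comparison. The DataStringComparisonBehaviorOrdinalIgnoreCase constant name follows the package's usual Type+Value naming convention, but its const block is not reproduced in this section, so treat that identifier as an assumption:

```
package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	props := armtimeseriesinsights.ReferenceDataSetCreationProperties{
		// REQUIRED: at least one key property to join reference data on.
		KeyProperties: []*armtimeseriesinsights.ReferenceDataSetKeyProperty{
			{
				Name: to.Ptr("DeviceId"),
				Type: to.Ptr(armtimeseriesinsights.ReferenceDataKeyPropertyTypeString),
			},
		},
		// Assumed constant name; switches joins from the default
		// case-sensitive 'Ordinal' comparison to 'OrdinalIgnoreCase'.
		DataStringComparisonBehavior: to.Ptr(armtimeseriesinsights.DataStringComparisonBehaviorOrdinalIgnoreCase),
	}
	fmt.Println(*props.KeyProperties[0].Name)
}
```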
#### func (ReferenceDataSetCreationProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2485)

```
func (r ReferenceDataSetCreationProperties) MarshalJSON() ([]byte, error)
```

MarshalJSON implements the json.Marshaller interface for type ReferenceDataSetCreationProperties.

#### func (*ReferenceDataSetCreationProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2493) added in v1.1.0

```
func (r *ReferenceDataSetCreationProperties) UnmarshalJSON(data []byte) error
```

UnmarshalJSON implements the json.Unmarshaller interface for type ReferenceDataSetCreationProperties.

#### type [ReferenceDataSetKeyProperty](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1389)

```
type ReferenceDataSetKeyProperty struct {
	// The name of the key property.
	Name *string `json:"name,omitempty"`

	// The type of the key property.
	Type *ReferenceDataKeyPropertyType `json:"type,omitempty"`
}
```

ReferenceDataSetKeyProperty - A key property for the reference data set. A reference data set can have multiple key properties.

#### func (ReferenceDataSetKeyProperty) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2516) added in v1.1.0

```
func (r ReferenceDataSetKeyProperty) MarshalJSON() ([]byte, error)
```

MarshalJSON implements the json.Marshaller interface for type ReferenceDataSetKeyProperty.

#### func (*ReferenceDataSetKeyProperty) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2524) added in v1.1.0

```
func (r *ReferenceDataSetKeyProperty) UnmarshalJSON(data []byte) error
```

UnmarshalJSON implements the json.Unmarshaller interface for type ReferenceDataSetKeyProperty.

#### type [ReferenceDataSetListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1398)

```
type ReferenceDataSetListResponse struct {
	// Result of the List Reference Data Sets operation.
	Value []*ReferenceDataSetResource `json:"value,omitempty"`
}
```

ReferenceDataSetListResponse - The response of the List Reference Data Sets operation.
#### func (ReferenceDataSetListResponse) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2547) added in v1.1.0

```
func (r ReferenceDataSetListResponse) MarshalJSON() ([]byte, error)
```

MarshalJSON implements the json.Marshaller interface for type ReferenceDataSetListResponse.

#### func (*ReferenceDataSetListResponse) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2554) added in v1.1.0

```
func (r *ReferenceDataSetListResponse) UnmarshalJSON(data []byte) error
```

UnmarshalJSON implements the json.Unmarshaller interface for type ReferenceDataSetListResponse.

#### type [ReferenceDataSetResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1406)

```
type ReferenceDataSetResource struct {
	// REQUIRED; Resource location
	Location *string `json:"location,omitempty"`

	// Properties of the reference data set.
	Properties *ReferenceDataSetResourceProperties `json:"properties,omitempty"`

	// Resource tags
	Tags map[string]*string `json:"tags,omitempty"`

	// READ-ONLY; Resource Id
	ID *string `json:"id,omitempty" azure:"ro"`

	// READ-ONLY; Resource name
	Name *string `json:"name,omitempty" azure:"ro"`

	// READ-ONLY; Resource type
	Type *string `json:"type,omitempty" azure:"ro"`
}
```

ReferenceDataSetResource - A reference data set provides metadata about the events in an environment. Metadata in the reference data set will be joined with events as they are read from event sources. The metadata that makes up the reference data set is uploaded or modified through the Time Series Insights data plane APIs.

#### func (ReferenceDataSetResource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2574) added in v1.1.0

```
func (r ReferenceDataSetResource) MarshalJSON() ([]byte, error)
```

MarshalJSON implements the json.Marshaller interface for type ReferenceDataSetResource.

#### func (*ReferenceDataSetResource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2586) added in v1.1.0

```
func (r *ReferenceDataSetResource) UnmarshalJSON(data []byte) error
```

UnmarshalJSON implements the json.Unmarshaller interface for type ReferenceDataSetResource.
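A minimal sketch of decoding a ReferenceDataSetResource from an invented GET-style payload. The custom UnmarshalJSON also parses creationTime inside its ReferenceDataSetResourceProperties (documented just below) from its RFC 3339 wire form into a *time.Time:

```
package main

import (
	"encoding/json"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	// Invented payload shaped like a GET response for a reference data set.
	payload := []byte(`{
		"name": "rds1",
		"location": "West US",
		"properties": {
			"keyProperties": [{"name": "DeviceId1", "type": "String"}],
			"creationTime": "2017-04-18T19:20:33.2288820Z",
			"provisioningState": "Succeeded"
		}
	}`)

	var rds armtimeseriesinsights.ReferenceDataSetResource
	if err := json.Unmarshal(payload, &rds); err != nil {
		panic(err)
	}
	// CreationTime is now a parsed time.Time, not a string.
	fmt.Println(*rds.Name, rds.Properties.CreationTime.UTC(), *rds.Properties.ProvisioningState)
}
```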
#### type [ReferenceDataSetResourceProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1427)

```
type ReferenceDataSetResourceProperties struct {
	// REQUIRED; The list of key properties for the reference data set.
	KeyProperties []*ReferenceDataSetKeyProperty `json:"keyProperties,omitempty"`

	// The reference data set key comparison behavior can be set using this property. By default, the value is 'Ordinal' - which
	// means case sensitive key comparison will be performed while joining reference
	// data with events or while adding new reference data. When 'OrdinalIgnoreCase' is set, case insensitive comparison will
	// be used.
	DataStringComparisonBehavior *DataStringComparisonBehavior `json:"dataStringComparisonBehavior,omitempty"`

	// READ-ONLY; The time the resource was created.
	CreationTime *time.Time `json:"creationTime,omitempty" azure:"ro"`

	// READ-ONLY; Provisioning state of the resource.
	ProvisioningState *ProvisioningState `json:"provisioningState,omitempty" azure:"ro"`
}
```

ReferenceDataSetResourceProperties - Properties of the reference data set.

#### func (ReferenceDataSetResourceProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2621)

```
func (r ReferenceDataSetResourceProperties) MarshalJSON() ([]byte, error)
```

MarshalJSON implements the json.Marshaller interface for type ReferenceDataSetResourceProperties.

#### func (*ReferenceDataSetResourceProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2631)

```
func (r *ReferenceDataSetResourceProperties) UnmarshalJSON(data []byte) error
```

UnmarshalJSON implements the json.Unmarshaller interface for type ReferenceDataSetResourceProperties.

#### type [ReferenceDataSetUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1445)

```
type ReferenceDataSetUpdateParameters struct {
	// Key-value pairs of additional properties for the reference data set.
	Tags map[string]*string `json:"tags,omitempty"`
}
```

ReferenceDataSetUpdateParameters - Parameters supplied to the Update Reference Data Set operation.
#### func (ReferenceDataSetUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2660)

```
func (r ReferenceDataSetUpdateParameters) MarshalJSON() ([]byte, error)
```

MarshalJSON implements the json.Marshaller interface for type ReferenceDataSetUpdateParameters.

#### func (*ReferenceDataSetUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2667) added in v1.1.0

```
func (r *ReferenceDataSetUpdateParameters) UnmarshalJSON(data []byte) error
```

UnmarshalJSON implements the json.Unmarshaller interface for type ReferenceDataSetUpdateParameters.

#### type [ReferenceDataSetsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/referencedatasets_client.go#L26)

```
type ReferenceDataSetsClient struct {
	// contains filtered or unexported fields
}
```

ReferenceDataSetsClient contains the methods for the ReferenceDataSets group. Don't use this type directly, use NewReferenceDataSetsClient() instead.

#### func [NewReferenceDataSetsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/referencedatasets_client.go#L35)

```
func NewReferenceDataSetsClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*ReferenceDataSetsClient, error)
```

NewReferenceDataSetsClient creates a new instance of ReferenceDataSetsClient with the specified values.

* subscriptionID - Azure Subscription ID.
* credential - used to authorize requests. Usually a credential from azidentity.
* options - pass nil to accept the default values.
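The generated examples below go through NewClientFactory, but you can also construct this client directly when you only need the ReferenceDataSets group. A minimal sketch; "<subscription-id>" is a placeholder, as in the examples:

```
package main

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	// Same arguments the factory forwards: subscription ID, credential, and
	// nil to accept the default client options.
	client, err := armtimeseriesinsights.NewReferenceDataSetsClient("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	_ = client // use client.CreateOrUpdate / Get / Delete / ListByEnvironment / Update
}
```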
#### func (*ReferenceDataSetsClient) [CreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/referencedatasets_client.go#L57)

```
func (client *ReferenceDataSetsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, environmentName string, referenceDataSetName string, parameters ReferenceDataSetCreateOrUpdateParameters, options *ReferenceDataSetsClientCreateOrUpdateOptions) (ReferenceDataSetsClientCreateOrUpdateResponse, error)
```

CreateOrUpdate - Create or update a reference data set in the specified environment. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* referenceDataSetName - Name of the reference data set.
* parameters - Parameters for creating a reference data set.
* options - ReferenceDataSetsClientCreateOrUpdateOptions contains the optional parameters for the ReferenceDataSetsClient.CreateOrUpdate method.

Example

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/ReferenceDataSetsCreate.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewReferenceDataSetsClient().CreateOrUpdate(ctx, "rg1", "env1", "rds1", armtimeseriesinsights.ReferenceDataSetCreateOrUpdateParameters{
		Location: to.Ptr("West US"),
		Properties: &armtimeseriesinsights.ReferenceDataSetCreationProperties{
			KeyProperties: []*armtimeseriesinsights.ReferenceDataSetKeyProperty{
				{
					Name: to.Ptr("DeviceId1"),
					Type: to.Ptr(armtimeseriesinsights.ReferenceDataKeyPropertyTypeString),
				},
				{
					Name: to.Ptr("DeviceFloor"),
					Type: to.Ptr(armtimeseriesinsights.ReferenceDataKeyPropertyTypeDouble),
				}},
		},
	}, nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use response here. We use blank identifier for just demo purposes.
	_ = res
	// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
	// res.ReferenceDataSetResource = armtimeseriesinsights.ReferenceDataSetResource{
	//   Name: to.Ptr("rds1"),
	//   Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments/ReferenceDataSets"),
	//   ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1/referenceDataSets/rds1"),
	//   Location: to.Ptr("West US"),
	//   Tags: map[string]*string{
	//   },
	//   Properties: &armtimeseriesinsights.ReferenceDataSetResourceProperties{
	//     KeyProperties: []*armtimeseriesinsights.ReferenceDataSetKeyProperty{
	//       {
	//         Name: to.Ptr("DeviceId1"),
	//         Type: to.Ptr(armtimeseriesinsights.ReferenceDataKeyPropertyTypeString),
	//       },
	//       {
	//         Name: to.Ptr("DeviceFloor"),
	//         Type: to.Ptr(armtimeseriesinsights.ReferenceDataKeyPropertyTypeDouble),
	//     }},
	//     CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
	//     ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
	//   },
	// }
}
```

```
Output:
```

#### func (*ReferenceDataSetsClient) [Delete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/referencedatasets_client.go#L120)

```
func (client *ReferenceDataSetsClient) Delete(ctx context.Context, resourceGroupName string, environmentName string, referenceDataSetName string, options *ReferenceDataSetsClientDeleteOptions) (ReferenceDataSetsClientDeleteResponse, error)
```

Delete - Deletes the reference data set with the specified name in the specified subscription, resource group, and environment. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* referenceDataSetName - The name of the Time Series Insights reference data set associated with the specified environment.
* options - ReferenceDataSetsClientDeleteOptions contains the optional parameters for the ReferenceDataSetsClient.Delete method.
Example

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/ReferenceDataSetsDelete.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	_, err = clientFactory.NewReferenceDataSetsClient().Delete(ctx, "rg1", "env1", "rds1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
}
```

```
Output:
```

#### func (*ReferenceDataSetsClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/referencedatasets_client.go#L173)

```
func (client *ReferenceDataSetsClient) Get(ctx context.Context, resourceGroupName string, environmentName string, referenceDataSetName string, options *ReferenceDataSetsClientGetOptions) (ReferenceDataSetsClientGetResponse, error)
```

Get - Gets the reference data set with the specified name in the specified environment. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* referenceDataSetName - The name of the Time Series Insights reference data set associated with the specified environment.
* options - ReferenceDataSetsClientGetOptions contains the optional parameters for the ReferenceDataSetsClient.Get method.

Example

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/ReferenceDataSetsGet.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewReferenceDataSetsClient().Get(ctx, "rg1", "env1", "rds1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use response here. We use blank identifier for just demo purposes.
	_ = res
	// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
	// res.ReferenceDataSetResource = armtimeseriesinsights.ReferenceDataSetResource{
	//   Name: to.Ptr("rds1"),
	//   Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments/ReferenceDataSets"),
	//   ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1/referenceDataSets/rds1"),
	//   Location: to.Ptr("West US"),
	//   Tags: map[string]*string{
	//   },
	//   Properties: &armtimeseriesinsights.ReferenceDataSetResourceProperties{
	//     KeyProperties: []*armtimeseriesinsights.ReferenceDataSetKeyProperty{
	//       {
	//         Name: to.Ptr("DeviceId1"),
	//         Type: to.Ptr(armtimeseriesinsights.ReferenceDataKeyPropertyTypeString),
	//       },
	//       {
	//         Name: to.Ptr("DeviceFloor"),
	//         Type: to.Ptr(armtimeseriesinsights.ReferenceDataKeyPropertyTypeDouble),
	//     }},
	//     CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
	//     ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
	//   },
	// }
}
```

```
Output:
```

#### func (*ReferenceDataSetsClient) [ListByEnvironment](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/referencedatasets_client.go#L236)

```
func (client *ReferenceDataSetsClient) ListByEnvironment(ctx context.Context, resourceGroupName string, environmentName string, options *ReferenceDataSetsClientListByEnvironmentOptions) (ReferenceDataSetsClientListByEnvironmentResponse, error)
```

ListByEnvironment - Lists all the available reference data sets associated with the subscription and within the specified resource group and environment. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* options - ReferenceDataSetsClientListByEnvironmentOptions contains the optional parameters for the ReferenceDataSetsClient.ListByEnvironment method.
Example [¶](#example-ReferenceDataSetsClient.ListByEnvironment)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/ReferenceDataSetsListByEnvironment.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewReferenceDataSetsClient().ListByEnvironment(ctx, "rg1", "env1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use the response here; this example discards it with the blank identifier.
	_ = res
	// If the HTTP response code is 200 as defined in the example definition, the response structure
	// would look as follows. Note that all values in the output are placeholders for demo purposes.
	// res.ReferenceDataSetListResponse = armtimeseriesinsights.ReferenceDataSetListResponse{
	// 	Value: []*armtimeseriesinsights.ReferenceDataSetResource{
	// 		{
	// 			Name: to.Ptr("rds1"),
	// 			Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments/ReferenceDataSets"),
	// 			ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1/referenceDataSets/rds1"),
	// 			Location: to.Ptr("West US"),
	// 			Tags: map[string]*string{
	// 			},
	// 			Properties: &armtimeseriesinsights.ReferenceDataSetResourceProperties{
	// 				KeyProperties: []*armtimeseriesinsights.ReferenceDataSetKeyProperty{
	// 					{
	// 						Name: to.Ptr("DeviceId1"),
	// 						Type: to.Ptr(armtimeseriesinsights.ReferenceDataKeyPropertyTypeString),
	// 					},
	// 					{
	// 						Name: to.Ptr("DeviceFloor"),
	// 						Type: to.Ptr(armtimeseriesinsights.ReferenceDataKeyPropertyTypeDouble),
	// 					}},
	// 				CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
	// 				ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
	// 			},
	// 		}},
	// }
}
```

#### func (*ReferenceDataSetsClient) [Update](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/referencedatasets_client.go#L296) [¶](#ReferenceDataSetsClient.Update)

```
func (client *[ReferenceDataSetsClient](#ReferenceDataSetsClient)) Update(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), environmentName [string](/builtin#string), referenceDataSetName [string](/builtin#string), referenceDataSetUpdateParameters [ReferenceDataSetUpdateParameters](#ReferenceDataSetUpdateParameters), options *[ReferenceDataSetsClientUpdateOptions](#ReferenceDataSetsClientUpdateOptions)) ([ReferenceDataSetsClientUpdateResponse](#ReferenceDataSetsClientUpdateResponse), [error](/builtin#error))
```

Update - Updates the reference data set with the specified name in the specified subscription, resource group, and environment. If the operation fails it returns an *azcore.ResponseError type.
Generated from API version 2020-05-15

* resourceGroupName - Name of an Azure Resource group.
* environmentName - The name of the Time Series Insights environment associated with the specified resource group.
* referenceDataSetName - The name of the Time Series Insights reference data set associated with the specified environment.
* referenceDataSetUpdateParameters - Request object that contains the updated information for the reference data set.
* options - ReferenceDataSetsClientUpdateOptions contains the optional parameters for the ReferenceDataSetsClient.Update method.

Example [¶](#example-ReferenceDataSetsClient.Update)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/163e27c0ca7570bc39e00a46f255740d9b3ba3cb/specification/timeseriesinsights/resource-manager/Microsoft.TimeSeriesInsights/stable/2020-05-15/examples/ReferenceDataSetsPatchTags.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	res, err := clientFactory.NewReferenceDataSetsClient().Update(ctx, "rg1", "env1", "rds1", armtimeseriesinsights.ReferenceDataSetUpdateParameters{
		Tags: map[string]*string{
			"someKey": to.Ptr("someValue"),
		},
	}, nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	// You could use the response here; this example discards it with the blank identifier.
	_ = res
	// If the HTTP response code is 200 as defined in the example definition, the response structure
	// would look as follows. Note that all values in the output are placeholders for demo purposes.
	// res.ReferenceDataSetResource = armtimeseriesinsights.ReferenceDataSetResource{
	// 	Name: to.Ptr("rds1"),
	// 	Type: to.Ptr("Microsoft.TimeSeriesInsights/Environments/ReferenceDataSets"),
	// 	ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.TimeSeriesInsights/Environments/env1/referenceDataSets/rds1"),
	// 	Location: to.Ptr("West US"),
	// 	Tags: map[string]*string{
	// 		"someKey": to.Ptr("someValue"),
	// 	},
	// 	Properties: &armtimeseriesinsights.ReferenceDataSetResourceProperties{
	// 		KeyProperties: []*armtimeseriesinsights.ReferenceDataSetKeyProperty{
	// 			{
	// 				Name: to.Ptr("DeviceId1"),
	// 				Type: to.Ptr(armtimeseriesinsights.ReferenceDataKeyPropertyTypeString),
	// 			},
	// 			{
	// 				Name: to.Ptr("DeviceFloor"),
	// 				Type: to.Ptr(armtimeseriesinsights.ReferenceDataKeyPropertyTypeDouble),
	// 			}},
	// 		CreationTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2017-04-18T19:20:33.2288820Z"); return t}()),
	// 		ProvisioningState: to.Ptr(armtimeseriesinsights.ProvisioningStateSucceeded),
	// 	},
	// }
}
```

#### type [ReferenceDataSetsClientCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1452) [¶](#ReferenceDataSetsClientCreateOrUpdateOptions)

```
type ReferenceDataSetsClientCreateOrUpdateOptions struct {
}
```

ReferenceDataSetsClientCreateOrUpdateOptions contains the optional parameters for the ReferenceDataSetsClient.CreateOrUpdate method.

#### type [ReferenceDataSetsClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L158) [¶](#ReferenceDataSetsClientCreateOrUpdateResponse)

```
type ReferenceDataSetsClientCreateOrUpdateResponse struct {
	[ReferenceDataSetResource](#ReferenceDataSetResource)
}
```

ReferenceDataSetsClientCreateOrUpdateResponse contains the response from method ReferenceDataSetsClient.CreateOrUpdate.

#### type [ReferenceDataSetsClientDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1457) [¶](#ReferenceDataSetsClientDeleteOptions)

```
type ReferenceDataSetsClientDeleteOptions struct {
}
```

ReferenceDataSetsClientDeleteOptions contains the optional parameters for the ReferenceDataSetsClient.Delete method.

#### type [ReferenceDataSetsClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L163) [¶](#ReferenceDataSetsClientDeleteResponse)

```
type ReferenceDataSetsClientDeleteResponse struct {
}
```

ReferenceDataSetsClientDeleteResponse contains the response from method ReferenceDataSetsClient.Delete.

#### type [ReferenceDataSetsClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1462) [¶](#ReferenceDataSetsClientGetOptions)

```
type ReferenceDataSetsClientGetOptions struct {
}
```

ReferenceDataSetsClientGetOptions contains the optional parameters for the ReferenceDataSetsClient.Get method.
#### type [ReferenceDataSetsClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L168) [¶](#ReferenceDataSetsClientGetResponse) ``` type ReferenceDataSetsClientGetResponse struct { [ReferenceDataSetResource](#ReferenceDataSetResource) } ``` ReferenceDataSetsClientGetResponse contains the response from method ReferenceDataSetsClient.Get. #### type [ReferenceDataSetsClientListByEnvironmentOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1468) [¶](#ReferenceDataSetsClientListByEnvironmentOptions) ``` type ReferenceDataSetsClientListByEnvironmentOptions struct { } ``` ReferenceDataSetsClientListByEnvironmentOptions contains the optional parameters for the ReferenceDataSetsClient.ListByEnvironment method. #### type [ReferenceDataSetsClientListByEnvironmentResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L173) [¶](#ReferenceDataSetsClientListByEnvironmentResponse) ``` type ReferenceDataSetsClientListByEnvironmentResponse struct { [ReferenceDataSetListResponse](#ReferenceDataSetListResponse) } ``` ReferenceDataSetsClientListByEnvironmentResponse contains the response from method ReferenceDataSetsClient.ListByEnvironment. #### type [ReferenceDataSetsClientUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1473) [¶](#ReferenceDataSetsClientUpdateOptions) ``` type ReferenceDataSetsClientUpdateOptions struct { } ``` ReferenceDataSetsClientUpdateOptions contains the optional parameters for the ReferenceDataSetsClient.Update method. #### type [ReferenceDataSetsClientUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/response_types.go#L178) [¶](#ReferenceDataSetsClientUpdateResponse) ``` type ReferenceDataSetsClientUpdateResponse struct { [ReferenceDataSetResource](#ReferenceDataSetResource) } ``` ReferenceDataSetsClientUpdateResponse contains the response from method ReferenceDataSetsClient.Update. 
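The option structs above are currently empty placeholders (the examples simply pass nil), and each response type embeds the returned model, so the model's fields are promoted onto the response value. The sketch below is an illustration, not part of the generated documentation: it reuses the hypothetical "rg1"/"env1"/"rds1" names from the examples above and shows both reading a promoted field and inspecting a failed call's *azcore.ResponseError via errors.As.

```
package main

import (
	"context"
	"errors"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	clientFactory, err := armtimeseriesinsights.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}

	// "rg1", "env1", and "rds1" are hypothetical names reused from the examples above.
	res, err := clientFactory.NewReferenceDataSetsClient().Get(context.Background(), "rg1", "env1", "rds1", nil)
	if err != nil {
		// A failed operation returns a *azcore.ResponseError, which carries the
		// HTTP status code and the service's error code.
		var respErr *azcore.ResponseError
		if errors.As(err, &respErr) && respErr.StatusCode == 404 {
			log.Fatalf("reference data set not found (error code %q)", respErr.ErrorCode)
		}
		log.Fatalf("failed to finish the request: %v", err)
	}

	// ReferenceDataSetsClientGetResponse embeds ReferenceDataSetResource, so the
	// model's fields are promoted onto the response value.
	if res.Name != nil {
		log.Printf("retrieved reference data set %q", *res.Name)
	}
}
```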
#### type [Resource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1478) [¶](#Resource) ``` type Resource struct { // READ-ONLY; Resource Id ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"` // READ-ONLY; Resource name Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"` // READ-ONLY; Resource type Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"` } ``` Resource - Time Series Insights resource #### func (Resource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2687) [¶](#Resource.MarshalJSON) added in v1.1.0 ``` func (r [Resource](#Resource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Resource. #### func (*Resource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2696) [¶](#Resource.UnmarshalJSON) added in v1.1.0 ``` func (r *[Resource](#Resource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Resource. #### type [ResourceProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1490) [¶](#ResourceProperties) ``` type ResourceProperties struct { // READ-ONLY; The time the resource was created. CreationTime *[time](/time).[Time](/time#Time) `json:"creationTime,omitempty" azure:"ro"` // READ-ONLY; Provisioning state of the resource. ProvisioningState *[ProvisioningState](#ProvisioningState) `json:"provisioningState,omitempty" azure:"ro"` } ``` ResourceProperties - Properties that are common to all tracked resources. #### func (ResourceProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2722) [¶](#ResourceProperties.MarshalJSON) ``` func (r [ResourceProperties](#ResourceProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ResourceProperties. #### func (*ResourceProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2730) [¶](#ResourceProperties.UnmarshalJSON) ``` func (r *[ResourceProperties](#ResourceProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ResourceProperties. #### type [SKU](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1500) [¶](#SKU) ``` type SKU struct { // REQUIRED; The capacity of the sku. For Gen1 environments, this value can be changed to support scale out of environments // after they have been created. 
Capacity *[int32](/builtin#int32) `json:"capacity,omitempty"` // REQUIRED; The name of this SKU. Name *[SKUName](#SKUName) `json:"name,omitempty"` } ``` SKU - The sku determines the type of environment, either Gen1 (S1 or S2) or Gen2 (L1). For Gen1 environments the sku determines the capacity of the environment, the ingress rate, and the billing rate. #### func (SKU) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2753) [¶](#SKU.MarshalJSON) added in v1.1.0 ``` func (s [SKU](#SKU)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type SKU. #### func (*SKU) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2761) [¶](#SKU.UnmarshalJSON) added in v1.1.0 ``` func (s *[SKU](#SKU)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type SKU. #### type [SKUName](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L230) [¶](#SKUName) ``` type SKUName [string](/builtin#string) ``` SKUName - The name of this SKU. ``` const ( SKUNameL1 [SKUName](#SKUName) = "L1" SKUNameP1 [SKUName](#SKUName) = "P1" SKUNameS1 [SKUName](#SKUName) = "S1" SKUNameS2 [SKUName](#SKUName) = "S2" ) ``` #### func [PossibleSKUNameValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L240) [¶](#PossibleSKUNameValues) ``` func PossibleSKUNameValues() [][SKUName](#SKUName) ``` PossibleSKUNameValues returns the possible values for the SKUName const type. #### type [ServiceSpecification](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1510) [¶](#ServiceSpecification) ``` type ServiceSpecification struct { // A list of Azure Monitoring log definitions. LogSpecifications []*[LogSpecification](#LogSpecification) `json:"logSpecifications,omitempty"` // Metric specifications of operation. MetricSpecifications []*[MetricSpecification](#MetricSpecification) `json:"metricSpecifications,omitempty"` } ``` ServiceSpecification - One property of operation, include metric specifications. #### func (ServiceSpecification) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2784) [¶](#ServiceSpecification.MarshalJSON) added in v1.1.0 ``` func (s [ServiceSpecification](#ServiceSpecification)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ServiceSpecification. 
#### func (*ServiceSpecification) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2792) [¶](#ServiceSpecification.UnmarshalJSON) added in v1.1.0 ``` func (s *[ServiceSpecification](#ServiceSpecification)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ServiceSpecification. #### type [StorageLimitExceededBehavior](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L253) [¶](#StorageLimitExceededBehavior) ``` type StorageLimitExceededBehavior [string](/builtin#string) ``` StorageLimitExceededBehavior - The behavior the Time Series Insights service should take when the environment's capacity has been exceeded. If "PauseIngress" is specified, new events will not be read from the event source. If "PurgeOldData" is specified, new events will continue to be read and old events will be deleted from the environment. The default behavior is PurgeOldData. ``` const ( StorageLimitExceededBehaviorPauseIngress [StorageLimitExceededBehavior](#StorageLimitExceededBehavior) = "PauseIngress" StorageLimitExceededBehaviorPurgeOldData [StorageLimitExceededBehavior](#StorageLimitExceededBehavior) = "PurgeOldData" ) ``` #### func [PossibleStorageLimitExceededBehaviorValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L261) [¶](#PossibleStorageLimitExceededBehaviorValues) ``` func PossibleStorageLimitExceededBehaviorValues() [][StorageLimitExceededBehavior](#StorageLimitExceededBehavior) ``` PossibleStorageLimitExceededBehaviorValues returns the possible values for the StorageLimitExceededBehavior const type. #### type [TimeSeriesIDProperty](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1520) [¶](#TimeSeriesIDProperty) ``` type TimeSeriesIDProperty struct { // The name of the property. Name *[string](/builtin#string) `json:"name,omitempty"` // The type of the property. Type *[PropertyType](#PropertyType) `json:"type,omitempty"` } ``` TimeSeriesIDProperty - The structure of the property that a time series id can have. An environment can have multiple such properties. #### func (TimeSeriesIDProperty) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2815) [¶](#TimeSeriesIDProperty.MarshalJSON) added in v1.1.0 ``` func (t [TimeSeriesIDProperty](#TimeSeriesIDProperty)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type TimeSeriesIDProperty. 
#### func (*TimeSeriesIDProperty) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2823) [¶](#TimeSeriesIDProperty.UnmarshalJSON) added in v1.1.0 ``` func (t *[TimeSeriesIDProperty](#TimeSeriesIDProperty)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type TimeSeriesIDProperty. #### type [TrackedResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1529) [¶](#TrackedResource) ``` type TrackedResource struct { // REQUIRED; Resource location Location *[string](/builtin#string) `json:"location,omitempty"` // Resource tags Tags map[[string](/builtin#string)]*[string](/builtin#string) `json:"tags,omitempty"` // READ-ONLY; Resource Id ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"` // READ-ONLY; Resource name Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"` // READ-ONLY; Resource type Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"` } ``` TrackedResource - Time Series Insights resource that is tracked by Azure Resource Manager. #### func (TrackedResource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2846) [¶](#TrackedResource.MarshalJSON) added in v1.1.0 ``` func (t [TrackedResource](#TrackedResource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type TrackedResource. #### func (*TrackedResource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2857) [¶](#TrackedResource.UnmarshalJSON) added in v1.1.0 ``` func (t *[TrackedResource](#TrackedResource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type TrackedResource. #### type [WarmStorageEnvironmentStatus](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1547) [¶](#WarmStorageEnvironmentStatus) ``` type WarmStorageEnvironmentStatus struct { // READ-ONLY; An object that contains the status of warm storage properties usage. PropertiesUsage *[WarmStoragePropertiesUsage](#WarmStoragePropertiesUsage) `json:"propertiesUsage,omitempty" azure:"ro"` } ``` WarmStorageEnvironmentStatus - An object that represents the status of warm storage on an environment. #### func (WarmStorageEnvironmentStatus) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2889) [¶](#WarmStorageEnvironmentStatus.MarshalJSON) added in v1.1.0 ``` func (w [WarmStorageEnvironmentStatus](#WarmStorageEnvironmentStatus)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type WarmStorageEnvironmentStatus. 
#### func (*WarmStorageEnvironmentStatus) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2896) [¶](#WarmStorageEnvironmentStatus.UnmarshalJSON) added in v1.1.0 ``` func (w *[WarmStorageEnvironmentStatus](#WarmStorageEnvironmentStatus)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type WarmStorageEnvironmentStatus. #### type [WarmStoragePropertiesState](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L270) [¶](#WarmStoragePropertiesState) ``` type WarmStoragePropertiesState [string](/builtin#string) ``` WarmStoragePropertiesState - This string represents the state of warm storage properties usage. It can be "Ok", "Error", "Unknown". ``` const ( WarmStoragePropertiesStateError [WarmStoragePropertiesState](#WarmStoragePropertiesState) = "Error" WarmStoragePropertiesStateOk [WarmStoragePropertiesState](#WarmStoragePropertiesState) = "Ok" WarmStoragePropertiesStateUnknown [WarmStoragePropertiesState](#WarmStoragePropertiesState) = "Unknown" ) ``` #### func [PossibleWarmStoragePropertiesStateValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/constants.go#L279) [¶](#PossibleWarmStoragePropertiesStateValues) ``` func PossibleWarmStoragePropertiesStateValues() [][WarmStoragePropertiesState](#WarmStoragePropertiesState) ``` PossibleWarmStoragePropertiesStateValues returns the possible values for the WarmStoragePropertiesState const type. #### type [WarmStoragePropertiesUsage](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1553) [¶](#WarmStoragePropertiesUsage) ``` type WarmStoragePropertiesUsage struct { // This string represents the state of warm storage properties usage. It can be "Ok", "Error", "Unknown". State *[WarmStoragePropertiesState](#WarmStoragePropertiesState) `json:"state,omitempty"` // READ-ONLY; An object that contains the details about warm storage properties usage state. StateDetails *[WarmStoragePropertiesUsageStateDetails](#WarmStoragePropertiesUsageStateDetails) `json:"stateDetails,omitempty" azure:"ro"` } ``` WarmStoragePropertiesUsage - An object that contains the status of warm storage properties usage. #### func (WarmStoragePropertiesUsage) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2916) [¶](#WarmStoragePropertiesUsage.MarshalJSON) added in v1.1.0 ``` func (w [WarmStoragePropertiesUsage](#WarmStoragePropertiesUsage)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type WarmStoragePropertiesUsage. 
#### func (*WarmStoragePropertiesUsage) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2924) [¶](#WarmStoragePropertiesUsage.UnmarshalJSON) added in v1.1.0 ``` func (w *[WarmStoragePropertiesUsage](#WarmStoragePropertiesUsage)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type WarmStoragePropertiesUsage. #### type [WarmStoragePropertiesUsageStateDetails](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1562) [¶](#WarmStoragePropertiesUsageStateDetails) ``` type WarmStoragePropertiesUsageStateDetails struct { // A value that represents the number of properties used by the environment for S1/S2 SKU and number of properties used by // Warm Store for PAYG SKU CurrentCount *[int32](/builtin#int32) `json:"currentCount,omitempty"` // A value that represents the maximum number of properties used allowed by the environment for S1/S2 SKU and maximum number // of properties allowed by Warm Store for PAYG SKU. MaxCount *[int32](/builtin#int32) `json:"maxCount,omitempty"` } ``` WarmStoragePropertiesUsageStateDetails - An object that contains the details about warm storage properties usage state. #### func (WarmStoragePropertiesUsageStateDetails) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2947) [¶](#WarmStoragePropertiesUsageStateDetails.MarshalJSON) added in v1.1.0 ``` func (w [WarmStoragePropertiesUsageStateDetails](#WarmStoragePropertiesUsageStateDetails)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type WarmStoragePropertiesUsageStateDetails. #### func (*WarmStoragePropertiesUsageStateDetails) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2955) [¶](#WarmStoragePropertiesUsageStateDetails.UnmarshalJSON) added in v1.1.0 ``` func (w *[WarmStoragePropertiesUsageStateDetails](#WarmStoragePropertiesUsageStateDetails)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type WarmStoragePropertiesUsageStateDetails. #### type [WarmStoreConfigurationProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models.go#L1574) [¶](#WarmStoreConfigurationProperties) ``` type WarmStoreConfigurationProperties struct { // REQUIRED; ISO8601 timespan specifying the number of days the environment's events will be available for query from the // warm store. DataRetention *[string](/builtin#string) `json:"dataRetention,omitempty"` } ``` WarmStoreConfigurationProperties - The warm store configuration provides the details to create a warm store cache that will retain a copy of the environment's data available for faster query. 
#### func (WarmStoreConfigurationProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2978) [¶](#WarmStoreConfigurationProperties.MarshalJSON) added in v1.1.0 ``` func (w [WarmStoreConfigurationProperties](#WarmStoreConfigurationProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type WarmStoreConfigurationProperties. #### func (*WarmStoreConfigurationProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/v1.1.0/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights/models_serde.go#L2985) [¶](#WarmStoreConfigurationProperties.UnmarshalJSON) added in v1.1.0 ``` func (w *[WarmStoreConfigurationProperties](#WarmStoreConfigurationProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type WarmStoreConfigurationProperties.
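Since WarmStoreConfigurationProperties carries a single required ISO 8601 timespan, a minimal sketch may help; it is an illustration rather than part of the generated docs, and the retention value "P7D" (seven days) is an assumed example. The type's json.Marshaller implementation produces the wire format the service expects.

```
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/timeseriesinsights/armtimeseriesinsights"
)

func main() {
	// DataRetention is an ISO 8601 timespan; "P7D" (an assumed example value)
	// keeps seven days of events available for query from the warm store.
	warm := armtimeseriesinsights.WarmStoreConfigurationProperties{
		DataRetention: to.Ptr("P7D"),
	}

	// The type implements json.Marshaller (value receiver), so json.Marshal
	// uses it and emits the service's "dataRetention" field name.
	b, err := json.Marshal(warm)
	if err != nil {
		log.Fatalf("failed to marshal: %v", err)
	}
	fmt.Println(string(b)) // {"dataRetention":"P7D"}
}
```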
Copyright (c) <NAME> 1998 - 2015.

## 7. Cpass3 user-prefix-declaration directive.

⟨ Cpass3 user-prefix-declaration directive 7 ⟩ ≡

```
#include "ws.h"
#include "cweb_or_c_k.h"
#include "bad_char_set.h"
#include "eol.h"
#include "prefile_include.h"
#include "identifier.h"
#include "o2_externs.h"
```

## 8. Rpass3 rule.

## 9. Rpass3 op directive.

⟨ Rpass3 op directive 9 ⟩ ≡

```
using namespace NS_yacco2_k_symbols;
if (Cpass3::nested_use_cnt__ == 1) {
	ADD_TOKEN_TO_PRODUCER_QUEUE(*yacco2::PTR_LR1_eog__);
	ADD_TOKEN_TO_PRODUCER_QUEUE(*yacco2::PTR_LR1_eog__);
}
--Cpass3::nested_use_cnt__;
```

## 10. Rpass3's subrule 1.

⟨ Rpass3 subrule 1 op directive 10 ⟩ ≡

```
CAbs_lr1_sym* sym = new Err_empty_file;
sym->set_rc(*rule_info__.parser__->current_token(), __FILE__, __LINE__);
ADD_TOKEN_TO_ERROR_QUEUE(*sym);
rule_info__.parser__->set_abort_parse(true);
```

## 11. Rpass3's subrule 2.

⟨ Rpass3 subrule 2 op directive 11 ⟩ ≡

```
if (GRAMMAR_TREE == 0) {
	CAbs_lr1_sym* sym = new Err_empty_file;
	sym->set_rc(*rule_info__.parser__->current_token(), __FILE__, __LINE__);
	ADD_TOKEN_TO_ERROR_QUEUE(*sym);
	rule_info__.parser__->set_abort_parse(true);
}
```

## 13. Relems -- left recursion diagram.

Note: the left recursion is drawn as a Pascal railroad diagram.

## 15. Rtoken's subrule 3.

## 16. Identifier slip thru.

As i use a top/down process to consume constructs, an Identifier can slip thru due to either a premature ending of the Rules top/down parse, an out-of-alignment token that should be within the Rules vocabulary, or a misplaced or misspelt T.

⟨ Rtoken subrule 3 op directive 16 ⟩ ≡

```
sf->p2__->set_auto_delete(true);
CAbs_lr1_sym* sym = new Err_misplaced_or_misspelt_Rule_or_T;
sym->set_rc(*sf->p2__, __FILE__, __LINE__);
ADD_TOKEN_TO_ERROR_QUEUE(*sym);
rule_info__.parser__->set_abort_parse(true);
```

## 17. Rtoken's subrule 4.

## 18. Dispatch keyword to process its construct phrase.

Again neat stuff with its co-operation of top/down and bottom-up parsing paradigms. Notice that i use the catch-all "+" to showcase it, whereas i could have referenced "keyword". This is how the parser works:

1) check the state's table for a specific T returned from a thread call
2) check for a "catch all" returned from a thread call
3) try to shift the current token T if the thread call did not work or is not present in the state
4) check for a "catch all" presence in the state to shift

Note: there are 2 types of "catch all": one for T returned from thread calls and the other for regular parsing.

⟨ Rtoken subrule 4 op directive 18 ⟩ ≡

```
CAbs_lr1_sym* key = sf->p2__; /* extract specific keyword */
yacco2::INT cont_pos = rule_info__.parser__->current_token_pos__;
CAbs_lr1_sym* cont_tok = rule_info__.parser__->current_token();
bool result = PROCESS_KEYWORD_FOR_SYNTAX_CODE(*rule_info__.parser__, key, &cont_tok, &cont_pos);
if (result == false) {
	rule_info__.parser__->set_abort_parse(true);
	return;
}
ADD_TOKEN_TO_PRODUCER_QUEUE(*key); /* adv. to phrase's LA pos */
rule_info__.parser__->override_current_token(*cont_tok, cont_pos);
```

## 19. Rtoken's subrule 5.

## 28. Error subrules.

⟨ Rtoken subrule 10 op directive 28 ⟩ ≡

```
CAbs_lr1_sym* sym = new Err_not_kw_defining_grammar_construct;
sym->set_rc(*sf->p1__, __FILE__, __LINE__);
ADD_TOKEN_TO_ERROR_QUEUE(*sym);
rule_info__.parser__->set_abort_parse(true);
```
## 29. Rtoken's subrule 11.

⟨ Rtoken subrule 11 op directive 29 ⟩ ≡

```
ADD_TOKEN_TO_ERROR_QUEUE(*sf->p2__);
rule_info__.parser__->set_abort_parse(true);
```

## 30. Rtoken's subrule 12.

⟨ Rtoken subrule 12 op directive 30 ⟩ ≡

```
ADD_TOKEN_TO_ERROR_QUEUE(*sf->p2__);
rule_info__.parser__->set_abort_parse(true);
```

## 31. Rtoken's subrule 13.

⟨ Rtoken subrule 13 op directive 31 ⟩ ≡

```
ADD_TOKEN_TO_ERROR_QUEUE(*sf->p2__);
rule_info__.parser__->set_abort_parse(true);
```

## 32. Rtoken's subrule 14.

⟨ Rtoken subrule 14 op directive 32 ⟩ ≡

```
Err_bad_char* k = sf->p2__;
ADD_TOKEN_TO_ERROR_QUEUE(*k);
rule_info__.parser__->set_abort_parse(true);
```
⟨ Cpass3 op directive 4 ⟩
⟨ Cpass3 user-declaration directive 5 ⟩
⟨ Cpass3 user-implementation directive 6 ⟩
⟨ Cpass3 user-prefix-declaration directive 7 ⟩

[MISSING_PAGE_POST]

pass3 Grammar

Date: January 2, 2015 at 15:38
File: pass3.lex
Ns: NS_pass3
Version: 1.0
Type: Monolithic
Grammar comments: O2's lexer constructing tokens for the syntax parser stage.

Copyright

Contents:

2. pass3 stand alone Grammar
3. Fsm Cpass3 class
4. Cpass3 op directive
5. Cpass3 user-declaration directive
6. Cpass3 user-implementation directive
7. Cpass3 user-prefix-declaration directive
8. Rpass3 rule
9. Rpass3 op directive
10. Rpass3's subrule 1
11. Rpass3's subrule 2
12. Relems rule
13. Relems -- left recursion diagram
14. Rtoken rule
15. Rtoken's subrule 3
16. Identifier slip thru
17. Rtoken's subrule 4
18. Dispatch keyword to process its construct phrase
19. Rtoken's subrule 5
20. "rule-in-stbl" slip thru
21. Rtoken's subrule 6
22. "T-in-stbl" slip thru
23. Rtoken's subrule 7
24. Yacco2's pre-processor include directive
25. Rtoken's subrule 8
26. Rtoken's subrule 9
27. Rtoken's subrule 10
28. Error subrules
29. Rtoken's subrule 11
30. Rtoken's subrule 12
31. Rtoken's subrule 13
32. Rtoken's subrule 14
33. Rprefile_inc_dispatcher rule
34. Rprefile_inc_dispatcher's subrule 1
35. Rprefile_inc_dispatcher's subrule 2
36. Rprefile_inc_dispatcher's subrule 3
37. First Set Language for O2 linker
38. Lr1 State Network
39. Index
### Structs * Catcher * Config * Data * Error * Request * Response * Rocket * Route * Shutdown * State * catcher::Catcher * config::Config * config::Ident * config::MutualTls * config::SecretKey * config::Shutdown * config::TlsConfig * data::ByteUnit * data::Capped * data::Data * data::DataStream * data::Limits * data::N * error::Error * fairing::AdHoc * fairing::Info * fairing::Kind * form::Context * form::Contextual * form::DataField * form::Error * form::Errors * form::Form * form::Lenient * form::Options * form::Strict * form::ValueField * form::error::Error * form::error::Errors * form::name::Key * form::name::Name * form::name::NameBuf * form::name::NameView * fs::FileName * fs::FileServer * fs::NamedFile * fs::Options * http::Accept * http::ContentType * http::Cookie * http::CookieJar * http::Header * http::HeaderMap * http::Iter * http::MediaType * http::QMediaType * http::RawStr * http::RawStrBuf * http::Status * http::uncased::Uncased * http::uncased::UncasedStr * http::uri::Absolute * http::uri::Asterisk * http::uri::Authority * http::uri::Error * http::uri::Host * http::uri::Origin * http::uri::Path * http::uri::Query * http::uri::Reference * http::uri::Segments * http::uri::error::Error * http::uri::error::TryFromUriError * http::uri::fmt::Formatter * local::asynchronous::Client * local::asynchronous::LocalRequest * local::asynchronous::LocalResponse * local::blocking::Client * local::blocking::LocalRequest * local::blocking::LocalResponse * mtls::Certificate * mtls::Name * mtls::bigint::BigInt * mtls::bigint::BigUint * mtls::bigint::ParseBigIntError * mtls::bigint::TryFromBigIntError * mtls::bigint::U32Digits * mtls::bigint::U64Digits * mtls::oid::LoadedEntry * mtls::oid::Oid * mtls::oid::OidEntry * mtls::oid::OidRegistry * mtls::oid::asn1_rs::ASN1DateTime * mtls::oid::asn1_rs::Any * mtls::oid::asn1_rs::BerClassFromIntError * mtls::oid::asn1_rs::BitString * mtls::oid::asn1_rs::BmpString * mtls::oid::asn1_rs::Boolean * mtls::oid::asn1_rs::EmbeddedPdv * mtls::oid::asn1_rs::EndOfContent * mtls::oid::asn1_rs::Enumerated * mtls::oid::asn1_rs::GeneralString * mtls::oid::asn1_rs::GeneralizedTime * mtls::oid::asn1_rs::GraphicString * mtls::oid::asn1_rs::Header * mtls::oid::asn1_rs::Ia5String * mtls::oid::asn1_rs::Integer * mtls::oid::asn1_rs::Null * mtls::oid::asn1_rs::NumericString * mtls::oid::asn1_rs::ObjectDescriptor * mtls::oid::asn1_rs::OctetString * mtls::oid::asn1_rs::Oid * mtls::oid::asn1_rs::OptTaggedParser * mtls::oid::asn1_rs::PrintableString * mtls::oid::asn1_rs::Sequence * mtls::oid::asn1_rs::SequenceIterator * mtls::oid::asn1_rs::SequenceOf * mtls::oid::asn1_rs::Set * mtls::oid::asn1_rs::SetOf * mtls::oid::asn1_rs::Tag * mtls::oid::asn1_rs::TaggedParser * mtls::oid::asn1_rs::TaggedParserBuilder * mtls::oid::asn1_rs::TaggedValue * mtls::oid::asn1_rs::TeletexString * mtls::oid::asn1_rs::UniversalString * mtls::oid::asn1_rs::UtcTime * mtls::oid::asn1_rs::Utf8String * mtls::oid::asn1_rs::VideotexString * mtls::oid::asn1_rs::VisibleString * mtls::oid::asn1_rs::nom::And * mtls::oid::asn1_rs::nom::AndThen * mtls::oid::asn1_rs::nom::FlatMap * mtls::oid::asn1_rs::nom::Into * mtls::oid::asn1_rs::nom::Map * mtls::oid::asn1_rs::nom::Or * mtls::oid::asn1_rs::nom::combinator::ParserIterator * mtls::oid::asn1_rs::nom::error::Error * mtls::oid::asn1_rs::nom::error::VerboseError * mtls::oid::asn1_rs::nom::lib::std::alloc::AllocError * mtls::oid::asn1_rs::nom::lib::std::alloc::Global * mtls::oid::asn1_rs::nom::lib::std::alloc::Layout * 
mtls::oid::asn1_rs::nom::lib::std::alloc::LayoutError * mtls::oid::asn1_rs::nom::lib::std::alloc::System * mtls::oid::asn1_rs::nom::lib::std::boxed::Box * mtls::oid::asn1_rs::nom::lib::std::boxed::ThinBox * mtls::oid::asn1_rs::nom::lib::std::cmp::Reverse * mtls::oid::asn1_rs::nom::lib::std::collections::BTreeMap * mtls::oid::asn1_rs::nom::lib::std::collections::BTreeSet * mtls::oid::asn1_rs::nom::lib::std::collections::BinaryHeap * mtls::oid::asn1_rs::nom::lib::std::collections::HashMap * mtls::oid::asn1_rs::nom::lib::std::collections::HashSet * mtls::oid::asn1_rs::nom::lib::std::collections::LinkedList * mtls::oid::asn1_rs::nom::lib::std::collections::TryReserveError * mtls::oid::asn1_rs::nom::lib::std::collections::VecDeque * mtls::oid::asn1_rs::nom::lib::std::collections::binary_heap::BinaryHeap * mtls::oid::asn1_rs::nom::lib::std::collections::binary_heap::Drain * mtls::oid::asn1_rs::nom::lib::std::collections::binary_heap::DrainSorted * mtls::oid::asn1_rs::nom::lib::std::collections::binary_heap::IntoIter * mtls::oid::asn1_rs::nom::lib::std::collections::binary_heap::IntoIterSorted * mtls::oid::asn1_rs::nom::lib::std::collections::binary_heap::Iter * mtls::oid::asn1_rs::nom::lib::std::collections::binary_heap::PeekMut * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::BTreeMap * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::Cursor * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::CursorMut * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::DrainFilter * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::IntoIter * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::IntoKeys * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::IntoValues * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::Iter * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::IterMut * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::Keys * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::OccupiedEntry * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::OccupiedError * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::Range * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::RangeMut * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::VacantEntry * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::Values * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::ValuesMut * mtls::oid::asn1_rs::nom::lib::std::collections::btree_set::BTreeSet * mtls::oid::asn1_rs::nom::lib::std::collections::btree_set::Difference * mtls::oid::asn1_rs::nom::lib::std::collections::btree_set::DrainFilter * mtls::oid::asn1_rs::nom::lib::std::collections::btree_set::Intersection * mtls::oid::asn1_rs::nom::lib::std::collections::btree_set::IntoIter * mtls::oid::asn1_rs::nom::lib::std::collections::btree_set::Iter * mtls::oid::asn1_rs::nom::lib::std::collections::btree_set::Range * mtls::oid::asn1_rs::nom::lib::std::collections::btree_set::SymmetricDifference * mtls::oid::asn1_rs::nom::lib::std::collections::btree_set::Union * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::DefaultHasher * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::Drain * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::DrainFilter * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::HashMap * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::IntoIter * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::IntoKeys * 
mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::IntoValues * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::Iter * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::IterMut * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::Keys * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::OccupiedEntry * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::OccupiedError * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::RandomState * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::RawEntryBuilder * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::RawEntryBuilderMut * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::RawOccupiedEntryMut * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::RawVacantEntryMut * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::VacantEntry * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::Values * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::ValuesMut * mtls::oid::asn1_rs::nom::lib::std::collections::hash_set::Difference * mtls::oid::asn1_rs::nom::lib::std::collections::hash_set::Drain * mtls::oid::asn1_rs::nom::lib::std::collections::hash_set::DrainFilter * mtls::oid::asn1_rs::nom::lib::std::collections::hash_set::HashSet * mtls::oid::asn1_rs::nom::lib::std::collections::hash_set::Intersection * mtls::oid::asn1_rs::nom::lib::std::collections::hash_set::IntoIter * mtls::oid::asn1_rs::nom::lib::std::collections::hash_set::Iter * mtls::oid::asn1_rs::nom::lib::std::collections::hash_set::SymmetricDifference * mtls::oid::asn1_rs::nom::lib::std::collections::hash_set::Union * mtls::oid::asn1_rs::nom::lib::std::collections::linked_list::Cursor * mtls::oid::asn1_rs::nom::lib::std::collections::linked_list::CursorMut * mtls::oid::asn1_rs::nom::lib::std::collections::linked_list::DrainFilter * mtls::oid::asn1_rs::nom::lib::std::collections::linked_list::IntoIter * mtls::oid::asn1_rs::nom::lib::std::collections::linked_list::Iter * mtls::oid::asn1_rs::nom::lib::std::collections::linked_list::IterMut * mtls::oid::asn1_rs::nom::lib::std::collections::linked_list::LinkedList * mtls::oid::asn1_rs::nom::lib::std::collections::vec_deque::Drain * mtls::oid::asn1_rs::nom::lib::std::collections::vec_deque::IntoIter * mtls::oid::asn1_rs::nom::lib::std::collections::vec_deque::Iter * mtls::oid::asn1_rs::nom::lib::std::collections::vec_deque::IterMut * mtls::oid::asn1_rs::nom::lib::std::collections::vec_deque::VecDeque * mtls::oid::asn1_rs::nom::lib::std::fmt::Arguments * mtls::oid::asn1_rs::nom::lib::std::fmt::DebugList * mtls::oid::asn1_rs::nom::lib::std::fmt::DebugMap * mtls::oid::asn1_rs::nom::lib::std::fmt::DebugSet * mtls::oid::asn1_rs::nom::lib::std::fmt::DebugStruct * mtls::oid::asn1_rs::nom::lib::std::fmt::DebugTuple * mtls::oid::asn1_rs::nom::lib::std::fmt::Error * mtls::oid::asn1_rs::nom::lib::std::fmt::Formatter * mtls::oid::asn1_rs::nom::lib::std::hash::BuildHasherDefault * mtls::oid::asn1_rs::nom::lib::std::hash::SipHasher * mtls::oid::asn1_rs::nom::lib::std::iter::ArrayChunks * mtls::oid::asn1_rs::nom::lib::std::iter::ByRefSized * mtls::oid::asn1_rs::nom::lib::std::iter::Chain * mtls::oid::asn1_rs::nom::lib::std::iter::Cloned * mtls::oid::asn1_rs::nom::lib::std::iter::Copied * mtls::oid::asn1_rs::nom::lib::std::iter::Cycle * mtls::oid::asn1_rs::nom::lib::std::iter::Empty * mtls::oid::asn1_rs::nom::lib::std::iter::Enumerate * mtls::oid::asn1_rs::nom::lib::std::iter::Filter * mtls::oid::asn1_rs::nom::lib::std::iter::FilterMap * 
mtls::oid::asn1_rs::nom::lib::std::iter::FlatMap * mtls::oid::asn1_rs::nom::lib::std::iter::Flatten * mtls::oid::asn1_rs::nom::lib::std::iter::FromFn * mtls::oid::asn1_rs::nom::lib::std::iter::Fuse * mtls::oid::asn1_rs::nom::lib::std::iter::Inspect * mtls::oid::asn1_rs::nom::lib::std::iter::Intersperse * mtls::oid::asn1_rs::nom::lib::std::iter::IntersperseWith * mtls::oid::asn1_rs::nom::lib::std::iter::Map * mtls::oid::asn1_rs::nom::lib::std::iter::MapWhile * mtls::oid::asn1_rs::nom::lib::std::iter::Once * mtls::oid::asn1_rs::nom::lib::std::iter::OnceWith * mtls::oid::asn1_rs::nom::lib::std::iter::Peekable * mtls::oid::asn1_rs::nom::lib::std::iter::Repeat * mtls::oid::asn1_rs::nom::lib::std::iter::RepeatWith * mtls::oid::asn1_rs::nom::lib::std::iter::Rev * mtls::oid::asn1_rs::nom::lib::std::iter::Scan * mtls::oid::asn1_rs::nom::lib::std::iter::Skip * mtls::oid::asn1_rs::nom::lib::std::iter::SkipWhile * mtls::oid::asn1_rs::nom::lib::std::iter::StepBy * mtls::oid::asn1_rs::nom::lib::std::iter::Successors * mtls::oid::asn1_rs::nom::lib::std::iter::Take * mtls::oid::asn1_rs::nom::lib::std::iter::TakeWhile * mtls::oid::asn1_rs::nom::lib::std::iter::Zip * mtls::oid::asn1_rs::nom::lib::std::mem::Assume * mtls::oid::asn1_rs::nom::lib::std::mem::Discriminant * mtls::oid::asn1_rs::nom::lib::std::mem::ManuallyDrop * mtls::oid::asn1_rs::nom::lib::std::ops::Range * mtls::oid::asn1_rs::nom::lib::std::ops::RangeFrom * mtls::oid::asn1_rs::nom::lib::std::ops::RangeFull * mtls::oid::asn1_rs::nom::lib::std::ops::RangeInclusive * mtls::oid::asn1_rs::nom::lib::std::ops::RangeTo * mtls::oid::asn1_rs::nom::lib::std::ops::RangeToInclusive * mtls::oid::asn1_rs::nom::lib::std::ops::Yeet * mtls::oid::asn1_rs::nom::lib::std::option::IntoIter * mtls::oid::asn1_rs::nom::lib::std::option::Iter * mtls::oid::asn1_rs::nom::lib::std::option::IterMut * mtls::oid::asn1_rs::nom::lib::std::result::IntoIter * mtls::oid::asn1_rs::nom::lib::std::result::Iter * mtls::oid::asn1_rs::nom::lib::std::result::IterMut * mtls::oid::asn1_rs::nom::lib::std::slice::ArrayChunks * mtls::oid::asn1_rs::nom::lib::std::slice::ArrayChunksMut * mtls::oid::asn1_rs::nom::lib::std::slice::ArrayWindows * mtls::oid::asn1_rs::nom::lib::std::slice::Chunks * mtls::oid::asn1_rs::nom::lib::std::slice::ChunksExact * mtls::oid::asn1_rs::nom::lib::std::slice::ChunksExactMut * mtls::oid::asn1_rs::nom::lib::std::slice::ChunksMut * mtls::oid::asn1_rs::nom::lib::std::slice::EscapeAscii * mtls::oid::asn1_rs::nom::lib::std::slice::GroupBy * mtls::oid::asn1_rs::nom::lib::std::slice::GroupByMut * mtls::oid::asn1_rs::nom::lib::std::slice::Iter * mtls::oid::asn1_rs::nom::lib::std::slice::IterMut * mtls::oid::asn1_rs::nom::lib::std::slice::RChunks * mtls::oid::asn1_rs::nom::lib::std::slice::RChunksExact * mtls::oid::asn1_rs::nom::lib::std::slice::RChunksExactMut * mtls::oid::asn1_rs::nom::lib::std::slice::RChunksMut * mtls::oid::asn1_rs::nom::lib::std::slice::RSplit * mtls::oid::asn1_rs::nom::lib::std::slice::RSplitMut * mtls::oid::asn1_rs::nom::lib::std::slice::RSplitN * mtls::oid::asn1_rs::nom::lib::std::slice::RSplitNMut * mtls::oid::asn1_rs::nom::lib::std::slice::Split * mtls::oid::asn1_rs::nom::lib::std::slice::SplitInclusive * mtls::oid::asn1_rs::nom::lib::std::slice::SplitInclusiveMut * mtls::oid::asn1_rs::nom::lib::std::slice::SplitMut * mtls::oid::asn1_rs::nom::lib::std::slice::SplitN * mtls::oid::asn1_rs::nom::lib::std::slice::SplitNMut * mtls::oid::asn1_rs::nom::lib::std::slice::Windows * mtls::oid::asn1_rs::nom::lib::std::str::Bytes * 
mtls::oid::asn1_rs::nom::lib::std::str::CharIndices * mtls::oid::asn1_rs::nom::lib::std::str::Chars * mtls::oid::asn1_rs::nom::lib::std::str::EncodeUtf16 * mtls::oid::asn1_rs::nom::lib::std::str::EscapeDebug * mtls::oid::asn1_rs::nom::lib::std::str::EscapeDefault * mtls::oid::asn1_rs::nom::lib::std::str::EscapeUnicode * mtls::oid::asn1_rs::nom::lib::std::str::Lines * mtls::oid::asn1_rs::nom::lib::std::str::LinesAny * mtls::oid::asn1_rs::nom::lib::std::str::MatchIndices * mtls::oid::asn1_rs::nom::lib::std::str::Matches * mtls::oid::asn1_rs::nom::lib::std::str::ParseBoolError * mtls::oid::asn1_rs::nom::lib::std::str::RMatchIndices * mtls::oid::asn1_rs::nom::lib::std::str::RMatches * mtls::oid::asn1_rs::nom::lib::std::str::RSplit * mtls::oid::asn1_rs::nom::lib::std::str::RSplitN * mtls::oid::asn1_rs::nom::lib::std::str::RSplitTerminator * mtls::oid::asn1_rs::nom::lib::std::str::Split * mtls::oid::asn1_rs::nom::lib::std::str::SplitAsciiWhitespace * mtls::oid::asn1_rs::nom::lib::std::str::SplitInclusive * mtls::oid::asn1_rs::nom::lib::std::str::SplitN * mtls::oid::asn1_rs::nom::lib::std::str::SplitTerminator * mtls::oid::asn1_rs::nom::lib::std::str::SplitWhitespace * mtls::oid::asn1_rs::nom::lib::std::str::Utf8Chunk * mtls::oid::asn1_rs::nom::lib::std::str::Utf8Chunks * mtls::oid::asn1_rs::nom::lib::std::str::Utf8Error * mtls::oid::asn1_rs::nom::lib::std::str::pattern::CharArrayRefSearcher * mtls::oid::asn1_rs::nom::lib::std::str::pattern::CharArraySearcher * mtls::oid::asn1_rs::nom::lib::std::str::pattern::CharPredicateSearcher * mtls::oid::asn1_rs::nom::lib::std::str::pattern::CharSearcher * mtls::oid::asn1_rs::nom::lib::std::str::pattern::CharSliceSearcher * mtls::oid::asn1_rs::nom::lib::std::str::pattern::StrSearcher * mtls::oid::asn1_rs::nom::lib::std::string::Drain * mtls::oid::asn1_rs::nom::lib::std::string::FromUtf16Error * mtls::oid::asn1_rs::nom::lib::std::string::FromUtf8Error * mtls::oid::asn1_rs::nom::lib::std::string::String * mtls::oid::asn1_rs::nom::lib::std::vec::Drain * mtls::oid::asn1_rs::nom::lib::std::vec::DrainFilter * mtls::oid::asn1_rs::nom::lib::std::vec::IntoIter * mtls::oid::asn1_rs::nom::lib::std::vec::Splice * mtls::oid::asn1_rs::nom::lib::std::vec::Vec * mtls::x509::ASN1Time * mtls::x509::AccessDescription * mtls::x509::AlgorithmIdentifier * mtls::x509::AttributeTypeAndValue * mtls::x509::AuthorityInfoAccess * mtls::x509::AuthorityKeyIdentifier * mtls::x509::BasicConstraints * mtls::x509::BasicExtension * mtls::x509::CRLDistributionPoint * mtls::x509::CertificateRevocationList * mtls::x509::CtExtensions * mtls::x509::CtLogID * mtls::x509::CtVersion * mtls::x509::DigitallySigned * mtls::x509::ExtendedKeyUsage * mtls::x509::ExtensionRequest * mtls::x509::GeneralSubtree * mtls::x509::InhibitAnyPolicy * mtls::x509::IssuerAlternativeName * mtls::x509::KeyIdentifier * mtls::x509::KeyUsage * mtls::x509::NSCertType * mtls::x509::NameConstraints * mtls::x509::NidError * mtls::x509::PolicyConstraints * mtls::x509::PolicyInformation * mtls::x509::PolicyMapping * mtls::x509::PolicyMappings * mtls::x509::PolicyQualifierInfo * mtls::x509::ReasonCode * mtls::x509::ReasonFlags * mtls::x509::RelativeDistinguishedName * mtls::x509::RevokedCertificate * mtls::x509::SignedCertificateTimestamp * mtls::x509::SubjectAlternativeName * mtls::x509::SubjectPublicKeyInfo * mtls::x509::TbsCertList * mtls::x509::TbsCertificate * mtls::x509::TbsCertificateParser * mtls::x509::UniqueIdentifier * mtls::x509::UnparsedObject * mtls::x509::Validity * mtls::x509::X509Certificate * 
mtls::x509::X509CertificateParser * mtls::x509::X509CriAttribute * mtls::x509::X509Extension * mtls::x509::X509ExtensionParser * mtls::x509::X509Name * mtls::x509::X509Version * mtls::x509::ber::BerObject * mtls::x509::ber::BerObjectIntoIterator * mtls::x509::ber::BerObjectRefIterator * mtls::x509::ber::BitStringObject * mtls::x509::ber::Header * mtls::x509::ber::PrettyBer * mtls::x509::ber::Tag * mtls::x509::ber::compat::Tag * mtls::x509::der::Header * mtls::x509::der::Tag * request::Request * response::Body * response::Builder * response::Debug * response::Flash * response::Redirect * response::Response * response::content::RawCss * response::content::RawHtml * response::content::RawJavaScript * response::content::RawJson * response::content::RawMsgPack * response::content::RawText * response::content::RawXml * response::status::Accepted * response::status::BadRequest * response::status::Conflict * response::status::Created * response::status::Custom * response::status::Forbidden * response::status::NoContent * response::status::NotFound * response::status::Unauthorized * response::stream::ByteStream * response::stream::Event * response::stream::EventStream * response::stream::One * response::stream::ReaderStream * response::stream::TextStream * route::Route * route::RouteUri * serde::json::Json * serde::msgpack::MsgPack * serde::uuid::Builder * serde::uuid::Error * serde::uuid::Uuid * serde::uuid::fmt::Braced * serde::uuid::fmt::Hyphenated * serde::uuid::fmt::Simple * serde::uuid::fmt::Urn * shield::Permission * shield::Shield ### Enums * Build * Ignite * Orbit * config::CipherSuite * config::LogLevel * config::Sig * error::ErrorKind * form::error::Entity * form::error::ErrorKind * fs::TempFile * http::Method * http::SameSite * http::StatusClass * http::uri::Uri * http::uri::error::PathError * http::uri::fmt::Path * http::uri::fmt::Query * mtls::Error * mtls::bigint::Sign * mtls::oid::asn1_rs::ASN1TimeZone * mtls::oid::asn1_rs::Class * mtls::oid::asn1_rs::DerConstraint * mtls::oid::asn1_rs::Err * mtls::oid::asn1_rs::Error * mtls::oid::asn1_rs::Explicit * mtls::oid::asn1_rs::Implicit * mtls::oid::asn1_rs::Length * mtls::oid::asn1_rs::Needed * mtls::oid::asn1_rs::OidParseError * mtls::oid::asn1_rs::PdvIdentification * mtls::oid::asn1_rs::Real * mtls::oid::asn1_rs::SerializeError * mtls::oid::asn1_rs::nom::CompareResult * mtls::oid::asn1_rs::nom::Err * mtls::oid::asn1_rs::nom::Needed * mtls::oid::asn1_rs::nom::error::ErrorKind * mtls::oid::asn1_rs::nom::error::VerboseErrorKind * mtls::oid::asn1_rs::nom::lib::std::cmp::Ordering * mtls::oid::asn1_rs::nom::lib::std::collections::Bound * mtls::oid::asn1_rs::nom::lib::std::collections::TryReserveErrorKind * mtls::oid::asn1_rs::nom::lib::std::collections::btree_map::Entry * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::Entry * mtls::oid::asn1_rs::nom::lib::std::collections::hash_map::RawEntryMut * mtls::oid::asn1_rs::nom::lib::std::convert::Infallible * mtls::oid::asn1_rs::nom::lib::std::fmt::Alignment * mtls::oid::asn1_rs::nom::lib::std::ops::Bound * mtls::oid::asn1_rs::nom::lib::std::ops::ControlFlow * mtls::oid::asn1_rs::nom::lib::std::ops::GeneratorState * mtls::oid::asn1_rs::nom::lib::std::option::Option * mtls::oid::asn1_rs::nom::lib::std::result::Result * mtls::oid::asn1_rs::nom::lib::std::str::pattern::SearchStep * mtls::oid::asn1_rs::nom::number::Endianness * mtls::x509::DistributionPointName * mtls::x509::GeneralName * mtls::x509::PEMError * mtls::x509::ParsedCriAttribute * mtls::x509::ParsedExtension * 
mtls::x509::X509Error * mtls::x509::ber::BerObjectContent * mtls::x509::ber::Class * mtls::x509::ber::Length * mtls::x509::ber::PrettyPrinterFlag * mtls::x509::der::Class * outcome::Outcome * serde::json::Error * serde::json::Value * serde::msgpack::Error * serde::uuid::Variant * serde::uuid::Version * shield::Allow * shield::ExpectCt * shield::Feature * shield::Frame * shield::Hsts * shield::NoSniff * shield::Prefetch * shield::Referrer * shield::XssFilter ### Unions ### Traits * Phase * Sentinel * catcher::Cloneable * catcher::Handler * data::FromData * data::ToByteUnit * fairing::Fairing * form::FromForm * form::FromFormField * form::validate::Contains * form::validate::Len * http::ext::IntoCollection * http::ext::IntoOwned * http::uncased::AsUncased * http::uri::fmt::FromUriParam * http::uri::fmt::Ignorable * http::uri::fmt::Part * http::uri::fmt::UriDisplay * mtls::bigint::ToBigInt * mtls::bigint::ToBigUint * mtls::oid::asn1_rs::AsTaggedExplicit * mtls::oid::asn1_rs::AsTaggedImplicit * mtls::oid::asn1_rs::BerChoice * mtls::oid::asn1_rs::CheckDerConstraints * mtls::oid::asn1_rs::Choice * mtls::oid::asn1_rs::DerChoice * mtls::oid::asn1_rs::DynTagged * mtls::oid::asn1_rs::FromBer * mtls::oid::asn1_rs::FromDer * mtls::oid::asn1_rs::TagKind * mtls::oid::asn1_rs::Tagged * mtls::oid::asn1_rs::TestValidCharset * mtls::oid::asn1_rs::ToDer * mtls::oid::asn1_rs::ToStatic * mtls::oid::asn1_rs::nom::AsBytes * mtls::oid::asn1_rs::nom::AsChar * mtls::oid::asn1_rs::nom::Compare * mtls::oid::asn1_rs::nom::ErrorConvert * mtls::oid::asn1_rs::nom::ExtendInto * mtls::oid::asn1_rs::nom::FindSubstring * mtls::oid::asn1_rs::nom::FindToken * mtls::oid::asn1_rs::nom::Finish * mtls::oid::asn1_rs::nom::HexDisplay * mtls::oid::asn1_rs::nom::InputIter * mtls::oid::asn1_rs::nom::InputLength * mtls::oid::asn1_rs::nom::InputTake * mtls::oid::asn1_rs::nom::InputTakeAtPosition * mtls::oid::asn1_rs::nom::Offset * mtls::oid::asn1_rs::nom::ParseTo * mtls::oid::asn1_rs::nom::Parser * mtls::oid::asn1_rs::nom::Slice * mtls::oid::asn1_rs::nom::ToUsize * mtls::oid::asn1_rs::nom::UnspecializedInput * mtls::oid::asn1_rs::nom::branch::Alt * mtls::oid::asn1_rs::nom::branch::Permutation * mtls::oid::asn1_rs::nom::error::ContextError * mtls::oid::asn1_rs::nom::error::FromExternalError * mtls::oid::asn1_rs::nom::error::ParseError * mtls::oid::asn1_rs::nom::lib::std::alloc::Allocator * mtls::oid::asn1_rs::nom::lib::std::alloc::GlobalAlloc * mtls::oid::asn1_rs::nom::lib::std::cmp::Eq * mtls::oid::asn1_rs::nom::lib::std::cmp::Ord * mtls::oid::asn1_rs::nom::lib::std::cmp::PartialEq * mtls::oid::asn1_rs::nom::lib::std::cmp::PartialOrd * mtls::oid::asn1_rs::nom::lib::std::convert::AsMut * mtls::oid::asn1_rs::nom::lib::std::convert::AsRef * mtls::oid::asn1_rs::nom::lib::std::convert::FloatToInt * mtls::oid::asn1_rs::nom::lib::std::convert::From * mtls::oid::asn1_rs::nom::lib::std::convert::Into * mtls::oid::asn1_rs::nom::lib::std::convert::TryFrom * mtls::oid::asn1_rs::nom::lib::std::convert::TryInto * mtls::oid::asn1_rs::nom::lib::std::fmt::Binary * mtls::oid::asn1_rs::nom::lib::std::fmt::Debug * mtls::oid::asn1_rs::nom::lib::std::fmt::Display * mtls::oid::asn1_rs::nom::lib::std::fmt::LowerExp * mtls::oid::asn1_rs::nom::lib::std::fmt::LowerHex * mtls::oid::asn1_rs::nom::lib::std::fmt::Octal * mtls::oid::asn1_rs::nom::lib::std::fmt::Pointer * mtls::oid::asn1_rs::nom::lib::std::fmt::UpperExp * mtls::oid::asn1_rs::nom::lib::std::fmt::UpperHex * mtls::oid::asn1_rs::nom::lib::std::fmt::Write * 
mtls::oid::asn1_rs::nom::lib::std::hash::BuildHasher * mtls::oid::asn1_rs::nom::lib::std::hash::Hash * mtls::oid::asn1_rs::nom::lib::std::hash::Hasher * mtls::oid::asn1_rs::nom::lib::std::iter::DoubleEndedIterator * mtls::oid::asn1_rs::nom::lib::std::iter::ExactSizeIterator * mtls::oid::asn1_rs::nom::lib::std::iter::Extend * mtls::oid::asn1_rs::nom::lib::std::iter::FromIterator * mtls::oid::asn1_rs::nom::lib::std::iter::FusedIterator * mtls::oid::asn1_rs::nom::lib::std::iter::IntoIterator * mtls::oid::asn1_rs::nom::lib::std::iter::Iterator * mtls::oid::asn1_rs::nom::lib::std::iter::Product * mtls::oid::asn1_rs::nom::lib::std::iter::Step * mtls::oid::asn1_rs::nom::lib::std::iter::Sum * mtls::oid::asn1_rs::nom::lib::std::iter::TrustedLen * mtls::oid::asn1_rs::nom::lib::std::iter::TrustedStep * mtls::oid::asn1_rs::nom::lib::std::mem::BikeshedIntrinsicFrom * mtls::oid::asn1_rs::nom::lib::std::ops::Add * mtls::oid::asn1_rs::nom::lib::std::ops::AddAssign * mtls::oid::asn1_rs::nom::lib::std::ops::BitAnd * mtls::oid::asn1_rs::nom::lib::std::ops::BitAndAssign * mtls::oid::asn1_rs::nom::lib::std::ops::BitOr * mtls::oid::asn1_rs::nom::lib::std::ops::BitOrAssign * mtls::oid::asn1_rs::nom::lib::std::ops::BitXor * mtls::oid::asn1_rs::nom::lib::std::ops::BitXorAssign * mtls::oid::asn1_rs::nom::lib::std::ops::CoerceUnsized * mtls::oid::asn1_rs::nom::lib::std::ops::Deref * mtls::oid::asn1_rs::nom::lib::std::ops::DerefMut * mtls::oid::asn1_rs::nom::lib::std::ops::DispatchFromDyn * mtls::oid::asn1_rs::nom::lib::std::ops::Div * mtls::oid::asn1_rs::nom::lib::std::ops::DivAssign * mtls::oid::asn1_rs::nom::lib::std::ops::Drop * mtls::oid::asn1_rs::nom::lib::std::ops::Fn * mtls::oid::asn1_rs::nom::lib::std::ops::FnMut * mtls::oid::asn1_rs::nom::lib::std::ops::FnOnce * mtls::oid::asn1_rs::nom::lib::std::ops::FromResidual * mtls::oid::asn1_rs::nom::lib::std::ops::Generator * mtls::oid::asn1_rs::nom::lib::std::ops::Index * mtls::oid::asn1_rs::nom::lib::std::ops::IndexMut * mtls::oid::asn1_rs::nom::lib::std::ops::Mul * mtls::oid::asn1_rs::nom::lib::std::ops::MulAssign * mtls::oid::asn1_rs::nom::lib::std::ops::Neg * mtls::oid::asn1_rs::nom::lib::std::ops::Not * mtls::oid::asn1_rs::nom::lib::std::ops::OneSidedRange * mtls::oid::asn1_rs::nom::lib::std::ops::RangeBounds * mtls::oid::asn1_rs::nom::lib::std::ops::Rem * mtls::oid::asn1_rs::nom::lib::std::ops::RemAssign * mtls::oid::asn1_rs::nom::lib::std::ops::Residual * mtls::oid::asn1_rs::nom::lib::std::ops::Shl * mtls::oid::asn1_rs::nom::lib::std::ops::ShlAssign * mtls::oid::asn1_rs::nom::lib::std::ops::Shr * mtls::oid::asn1_rs::nom::lib::std::ops::ShrAssign * mtls::oid::asn1_rs::nom::lib::std::ops::Sub * mtls::oid::asn1_rs::nom::lib::std::ops::SubAssign * mtls::oid::asn1_rs::nom::lib::std::ops::Try * mtls::oid::asn1_rs::nom::lib::std::slice::Concat * mtls::oid::asn1_rs::nom::lib::std::slice::Join * mtls::oid::asn1_rs::nom::lib::std::slice::SliceIndex * mtls::oid::asn1_rs::nom::lib::std::str::FromStr * mtls::oid::asn1_rs::nom::lib::std::str::pattern::DoubleEndedSearcher * mtls::oid::asn1_rs::nom::lib::std::str::pattern::Pattern * mtls::oid::asn1_rs::nom::lib::std::str::pattern::ReverseSearcher * mtls::oid::asn1_rs::nom::lib::std::str::pattern::Searcher * mtls::oid::asn1_rs::nom::lib::std::string::ToString * mtls::oid::asn1_rs::nom::sequence::Tuple * mtls::x509::FromDer * mtls::x509::ber::Visit * mtls::x509::ber::VisitMut * outcome::IntoOutcome * request::FromParam * request::FromRequest * request::FromSegments * response::Responder * route::Cloneable * route::Handler * 
serde::Deserialize * serde::DeserializeOwned * serde::Deserializer * serde::Serialize * serde::Serializer * shield::Policy ### Macros * catchers * form::validate::msg * fs::relative * http::impl_from_uri_param_identity * mtls::oid::asn1_rs::int * mtls::oid::asn1_rs::nom::error::error_node_position * mtls::oid::asn1_rs::nom::error::error_position * mtls::oid::asn1_rs::nom::error_node_position * mtls::oid::asn1_rs::nom::error_position * mtls::oid::asn1_rs::nom::lib::std::cmp::Eq * mtls::oid::asn1_rs::nom::lib::std::cmp::Ord * mtls::oid::asn1_rs::nom::lib::std::cmp::PartialEq * mtls::oid::asn1_rs::nom::lib::std::cmp::PartialOrd * mtls::oid::asn1_rs::nom::lib::std::fmt::Debug * mtls::oid::asn1_rs::nom::lib::std::hash::Hash * mtls::oid::asn1_rs::nom::lib::std::vec * mtls::oid::asn1_rs::oid * outcome::try_outcome * request::local_cache * request::local_cache_once * response::stream::ByteStream * response::stream::EventStream * response::stream::ReaderStream * response::stream::TextStream * response::stream::stream * routes * serde::json::json * serde::uuid::uuid * uri ### Attribute Macros ### Derive Macros * FromForm * FromFormField * Responder * UriDisplayPath * UriDisplayQuery * mtls::oid::asn1_rs::BerSequence * mtls::oid::asn1_rs::DerSequence ### Functions * build * custom * execute * form::validate::contains * form::validate::dbg_contains * form::validate::dbg_eq * form::validate::dbg_omits * form::validate::eq * form::validate::ext * form::validate::len * form::validate::neq * form::validate::omits * form::validate::one_of * form::validate::range * form::validate::try_with * form::validate::with * http::uncased::eq * mtls::oid::asn1_rs::nom::bits * mtls::oid::asn1_rs::nom::bits::bits * mtls::oid::asn1_rs::nom::bits::bytes * mtls::oid::asn1_rs::nom::bits::complete::bool * mtls::oid::asn1_rs::nom::bits::complete::tag * mtls::oid::asn1_rs::nom::bits::complete::take * mtls::oid::asn1_rs::nom::bits::streaming::bool * mtls::oid::asn1_rs::nom::bits::streaming::tag * mtls::oid::asn1_rs::nom::bits::streaming::take * mtls::oid::asn1_rs::nom::branch::alt * mtls::oid::asn1_rs::nom::branch::permutation * mtls::oid::asn1_rs::nom::bytes * mtls::oid::asn1_rs::nom::bytes::complete::escaped * mtls::oid::asn1_rs::nom::bytes::complete::escaped_transform * mtls::oid::asn1_rs::nom::bytes::complete::is_a * mtls::oid::asn1_rs::nom::bytes::complete::is_not * mtls::oid::asn1_rs::nom::bytes::complete::tag * mtls::oid::asn1_rs::nom::bytes::complete::tag_no_case * mtls::oid::asn1_rs::nom::bytes::complete::take * mtls::oid::asn1_rs::nom::bytes::complete::take_till * mtls::oid::asn1_rs::nom::bytes::complete::take_till1 * mtls::oid::asn1_rs::nom::bytes::complete::take_until * mtls::oid::asn1_rs::nom::bytes::complete::take_until1 * mtls::oid::asn1_rs::nom::bytes::complete::take_while * mtls::oid::asn1_rs::nom::bytes::complete::take_while1 * mtls::oid::asn1_rs::nom::bytes::complete::take_while_m_n * mtls::oid::asn1_rs::nom::bytes::streaming::escaped * mtls::oid::asn1_rs::nom::bytes::streaming::escaped_transform * mtls::oid::asn1_rs::nom::bytes::streaming::is_a * mtls::oid::asn1_rs::nom::bytes::streaming::is_not * mtls::oid::asn1_rs::nom::bytes::streaming::tag * mtls::oid::asn1_rs::nom::bytes::streaming::tag_no_case * mtls::oid::asn1_rs::nom::bytes::streaming::take * mtls::oid::asn1_rs::nom::bytes::streaming::take_till * mtls::oid::asn1_rs::nom::bytes::streaming::take_till1 * mtls::oid::asn1_rs::nom::bytes::streaming::take_until * mtls::oid::asn1_rs::nom::bytes::streaming::take_until1 * 
mtls::oid::asn1_rs::nom::bytes::streaming::take_while * mtls::oid::asn1_rs::nom::bytes::streaming::take_while1 * mtls::oid::asn1_rs::nom::bytes::streaming::take_while_m_n * mtls::oid::asn1_rs::nom::character::complete::alpha0 * mtls::oid::asn1_rs::nom::character::complete::alpha1 * mtls::oid::asn1_rs::nom::character::complete::alphanumeric0 * mtls::oid::asn1_rs::nom::character::complete::alphanumeric1 * mtls::oid::asn1_rs::nom::character::complete::anychar * mtls::oid::asn1_rs::nom::character::complete::char * mtls::oid::asn1_rs::nom::character::complete::crlf * mtls::oid::asn1_rs::nom::character::complete::digit0 * mtls::oid::asn1_rs::nom::character::complete::digit1 * mtls::oid::asn1_rs::nom::character::complete::hex_digit0 * mtls::oid::asn1_rs::nom::character::complete::hex_digit1 * mtls::oid::asn1_rs::nom::character::complete::i128 * mtls::oid::asn1_rs::nom::character::complete::i16 * mtls::oid::asn1_rs::nom::character::complete::i32 * mtls::oid::asn1_rs::nom::character::complete::i64 * mtls::oid::asn1_rs::nom::character::complete::i8 * mtls::oid::asn1_rs::nom::character::complete::line_ending * mtls::oid::asn1_rs::nom::character::complete::multispace0 * mtls::oid::asn1_rs::nom::character::complete::multispace1 * mtls::oid::asn1_rs::nom::character::complete::newline * mtls::oid::asn1_rs::nom::character::complete::none_of * mtls::oid::asn1_rs::nom::character::complete::not_line_ending * mtls::oid::asn1_rs::nom::character::complete::oct_digit0 * mtls::oid::asn1_rs::nom::character::complete::oct_digit1 * mtls::oid::asn1_rs::nom::character::complete::one_of * mtls::oid::asn1_rs::nom::character::complete::satisfy * mtls::oid::asn1_rs::nom::character::complete::space0 * mtls::oid::asn1_rs::nom::character::complete::space1 * mtls::oid::asn1_rs::nom::character::complete::tab * mtls::oid::asn1_rs::nom::character::complete::u128 * mtls::oid::asn1_rs::nom::character::complete::u16 * mtls::oid::asn1_rs::nom::character::complete::u32 * mtls::oid::asn1_rs::nom::character::complete::u64 * mtls::oid::asn1_rs::nom::character::complete::u8 * mtls::oid::asn1_rs::nom::character::is_alphabetic * mtls::oid::asn1_rs::nom::character::is_alphanumeric * mtls::oid::asn1_rs::nom::character::is_digit * mtls::oid::asn1_rs::nom::character::is_hex_digit * mtls::oid::asn1_rs::nom::character::is_newline * mtls::oid::asn1_rs::nom::character::is_oct_digit * mtls::oid::asn1_rs::nom::character::is_space * mtls::oid::asn1_rs::nom::character::streaming::alpha0 * mtls::oid::asn1_rs::nom::character::streaming::alpha1 * mtls::oid::asn1_rs::nom::character::streaming::alphanumeric0 * mtls::oid::asn1_rs::nom::character::streaming::alphanumeric1 * mtls::oid::asn1_rs::nom::character::streaming::anychar * mtls::oid::asn1_rs::nom::character::streaming::char * mtls::oid::asn1_rs::nom::character::streaming::crlf * mtls::oid::asn1_rs::nom::character::streaming::digit0 * mtls::oid::asn1_rs::nom::character::streaming::digit1 * mtls::oid::asn1_rs::nom::character::streaming::hex_digit0 * mtls::oid::asn1_rs::nom::character::streaming::hex_digit1 * mtls::oid::asn1_rs::nom::character::streaming::i128 * mtls::oid::asn1_rs::nom::character::streaming::i16 * mtls::oid::asn1_rs::nom::character::streaming::i32 * mtls::oid::asn1_rs::nom::character::streaming::i64 * mtls::oid::asn1_rs::nom::character::streaming::i8 * mtls::oid::asn1_rs::nom::character::streaming::line_ending * mtls::oid::asn1_rs::nom::character::streaming::multispace0 * mtls::oid::asn1_rs::nom::character::streaming::multispace1 * mtls::oid::asn1_rs::nom::character::streaming::newline * 
mtls::oid::asn1_rs::nom::character::streaming::none_of * mtls::oid::asn1_rs::nom::character::streaming::not_line_ending * mtls::oid::asn1_rs::nom::character::streaming::oct_digit0 * mtls::oid::asn1_rs::nom::character::streaming::oct_digit1 * mtls::oid::asn1_rs::nom::character::streaming::one_of * mtls::oid::asn1_rs::nom::character::streaming::satisfy * mtls::oid::asn1_rs::nom::character::streaming::space0 * mtls::oid::asn1_rs::nom::character::streaming::space1 * mtls::oid::asn1_rs::nom::character::streaming::tab * mtls::oid::asn1_rs::nom::character::streaming::u128 * mtls::oid::asn1_rs::nom::character::streaming::u16 * mtls::oid::asn1_rs::nom::character::streaming::u32 * mtls::oid::asn1_rs::nom::character::streaming::u64 * mtls::oid::asn1_rs::nom::character::streaming::u8 * mtls::oid::asn1_rs::nom::combinator::all_consuming * mtls::oid::asn1_rs::nom::combinator::complete * mtls::oid::asn1_rs::nom::combinator::cond * mtls::oid::asn1_rs::nom::combinator::consumed * mtls::oid::asn1_rs::nom::combinator::cut * mtls::oid::asn1_rs::nom::combinator::eof * mtls::oid::asn1_rs::nom::combinator::fail * mtls::oid::asn1_rs::nom::combinator::flat_map * mtls::oid::asn1_rs::nom::combinator::into * mtls::oid::asn1_rs::nom::combinator::iterator * mtls::oid::asn1_rs::nom::combinator::map * mtls::oid::asn1_rs::nom::combinator::map_opt * mtls::oid::asn1_rs::nom::combinator::map_parser * mtls::oid::asn1_rs::nom::combinator::map_res * mtls::oid::asn1_rs::nom::combinator::not * mtls::oid::asn1_rs::nom::combinator::opt * mtls::oid::asn1_rs::nom::combinator::peek * mtls::oid::asn1_rs::nom::combinator::recognize * mtls::oid::asn1_rs::nom::combinator::rest * mtls::oid::asn1_rs::nom::combinator::rest_len * mtls::oid::asn1_rs::nom::combinator::success * mtls::oid::asn1_rs::nom::combinator::value * mtls::oid::asn1_rs::nom::combinator::verify * mtls::oid::asn1_rs::nom::error::append_error * mtls::oid::asn1_rs::nom::error::context * mtls::oid::asn1_rs::nom::error::convert_error * mtls::oid::asn1_rs::nom::error::dbg_dmp * mtls::oid::asn1_rs::nom::error::error_to_u32 * mtls::oid::asn1_rs::nom::error::make_error * mtls::oid::asn1_rs::nom::lib::std::alloc::alloc * mtls::oid::asn1_rs::nom::lib::std::alloc::alloc_zeroed * mtls::oid::asn1_rs::nom::lib::std::alloc::dealloc * mtls::oid::asn1_rs::nom::lib::std::alloc::handle_alloc_error * mtls::oid::asn1_rs::nom::lib::std::alloc::realloc * mtls::oid::asn1_rs::nom::lib::std::alloc::set_alloc_error_hook * mtls::oid::asn1_rs::nom::lib::std::alloc::take_alloc_error_hook * mtls::oid::asn1_rs::nom::lib::std::cmp::max * mtls::oid::asn1_rs::nom::lib::std::cmp::max_by * mtls::oid::asn1_rs::nom::lib::std::cmp::max_by_key * mtls::oid::asn1_rs::nom::lib::std::cmp::min * mtls::oid::asn1_rs::nom::lib::std::cmp::min_by * mtls::oid::asn1_rs::nom::lib::std::cmp::min_by_key * mtls::oid::asn1_rs::nom::lib::std::convert::identity * mtls::oid::asn1_rs::nom::lib::std::fmt::format * mtls::oid::asn1_rs::nom::lib::std::fmt::write * mtls::oid::asn1_rs::nom::lib::std::iter::empty * mtls::oid::asn1_rs::nom::lib::std::iter::from_fn * mtls::oid::asn1_rs::nom::lib::std::iter::from_generator * mtls::oid::asn1_rs::nom::lib::std::iter::once * mtls::oid::asn1_rs::nom::lib::std::iter::once_with * mtls::oid::asn1_rs::nom::lib::std::iter::repeat * mtls::oid::asn1_rs::nom::lib::std::iter::repeat_with * mtls::oid::asn1_rs::nom::lib::std::iter::successors * mtls::oid::asn1_rs::nom::lib::std::iter::zip * mtls::oid::asn1_rs::nom::lib::std::mem::align_of * mtls::oid::asn1_rs::nom::lib::std::mem::align_of_val * 
mtls::oid::asn1_rs::nom::lib::std::mem::align_of_val_raw * mtls::oid::asn1_rs::nom::lib::std::mem::copy * mtls::oid::asn1_rs::nom::lib::std::mem::discriminant * mtls::oid::asn1_rs::nom::lib::std::mem::drop * mtls::oid::asn1_rs::nom::lib::std::mem::forget * mtls::oid::asn1_rs::nom::lib::std::mem::forget_unsized * mtls::oid::asn1_rs::nom::lib::std::mem::min_align_of * mtls::oid::asn1_rs::nom::lib::std::mem::min_align_of_val * mtls::oid::asn1_rs::nom::lib::std::mem::needs_drop * mtls::oid::asn1_rs::nom::lib::std::mem::replace * mtls::oid::asn1_rs::nom::lib::std::mem::size_of * mtls::oid::asn1_rs::nom::lib::std::mem::size_of_val * mtls::oid::asn1_rs::nom::lib::std::mem::size_of_val_raw * mtls::oid::asn1_rs::nom::lib::std::mem::swap * mtls::oid::asn1_rs::nom::lib::std::mem::take * mtls::oid::asn1_rs::nom::lib::std::mem::transmute * mtls::oid::asn1_rs::nom::lib::std::mem::transmute_copy * mtls::oid::asn1_rs::nom::lib::std::mem::uninitialized * mtls::oid::asn1_rs::nom::lib::std::mem::variant_count * mtls::oid::asn1_rs::nom::lib::std::mem::zeroed * mtls::oid::asn1_rs::nom::lib::std::slice::from_mut * mtls::oid::asn1_rs::nom::lib::std::slice::from_mut_ptr_range * mtls::oid::asn1_rs::nom::lib::std::slice::from_ptr_range * mtls::oid::asn1_rs::nom::lib::std::slice::from_raw_parts * mtls::oid::asn1_rs::nom::lib::std::slice::from_raw_parts_mut * mtls::oid::asn1_rs::nom::lib::std::slice::from_ref * mtls::oid::asn1_rs::nom::lib::std::slice::range * mtls::oid::asn1_rs::nom::lib::std::str::from_boxed_utf8_unchecked * mtls::oid::asn1_rs::nom::lib::std::str::from_utf8 * mtls::oid::asn1_rs::nom::lib::std::str::from_utf8_mut * mtls::oid::asn1_rs::nom::lib::std::str::from_utf8_unchecked * mtls::oid::asn1_rs::nom::lib::std::str::from_utf8_unchecked_mut * mtls::oid::asn1_rs::nom::multi::count * mtls::oid::asn1_rs::nom::multi::fill * mtls::oid::asn1_rs::nom::multi::fold_many0 * mtls::oid::asn1_rs::nom::multi::fold_many1 * mtls::oid::asn1_rs::nom::multi::fold_many_m_n * mtls::oid::asn1_rs::nom::multi::length_count * mtls::oid::asn1_rs::nom::multi::length_data * mtls::oid::asn1_rs::nom::multi::length_value * mtls::oid::asn1_rs::nom::multi::many0 * mtls::oid::asn1_rs::nom::multi::many0_count * mtls::oid::asn1_rs::nom::multi::many1 * mtls::oid::asn1_rs::nom::multi::many1_count * mtls::oid::asn1_rs::nom::multi::many_m_n * mtls::oid::asn1_rs::nom::multi::many_till * mtls::oid::asn1_rs::nom::multi::separated_list0 * mtls::oid::asn1_rs::nom::multi::separated_list1 * mtls::oid::asn1_rs::nom::number::complete::be_f32 * mtls::oid::asn1_rs::nom::number::complete::be_f64 * mtls::oid::asn1_rs::nom::number::complete::be_i128 * mtls::oid::asn1_rs::nom::number::complete::be_i16 * mtls::oid::asn1_rs::nom::number::complete::be_i24 * mtls::oid::asn1_rs::nom::number::complete::be_i32 * mtls::oid::asn1_rs::nom::number::complete::be_i64 * mtls::oid::asn1_rs::nom::number::complete::be_i8 * mtls::oid::asn1_rs::nom::number::complete::be_u128 * mtls::oid::asn1_rs::nom::number::complete::be_u16 * mtls::oid::asn1_rs::nom::number::complete::be_u24 * mtls::oid::asn1_rs::nom::number::complete::be_u32 * mtls::oid::asn1_rs::nom::number::complete::be_u64 * mtls::oid::asn1_rs::nom::number::complete::be_u8 * mtls::oid::asn1_rs::nom::number::complete::double * mtls::oid::asn1_rs::nom::number::complete::f32 * mtls::oid::asn1_rs::nom::number::complete::f64 * mtls::oid::asn1_rs::nom::number::complete::float * mtls::oid::asn1_rs::nom::number::complete::hex_u32 * mtls::oid::asn1_rs::nom::number::complete::i128 * 
mtls::oid::asn1_rs::nom::number::complete::i16 * mtls::oid::asn1_rs::nom::number::complete::i24 * mtls::oid::asn1_rs::nom::number::complete::i32 * mtls::oid::asn1_rs::nom::number::complete::i64 * mtls::oid::asn1_rs::nom::number::complete::i8 * mtls::oid::asn1_rs::nom::number::complete::le_f32 * mtls::oid::asn1_rs::nom::number::complete::le_f64 * mtls::oid::asn1_rs::nom::number::complete::le_i128 * mtls::oid::asn1_rs::nom::number::complete::le_i16 * mtls::oid::asn1_rs::nom::number::complete::le_i24 * mtls::oid::asn1_rs::nom::number::complete::le_i32 * mtls::oid::asn1_rs::nom::number::complete::le_i64 * mtls::oid::asn1_rs::nom::number::complete::le_i8 * mtls::oid::asn1_rs::nom::number::complete::le_u128 * mtls::oid::asn1_rs::nom::number::complete::le_u16 * mtls::oid::asn1_rs::nom::number::complete::le_u24 * mtls::oid::asn1_rs::nom::number::complete::le_u32 * mtls::oid::asn1_rs::nom::number::complete::le_u64 * mtls::oid::asn1_rs::nom::number::complete::le_u8 * mtls::oid::asn1_rs::nom::number::complete::recognize_float * mtls::oid::asn1_rs::nom::number::complete::recognize_float_parts * mtls::oid::asn1_rs::nom::number::complete::u128 * mtls::oid::asn1_rs::nom::number::complete::u16 * mtls::oid::asn1_rs::nom::number::complete::u24 * mtls::oid::asn1_rs::nom::number::complete::u32 * mtls::oid::asn1_rs::nom::number::complete::u64 * mtls::oid::asn1_rs::nom::number::complete::u8 * mtls::oid::asn1_rs::nom::number::streaming::be_f32 * mtls::oid::asn1_rs::nom::number::streaming::be_f64 * mtls::oid::asn1_rs::nom::number::streaming::be_i128 * mtls::oid::asn1_rs::nom::number::streaming::be_i16 * mtls::oid::asn1_rs::nom::number::streaming::be_i24 * mtls::oid::asn1_rs::nom::number::streaming::be_i32 * mtls::oid::asn1_rs::nom::number::streaming::be_i64 * mtls::oid::asn1_rs::nom::number::streaming::be_i8 * mtls::oid::asn1_rs::nom::number::streaming::be_u128 * mtls::oid::asn1_rs::nom::number::streaming::be_u16 * mtls::oid::asn1_rs::nom::number::streaming::be_u24 * mtls::oid::asn1_rs::nom::number::streaming::be_u32 * mtls::oid::asn1_rs::nom::number::streaming::be_u64 * mtls::oid::asn1_rs::nom::number::streaming::be_u8 * mtls::oid::asn1_rs::nom::number::streaming::double * mtls::oid::asn1_rs::nom::number::streaming::f32 * mtls::oid::asn1_rs::nom::number::streaming::f64 * mtls::oid::asn1_rs::nom::number::streaming::float * mtls::oid::asn1_rs::nom::number::streaming::hex_u32 * mtls::oid::asn1_rs::nom::number::streaming::i128 * mtls::oid::asn1_rs::nom::number::streaming::i16 * mtls::oid::asn1_rs::nom::number::streaming::i24 * mtls::oid::asn1_rs::nom::number::streaming::i32 * mtls::oid::asn1_rs::nom::number::streaming::i64 * mtls::oid::asn1_rs::nom::number::streaming::i8 * mtls::oid::asn1_rs::nom::number::streaming::le_f32 * mtls::oid::asn1_rs::nom::number::streaming::le_f64 * mtls::oid::asn1_rs::nom::number::streaming::le_i128 * mtls::oid::asn1_rs::nom::number::streaming::le_i16 * mtls::oid::asn1_rs::nom::number::streaming::le_i24 * mtls::oid::asn1_rs::nom::number::streaming::le_i32 * mtls::oid::asn1_rs::nom::number::streaming::le_i64 * mtls::oid::asn1_rs::nom::number::streaming::le_i8 * mtls::oid::asn1_rs::nom::number::streaming::le_u128 * mtls::oid::asn1_rs::nom::number::streaming::le_u16 * mtls::oid::asn1_rs::nom::number::streaming::le_u24 * mtls::oid::asn1_rs::nom::number::streaming::le_u32 * mtls::oid::asn1_rs::nom::number::streaming::le_u64 * mtls::oid::asn1_rs::nom::number::streaming::le_u8 * mtls::oid::asn1_rs::nom::number::streaming::recognize_float * 
mtls::oid::asn1_rs::nom::number::streaming::recognize_float_parts * mtls::oid::asn1_rs::nom::number::streaming::u128 * mtls::oid::asn1_rs::nom::number::streaming::u16 * mtls::oid::asn1_rs::nom::number::streaming::u24 * mtls::oid::asn1_rs::nom::number::streaming::u32 * mtls::oid::asn1_rs::nom::number::streaming::u64 * mtls::oid::asn1_rs::nom::number::streaming::u8 * mtls::oid::asn1_rs::nom::sequence::delimited * mtls::oid::asn1_rs::nom::sequence::pair * mtls::oid::asn1_rs::nom::sequence::preceded * mtls::oid::asn1_rs::nom::sequence::separated_pair * mtls::oid::asn1_rs::nom::sequence::terminated * mtls::oid::asn1_rs::nom::sequence::tuple * mtls::oid::asn1_rs::parse_der_tagged_explicit * mtls::oid::asn1_rs::parse_der_tagged_explicit_g * mtls::oid::asn1_rs::parse_der_tagged_implicit * mtls::oid::asn1_rs::parse_der_tagged_implicit_g * mtls::oid::format_oid * mtls::oid::generate_file * mtls::oid::load_file * mtls::oid::oid2abbrev * mtls::oid::oid2description * mtls::oid::oid2sn * mtls::oid::oid_registry * mtls::x509::ber::ber_read_element_content_as * mtls::x509::ber::ber_read_element_header * mtls::x509::ber::parse_ber * mtls::x509::ber::parse_ber_any * mtls::x509::ber::parse_ber_any_r * mtls::x509::ber::parse_ber_any_with_tag_r * mtls::x509::ber::parse_ber_bitstring * mtls::x509::ber::parse_ber_bmpstring * mtls::x509::ber::parse_ber_bool * mtls::x509::ber::parse_ber_container * mtls::x509::ber::parse_ber_content * mtls::x509::ber::parse_ber_content2 * mtls::x509::ber::parse_ber_endofcontent * mtls::x509::ber::parse_ber_enum * mtls::x509::ber::parse_ber_explicit_optional * mtls::x509::ber::parse_ber_generalizedtime * mtls::x509::ber::parse_ber_generalstring * mtls::x509::ber::parse_ber_graphicstring * mtls::x509::ber::parse_ber_i32 * mtls::x509::ber::parse_ber_i64 * mtls::x509::ber::parse_ber_ia5string * mtls::x509::ber::parse_ber_implicit * mtls::x509::ber::parse_ber_integer * mtls::x509::ber::parse_ber_null * mtls::x509::ber::parse_ber_numericstring * mtls::x509::ber::parse_ber_objectdescriptor * mtls::x509::ber::parse_ber_octetstring * mtls::x509::ber::parse_ber_oid * mtls::x509::ber::parse_ber_optional * mtls::x509::ber::parse_ber_printablestring * mtls::x509::ber::parse_ber_recursive * mtls::x509::ber::parse_ber_relative_oid * mtls::x509::ber::parse_ber_sequence * mtls::x509::ber::parse_ber_sequence_defined * mtls::x509::ber::parse_ber_sequence_defined_g * mtls::x509::ber::parse_ber_sequence_of * mtls::x509::ber::parse_ber_sequence_of_v * mtls::x509::ber::parse_ber_set * mtls::x509::ber::parse_ber_set_defined * mtls::x509::ber::parse_ber_set_defined_g * mtls::x509::ber::parse_ber_set_of * mtls::x509::ber::parse_ber_set_of_v * mtls::x509::ber::parse_ber_slice * mtls::x509::ber::parse_ber_t61string * mtls::x509::ber::parse_ber_tagged_explicit * mtls::x509::ber::parse_ber_tagged_explicit_g * mtls::x509::ber::parse_ber_tagged_implicit * mtls::x509::ber::parse_ber_tagged_implicit_g * mtls::x509::ber::parse_ber_u32 * mtls::x509::ber::parse_ber_u64 * mtls::x509::ber::parse_ber_universalstring * mtls::x509::ber::parse_ber_utctime * mtls::x509::ber::parse_ber_utf8string * mtls::x509::ber::parse_ber_videotexstring * mtls::x509::ber::parse_ber_visiblestring * mtls::x509::ber::parse_ber_with_tag * mtls::x509::der::der_read_element_content * mtls::x509::der::der_read_element_content_as * mtls::x509::der::der_read_element_header * mtls::x509::der::parse_der * mtls::x509::der::parse_der_bitstring * mtls::x509::der::parse_der_bmpstring * mtls::x509::der::parse_der_bool * 
mtls::x509::der::parse_der_container * mtls::x509::der::parse_der_content * mtls::x509::der::parse_der_content2 * mtls::x509::der::parse_der_endofcontent * mtls::x509::der::parse_der_enum * mtls::x509::der::parse_der_explicit_optional * mtls::x509::der::parse_der_generalizedtime * mtls::x509::der::parse_der_generalstring * mtls::x509::der::parse_der_graphicstring * mtls::x509::der::parse_der_i32 * mtls::x509::der::parse_der_i64 * mtls::x509::der::parse_der_ia5string * mtls::x509::der::parse_der_implicit * mtls::x509::der::parse_der_integer * mtls::x509::der::parse_der_null * mtls::x509::der::parse_der_numericstring * mtls::x509::der::parse_der_objectdescriptor * mtls::x509::der::parse_der_octetstring * mtls::x509::der::parse_der_oid * mtls::x509::der::parse_der_printablestring * mtls::x509::der::parse_der_recursive * mtls::x509::der::parse_der_relative_oid * mtls::x509::der::parse_der_sequence * mtls::x509::der::parse_der_sequence_defined * mtls::x509::der::parse_der_sequence_defined_g * mtls::x509::der::parse_der_sequence_of * mtls::x509::der::parse_der_sequence_of_v * mtls::x509::der::parse_der_set * mtls::x509::der::parse_der_set_defined * mtls::x509::der::parse_der_set_defined_g * mtls::x509::der::parse_der_set_of * mtls::x509::der::parse_der_set_of_v * mtls::x509::der::parse_der_slice * mtls::x509::der::parse_der_t61string * mtls::x509::der::parse_der_tagged_explicit * mtls::x509::der::parse_der_tagged_explicit_g * mtls::x509::der::parse_der_tagged_implicit * mtls::x509::der::parse_der_tagged_implicit_g * mtls::x509::der::parse_der_u32 * mtls::x509::der::parse_der_u64 * mtls::x509::der::parse_der_universalstring * mtls::x509::der::parse_der_utctime * mtls::x509::der::parse_der_utf8string * mtls::x509::der::parse_der_videotexstring * mtls::x509::der::parse_der_with_tag * mtls::x509::der::visiblestring * mtls::x509::parse_ct_signed_certificate_timestamp * mtls::x509::parse_ct_signed_certificate_timestamp_list * serde::json::from_slice * serde::json::from_str * serde::json::from_value * serde::json::to_pretty_string * serde::json::to_string * serde::json::to_value * serde::msgpack::from_slice * serde::msgpack::to_compact_vec * serde::msgpack::to_vec ### Type Definitions * catcher::BoxFuture * catcher::Result * data::Outcome * fairing::Result * form::Result * mtls::Result * mtls::oid::LoadedMap * mtls::oid::asn1_rs::IResult * mtls::oid::asn1_rs::OptTaggedExplicit * mtls::oid::asn1_rs::OptTaggedImplicit * mtls::oid::asn1_rs::ParseResult * mtls::oid::asn1_rs::Result * mtls::oid::asn1_rs::SerializeResult * mtls::oid::asn1_rs::SetIterator * mtls::oid::asn1_rs::TaggedExplicit * mtls::oid::asn1_rs::TaggedImplicit * mtls::oid::asn1_rs::nom::IResult * mtls::oid::asn1_rs::nom::lib::std::alloc::LayoutErr * mtls::oid::asn1_rs::nom::lib::std::fmt::Result * mtls::oid::asn1_rs::nom::lib::std::string::ParseError * mtls::x509::CRLDistributionPoints * mtls::x509::CertificatePolicies * mtls::x509::X509Result * mtls::x509::ber::compat::BerClass * mtls::x509::ber::compat::BerObjectHeader * mtls::x509::ber::compat::BerSize * mtls::x509::ber::compat::BerTag * mtls::x509::der::DerClass * mtls::x509::der::DerObject * mtls::x509::der::DerObjectContent * mtls::x509::der::DerObjectHeader * mtls::x509::der::DerTag * request::FlashMessage * request::Outcome * response::Result * route::BoxFuture * route::Outcome * serde::uuid::Bytes ### Constants * http::hyper::header::ACCEPT * http::hyper::header::ACCEPT_CHARSET * http::hyper::header::ACCEPT_ENCODING * http::hyper::header::ACCEPT_LANGUAGE * 
http::hyper::header::ACCEPT_RANGES * http::hyper::header::ACCESS_CONTROL_ALLOW_CREDENTIALS * http::hyper::header::ACCESS_CONTROL_ALLOW_HEADERS * http::hyper::header::ACCESS_CONTROL_ALLOW_METHODS * http::hyper::header::ACCESS_CONTROL_ALLOW_ORIGIN * http::hyper::header::ACCESS_CONTROL_EXPOSE_HEADERS * http::hyper::header::ACCESS_CONTROL_MAX_AGE * http::hyper::header::ACCESS_CONTROL_REQUEST_HEADERS * http::hyper::header::ACCESS_CONTROL_REQUEST_METHOD * http::hyper::header::ALLOW * http::hyper::header::AUTHORIZATION * http::hyper::header::CACHE_CONTROL * http::hyper::header::CONNECTION * http::hyper::header::CONTENT_DISPOSITION * http::hyper::header::CONTENT_ENCODING * http::hyper::header::CONTENT_LANGUAGE * http::hyper::header::CONTENT_LENGTH * http::hyper::header::CONTENT_LOCATION * http::hyper::header::CONTENT_RANGE * http::hyper::header::CONTENT_SECURITY_POLICY * http::hyper::header::CONTENT_SECURITY_POLICY_REPORT_ONLY * http::hyper::header::CONTENT_TYPE * http::hyper::header::DATE * http::hyper::header::ETAG * http::hyper::header::EXPECT * http::hyper::header::EXPIRES * http::hyper::header::FORWARDED * http::hyper::header::FROM * http::hyper::header::HOST * http::hyper::header::IF_MATCH * http::hyper::header::IF_MODIFIED_SINCE * http::hyper::header::IF_NONE_MATCH * http::hyper::header::IF_RANGE * http::hyper::header::IF_UNMODIFIED_SINCE * http::hyper::header::LAST_MODIFIED * http::hyper::header::LINK * http::hyper::header::LOCATION * http::hyper::header::ORIGIN * http::hyper::header::PRAGMA * http::hyper::header::RANGE * http::hyper::header::REFERER * http::hyper::header::REFERRER_POLICY * http::hyper::header::REFRESH * http::hyper::header::STRICT_TRANSPORT_SECURITY * http::hyper::header::TE * http::hyper::header::TRANSFER_ENCODING * http::hyper::header::UPGRADE * http::hyper::header::USER_AGENT * http::hyper::header::VARY * mtls::oid::MS_CTL * mtls::oid::MS_JURISDICTION_COUNTRY * mtls::oid::MS_JURISDICTION_LOCALITY * mtls::oid::MS_JURISDICTION_STATE_OR_PROVINCE * mtls::oid::OID_CT_LIST_SCT * mtls::oid::OID_DOMAIN_COMPONENT * mtls::oid::OID_EC_P256 * mtls::oid::OID_GOST_R3410_2001 * mtls::oid::OID_HASH_SHA1 * mtls::oid::OID_KDF_SHA1_SINGLE * mtls::oid::OID_KEY_TYPE_DSA * mtls::oid::OID_KEY_TYPE_EC_PUBLIC_KEY * mtls::oid::OID_KEY_TYPE_GOST_R3410_2012_256 * mtls::oid::OID_KEY_TYPE_GOST_R3410_2012_512 * mtls::oid::OID_MD5_WITH_RSA * mtls::oid::OID_NIST_EC_P384 * mtls::oid::OID_NIST_EC_P521 * mtls::oid::OID_NIST_ENC_AES256_CBC * mtls::oid::OID_NIST_HASH_SHA256 * mtls::oid::OID_NIST_HASH_SHA384 * mtls::oid::OID_NIST_HASH_SHA512 * mtls::oid::OID_PKCS12 * mtls::oid::OID_PKCS12_PBEIDS * mtls::oid::OID_PKCS12_PBE_SHA1_128RC2_CBC * mtls::oid::OID_PKCS12_PBE_SHA1_128RC4 * mtls::oid::OID_PKCS12_PBE_SHA1_2K_3DES_CBC * mtls::oid::OID_PKCS12_PBE_SHA1_3K_3DES_CBC * mtls::oid::OID_PKCS12_PBE_SHA1_40RC2_CBC * mtls::oid::OID_PKCS12_PBE_SHA1_40RC4 * mtls::oid::OID_PKCS1_MD2WITHRSAENC * mtls::oid::OID_PKCS1_MD4WITHRSAENC * mtls::oid::OID_PKCS1_MD5WITHRSAENC * mtls::oid::OID_PKCS1_RSAENCRYPTION * mtls::oid::OID_PKCS1_RSASSAPSS * mtls::oid::OID_PKCS1_SHA1WITHRSA * mtls::oid::OID_PKCS1_SHA224WITHRSA * mtls::oid::OID_PKCS1_SHA256WITHRSA * mtls::oid::OID_PKCS1_SHA384WITHRSA * mtls::oid::OID_PKCS1_SHA512WITHRSA * mtls::oid::OID_PKCS7_ID_DATA * mtls::oid::OID_PKCS7_ID_DIGESTED_DATA * mtls::oid::OID_PKCS7_ID_ENCRYPTED_DATA * mtls::oid::OID_PKCS7_ID_ENVELOPED_DATA * mtls::oid::OID_PKCS7_ID_SIGNED_DATA * mtls::oid::OID_PKCS7_ID_SIGNED_ENVELOPED_DATA * mtls::oid::OID_PKCS9_CONTENT_TYPE * 
mtls::oid::OID_PKCS9_EMAIL_ADDRESS * mtls::oid::OID_PKCS9_EXTENSION_REQUEST * mtls::oid::OID_PKCS9_FRIENDLY_NAME * mtls::oid::OID_PKCS9_ID_MESSAGE_DIGEST * mtls::oid::OID_PKCS9_SIGNING_TIME * mtls::oid::OID_PKCS9_SMIME_CAPABILITIES * mtls::oid::OID_PKCS9_UNSTRUCTURED_NAME * mtls::oid::OID_PKIX_ACCESS_DESCRIPTOR_CA_ISSUERS * mtls::oid::OID_PKIX_ACCESS_DESCRIPTOR_CA_REPOSITORY * mtls::oid::OID_PKIX_ACCESS_DESCRIPTOR_CMC * mtls::oid::OID_PKIX_ACCESS_DESCRIPTOR_DVCS * mtls::oid::OID_PKIX_ACCESS_DESCRIPTOR_HTTP_CERTS * mtls::oid::OID_PKIX_ACCESS_DESCRIPTOR_HTTP_CRLS * mtls::oid::OID_PKIX_ACCESS_DESCRIPTOR_OCSP * mtls::oid::OID_PKIX_ACCESS_DESCRIPTOR_RPKI_MANIFEST * mtls::oid::OID_PKIX_ACCESS_DESCRIPTOR_RPKI_NOTIFY * mtls::oid::OID_PKIX_ACCESS_DESCRIPTOR_SIGNED_OBJECT * mtls::oid::OID_PKIX_ACCESS_DESCRIPTOR_STIRTNLIST * mtls::oid::OID_PKIX_ACCESS_DESCRIPTOR_TIMESTAMPING * mtls::oid::OID_PKIX_AUTHORITY_INFO_ACCESS * mtls::oid::OID_SHA1_WITH_RSA * mtls::oid::OID_SIG_DSA_WITH_SHA1 * mtls::oid::OID_SIG_ECDSA_WITH_SHA224 * mtls::oid::OID_SIG_ECDSA_WITH_SHA256 * mtls::oid::OID_SIG_ECDSA_WITH_SHA384 * mtls::oid::OID_SIG_ECDSA_WITH_SHA512 * mtls::oid::OID_SIG_ED25519 * mtls::oid::OID_SIG_ED448 * mtls::oid::OID_SIG_GOST_R3410_2012_256 * mtls::oid::OID_SIG_GOST_R3410_2012_512 * mtls::oid::OID_SIG_GOST_R3411_94_WITH_R3410_2001 * mtls::oid::OID_SIG_RSA_RIPE_MD160 * mtls::oid::OID_X500 * mtls::oid::OID_X509 * mtls::oid::OID_X509_ALIASED_ENTRY_NAME * mtls::oid::OID_X509_BUSINESS_CATEGORY * mtls::oid::OID_X509_COMMON_NAME * mtls::oid::OID_X509_COUNTRY_NAME * mtls::oid::OID_X509_DESCRIPTION * mtls::oid::OID_X509_DN_QUALIFIER * mtls::oid::OID_X509_EXT_AUTHORITY_KEY_IDENTIFIER * mtls::oid::OID_X509_EXT_BASE_URL * mtls::oid::OID_X509_EXT_BASIC_CONSTRAINTS * mtls::oid::OID_X509_EXT_CA_CERT_URL * mtls::oid::OID_X509_EXT_CA_CRL_URL * mtls::oid::OID_X509_EXT_CA_POLICY_URL * mtls::oid::OID_X509_EXT_CA_REVOCATION_URL * mtls::oid::OID_X509_EXT_CERTIFICATE_POLICIES * mtls::oid::OID_X509_EXT_CERT_COMMENT * mtls::oid::OID_X509_EXT_CERT_TYPE * mtls::oid::OID_X509_EXT_CRL_DISTRIBUTION_POINTS * mtls::oid::OID_X509_EXT_CRL_NUMBER * mtls::oid::OID_X509_EXT_DELTA_CRL_INDICATOR * mtls::oid::OID_X509_EXT_ENTITY_LOGO * mtls::oid::OID_X509_EXT_EXTENDED_KEY_USAGE * mtls::oid::OID_X509_EXT_FRESHEST_CRL * mtls::oid::OID_X509_EXT_HOLD_INSTRUCTION_CODE * mtls::oid::OID_X509_EXT_HOMEPAGE_URL * mtls::oid::OID_X509_EXT_INHIBITANT_ANY_POLICY * mtls::oid::OID_X509_EXT_INVALIDITY_DATE * mtls::oid::OID_X509_EXT_ISSUER * mtls::oid::OID_X509_EXT_ISSUER_ALT_NAME * mtls::oid::OID_X509_EXT_ISSUER_DISTRIBUTION_POINT * mtls::oid::OID_X509_EXT_KEY_USAGE * mtls::oid::OID_X509_EXT_NAME_CONSTRAINTS * mtls::oid::OID_X509_EXT_POLICY_CONSTRAINTS * mtls::oid::OID_X509_EXT_POLICY_MAPPINGS * mtls::oid::OID_X509_EXT_PRIVATE_KEY_USAGE_PERIOD * mtls::oid::OID_X509_EXT_REASON_CODE * mtls::oid::OID_X509_EXT_RENEWAL_URL * mtls::oid::OID_X509_EXT_REVOCATION_URL * mtls::oid::OID_X509_EXT_SSL_SERVER_NAME * mtls::oid::OID_X509_EXT_SUBJECT_ALT_NAME * mtls::oid::OID_X509_EXT_SUBJECT_KEY_IDENTIFIER * mtls::oid::OID_X509_EXT_USER_PICTURE * mtls::oid::OID_X509_GENERATION_QUALIFIER * mtls::oid::OID_X509_GIVEN_NAME * mtls::oid::OID_X509_INITIALS * mtls::oid::OID_X509_KNOWLEDGE_INFORMATION * mtls::oid::OID_X509_LOCALITY_NAME * mtls::oid::OID_X509_NAME * mtls::oid::OID_X509_OBJECT_CLASS * mtls::oid::OID_X509_OBSOLETE_AUTHORITY_KEY_IDENTIFIER * mtls::oid::OID_X509_OBSOLETE_CERTIFICATE_POLICIES * mtls::oid::OID_X509_OBSOLETE_ISSUER_ALT_NAME * 
mtls::oid::OID_X509_OBSOLETE_KEY_ATTRIBUTES * mtls::oid::OID_X509_OBSOLETE_KEY_USAGE * mtls::oid::OID_X509_OBSOLETE_POLICY_MAPPING * mtls::oid::OID_X509_OBSOLETE_SUBJECT_ALT_NAME * mtls::oid::OID_X509_OBSOLETE_SUBTREES_CONSTRAINT * mtls::oid::OID_X509_ORGANIZATIONAL_UNIT * mtls::oid::OID_X509_ORGANIZATION_NAME * mtls::oid::OID_X509_POSTAL_ADDRESS * mtls::oid::OID_X509_POSTAL_CODE * mtls::oid::OID_X509_SEARCH_GUIDE * mtls::oid::OID_X509_SERIALNUMBER * mtls::oid::OID_X509_STATE_OR_PROVINCE_NAME * mtls::oid::OID_X509_STREET_ADDRESS * mtls::oid::OID_X509_SURNAME * mtls::oid::OID_X509_TITLE * mtls::oid::OID_X509_UNIQUE_IDENTIFIER * mtls::oid::SPC_INDIRECT_DATA_OBJID * mtls::oid::SPC_INDIVIDUAL_SP_KEY_PURPOSE_OBJID * mtls::oid::SPC_PE_IMAGE_DATA * mtls::oid::SPC_SP_OPUS_INFO_OBJID * mtls::oid::SPC_STATEMENT_TYPE_OBJID * mtls::x509::ber::MAX_OBJECT_SIZE * mtls::x509::ber::MAX_RECURSION

# Module rocket::mtls

Support for mutual TLS client certificates.

For details on how to configure mutual TLS, see `MutualTls` and the TLS guide. See `Certificate` for a request guard that validates, verifies, and retrieves client certificates.

* Signed and unsigned big integer types re-exported from `num_bigint`.
* Lower-level OID types re-exported from `oid_registry` and `der-parser`.
* Lower-level X.509 types re-exported from `x509_parser`.
* A request guard for validated, verified client certificates.
* An X.509 Distinguished Name (DN) found in a `Certificate`.
* An error returned by the `Certificate` request guard.

# Module rocket::serde::json

Automatic JSON (de)serialization support. See `Json` for details.

```
[dependencies.rocket]
version = "=0.5.0-rc.3"
features = ["json"]
```

The `LocalRequest` and `LocalResponse` types provide `json()` and `into_json()` methods to create a request with serialized JSON and deserialize a response as JSON, respectively.

* A macro to create ad-hoc JSON serializable values using JSON syntax.
* The JSON guard: easily consume and return JSON.
* Error returned by the `Json` guard when JSON deserialization fails.
* An arbitrary JSON value as returned by `json!`.
* Deserialize an instance of type `T` from bytes of JSON text.
* Deserialize an instance of type `T` from a string of JSON text.
* Interpret a `Value` as an instance of type `T`.
* Serialize a `T` into a JSON string with “pretty” formatted representation.
* Serialize a `T` into a JSON string with compact representation.
* Convert a `T` into a `Value`, an opaque value representing JSON data.
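As a concrete illustration of the `Json` guard described above, here is a minimal sketch of a route that consumes and returns JSON; the `Task` type and the `/todo` path are hypothetical, invented for this example.

```
use rocket::serde::{Deserialize, Serialize, json::Json};

// Hypothetical payload type, used only for illustration. The
// `crate` attribute points the derives at Rocket's re-exported serde.
#[derive(Serialize, Deserialize)]
#[serde(crate = "rocket::serde")]
struct Task {
    name: String,
    complete: bool,
}

// As a data guard, `Json<T>` deserializes the request body into `T`;
// as a responder, it serializes the return value and sets the
// `Content-Type` to `application/json`.
#[rocket::post("/todo", format = "json", data = "<task>")]
fn create(task: Json<Task>) -> Json<Task> {
    task
}
```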
# Module rocket::serde::msgpack

Automatic MessagePack (de)serialization support. See `MsgPack` for further details.

```
[dependencies.rocket]
version = "=0.5.0-rc.3"
features = ["msgpack"]
```

The `LocalRequest` and `LocalResponse` types provide `msgpack()` and `into_msgpack()` methods to create a request with serialized MessagePack and deserialize a response as MessagePack, respectively.

* The MessagePack guard: easily consume and return MessagePack.
* Enum representing errors that can occur while decoding MessagePack data.
* Deserialize an instance of type `T` from MessagePack encoded bytes.
* Serialize a `T` into a MessagePack byte vector with compact representation.
* Serialize a `T` into a MessagePack byte vector with named representation.

# Module rocket::serde::uuid

UUID path/query parameter and form value parsing support.

```
[dependencies.rocket]
version = "=0.5.0-rc.3"
features = ["uuid"]
```

`Uuid` implements `FromParam` and `FromFormField` (i.e., `FromForm`), allowing UUID values to be accepted directly in paths, queries, and forms. You can use the `Uuid` type directly as a target of a dynamic parameter:

```
#[get("/users/<id>")]
fn user(id: Uuid) -> String {
    format!("We found: {}", id)
}
```

You can also use the `Uuid` as a form value, including in query strings:

```
#[get("/user?<id>")]
fn user(id: Uuid) -> String {
    format!("User ID: {}", id)
}
```

Additionally, `Uuid` implements `UriDisplay<P>` for all `P`. As such, route URIs including `Uuid`s can be generated in a type-safe manner:

```
#[get("/user/<id>")]
fn user(id: Uuid) -> String {
    format!("User ID: {}", id)
}

#[get("/user?<id>")]
fn old_user_path(id: Uuid) -> Redirect {
    Redirect::to(uri!(user(id)))
}
```

## Extra Features

The `uuid` crate exposes extra `v{n}` features for generating UUIDs which are not enabled by Rocket. To enable these features, depend on `uuid` directly. The extra functionality can be accessed via either `rocket::serde::uuid::Uuid` or the direct `uuid::Uuid`; the types are one and the same.

```
[dependencies.uuid]
version = "1"
features = ["v1", "v4"]
```

* Adapters for alternative string formats.
* Parse `Uuid`s from string literals at compile time.
* A builder for creating a UUID.
* Error returned on `FromParam` or `FromFormField` failure.
* A Universally Unique Identifier (UUID).
* The reserved variants of UUIDs.
* The version of the UUID, denoting the generating algorithm.
* A 128-bit (16 byte) buffer containing the UUID.

# Module rocket::local

Structures for local dispatching of requests, primarily for testing.

This module allows for simple request dispatching against a local, non-networked instance of Rocket. The primary use of this module is to unit and integration test Rocket applications by crafting requests, dispatching them, and verifying the response.

## Async. vs. Blocking

This module contains two variants, in its two submodules, of the same local API: `asynchronous` and `blocking`. As their names imply, the `asynchronous` API is `async`, returning a `Future` for operations that would otherwise block, while `blocking` blocks for the same operations.

Unless your request handling requires concurrency to make progress, or you’re making use of a `Client` in an environment that necessitates or would benefit from concurrency, you should use the `blocking` set of APIs due to their ease-of-use. If your request handling does require concurrency to make progress, for instance by having one handler `await` a response generated from a request to another handler, use the `asynchronous` set of APIs.

Both APIs include a `Client` structure that is used to create `LocalRequest` structures that can be dispatched against a given `Rocket` instance to yield a `LocalResponse` structure. The APIs are identical except in that the `asynchronous` APIs return `Future`s for otherwise blocking operations.

## Unit/Integration Testing

This module is primarily intended to be used to test a Rocket application by constructing requests via `Client`, dispatching them, and validating the resulting response. As a complete example, consider the following “Hello, world!” application, with testing.
```
#[get("/")]
fn hello() -> &'static str {
    "Hello, world!"
}

// The application's launch function, callable from tests below.
#[launch]
fn rocket() -> _ {
    rocket::build().mount("/", routes![hello])
}

#[cfg(test)]
mod test {
    // Using the preferred `blocking` API.
    #[test]
    fn test_hello_world_blocking() {
        use rocket::local::blocking::Client;

        // Construct a tracked `Client` from the application's `Rocket` instance.
        let client = Client::tracked(super::rocket()).expect("valid rocket instance");

        // Dispatch a request to 'GET /' and validate the response.
        let response = client.get("/").dispatch();
        assert_eq!(response.into_string().unwrap(), "Hello, world!");
    }

    // Using the `asynchronous` API.
    #[rocket::async_test]
    async fn test_hello_world_async() {
        use rocket::local::asynchronous::Client;

        // Construct a tracked `Client` from the application's `Rocket` instance.
        let client = Client::tracked(super::rocket()).await.expect("valid rocket instance");

        // Dispatch a request to 'GET /' and validate the response.
        let response = client.get("/").dispatch().await;
        assert_eq!(response.into_string().await.unwrap(), "Hello, world!");
    }
}
```

For more details on testing, see the testing guide.

## `Client`

A `Client`, either `blocking::Client` or `asynchronous::Client`, referred to as simply `Client` and `async` `Client`, respectively, constructs requests for local dispatching.

### Usage

A `Client` is constructed via the `tracked()` (`async` `tracked()`) or `untracked()` (`async` `untracked()`) methods from an already constructed `Rocket` instance. Once a value of `Client` has been constructed, `get()`, `put()`, `post()`, and so on (`async` `get()`, `async` `put()`, `async` `post()`) can be called to create a `LocalRequest` (`async` `LocalRequest`) for dispatching.

### Cookie Tracking

A `Client` constructed using `tracked()` propagates cookie changes made by responses to previously dispatched requests. In other words, if a previously dispatched request resulted in a response that adds a cookie, any future requests will contain that cookie. Similarly, cookies removed by a response won’t be propagated further.

This is typically the desired mode of operation for a `Client` as it removes the burden of manually tracking cookies. Under some circumstances, however, disabling this tracking may be desired. In these cases, use the `untracked()` constructor to create a `Client` that will not track cookies.

### Example

For a usage example, see `Client` or `async` `Client`.

## `LocalRequest`

A `LocalRequest` (`async` `LocalRequest`) is constructed via a `Client`. Once obtained, headers, cookies, including private cookies, the remote IP address, and the request body can all be set via methods on the `LocalRequest` structure.

### Dispatching

A `LocalRequest` is dispatched by calling `dispatch()` (`async` `dispatch()`). The `LocalRequest` is consumed and a `LocalResponse` (`async` `LocalResponse`) is returned.

Note that `LocalRequest` implements `Clone`. As such, if the same request needs to be dispatched multiple times, the request can first be cloned and then dispatched: `request.clone().dispatch()`.

### Example

For a usage example, see `LocalRequest` or `async` `LocalRequest`.

## `LocalResponse`

The result of `dispatch()`ing a `LocalRequest` is a `LocalResponse` (`async` `LocalResponse`). A `LocalResponse` can be queried for response metadata, including the HTTP status, headers, and cookies, via its getter methods. Additionally, the body of the response can be read into a string (`into_string()` or `async` `into_string()`) or a vector (`into_bytes()` or `async` `into_bytes()`).

The response body can also be read directly using standard I/O mechanisms: the `blocking` `LocalResponse` implements `Read` while the `async` `LocalResponse` implements `AsyncRead`.

For a usage example, see `LocalResponse` or `async` `LocalResponse`.

* Asynchronous local dispatching of requests.
* Blocking local dispatching of requests.

# Module rocket::catcher

Types and traits for error catchers and their handlers and return types.

* An error catching route.
* Helper trait to make a `Catcher`’s `Box<dyn Handler>` `Clone`.
* Trait implemented by `Catcher` error handlers.
* Type alias for the return type of a `Catcher`’s `Handler::handle()`.
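Since the section above only lists the catcher types, here is a minimal sketch of defining a catcher and registering it with the `catchers!` macro from the Macros index; the `not_found` handler and its message are hypothetical.

```
use rocket::Request;

// A catcher runs when a request results in the given status code;
// here, 404. Catchers are registered rather than mounted.
#[rocket::catch(404)]
fn not_found(req: &Request) -> String {
    format!("Sorry, '{}' is not a known path.", req.uri())
}

#[rocket::launch]
fn rocket() -> _ {
    rocket::build().register("/", rocket::catchers![not_found])
}
```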
# Module rocket::config

Server and application configuration. See the configuration guide for full details.

### Extracting Configuration Parameters

Rocket exposes the active `Figment` via `Rocket::figment()`. Any value that implements `Deserialize` can be extracted from the figment:

```
use rocket::fairing::AdHoc;

#[derive(serde::Deserialize)]
struct AppConfig {
    id: Option<usize>,
    port: u16,
}

#[rocket::launch]
fn rocket() -> _ {
    rocket::build().attach(AdHoc::config::<AppConfig>())
}
```

### Workers

The `workers` parameter sets the number of threads used for parallel task execution; there is no limit to the number of concurrent tasks. Due to a limitation in upstream async executors, unlike other values, the `workers` configuration value cannot be reconfigured or be configured from sources other than those provided by `Config::figment()`. In other words, only the values set by the `ROCKET_WORKERS` environment variable or in the `workers` property of `Rocket.toml` will be considered - all other `workers` values are ignored.

### Custom Providers

A custom provider can be set via `rocket::custom()`, which replaces calls to `rocket::build()`. The configured provider can be built on top of `Config::figment()`, `Config::default()`, both, or neither. The Figment documentation has full details on instantiating existing providers like `Toml` and `Env` as well as creating custom providers for more complex cases.

Configuration values can be overridden at runtime by merging figment’s tuple providers with Rocket’s default provider:

```
use rocket::data::{Limits, ToByteUnit};

#[launch]
fn rocket() -> _ {
    let figment = rocket::Config::figment()
        .merge(("port", 1111))
        .merge(("limits", Limits::new().limit("json", 2.mebibytes())));

    rocket::custom(figment).mount("/", routes![/* .. */])
}
```

An application that wants to use Rocket’s defaults for `Config`, but not its configuration sources, while allowing the application to be configured via an `App.toml` file that uses top-level keys as profiles (`.nested()`) and `APP_` environment variables as global overrides (`.global()`), and `APP_PROFILE` to configure the selected profile, can be structured as follows:

```
use serde::{Serialize, Deserialize};
use figment::{Figment, Profile, providers::{Format, Toml, Serialized, Env}};
use rocket::fairing::AdHoc;

#[derive(Debug, Deserialize, Serialize)]
struct Config {
    app_value: usize,
    /* and so on.. */
}

impl Default for Config {
    fn default() -> Config {
        Config { app_value: 3 }
    }
}

#[launch]
fn rocket() -> _ {
    let figment = Figment::from(rocket::Config::default())
        .merge(Serialized::defaults(Config::default()))
        .merge(Toml::file("App.toml").nested())
        .merge(Env::prefixed("APP_").global())
        .select(Profile::from_env_or("APP_PROFILE", "default"));

    rocket::custom(figment)
        .mount("/", routes![/* .. */])
        .attach(AdHoc::config::<Config>())
}
```

* Rocket server configuration.
* An identifier (or `None`) to send as the `Server` header.
* MutualTls `mtls` Mutual TLS configuration.
* SecretKey `secrets` A cryptographically secure secret key.
* Graceful shutdown configuration.
* TlsConfig `tls` TLS configuration: certificate chain, key, and ciphersuites.
* CipherSuite `tls` A supported TLS cipher suite.
* Defines the maximum level of log messages to show.
* Sig `unix` A Unix signal for triggering graceful shutdown.

# Module rocket::data

Types and traits for handling incoming body data.

* A unit of bytes with saturating `const` constructors and arithmetic.
* Encapsulates a value capped to a data limit.
* Type representing the body data of a request.
* Raw data stream of a request body.
* Mapping from (hierarchical) data types to size limits.
* Number of bytes read/written and whether that consisted of the entire stream.
* Trait implemented by data guards to derive a value from request body data.
* Extension trait for conversion from integer types to `ByteUnit`.
* Type alias for the `Outcome` of `FromData`.
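To make the `Data` and `ToByteUnit` items above concrete, here is a minimal sketch of a raw data route; the `/upload` path is hypothetical and the 128 KiB limit is arbitrary.

```
use rocket::data::{Data, ToByteUnit};

// Opens the raw body with an explicit limit, guarding against
// unbounded reads. The returned capped string records whether
// the full stream fit within the limit.
#[rocket::post("/upload", data = "<data>")]
async fn upload(data: Data<'_>) -> std::io::Result<String> {
    let body = data.open(128.kibibytes()).into_string().await?;
    Ok(format!("read {} bytes (complete: {})", body.len(), body.is_complete()))
}
```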
# Module rocket::error

Types representing various errors that can occur in a Rocket application.

* `Error`: An error that occurs during launch.
* `ErrorKind`: The kind of error that occurred.

# Module rocket::fairing

Fairings: callbacks at launch, liftoff, request, and response time.

Fairings allow for structured interposition at various points in the application lifetime. Fairings can be seen as a restricted form of “middleware”. A fairing is an arbitrary structure with methods representing callbacks that Rocket will run at requested points in a program. You can use fairings to rewrite or record information about requests and responses, or to perform an action once a Rocket application has launched.

To learn more about writing a fairing, see the `Fairing` trait documentation. You can also use `AdHoc` to create a fairing on-the-fly from a closure or function.

### Attaching

You must inform Rocket about fairings that you wish to be active by calling the `Rocket::attach()` method on the application’s `Rocket` instance and passing in the appropriate `Fairing`. For instance, to attach fairings named `req_fairing` and `res_fairing` to a new Rocket instance, you might write:

```
let rocket = rocket::build()
    .attach(req_fairing)
    .attach(res_fairing);
```

Once a fairing is attached, Rocket will execute it at the appropriate time, which varies depending on the fairing implementation. See the `Fairing` trait documentation for more information on the dispatching of fairing methods.

### Ordering

`Fairing`s are executed in the order in which they are attached: the first attached fairing has its callbacks executed before all others. A fairing can be attached any number of times. Except for singleton fairings, all attached instances are polled at runtime. Fairing callbacks may not be commutative; the order in which fairings are attached may be significant. It is thus important to communicate specific fairing functionality clearly.

Furthermore, a `Fairing` should take care to act locally so that the actions of other `Fairing`s are not jeopardized. For instance, unless it is made abundantly clear, a fairing should not rewrite every request.

* An ad-hoc fairing that can be created from a function or closure.
* Information about a `Fairing`.
* A bitset representing the kinds of callbacks a `Fairing` wishes to receive.
* Trait implemented by fairings: Rocket’s structured middleware.
* A type alias for the return `Result` type of `Fairing::on_ignite()`.
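As a companion to the `AdHoc` mention above, here is a minimal sketch of attaching an ad-hoc liftoff fairing built from a closure; the fairing name and printed message are illustrative only.

```
use rocket::fairing::AdHoc;

#[rocket::launch]
fn rocket() -> _ {
    rocket::build()
        // `AdHoc::on_liftoff` builds a fairing from a closure that
        // runs exactly once, after the server has launched.
        .attach(AdHoc::on_liftoff("Liftoff Printer", |_| Box::pin(async move {
            println!("...and we have liftoff!");
        })))
}
```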
Each field name consists of any number of `key`s and at most one `value`. Keys are delimited by `[` or `.`. A `key` consists of indices delimited by `:`. The meaning of a key or index is type-dependent, hence the format is non-self-descriptive. Any structure can be described by this format. The delimiters `.`, `[`, `:`, and `]` have no semantic meaning.

Some examples of valid fields are:

* `=`
* `key=value`
* `key[]=value`
* `.0=value`
* `[0]=value`
* `people[].name=Bob`
* `bob.cousin.names[]=Bob`
* `map[k:1]=Bob`
* `people[bob]nickname=Stan`

See `FromForm` for full details on push-parsing and complete examples.

* Form error types.
* Types for field names, name keys, and key indices.
* Form field validation routines.
* A form context containing received fields, values, and encountered errors.
* An infallible form guard that records form fields and errors during parsing.
* A multipart form field with an underlying data stream.
* A form error, potentially tied to a specific form field.
* A collection of `Error`s.
* A data guard for `FromForm` types.
* A form guard for parsing form types leniently.
* Form guard options.
* A form guard for parsing form types strictly.
* A form field with a string value.
* Trait implemented by form guards: types parseable from HTTP forms.
* Implied form guard (`FromForm`) for parsing a single form field.
* Type alias for `Result` with an error type of `Errors`.

# Module rocket::fs

File serving, file accepting, and file metadata types.

* Generates a crate-relative version of a path.
* Custom handler for serving static files.
* A `Responder` that sends file data with a Content-Type based on its file extension.
* A bitset representing configurable options for `FileServer`.
* A data and form guard that streams data into a temporary file.

# Module rocket::http

Types that map to concepts in HTTP. This module exports types that map to HTTP concepts or to the underlying HTTP library when needed.

* Extension traits implemented by several HTTP types.
* Re-exported hyper HTTP library types.
* Case-preserving, ASCII case-insensitive string types.
* Types for URIs and traits for rendering URI components.
* Macro to automatically generate identity `FromUriParam` trait implementations.
* The HTTP Accept header.
* Representation of HTTP Content-Types.
* Representation of an HTTP cookie.
* Collection of one or more HTTP cookies.
* Simple representation of an HTTP header.
* A collection of headers, mapping a header name to its many ordered values.
* Iterator over all of the cookies in a jar.
* An HTTP media type.
* A `MediaType` with an associated quality value.
* A reference to a string inside of a raw HTTP message.
* An owned version of `RawStr`.
* Structure representing an HTTP status: an integer code.
* Representation of HTTP methods.
* The `SameSite` cookie attribute.
* Enumeration of HTTP status classes.

# Module rocket::outcome

Success, failure, and forward handling.

The `Outcome<S, E, F>` type is similar to the standard library's `Result<S, E>` type. It is an enum with three variants, each containing a value: `Success(S)`, which represents a successful outcome, `Failure(E)`, which represents a failing outcome, and `Forward(F)`, which represents neither a success nor a failure, but instead indicates that processing could not be handled and should instead be forwarded to whatever can handle the processing next.
The `Outcome` type is the return type of many of the core Rocket traits, including `FromRequest`, `FromData`, and `Responder`. It is also the return type of request handlers via the `Response` type.

## Success

A successful `Outcome<S, E, F>`, `Success(S)`, is returned from functions that complete successfully. The meaning of a `Success` outcome depends on the context. For instance, the `Outcome` of the `from_data` method of the `FromData` trait will be matched against the type expected by the user (the handler example originally given here was lost; see the sketch at the end of this module overview).

## Failure

A failure `Outcome<S, E, F>`, `Failure(E)`, is returned when a function fails with some error and no processing can or should continue as a result. The meaning of a failure depends on the context. In Rocket, a `Failure` generally means that a request is taken out of normal processing. The request is then given to the catcher corresponding to some status code. Users can catch failures by requesting a type of `Result<S, E>` or `Option<S>` in request handlers. For example, if a user's handler looks like:

```
#[post("/", data = "<my_val>")]
fn hello(my_val: Result<S, E>) { /* ... */ }
```

## Forward

A forward `Outcome<S, E, F>`, `Forward(F)`, is returned when a function wants to indicate that the requested processing should be forwarded to the next available processor. Again, the exact meaning depends on the context. In Rocket, a `Forward` generally means that a request is forwarded to the next available request handler.

For example, consider a handler with a data guard of type `S`. The `FromData` implementation for the type `S` returns an `Outcome` with a `Success(S)`, `Failure(E)`, or `Forward(F)`. If the `Outcome` is a `Forward`, the handler isn't called. Instead, the incoming request is forwarded, or passed on to, the next matching route, if any. Ultimately, if there are no non-forwarding routes, forwarded requests are handled by the 404 catcher.

Similar to `Failure`s, users can catch `Forward`s by requesting a type of `Option<S>`. If an `Outcome` is a `Forward`, the `Option` will be `None`.

* Unwraps a `Success` or propagates a `Forward` or `Failure`.
* An enum representing success (`Success`), failure (`Failure`), or forwarding (`Forward`).
* Conversion trait from some type into an Outcome type.
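To make the three outcomes concrete, here is a minimal sketch (not from the original docs). `String` implements `FromData`, so its `from_data` outcome decides whether the handler runs; requesting `Option<String>` instead lets a handler observe a `Forward` as `None`:

```
#[macro_use] extern crate rocket;

// `String`'s `FromData` implementation yields `Success(String)` for bodies
// within the configured limit; the handler then runs with the parsed value.
#[post("/submit", data = "<body>")]
fn submit(body: String) -> String {
    format!("received: {}", body)
}

// Requesting an `Option` catches a `Forward`: `None` when the guard forwards.
#[post("/maybe", data = "<body>")]
fn maybe(body: Option<String>) -> &'static str {
    match body {
        Some(_) => "parsed the body",
        None => "the data guard forwarded",
    }
}
```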
# Module rocket::request

Types and traits for request parsing and handling.

* Store and immediately retrieve a vector-like value `$v` (`String` or `Vec<T>`) in `$request`'s local cache using a locally generated anonymous type to avoid type conflicts.
* Store and immediately retrieve a value `$v` in `$request`'s local cache using a locally generated anonymous type to avoid type conflicts.
* The type of an incoming web request.
* Trait to convert a dynamic path segment string to a concrete value.
* Trait implemented by request guards to derive a value from incoming requests.
* Trait to convert many dynamic path segment strings to a concrete value.
* Type alias to retrieve `Flash` messages from a request.
* Type alias for the `Outcome` of a `FromRequest` conversion.

# Module rocket::response

Types and traits to build and send responses.

The return type of a Rocket handler can be any type that implements the `Responder` trait, which means that the type knows how to generate a `Response`. Among other things, this module contains several such types.

## Composing

Many of the built-in `Responder` types chain responses: they take in another `Responder` and add, remove, or change information in the response. In other words, many `Responder` types are built to compose well. As a result, you'll often have types of the form `A<B<C>>` consisting of three `Responder`s `A`, `B`, and `C`. This is normal and encouraged as the type names typically illustrate the intended response.

* Contains types that set the Content-Type of a response.
* Contains types that set the status code and corresponding headers of a response.
* Potentially infinite async `Stream` response types.
* The body of a `Response`.
* Builder for the `Response` type.
* Debug prints the internal value before forwarding to the 500 error catcher.
* Sets a "flash" cookie that will be removed when it is accessed. The analogous request type is `FlashMessage`.
* An empty redirect response to a given URL.
* A response, as returned by types implementing `Responder`.
* Trait implemented by types that generate responses for clients.
* Type alias for the `Result` of a `Responder::respond_to()` call.

# Module rocket::route

Types and traits for routes and their request handlers and return types.

* A request handling route.
* A route URI which is matched against requests.
* Type alias for the return type of a `Route`'s `Handler::handle()`.

# Module rocket::serde

Serialization and deserialization support.

* JSON support is provided by the `Json` type.
* MessagePack support is provided by the `MsgPack` type.
* UUID support is provided by the `UUID` type.

Types implement one or all of `FromParam`, `FromForm`, `FromData`, and `Responder`.

### Deriving `Serialize`, `Deserialize`

For convenience, Rocket re-exports `serde`'s `Serialize` and `Deserialize` traits and derive macros from this module. However, due to Rust's limited support for derive macro re-exports, using the re-exported derive macros requires annotating structures with `#[serde(crate = "rocket::serde")]`:

```
use rocket::serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize)]
#[serde(crate = "rocket::serde")]
struct MyStruct {
    foo: String,
}
```

If you'd like to avoid this extra annotation, you must depend on `serde` directly via your crate's `Cargo.toml`:

```
[dependencies]
serde = { version = "1.0", features = ["derive"] }
```

* json `json` Automatic JSON (de)serialization support.
* msgpack `msgpack` Automatic MessagePack (de)serialization support.
* uuid `uuid` UUID path/query parameter and form value parsing support.
* A data structure that can be deserialized from any data format supported by Serde.
* A data structure that can be deserialized without borrowing any data from the deserializer.
* A data format that can deserialize any data structure supported by Serde.
* A data structure that can be serialized into any data format supported by Serde.
* A data format that can serialize any data structure supported by Serde.

# Module rocket::shield

Security and privacy headers for all outgoing responses.

The `Shield` fairing provides a typed interface for injecting HTTP security and privacy headers into all outgoing responses. It takes some inspiration from helmetjs, a similar piece of middleware for express.

## Supported Headers

The policy-type column was lost in extraction; it is reconstructed here from the policy list later in this section.

| HTTP Header | Description | Policy | Default? |
| --- | --- | --- | --- |
| X-XSS-Protection | Prevents some reflected XSS attacks. | XssFilter | ✗ |
| X-Content-Type-Options | Prevents client sniffing of MIME type. | NoSniff | ✔ |
| X-Frame-Options | Prevents clickjacking. | Frame | ✔ |
| Strict-Transport-Security | Enforces strict use of HTTPS. | Hsts | ? |
| Expect-CT | Enables certificate transparency. | ExpectCt | ✗ |
| Referrer-Policy | Enables referrer policy. | Referrer | ✗ |
| X-DNS-Prefetch-Control | Controls browser DNS prefetching. | Prefetch | ✗ |
| Permissions-Policy | Allows or blocks browser features. | Permission | ✔ |
? If TLS is enabled in a non-debug profile, HSTS is automatically enabled with its default policy and a warning is logged at liftoff.

By default, `Shield::default()` is attached to all instances of `Rocket`. To change the default, including removing all `Shield` headers, attach a configured instance of `Shield`:

```
use rocket::shield::Shield;

#[launch]
fn rocket() -> _ {
    // Remove all `Shield` headers.
    rocket::build().attach(Shield::new())
}
```

Each header can be configured individually. To enable a particular header, call the chainable `enable()` method on an instance of `Shield`, passing in the configured policy type. Similarly, to disable a header, call the chainable `disable()` method on an instance of `Shield`:

```
use time::Duration;

use rocket::http::uri::Uri;
use rocket::shield::{Shield, Referrer, Prefetch, ExpectCt, NoSniff};

let report_uri = uri!("https://report.rocket.rs");
let shield = Shield::default()
    .enable(Referrer::NoReferrer)
    .enable(Prefetch::Off)
    .enable(ExpectCt::ReportAndEnforce(Duration::days(30), report_uri))
    .disable::<NoSniff>();
```

## FAQ

* Which policies should I choose? See the links in the table above for individual header documentation. The helmetjs docs are also a good resource, and OWASP has a collection of references on these headers.
* Do I need any headers beyond what `Shield` enables by default? Maybe! The other headers may protect against many important vulnerabilities. Please consult their documentation and other resources to determine if they are needed for your project.

* The Permissions-Policy header: allows or blocks the use of browser features.
* A `Fairing` that injects browser security and privacy headers into all outgoing responses.
* Specifies the origin(s) allowed to access a browser `Feature` via `Permission`.
* The Expect-CT header: enables reporting and/or enforcement of Certificate Transparency.
* A browser feature that can be enabled or blocked via `Permission`.
* The X-Frame-Options header: helps prevent clickjacking attacks.
* The HTTP Strict-Transport-Security (HSTS) header: enforces strict HTTPS usage.
* The X-Content-Type-Options header: turns off mime sniffing which can prevent certain attacks.
* The X-DNS-Prefetch-Control header: controls browser DNS prefetching.
* The Referrer-Policy header: controls the value set by the browser for the Referer header.
* The X-XSS-Protection header: filters some forms of reflected XSS attacks. Modern browsers do not support or enforce this header.
* Trait implemented by security and privacy policy headers.

# Macro rocket::catchers

Generates a `Vec` of `Catcher`s from a set of catcher paths.

The `catchers!` macro expands a list of catcher paths into a `Vec` of their corresponding `Catcher` structures. For example, given the following catchers:

```
#[catch(404)]
fn not_found() { /* .. */ }

mod inner {
    #[catch(400)]
    pub fn unauthorized() { /* .. */ }
}

#[catch(default)]
fn default_catcher() { /* .. */ }
```
The `catchers!` macro can be used as:

```
let my_catchers = catchers![not_found, inner::unauthorized, default_catcher];
assert_eq!(my_catchers.len(), 3);

let not_found = &my_catchers[0];
assert_eq!(not_found.code, Some(404));

let unauthorized = &my_catchers[1];
assert_eq!(unauthorized.code, Some(400));

let default = &my_catchers[2];
assert_eq!(default.code, None);
```

The grammar for `catchers!` is defined as:

```
catchers := PATH (',' PATH)*
```

# Struct rocket::Catcher

```
pub struct Catcher {
    pub name: Option<Cow<'static, str>>,
    pub base: Origin<'static>,
    pub code: Option<u16>,
    pub handler: Box<dyn Handler>,
}
```

An error catching route.

Catchers are routes that run when errors are produced by the application. They consist of a `Handler` and an optional status code to match against arising errors. Errors arise from the following sources:

* A failing guard.
* A failing responder.
* Routing failure.

Each failure is paired with a status code. Guards and responders indicate the status code themselves via their `Err` return value while a routing failure is always a `404`. Rocket invokes the error handler for the catcher with the error's status code.

# Error Handler Restrictions

Because error handlers are a last resort, they should not fail to produce a response. If an error handler does fail, Rocket invokes its default `500` error catcher. Error handlers cannot forward.

An error arising from a particular request matches a catcher iff:

* It is a default catcher or has a status code matching the error code.
* Its base is a prefix of the normalized/decoded request URI path.

A default catcher is a catcher with no explicit status code: `None`. The catcher's base is provided as the first argument to `Rocket::register()`.

Two catchers are said to collide if there exists an error that matches both catchers. Colliding catchers present a routing ambiguity and are thus disallowed by Rocket. Because catchers can be constructed dynamically, collision checking is done at `ignite` time, after it becomes statically impossible to register any more catchers on an instance of `Rocket`.

# Built-In Default

Rocket provides a built-in default catcher that can handle all errors. It produces HTML or JSON, depending on the value of the `Accept` header. As such, catchers only need to be registered if an error needs to be handled in a custom fashion. The built-in default never conflicts with any user-registered catchers.

Catchers should rarely be constructed or used directly. Instead, they are typically generated via the `catch` attribute, as follows:

```
use rocket::Request;
use rocket::http::Status;

// (reconstructed) the catcher definitions registered below were lost in
// extraction; these bodies are a plausible reconstruction.
#[catch(500)]
fn internal_error() -> &'static str {
    "Whoops! Looks like we messed up."
}

#[catch(404)]
fn not_found(req: &Request) -> String {
    format!("'{}' is not a valid path.", req.uri())
}

#[catch(default)]
fn default(status: Status, req: &Request) -> String {
    format!("{} ({})", status, req.uri())
}

#[launch]
fn rocket() -> _ {
    rocket::build().register("/", catchers![internal_error, not_found, default])
}
```

A function decorated with `#[catch]` may take zero, one, or two arguments. Its type signature must be one of the following, where `R: Responder`:

* `fn() -> R`
* `fn(&Request) -> R`
* `fn(Status, &Request) -> R`

See the `catch` documentation for full details.

`name: Option<Cow<'static, str>>` The name of this catcher, if one was given.

`base: Origin<'static>` The mount point.

`code: Option<u16>` The HTTP status to match against if this catcher is not `default`.

`handler: Box<dyn Handler>` The catcher's associated error handler.

# pub fn new<S, H>(code: S, handler: H) -> Catcher where S: Into<Option<u16>>, H: Handler,

Creates a catcher for the given `code`, or a default catcher if `code` is `None`, using the given error handler. This should only be used when routing manually.
```
use rocket::{Catcher, Request};
use rocket::catcher::BoxFuture;
use rocket::response::Responder;
use rocket::http::Status;

// (reconstructed) `handle_404` was lost in extraction; this body is a
// plausible reconstruction following `handle_default` below.
fn handle_404<'r>(status: Status, req: &'r Request<'_>) -> BoxFuture<'r> {
    let res = (status, format!("404: {}", req.uri()));
    Box::pin(async move { res.respond_to(req) })
}

fn handle_500<'r>(_: Status, req: &'r Request<'_>) -> BoxFuture<'r> {
    Box::pin(async move { "Whoops, we messed up!".respond_to(req) })
}

fn handle_default<'r>(status: Status, req: &'r Request<'_>) -> BoxFuture<'r> {
    let res = (status, format!("{}: {}", status, req.uri()));
    Box::pin(async move { res.respond_to(req) })
}

let not_found_catcher = Catcher::new(404, handle_404);
let internal_server_error_catcher = Catcher::new(500, handle_500);
let default_error_catcher = Catcher::new(None, handle_default);
```

Panics if `code` is not in the HTTP status code error range `[400, 600)`.

The `map_base()` method maps the catcher's base, returning an `Err` if the result is an invalid origin:

```
let catcher = Catcher::new(404, handle_404);
assert_eq!(catcher.base.path(), "/");

let catcher = catcher.map_base(|_| format!("/bar")).unwrap();
assert_eq!(catcher.base.path(), "/bar");

let catcher = catcher.map_base(|base| format!("/foo{}", base)).unwrap();
assert_eq!(catcher.base.path(), "/foo/bar");

let catcher = catcher.map_base(|base| format!("/foo ? {}", base));
assert!(catcher.is_err());
```

### impl Clone for Catcher
### impl Debug for Catcher
### impl Send for Catcher

# Macro rocket::routes

Generates a `Vec` of `Route`s from a set of route paths.

The `routes!` macro expands a list of route paths into a `Vec` of their corresponding `Route` structures. For example, given the following routes:

```
#[get("/")]
fn index() { /* .. */ }

mod person {
    #[post("/hi/<person>")]
    pub fn hello(person: String) { /* .. */ }
}
```

The `routes!` macro can be used as:

```
let my_routes = routes![index, person::hello];
assert_eq!(my_routes.len(), 2);

let index_route = &my_routes[0];
assert_eq!(index_route.method, Method::Get);
assert_eq!(index_route.name.as_ref().unwrap(), "index");
assert_eq!(index_route.uri.path(), "/");

let hello_route = &my_routes[1];
assert_eq!(hello_route.method, Method::Post);
assert_eq!(hello_route.name.as_ref().unwrap(), "hello");
assert_eq!(hello_route.uri.path(), "/hi/<person>");
```

The grammar for `routes!` is defined as:

```
routes := PATH (',' PATH)*
```

# Struct rocket::Route

```
pub struct Route {
    pub name: Option<Cow<'static, str>>,
    pub method: Method,
    pub handler: Box<dyn Handler>,
    pub uri: RouteUri<'static>,
    pub rank: isize,
    pub format: Option<MediaType>,
    /* private fields */
}
```

A request handling route.

A route consists of exactly the information in its fields. While a `Route` can be instantiated directly, doing so should be a rare or nonexistent event. Instead, a Rocket application should use Rocket's `#[route]` series of attributes to generate a `Route`.

```
#[get("/route/<path..>?query", rank = 2, format = "json")]
fn route_name(path: PathBuf) { /* handler procedure */ }

use rocket::http::{Method, MediaType};

let route = routes![route_name].remove(0);
assert_eq!(route.name.unwrap(), "route_name");
assert_eq!(route.method, Method::Get);
assert_eq!(route.uri, "/route/<path..>?query");
assert_eq!(route.rank, 2);
assert_eq!(route.format.unwrap(), MediaType::JSON);
```

Note that the `rank` and `format` attribute parameters are optional. See `#[route]` for details on macro usage. Note also that a route's mounted base becomes part of its URI; see `RouteUri` for details.

A request matches a route iff:

* The route's method matches that of the incoming request.
* The route's format (if any) matches that of the incoming request.
  * If a route specifies a format, it only matches requests for that format.
  * If a route doesn't specify a format, it matches requests for any format.
* A route's `format` matches against the `Content-Type` header in the request when the route's method `supports_payload()` and the `Accept` header otherwise (consistent with `Request::format()`, documented later in this section).
  * Non-specific `Accept` header components (`*`) match anything.
* All static components in the route's path match the corresponding components in the same position in the incoming request.
* All static components in the route's query string are also in the request query string, though in any position. If there is no query in the route, requests with and without queries match.

Rocket routes requests to matching routes.

Two routes are said to collide if there exists a request that matches both routes. Colliding routes present a routing ambiguity and are thus disallowed by Rocket. Because routes can be constructed dynamically, collision checking is done at `ignite` time, after it becomes statically impossible to add any more routes to an instance of `Rocket`.

Note that because query parsing is always lenient – extra and missing query parameters are allowed – queries do not directly impact whether two routes collide.

### Resolving Collisions

Collisions are resolved through ranking. Routes with lower ranks have higher precedence during routing than routes with higher ranks. Thus, routes are attempted in ascending rank order. If a higher precedence route returns an `Outcome` of `Forward`, the next highest precedence route is attempted, and so on, until a route returns `Success` or `Failure`, or there are no more routes to try. When all routes have been attempted, Rocket issues a `404` error, handled by the appropriate `Catcher`.

### Default Ranking

Most collisions are automatically resolved by Rocket's default rank. The default rank prefers static components over dynamic components in both paths and queries: the more static a route's path and query are, the lower its rank and thus the higher its precedence.

There are three "colors" to paths and queries:

* `static` - all components are static
* `partial` - at least one, but not all, components are dynamic
* `wild` - all components are dynamic

Static paths carry more weight than static queries. The same is true for partial and wild paths. This results in the following default ranking table:

| path | query | rank |
| --- | --- | --- |
| static | static | -12 |
| static | partial | -11 |
| static | wild | -10 |
| static | none | -9 |
| partial | static | -8 |
| partial | partial | -7 |
| partial | wild | -6 |
| partial | none | -5 |
| wild | static | -4 |
| wild | partial | -3 |
| wild | wild | -2 |
| wild | none | -1 |

Recall that lower ranks have higher precedence.

```
use rocket::Route;
use rocket::http::Method;

macro_rules! assert_rank {
    ($($uri:expr => $rank:expr,)*) => {$(
        let route = Route::new(Method::Get, $uri, rocket::route::dummy_handler);
        assert_eq!(route.rank, $rank);
    )*}
}

assert_rank! {
{ "/?foo" => -12, // static path, static query "/foo/bar?a=b&bob" => -12, // static path, static query "/?a=b&bob" => -12, // static path, static query "/?a&<zoo..>" => -11, // static path, partial query "/foo?a&<zoo..>" => -11, // static path, partial query "/?a&<zoo>" => -11, // static path, partial query "/?<zoo..>" => -10, // static path, wild query "/foo?<zoo..>" => -10, // static path, wild query "/foo?<a>&<b>" => -10, // static path, wild query "/" => -9, // static path, no query "/foo/bar" => -9, // static path, no query "/a/<b>?foo" => -8, // partial path, static query "/a/<b..>?foo" => -8, // partial path, static query "/<a>/b?foo" => -8, // partial path, static query "/a/<b>?<b>&c" => -7, // partial path, partial query "/a/<b..>?a&<c..>" => -7, // partial path, partial query "/a/<b>?<c..>" => -6, // partial path, wild query "/a/<b..>?<c>&<d>" => -6, // partial path, wild query "/a/<b..>?<c>" => -6, // partial path, wild query "/a/<b>" => -5, // partial path, no query "/<a>/b" => -5, // partial path, no query "/a/<b..>" => -5, // partial path, no query "/<b>/<c>?foo&bar" => -4, // wild path, static query "/<a>/<b..>?foo" => -4, // wild path, static query "/<b..>?cat" => -4, // wild path, static query "/<b>/<c>?<foo>&bar" => -3, // wild path, partial query "/<a>/<b..>?a&<b..>" => -3, // wild path, partial query "/<b..>?cat&<dog>" => -3, // wild path, partial query "/<b>/<c>?<foo>" => -2, // wild path, wild query "/<a>/<b..>?<b..>" => -2, // wild path, wild query "/<b..>?<c>&<dog>" => -2, // wild path, wild query "/<b>/<c>" => -1, // wild path, no query "/<a>/<b..>" => -1, // wild path, no query "/<b..>" => -1, // wild path, no query } ``` The name of this route, if one was given. `method: Method` The method this route matches against. The function that should be called when the route matches. ``` uri: RouteUri<'static> ``` The route URI. `rank: isize` The rank of this route. Lower ranks have higher priorities. ``` format: Option<MediaType> ``` The media type this route matches against, if any. b'source pub fn new<H: Handler>(method: Method, uri: &str, handler: H) -> Route' # pub fn new<H: Handler>(method: Method, uri: &str, handler: H) -> Route Creates a new route with the given method, path, and handler with a base of `/` and a computed default rank. // this is a rank 1 route matching requests to `GET /` let index = Route::new(Method::Get, "/", handler); assert_eq!(index.rank, -9); assert_eq!(index.method, Method::Get); assert_eq!(index.uri, "/"); ``` b"source pub fn ranked<H, R>(rank: R, method: Method, uri: &str, handler: H) -> Routewhere H: Handler + 'static, R: Into<Option<isize>>," # pub fn ranked<H, R>(rank: R, method: Method, uri: &str, handler: H) -> Routewhere H: Handler + 'static, R: Into<Option<isize>>, Creates a new route with the given rank, method, path, and handler with a base of `/` . If `rank` is `None` , the computed default rank is used. 
The `map_base()` method maps the route's base URI:

```
use rocket::Route;
use rocket::http::{Method, uri::Origin};

let index = Route::new(Method::Get, "/foo/bar", handler);
assert_eq!(index.uri.base(), "/");
assert_eq!(index.uri.unmounted_origin.path(), "/foo/bar");
assert_eq!(index.uri.path(), "/foo/bar");

let index = index.map_base(|base| format!("{}{}", "/boo", base)).unwrap();
assert_eq!(index.uri.base(), "/boo");
assert_eq!(index.uri.unmounted_origin.path(), "/foo/bar");
assert_eq!(index.uri.path(), "/boo/foo/bar");
```

### impl Debug for Route
### impl Display for Route
### impl<'r> FromRequest<'r> for &'r Route
### impl Send for Route
### impl Unpin for Route
### impl !UnwindSafe for Route

# Macro rocket::uri

```
uri!() { /* proc-macro */ }
```

Type-safe, encoding-safe route and non-route URI generation.

The `uri!` macro creates type-safe, URL-safe URIs given a route and concrete parameters for its URI or a URI string literal.

## String Literal Parsing

Given a string literal as input, `uri!` parses the string using `Uri::parse_any()` and emits a `'static`, `const` value whose type is one of `Asterisk`, `Origin`, `Authority`, `Absolute`, or `Reference`, reflecting the parsed value. If the type allows normalization, the value is normalized before being emitted. Parse errors are caught and emitted at compile-time.

The grammar for this variant of `uri!` is:

```
uri := STRING

STRING := an uncooked string literal, as defined by Rust (example: `"/hi"`)
```

`STRING` is expected to be an undecoded URI of any variant.

### Examples

```
use rocket::http::uri::Absolute;

// Values returned from `uri!` are `const` and `'static`.
const ROOT_CONST: Absolute<'static> = uri!("https://rocket.rs");
static ROOT_STATIC: Absolute<'static> = uri!("https://rocket.rs?root");

// Any variant can be parsed, but beware of ambiguities.
let asterisk = uri!("*");
let origin = uri!("/foo/bar/baz");
let authority = uri!("rocket.rs:443");
let absolute = uri!("https://rocket.rs:443");
let reference = uri!("foo?bar#baz");
```

## Type-Safe Route URIs

A URI to a route named `foo` is generated using `uri!(foo(v1, v2, v3))` or `uri!(foo(a = v1, b = v2, c = v3))`, where `v1`, `v2`, `v3` are the values to fill in for route parameters named `a`, `b`, and `c`. If the named parameter syntax is used (`a = v1`, etc.), parameters can appear in any order.

More concretely, for the route `person` defined below:

```
// (reconstructed) the route referenced by the examples that follow
#[get("/person/<name>?<age>")]
fn person(name: &str, age: Option<u8>) { }
```

…a URI can be created as follows:

```
// with unnamed parameters, in route path declaration order
let mike = uri!(person("Mike Smith", Some(28)));
assert_eq!(mike.to_string(), "/person/Mike%20Smith?age=28");

// with named parameters, order irrelevant
let mike = uri!(person(name = "Mike", age = Some(28)));
let mike = uri!(person(age = Some(28), name = "Mike"));
assert_eq!(mike.to_string(), "/person/Mike?age=28");

// with unnamed values, explicitly `None`.
let mike = uri!(person("Mike", None::<u8>));
assert_eq!(mike.to_string(), "/person/Mike");

// with named values, explicitly `None`
let option: Option<u8> = None;
let mike = uri!(person(name = "Mike", age = None::<u8>));
assert_eq!(mike.to_string(), "/person/Mike");
```

For optional query parameters, those of type `Option` or `Result`, a `_` can be used in-place of `None` or `Err`:

```
// with named values ignored
let mike = uri!(person(name = "Mike", age = _));
assert_eq!(mike.to_string(), "/person/Mike");

// with named values ignored
let mike = uri!(person(age = _, name = "Mike"));
assert_eq!(mike.to_string(), "/person/Mike");

// with unnamed values ignored
let mike = uri!(person("Mike", _));
assert_eq!(mike.to_string(), "/person/Mike");
```

It is a type error to attempt to ignore query parameters that are neither `Option` nor `Result`. Path parameters can never be ignored. A path parameter of type `Option<T>` or `Result<T, E>` must be filled by a value that can target a type of `T`:

```
#[get("/person/<name>")]
fn maybe(name: Option<&str>) { }

let bob1 = uri!(maybe(name = "Bob"));
let bob2 = uri!(maybe("Bob Smith"));
assert_eq!(bob1.to_string(), "/person/Bob");
assert_eq!(bob2.to_string(), "/person/Bob%20Smith");

#[get("/person/<age>")]
fn ok(age: Result<u8, &str>) { }

let kid1 = uri!(ok(age = 10));
let kid2 = uri!(ok(12));
assert_eq!(kid1.to_string(), "/person/10");
assert_eq!(kid2.to_string(), "/person/12");
```

Values for ignored route segments can be of any type as long as the type implements `UriDisplay` for the appropriate URI part. If a route URI contains ignored segments, the route URI invocation cannot use named arguments.

```
#[get("/ignore/<_>/<other>")]
fn ignore(other: &str) { }

let bob = uri!(ignore("Bob Hope", "hello"));
let life = uri!(ignore(42, "cat&dog"));
assert_eq!(bob.to_string(), "/ignore/Bob%20Hope/hello");
assert_eq!(life.to_string(), "/ignore/42/cat%26dog");
```

### Prefixes and Suffixes

A route URI can be optionally prefixed and/or suffixed by a URI generated from a string literal or an arbitrary expression. This takes the form `uri!(prefix, foo(v1, v2, v3), suffix)`, where both `prefix` and `suffix` are optional, and either `prefix` or `suffix` may be `_` to specify the value as empty.

```
// with a specific mount-point of `/api`.
let bob = uri!("/api", person("Bob", Some(28)));
assert_eq!(bob.to_string(), "/api/person/Bob?age=28");

// with an absolute URI as a prefix
let bob = uri!("https://rocket.rs", person("Bob", Some(28)));
assert_eq!(bob.to_string(), "https://rocket.rs/person/Bob?age=28");

// with another absolute URI as a prefix
let bob = uri!("https://rocket.rs/foo", person("Bob", Some(28)));
assert_eq!(bob.to_string(), "https://rocket.rs/foo/person/Bob?age=28");

// with an expression as a prefix
let host = uri!("http://bob.me");
let bob = uri!(host, person("Bob", Some(28)));
assert_eq!(bob.to_string(), "http://bob.me/person/Bob?age=28");

// with a suffix but no prefix
let bob = uri!(_, person("Bob", Some(28)), "#baz");
assert_eq!(bob.to_string(), "/person/Bob?age=28#baz");

// with both a prefix and suffix
let bob = uri!("https://rocket.rs/", person("Bob", Some(28)), "#woo");
assert_eq!(bob.to_string(), "https://rocket.rs/person/Bob?age=28#woo");

// with an expression suffix. if the route URI already has a query, the
// query part is ignored. otherwise it is added.
let suffix = uri!("?woo#bam");
let bob = uri!(_, person("Bob", Some(28)), suffix.clone());
assert_eq!(bob.to_string(), "/person/Bob?age=28#bam");

let bob = uri!(_, person("Bob", None::<u8>), suffix.clone());
assert_eq!(bob.to_string(), "/person/Bob?woo#bam");
```

### Grammar

The grammar for this variant of the `uri!` macro is:

```
uri := (prefix ',')? route
     | prefix ',' route ',' suffix

prefix := STRING | expr ; `Origin` or `Absolute`
suffix := STRING | expr ; `Reference` or `Absolute`

route := PATH '(' (named | unnamed) ')'
named := IDENT = expr (',' named)? ','?
unnamed := expr (',' unnamed)? ','?

expr := EXPR | '_'

EXPR := a valid Rust expression (examples: `foo()`, `12`, `"hey"`)
IDENT := a valid Rust identifier (examples: `name`, `age`)
STRING := an uncooked string literal, as defined by Rust (example: `"hi"`)
PATH := a path, as defined by Rust (examples: `route`, `my_mod::route`)
```

### Dynamic Semantics

The returned value is that of the prefix (minus any query part) concatenated with the route URI concatenated with the query (if the route has no query part) and fragment parts of the suffix. The route URI is generated by interpolating the declared route URI with the URL-safe version of the route values in `uri!()`. The generated URI is guaranteed to be URI-safe.

Each route value is rendered in its appropriate place in the URI using the `UriDisplay` implementation for the value's type. The `UriDisplay` implementation ensures that the rendered value is URL-safe.

A `uri!()` invocation allocates at most once.

### Static Semantics

The `uri!` macro returns one of `Origin`, `Absolute`, or `Reference`, depending on the types of the prefix and suffix, if any. The table below specifies all combinations (the original cell contents were lost in extraction; they are reconstructed here from the grammar and dynamic semantics above):

| Prefix | Suffix | Output |
| --- | --- | --- |
| None | None | `Origin` |
| None | `Reference` | `Reference` |
| None | `Absolute` | `Reference` |
| `Origin` | None | `Origin` |
| `Origin` | `Reference` | `Reference` |
| `Origin` | `Absolute` | `Reference` |
| `Absolute` | None | `Absolute` |
| `Absolute` | `Reference` | `Reference` |
| `Absolute` | `Absolute` | `Reference` |

A `uri!` invocation only typechecks if the type of every route URI value in the invocation matches the type declared for the parameter in the given route, after conversion with `FromUriParam`, or if a value is ignored using `_` and the corresponding route type implements `Ignorable`.

# Conversion

The `FromUriParam` trait is used to typecheck and perform a conversion for each value passed to `uri!`. If a `FromUriParam<P, S> for T` implementation exists for a type `T` for a URI part `P`, then a value of type `S` can be used in the `uri!` macro for a route URI parameter declared with a type of `T` in part `P`. For example, the following implementation, provided by Rocket, allows an `&str` to be used in a `uri!` invocation for route URI parameters declared as `String`:

```
impl<'a, P: Part> FromUriParam<P, &'a str> for String { .. }
```

# Ignorables

Query parameters can be ignored using `_` in place of an expression. The corresponding type in the route URI must implement `Ignorable`. Ignored parameters are not interpolated into the resulting `Origin`. Path parameters are not ignorable.

# Struct rocket::Config

```
pub struct Config { /* fields, documented below */ }
```

Rocket server configuration. See the module level docs as well as the configuration guide for further details.

All configuration values have a default, documented in the fields section below. `Config::debug_default()` returns the default values for the debug profile while `Config::release_default()` returns the default values for the release profile. The `Config::default()` method automatically selects the appropriate of the two based on the selected profile.
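A minimal sketch (not from the original docs) of starting from these defaults and tweaking individual fields before launch; the chosen values are arbitrary:

```
use rocket::Config;

// Start from the compilation-profile default and override a few fields.
let mut config = Config::default();
config.port = 8080;
config.keep_alive = 10;

// `Config` is a figment `Provider`, so it can seed a custom Rocket instance.
let rocket = rocket::custom(config);
```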
With the exception of `log_level`, which is `normal` in `debug` and `critical` in `release`, and `secret_key`, which is regenerated from a random value if not set in "debug" mode only, all default values are identical in all profiles.

## Provider Details

`Config` is a Figment `Provider` with the following characteristics:

* Profile The profile is set to the value of the `profile` field.
* Metadata This provider is named `Rocket Config`. It does not specify a `Source` and uses default interpolation.
* Data The data emitted by this provider are the keys and values corresponding to the fields and values of the structure. The dictionary is emitted to the "default" meta-profile.

Note that these behaviors differ from those of `Config::figment()`.

`profile: Profile` The selected profile. (default: `debug` in debug / `release` in release)

Note: This field is never serialized nor deserialized. When a `Config` is merged into a `Figment` as a `Provider`, this profile is selected on the `Figment`. When a `Config` is extracted, this field is set to the extracting Figment's selected `Profile`.

`address: IpAddr` IP address to serve on. (default: `127.0.0.1`)

`port: u16` Port to serve on. (default: `8000`)

`workers: usize` Number of threads to use for executing futures. (default: `num_cores`)

Note: Rocket only reads this value from sources in the default provider.

`max_blocking: usize` Limit on threads to start for blocking tasks. (default: `512`)

`ident: Ident` How, if at all, to identify the server via the `Server` header. (default: `"Rocket"`)

`ip_header: Option<Uncased<'static>>` The name of a header, whose value is typically set by an intermediary server or proxy, which contains the real IP address of the connecting client. Used internally and by `Request::client_ip()` and `Request::real_ip()`.

To disable using any header for this purpose, set this value to `false`. Deserialization semantics are identical to those of `Ident` except that the value must syntactically be a valid HTTP header name. (default: `"X-Real-IP"`)

`limits: Limits` Streaming read size limits. (default: `Limits::default()`)

`temp_dir: RelativePathBuf` Directory to store temporary files in. (default: `std::env::temp_dir()`)

`keep_alive: u32` Keep-alive timeout in seconds; disabled when `0`. (default: `5`)

`tls: Option<TlsConfig>` `tls` only. The TLS configuration, if any. (default: `None`)

`secret_key: SecretKey` `secrets` only. The secret key for signing and encrypting. (default: `0`)

Note: This field always serializes as a 256-bit array of `0`s to aid in preventing leakage of the secret key.

`shutdown: Shutdown` Graceful shutdown configuration. (default: `Shutdown::default()`)

`log_level: LogLevel` Max level to log. (default: `normal` in debug / `critical` in release)

`cli_colors: bool` Whether to use colors and emoji when logging. (default: `true`)

# pub fn debug_default() -> Config

Returns the default configuration for the `debug` profile, regardless of the Rust compilation profile:

```
let config = Config::debug_default();
```

# pub fn release_default() -> Config

Returns the default configuration for the `release` profile, regardless of the Rust compilation profile:

```
let config = Config::release_default();
```
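A small sketch (not from the original docs) of the profile-dependent defaults described above:

```
use rocket::Config;
use rocket::config::LogLevel;

// The two profile defaults differ in `log_level` (and in `secret_key` handling).
assert_eq!(Config::debug_default().log_level, LogLevel::Normal);
assert_eq!(Config::release_default().log_level, LogLevel::Critical);
```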
# pub fn figment() -> Figment

Returns the default provider figment used by `rocket::build()`.

The default figment reads from the following sources, in ascending priority order:

* `Config::default()` (see defaults)
* `Rocket.toml` or filename in `ROCKET_CONFIG` environment variable
* `ROCKET_` prefixed environment variables

The profile selected is the value set in the `ROCKET_PROFILE` environment variable. If it is not set, it defaults to `debug` when compiled in debug mode and `release` when compiled in release mode.

```
use rocket::Config;
use serde::Deserialize;

#[derive(Deserialize)]
struct MyConfig {
    app_key: String,
}

let my_config = Config::figment().extract::<MyConfig>();
```

# pub fn try_from<T: Provider>(provider: T) -> Result<Self>

Attempts to extract a `Config` from `provider`, returning the result.

```
use rocket::Config;
use figment::providers::{Format, Toml, Env};

// Use Rocket's default `Figment`, but allow values from `MyApp.toml`
// and `MY_APP_` prefixed environment variables to supersede its values.
let figment = Config::figment()
    .merge(Toml::file("MyApp.toml").nested())
    .merge(Env::prefixed("MY_APP_"));

let config = Config::try_from(figment);
```

# pub fn from<T: Provider>(provider: T) -> Self

Extract a `Config` from `provider`, panicking if extraction fails.

If extraction fails, prints an error message indicating the failure and panics. For a version that doesn't panic, use `Config::try_from()`.

```
use rocket::Config;
use figment::providers::{Format, Toml, Env};

// Use Rocket's default `Figment`, but allow values from `MyApp.toml`
// and `MY_APP_` prefixed environment variables to supersede its values.
let figment = Config::figment()
    .merge(Toml::file("MyApp.toml").nested())
    .merge(Env::prefixed("MY_APP_"));

let config = Config::from(figment);
```

# pub fn tls_enabled(&self) -> bool

Returns `true` if TLS is enabled.

TLS is enabled when the `tls` feature is enabled and TLS has been configured with at least one ciphersuite. Note that without changing defaults, all supported ciphersuites are enabled in the recommended configuration.

```
let config = rocket::Config::default();
if config.tls_enabled() {
    println!("TLS is enabled!");
} else {
    println!("TLS is disabled.");
}
```

# pub fn mtls_enabled(&self) -> bool

Returns `true` if mTLS is enabled.

mTLS is enabled when TLS is enabled (`Config::tls_enabled()`) and the `mtls` feature is enabled and mTLS has been configured with a CA certificate chain.

```
let config = rocket::Config::default();
if config.mtls_enabled() {
    println!("mTLS is enabled!");
} else {
    println!("mTLS is disabled.");
}
```

Associated constants for default profiles.

# pub const DEBUG_PROFILE: Profile

The default debug profile: `debug`.

# pub const RELEASE_PROFILE: Profile

The default release profile: `release`.

# pub const DEFAULT_PROFILE: Profile = Self::DEBUG_PROFILE

The default profile: "debug" on `debug`, "release" on `release`.

Associated constants for stringy versions of configuration parameters.

# pub const ADDRESS: &'static str = "address"

The stringy parameter name for setting/extracting `Config::address`.

# pub const PORT: &'static str = "port"

The stringy parameter name for setting/extracting `Config::port`.
# pub const WORKERS: &'static str = "workers"

The stringy parameter name for setting/extracting `Config::workers`.

# pub const MAX_BLOCKING: &'static str = "max_blocking"

The stringy parameter name for setting/extracting `Config::max_blocking`.

# pub const KEEP_ALIVE: &'static str = "keep_alive"

The stringy parameter name for setting/extracting `Config::keep_alive`.

# pub const LIMITS: &'static str = "limits"

The stringy parameter name for setting/extracting `Config::limits`.

# pub const TLS: &'static str = "tls"

The stringy parameter name for setting/extracting `Config::tls`.

# pub const SECRET_KEY: &'static str = "secret_key"

The stringy parameter name for setting/extracting `Config::secret_key`.

# pub const TEMP_DIR: &'static str = "temp_dir"

The stringy parameter name for setting/extracting `Config::temp_dir`.

# pub const LOG_LEVEL: &'static str = "log_level"

The stringy parameter name for setting/extracting `Config::log_level`.

# pub const SHUTDOWN: &'static str = "shutdown"

The stringy parameter name for setting/extracting `Config::shutdown`.

# pub const CLI_COLORS: &'static str = "cli_colors"

The stringy parameter name for setting/extracting `Config::cli_colors`.

### impl Debug for Config

### impl Default for Config

# fn default() -> Config

Returns the default configuration based on the Rust compilation profile. This is `Config::debug_default()` in `debug` and `Config::release_default()` in `release`.

```
let config = Config::default();
```

### impl<'de> Deserialize<'de> for Config

# fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,

### impl<'r> FromRequest<'r> for &'r Config

# fn eq(&self, other: &Config) -> bool

### impl Provider for Config

# fn metadata(&self) -> Metadata

`Metadata` for this provider, identifying itself and its configuration sources.

### impl Serialize for Config
### impl StructuralPartialEq for Config
### impl RefUnwindSafe for Config
### impl Unpin for Config
### impl UnwindSafe for Config

# Struct rocket::Data

Type representing the body data of a request.

This type is the only means by which the body of a request can be retrieved. This type is not usually used directly. Instead, data guards (types that implement `FromData`) are created indirectly via code generation by specifying the `data = "<var>"` route parameter as follows:

```
#[post("/submit", data = "<var>")]
fn submit(var: DataGuard) { /* ... */ }
```

Above, `DataGuard` can be any type that implements `FromData`. Note that `Data` itself implements `FromData`.

## Reading Data

Data may be read from a `Data` object by calling either the `open()` or `peek()` methods.

The `open` method consumes the `Data` object and returns the raw data stream. The `Data` object is consumed for safety reasons: consuming the object ensures that holding a `Data` object means that all of the data is available for reading.

The `peek` method returns a slice containing at most 512 bytes of buffered body data. This enables partially or fully reading from a `Data` object without consuming the `Data` object.
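A short sketch (not from the original docs) of the `open()` path just described, reading at most 128 KiB of the body into a `String`; the limit and helper name are arbitrary:

```
use rocket::data::{Data, ToByteUnit};

// Consume the `Data` and read the capped stream into a string.
async fn read_body(data: Data<'_>) -> std::io::Result<String> {
    let body = data.open(128.kibibytes()).into_string().await?;
    Ok(body.into_inner())
}
```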
# pub fn open(self, limit: ByteUnit) -> DataStream<'r>

Returns the raw data stream, limited to `limit` bytes.

The stream contains all of the data in the body of the request, including that in the `peek` buffer. The method consumes the `Data` instance. This ensures that a `Data` type always represents all of the data in a request.

```
use rocket::data::{Data, ToByteUnit};

fn handler(data: Data<'_>) {
    let stream = data.open(2.mebibytes());
}
```

# pub async fn peek(&mut self, num: usize) -> &[u8]

Retrieve at most `num` bytes from the `peek` buffer without consuming `self`.

The peek buffer contains at most 512 bytes of the body of the request. The actual size of the returned buffer is the `min` of the request's body, `num`, and `512`. The `peek_complete` method can be used to determine if this buffer contains all of the data in the body of the request.

In a data guard:

```
use rocket::request::{self, Request, FromRequest};
use rocket::data::{Data, FromData, Outcome};

struct MyType;
struct MyError;

#[rocket::async_trait]
impl<'r> FromData<'r> for MyType {
    type Error = MyError;

    async fn from_data(r: &'r Request<'_>, mut data: Data<'r>) -> Outcome<'r, Self> {
        if data.peek(2).await != b"hi" {
            return Outcome::Forward(data);
        }

        /* .. */
    }
}
```

In a fairing:

```
use rocket::{Rocket, Request, Data, Response};
use rocket::fairing::{Fairing, Info, Kind};

struct MyType;

#[rocket::async_trait]
impl Fairing for MyType {
    fn info(&self) -> Info {
        Info { name: "Data Peeker", kind: Kind::Request }
    }

    async fn on_request(&self, req: &mut Request<'_>, data: &mut Data<'_>) {
        if data.peek(2).await == b"hi" {
            /* do something; body data starts with `"hi"` */
        }
    }
}
```

# pub fn peek_complete(&self) -> bool

Returns true if the `peek` buffer contains all of the data in the body of the request. Returns `false` if it does not or if it is not known if it does.

```
use rocket::data::Data;

async fn handler(mut data: Data<'_>) {
    if data.peek_complete() {
        println!("All of the data: {:?}", data.peek(512).await);
    }
}
```

### impl<'r> FromData<'r> for Data<'r>

Validates, parses, and converts an instance of `Self` from the incoming request body data.

# fn into_outcome(self, status: Status) -> Outcome<'r, S, E>
# fn or_forward(self, data: Data<'r>) -> Outcome<'r, S, E>

### impl<'r> !RefUnwindSafe for Data<'r>
### impl<'r> Send for Data<'r>
### impl<'r> Sync for Data<'r>
### impl<'r> Unpin for Data<'r>
### impl<'r> !UnwindSafe for Data<'r>

# Struct rocket::Error

An error that occurs during launch.

An `Error` is returned by `launch()` when launching an application fails or, more rarely, when the runtime fails after launching.

## Panics

A value of this type panics if it is dropped without first being inspected. An inspection occurs when any method is called. For instance, if `println!("Error: {}", e)` is called, where `e: Error`, the `Display::fmt` method being called by `println!` results in `e` being marked as inspected; a subsequent `drop` of the value will not result in a panic. The following snippet illustrates this:

```
if let Err(error) = rocket::build().launch().await {
    // This println "inspects" the error.
println!("Launch failed! Error: {}", error); // This call to drop (explicit here for demonstration) will do nothing. drop(error); } ``` When a value of this type panics, the corresponding error message is pretty printed to the console. The following illustrates this: ``` let error = rocket::build().launch().await; // This call to drop (explicit here for demonstration) will result in // `error` being pretty-printed to the console along with a `panic!`. drop(error); ``` An `Error` value should usually be allowed to `drop` without inspection. There are at least two exceptions: If you are writing a library or high-level application on-top of Rocket, you likely want to inspect the value before it drops to avoid a Rocket-specific `panic!` . This typically means simply printing the value. * You want to display your own error messages. b'source&#167; impl Error' b'source pub fn kind(&self) -> &ErrorKind' # pub fn kind(&self) -> &ErrorKind Retrieve the `kind` of the launch error. ``` use rocket::error::ErrorKind; if let Err(error) = rocket::build().launch().await { match error.kind() { ErrorKind::Io(e) => println!("found an i/o launch error: {}", e), e => println!("something else happened: {}", e) } } ``` b'source&#167; impl Display for Error' ### impl Display for Error b'source&#167; impl Error for Error' b"1.30.0 &#183; source&#167; fn source(&self) -> Option<&(dyn Error + 'static)>" # fn source(&self) -> Option<&(dyn Error + 'static)b'1.0.0 &#183; source&#167; fn description(&self) -> &str' # fn description(&self) -> &str b'1.0.0 &#183; source&#167; fn cause(&self) -> Option<&dyn Error>' # fn cause(&self) -> Option<&dyn Errorb'source&#167; impl<E> Provider for Ewhere E: Error + ?Sized,' ### impl<E> Provider for Ewhere E: Error + ?Sized, # Struct rocket::Request ``` pub struct Request<'r> { /* private fields */ } ``` The type of an incoming web request. This should be used sparingly in Rocket applications. In particular, it should likely only be used when writing `FromRequest` implementations. It contains all of the information for a given web request except for the body data. This includes the HTTP method, URI, cookies, headers, and more. b"source&#167; impl<'r> Request<'r>" ### impl<'r> Request<'rb'source pub fn method(&self) -> Method' # pub fn method(&self) -> Method Retrieve the method from `self` . assert_eq!(get("/").method(), Method::Get); assert_eq!(post("/").method(), Method::Post); ``` b'source pub fn set_method(&mut self, method: Method)' # pub fn set_method(&mut self, method: Method) Set the method of `self` to `method` . assert_eq!(request.method(), Method::Get); request.set_method(Method::Post); assert_eq!(request.method(), Method::Post); ``` b"source pub fn uri(&self) -> &Origin<'r>" # pub fn uri(&self) -> &Origin<'rb"source pub fn set_uri(&mut self, uri: Origin<'r>)" # pub fn set_uri(&mut self, uri: Origin<'r>) Set the URI in `self` to `uri` . ``` use rocket::http::uri::Origin; let uri = Origin::parse("/hello/Sergio?type=greeting").unwrap(); request.set_uri(uri); assert_eq!(request.uri().path(), "/hello/Sergio"); assert_eq!(request.uri().query().unwrap(), "type=greeting"); let new_uri = request.uri().map_path(|p| format!("/foo{}", p)).unwrap(); request.set_uri(new_uri); assert_eq!(request.uri().path(), "/foo/hello/Sergio"); assert_eq!(request.uri().query().unwrap(), "type=greeting"); ``` b"source pub fn host(&self) -> Option<&Host<'r>>" # pub fn host(&self) -> Option<&Host<'r> Returns the `Host` identified in the request, if any. 
If the request is made via HTTP/1.1 (or earlier), this method returns the value in the `HOST` header without the deprecated `user_info` component. Otherwise, this method returns the contents of the `:authority` pseudo-header request field.

Note that this method only reflects the `HOST` header in the initial request and not any changes made thereafter. To change the value returned by this method, use `Request::set_host()`.

# ⚠️ DANGER ⚠️

Using the user-controlled `host` to construct URLs is a security hazard! Never do so without first validating the host against a whitelist. For this reason, Rocket disallows constructing host-prefixed URIs with `uri!`. Always use `uri!` to construct URIs.

Retrieve the raw host, unusable to construct safe URIs:

```
request.set_host(Host::from(uri!("rocket.rs")));
let host = request.host().unwrap();
assert_eq!(host.domain(), "rocket.rs");
assert_eq!(host.port(), None);

request.set_host(Host::from(uri!("rocket.rs:2392")));
let host = request.host().unwrap();
assert_eq!(host.domain(), "rocket.rs");
assert_eq!(host.port(), Some(2392));
```

Retrieve the raw host, check it against a whitelist, and construct a URI:

```
// A sensitive URI we want to prefix with safe hosts.
#[get("/token?<secret>")]
fn token(secret: Token) { /* .. */ }

// Whitelist of known hosts. In a real setting, you might retrieve this
// list from config at ignite-time using tools like `AdHoc::config()`.
const WHITELIST: [Host<'static>; 3] = [
    Host::new(uri!("rocket.rs")),
    Host::new(uri!("rocket.rs:443")),
    Host::new(uri!("guide.rocket.rs:443")),
];

// A request with a host of "rocket.rs". Note the case-insensitivity.
request.set_host(Host::from(uri!("ROCKET.rs")));
let prefix = request.host().and_then(|h| h.to_absolute("https", &WHITELIST));

// `rocket.rs` is in the whitelist, so we'll get back a `Some`.
assert!(prefix.is_some());
if let Some(prefix) = prefix {
    // We can use this prefix to safely construct URIs.
    let uri = uri!(prefix, token("some-secret-token"));
    assert_eq!(uri, "https://ROCKET.rs/token?secret=some-secret-token");
}

// A request with a host of "attacker-controlled.com".
request.set_host(Host::from(uri!("attacker-controlled.com")));
let prefix = request.host().and_then(|h| h.to_absolute("https", &WHITELIST));

// `attacker-controlled.com` is _not_ on the whitelist.
assert!(prefix.is_none());
assert!(request.host().is_some());
```

# pub fn set_host(&mut self, host: Host<'r>)

Sets the host of `self` to `host`.

Set the host to `rocket.rs:443`:

```
request.set_host(Host::from(uri!("rocket.rs:443")));
let host = request.host().unwrap();
assert_eq!(host.domain(), "rocket.rs");
assert_eq!(host.port(), Some(443));
```

# pub fn remote(&self) -> Option<SocketAddr>

Returns the raw address of the remote connection that initiated this request if the address is known. If the address is not known, `None` is returned.

Because it is common for proxies to forward connections for clients, the remote address may contain information about the proxy instead of the client. For this reason, proxies typically set a "X-Real-IP" header `ip_header` with the client's true IP. To extract this IP from the request, use the `real_ip()` or `client_ip()` methods.

# pub fn set_remote(&mut self, address: SocketAddr)

Sets the remote address of `self` to `address`.
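Set the remote address to be `127.0.0.1:8000` (the original example was lost in extraction; a minimal sketch, with `request` a `&mut Request` as in the other examples):

```
use std::net::{Ipv4Addr, SocketAddrV4};

// Build a socket address and install it as the request's remote address.
let localhost = SocketAddrV4::new(Ipv4Addr::LOCALHOST, 8000);
request.set_remote(localhost.into());

assert_eq!(request.remote(), Some(localhost.into()));
```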
# pub fn real_ip(&self) -> Option<IpAddr>

Returns the IP address of the configured `ip_header` of the request if such a header is configured, exists, and contains a valid IP address.

```
use std::net::Ipv4Addr;
use rocket::http::Header;

assert_eq!(req.real_ip(), None);

// `ip_header` defaults to `X-Real-IP`.
let req = req.header(Header::new("X-Real-IP", "127.0.0.1"));
assert_eq!(req.real_ip(), Some(Ipv4Addr::LOCALHOST.into()));
```

# pub fn client_ip(&self) -> Option<IpAddr>

Attempts to return the client's IP address by first inspecting the `ip_header` and then using the remote connection's IP address. Note that the built-in `IpAddr` request guard can be used to retrieve the same information in a handler:

```
use std::net::IpAddr;

#[get("/")]
fn get_ip(client_ip: IpAddr) { /* ... */ }

#[get("/")]
fn try_get_ip(client_ip: Option<IpAddr>) { /* ... */ }
```

If the `ip_header` exists and contains a valid IP address, that address is returned. Otherwise, if the address of the remote connection is known, that address is returned. Otherwise, `None` is returned.

```
// starting without an "X-Real-IP" header or remote address
assert!(request.client_ip().is_none());

// add a remote address; this is done by Rocket automatically
request.set_remote("127.0.0.1:8000".parse().unwrap());
assert_eq!(request.client_ip(), Some("127.0.0.1".parse().unwrap()));

// now with an X-Real-IP header, the default value for `ip_header`.
request.add_header(Header::new("X-Real-IP", "8.8.8.8"));
assert_eq!(request.client_ip(), Some("8.8.8.8".parse().unwrap()));
```

# pub fn cookies(&self) -> &CookieJar<'r>

Returns a wrapped borrow to the cookies in `self`. `CookieJar` implements internal mutability, so this method allows you to get and add/remove cookies in `self`.

Add a new cookie to a request's cookies:

```
use rocket::http::Cookie;

req.cookies().add(Cookie::new("key", "val"));
req.cookies().add(Cookie::new("ans", format!("life: {}", 38 + 4)));

assert_eq!(req.cookies().get_pending("key").unwrap().value(), "val");
assert_eq!(req.cookies().get_pending("ans").unwrap().value(), "life: 42");
```

# pub fn add_header<'h: 'r, H: Into<Header<'h>>>(&mut self, header: H)

Add `header` to `self`'s headers. The type of `header` can be any type that implements the `Into<Header>` trait. This includes common types such as `ContentType` and `Accept`.

```
request.add_header(ContentType::HTML);
assert!(request.headers().contains("Content-Type"));
assert_eq!(request.headers().len(), 1);
```

# pub fn replace_header<'h: 'r, H: Into<Header<'h>>>(&mut self, header: H)

Replaces the value of the header with name `header.name` with `header.value`. If no such header exists, `header` is added as a header to `self`.
```
request.add_header(ContentType::Any);
assert_eq!(request.headers().get_one("Content-Type"), Some("*/*"));
assert_eq!(request.content_type(), Some(&ContentType::Any));

request.replace_header(ContentType::PNG);
assert_eq!(request.headers().get_one("Content-Type"), Some("image/png"));
assert_eq!(request.content_type(), Some(&ContentType::PNG));
```

# pub fn content_type(&self) -> Option<&ContentType>

Returns the Content-Type header of `self`. If the header is not present, returns `None`.

```
assert_eq!(get("/").content_type(), None);

let req = get("/").header(ContentType::JSON);
assert_eq!(req.content_type(), Some(&ContentType::JSON));
```

# pub fn accept(&self) -> Option<&Accept>

Returns the Accept header of `self`. If the header is not present, returns `None`.

```
use rocket::http::Accept;

assert_eq!(get("/").accept(), None);
assert_eq!(get("/").header(Accept::JSON).accept(), Some(&Accept::JSON));
```

# pub fn format(&self) -> Option<&MediaType>

Returns the media type "format" of the request. The "format" of a request is either the Content-Type, if the request method indicates support for a payload, or the preferred media type in the Accept header otherwise. If the method indicates no payload and no Accept header is specified, a media type of `Any` is returned. The media type returned from this method is used to match against the `format` route attribute.

```
use rocket::http::{Accept, ContentType, MediaType};

// Non-payload-bearing: format is accept header.
let req = get("/").header(Accept::HTML);
assert_eq!(req.format(), Some(&MediaType::HTML));

let req = get("/").header(ContentType::JSON).header(Accept::HTML);
assert_eq!(req.format(), Some(&MediaType::HTML));

// Payload: format is content-type header.
let req = post("/").header(ContentType::HTML);
assert_eq!(req.format(), Some(&MediaType::HTML));

let req = post("/").header(ContentType::JSON).header(Accept::HTML);
assert_eq!(req.format(), Some(&MediaType::JSON));

// Non-payload-bearing method and no accept header: `Any`.
assert_eq!(get("/").format(), Some(&MediaType::Any));
```

# pub fn rocket(&self) -> &'r Rocket<Orbit>

Returns the `Rocket` instance that is handling this request.

```
// Retrieve the application config via `Rocket::config()`.
let config = request.rocket().config();

// Retrieve managed state via `Rocket::state()`.
let state = request.rocket().state::<Pool>();

// Get a list of all of the registered routes and catchers.
let routes = request.rocket().routes();
let catchers = request.rocket().catchers();
```

# pub fn limits(&self) -> &'r Limits

Returns the configured application data limits. This is a convenience function equivalent to:

```
&request.rocket().config().limits
```

```
use rocket::data::ToByteUnit;

// This is the default `form` limit.
assert_eq!(request.limits().get("form"), Some(32.kibibytes()));

// Retrieve the limit for files with extension `.pdf`; defaults to 1MiB.
assert_eq!(request.limits().get("file/pdf"), Some(1.mebibytes()));
```

# pub fn route(&self) -> Option<&'r Route>

Get the presently matched route, if any. This method returns `Some` any time a handler or its guards are being invoked. This method returns `None` before routing has commenced; this includes during request fairing callbacks.
```
let route = request.route();
```

# pub fn guard<'z, 'a, T>(&'a self) -> BoxFuture<'z, Outcome<T, T::Error>> where T: FromRequest<'a> + 'z, 'a: 'z, 'r: 'z,

Invokes the request guard implementation for `T`, returning its outcome.

Assuming a `User` request guard exists, invoke it:

```
let outcome = request.guard::<User>().await;
```

# pub fn local_cache<T, F>(&self, f: F) -> &T where F: FnOnce() -> T, T: Send + Sync + 'static,

Retrieves the cached value for type `T` from the request-local cached state of `self`. If no such value has previously been cached for this request, `f` is called to produce the value which is subsequently returned.

Different values of the same type cannot be cached without using a proxy, wrapper type. To avoid the need to write these manually, or for libraries wishing to store values of public types, use the `local_cache!` or `local_cache_once!` macros to generate a locally anonymous wrapper type, store, and retrieve the wrapped value from request-local cache.

```
// The first store into local cache for a given type wins.
let value = request.local_cache(|| "hello");
assert_eq!(*request.local_cache(|| "hello"), "hello");

// The following return the cached, previously stored value for the type.
assert_eq!(*request.local_cache(|| "goodbye"), "hello");
```

# pub async fn local_cache_async<T, F>(&self, fut: F) -> &T where F: Future<Output = T>, T: Send + Sync + 'static,

Retrieves the cached value for type `T` from the request-local cached state of `self`. If no such value has previously been cached for this request, `fut` is `await`ed to produce the value which is subsequently returned.

```
async fn current_user<'r>(request: &Request<'r>) -> User {
    // validate request for a given user, load from database, etc
}

let current_user = request.local_cache_async(async {
    current_user(&request).await
}).await;
```

# pub fn param<'a, T>(&'a self, n: usize) -> Option<Result<T, T::Error>> where T: FromParam<'a>,

Retrieves and parses into `T` the 0-indexed `n`th non-empty segment from the routed request, that is, the `n`th segment after the mount point. If the request has not been routed, then this is simply the `n`th non-empty request URI segment.

Returns `None` if `n` is greater than the number of non-empty segments. Returns `Some(Err(T::Error))` if the parameter type `T` failed to be parsed from the `n`th dynamic parameter.

This method exists only to be used by manual routing. To retrieve parameters from a request, use Rocket's code generation facilities.
```
assert_eq!(get("/a/b/c").param(0), Some(Ok("a")));
assert_eq!(get("/a/b/c").param(1), Some(Ok("b")));
assert_eq!(get("/a/b/c").param(2), Some(Ok("c")));
assert_eq!(get("/a/b/c").param::<&str>(3), None);

assert_eq!(get("/1/b/3").param(0), Some(Ok(1)));
assert!(get("/1/b/3").param::<usize>(1).unwrap().is_err());
assert_eq!(get("/1/b/3").param(2), Some(Ok(3)));

assert_eq!(get("/").param::<&str>(0), None);
```

# pub fn segments<'a, T>(&'a self, n: RangeFrom<usize>) -> Result<T, T::Error> where T: FromSegments<'a>,

Retrieves and parses into `T` all of the path segments in the request URI beginning with, and including, the 0-indexed `n`th non-empty segment after the mount point. If the request has not been routed, then this is simply the `n`th non-empty request URI segment.

`T` must implement `FromSegments`, which is used to parse the segments. If there are no non-empty segments, the `Segments` iterator will be empty.

This method exists only to be used by manual routing. To retrieve segments from a request, use Rocket's code generation facilities.

```
use std::path::PathBuf;

assert_eq!(get("/").segments(0..), Ok(PathBuf::new()));
assert_eq!(get("/").segments(2..), Ok(PathBuf::new()));

// Empty segments are skipped.
assert_eq!(get("///").segments(2..), Ok(PathBuf::new()));

assert_eq!(get("/a/b/c").segments(0..), Ok(PathBuf::from("a/b/c")));
assert_eq!(get("/a/b/c").segments(1..), Ok(PathBuf::from("b/c")));
assert_eq!(get("/a/b/c").segments(2..), Ok(PathBuf::from("c")));
assert_eq!(get("/a/b/c").segments(3..), Ok(PathBuf::new()));
assert_eq!(get("/a/b/c").segments(4..), Ok(PathBuf::new()));
```

# pub fn query_value<'a, T>(&'a self, name: &str) -> Option<Result<'a, T>> where T: FromForm<'a>,

Retrieves and parses into `T` the query value with field name `name`. `T` must implement `FromForm`, which is used to parse the query's value. Key matching is performed case-sensitively.

# Warning

This method exists only to be used by manual routing and should never be used in a regular Rocket application. It is much more expensive to use this method than to retrieve query parameters via Rocket's codegen. To retrieve query values from a request, always prefer to use Rocket's code generation facilities.

If a query segment with name `name` isn't present, returns `None`. If parsing the value fails, returns `Some(Err(_))`.
```
#[derive(Debug, PartialEq, FromForm)]
struct Dog<'r> {
    name: &'r str,
    age: usize,
}

let req = get("/?a=apple&z=zebra&a=aardvark");
assert_eq!(req.query_value::<&str>("a").unwrap(), Ok("apple"));
assert_eq!(req.query_value::<&str>("z").unwrap(), Ok("zebra"));
assert_eq!(req.query_value::<&str>("b"), None);

let a_seq = req.query_value::<Vec<&str>>("a");
assert_eq!(a_seq.unwrap().unwrap(), ["apple", "aardvark"]);

let req = get("/?dog.name=Max+Fido&dog.age=3");
let dog = req.query_value::<Dog>("dog");
assert_eq!(dog.unwrap().unwrap(), Dog { name: "Max Fido", age: 3 });
```

### impl Debug for Request<'_>
### impl<'r> !RefUnwindSafe for Request<'r>
### impl<'r> Send for Request<'r>
### impl<'r> Sync for Request<'r>
### impl<'r> Unpin for Request<'r>
### impl<'r> !UnwindSafe for Request<'r>

# Struct rocket::Response

### impl<'r> Response<'r>

# pub fn new() -> Response<'r>

Creates a new, empty `Response` without a status, body, or headers. Because all HTTP responses must have a status, if a default `Response` is written to the client without a status, the status defaults to `200 Ok`.

```
use rocket::Response;
use rocket::http::Status;

let response = Response::new();

assert_eq!(response.status(), Status::Ok);
assert_eq!(response.headers().len(), 0);
assert!(response.body().is_none());
```

# pub fn build() -> Builder<'r>

Returns a `Builder` with a base of `Response::new()`.

```
let builder = Response::build();
```

# pub fn build_from(other: Response<'r>) -> Builder<'r>

Returns a `Builder` with a base of `other`.

```
let other = Response::new();
let builder = Response::build_from(other);
```

# pub fn status(&self) -> Status

Returns the status of `self`.

```
let mut response = Response::new();
response.set_status(Status::NotFound);
assert_eq!(response.status(), Status::NotFound);
```

# pub fn set_status(&mut self, status: Status)

Sets the status of `self` to `status`.

```
let mut response = Response::new();
response.set_status(Status::ImATeapot);
assert_eq!(response.status(), Status::ImATeapot);
```

# pub fn content_type(&self) -> Option<ContentType>

Returns the Content-Type header of `self`. If the header is not present or is malformed, returns `None`.

```
let mut response = Response::new();
response.set_header(ContentType::HTML);
assert_eq!(response.content_type(), Some(ContentType::HTML));
```

Returns an iterator over the cookies in `self` as identified by the `Set-Cookie` header. Malformed cookies are skipped.

```
use rocket::Response;
use rocket::http::Cookie;

let mut response = Response::new();
response.set_header(Cookie::new("hello", "world!"));

let cookies: Vec<_> = response.cookies().collect();
assert_eq!(cookies, vec![Cookie::new("hello", "world!")]);
```

Returns a `HeaderMap` of all of the headers in `self`.

```
let mut response = Response::new();
response.adjoin_raw_header("X-Custom", "1");
response.adjoin_raw_header("X-Custom", "2");

let mut custom_headers = response.headers().iter();
assert_eq!(custom_headers.next(), Some(Header::new("X-Custom", "1")));
assert_eq!(custom_headers.next(), Some(Header::new("X-Custom", "2")));
assert_eq!(custom_headers.next(), None);
```

# pub fn set_header<'h: 'r, H: Into<Header<'h>>>(&mut self, header: H) -> bool

Sets the header `header` in `self`. Any existing headers with the name `header.name` will be lost, and only `header` will remain.
The type of `header` can be any type that implements `Into<Header>`. This includes `Header` itself, `ContentType` and `hyper::header` types.

```
let mut response = Response::new();

response.set_header(ContentType::HTML);
assert_eq!(response.headers().iter().next(), Some(ContentType::HTML.into()));
assert_eq!(response.headers().len(), 1);

response.set_header(ContentType::JSON);
assert_eq!(response.headers().iter().next(), Some(ContentType::JSON.into()));
assert_eq!(response.headers().len(), 1);
```

Sets the custom header with name `name` and value `value` in `self`. Any existing headers with the same `name` will be lost, and the new custom header will remain. This method should be used sparingly; prefer to use `set_header()` instead.

```
let mut response = Response::new();

response.set_raw_header("X-Custom", "1");
assert_eq!(response.headers().get_one("X-Custom"), Some("1"));
assert_eq!(response.headers().len(), 1);

response.set_raw_header("X-Custom", "2");
assert_eq!(response.headers().get_one("X-Custom"), Some("2"));
assert_eq!(response.headers().len(), 1);
```

Adds `header` to `self`'s headers without replacing any existing headers with the same name:

```
let mut response = Response::new();
response.adjoin_header(Header::new(ACCEPT.as_str(), "application/json"));
response.adjoin_header(Header::new(ACCEPT.as_str(), "text/plain"));

let mut accept_headers = response.headers().iter();
assert_eq!(accept_headers.next(), Some(Header::new(ACCEPT.as_str(), "application/json")));
assert_eq!(accept_headers.next(), Some(Header::new(ACCEPT.as_str(), "text/plain")));
assert_eq!(accept_headers.next(), None);
```

Adds a custom header with name `name` and value `value` without replacing any existing headers with the same name:

```
let mut response = Response::new();
response.adjoin_raw_header("X-Custom", "one");
response.adjoin_raw_header("X-Custom", "two");

let mut custom_headers = response.headers().iter();
assert_eq!(custom_headers.next(), Some(Header::new("X-Custom", "one")));
assert_eq!(custom_headers.next(), Some(Header::new("X-Custom", "two")));
assert_eq!(custom_headers.next(), None);
```

# pub fn remove_header(&mut self, name: &str)

Removes all headers with the name `name`.

```
let mut response = Response::new();

response.adjoin_raw_header("X-Custom", "one");
response.adjoin_raw_header("X-Custom", "two");
response.adjoin_raw_header("X-Other", "hi");
assert_eq!(response.headers().len(), 3);

response.remove_header("X-Custom");
assert_eq!(response.headers().len(), 1);
```

# pub fn body(&self) -> &Body<'r>

Returns an immutable borrow of the body of `self`, if there is one.

```
let string = "Hello, world!";

let mut response = Response::new();
response.set_sized_body(string.len(), Cursor::new(string));
assert!(response.body().is_some());
```

# pub fn body_mut(&mut self) -> &mut Body<'r>

Returns a mutable borrow of the body of `self`, if there is one. A mutable borrow allows for reading the body.

```
let string = "Hello, world!";

let mut response = Response::new();
response.set_sized_body(string.len(), Cursor::new(string));

let string = response.body_mut().to_string().await;
assert_eq!(string.unwrap(), "Hello, world!");
```

# pub fn set_sized_body<B, S>(&mut self, size: S, body: B) where B: AsyncRead + AsyncSeek + Send + 'r, S: Into<Option<usize>>,

Sets the body of `self` to be the fixed-sized `body` with size `size`, which may be `None`. If `size` is `None`, the body's size will be computed with calls to `seek` just before being written out in a response.
```
use std::io;
use rocket::Response;

let string = "Hello, world!";

let mut response = Response::new();
response.set_sized_body(string.len(), io::Cursor::new(string));
assert_eq!(response.body_mut().to_string().await?, "Hello, world!");
```

# pub fn set_streamed_body<B>(&mut self, body: B) where B: AsyncRead + Send + 'r,

Sets the body of `self` to `body`, which will be streamed. The max chunk size is configured via `Response::set_max_chunk_size()` and defaults to `Body::DEFAULT_MAX_CHUNK`.

```
use tokio::io::{repeat, AsyncReadExt};
use rocket::Response;

let mut response = Response::new();
response.set_streamed_body(repeat(97).take(5));
assert_eq!(response.body_mut().to_string().await?, "aaaaa");
```

# pub fn set_max_chunk_size(&mut self, size: usize)

Sets the body's maximum chunk size to `size` bytes. The default max chunk size is `Body::DEFAULT_MAX_CHUNK`. The max chunk size is a property of the body and is thus reset whenever a body is set via `Response::set_streamed_body()`, `Response::set_sized_body()`, or the corresponding builder methods.

This setting does not typically need to be changed. Configuring a high value can result in high memory usage. Similarly, configuring a low value can result in excessive network writes. When unsure, leave the value unchanged.

# pub fn merge(&mut self, other: Response<'r>)

Replaces this response's status and body with that of `other`, if they exist in `other`. Any headers that exist in `other` replace the ones in `self`. Any in `self` that aren't in `other` remain in `self`.

```
let base = Response::build()
    .status(Status::NotFound)
    .raw_header("X-Custom", "value 1")
    .finalize();

let response = Response::build()
    .status(Status::ImATeapot)
    .raw_header("X-Custom", "value 2")
    .raw_header_adjoin("X-Custom", "value 3")
    .merge(base)
    .finalize();

assert_eq!(response.status(), Status::NotFound);

let custom_values: Vec<_> = response.headers().get("X-Custom").collect();
assert_eq!(custom_values, vec!["value 1"]);
```

# pub fn join(&mut self, other: Response<'r>)

Sets `self`'s status and body to that of `other` if they are not already set in `self`. Any headers present in both `other` and `self` are adjoined.

```
let other = Response::build()
    .status(Status::NotFound)
    .raw_header("X-Custom", "value 1")
    .finalize();

let response = Response::build()
    .status(Status::ImATeapot)
    .raw_header("X-Custom", "value 2")
    .raw_header_adjoin("X-Custom", "value 3")
    .join(other)
    .finalize();

assert_eq!(response.status(), Status::ImATeapot);

let custom_values: Vec<_> = response.headers().get("X-Custom").collect();
assert_eq!(custom_values, vec!["value 2", "value 3", "value 1"]);
```

### impl Debug for Response<'_>
### impl<'r> !RefUnwindSafe for Response<'r>
### impl<'r> Send for Response<'r>
### impl<'r> !Sync for Response<'r>
### impl<'r> Unpin for Response<'r>
### impl<'r> !UnwindSafe for Response<'r>

# Trait rocket::response::Responder

```
pub trait Responder<'r, 'o: 'r> {
    // Required method
    fn respond_to(self, request: &'r Request<'_>) -> Result<'o>;
}
```

Trait implemented by types that generate responses for clients. Any type that implements `Responder` can be used as the return type of a handler:

```
// This works for any `T` that implements `Responder`.
#[get("/")]
fn index() -> T { /* ... */ }
```

This trait can, and largely should, be automatically derived. The derive can handle all simple cases and most complex cases, too.
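For instance, a minimal sketch of a derived responder; the type and field names here are illustrative, not from the original docs, and the field semantics are described next:

```
use rocket::Responder;
use rocket::http::ContentType;

#[derive(Responder)]
#[response(status = 418)]
struct TeaResponse {
    // The first field generates the response body.
    body: &'static str,
    // Remaining fields are set as response headers.
    content_type: ContentType,
}
```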
When deriving `Responder`, the first field of the annotated structure (or of each variant if an `enum`) is used to generate a response while the remaining fields are used as response headers:

```
use rocket::http::ContentType;
use rocket::serde::{Serialize, json::Json};

#[derive(Responder)]
enum Error<T> {
    #[response(status = 400)]
    Unauthorized(Json<T>),
    #[response(status = 404)]
    NotFound(Template, ContentType),
}
```

Rocket implements `Responder` for several standard library types. Their behavior is documented here. Note that the `Result` implementation is overloaded, allowing for two `Responder`s to be used at once, depending on the variant.

* `&str` and `String`: Responds with a fixed-size body containing the string.
* `&[u8]`: Responds with a fixed-size body containing the data. To stream the data instead, use `Stream::from(Cursor::new(data))`.
* `Vec<u8>`: Responds with a fixed-size body containing the data. To stream the data instead, use `Stream::from(Cursor::new(vec))`.
* `File`: Responds with a streamed body containing the data in the `File`. No `Content-Type` is set. To automatically have a `Content-Type` set based on the file's extension, use `NamedFile`.
* `()`: Responds with an empty body. No `Content-Type` is set.
* `Option<T>`: If the `Option` is `Some`, the wrapped responder is used to respond to the client. Otherwise, an `Err` with status 404 Not Found is returned and a warning is printed to the console.
* `Result<T, E>`: If the `Result` is `Ok`, the wrapped `Ok` responder is used to respond to the client. If the `Result` is `Err`, the wrapped `Err` responder is used to respond to the client.

## Return Value

A `Responder` returns a `Future` whose output type is a `Result<Response, Status>`.

* An `Ok(Response)` indicates success. The `Response` will be written out to the client.
* An `Err(Status)` indicates failure. The error catcher for `Status` will be invoked to generate a response.

## Implementation Tips

This section describes a few best practices to take into account when implementing `Responder`.

* Avoid Manual Implementations: The `Responder` derive is a powerful mechanism that eliminates the need to implement `Responder` in almost all cases. We encourage you to explore using the derive before attempting to implement `Responder` directly. It allows you to leverage existing `Responder` implementations through composition, decreasing the opportunity for mistakes or performance degradation.
* Joining and Merging: When chaining/wrapping other `Responder`s, start with `Response::build_from()` and/or use the `merge()` or `join()` methods on the `Response` or `ResponseBuilder` struct. Ensure that you document merging or joining behavior appropriately.
* Inspecting Requests: While tempting, a `Responder` that varies its functionality based on the incoming request sacrifices the ability for its functionality to be understood purely from its type. By implication, gleaning the functionality of a handler from its type signature also becomes more difficult. You should avoid varying responses based on the `Request` value as much as possible.

### Lifetimes

`Responder` has two lifetimes: `Responder<'r, 'o: 'r>`.

* `'r` bounds the reference to the `&'r Request`.
* `'o` bounds the returned `Response<'o>` to values that live at least as long as the request. This includes borrows from the `Request` itself (where `'o` would be `'r` as in `impl<'r> Responder<'r, 'r>`) as well as `'static` data (where `'o` would be `'static` as in `impl<'r> Responder<'r, 'static>`).

In practice, you are likely choosing between four signatures:

```
// If the response contains no borrowed data.
impl<'r> Responder<'r, 'static> for A {
    fn respond_to(self, _: &'r Request<'_>) -> response::Result<'static> {
        todo!()
    }
}

// If the response borrows from the request.
impl<'r> Responder<'r, 'r> for B<'r> {
    fn respond_to(self, _: &'r Request<'_>) -> response::Result<'r> {
        todo!()
    }
}

// If the response is or wraps a borrow that may outlive the request.
impl<'r, 'o: 'r> Responder<'r, 'o> for &'o C {
    fn respond_to(self, _: &'r Request<'_>) -> response::Result<'o> {
        todo!()
    }
}

// If the response wraps an existing responder.
impl<'r, 'o: 'r, R: Responder<'r, 'o>> Responder<'r, 'o> for D<R> {
    fn respond_to(self, _: &'r Request<'_>) -> response::Result<'o> {
        todo!()
    }
}
```

Say that you have a custom type, `Person`:

```
struct Person {
    name: String,
    age: u16,
}
```

You'd like to use `Person` as a `Responder` so that you can return a `Person` directly from a handler:

```
#[get("/person/<id>")]
fn person(id: usize) -> Option<Person> {
    Person::from_id(id)
}
```

You want the `Person` responder to set two header fields: `X-Person-Name` and `X-Person-Age` as well as supply a custom representation of the object (`Content-Type: application/x-person`) in the body of the response. The following `Responder` implementation accomplishes this:

```
use std::io::Cursor;

use rocket::request::Request;
use rocket::response::{self, Response, Responder};
use rocket::http::ContentType;

impl<'r> Responder<'r, 'static> for Person {
    fn respond_to(self, req: &'r Request<'_>) -> response::Result<'static> {
        let string = format!("{}:{}", self.name, self.age);
        Response::build_from(string.respond_to(req)?)
            .raw_header("X-Person-Name", self.name)
            .raw_header("X-Person-Age", self.age.to_string())
            .header(ContentType::new("application", "x-person"))
            .ok()
    }
}
```

Note that the implementation could have instead been derived if structured in a slightly different manner:

```
use rocket::http::Header;
use rocket::response::Responder;

#[derive(Responder)]
#[response(content_type = "application/x-person")]
struct Person {
    text: String,
    name: Header<'static>,
    age: Header<'static>,
}

impl Person {
    fn new(name: &str, age: usize) -> Person {
        Person {
            text: format!("{}:{}", name, age),
            name: Header::new("X-Person-Name", name.to_string()),
            age: Header::new("X-Person-Age", age.to_string()),
        }
    }
}
```

# fn respond_to(self, request: &'r Request<'_>) -> Result<'o>

Returns `Ok` if a `Response` could be generated successfully. Otherwise, returns an `Err` with a failing `Status`. The `request` parameter is the `Request` that this `Responder` is responding to.

When using Rocket's code generation, if an `Ok(Response)` is returned, the response will be written out to the client. If an `Err(Status)` is returned, the error catcher for the given status is retrieved and called to generate a final error response, which is then written out to the client.
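Many of the provided implementations listed below wrap another responder. As a quick illustration, a minimal sketch of using the `(Status, R)` tuple responder, which responds with the wrapped responder `R` while overriding the response status; the route path and status choice here are illustrative:

```
use rocket::http::Status;

// Responds with the string as the body and a 418 I'm a Teapot status.
#[get("/teapot")]
fn teapot() -> (Status, &'static str) {
    (Status::ImATeapot, "short and stout")
}
```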
b"source&#167; impl<'r, 'o: 'r> Responder<'r, 'o> for &'o str" ### impl<'r, 'o: 'r> Responder<'r, 'o> for &'o str b"source&#167; impl<'r, 'o: 'r> Responder<'r, 'o> for &'o [u8]" ### impl<'r, 'o: 'r> Responder<'r, 'o> for &'o [u8] ### impl<'r, 'o: 'r, R: Responder<'r, 'o>> Responder<'r, 'o> for (Status, R) b"source&#167; impl<'r> Responder<'r, 'static> for Error" ### impl<'r> Responder<'r, 'static> for Error Prints a warning with the error and forwards to the `500` error catcher. b"source&#167; impl<'r> Responder<'r, 'static> for Arc<[u8]>" ### impl<'r> Responder<'r, 'static> for Arc<[u8] Responds with the inner `Responder` in `Cow` . b"source&#167; impl<'r, 'o: 'r, R: Responder<'r, 'o>> Responder<'r, 'o> for (ContentType, R)" ### impl<'r, 'o: 'r, R: Responder<'r, 'o>> Responder<'r, 'o> for (ContentType, R) b"source&#167; impl<'r> Responder<'r, 'static> for Arc<str>" ### impl<'r> Responder<'r, 'static> for Arc<str# fn respond_to(self, req: &'r Request<'_>) -> Result<'static### impl<'r> Responder<'r, 'static> for Value `json` only. Serializes the value into JSON. Returns a response with Content-Type JSON and a fixed-size body with the serialized value. Streams the named file to the client. Sets or overrides the Content-Type in the response according to the file’s extension if the extension is recognized. See ``` ContentType::from_extension() ``` for more information. If you would like to stream a file with a different Content-Type than that implied by its extension, use a `File` directly. ### impl<'r> Responder<'r, 'static> for Status The response generated by `Status` depends on the status code itself. The table below summarizes the functionality: Status Code Range | Response | | --- | --- | [400, 599] | Forwards to catcher for given status. | 100, [200, 205] | Empty with status of | All others. | Invalid. Errors to | In short, a client or server error status codes will forward to the corresponding error catcher, a successful status code less than `206` or `100` responds with any empty body and the given status code, and all other status code emit an error message and forward to the `500` (internal server error) catcher. ### impl<'r> Responder<'r, 'static> for String ### impl<'r> Responder<'r, 'static> for NoContent Sets the status code of the response to 204 No Content. ### impl<'r> Responder<'r, 'static> for Redirect Constructs a response with the appropriate status code and the given URL in the `Location` header field. The body of the response is empty. If the URI value used to create the `Responder` is an invalid URI, an error of Responds with the wrapped `Responder` in `self` , whether it is `Ok` or `Err` . If `self` is `Some` , responds with the wrapped `Responder` . Otherwise prints a warning message and returns an `Err` of `Status::NotFound` . ### impl<'r, 'o: 'r, R: Responder<'r, 'o>> Responder<'r, 'o> for RawHtml<R Sets the status code of the response to 202 Accepted. If the responder is `Some` , it is used to finalize the response. Sets the status code of the response to 400 Bad Request. If the responder is `Some` , it is used to finalize the response. Sets the status code of the response to 409 Conflict. If the responder is `Some` , it is used to finalize the response. Sets the status code of the response to 201 Created. Sets the `Location` header to the parameter in the `Created::new()` constructor. The optional responder, set via `Created::body()` or finalizes the response if it exists. 
The wrapped responder should write the body of the response so that it contains information about the created resource. If no responder is provided, the response body will be empty.

In addition to setting the status code, `Location` header, and finalizing the response with the `Responder`, the `ETag` header is set conditionally if a hashable `Responder` is provided via `Created::tagged_body()`. The `ETag` header is set to a hash value of the responder.

The `status::Custom` implementation sets the status code of the response and then delegates the remainder of the response to the wrapped responder.

The `Flash` implementation sets the message cookie and then uses the wrapped responder to complete the response. In other words, it simply sets a cookie and delegates the rest of the response handling to the wrapped responder. As a result, the `Outcome` of the response is the `Outcome` of the wrapped `Responder`.

### impl<'r, 'o: 'r, T: Responder<'r, 'o> + Sized> Responder<'r, 'o> for Box<T>

Returns the response generated by the inner `T`. Note that this explicitly does not support `Box<dyn Responder>`; the following fails to compile:

```
use rocket::response::Responder;

#[get("/")]
fn f() -> Box<dyn Responder<'static, 'static>> {
    Box::new(())
}
```

However, this `impl` allows boxing sized responders:

```
#[derive(Responder)]
enum Content {
    Redirect(Box<rocket::response::Redirect>),
    Text(String),
}

#[get("/")]
fn f() -> Option<Box<String>> { None }

#[get("/")]
fn g() -> Content {
    Content::Text("hello".to_string())
}
```

### impl<'r, 'o: 'r, T: Responder<'r, 'o>> Responder<'r, 'o> for Capped<T>

### impl<'r, E: Debug> Responder<'r, 'static> for Debug<E>

### impl<'r, S> Responder<'r, 'r> for ByteStream<S> where S: Send + 'r + Stream, S::Item: AsRef<[u8]> + Send + Unpin + 'r,

### impl<'r, S> Responder<'r, 'r> for ReaderStream<S> where S: Send + 'r + Stream, S::Item: AsyncRead + Send,

### impl<'r, S> Responder<'r, 'r> for TextStream<S> where S: Send + 'r + Stream, S::Item: AsRef<str> + Send + Unpin + 'r,

### impl<'r, S: Stream<Item = Event> + Send + 'r> Responder<'r, 'r> for EventStream<S>

### impl<'r, T: Serialize> Responder<'r, 'static> for Json<T>

`json` only. Serializes the wrapped value into JSON. Returns a response with Content-Type JSON and a fixed-size body with the serialized value. If serialization fails, an `Err` of `Status::InternalServerError` is returned.

### impl<'r, T: Serialize> Responder<'r, 'static> for MsgPack<T>

`msgpack` only. Serializes the wrapped value into MessagePack. Returns a response with Content-Type `MsgPack` and a fixed-size body with the serialization. If serialization fails, an `Err` of `Status::InternalServerError` is returned.

# Struct rocket::Rocket

```
pub struct Rocket<P: Phase>(_);
```

The application server itself.

## Phases

A `Rocket` instance represents a web server and its state. It progresses through three statically-enforced phases: build, ignite, orbit.

* Build: application and server configuration

  This phase enables:

  * setting configuration options
  * mounting/registering routes/catchers
  * managing state
  * attaching fairings

  This is the only phase in which an instance can be modified. To finalize changes, an instance is ignited via `Rocket::ignite()`, progressing it into the ignite phase, or directly launched into orbit with `Rocket::launch()`, which progresses the instance through ignite into orbit.

* Ignite: verification and finalization of configuration

  An instance in the `Ignite` phase is in its final configuration, available via `Rocket::config()`. Barring user-supplied interior mutation, application state is guaranteed to remain unchanged beyond this point. An instance in the ignite phase can be launched into orbit to serve requests via `Rocket::launch()`.

* Orbit: a running web server

  An instance in the `Orbit` phase represents a running application, actively serving requests.

## Launching

To launch a `Rocket` application, the suggested approach is to return an instance of `Rocket<Build>` from a function named `rocket` marked with the `#[launch]` attribute:
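A minimal sketch of this pattern, assuming a bare application with nothing mounted yet:

```
#[launch]
fn rocket() -> _ {
    rocket::build()
}
```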
This generates a `main` function with an `async` runtime that runs the returned `Rocket` instance.

* Manual Launching

  To launch an instance of `Rocket`, it must progress through all three phases. To progress into the ignite or launch phases, a tokio `async` runtime is required. The `#[main]` attribute initializes a Rocket-specific tokio runtime and runs the attributed `async fn` inside of it. Note that `Rocket::launch()` automatically progresses an instance of `Rocket` from any phase into orbit:

  ```
  #[rocket::main]
  async fn main() -> Result<(), rocket::Error> {
      let _rocket = rocket::build().launch().await?;
      Ok(())
  }
  ```

  For extreme and rare cases in which `#[main]` imposes obstinate restrictions, use `rocket::execute()` to execute Rocket's `launch()` future.

* Automatic Launching

  Manually progressing an instance of `Rocket` through its phases is only necessary when either an instance's finalized state is to be inspected (in the ignite phase) or the instance is expected to deorbit due to `Rocket::shutdown()`. In the more common case when neither is required, the `#[launch]` attribute can be used. When applied to a function that returns a `Rocket<Build>`, it automatically initializes an `async` runtime and launches the function's returned instance, as sketched above.

### impl Rocket<Build>

# pub fn build() -> Self

Create a new `Rocket` application using the default configuration provider, `Config::figment()`. This method is typically called through the `rocket::build()` alias.

# pub fn custom<T: Provider>(provider: T) -> Self

Creates a new `Rocket` application using the supplied configuration provider. This method is typically called through the `rocket::custom()` alias.

```
use rocket::figment::{Figment, providers::{Toml, Env, Format}};

#[launch]
fn rocket() -> _ {
    let figment = Figment::from(rocket::Config::default())
        .merge(Toml::file("MyApp.toml").nested())
        .merge(Env::prefixed("MY_APP_").global());

    rocket::custom(figment)
}
```

# pub fn configure<T: Provider>(self, provider: T) -> Self

Sets the configuration provider in `self` to `provider`. A `Figment` generated from the current `provider` can always be retrieved via `Rocket::figment()`. However, because the provider can be changed at any point prior to ignition, a `Config` can only be retrieved in the ignite or orbit phases, or by manually extracting one from a particular figment.
```
use std::net::Ipv4Addr;
use std::path::Path;

use rocket::Config;

let config = Config {
    port: 7777,
    address: Ipv4Addr::new(18, 127, 0, 1).into(),
    temp_dir: "/tmp/config-example".into(),
    ..Config::debug_default()
};

let rocket = rocket::custom(&config).ignite().await?;
assert_eq!(rocket.config().port, 7777);
assert_eq!(rocket.config().address, Ipv4Addr::new(18, 127, 0, 1));
assert_eq!(rocket.config().temp_dir.relative(), Path::new("/tmp/config-example"));

// Create a new figment which modifies _some_ keys of the existing figment:
let figment = rocket.figment().clone()
    .merge((Config::PORT, 8888))
    .merge((Config::ADDRESS, "171.64.200.10"));

let rocket = rocket::custom(&config)
    .configure(figment)
    .ignite().await?;

assert_eq!(rocket.config().port, 8888);
assert_eq!(rocket.config().address, Ipv4Addr::new(171, 64, 200, 10));
assert_eq!(rocket.config().temp_dir.relative(), Path::new("/tmp/config-example"));
```

# pub fn mount<'a, B, R>(self, base: B, routes: R) -> Self where B: TryInto<Origin<'a>> + Clone + Display, B::Error: Display, R: Into<Vec<Route>>,

Mounts all of the routes in the supplied vector at the given `base` path. Mounting a route with path `path` at path `base` makes the route available at `base/path`.

Panics if either:

* the `base` mount point is not a valid static path: a valid origin URI without dynamic parameters, or
* any route's URI is not a valid origin URI.

Note: This kind of panic is guaranteed not to occur if the routes were generated using Rocket's code generation. Use the `routes!` macro to mount routes created using the code generation facilities.

Requests to the `/hello/world` URI will be dispatched to the `hi` route:

```
#[get("/world")]
fn hi() -> &'static str { "Hello!" }

#[launch]
fn rocket() -> _ {
    rocket::build().mount("/hello", routes![hi])
}
```

Manually create a route named `hi` at path `"/world"` mounted at base `"/hello"`. Requests to the `/hello/world` URI will be dispatched to the `hi` route:

```
use rocket::{Request, Route, Data, route};
use rocket::http::Method;

fn hi<'r>(req: &'r Request, _: Data<'r>) -> route::BoxFuture<'r> {
    route::Outcome::from(req, "Hello!").pin()
}

#[launch]
fn rocket() -> _ {
    let hi_route = Route::new(Method::Get, "/world", hi);
    rocket::build().mount("/hello", vec![hi_route])
}
```

# pub fn register<'a, B, C>(self, base: B, catchers: C) -> Self where B: TryInto<Origin<'a>> + Clone + Display, B::Error: Display, C: Into<Vec<Catcher>>,

Registers all of the catchers in the supplied vector, scoped to `base`. Panics if `base` is not a valid static path: a valid origin URI without dynamic parameters.

```
use rocket::Request;

#[launch]
fn rocket() -> _ {
    rocket::build().register("/", catchers![internal_error, not_found])
}
```

# pub fn manage<T>(self, state: T) -> Self where T: Send + Sync + 'static,

Add `state` to the state managed by this instance of Rocket. This method can be called any number of times as long as each call refers to a different `T`. Managed state can be retrieved by any request handler via the `State` request guard. In particular, if a value of type `T` is managed by Rocket, adding `State<T>` to the list of arguments in a request handler instructs Rocket to retrieve the managed value.

Panics if state of type `T` is already being managed.
```
use rocket::State;

struct MyInt(isize);
struct MyString(String);

#[get("/int")]
fn int(state: &State<MyInt>) -> String {
    format!("The stateful int is: {}", state.0)
}

#[get("/string")]
fn string(state: &State<MyString>) -> &str {
    &state.0
}

#[launch]
fn rocket() -> _ {
    rocket::build()
        .manage(MyInt(10))
        .manage(MyString("Hello, managed state!".to_string()))
        .mount("/", routes![int, string])
}
```

# pub fn attach<F: Fairing>(self, fairing: F) -> Self

Attaches a fairing to this instance of Rocket. No fairings are eagerly executed; fairings are executed at their appropriate time. If the attached fairing is fungible and a fairing of the same name already exists, this fairing replaces it.

```
use rocket::fairing::AdHoc;

#[launch]
fn rocket() -> _ {
    rocket::build()
        .attach(AdHoc::on_liftoff("Liftoff Message", |_| Box::pin(async {
            println!("We have liftoff!");
        })))
}
```

# pub async fn ignite(self) -> Result<Rocket<Ignite>, Error>

Returns a `Future` that transitions this instance of `Rocket` into the ignite phase.

When `await`ed, the future runs all ignite fairings in serial, attach order, and verifies that `self` represents a valid instance of `Rocket` ready for launch. This means that:

* All ignite fairings succeeded.
* A valid `Config` was extracted from `Rocket::figment()`.
* If `secrets` are enabled, the extracted `Config` contains a safe secret key.
* There are no `Route` or `Catcher` collisions.
* No `Sentinel` triggered an abort.

If any of these conditions fail to be met, a respective `Error` is returned.

```
use rocket::fairing::AdHoc;

#[rocket::main]
async fn main() -> Result<(), rocket::Error> {
    let rocket = rocket::build()
        .attach(AdHoc::on_ignite("Manage State", |rocket| async move {
            rocket.manage(String::from("managed string"))
        }));

    // No fairings are run until ignition occurs.
    assert!(rocket.state::<String>().is_none());

    let rocket = rocket.ignite().await?;
    assert_eq!(rocket.state::<String>().unwrap(), "managed string");

    Ok(())
}
```

### impl Rocket<Ignite>

# pub fn config(&self) -> &Config

Returns the finalized, active configuration. This is guaranteed to remain stable through ignition and into orbit.

# pub fn shutdown(&self) -> Shutdown

Returns a handle which can be used to trigger a shutdown and detect a triggered shutdown. A completed graceful shutdown resolves the future returned by `Rocket::launch()`. If `Shutdown::notify()` is called before an instance is launched, it will be immediately shutdown after liftoff. See `Shutdown` and `config::Shutdown` for details on graceful shutdown.

```
use rocket::tokio::{self, time};

let shutdown = rocket.shutdown();
tokio::spawn(async move {
    time::sleep(time::Duration::from_secs(5)).await;
    shutdown.notify();
});

// The `launch()` future resolves after ~5 seconds.
let result = rocket.launch().await;
assert!(result.is_ok());
```

### impl Rocket<Orbit>

# pub fn config(&self) -> &Config

Returns the finalized, active configuration. This is guaranteed to remain stable after `Rocket::ignite()`, through ignition and into orbit.

```
use rocket::fairing::AdHoc;

#[launch]
fn rocket() -> _ {
    rocket::build()
        .attach(AdHoc::on_liftoff("Config", |rocket| Box::pin(async move {
            println!("Rocket launch config: {:?}", rocket.config());
        })))
}
```

# pub fn shutdown(&self) -> Shutdown

Returns a handle which can be used to trigger a shutdown and detect a triggered shutdown. A completed graceful shutdown resolves the future returned by `Rocket::launch()`. See `Shutdown` and `config::Shutdown` for details on graceful shutdown.
```
use rocket::tokio::{self, time};
use rocket::fairing::AdHoc;

#[launch]
fn rocket() -> _ {
    rocket::build()
        .attach(AdHoc::on_liftoff("Shutdown", |rocket| Box::pin(async move {
            let shutdown = rocket.shutdown();
            tokio::spawn(async move {
                time::sleep(time::Duration::from_secs(5)).await;
                shutdown.notify();
            });
        })))
}
```

### impl<P: Phase> Rocket<P>

# pub fn routes(&self) -> impl Iterator<Item = &Route>

Returns an iterator over all of the routes mounted on this instance of Rocket. The order is unspecified.

```
#[get("/hello")]
fn hello() -> &'static str { "Hello, world!" }

let rocket = rocket::build()
    .mount("/", routes![hello])
    .mount("/hi", routes![hello]);

assert_eq!(rocket.routes().count(), 2);
assert!(rocket.routes().any(|r| r.uri == "/hello"));
assert!(rocket.routes().any(|r| r.uri == "/hi/hello"));
```

# pub fn catchers(&self) -> impl Iterator<Item = &Catcher>

Returns an iterator over all of the catchers registered on this instance of Rocket. The order is unspecified.

```
#[catch(404)]
fn not_found() -> &'static str { "Nothing here, sorry!" }

#[catch(500)]
fn just_500() -> &'static str { "Whoops!?" }

#[catch(default)]
fn some_default() -> &'static str { "Everything else." }

let rocket = rocket::build()
    .register("/foo", catchers![not_found])
    .register("/", catchers![just_500, some_default]);

assert_eq!(rocket.catchers().count(), 3);
assert!(rocket.catchers().any(|c| c.code == Some(404) && c.base == "/foo"));
assert!(rocket.catchers().any(|c| c.code == Some(500) && c.base == "/"));
assert!(rocket.catchers().any(|c| c.code == None && c.base == "/"));
```

# pub fn state<T: Send + Sync + 'static>(&self) -> Option<&T>

Returns `Some` of the managed state value for the type `T` if it is being managed by `self`. Otherwise, returns `None`.

```
#[derive(PartialEq, Debug)]
struct MyState(&'static str);

let rocket = rocket::build().manage(MyState("hello!"));
assert_eq!(rocket.state::<MyState>().unwrap(), &MyState("hello!"));
```

# pub fn figment(&self) -> &Figment

Returns the figment derived from the configuration provider set for `self`. To extract a typed config, prefer to use `AdHoc::config()`.

```
let rocket = rocket::build();
let figment = rocket.figment();
```

# pub async fn launch(self) -> Result<Rocket<Ignite>, Error>

Returns a `Future` that transitions this instance of `Rocket` from any phase into the orbit phase. When `await`ed, the future drives the server forward, listening for and dispatching requests to mounted routes and catchers.

In addition to all of the processes that occur during ignition, a successful launch results in liftoff fairings being executed after binding to any respective network interfaces but before serving the first request. Liftoff fairings are run concurrently; resolution of all fairings is `await`ed before resuming request serving.

The `Future` resolves as an `Err` if any of the following occur:

* there is an error igniting; see `Rocket::ignite()`.
* there is an I/O error starting the server.
* an unrecoverable, system-level error occurs while running.

The `Future` resolves as an `Ok` if any of the following occur:

* graceful shutdown via `Shutdown::notify()` completes.
On `Ok`, the returned value is the previously running instance. The `Future` does not resolve otherwise.

If there is a problem starting the application or the application fails unexpectedly while running, an `Error` is returned. Note that a value of type `Error` panics if dropped without first being inspected. See the `Error` documentation for more information.

```
#[rocket::main]
async fn main() {
    let result = rocket::build().launch().await;

    // this is reachable only after `Shutdown::notify()` or `Ctrl+C`.
    println!("Rocket: deorbit.");
}
```

### impl<P> RefUnwindSafe for Rocket<P> where <P as Phase>::State: RefUnwindSafe,
### impl<P> Send for Rocket<P>
### impl<P> UnwindSafe for Rocket<P> where <P as Phase>::State: UnwindSafe,

# Struct rocket::Shutdown

```
pub struct Shutdown(_);
```

A request guard and future for graceful shutdown.

A server shutdown is manually requested by calling `Shutdown::notify()` or, if enabled, through automatic triggers like `Ctrl-C`. Rocket will stop accepting new requests, finish handling any pending requests, wait a grace period before cancelling any outstanding I/O, and return `Ok` to the caller of `Rocket::launch()`. Graceful shutdown is configured via `config::Shutdown`.

## Detecting Shutdown

`Shutdown` is also a future that resolves when `Shutdown::notify()` is called. This can be used to detect shutdown in any part of the application:

```
use rocket::Shutdown;

#[get("/wait/for/shutdown")]
async fn wait_for_shutdown(shutdown: Shutdown) -> &'static str {
    shutdown.await;
    "Somewhere, shutdown was requested."
}
```

See the `stream` docs for an example of detecting shutdown in an infinite responder.

Additionally, a completed shutdown request resolves the future returned from `Rocket::launch()`:

```
use rocket::Shutdown;

#[get("/shutdown")]
fn shutdown(shutdown: Shutdown) -> &'static str {
    shutdown.notify();
    "Shutting down..."
}

#[rocket::main]
async fn main() {
    let result = rocket::build()
        .mount("/", routes![shutdown])
        .launch()
        .await;

    // If the server shut down (by visiting `/shutdown`), `result` is `Ok`.
    result.expect("server failed unexpectedly");
}
```

### impl Shutdown

# pub fn notify(self)

Notify the application to shut down gracefully. This function returns immediately; pending requests will continue to run until completion or expiration of the grace period, whichever comes first, before the actual shutdown occurs. The grace period can be configured via `config::Shutdown::grace`.

### impl Clone for Shutdown
### impl Debug for Shutdown
### impl<'r> FromRequest<'r> for Shutdown
### impl Future for Shutdown
### impl !RefUnwindSafe for Shutdown
### impl Unpin for Shutdown
### impl !UnwindSafe for Shutdown

### impl<T> FutureExt for T where T: Future + ?Sized,

Blanket combinator methods from the `futures` crate's `FutureExt` extension trait: `map`, `map_into`, `then`, `left_future`, `right_future`, `into_stream`, `flatten`, `flatten_stream`, `fuse`, `catch_unwind`, `remote_handle`, `boxed_local`, `unit_error`, `never_error`, and `poll_unpin`.
### impl<F> IntoFuture for F where F: Future,

# type IntoFuture = F

# fn into_future(self) -> <F as IntoFuture>::IntoFuture

# Struct rocket::State

```
pub struct State<T: Send + Sync + 'static>(_);
```

Request guard to retrieve managed state.

A reference `&State<T>` type is a request guard which retrieves the managed state for some type `T`. A value for the given type must previously have been registered to be managed by Rocket via `Rocket::manage()`. The type being managed must be thread safe and sendable across thread boundaries as multiple handlers in multiple threads may be accessing the value at once. In other words, it must implement `Send` + `Sync` + `'static`.

Imagine you have some configuration struct of the type `MyConfig` that you'd like to initialize at start-up and later access in several handlers. The following example does just this:

```
use rocket::State;

// In a real application, this would likely be more complex.
struct MyConfig {
    user_val: String,
}

#[get("/")]
fn index(state: &State<MyConfig>) -> String {
    format!("The config value is: {}", state.user_val)
}

#[get("/raw")]
fn raw_config_value(state: &State<MyConfig>) -> &str {
    &state.user_val
}

#[launch]
fn rocket() -> _ {
    rocket::build()
        .mount("/", routes![index, raw_config_value])
        .manage(MyConfig { user_val: "user input".to_string() })
}
```

## Within Request Guards

Because `State` is itself a request guard, managed state can be retrieved from another request guard's implementation using either `Request::guard()` or `Rocket::state()`. In the following code example, the `Item` request guard retrieves `MyConfig` from managed state:

```
use rocket::State;
use rocket::request::{self, Request, FromRequest};
use rocket::outcome::IntoOutcome;

struct Item<'r>(&'r str);

#[rocket::async_trait]
impl<'r> FromRequest<'r> for Item<'r> {
    type Error = ();

    async fn from_request(request: &'r Request<'_>) -> request::Outcome<Self, ()> {
        // Using `State` as a request guard. Use `inner()` to get an `'r`.
        let outcome = request.guard::<&State<MyConfig>>().await
            .map(|my_config| Item(&my_config.user_val));

        // Or alternatively, using `Rocket::state()`:
        let outcome = request.rocket().state::<MyConfig>()
            .map(|my_config| Item(&my_config.user_val))
            .or_forward(());

        outcome
    }
}
```

## Testing with `State`

When unit testing your application, you may find it necessary to manually construct a type of `State` to pass to your functions. To do so, use the `State::get()` static method or the `From<&T>` implementation:

```
use rocket::State;

struct MyManagedState(usize);

#[get("/")]
fn handler(state: &State<MyManagedState>) -> String {
    state.0.to_string()
}

let rocket = rocket::build().manage(MyManagedState(127));
let state = State::get(&rocket).expect("managed `MyManagedState`");
assert_eq!(handler(state), "127");

let managed = MyManagedState(77);
assert_eq!(handler(State::from(&managed)), "77");
```

# pub fn get<P: Phase>(rocket: &Rocket<P>) -> Option<&State<T>>

Returns the managed state value in `rocket` for the type `T` if it is being managed by `rocket`. Otherwise, returns `None`.

```
use rocket::State;

#[derive(Debug, PartialEq)]
struct Managed(usize);

#[derive(Debug, PartialEq)]
struct Unmanaged(usize);

let rocket = rocket::build().manage(Managed(7));

let state: Option<&State<Managed>> = State::get(&rocket);
assert_eq!(state.map(|s| s.inner()), Some(&Managed(7)));

let state: Option<&State<Unmanaged>> = State::get(&rocket);
assert_eq!(state, None);
```

# pub fn inner(&self) -> &T

Borrow the inner value.

Using this method is typically unnecessary as `State` implements `Deref` with a `Deref::Target` of `T`. This means Rocket will automatically coerce a `State<T>` to an `&T` as required. This method should only be used when a longer lifetime is required.
```
use rocket::State;

#[derive(Clone)]
struct MyConfig {
    user_val: String,
}

fn handler1<'r>(config: &State<MyConfig>) -> String {
    let config = config.inner().clone();
    config.user_val
}

// Use the `Deref` implementation which coerces implicitly
fn handler2(config: &State<MyConfig>) -> String {
    config.user_val.clone()
}
```

### impl<'r, T: Send + Sync + 'static> From<&'r T> for &'r State<T>
### impl<'r, T: Send + Sync + 'static> FromRequest<'r> for &'r State<T>
### impl<T: Ord + Send + Sync + 'static> Ord for State<T>
### impl<T: PartialEq + Send + Sync + 'static> PartialEq<State<T>> for State<T>
### impl<T: PartialOrd + Send + Sync + 'static> PartialOrd<State<T>> for State<T>
### impl<T: Send + Sync + 'static> Sentinel for &State<T>
### impl<T: Send + Sync + 'static> StructuralEq for State<T>
### impl<T> RefUnwindSafe for State<T> where T: RefUnwindSafe,
### impl<T> Send for State<T>
### impl<T> Sync for State<T>
### impl<T> Unpin for State<T> where T: Unpin,
### impl<T> UnwindSafe for State<T> where T: UnwindSafe,
### impl<T> CallHasher for T where T: Hash + ?Sized,

# Trait rocket::Phase

```
pub trait Phase: Sealed { }
```

A marker trait for Rocket's launch phases. This trait is implemented by the three phase marker types: `Build`, `Ignite`, and `Orbit`, representing the three phases to launch an instance of `Rocket`. This trait is sealed and cannot be implemented outside of Rocket.

For a description of the three phases, see `Rocket`.

# Trait rocket::Sentinel

```
pub trait Sentinel {
    // Required method
    fn abort(rocket: &Rocket<Ignite>) -> bool;
}
```

An automatic last line of defense against launching an invalid `Rocket`. A sentinel, automatically run on `ignition`, can trigger a launch abort should an instance fail to meet arbitrary conditions.

Every type that appears in a mounted route's type signature is eligible to be a sentinel. Of these, those that implement `Sentinel` have their `abort()` method invoked automatically, immediately after ignition, once for each unique type. Sentinels inspect the finalized instance of `Rocket` and can trigger a launch abort by returning `true`.

## Built-In Sentinels

The `State<T>` type is a sentinel that triggers an abort if the finalized `Rocket` instance is not managing state for type `T`. Doing so prevents run-time failures of the `State` request guard.
As an example, consider the following simple application:

```
#[get("/<id>")]
fn index(id: usize, state: &State<String>) -> Response { /* ... */ }

#[launch]
fn rocket() -> _ {
    rocket::build().mount("/", routes![index])
}
```

At ignition time, effected by the `#[launch]` attribute here, Rocket probes all types in all mounted routes for `Sentinel` implementations. In this example, the types are: `usize`, `State<String>`, and `Response`. Those that implement `Sentinel` are queried for an abort trigger via their `Sentinel::abort()` method. In this example, the sentinel types are `State` and potentially `Response`, if it implements `Sentinel`. If `abort()` returns `true`, launch is aborted with a corresponding error. In this example, launch will be aborted because state of type `String` is not being managed.

To correct the error and allow launching to proceed nominally, a value of type `String` must be managed:

```
#[launch]
fn rocket() -> _ {
    rocket::build()
        .mount("/", routes![index])
        .manage(String::from("my managed string"))
}
```

## Embedded Sentinels

Embedded types – type parameters of already eligible types – are also eligible to be sentinels. Consider the following route:

```
#[get("/")]
fn f(guard: Option<&State<String>>) -> Either<Foo, Inner<Bar>> {
    unimplemented!()
}
```

The directly eligible sentinel types, guard and responders, are:

* `Option<&State<String>>`
* `Either<Foo, Inner<Bar>>`

In addition, all embedded types are also eligible. These are:

* `&State<String>`
* `State<String>`
* `String`
* `Foo`
* `Inner<Bar>`
* `Bar`

A type, whether embedded or not, is queried if it is a `Sentinel` and none of its parent types are sentinels. Said a different way, if every directly eligible type is viewed as the root of an acyclic graph with edges between a type and its type parameters, the first `Sentinel` in breadth-first order is queried:

```
1. Option<&State<String>>   Either<Foo, Inner<Bar>>
            |                     /         \
2.    &State<String>            Foo     Inner<Bar>
            |                                |
3.     State<String>                        Bar
            |
4.        String
```

In each graph above, types are queried from top to bottom, level 1 to 4. Querying continues down paths where the parents were not sentinels. For example, if `Option` is a sentinel but `Either` is not, then querying stops for the left subgraph (`Option`) but continues for the right subgraph (`Either`).

## Limitations

Because Rocket must know which `Sentinel` implementation to query based on its written type, generally only explicitly written, resolved, concrete types are eligible to be sentinels. A typical application will only work with such types, but there are several common cases to be aware of.

### `impl Trait`

Occasionally an existential `impl Trait` may find its way into return types:

```
use rocket::response::Responder;

#[get("/")]
fn f<'r>() -> Either<impl Responder<'r, 'static>, AnotherSentinel> {
    /* ... */
}
```

Note: Rocket actively discourages using `impl Trait` in route signatures. In addition to impeding sentinel discovery, doing so decreases the ability to glean a handler's functionality based on its type signature.

The return type of the route `f` depends on its implementation. At present, it is not possible to name the underlying concrete type of an `impl Trait` at compile-time and thus not possible to determine if it implements `Sentinel`. As such, existentials are not eligible to be sentinels.

That being said, this limitation only applies per embedding: types embedded inside of an `impl Trait` are eligible. As such, in the example above, the named `AnotherSentinel` type continues to be eligible.
When possible, prefer to name all types:

```
#[get("/")]
fn f() -> Either<AbortingSentinel, AnotherSentinel> {
    /* ... */
}
```

### Aliases

Embedded sentinels made opaque by a type alias will fail to be considered; the aliased type itself is considered. In the example below, only `Result<Foo, Bar>` will be considered, while the embedded `Foo` and `Bar` will not.

```
type SomeAlias = Result<Foo, Bar>;

#[get("/")]
fn f() -> SomeAlias {
    /* ... */
}
```

Note, however, that `Option<T>` and `Debug<T>` are sentinels if `T: Sentinel`, and `Result<T, E>` and `Either<T, E>` are sentinels if both `T: Sentinel, E: Sentinel`. Thus, for these specific cases, a type alias will "consider" embeddings. Nevertheless, prefer to write concrete types when possible.

### Type Macros

It is impossible to determine, a priori, what a type macro will expand to. As such, Rocket is unable to determine which sentinels, if any, a type macro references, and thus no sentinels are discovered from type macros.

Even approximations are impossible. For example, consider the following:

```
macro_rules! MyType {
    (State<'_, u32>) => (&'_ rocket::Config)
}

#[get("/")]
fn f(guard: MyType![State<'_, u32>]) {
    /* ... */
}
```

While the `MyType![State<'_, u32>]` type appears to contain a `State` sentinel, the macro actually expands to `&'_ rocket::Config`, which is not the `State` sentinel.

Because Rocket knows the exact syntax expected by type macros that it exports, such as the typed stream macros, discovery in these macros works as expected. You should prefer not to use type macros aside from those exported by Rocket, or if necessary, restrict your use to those that always expand to types without sentinels.

## Custom Sentinels

Any type can implement `Sentinel`, and the implementation can arbitrarily inspect an ignited instance of `Rocket`. For illustration, consider the following implementation of `Sentinel` for a custom `Responder` which requires:

* state for a type `T` to be managed
* a catcher for status code `400` at base `/`

```
use rocket::{Rocket, Ignite, Sentinel};

impl Sentinel for MyResponder {
    fn abort(rocket: &Rocket<Ignite>) -> bool {
        if rocket.state::<T>().is_none() {
            return true;
        }

        if !rocket.catchers().any(|c| c.code == Some(400) && c.base == "/") {
            return true;
        }

        false
    }
}
```

If a `MyResponder` is returned by any mounted route, its `abort()` method will be invoked. If the required conditions aren't met, signaled by returning `true` from `abort()`, Rocket aborts launch.

# fn abort(rocket: &Rocket<Ignite>) -> bool

Returns `true` if launch should be aborted and `false` otherwise.

### impl<T> Sentinel for Debug<T>

A sentinel that never aborts. The `Responder` impl for `Debug` will never be called, so it's okay to not abort for failing `T: Sentinel`.

# Function rocket::build

```
pub fn build() -> Rocket<Build>
```

Creates a `Rocket` instance with the default config provider. This function is an alias for `Rocket::build()`.

# Function rocket::custom

```
pub fn custom<T: Provider>(provider: T) -> Rocket<Build>
```

Creates a `Rocket` instance with a custom config provider. This function is an alias for `Rocket::custom()`.
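As an illustrative sketch of supplying a custom provider (assuming the `figment` crate's `Figment` and `Toml` providers, which Rocket re-exports under `rocket::figment`; the `MyApp.toml` file name is hypothetical):

```
use rocket::figment::{Figment, providers::{Format, Toml}};

// Start from Rocket's defaults and layer an illustrative TOML file on top.
let figment = Figment::from(rocket::Config::default())
    .merge(Toml::file("MyApp.toml").nested());

// Equivalent to `Rocket::custom(figment)`.
let rocket = rocket::custom(figment);
```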
# Function rocket::execute

```
pub fn execute<R, F>(future: F) -> R
where
    F: Future<Output = R> + Send,
```

Executes a `future` to completion on a new tokio-based Rocket async runtime. The runtime is terminated on shutdown, and the future's resolved value is returned.

## Considerations

This function is a low-level mechanism intended to be used to execute the future returned by `Rocket::launch()` in a self-contained async runtime designed for Rocket. It runs futures in exactly the same manner as `#[launch]` and `#[main]` do and is thus never the preferred mechanism for running a Rocket application. Always prefer to use the `#[launch]` or `#[main]` attributes. For example, `#[main]` can be used even when Rocket is just a small part of a bigger application:

```
#[rocket::main]
async fn main() {
    let rocket = rocket::build();
    if should_start_server_in_foreground {
        rocket.launch().await;
    } else if should_start_server_in_background {
        rocket::tokio::spawn(rocket.launch());
    } else {
        // do something else
    }
}
```

See Rocket for more on using these attributes.

Build an instance of Rocket, launch it, and wait for shutdown:

```
use rocket::fairing::AdHoc;

let rocket = rocket::build()
    .attach(AdHoc::on_liftoff("Liftoff Printer", |_| Box::pin(async move {
        println!("Stalling liftoff for a second...");
        rocket::tokio::time::sleep(std::time::Duration::from_secs(1)).await;
        println!("And we're off!");
    })));

rocket::execute(rocket.launch());
```

Launch a pre-built instance of Rocket and wait for it to shutdown:

```
use rocket::{Rocket, Ignite, Phase, Error};

fn launch<P: Phase>(rocket: Rocket<P>) -> Result<Rocket<Ignite>, Error> {
    rocket::execute(rocket.launch())
}
```

Do async work to build an instance of Rocket, launch, and wait for shutdown:

```
// This line can also be inside of the `async` block.
let rocket = rocket::build();

rocket::execute(async move {
    let rocket = rocket.ignite().await?;
    let config = rocket.config();
    rocket.launch().await
});
```

# Attribute Macro rocket::async_test

`#[async_test]`

Retrofits support for `async fn` in unit tests. Simply decorate a test `async fn` with `#[async_test]` instead of `#[test]`:

```
#[cfg(test)]
mod tests {
    #[async_test]
    async fn test() { /* .. */ }
}
```

The attribute rewrites the function to execute inside of a Rocket-compatible async runtime.
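As a small illustration of where this is useful, the following sketch (assuming Rocket's `rocket::local::asynchronous::Client` testing API; the test body is illustrative) dispatches a local request from inside an `#[async_test]`:

```
#[cfg(test)]
mod tests {
    use rocket::http::Status;
    use rocket::local::asynchronous::Client;

    #[rocket::async_test]
    async fn dispatch_locally() {
        // An empty instance: no routes are mounted, so `/` is a 404.
        let client = Client::tracked(rocket::build()).await.unwrap();
        let response = client.get("/").dispatch().await;
        assert_eq!(response.status(), Status::NotFound);
    }
}
```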
# Attribute Macro rocket::async_trait

`#[async_trait]`

Retrofits support for `async fn` in trait impls and declarations. Any trait declaration or trait `impl` decorated with `#[async_trait]` is retrofitted with support for `async fn`s:

```
#[async_trait]
trait MyAsyncTrait {
    async fn do_async_work();
}

#[async_trait]
impl MyAsyncTrait for () {
    async fn do_async_work() { /* .. */ }
}
```

All `impl`s for a trait declared with `#[async_trait]` must themselves be decorated with `#[async_trait]`. Many of Rocket's traits, such as `FromRequest` and `Fairing`, are `async`. As such, implementations of said traits must be decorated with `#[async_trait]`. See the individual trait docs for trait-specific details. For more details on `#[async_trait]`, see `async_trait`.

# Attribute Macro rocket::catch

`#[catch]`

Attribute to generate a `Catcher` and associated metadata. This attribute can only be applied to free functions:

```
use rocket::Request;
use rocket::http::Status;

#[catch(404)]
fn not_found(req: &Request) -> String {
    format!("Sorry, {} does not exist.", req.uri())
}
```

## Grammar

The grammar for the `#[catch]` attribute is defined as:

```
catch := STATUS | 'default'

STATUS := valid HTTP status code (integer in [200, 599])
```

## Typing Requirements

The decorated function may take zero, one, or two arguments. Its type signature must be one of the following, where `R:` `Responder`:

* `fn() -> R`
* `fn(&Request) -> R`
* `fn(Status, &Request) -> R`

The attribute generates two items:

* An error `Handler`. The generated handler calls the decorated function, passing in the `Status` and `&Request` values if requested. The returned value is used to generate a `Response` via the type's `Responder` implementation.

* A static structure used by `catchers!` to generate a `Catcher`. The static structure (and resulting `Catcher`) is populated with the name (the function's name) and status code from the route attribute or `None` if `default`. The handler is set to the generated handler.

# Attribute Macro rocket::launch

`#[launch]`

Generates a `main` function that launches a returned `Rocket<Build>`. When applied to a function that returns a `Rocket<Build>` instance, `#[launch]` automatically initializes an `async` runtime and launches the function's returned instance:

```
use rocket::{Rocket, Build};

#[launch]
fn rocket() -> Rocket<Build> {
    rocket::build()
}
```

This generates code equivalent to the following:

```
#[rocket::main]
async fn main() {
    // Recall that an uninspected `Error` will cause a pretty-printed panic,
    // so rest assured failures do not go undetected when using `#[launch]`.
    let _ = rocket().launch().await;
}
```

The attributed function may also be `async`.

# Attribute Macro rocket::main

`#[main]`

Retrofits `async fn` support in `main` functions. A `main` `async fn` function decorated with `#[rocket::main]` is transformed into a regular `main` function that internally initializes a Rocket-specific tokio runtime and runs the attributed `async fn` inside of it.

It should be used only when the return values of `ignite()` or `launch()` are to be inspected:

```
let rocket = rocket.launch().await?;
println!("Welcome back, Rocket: {:?}", rocket);
```

For all other cases, use `#[launch]` instead. The function attributed with `#[rocket::main]` must be `async` and must be called `main`. Violation of either results in a compile-time error.

# Derive Macro rocket::FromForm

```
#[derive(FromForm)]
{
    // Attributes available to this derive:
    #[form]
    #[field]
}
```

Derive for the `FromForm` trait. The `FromForm` derive can be applied to structures with named or unnamed fields:

```
#[derive(FromForm)]
struct MyStruct<'r> {
    field: usize,
    #[field(name = "renamed_field")]
    #[field(name = uncased("RenamedField"))]
    other: &'r str,
    #[field(validate = range(1..), default = 3)]
    r#type: usize,
    #[field(default = None)]
    is_nice: bool,
}

#[derive(FromForm)]
#[field(validate = len(6..))]
#[field(validate = neq("password"))]
struct Password<'r>(&'r str);
```

Each field type is required to implement `FromForm`. The derive generates an implementation of the `FromForm` trait.

### Named Fields

If the structure has named fields, the implementation parses a form whose field names match the field names of the structure on which the derive was applied. Each field's value is parsed with the `FromForm` implementation of the field's type. The `FromForm` implementation succeeds only when all fields parse successfully or return a default. Errors are collected into a `form::Errors` and returned if non-empty after parsing all fields.

### Unnamed Fields

If the structure is a tuple struct, it must have exactly one field. The implementation parses a form exactly when the internal field parses a form and any `#[field]` validations succeed.

### Syntax

The derive accepts one field attribute: `field`, and one container attribute, `form`, with the following syntax:

```
field := name? default? validate*

name := 'name' '=' name_val ','?
name_val := '"' FIELD_NAME '"' | 'uncased(' '"' FIELD_NAME '"' ')'

default := 'default' '=' EXPR ','?
         | 'default_with' '=' EXPR ','?

validate := 'validate' '=' EXPR ','?
FIELD_NAME := valid field name, according to the HTML5 spec
EXPR := valid expression, as defined by Rust
```

`#[field]` can be applied any number of times on a field. `default` and `default_with` are mutually exclusive: at most one of `default` or `default_with` can be present per field.

```
#[derive(FromForm)]
struct MyStruct {
    #[field(name = uncased("number"))]
    #[field(default = 42)]
    field: usize,
    #[field(name = "renamed_field")]
    #[field(name = uncased("anotherName"))]
    #[field(validate = eq("banana"))]
    #[field(validate = neq("orange"))]
    other: String
}
```

For tuple structs, the `field` attribute can be applied to the structure itself:

```
#[derive(FromForm)]
#[field(default = 42, validate = eq(42))]
struct Meaning(usize);
```

### Field Attribute Parameters

* `name`

  A `name` attribute changes the name to match against when parsing the form field. The value is either an exact string to match against (`"foo"`), or `uncased("foo")`, which causes the match to be case-insensitive but case-preserving. When more than one `name` attribute is applied, the field will match against any of the names.

* `validate = expr`

  The validation `expr` is run if the field type parses successfully. The expression must return a value of type `Result<(), form::Errors>`. On `Err`, the errors are added to the thus-far collected errors. If more than one `validate` attribute is applied, all validations are run.

* `default = expr`

  If `expr` is not literally `None`, the parameter sets the default value of the field to be `expr.into()`. If `expr` is `None`, the parameter unsets the default value of the field, if any. The expression is only evaluated if the attributed field is missing in the incoming form. Except when `expr` is `None`, `expr` must be of type `T: Into<F>` where `F` is the field's type.

* `default_with = expr`

  The parameter sets the default value of the field to be exactly `expr`, which must be of type `Option<F>` where `F` is the field's type. If the expression evaluates to `None`, there is no default. Otherwise the value wrapped in `Some` is used. The expression is only evaluated if the attributed field is missing in the incoming form.

```
use std::num::NonZeroUsize;

#[derive(FromForm)]
struct MyForm {
    // `NonZeroUsize::new()` returns an `Option<NonZeroUsize>`.
    #[field(default_with = NonZeroUsize::new(42))]
    num: NonZeroUsize,
}
```

The derive accepts any number of type generics and at most one lifetime generic. If a type generic is present, the generated implementation will require a bound of `FromForm<'r>` for the field type containing the generic. For example, for a struct `struct Foo<T>(Json<T>)`, the bound `Json<T>: FromForm<'r>` will be added to the generated implementation.

```
use rocket::form::FromForm;
use rocket::serde::json::Json;

// The bounds `A: FromForm<'r>`, `B: FromForm<'r>` will be required.
#[derive(FromForm)]
struct FancyForm<A, B> {
    first: A,
    second: B,
}

// The bound `Json<T>: FromForm<'r>` will be required.
#[derive(FromForm)]
struct JsonToken<T> {
    token: Json<T>,
    id: usize,
}
```

If a lifetime generic is present, it is replaced with `'r` in the generated implementation `impl FromForm<'r>`:

```
// Generates `impl<'r> FromForm<'r> for MyWrapper<'r>`.
#[derive(FromForm)]
struct MyWrapper<'a>(&'a str);
```

```
use rocket::form::{self, FromForm};

// The bound `form::Result<'r, T>: FromForm<'r>` will be required.
#[derive(FromForm)]
struct SomeResult<'o, T>(form::Result<'o, T>);
```

The special bounds on `Json` and `Result` are required due to incomplete and incorrect support for lifetime generics in `async` blocks in Rust. See rust-lang/#64552 for further details.

# Trait rocket::form::FromForm

```
pub trait FromForm<'r>: Send + Sized {
    type Context: Send;

    // Required methods
    fn init(opts: Options) -> Self::Context;
    fn push_value(ctxt: &mut Self::Context, field: ValueField<'r>);
    fn push_data<'life0, 'life1, 'async_trait>(
        ctxt: &'life0 mut Self::Context,
        field: DataField<'r, 'life1>
    ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>
       where Self: 'async_trait,
             'r: 'async_trait,
             'life0: 'async_trait,
             'life1: 'async_trait;
    fn finalize(ctxt: Self::Context) -> Result<'r, Self>;

    // Provided methods
    fn push_error(_ctxt: &mut Self::Context, _error: Error<'r>) { ... }
    fn default(opts: Options) -> Option<Self> { ... }
}
```

Trait implemented by form guards: types parseable from HTTP forms.

Only form guards that are collections, that is, collect more than one form field while parsing, should implement `FromForm`. All other types should implement `FromFormField` instead, which offers a simplified interface to parsing a single form field.

For a gentle introduction to forms in Rocket, see the forms guide.

## Form Guards

A form guard is a guard that operates on form fields, typically those with a particular name prefix. Form guards validate and parse form field data via implementations of `FromForm`. In other words, a type is a form guard iff it implements `FromForm`. Form guards are used as the inner type of the `Form` data guard:

```
use rocket::form::Form;

#[post("/submit", data = "<var>")]
fn submit(var: Form<FormGuard>) { /* ... */ }
```

This trait can, and largely should, be automatically derived. When deriving `FromForm`, every field in the structure must implement `FromForm`. Form fields with the struct field's name are shifted and then pushed to the struct field's `FromForm` parser.

```
#[derive(FromForm)]
struct TodoTask<'r> {
    #[field(validate = len(1..))]
    description: &'r str,
    #[field(name = "done")]
    completed: bool
}
```

## Parsing Strategy

Form parsing is either strict or lenient, controlled by `Options::strict`. A strict parse errors when there are missing or extra fields, while a lenient parse allows both, provided there is a `default()` in the case of a missing field. Most types inherit their strategy via `FromForm::init()`, but some types like `Option` override the requested strategy. The strategy can also be overwritten manually, per-field or per-value, by using the `Strict` or `Lenient` form guard:

```
use rocket::form::{self, FromForm, Strict, Lenient};

#[derive(FromForm)]
struct TodoTask<'r> {
    strict_bool: Strict<bool>,
    lenient_inner_option: Option<Lenient<bool>>,
    strict_inner_result: form::Result<'r, Strict<bool>>,
}
```

A form guard may have a default which is used in case of a missing field when parsing is lenient. When parsing is strict, all errors, including missing fields, are propagated directly.

Rocket implements `FromForm` for many common types. As a result, most applications will never need a custom implementation of `FromForm` or `FromFormField`. Their behavior is documented in the table below.
| Type | Strategy | Default | Data | Value | Notes |
| --- | --- | --- | --- | --- | --- |
| `Strict<T>` | strict | if strict `T` | if `T` | if `T` | `T: FromForm` |
| `Lenient<T>` | lenient | if lenient `T` | if `T` | if `T` | `T: FromForm` |
| `Option<T>` | strict | `None` | if `T` | if `T` | Infallible, `T: FromForm` |
| `Result<T>` | inherit | `T::finalize()` | if `T` | if `T` | Infallible, `T: FromForm` |
| `Vec<T>` | inherit | `vec![]` | if `T` | if `T` | `T: FromForm` |
| `HashMap<K, V>` | inherit | `HashMap::new()` | if `V` | if `V` | `K: FromForm + Eq + Hash`, `V: FromForm` |
| `BTreeMap<K, V>` | inherit | `BTreeMap::new()` | if `V` | if `V` | `K: FromForm + Ord`, `V: FromForm` |
| `bool` | inherit | `false` | No | Yes | `"yes"`/`"on"`/`"true"`, `"no"`/`"off"`/`"false"` |
| (un)signed int | inherit | no default | No | Yes | `{u,i}{size,8,16,32,64,128}` |
| nonzero int | inherit | no default | No | Yes | `NonZero{I,U}{size,8,16,32,64,128}` |
| float | inherit | no default | No | Yes | `f{32,64}` |
| `&str` | inherit | no default | Yes | Yes | Percent-decoded. Data limit `string` applies. |
| `String` | inherit | no default | Yes | Yes | Exactly `&str`, but owned. Prefer `&str`. |
| IP Address | inherit | no default | No | Yes | `IpAddr`, `Ipv4Addr`, `Ipv6Addr` |
| Socket Address | inherit | no default | No | Yes | `SocketAddr`, `SocketAddrV4`, `SocketAddrV6` |
| `TempFile` | inherit | no default | Yes | Yes | Data limits apply. See `TempFile`. |
| `Capped<C>` | inherit | no default | Yes | Yes | `C` is `&str`, `String`, or `TempFile`. |
| `time::Date` | inherit | no default | No | Yes | `%F` (`YYYY-MM-DD`). HTML "date" input. |
| `time::DateTime` | inherit | no default | No | Yes | `%FT%R` or `%FT%T` (`YYYY-MM-DDTHH:MM[:SS]`) |
| `time::Time` | inherit | no default | No | Yes | `%R` or `%T` (`HH:MM[:SS]`) |

### Additional Notes

* `Vec<T>` where `T: FromForm`

  Parses a sequence of `T`'s. A new `T` is created whenever the field name's key changes or is empty; the previous `T` is finalized and errors are stored. While the key remains the same and non-empty, form values are pushed to the current `T` after being shifted. All collected errors are returned at finalization, if any, or the successfully created vector is returned.

* `HashMap<K, V>` where `K: FromForm + Eq + Hash`, `V: FromForm`

  `BTreeMap<K, V>` where `K: FromForm + Ord`, `V: FromForm`

  Parses a sequence of `(K, V)`'s. A new pair is created for every unique first index of the key. If the key has only one index (`map[index]=value`), the index itself is pushed to `K`'s parser and the remaining shifted field is pushed to `V`'s parser. If the key has two indices (`map[k:index]=value` or `map[v:index]=value`), the first index must start with `k` or `v`. If the first index starts with `k`, the shifted field is pushed to `K`'s parser. If the first index starts with `v`, the shifted field is pushed to `V`'s parser. If the first index is anything else, an error is created for the offending form field. Errors are collected as they occur. Finalization finalizes all pairs and returns errors, if any, or the map.
* `bool`

  Parses as `false` for missing values (when lenient) and case-insensitive values of `off`, `false`, and `no`. Parses as `true` for values of `on`, `true`, `yes`, and the empty value. Fails to parse otherwise.

* `time::DateTime`

  Parses a date in `%FT%R` or `%FT%T` format, that is, `YYYY-MM-DDTHH:MM` or `YYYY-MM-DDTHH:MM:SS`. This is the `"datetime-local"` HTML input type without support for the millisecond variant.

* `time::Time`

  Parses a time in `%R` or `%T` format, that is, `HH:MM` or `HH:MM:SS`. This is the `"time"` HTML input type without support for the millisecond variant.

## Push Parsing

`FromForm` describes a push-based parser for Rocket's field wire format. Fields are preprocessed into either `ValueField`s or `DataField`s which are then pushed to the parser in `FromForm::push_value()` or `FromForm::push_data()` calls, respectively. Both url-encoded forms and multipart forms are supported. All url-encoded form fields are preprocessed as `ValueField`s. Multipart form fields with Content-Types are processed as `DataField`s while those without a set Content-Type are processed as `ValueField`s. `ValueField` field names and values are percent-decoded.

Parsing is split into 3 stages. After preprocessing, the three stages are:

1. Initialization. The type sets up a context for later `push`es.

```
use rocket::form::Options;

fn init(opts: Options) -> Self::Context {
    todo!("return a context for storing parse state")
}
```

2. Push. The structure is repeatedly pushed form fields; the latest context is provided with each `push`. If the structure contains children, it uses the first `key()` to identify a child to which it then `push`es the remaining `field`, likely with a `shift()`ed name. Otherwise, the structure parses the `value` itself. The context is updated as needed.

```
use rocket::form::{ValueField, DataField};

fn push_value(ctxt: &mut Self::Context, field: ValueField<'r>) {
    todo!("modify context as necessary for `field`")
}

async fn push_data(ctxt: &mut Self::Context, field: DataField<'r, '_>) {
    todo!("modify context as necessary for `field`")
}
```

3. Finalization. The structure is informed that there are no further fields. It systemizes the effects of previous `push`es via its context to return a parsed structure or generate `Errors`.

```
use rocket::form::Result;

fn finalize(ctxt: Self::Context) -> Result<'r, Self> {
    todo!("inspect context to generate `Self` or `Errors`")
}
```

These three stages make up the entirety of the `FromForm` trait.

### Nesting and `NameView`

Each field name key typically identifies a unique child of a structure. As such, when processed left-to-right, the keys of a field jointly identify a unique leaf of a structure. The value of the field typically represents the desired value of the leaf.

A `NameView` captures and simplifies this "left-to-right" processing of a field's name by exposing a sliding-prefix view into a name. A `shift()` shifts the view one key to the right. Thus, a `Name` of `a.b.c` when viewed through a new `NameView` is `a`. Shifted once, the view is `a.b`. `key()` returns the last (or "current") key in the view. A nested structure can thus handle a field with a `NameView`, operate on the `key()`, `shift()` the `NameView`, and pass the field with the shifted `NameView` to the next processor, which handles `b` and so on.
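A small sketch of the sliding view just described (assuming the public `Name` and `NameView` constructors in `rocket::form::name`):

```
use rocket::form::name::{Name, NameView};

// A view into `a.b.c` starts at the first key, `a`...
let mut view = NameView::new(Name::new("a.b.c"));
assert_eq!(view.key().map(|k| k.as_str()), Some("a"));

// ...and `shift()` slides the view one key to the right, to `a.b`,
// whose current key is `b`.
view.shift();
assert_eq!(view.key().map(|k| k.as_str()), Some("b"));
```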
### A Simple Example

The following example uses `f1=v1&f2=v2` to illustrate field/value pairs `(f1, v1)` and `(f2, v2)`. This is the same encoding used to send HTML forms over HTTP, though Rocket's push-parsers are unaware of any specific encoding, dealing only with logical `field`s, `index`es, and `value`s.

# A Single Field (`T: FromFormField`)

The simplest example parses a single value of type `T` from a string with an optional default value: this is `impl<T: FromFormField> FromForm for T`.

1. Initialization. The context stores form options and an `Option` of `Result<T, form::Error>` for storing the `result` of parsing `T`, which is initially set to `None`.

```
use rocket::form::{self, FromFormField};

struct Context<'r, T: FromFormField<'r>> {
    opts: form::Options,
    result: Option<form::Result<'r, T>>,
}

fn init(opts: form::Options) -> Context<'r, T> {
    Context { opts, result: None }
}
```

2. Push. If `ctxt.result` is `None`, `T` is parsed from `field`, and the result is stored in `context.result`. Otherwise a field has already been parsed and nothing is done.

```
fn push_value(ctxt: &mut Context<'r, T>, field: ValueField<'r>) {
    if ctxt.result.is_none() {
        ctxt.result = Some(T::from_value(field));
    }
}
```

3. Finalization. If `ctxt.result` is `None`, parsing is lenient, and `T` has a default, the default is returned. Otherwise a `Missing` error is returned. If `ctxt.result` is `Some(v)`, the result `v` is returned.

```
fn finalize(ctxt: Context<'r, T>) -> form::Result<'r, T> {
    match ctxt.result {
        Some(result) => result,
        None if ctxt.opts.strict => Err(Errors::from(ErrorKind::Missing)),
        None => match T::default() {
            Some(default) => Ok(default),
            None => Err(Errors::from(ErrorKind::Missing)),
        }
    }
}
```

This implementation is complete except for the following details:

* handling both `push_data` and `push_value`
* checking for duplicate pushes when parsing is `strict`
* tracking the field's name and value to generate a complete `Error`

Implementing `FromForm` should be a rare occurrence. Prefer instead to use Rocket's built-in derivation or, for custom types, implementing `FromFormField`.

An implementation of `FromForm` consists of implementing the three stages outlined above. `FromForm` is an async trait, so implementations must be decorated with `#[rocket::async_trait]`:

```
use rocket::form::{self, FromForm, DataField, ValueField};

#[rocket::async_trait]
impl<'r> FromForm<'r> for MyType {
    type Context = MyContext;

    fn init(opts: form::Options) -> Self::Context {
        todo!()
    }

    fn push_value(ctxt: &mut Self::Context, field: ValueField<'r>) {
        todo!()
    }

    async fn push_data(ctxt: &mut Self::Context, field: DataField<'r, '_>) {
        todo!()
    }

    fn finalize(this: Self::Context) -> form::Result<'r, Self> {
        todo!()
    }
}
```

The lifetime `'r` corresponds to the lifetime of the request.

### A More Involved Example

We illustrate implementation of `FromForm` through an example. The example implements `FromForm` for a `Pair(A, B)` type where `A: FromForm` and `B: FromForm`, parseable from forms with at least two fields, one with a key of `0` and the other with a key of `1`. The field with key `0` is parsed as an `A` while the field with key `1` is parsed as a `B`.
Specifically, to parse a `Pair(A, B)` from a field with prefix `pair`, a form with the following fields must be submitted:

* `pair[0]` - type `A`
* `pair[1]` - type `B`

Examples include:

* a form with `pair[0]` and `pair[1]` fields, parsed as `Pair(&str, usize)` or as `Pair(&str, &str)`
* `pair[0]=2012-10-12&pair[1]=100` as `Pair(time::Date, &str)`
* `pair.0=2012-10-12&pair.1=100` as `Pair(time::Date, usize)`

```
use either::Either;
use rocket::form::{self, FromForm, ValueField, DataField, Error, Errors};

/// A form guard parseable from fields `.0` and `.1`.
struct Pair<A, B>(A, B);

// The parsing context. We'll be pushing fields with key `.0` to `left`
// and fields with `.1` to `right`. We'll collect errors along the way.
struct PairContext<'v, A: FromForm<'v>, B: FromForm<'v>> {
    left: A::Context,
    right: B::Context,
    errors: Errors<'v>,
}

#[rocket::async_trait]
impl<'v, A: FromForm<'v>, B: FromForm<'v>> FromForm<'v> for Pair<A, B> {
    type Context = PairContext<'v, A, B>;

    // We initialize the `PairContext` as expected.
    fn init(opts: form::Options) -> Self::Context {
        PairContext {
            left: A::init(opts),
            right: B::init(opts),
            errors: Errors::new()
        }
    }

    // For each value, we determine if the key is `.0` (left) or `.1`
    // (right) and push to the appropriate parser. If it was neither, we
    // store the error for emission on finalization. The parsers for `A`
    // and `B` will handle duplicate values and so on.
    fn push_value(ctxt: &mut Self::Context, field: ValueField<'v>) {
        match ctxt.context(field.name) {
            Ok(Either::Left(ctxt)) => A::push_value(ctxt, field.shift()),
            Ok(Either::Right(ctxt)) => B::push_value(ctxt, field.shift()),
            Err(e) => ctxt.errors.push(e),
        }
    }

    // This is identical to `push_value` but for data fields.
    async fn push_data(ctxt: &mut Self::Context, field: DataField<'v, '_>) {
        match ctxt.context(field.name) {
            Ok(Either::Left(ctxt)) => A::push_data(ctxt, field.shift()).await,
            Ok(Either::Right(ctxt)) => B::push_data(ctxt, field.shift()).await,
            Err(e) => ctxt.errors.push(e),
        }
    }

    // Finally, we finalize `A` and `B`. If both returned `Ok` and we
    // encountered no errors during the push phase, we return our pair. If
    // there were errors, we return them. If `A` and/or `B` failed, we
    // return the accumulated errors.
    fn finalize(mut ctxt: Self::Context) -> form::Result<'v, Self> {
        match (A::finalize(ctxt.left), B::finalize(ctxt.right)) {
            (Ok(l), Ok(r)) if ctxt.errors.is_empty() => Ok(Pair(l, r)),
            (Ok(_), Ok(_)) => Err(ctxt.errors),
            (left, right) => {
                if let Err(e) = left { ctxt.errors.extend(e); }
                if let Err(e) = right { ctxt.errors.extend(e); }
                Err(ctxt.errors)
            }
        }
    }
}

impl<'v, A: FromForm<'v>, B: FromForm<'v>> PairContext<'v, A, B> {
    // Helper method used by `push_{value, data}`. Determines which context
    // we should push to based on the field name's key. If the key is
    // neither `0` nor `1`, we return an error.
    fn context(
        &mut self,
        name: form::name::NameView<'v>
    ) -> Result<Either<&mut A::Context, &mut B::Context>, Error<'v>> {
        use std::borrow::Cow;

        match name.key().map(|k| k.as_str()) {
            Some("0") => Ok(Either::Left(&mut self.left)),
            Some("1") => Ok(Either::Right(&mut self.right)),
            _ => Err(Error::from(&[Cow::Borrowed("0"), Cow::Borrowed("1")])
                .with_entity(form::error::Entity::Index(0))
                .with_name(name)),
        }
    }
}
```
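As a hypothetical usage sketch, with the `Pair` implementation above in scope, `Pair` can appear as a named field of a derived form guard; the `Submission` struct and route below are illustrative names:

```
use rocket::form::Form;

// Matches forms such as those listed above, with `pair[0]` and `pair[1]`
// fields nested under the `pair` prefix.
#[derive(FromForm)]
struct Submission<'r> {
    pair: Pair<&'r str, usize>,
}

#[post("/submit", data = "<form>")]
fn submit(form: Form<Submission<'_>>) { /* ... */ }
```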
## Required Methods

# fn push_value(ctxt: &mut Self::Context, field: ValueField<'r>)

Processes the value field `field`.

# fn push_data(ctxt: &mut Self::Context, field: DataField<'r, '_>)

Processes the data field `field`.

## Provided Methods

# fn push_error(_ctxt: &mut Self::Context, _error: Error<'r>)

Processes the external form or field error `_error`. The default implementation does nothing, which is always correct.

# fn default(opts: Options) -> Option<Self>

Returns a default value, if any, to use when a value is desired and parsing fails. The default implementation initializes `Self` with `opts` and finalizes immediately, returning the value if finalization succeeds. This is always correct and should likely not be changed. Returning a different value may result in ambiguous parses.

# Derive Macro rocket::FromFormField

```
#[derive(FromFormField)]
{
    // Attributes available to this derive:
    #[field]
}
```

Derive for the `FromFormField` trait. The `FromFormField` derive can be applied to enums with nullary (zero-length) fields:

```
#[derive(FromFormField)]
enum MyValue {
    First,
    Second,
    Third,
}
```

The derive generates an implementation of the `FromFormField` trait for the decorated `enum`. The implementation returns successfully when the form value matches, case insensitively, the stringified version of a variant's name, returning an instance of said variant. If there is no match, an error recording all of the available options is returned.

As an example, for the `enum` above, the form values `"first"`, `"FIRST"`, `"fiRSt"`, and so on would parse as `MyValue::First`, while `"second"` and `"third"` (in any casing) would parse as `MyValue::Second` and `MyValue::Third`, respectively.

The `field` field attribute can be used to change the string value that is compared against for a given variant:

```
#[derive(FromFormField)]
enum MyValue {
    First,
    Second,
    #[field(value = "fourth")]
    #[field(value = "fifth")]
    Third,
}
```

When more than one `value` is specified, matching any value will result in parsing the decorated variant. Declaring any two values that are case-insensitively equal to any other value or variant name is a compile-time error. The `#[field]` attribute's grammar is:

```
field := 'value' '=' STRING_LIT

STRING_LIT := any valid string literal, as defined by Rust
```

The attribute accepts a single string parameter of name `value` corresponding to the string to use to match against for the decorated variant. In the example above, the strings `"fourth"`, `"FOUrth"`, `"fiFTH"`, and so on would parse as `MyValue::Third`.

# Trait rocket::form::FromFormField

```
pub trait FromFormField<'v>: Send + Sized {
    // Provided methods
    fn from_value(field: ValueField<'v>) -> Result<'v, Self> { ... }
    fn from_data<'life0, 'async_trait>(
        field: DataField<'v, 'life0>
    ) -> Pin<Box<dyn Future<Output = Result<'v, Self>> + Send + 'async_trait>>
       where Self: 'async_trait,
             'v: 'async_trait,
             'life0: 'async_trait { ... }
    fn default() -> Option<Self> { ... }
}
```

Implied form guard (`FromForm`) for parsing a single form field. Types that implement `FromFormField` automatically implement `FromForm` via a blanket implementation. As such, all `FromFormField` types are form guards and can appear as the type of values in derived `FromForm` struct fields:

```
#[derive(FromForm)]
struct Person<'r> {
    name: &'r str,
    age: u16
}
```

`FromFormField` can be derived for C-like enums, where the generated implementation case-insensitively parses fields with values equal to the name of the variant or the value in `field(value = "...")`.
```
/// Fields with value `"simple"` parse as `Kind::Simple`. Fields with value
/// `"fancy"` parse as `Kind::SoFancy`.
#[derive(FromFormField)]
enum Kind {
    Simple,
    #[field(value = "fancy")]
    SoFancy,
}
```

See `FromForm` for a list of all form guards, including those implemented via `FromFormField`.

Implementing `FromFormField` requires implementing one or both of `from_value` or `from_data`, depending on whether the type can be parsed from a value field (text) and/or streaming binary data. Typically, a value can be parsed from either, directly or by using request-local cache as an intermediary, and parsing from both should be preferred when sensible.

`FromFormField` is an async trait, so implementations must be decorated with `#[rocket::async_trait]`:

```
use rocket::form::{self, DataField, FromFormField, ValueField};

#[rocket::async_trait]
impl<'r> FromFormField<'r> for MyType {
    fn from_value(field: ValueField<'r>) -> form::Result<'r, Self> {
        todo!("parse from a value or use default impl")
    }

    async fn from_data(field: DataField<'r, '_>) -> form::Result<'r, Self> {
        todo!("parse from a value or use default impl")
    }
}
```

The following example parses a custom `Person` type with the format `$name:$data`, where `$name` is expected to be a string and `$data` is expected to be any slice of bytes:

```
use memchr::memchr;

use rocket::data::ToByteUnit;
use rocket::form::{self, DataField, FromFormField, ValueField};

struct Person<'r> {
    name: &'r str,
    data: &'r [u8]
}

#[rocket::async_trait]
impl<'r> FromFormField<'r> for Person<'r> {
    fn from_value(field: ValueField<'r>) -> form::Result<'r, Self> {
        match field.value.find(':') {
            Some(i) => Ok(Person {
                name: &field.value[..i],
                data: field.value[(i + 1)..].as_bytes()
            }),
            None => Err(form::Error::validation("does not contain ':'"))?
        }
    }

    async fn from_data(field: DataField<'r, '_>) -> form::Result<'r, Self> {
        // Retrieve the configured data limit or use `256KiB` as default.
        let limit = field.request.limits()
            .get("person")
            .unwrap_or(256.kibibytes());

        // Read the capped data stream, returning a limit error as needed.
        let bytes = field.data.open(limit).into_bytes().await?;
        if !bytes.is_complete() {
            Err((None, Some(limit)))?;
        }

        // Store the bytes in request-local cache and split at ':'.
        let bytes = bytes.into_inner();
        let bytes = rocket::request::local_cache!(field.request, bytes);
        let (raw_name, data) = match memchr(b':', bytes) {
            Some(i) => (&bytes[..i], &bytes[(i + 1)..]),
            None => Err(form::Error::validation("does not contain ':'"))?
        };

        // Try to parse the name as UTF-8 or return an error if it fails.
        let name = std::str::from_utf8(raw_name)?;
        Ok(Person { name, data })
    }
}
```

```
use rocket::form::{Form, FromForm};

// The type can be used directly, if only one field is expected...
#[post("/person", data = "<person>")]
fn person(person: Form<Person<'_>>) { /* ... */ }

// ...or as a named field in another form guard...
#[derive(FromForm)]
struct NewPerson<'r> {
    person: Person<'r>
}

#[post("/person", data = "<person>")]
fn new_person(person: Form<NewPerson<'_>>) { /* ... */ }
```

# fn from_value(field: ValueField<'v>) -> Result<'v, Self>

Parses a value of `Self` from a form value field. The default implementation returns an error produced by `ValueField::unexpected()`.

# fn from_data(field: DataField<'v, '_>) -> Result<'v, Self>

Parses a value of `Self` from a form data field. The default implementation returns an error produced by `DataField::unexpected()`.

# fn default() -> Option<Self>

Returns a default value, if any exists, to be used during lenient parsing when the form field is missing. A return value of `None` means that the field is required to exist and parse successfully, always. A return value of `Some(default)` means that `default` should be used when a field is missing. The default implementation returns `None`.
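As a sketch of `default()` in practice, consider a hypothetical `Toggle` guard that parses checkbox-style values and, under lenient parsing, falls back to `false` when the field is missing:

```
use rocket::form::{self, FromFormField, ValueField};

struct Toggle(bool);

#[rocket::async_trait]
impl<'v> FromFormField<'v> for Toggle {
    fn from_value(field: ValueField<'v>) -> form::Result<'v, Self> {
        // Treat a literal "on" as true and anything else as false.
        Ok(Toggle(field.value == "on"))
    }

    fn default() -> Option<Self> {
        // A missing field parses as an "off" toggle when lenient.
        Some(Toggle(false))
    }
}
```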
Implementations of `FromFormField` are provided for the following types:

* Strings and bytes: `&'v str`, `String`, `Cow<'v, str>`, `&'v [u8]`
* Booleans: `bool`
* Integers: `i8`, `i16`, `i32`, `i64`, `i128`, `isize`, `u8`, `u16`, `u32`, `u64`, `u128`, `usize`
* Non-zero integers: `NonZero{I,U}{size,8,16,32,64,128}`
* Floats: `f32`, `f64`
* Network addresses: `IpAddr`, `Ipv4Addr`, `Ipv6Addr`, `SocketAddr`, `SocketAddrV4`, `SocketAddrV6`
* Dates and times: `Date`, `Time`, `PrimitiveDateTime`
* Files and capped values: `TempFile<'v>`, `Capped<&'v str>`, `Capped<String>`, `Capped<Cow<'v, str>>`, `Capped<&'v [u8]>`, `Capped<TempFile<'v>>`
* `Uuid`
* `Json<T>` and `MsgPack<T>` where `T: Deserialize<'v> + Send` (`MsgPack` is `msgpack` only)
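As a minimal sketch combining several of the built-in implementations above in a derived form guard (the struct, field names, and route are illustrative):

```
use rocket::form::Form;
use rocket::fs::TempFile;

#[derive(FromForm)]
struct Application<'r> {
    name: &'r str,        // percent-decoded text
    age: u8,              // integer parsed from a value field
    subscribed: bool,     // "yes"/"on"/"true" and friends
    resume: TempFile<'r>, // streamed data; data limits apply
}

#[post("/apply", data = "<form>")]
fn apply(form: Form<Application<'_>>) { /* ... */ }
```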
# Derive Macro rocket::Responder

```
#[derive(Responder)]
{
    // Attributes available to this derive:
    #[response]
}
```

Derive for the `Responder` trait. The `Responder` derive can be applied to enums and structs with named fields. When applied to enums, variants must have at least one field. When applied to structs, the struct must have at least one field.

```
#[derive(Responder)]
enum MyResponderA {
    A(String),
    B(File, ContentType),
}

#[derive(Responder)]
struct MyResponderB {
    inner: OtherResponder,
    header: ContentType,
}
```

The derive generates an implementation of the `Responder` trait for the decorated enum or structure. The derive uses the first field of a variant or structure to generate a `Response`. As such, the type of the first field must implement `Responder`. The remaining fields of a variant or structure are set as headers in the produced `Response` using `Response::set_header()`. As such, every other field (unless explicitly ignored, explained next) must implement `Into<Header>`.

Except for the first field, fields decorated with `#[response(ignore)]` are ignored by the derive:

```
#[derive(Responder)]
enum MyResponder {
    A(String),
    B(File, ContentType, #[response(ignore)] Other),
}

#[derive(Responder)]
struct MyOtherResponder {
    inner: NamedFile,
    header: ContentType,
    #[response(ignore)]
    other: Other,
}
```

Decorating the first field with `#[response(ignore)]` has no effect.

## Field Attribute

Additionally, the `response` attribute can be used on named structures and enum variants to override the status and/or content-type of the `Response` produced by the generated implementation. The `response` attribute used in these positions has the following grammar:

```
response := parameter (',' parameter)?

parameter := 'status' '=' STATUS
           | 'content_type' '=' CONTENT_TYPE

STATUS := unsigned integer >= 100 and < 600
CONTENT_TYPE := string literal, as defined by Rust, identifying a valid
                Content-Type, as defined by Rocket
```

It can be used as follows:

```
#[derive(Responder)]
enum Error {
    #[response(status = 500, content_type = "json")]
    A(String),
    #[response(status = 404)]
    B(NamedFile, ContentType),
}

#[derive(Responder)]
#[response(status = 400)]
struct MyResponder {
    inner: InnerResponder,
    header: ContentType,
    #[response(ignore)]
    other: Other,
}
```

The attribute accepts two key/value pairs: `status` and `content_type`. The value of `status` must be an unsigned integer representing a valid status code. The `Response` produced from the generated implementation will have its status overridden to this value.

The value of `content_type` must be a valid media-type in `top/sub` form or `shorthand` form. Examples include:

* `"text/html"`
* `"application/x-custom"`
* `"html"`
* `"json"`
* `"plain"`
* `"binary"`

See `ContentType::parse_flexible()` for a full list of available shorthands. The `Response` produced from the generated implementation will have its content-type overridden to this value.

The derive accepts any number of type generics and at most one lifetime generic. If a type generic is present and the generic is used in the first field of a structure, the generated implementation will require a bound of `Responder<'r, 'o>` for the field type containing the generic. In all other fields, unless ignored, a bound of `Into<Header<'o>>` is added.
For example, for a struct `struct Foo<T, H>(Json<T>, H)`, the derive adds:

* `Json<T>: Responder<'r, 'o>`
* `H: Into<Header<'o>>`

```
use rocket::serde::Serialize;
use rocket::serde::json::Json;
use rocket::http::ContentType;
use rocket::response::Responder;

// The bound `T: Responder` will be added.
#[derive(Responder)]
#[response(status = 404, content_type = "html")]
struct NotFoundHtml<T>(T);

// The bound `Json<T>: Responder` will be added.
#[derive(Responder)]
struct NotFoundJson<T>(Json<T>);

// The bounds `Json<T>: Responder, E: Responder` will be added.
#[derive(Responder)]
enum MyResult<T, E> {
    Ok(Json<T>),
    #[response(status = 404)]
    Err(E, ContentType)
}
```

If a lifetime generic is present, it will be replaced with `'o` in the generated implementation `impl Responder<'r, 'o>`:

```
// Generates `impl<'r, 'o> Responder<'r, 'o> for NotFoundHtmlString<'o>`.
#[derive(Responder)]
#[response(status = 404, content_type = "html")]
struct NotFoundHtmlString<'a>(&'a str);
```

```
#[derive(Responder)]
struct SomeResult<'o, T>(Result<T, &'o str>);
```

# Derive Macro rocket::UriDisplayPath

```
#[derive(UriDisplayPath)]
```

Derive for the `UriDisplay<Path>` trait. The `UriDisplay<Path>` derive can only be applied to tuple structs with one field:

```
#[derive(UriDisplayPath)]
struct Name(String);

#[derive(UriDisplayPath)]
struct Age(usize);
```

The derive generates an implementation of `UriDisplay<Path>` that defers to the `UriDisplay<Path>` implementation of the wrapped field.
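As a small sketch of what the derived implementation renders, using `Age` from the example above (the value `28` is illustrative; this mirrors the `Query` rendering examples later in this documentation):

```
use rocket::http::uri::fmt::{Path, UriDisplay};

let age = Age(28);
let uri_string = format!("{}", &age as &dyn UriDisplay<Path>);
assert_eq!(uri_string, "28");
```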
# Trait rocket::http::uri::fmt::UriDisplay

```
pub trait UriDisplay<P>
where
    P: Part,
{
    // Required method
    fn fmt(&self, f: &mut Formatter<'_, P>) -> Result<(), Error>;
}
```

Trait implemented by types that can be displayed as part of a URI in `uri!`. Types implementing this trait can be displayed in a URI-safe manner. Unlike `Display`, the string written by a `UriDisplay` implementation must be URI-safe. In practice, this means that the string must either be percent-encoded or consist only of characters that are alphanumeric, "-", ".", "_", or "~" - the "unreserved" characters.

## Marker Generic: `Path`, `Query`

The `Part` parameter `P` in `UriDisplay<P>` must be either `Path` or `Query` (see the `Part` documentation for how this is enforced), resulting in either `UriDisplay<Path>` or `UriDisplay<Query>`. As the names might imply, the `Path` version of the trait is used when displaying parameters in the path part of the URI while the `Query` version is used when displaying parameters in the query part of the URI. These distinct versions of the trait exist exactly to differentiate, at the type-level, where in the URI a value is to be written to, allowing for type safety in the face of differences between the two locations. For example, while it is valid to use a value of `None` in the query part, omitting the parameter entirely, doing so is not valid in the path part. By differentiating in the type system, both of these conditions can be enforced appropriately through distinct implementations of `UriDisplay<Path>` and `UriDisplay<Query>`.

Occasionally, the implementation of `UriDisplay` is independent of where the parameter is to be displayed. When this is the case, the parameter may be kept generic. That is, implementations can take the form:

```
impl<P: Part> UriDisplay<P> for SomeType
```

When the `uri!` macro is used to generate a URI for a route, the types for the route's path URI parameters must implement `UriDisplay<Path>`, while types in the route's query parameters must implement `UriDisplay<Query>`. Any parameters ignored with `_` must be of a type that implements `Ignorable`. The `UriDisplay` implementation for these types is used when generating the URI.

To illustrate `UriDisplay`'s role in code generation for `uri!`, consider the following route:

```
#[get("/item/<id>?<track>")]
fn get_item(id: i32, track: Option<String>) { /* .. */ }
```

A URI for this route can be generated as follows:

```
// With unnamed parameters.
uri!(get_item(100, Some("inbound")));

// With named parameters.
uri!(get_item(id = 100, track = Some("inbound")));
uri!(get_item(track = Some("inbound"), id = 100));

// Ignoring `track`.
uri!(get_item(100, _));
uri!(get_item(100, None as Option<String>));
uri!(get_item(id = 100, track = _));
uri!(get_item(track = _, id = 100));
uri!(get_item(id = 100, track = None as Option<&str>));
```

After verifying parameters and their types, Rocket will generate code similar (in spirit) to the following:

```
Origin::parse(&format!("/item/{}?track={}",
    &100 as &dyn UriDisplay<Path>,
    &"inbound" as &dyn UriDisplay<Query>));
```

For this expression to typecheck, `i32` must implement `UriDisplay<Path>` and `&str` must implement `UriDisplay<Query>`. What's more, when `track` is ignored, `Option<String>` is required to implement `Ignorable`. As can be seen, the implementations will be used to display the value in a URI-safe manner.

Rocket implements `UriDisplay<P>` for all `P: Part` for several built-in types:

* `i8`, `i16`, `i32`, `i64`, `i128`, `isize`, `u8`, `u16`, `u32`, `u64`, `u128`, `usize`, `f32`, `f64`, `bool`, `IpAddr`, `Ipv4Addr`, `Ipv6Addr`

  The implementation of `UriDisplay` for these types is identical to the `Display` implementation.

* `String`, `&str`, `Cow<str>`

  The string is percent encoded.

* `&T`, `&mut T` where `T: UriDisplay`

  Uses the implementation of `UriDisplay` for `T`.

Rocket implements `UriDisplay<Path>` (but not `UriDisplay<Query>`) for several built-in types:

* `T` for `Option<T>` where `T: UriDisplay<Path>`

  Uses the implementation of `UriDisplay` for `T`. When a type of `Option<T>` appears in a route path, use a type of `T` as the parameter in `uri!`. Note that `Option<T>` itself does not implement `UriDisplay<Path>`.

* `T` for `Result<T, E>` where `T: UriDisplay<Path>`

  Uses the implementation of `UriDisplay` for `T`. When a type of `Result<T, E>` appears in a route path, use a type of `T` as the parameter in `uri!`. Note that `Result<T, E>` itself does not implement `UriDisplay<Path>`.

Rocket implements `UriDisplay<Query>` (but not `UriDisplay<Path>`) for several built-in types:

* `Form<T>`, `LenientForm<T>` where `T: FromUriParam + FromForm`

  Uses the implementation of `UriDisplay` for `T::Target`. In general, when a type of `Form<T>` is to be displayed as part of a URI's query, it suffices to derive `UriDisplay` for `T`. Note that any type that can be converted into a `T` using `FromUriParam` can be used in place of a `Form<T>` in a `uri!` invocation.

* `Option<T>` where `T: UriDisplay<Query>`

  If the `Option` is `Some`, uses the implementation of `UriDisplay` for `T`. Otherwise, nothing is rendered.

* `Result<T, E>` where `T: UriDisplay<Query>`

  If the `Result` is `Ok`, uses the implementation of `UriDisplay` for `T`. Otherwise, nothing is rendered.

Manually implementing `UriDisplay` should be done with care.
For most use cases, deriving `UriDisplay` will suffice:

```
// Derives `UriDisplay<Query>`
#[derive(UriDisplayQuery)]
struct User {
    name: String,
    age: usize,
}

let user = User { name: "Michael Smith".into(), age: 31 };
let uri_string = format!("{}", &user as &dyn UriDisplay<Query>);
assert_eq!(uri_string, "name=Michael%20Smith&age=31");

// Derives `UriDisplay<Path>`
#[derive(UriDisplayPath)]
struct Name(String);

let name = Name("Bob Smith".into());
let uri_string = format!("{}", &name as &dyn UriDisplay<Path>);
assert_eq!(uri_string, "Bob%20Smith");
```

As long as every field in the structure (or enum) implements `UriDisplay`, the trait can be derived. The implementation calls `UriDisplay::fmt()` for every named field and for every unnamed field. See the `UriDisplay<Path>` and `UriDisplay<Query>` derive documentation for full details.

Implementing `UriDisplay` is similar to implementing `Display` with the caveat that extra care must be taken to ensure that the written string is URI-safe. As mentioned before, in practice, this means that the string must either be percent-encoded or consist only of characters that are alphanumeric, "-", ".", "_", or "~". When manually implementing `UriDisplay` for your types, you should defer to existing implementations of `UriDisplay` as much as possible. In the example below, for instance, `Name`'s implementation defers to `String`'s implementation. To percent-encode a string, use `Uri::percent_encode()`.

The following snippet consists of a `Name` type that implements both `FromParam` and `UriDisplay<Path>`. The `FromParam` implementation allows `Name` to be used as the target type of a dynamic parameter, while the `UriDisplay` implementation allows URIs to be generated for routes with `Name` as a dynamic path parameter type. Note the custom parsing in the `FromParam` implementation; as a result of this, a custom (reflexive) `UriDisplay` implementation is required.

```
use rocket::request::FromParam;

struct Name<'r>(&'r str);

const PREFIX: &str = "name:";

impl<'r> FromParam<'r> for Name<'r> {
    type Error = &'r str;

    /// Validates parameters that start with 'name:', extracting the text
    /// after 'name:' as long as there is at least one character.
    fn from_param(param: &'r str) -> Result<Self, Self::Error> {
        if !param.starts_with(PREFIX) || param.len() < (PREFIX.len() + 1) {
            return Err(param);
        }

        let real_name = &param[PREFIX.len()..];
        Ok(Name(real_name))
    }
}

use std::fmt;
use rocket::http::impl_from_uri_param_identity;
use rocket::http::uri::fmt::{Formatter, FromUriParam, UriDisplay, Path};
use rocket::response::Redirect;

impl UriDisplay<Path> for Name<'_> {
    // Writes the raw string `name:`, which is URI-safe, and then delegates
    // to the `UriDisplay` implementation for `str` which ensures that
    // string is written in a URI-safe manner. In this case, the string will
    // be percent encoded.
    fn fmt(&self, f: &mut Formatter<Path>) -> fmt::Result {
        f.write_raw("name:")?;
        UriDisplay::fmt(&self.0, f)
    }
}

impl_from_uri_param_identity!([Path] ('a) Name<'a>);

#[get("/name/<name>")]
fn redirector(name: Name<'_>) -> Redirect {
    Redirect::to(uri!(real(name)))
}

#[get("/<name>")]
fn real(name: Name<'_>) -> String {
    format!("Hello, {}!", name.0)
}

let uri = uri!(real(Name("Mike Smith".into())));
assert_eq!(uri.path(), "/name:Mike%20Smith");
```
The API documentation additionally lists `UriDisplay` implementations for, among others: `Path` and `PathBuf` (for the `Path` part only); the `NonZero*` integer types; `SocketAddrV4`; `Time` and `PrimitiveDateTime`; and, for the `Query` part only, `BTreeMap<K, V>`, `HashMap<K, V>`, `Vec<T>`, `Option<T>`, `Result<T, E>`, and `Json<T>` where `T: Serialize`.

Derive macro: `UriDisplayQuery`

```
#[derive(UriDisplayQuery)]
{
    // Attributes available to this derive:
    #[field]
}
```

Derive for the `UriDisplay<Query>` trait. The `UriDisplay<Query>` derive can be applied to enums and structs. When applied to an enum, the enum must have at least one variant. When applied to a struct, the struct must have at least one field.

```
#[derive(UriDisplayQuery)]
enum Kind {
    A(String),
    B(usize),
}

#[derive(UriDisplayQuery)]
struct MyStruct {
    name: String,
    id: usize,
    kind: Kind,
}
```

The derived implementation writes out every named field, using the field's name (unless overridden, explained next) as the `name` parameter, and every unnamed field in the order the fields are declared.

The derive accepts one field attribute: `field`, with the following syntax:

```
field := 'name' '=' '"' FIELD_NAME '"'
       | 'value' '=' '"' FIELD_VALUE '"'

FIELD_NAME := valid HTTP field name
FIELD_VALUE := valid HTTP field value
```

When applied to a struct, the attribute can only contain `name` and looks as follows:

```
#[derive(UriDisplayQuery)]
struct MyStruct {
    name: String,
    id: usize,
    #[field(name = "type")]
    #[field(name = "kind")]
    kind: Kind,
}
```

The `name` attribute directs the derive to use the given name in place of the structure's actual field name when writing out that field. If more than one `field` attribute is applied to a field, the first name is used. In the example above, the field `MyStruct::kind` is rendered with a name of `type`.
The attribute can also be applied to variants of C-like enums; there it may only contain `value` and looks as follows:

```
#[derive(UriDisplayQuery)]
enum Kind {
    File,
    #[field(value = "str")]
    #[field(value = "string")]
    String,
    Other
}
```

The `value` attribute directs the derive to render the given value in place of the variant's actual name. If more than one `field` attribute is applied to a variant, the first value is used. In the example above, the variant `Kind::String` will render with a value of `str`.
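Putting the two attributes together, a hedged sketch of how a derived query renders, assuming the derive composes named fields and C-like variants exactly as described above (`Filter` is a hypothetical wrapper type, and the asserted string is illustrative):

```
use rocket::http::uri::fmt::{Query, UriDisplay};

#[derive(UriDisplayQuery)]
enum Kind {
    File,
    #[field(value = "str")]
    #[field(value = "string")]
    String,
    Other,
}

// Hypothetical wrapper, for illustration only.
#[derive(UriDisplayQuery)]
struct Filter {
    kind: Kind,
}

let filter = Filter { kind: Kind::String };
let rendered = format!("{}", &filter as &dyn UriDisplay<Query>);
assert_eq!(rendered, "kind=str"); // the first `value`, "str", is used
```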
Package ‘PottsUtils’ October 12, 2022

Title: Utility Functions of the Potts Models
Version: 0.3-3
Author: <NAME> [aut, cre], <NAME> [ctb]
Description: There are three sets of functions. The first produces basic properties of a graph and generates samples from multinomial distributions to facilitate the simulation functions (they may be used for other purposes as well). The second provides various simulation functions for a Potts model in Potts, R. B. (1952) <doi:10.1017/S0305004100027419>. The third currently includes only one function, which computes the normalizing constant of a Potts model based on simulation results.
Maintainer: <NAME> <<EMAIL>>
Depends: R (>= 3.0.2)
Imports: miscF (>= 0.1-4)
License: GPL-2
NeedsCompilation: yes
Repository: CRAN
Date/Publication: 2018-02-18 21:04:47 UTC

R topics documented: BlocksGibbs, getBlocks, getConfs, getEdges, getNC, getNeighbors, getPatches, getWeights, rPotts1, SW, Wolff

BlocksGibbs: Generate Random Samples from a Potts Model Using the Checkerboard Idea

Description
Generate random samples from a Potts model by Gibbs sampling that takes advantage of conditional independence.

Usage
BlocksGibbs(n, nvertex, ncolor, neighbors, blocks, weights=1, spatialMat=NULL, beta)

Arguments
n: number of samples.
nvertex: number of vertices in a graph.
ncolor: number of colors each vertex can take.
neighbors: a matrix of all neighbors in a graph, one row per vertex.
blocks: a list of blocks of vertices in a graph.
weights: weights between neighbors, one for each corresponding neighbor in neighbors. The default values are 1s for all.
spatialMat: the matrix that describes the relationship among vertices in neighbors. The default value is NULL, corresponding to the simple or compound Potts model.
beta: the inverse-temperature parameter of the Potts model.

Details
We use the Gibbs algorithm that takes advantage of conditional independence to speed up the generation of random samples from a Potts model. The idea is that if we can divide the variables that need to be updated into different blocks such that, given the variables in the other blocks, all the variables within the same block are conditionally independent, then we can update the blocks iteratively, with the variables within the same block being updated simultaneously. The spatialMat argument is used to specify the relationship among vertices in neighbors. See rPotts1 for more information on the Potts model and spatialMat.

Value
The output is a nvertex by n matrix with the kth column being the kth sample.

References
<NAME> (2008) Bayesian Hidden Markov Normal Mixture Models with Application to MRI Tissue Classification. Ph.D. Dissertation, The University of Iowa.

See Also
Wolff, SW

Examples
#Example 1: Generate 100 samples from a repulsion Potts model with the
#           neighborhood structure corresponding to a first-order
#           Markov random field defined on a 3*3 2D graph.
#           The number of colors is 3 and beta=0.1, a_1=2, a_2=1, a_3=0.
#           All weights are equal to 1.
neighbors <- getNeighbors(mask=matrix(1, 3, 3), neiStruc=c(2,2,0,0))
blocks <- getBlocks(mask=matrix(1, 3, 3), nblock=2)
spatialMat <- matrix(c(2,1,0, 1,2,1, 0,1,2), ncol=3)
BlocksGibbs(n=100, nvertex=9, ncolor=3, neighbors=neighbors, blocks=blocks,
            spatialMat=spatialMat, beta=0.1)

getBlocks: Get Blocks of a Graph

Description
Obtain blocks of vertices of a 1D, 2D, or 3D graph, in order to use conditional independence to speed up the simulation (the checkerboard idea).
Usage
getBlocks(mask, nblock)

Arguments
mask: a vector, matrix, or 3D array specifying vertices of a graph. Vertices of value 1 are within the graph and 0 are not.
nblock: a scalar specifying the number of blocks. For a 2D graph nblock could be either 2 or 4, and for a 3D graph nblock could be either 2 or 8.

Details
The vertices within each block are mutually independent given the vertices in other blocks. Some blocks could be empty.

Value
A list with the number of components equal to nblock. Each component consists of vertices within the same block.

References
<NAME> (2005) Parallel Bayesian Computation. Handbook of Parallel Computing and Statistics, 481-512. <NAME>/CRC Press.

Examples
#Example 1: split a line into 2 blocks
getBlocks(mask=c(1,1,1,1,0,0,1,1,0), nblock=2)
#Example 2: split a 4*4 2D graph into 4 blocks in order
#           to use the checkerboard idea for a neighborhood structure
#           corresponding to the second-order Markov random field.
getBlocks(mask=matrix(1, nrow=4, ncol=4), nblock=4)
#Example 3: split a 3*3*3 3D graph into 8 blocks
#           in order to use the checkerboard idea for a neighborhood
#           structure based on the 18 neighbors definition, where the
#           neighbors of a vertex comprise its available
#           adjacencies sharing the same edges or faces.
mask <- array(1, dim=rep(3,3))
getBlocks(mask, nblock=8)

getConfs: Generate Configurations of a Graph

Description
Use a recursive method to generate all possible configurations of a graph.

Usage
getConfs(nvertex, ncolor)

Arguments
nvertex: number of vertices in a graph.
ncolor: number of colors each vertex can take.

Details
Suppose there are n vertices and each can take values from 1, 2, ..., ncolor. This function generates all possible configurations. For example, if there are two vertices and each can be either 1 or 2, then the possible configurations are (1,1), (1,2), (2,1) and (2,2).

Value
A matrix of all possible configurations. Each column corresponds to one configuration.

Examples
#Example 1: There are two vertices and each is either of
#           color 1 or 2.
getConfs(2,2)

getEdges: Get Edges of a Graph

Description
Obtain edges of a 1D, 2D, or 3D graph based on the neighborhood structure.

Usage
getEdges(mask, neiStruc)

Arguments
mask: a vector, matrix, or 3D array specifying vertices of a graph. Vertices of value 1 are within the graph and 0 are not.
neiStruc: a scalar, a vector of four components, or a 3 × 4 matrix corresponding to 1D, 2D, or 3D graphs. It specifies the neighborhood structure. See getNeighbors for details.

Details
There could be more than one way to define the same 3D neighborhood structure for a graph (see Example 4 for an illustration).

Value
A matrix of two columns with one edge per row. The edges connecting vertices and their corresponding first neighbors are listed first, then those corresponding to the second neighbors, and so on and so forth. The order of neighbors is the same as in getNeighbors.

References
<NAME> (1995) Image Analysis, Random Fields and Dynamic Monte Carlo Methods. Springer-Verlag.
<NAME> (2008) Bayesian Hidden Markov Normal Mixture Models with Application to MRI Tissue Classification. Ph.D. Dissertation, The University of Iowa.

Examples
#Example 1: get all edges of a 1D graph.
mask <- c(0,0,rep(1,4),0,1,1,0,0)
getEdges(mask, neiStruc=2)
#Example 2: get all edges of a 2D graph based on neighborhood structure
#           corresponding to the first-order Markov random field.
mask <- matrix(1, nrow=2, ncol=3)
getEdges(mask, neiStruc=c(2,2,0,0))
#Example 3: get all edges of a 2D graph based on neighborhood structure
#           corresponding to the second-order Markov random field.
mask <- matrix(1, nrow=3, ncol=3)
getEdges(mask, neiStruc=c(2,2,2,2))
#Example 4: get all edges of a 3D graph based on the 6 neighbors structure
#           where the neighbors of a vertex comprise its available
#           N,S,E,W, upper and lower adjacencies. To achieve it, there
#           are several ways, including the two below.
mask <- array(1, dim=rep(3,3))
n61 <- matrix(c(2,2,0,0,
                0,2,0,0,
                0,0,0,0), nrow=3, byrow=TRUE)
n62 <- matrix(c(2,0,0,0,
                0,2,0,0,
                2,0,0,0), nrow=3, byrow=TRUE)
e1 <- getEdges(mask, neiStruc=n61)
e2 <- getEdges(mask, neiStruc=n62)
e1 <- e1[order(e1[,1], e1[,2]),]
e2 <- e2[order(e2[,1], e2[,2]),]
all(e1==e2)
#Example 5: get all edges of a 3D graph based on the 18 neighbors structure
#           where the neighbors of a vertex comprise its available
#           adjacencies sharing the same edges or faces.
#           To achieve it, there are several ways, including the one below.
n18 <- matrix(c(2,2,2,2,
                0,2,2,2,
                0,0,2,2), nrow=3, byrow=TRUE)
mask <- array(1, dim=rep(3,3))
getEdges(mask, neiStruc=n18)

getNC: Calculate the Normalizing Constant of a Simple Potts Model

Description
Use the thermodynamic integration approach to calculate the normalizing constant of a simple Potts model.

Usage
getNC(beta, subbetas, nvertex, ncolor, edges, neighbors=NULL, blocks=NULL,
      algorithm=c("SwendsenWang", "Gibbs", "Wolff"), n, burn)

Arguments
beta: the inverse-temperature parameter of the Potts model.
subbetas: vector of betas used for the integration.
nvertex: number of vertices in a graph.
ncolor: number of colors each vertex can take.
edges: all edges in a graph.
neighbors: all neighbors in a graph. The default is NULL. If the sampling algorithm is "BlocksGibbs" or "Wolff", then this has to be specified.
blocks: the blocks of vertices of a graph. The default is NULL. If the sampling algorithm is "BlocksGibbs", then this has to be specified.
algorithm: a character string specifying the algorithm used to generate samples. It must be one of "SwendsenWang", "Gibbs", or "Wolff" and may be abbreviated. The default is "SwendsenWang".
n: number of iterations.
burn: number of burn-in.

Details
Use the thermodynamic integration approach to calculate the normalizing constant of a simple Potts model. See rPotts1 for more information on the simple Potts model. By the thermodynamic integration method,

$$\log C(\beta) = N \log k + \int_0^{\beta} E\bigl(U(z) \mid \beta', k\bigr)\, d\beta'$$

where $N$ is the total number of vertices (nvertex), $k$ is the number of colors (ncolor), and $U(z) = \sum_{i \sim j} I(z_i = z_j)$. Calculate $E(U(z))$ for the subbetas based on samples, and then compute the integral by numerical integration.
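As a concrete reading of the numerical integration step (the specific quadrature rule below is an illustration; the package documentation does not say which rule is used), with subbetas $0 = \beta_0 < \beta_1 < \dots < \beta_m = \beta$ and Monte Carlo estimates $\hat{U}_l \approx E(U(z) \mid \beta_l, k)$ obtained from the samples, a trapezoidal approximation of the integral gives

$$\log C(\beta) \approx N \log k + \sum_{l=1}^{m} \frac{\beta_l - \beta_{l-1}}{2}\,\bigl(\hat{U}_{l-1} + \hat{U}_{l}\bigr).$$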
Value
The corresponding normalizing constant.

References
<NAME> and <NAME> (2002) Hidden Markov Models and Disease Mapping. Journal of the American Statistical Association, vol. 97, no. 460, 1055-1070.

See Also
BlocksGibbs, SW, Wolff

Examples
## Not run:
#Example 1: Calculate the normalizing constant of a simple Potts model
#           with the neighborhood structure corresponding to a
#           first-order Markov random field defined on a
#           3*3 2D graph. The number of colors is 2 and beta=2.
#           Use 11 subbetas evenly distributed between 0 and 2.
#           The sampling algorithm is Swendsen-Wang with 10000
#           iterations and 1000 burn-in.
edges <- getEdges(mask=matrix(1,3,3), neiStruc=c(2,2,0,0))
getNC(beta=2, subbetas=seq(0,2,by=0.2), nvertex=3*3, ncolor=2,
      edges, algorithm="S", n=10000, burn=1000)
## End(Not run)

getNeighbors: Get Neighbors of All Vertices of a Graph

Description
Obtain neighbors of vertices of a 1D, 2D, or 3D graph.

Usage
getNeighbors(mask, neiStruc)

Arguments
mask: a vector, matrix, or 3D array specifying vertices within a graph. Vertices of value 1 are within the graph and 0 are not.
neiStruc: a scalar, a vector of four components, or a 3 × 4 matrix corresponding to 1D, 2D, or 3D graphs. It gives the definition of neighbors of a graph. All components of neiStruc should be nonnegative (≥ 0) even numbers. For 1D graphs, neiStruc gives the number of neighbors of each vertex. For 2D graphs, neiStruc[1] specifies the number of neighbors in the vertical direction, neiStruc[2] the horizontal direction, neiStruc[3] the north-west (NW) to south-east (SE) diagonal direction, and neiStruc[4] the south-west (SW) to north-east (NE) diagonal direction. For 3D graphs, the first row of neiStruc specifies the number of neighbors in the vertical direction, the horizontal direction, and the two diagonal directions from the 1-2 perspective, the second row the 1-3 perspective, and the third row the 2-3 perspective. The index to perspectives is represented with the leftmost subscript of the array being the smallest.

Details
There could be more than one way to define the same 3D neighborhood structure for a graph (see Example 3 for an illustration).

Value
A matrix with each row giving the neighbors of a vertex. The number of rows is equal to the number of vertices within the graph and the number of columns is the number of neighbors of each vertex.

For a 1D graph, if each vertex has two neighbors, the first column contains the neighbors on the left-hand side of the corresponding vertices and the second column those on the right-hand side. For vertices on boundaries, missing neighbors are represented by the number of vertices within the graph plus 1. When neiStruc is bigger than 2, the first two columns are the same as when neiStruc is equal to 2; the third column contains the neighbors on the left-hand side of the vertices in the first column; the fourth column contains the neighbors on the right-hand side of the vertices in the second column, and so on and so forth. Again, for vertices on boundaries, missing neighbors are represented by the number of vertices within the graph plus 1.

For a 2D graph, the index to vertices is column-wise. For each vertex, the order of neighbors is as follows: first those in the vertical direction, second the horizontal direction, third the NW to SE diagonal direction, and fourth the SW to NE diagonal direction. For each direction, the neighbors of every vertex are arranged in the same way as in a 1D graph.

For a 3D graph, the index to vertices is such that the leftmost subscript of the array moves the fastest. For each vertex, the neighbors from the 1-2 perspective appear first, then the 1-3 perspective, and finally the 2-3 perspective. For each perspective, the neighbors are arranged in the same way as in a 2D graph.

References
<NAME> (1995) Image Analysis, Random Fields and Dynamic Monte Carlo Methods. Springer-Verlag.
<NAME> (2008) Bayesian Hidden Markov Normal Mixture Models with Application to MRI Tissue Classification. Ph.D. Dissertation, The University of Iowa.

Examples
#Example 1: get all neighbors of a 1D graph.
mask <- c(0,0,rep(1,4),0,1,1,0,0,1,1,1)
getNeighbors(mask, neiStruc=2)
#Example 2: get all neighbors of a 2D graph based on neighborhood structure
#           corresponding to the second-order Markov random field.
mask <- matrix(1, nrow=2, ncol=3)
getNeighbors(mask, neiStruc=c(2,2,2,2))
#Example 3: get all neighbors of a 3D graph based on the 6 neighbors structure
#           where the neighbors of a vertex comprise its available
#           N,S,E,W, upper and lower adjacencies. To achieve it, there
#           are several ways, including the two below.
mask <- array(1, dim=rep(3,3))
n61 <- matrix(c(2,2,0,0,
                0,2,0,0,
                0,0,0,0), nrow=3, byrow=TRUE)
n62 <- matrix(c(2,0,0,0,
                0,2,0,0,
                2,0,0,0), nrow=3, byrow=TRUE)
n1 <- getNeighbors(mask, neiStruc=n61)
n2 <- getNeighbors(mask, neiStruc=n62)
n1 <- apply(n1, 1, sort)
n2 <- apply(n2, 1, sort)
all(n1==n2)
#Example 4: get all neighbors of a 3D graph based on the 18 neighbors structure
#           where the neighbors of a vertex comprise its available
#           adjacencies sharing the same edges or faces.
#           To achieve it, there are several ways, including the one below.
n18 <- matrix(c(2,2,2,2,
                0,2,2,2,
                0,0,2,2), nrow=3, byrow=TRUE)
mask <- array(1, dim=rep(3,3))
getNeighbors(mask, neiStruc=n18)

getPatches: Get Patches of a Graph

Description
Obtain patches of a graph by Rem's algorithm.

Usage
getPatches(bonds, nvertex)

Arguments
bonds: a matrix of bonds in a graph, with one bond per row.
nvertex: number of vertices in a graph.

Details
Given all bonds and the number of vertices in a graph, this function provides all patches.

Value
A list comprising all patches in a graph. Each component of the list consists of vertices within one patch.

References
<NAME> (1976) A Discipline of Programming. Englewood Cliffs, New Jersey: Prentice-Hall, Inc.

Examples
#Example 1: Find patches of a 3*3 2D graph with 6 bonds.
bonds <- matrix(c(1,2,2,5,5,6,3,6,5,8,7,8), ncol=2, byrow=TRUE)
getPatches(bonds, 9)

getWeights: Get All Weights of a Graph

Description
Obtain weights of edges of a 1D, 2D, or 3D graph based on the neighborhood structure.

Usage
getWeights(mask, neiStruc, format=1)

Arguments
mask: a vector, matrix, or 3D array specifying vertices within a graph. Vertices of value 1 are within the graph and 0 are not.
neiStruc: a scalar, a vector of four components, or a 3 × 4 matrix corresponding to 1D, 2D, or 3D graphs. It specifies the neighborhood structure. See getNeighbors for details.
format: if it is 1, then the output is a vector of weights, one for each pair of vertices in the corresponding output from getEdges. If it is 2, then the output is a matrix, one for each pair of vertices in the corresponding output from getNeighbors. The default value is 1.

Details
The weights are equal to the reciprocals of the distance between neighboring vertices.

Value
A vector of weights, one component corresponding to an edge of a graph, or a matrix of weights, one component corresponding to two neighboring vertices.

Examples
#Example 1: get all weights of a 2D graph based on neighborhood structure
#           corresponding to the first-order Markov random field.
mask <- matrix(1, nrow=2, ncol=3)
getWeights(mask, neiStruc=c(2,2,0,0))
#Example 2: get all weights of a 2D graph based on neighborhood structure
#           corresponding to the second-order Markov random field.
#           Put the weights in a matrix form corresponding to
#           neighbors of vertices.
mask <- matrix(1, nrow=3, ncol=3)
getWeights(mask, neiStruc=c(2,2,2,2), format=2)
#Example 3: get all weights of a 3D graph based on the 6 neighbors structure
#           where the neighbors of a vertex comprise its available
#           N,S,E,W, upper and lower adjacencies.
mask <- array(1, dim=rep(3,3))
n61 <- matrix(c(2,2,0,0,
                0,2,0,0,
                0,0,0,0), nrow=3, byrow=TRUE)
getWeights(mask, neiStruc=n61)

rPotts1: Generate One Random Sample from a Potts Model

Description
Generate one random sample from a Potts model with external field, by Gibbs sampling that takes advantage of conditional independence, or by the partial decoupling method.

Usage
rPotts1(nvertex, ncolor, neighbors, blocks, edges=NULL, weights=1,
        spatialMat=NULL, beta, external, colors,
        algorithm=c("Gibbs", "PartialDecoupling"))

Arguments
nvertex: number of vertices in a graph.
ncolor: number of colors each vertex can take.
neighbors: all neighbors in a graph. It is not required when using the partial decoupling method.
blocks: the blocks of vertices in a graph. It is not required when using the partial decoupling method.
edges: all edges in a graph. The default value is NULL. It is not required when using Gibbs sampling.
weights: weights between neighbors, or the $\delta_{ij}$'s in the partial decoupling method. When using Gibbs sampling, there is one for each corresponding component in neighbors. When using partial decoupling, there is one for each corresponding component in edges. The default values are 1s for all.
spatialMat: a matrix that describes the relationship among neighboring vertices. It is not required when using the partial decoupling method. The default value is NULL, corresponding to the simple or compound Potts model.
beta: the inverse-temperature parameter of the Potts model.
external: a matrix giving the values of the external field, with the number of rows equal to nvertex and the number of columns equal to ncolor.
colors: the current colors of vertices.
algorithm: a character string specifying the algorithm used to generate samples. It must be either "Gibbs" or "PartialDecoupling", and may be abbreviated. The default is "Gibbs".

Details
This function generates a random sample from a Potts model

$$p(z) = C(\beta)^{-1} \exp\Bigl\{\sum_i \alpha_i(z_i) + \beta \sum_{i \sim j} w_{ij} f(z_i, z_j)\Bigr\}$$

where $C(\beta)$ is a normalizing constant and $i \sim j$ indicates neighboring vertices. The parameter $\beta$ is called the "inverse temperature"; it determines the level of spatial homogeneity between neighboring vertices in the graph. We assume $\beta > 0$. The set $z = \{z_1, z_2, \ldots\}$ comprises the indices to the colors of all vertices. The function $f(z_i, z_j)$ determines the relationship among neighboring vertices, and the parameter $w_{ij}$ is the weight between vertices $i$ and $j$. The term $\sum_i \alpha_i(z_i)$ is called the "external field".

For the simple, the compound, and the simple repulsion Potts models, the external field is equal to 0. For the simple and the compound Potts models, $f(z_i, z_j) = I(z_i = z_j)$. The parameters $w_{ij}$ are all equal for the simple Potts model but not so for the compound model. For the repulsion Potts model, $f(z_i, z_j) = \beta_1$ if $z_i = z_j$; $f(z_i, z_j) = \beta_2$ if $|z_i - z_j| = 1$; and $f(z_i, z_j) = \beta_3$ otherwise.

The argument spatialMat is used to specify the relationship among neighboring vertices. The default value is NULL, corresponding to the simple or the compound Potts model. The component at the ith row and jth column defines the relationship when the color of a vertex is i and the color of its neighbor is j. Besides the default setup, for the simple and the compound Potts models spatialMat could also be an identity matrix.
For the repulsion Potts model, it is

$$\begin{pmatrix} a_1 & a_2 & a_3 & \cdots & a_3 \\ a_2 & a_1 & a_2 & \cdots & a_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_3 & a_3 & a_3 & \cdots & a_1 \end{pmatrix}$$

Other relationships among neighboring vertices can be specified through it as well.

Gibbs sampling can be used to generate samples from all kinds of Potts models. We use the method that takes advantage of conditional independence to speed up the simulation; see BlocksGibbs for details. The partial decoupling method can be used to generate samples from the simple Potts model plus the external field. The $\delta_{ij}$'s are specified through the argument weights.

Value
The output is a vector with the kth component being the new color of vertex k.

References
<NAME> (2008) Bayesian Hidden Markov Normal Mixture Models with Application to MRI Tissue Classification. Ph.D. Dissertation, The University of Iowa.
<NAME> (1998) Auxiliary variable methods for Markov Chain Monte Carlo with applications. Journal of the American Statistical Association, vol. 93, 585-595.

See Also
BlocksGibbs, Wolff, SW

Examples
## Not run:
neighbors <- getNeighbors(matrix(1, 16, 16), c(2,2,0,0))
blocks <- getBlocks(matrix(1, 16, 16), 2)
spatialMat <- matrix(c(2, 0, -1, 0, 2, 0, -1, 0, 2), ncol=3)
mu <- c(22, 70, 102)
sigma <- c(17, 16, 19)
count <- c(40, 140, 76)
y <- unlist(lapply(1:3, function(i) rnorm(count[i], mu[i], sigma[i])))
external <- do.call(cbind, lapply(1:3, function(i) dnorm(y, mu[i], sigma[i])))
current.colors <- rep(1:3, count)
rPotts1(nvertex=16^2, ncolor=3, neighbors=neighbors, blocks=blocks,
        spatialMat=spatialMat, beta=0.3, external=external,
        colors=current.colors, algorithm="G")
edges <- getEdges(matrix(1, 16, 16), c(2,2,0,0))
rPotts1(nvertex=16^2, ncolor=3, edges=edges, beta=0.3,
        external=external, colors=current.colors, algorithm="P")
## End(Not run)

SW: Generate Random Samples from a Compound Potts Model by the Swendsen-Wang Algorithm

Description
Generate random samples from a compound Potts model using the Swendsen-Wang algorithm.

Usage
SW(n, nvertex, ncolor, edges, weights, beta)

Arguments
n: number of samples.
nvertex: number of vertices of a graph.
ncolor: number of colors each vertex can take.
edges: edges of a graph.
weights: weights of edges, one for each corresponding component in edges. The default values are 1s for all.
beta: the inverse-temperature parameter of the Potts model.

Details
We use the Swendsen-Wang algorithm to generate random samples from a compound Potts model. See rPotts1 for more information on the compound Potts model.

Value
The output is a nvertex by n matrix with the kth column being the kth sample.

References
<NAME> and <NAME> (1987) Nonuniversal Critical Dynamics in Monte Carlo Simulations. Physical Review Letters, vol. 58, no. 2, 86-88.
<NAME> (2008) Bayesian Hidden Markov Normal Mixture Models with Application to MRI Tissue Classification. Ph.D. Dissertation, The University of Iowa.

See Also
Wolff, BlocksGibbs

Examples
#Example 1: Generate 500 samples from a Potts model with the
#           neighborhood structure corresponding to a
#           second-order Markov random field defined on a
#           2*2 2D graph. The number of colors is 2.
#           beta=0.8. All weights are equal to 1.
edges <- getEdges(mask=matrix(1, 2, 2), neiStruc=rep(2,4))
set.seed(100)
SW(n=500, nvertex=4, ncolor=2, edges, beta=0.8)

Wolff: Generate Random Samples from a Compound Potts Model by the Wolff Algorithm

Description
Generate random samples from a compound Potts model using the Wolff algorithm.

Usage
Wolff(n, nvertex, ncolor, neighbors, weights, beta)

Arguments
n: number of samples.
nvertex: number of vertices of a graph.
ncolor: number of colors each vertex can take.
neighbors: neighbors of a graph.
weights: weights between neighbors, one for each corresponding component in neighbors. The default values are 1s for all.
beta: the inverse-temperature parameter of the Potts model.

Details
We use the Wolff algorithm to generate random samples from a compound Potts model. See rPotts1 for more information on the compound Potts model.

Value
A nvertex by n matrix with the kth column being the kth sample.

References
<NAME> (1989) Collective Monte Carlo Updating for Spin Systems. Physical Review Letters, vol. 62, no. 4, 361-364.
<NAME> (2008) Bayesian Hidden Markov Normal Mixture Models with Application to MRI Tissue Classification. Ph.D. Dissertation, The University of Iowa.

See Also
SW, BlocksGibbs

Examples
#Example 1: Generate 100 samples from a Potts model with the
#           neighborhood structure corresponding to a
#           second-order Markov random field defined on a
#           3*3 2D graph. The number of colors is 2.
#           beta=0.7. All weights are equal to 1.
neighbors <- getNeighbors(mask=matrix(1, 3, 3), neiStruc=rep(2,4))
Wolff(n=100, nvertex=9, ncolor=2, neighbors, beta=0.7)
Struct bevy_scene::DynamicEntity
===

```
pub struct DynamicEntity {
    pub entity: Entity,
    pub components: Vec<Box<dyn Reflect>>,
}
```

A reflection-powered serializable representation of an entity and its components.

Fields
---

`entity: Entity`
The identifier of the entity, unique within a scene (and the world it may have been generated from). Components that reference this entity must consistently use this identifier.

`components: Vec<Box<dyn Reflect>>`
A vector of boxed components that belong to the given entity and implement the `Reflect` trait.

Auto Trait Implementations: `!RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `!UnwindSafe`. The standard rustdoc blanket implementations (`Any`, `Borrow`, `BorrowMut`, `Downcast`, `DowncastSync`, `From`, `Into`, `TryFrom`, `TryInto`, `Instrument`, `WithSubscriber`) apply to this and to every other type on this page and are not repeated below.
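A minimal sketch of constructing a `DynamicEntity` by hand (scenes are usually produced by `DynamicSceneBuilder` instead; the use of `Transform` and the raw index `0` here are illustrative assumptions):

```
use bevy_ecs::entity::Entity;
use bevy_reflect::Reflect;
use bevy_scene::DynamicEntity;
use bevy_transform::components::Transform;

let dynamic_entity = DynamicEntity {
    // Scene-local identifier; other components referencing this entity
    // must use the same identifier.
    entity: Entity::from_raw(0),
    // Any `Reflect` component can be stored, boxed.
    components: vec![Box::new(Transform::default()) as Box<dyn Reflect>],
};
```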
Struct bevy_scene::DynamicScene
===

```
pub struct DynamicScene {
    pub resources: Vec<Box<dyn Reflect>>,
    pub entities: Vec<DynamicEntity>,
}
```

A collection of serializable resources and dynamic entities. Each dynamic entity in the collection contains its own run-time defined set of components.

To spawn a dynamic scene, you can use either:

* `SceneSpawner::spawn_dynamic`
* adding the `DynamicSceneBundle` to an entity
* adding the `Handle<DynamicScene>` to an entity (the scene will only be visible if the entity already has `Transform` and `GlobalTransform` components)

Fields
---

`resources: Vec<Box<dyn Reflect>>`
`entities: Vec<DynamicEntity>`

Implementations
---

#### pub fn from_scene(scene: &Scene) -> Self

Create a new dynamic scene from a given scene.

#### pub fn from_world(world: &World) -> Self

Create a new dynamic scene from a given world.

#### pub fn write_to_world_with(&self, world: &mut World, entity_map: &mut EntityMap, type_registry: &AppTypeRegistry) -> Result<(), SceneSpawnError>

Write the resources, the dynamic entities, and their corresponding components to the given world. This method will return a `SceneSpawnError` if a type either is not registered in the provided `AppTypeRegistry` resource, or doesn't reflect the `Component` or `Resource` trait.

#### pub fn write_to_world(&self, world: &mut World, entity_map: &mut EntityMap) -> Result<(), SceneSpawnError>

Write the resources, the dynamic entities, and their corresponding components to the given world. This method will return a `SceneSpawnError` if a type either is not registered in the world's `AppTypeRegistry` resource, or doesn't reflect the `Component` trait.

#### pub fn serialize_ron(&self, registry: &TypeRegistryArc) -> Result<String, Error>

Serialize this dynamic scene into Rusty Object Notation (RON).

Trait Implementations: `Default`, `TypePath` (`type_path()` returns the fully qualified path of the underlying type; `short_type_path()` returns a short, pretty-print enabled path), `TypeUuid` (`TYPE_UUID`). Auto Trait Implementations: `!RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `!UnwindSafe`.
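A sketch of spawning a dynamic scene directly with `write_to_world` (assumes a `scene: DynamicScene` and a `world: World` whose `AppTypeRegistry` already has every reflected scene type registered):

```
use bevy_ecs::entity::EntityMap;

// Records how scene-local entity ids were remapped into the target world.
let mut entity_map = EntityMap::default();
scene
    .write_to_world(&mut world, &mut entity_map)
    .expect("every type in the scene must be registered in the world's AppTypeRegistry");
```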
Struct bevy_scene::DynamicSceneBuilder
===

```
pub struct DynamicSceneBuilder<'w> { /* private fields */ }
```

A `DynamicScene` builder, used to build a scene from a `World` by extracting some entities and resources.

Component Extraction
---

By default, all components registered with `ReflectComponent` type data in a world's `AppTypeRegistry` will be extracted. (This type data is added automatically during registration if `Reflect` is derived with the `#[reflect(Component)]` attribute.) This can be changed by specifying a filter or by explicitly allowing/denying certain components. Extraction happens immediately and uses the filter as it exists at the time of extraction.

Resource Extraction
---

By default, all resources registered with `ReflectResource` type data in a world's `AppTypeRegistry` will be extracted. (This type data is added automatically during registration if `Reflect` is derived with the `#[reflect(Resource)]` attribute.) This can be changed by specifying a filter or by explicitly allowing/denying certain resources. Extraction happens immediately and uses the filter as it exists at the time of extraction.

Entity Order
---

Extracted entities will always be stored in ascending order based on their index. This means that inserting `Entity(1v0)` then `Entity(0v0)` will always result in the entities being ordered as `[Entity(0v0), Entity(1v0)]`.
Example
---

```
let mut builder = DynamicSceneBuilder::from_world(&world);
builder.extract_entity(entity);
let dynamic_scene = builder.build();
```

Implementations
---

#### pub fn from_world(world: &'w World) -> Self

Prepare a builder that will extract entities and their components from the given `World`.

#### pub fn with_filter(&mut self, filter: SceneFilter) -> &mut Self

Specify a custom component `SceneFilter` to be used with this builder.

#### pub fn with_resource_filter(&mut self, filter: SceneFilter) -> &mut Self

Specify a custom resource `SceneFilter` to be used with this builder.

#### pub fn allow<T: Component>(&mut self) -> &mut Self

Allows the given component type, `T`, to be included in the generated scene. This method may be called multiple times for any number of components. This is the inverse of `deny`. If `T` has already been denied, then it will be removed from the denylist.

#### pub fn deny<T: Component>(&mut self) -> &mut Self

Denies the given component type, `T`, from being included in the generated scene. This method may be called multiple times for any number of components. This is the inverse of `allow`. If `T` has already been allowed, then it will be removed from the allowlist.

#### pub fn allow_all(&mut self) -> &mut Self

Updates the filter to allow all component types. This is useful for resetting the filter so that types may be selectively denied.

#### pub fn deny_all(&mut self) -> &mut Self

Updates the filter to deny all component types. This is useful for resetting the filter so that types may be selectively allowed.

#### pub fn allow_resource<T: Resource>(&mut self) -> &mut Self

Allows the given resource type, `T`, to be included in the generated scene. This method may be called multiple times for any number of resources. This is the inverse of `deny_resource`. If `T` has already been denied, then it will be removed from the denylist.

#### pub fn deny_resource<T: Resource>(&mut self) -> &mut Self

Denies the given resource type, `T`, from being included in the generated scene. This method may be called multiple times for any number of resources. This is the inverse of `allow_resource`. If `T` has already been allowed, then it will be removed from the allowlist.

#### pub fn allow_all_resources(&mut self) -> &mut Self

Updates the filter to allow all resource types. This is useful for resetting the filter so that types may be selectively denied.

#### pub fn deny_all_resources(&mut self) -> &mut Self

Updates the filter to deny all resource types. This is useful for resetting the filter so that types may be selectively allowed.

#### pub fn build(self) -> DynamicScene

Consume the builder, producing a `DynamicScene`. To make sure the dynamic scene doesn't contain entities without any components, call `Self::remove_empty_entities` before building the scene.

#### pub fn extract_entity(&mut self, entity: Entity) -> &mut Self

Extract one entity from the builder's `World`. Re-extracting an entity that was already extracted will have no effect.

#### pub fn remove_empty_entities(&mut self) -> &mut Self

Despawns all entities with no components. These were likely created because none of their components were present in the provided type registry upon extraction.

#### pub fn extract_entities(&mut self, entities: impl Iterator<Item = Entity>) -> &mut Self

Extract entities from the builder's `World`. Re-extracting an entity that was already extracted will have no effect.
To control which components are extracted, use the `allow` or `deny` helper methods. This method may be used to extract entities from a query:

```
#[derive(Component, Default, Reflect)]
#[reflect(Component)]
struct MyComponent;

let mut query = world.query_filtered::<Entity, With<MyComponent>>();

let mut builder = DynamicSceneBuilder::from_world(&world);
builder.extract_entities(query.iter(&world));
let scene = builder.build();
```

Note that components extracted from queried entities must still pass through the filter if one is set.

#### pub fn extract_resources(&mut self) -> &mut Self

Extract resources from the builder's `World`. Re-extracting a resource that was already extracted will have no effect. To control which resources are extracted, use the `allow_resource` or `deny_resource` helper methods.

```
#[derive(Resource, Default, Reflect)]
#[reflect(Resource)]
struct MyResource;

world.insert_resource(MyResource);

let mut builder = DynamicSceneBuilder::from_world(&world);
builder.extract_resources();
let scene = builder.build();
```

Auto Trait Implementations: `!RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `!UnwindSafe`.
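A sketch combining the filter and extraction methods above (assumes Bevy's `Transform` component and a populated `world`; `world.iter_entities()` is one way to enumerate all entities):

```
use bevy_transform::components::Transform;

let mut builder = DynamicSceneBuilder::from_world(&world);
builder
    .deny_all()             // reset the filter: nothing is allowed by default now
    .allow::<Transform>();  // then selectively allow a single component type
builder.extract_entities(world.iter_entities().map(|entity| entity.id()));
builder.remove_empty_entities(); // drop entities with no allowed components
let scene = builder.build();
```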
Struct bevy_scene::DynamicSceneBundle
===

```
pub struct DynamicSceneBundle {
    pub scene: Handle<DynamicScene>,
    pub transform: Transform,
    pub global_transform: GlobalTransform,
}
```

A component bundle for a `DynamicScene` root. The dynamic scene from `scene` will be spawned as a child of the entity with this component. Once it's spawned, the entity will have a `SceneInstance` component.

Fields
---

`scene: Handle<DynamicScene>`
Handle to the scene to spawn.

`transform: Transform`
`global_transform: GlobalTransform`

Trait Implementations: `Default`, `DynamicBundle`. Auto Trait Implementations: `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`.
Struct bevy_scene::InstanceId
===

```
pub struct InstanceId(/* private fields */);
```

Trait Implementations: `Clone`, `Copy`, `Debug`, `Eq`, `Hash`, `PartialEq`, `StructuralEq`, `StructuralPartialEq`. Auto Trait Implementations: `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`.
Struct bevy_scene::InstanceInfo
===

```
pub struct InstanceInfo {
    pub entity_map: EntityMap,
}
```

Information about a scene instance.

Fields
---

`entity_map: EntityMap`
Mapping of entities from the scene world to the instance world.

Trait Implementations: `Debug`. Auto Trait Implementations: `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`.
Struct bevy_scene::Scene
===

```
pub struct Scene {
    pub world: World,
}
```

To spawn a scene, you can use either:

* `SceneSpawner::spawn`
* adding the `SceneBundle` to an entity
* adding the `Handle<Scene>` to an entity (the scene will only be visible if the entity already has `Transform` and `GlobalTransform` components)

Fields
---

`world: World`

Implementations
---

#### pub fn new(world: World) -> Self

#### pub fn from_dynamic_scene(dynamic_scene: &DynamicScene, type_registry: &AppTypeRegistry) -> Result<Scene, SceneSpawnError>

Create a new scene from a given dynamic scene.

#### pub fn clone_with(&self, type_registry: &AppTypeRegistry) -> Result<Scene, SceneSpawnError>

Clone the scene. This method will return a `SceneSpawnError` if a type either is not registered in the provided `AppTypeRegistry` or doesn't reflect the `Component` trait.

#### pub fn write_to_world_with(&self, world: &mut World, type_registry: &AppTypeRegistry) -> Result<InstanceInfo, SceneSpawnError>

Write the entities and their corresponding components to the given world. This method will return a `SceneSpawnError` if a type either is not registered in the provided `AppTypeRegistry` or doesn't reflect the `Component` trait.
### impl TypePath for Scene

#### fn type_path() -> &'static str
Returns the fully qualified path of the underlying type.

#### fn short_type_path() -> &'static str
Returns a short, pretty-print enabled path to the type.

### impl TypeUuid for Scene

#### const TYPE_UUID: Uuid = _

Auto Trait Implementations
---

### impl !RefUnwindSafe for Scene
### impl Send for Scene
### impl Sync for Scene
### impl Unpin for Scene
### impl UnwindSafe for Scene

Blanket Implementations
---

The standard blanket implementations (`Any`, `Borrow<T>`, `BorrowMut<T>`, `Downcast`, `DowncastSync`, `From<T>`, `Instrument`, `Into<U>`, `TryFrom<U>`, `TryInto<U>`, `WithSubscriber`) apply as listed for `InstanceInfo` above. In addition:

### impl<T> DynamicTypePath for T where T: TypePath

#### fn reflect_type_path(&self) -> &str
See `TypePath::type_path`.

#### fn reflect_short_type_path(&self) -> &str
See `TypePath::short_type_path`.

#### fn reflect_type_ident(&self) -> Option<&str>
See `TypePath::type_ident`.

#### fn reflect_crate_name(&self) -> Option<&str>
See `TypePath::crate_name`.

#### fn reflect_module_path(&self) -> Option<&str>
See `TypePath::module_path`.

### impl<T> TypeUuidDynamic for T where T: TypeUuid

#### fn type_uuid(&self) -> Uuid
Returns the UUID associated with this value's type.

#### fn type_name(&self) -> &'static str
Returns the type name of this value's type.

### impl<T> Asset for T where T: TypeUuid + TypePath + AssetDynamic + TypeUuidDynamic

### impl<T> AssetDynamic for T where T: Send + Sync + 'static + TypeUuidDynamic
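As a rough illustration of how `Scene::new` and `write_to_world_with` fit together, here is a minimal sketch (not from the original docs; it assumes a Bevy version matching the signatures above, and that `Transform` is registered in the supplied `AppTypeRegistry`, as `DefaultPlugins` normally ensures):

```
use bevy::prelude::*;
use bevy::scene::SceneSpawnError;

// Wrap a hand-built `World` in a `Scene`, then copy its entities into a
// second world via the type registry, per `write_to_world_with` above.
fn copy_into(registry: &AppTypeRegistry) -> Result<(), SceneSpawnError> {
    let mut source = World::new();
    source.spawn(Transform::from_xyz(1.0, 2.0, 3.0));

    let scene = Scene::new(source);

    let mut target = World::new();
    // On success, `InstanceInfo::entity_map` relates scene-world entities
    // to the freshly created entities in `target`.
    let _info = scene.write_to_world_with(&mut target, registry)?;
    Ok(())
}
```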
Struct bevy_scene::SceneBundle
===

```
pub struct SceneBundle {
    pub scene: Handle<Scene>,
    pub transform: Transform,
    pub global_transform: GlobalTransform,
}
```

A component bundle for a `Scene` root.

The scene from `scene` will be spawned as a child of the entity with this component. Once it's spawned, the entity will have a `SceneInstance` component.

Fields
---

`scene: Handle<Scene>`: Handle to the scene to spawn.

`transform: Transform`

`global_transform: GlobalTransform`

Trait Implementations
---

### impl Default for SceneBundle

#### fn default() -> SceneBundle
Returns the "default value" for a type.

### impl DynamicBundle for SceneBundle

Auto Trait Implementations
---

### impl RefUnwindSafe for SceneBundle
### impl Send for SceneBundle
### impl Sync for SceneBundle
### impl Unpin for SceneBundle
### impl UnwindSafe for SceneBundle

Blanket Implementations
---

The standard blanket implementations apply as listed for `InstanceInfo` above. In addition:

### impl<T> FromWorld for T where T: Default

#### fn from_world(_world: &mut World) -> T
Creates `Self` using data from the given `World`.
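In practice the bundle is usually spawned from a startup system; a minimal sketch (the asset path is purely illustrative):

```
use bevy::prelude::*;

// Spawn a glTF scene as a child hierarchy under a fresh entity; once the
// spawner runs, that entity also receives a `SceneInstance` component.
fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn(SceneBundle {
        scene: asset_server.load("models/helmet.gltf#Scene0"),
        transform: Transform::from_xyz(0.0, 1.0, 0.0),
        ..Default::default()
    });
}
```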
Struct bevy_scene::SceneInstance
===

```
pub struct SceneInstance(/* private fields */);
```

`InstanceId` of a spawned scene. It can be used with the `SceneSpawner` to interact with the spawned scene.

Trait Implementations
---

### impl Component for SceneInstance where Self: Send + Sync + 'static

#### type Storage = TableStorage
A marker type indicating the storage type used for this component. This must be either `TableStorage` or `SparseStorage`.

### impl Deref for SceneInstance

#### type Target = InstanceId
The resulting type after dereferencing.

#### fn deref(&self) -> &Self::Target
Dereferences the value.

### impl DerefMut for SceneInstance

#### fn deref_mut(&mut self) -> &mut Self::Target
Mutably dereferences the value.

Auto Trait Implementations
---

### impl RefUnwindSafe for SceneInstance
### impl Send for SceneInstance
### impl Sync for SceneInstance
### impl Unpin for SceneInstance
### impl UnwindSafe for SceneInstance

Blanket Implementations
---

The standard blanket implementations apply as listed for `InstanceInfo` above. In addition:

### impl<C> Bundle for C where C: Component

#### fn component_ids(components: &mut Components, storages: &mut Storages, ids: &mut impl FnMut(ComponentId))

#### unsafe fn from_components<T, F>(ctx: &mut T, func: &mut F) -> C where F: for<'a> FnMut(&'a mut T) -> OwningPtr<'a, Aligned>

### impl<C> DynamicBundle for C where C: Component

#### fn get_components(self, func: &mut impl FnMut(StorageType, OwningPtr<'_, Aligned>))
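A sketch of reading the component back out (it assumes `InstanceId` is `Copy`, so `**instance` yields it by value through the `Deref` impl above):

```
use bevy::prelude::*;
use bevy::scene::SceneInstance;

// List the entities of every scene instance that has finished spawning.
fn report_ready(spawner: Res<SceneSpawner>, instances: Query<&SceneInstance>) {
    for instance in &instances {
        // `SceneInstance` derefs to the `InstanceId` used by the spawner.
        if spawner.instance_is_ready(**instance) {
            for entity in spawner.iter_instance_entities(**instance) {
                info!("scene entity: {entity:?}");
            }
        }
    }
}
```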
Struct bevy_scene::SceneSpawner
===

```
pub struct SceneSpawner { /* private fields */ }
```

Implementations
---

### impl SceneSpawner

#### pub fn spawn_dynamic(&mut self, scene_handle: Handle<DynamicScene>) -> InstanceId

#### pub fn spawn_dynamic_as_child(&mut self, scene_handle: Handle<DynamicScene>, parent: Entity) -> InstanceId

#### pub fn spawn(&mut self, scene_handle: Handle<Scene>) -> InstanceId

#### pub fn spawn_as_child(&mut self, scene_handle: Handle<Scene>, parent: Entity) -> InstanceId

#### pub fn despawn(&mut self, scene_handle: Handle<DynamicScene>)

#### pub fn despawn_instance(&mut self, instance_id: InstanceId)

#### pub fn despawn_sync(&mut self, world: &mut World, scene_handle: Handle<DynamicScene>) -> Result<(), SceneSpawnError>

#### pub fn despawn_instance_sync(&mut self, world: &mut World, instance_id: &InstanceId)

#### pub fn spawn_dynamic_sync(&mut self, world: &mut World, scene_handle: &Handle<DynamicScene>) -> Result<(), SceneSpawnError>

#### pub fn spawn_sync(&mut self, world: &mut World, scene_handle: Handle<Scene>) -> Result<InstanceId, SceneSpawnError>

#### pub fn update_spawned_scenes(&mut self, world: &mut World, scene_handles: &[Handle<DynamicScene>]) -> Result<(), SceneSpawnError>

#### pub fn despawn_queued_scenes(&mut self, world: &mut World) -> Result<(), SceneSpawnError>

#### pub fn despawn_queued_instances(&mut self, world: &mut World)

#### pub fn spawn_queued_scenes(&mut self, world: &mut World) -> Result<(), SceneSpawnError>

#### pub fn instance_is_ready(&self, instance_id: InstanceId) -> bool
Checks whether a previously spawned scene instance is ready to use.

#### pub fn iter_instance_entities(&self, instance_id: InstanceId) -> impl Iterator<Item = Entity> + '_
Get an iterator over the entities in an instance, once it's spawned. Before the scene is spawned, the iterator will be empty. Use `Self::instance_is_ready` to check if the instance is ready.

Trait Implementations
---

### impl Default for SceneSpawner

#### fn default() -> SceneSpawner
Returns the "default value" for a type.

### impl Resource for SceneSpawner where Self: Send + Sync + 'static

Auto Trait Implementations
---

### impl RefUnwindSafe for SceneSpawner
### impl Send for SceneSpawner
### impl Sync for SceneSpawner
### impl Unpin for SceneSpawner
### impl UnwindSafe for SceneSpawner

Blanket Implementations
---

The standard blanket implementations apply as listed for `InstanceInfo` above, together with `FromWorld` (via `Default`) as listed for `SceneBundle`.
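The spawner can also be driven directly as a resource instead of via `SceneBundle`; a hedged sketch using only the methods listed above (the `.scn.ron` path is illustrative):

```
use bevy::prelude::*;

// Queue a dynamic scene for spawning, keep its `InstanceId`, and later
// despawn every instance created from the same handle.
fn manage_level(mut spawner: ResMut<SceneSpawner>, asset_server: Res<AssetServer>) {
    let handle: Handle<DynamicScene> = asset_server.load("scenes/level.scn.ron");
    let _instance = spawner.spawn_dynamic(handle.clone());

    // e.g. on level change:
    spawner.despawn(handle);
}
```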
Struct bevy_scene::SceneLoader
===

```
pub struct SceneLoader { /* private fields */ }
```

Trait Implementations
---

### impl AssetLoader for SceneLoader

#### fn load<'a>(&'a self, bytes: &'a [u8], load_context: &'a mut LoadContext<'_>) -> BoxedFuture<'a, Result<()>>
Processes the asset in an asynchronous closure.

#### fn extensions(&self) -> &[&str]
Returns a list of extensions supported by this asset loader, without the preceding dot.

### impl Debug for SceneLoader

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl FromWorld for SceneLoader

#### fn from_world(world: &mut World) -> Self
Creates `Self` using data from the given `World`.

Auto Trait Implementations
---

### impl !RefUnwindSafe for SceneLoader
### impl Send for SceneLoader
### impl Sync for SceneLoader
### impl Unpin for SceneLoader
### impl !UnwindSafe for SceneLoader

Blanket Implementations
---

The standard blanket implementations apply as listed for `InstanceInfo` above.
Struct bevy_scene::ScenePlugin
===

```
pub struct ScenePlugin;
```

Trait Implementations
---

### impl Default for ScenePlugin

#### fn default() -> ScenePlugin
Returns the "default value" for a type.

### impl Plugin for ScenePlugin

#### fn build(&self, app: &mut App)
Configures the `App` to which this plugin is added.

#### fn ready(&self, _app: &App) -> bool
Has the plugin finished its setup? This can be useful for plugins that need something asynchronous to happen before they can finish their setup, like renderer initialization. Once the plugin is ready, `finish` should be called.

#### fn finish(&self, _app: &mut App)
Finish adding this plugin to the `App`, once all plugins registered are ready.
This can be useful for plugins that depend on another plugin's asynchronous setup, like the renderer.

#### fn cleanup(&self, _app: &mut App)
Runs after all plugins are built and finished, but before the app schedule is executed. This can be useful if you have some resource that other plugins need during their build step, but after build you want to remove it and send it to another thread.

#### fn name(&self) -> &str
Configures a name for the `Plugin` which is primarily used for checking plugin uniqueness and debugging.

#### fn is_unique(&self) -> bool
If the plugin can be meaningfully instantiated several times in an `App`, override this method to return `false`.

Auto Trait Implementations
---

### impl RefUnwindSafe for ScenePlugin
### impl Send for ScenePlugin
### impl Sync for ScenePlugin
### impl Unpin for ScenePlugin
### impl UnwindSafe for ScenePlugin

Blanket Implementations
---

The standard blanket implementations apply as listed for `InstanceInfo` above, together with `FromWorld` (via `Default`) as listed for `SceneBundle`.
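`ScenePlugin` normally arrives as part of `DefaultPlugins`; the sketch below adds it by hand to a headless app. This is an assumption-laden example: the exact plugin dependencies (here, `AssetPlugin`) may differ across Bevy versions.

```
use bevy::prelude::*;
use bevy::scene::ScenePlugin;

fn main() {
    App::new()
        // ScenePlugin registers scene assets and the SceneSpawner resource;
        // it relies on the asset machinery being present.
        .add_plugins((MinimalPlugins, AssetPlugin::default(), ScenePlugin))
        .run();
}
```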
Enum bevy_scene::SceneFilter
===

```
pub enum SceneFilter {
    Unset,
    Allowlist(HashSet<TypeId>),
    Denylist(HashSet<TypeId>),
}
```

A filter used to control which types can be added to a `DynamicScene`.

This scene filter *can* be used more generically to represent a filter for any given type; however, note that its intended usage with `DynamicScene` only considers components and resources. Adding types that are not a component or resource will have no effect when used with `DynamicScene`.

Variants
---

### Unset

Represents an unset filter. This is the equivalent of an empty `Denylist` or an `Allowlist` containing every type; essentially, all types are permissible. Allowing a type will convert this filter to an `Allowlist`. Similarly, denying a type will convert this filter to a `Denylist`.

### Allowlist(HashSet<TypeId>)

Contains the set of permitted types by their `TypeId`. Types not contained within this set should not be allowed to be saved to an associated `DynamicScene`.

### Denylist(HashSet<TypeId>)

Contains the set of prohibited types by their `TypeId`. Types contained within this set should not be allowed to be saved to an associated `DynamicScene`.

Implementations
---

### impl SceneFilter

#### pub fn allow_all() -> Self
Creates a filter where all types are allowed. This is the equivalent of creating an empty `Denylist`.

#### pub fn deny_all() -> Self
Creates a filter where all types are denied. This is the equivalent of creating an empty `Allowlist`.

#### pub fn allow<T: Any>(&mut self) -> &mut Self
Allow the given type, `T`. If this filter is already set as a `Denylist`, then the given type will be removed from the denied set. If this filter is `Unset`, then it will be completely replaced by a new `Allowlist`.

#### pub fn allow_by_id(&mut self, type_id: TypeId) -> &mut Self
Allow the given type. If this filter is already set as a `Denylist`, then the given type will be removed from the denied set. If this filter is `Unset`, then it will be completely replaced by a new `Allowlist`.

#### pub fn deny<T: Any>(&mut self) -> &mut Self
Deny the given type, `T`. If this filter is already set as an `Allowlist`, then the given type will be removed from the allowed set. If this filter is `Unset`, then it will be completely replaced by a new `Denylist`.

#### pub fn deny_by_id(&mut self, type_id: TypeId) -> &mut Self
Deny the given type. If this filter is already set as an `Allowlist`, then the given type will be removed from the allowed set. If this filter is `Unset`, then it will be completely replaced by a new `Denylist`.

#### pub fn is_allowed<T: Any>(&self) -> bool
Returns true if the given type, `T`, is allowed by the filter. If the filter is `Unset`, this will always return `true`.

#### pub fn is_allowed_by_id(&self, type_id: TypeId) -> bool
Returns true if the given type is allowed by the filter. If the filter is `Unset`, this will always return `true`.

#### pub fn is_denied<T: Any>(&self) -> bool
Returns true if the given type, `T`, is denied by the filter. If the filter is `Unset`, this will always return `false`.

#### pub fn is_denied_by_id(&self, type_id: TypeId) -> bool
Returns true if the given type is denied by the filter. If the filter is `Unset`, this will always return `false`.

#### pub fn iter(&self) -> Box<dyn ExactSizeIterator<Item = &TypeId> + '_>
Returns an iterator over the items in the filter. If the filter is `Unset`, this will return an empty iterator.

#### pub fn len(&self) -> usize
Returns the number of items in the filter. If the filter is `Unset`, this will always return a length of zero.
#### pub fn is_empty(&self) -> bool
Returns true if there are zero items in the filter. If the filter is `Unset`, this will always return `true`.

Trait Implementations
---

### impl Clone for SceneFilter

#### fn clone(&self) -> SceneFilter
Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl Debug for SceneFilter

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for SceneFilter

#### fn default() -> SceneFilter
Returns the "default value" for a type.

### impl IntoIterator for SceneFilter

#### type Item = TypeId
The type of the elements being iterated over.

#### type IntoIter = IntoIter<TypeId, Global>
Which kind of iterator are we turning this into?

#### fn into_iter(self) -> Self::IntoIter
Creates an iterator from a value.

### impl PartialEq<SceneFilter> for SceneFilter

#### fn eq(&self, other: &SceneFilter) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl Eq for SceneFilter
### impl StructuralEq for SceneFilter
### impl StructuralPartialEq for SceneFilter

Auto Trait Implementations
---

### impl RefUnwindSafe for SceneFilter
### impl Send for SceneFilter
### impl Sync for SceneFilter
### impl Unpin for SceneFilter
### impl UnwindSafe for SceneFilter

Blanket Implementations
---

The standard blanket implementations apply as listed for `InstanceInfo` above, together with `DynEq`, `Equivalent<K>`, `ToOwned` (via `Clone`), `TypeData`, and `FromWorld` (via `Default`), all as listed earlier in this section.
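The conversion rules above are easiest to see in code; a small sketch (assuming `Default` yields `Unset`, as the variant docs suggest):

```
use bevy::prelude::*;
use bevy::scene::SceneFilter;

fn main() {
    let mut filter = SceneFilter::default(); // Unset: everything is allowed

    // The first `deny` converts the filter into a `Denylist`.
    filter.deny::<Transform>();

    assert!(!filter.is_allowed::<Transform>());
    assert!(filter.is_allowed::<Name>());
    assert_eq!(filter.len(), 1); // one entry in the denied set
}
```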
Enum bevy_scene::SceneSpawnError
===

```
pub enum SceneSpawnError {
    UnregisteredComponent { type_name: String },
    UnregisteredResource { type_name: String },
    UnregisteredType { type_name: String },
    NonExistentScene { handle: Handle<DynamicScene> },
    NonExistentRealScene { handle: Handle<Scene> },
}
```

Variants
---

### UnregisteredComponent
Fields: `type_name: String`

### UnregisteredResource
Fields: `type_name: String`

### UnregisteredType
Fields: `type_name: String`

### NonExistentScene
Fields: `handle: Handle<DynamicScene>`

### NonExistentRealScene
Fields: `handle: Handle<Scene>`

Trait Implementations
---

### impl Debug for SceneSpawnError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for SceneSpawnError

#### fn fmt(&self, __formatter: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for SceneSpawnError

#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.

#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().

#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by `Error::source`, which can support downcasting.

#### fn provide<'a>(&'a self, request: &mut Request<'a>)
This is a nightly-only experimental API (`error_generic_member_access`). Provides type-based access to context intended for error reports.

Auto Trait Implementations
---

### impl RefUnwindSafe for SceneSpawnError
### impl Send for SceneSpawnError
### impl Sync for SceneSpawnError
### impl Unpin for SceneSpawnError
### impl UnwindSafe for SceneSpawnError

Blanket Implementations
---

The standard blanket implementations apply as listed for `InstanceInfo` above. In addition:

### impl<T> ToString for T where T: Display + ?Sized

#### default fn to_string(&self) -> String
Converts the given value to a `String`.
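Because the variants carry the offending type name or handle, the error can be matched on to produce actionable messages. A minimal sketch around `Scene::clone_with` from earlier:

```
use bevy::prelude::*;
use bevy::scene::{Scene, SceneSpawnError};

fn try_clone(scene: &Scene, registry: &AppTypeRegistry) {
    match scene.clone_with(registry) {
        Ok(_copy) => info!("scene cloned"),
        Err(SceneSpawnError::UnregisteredComponent { type_name }) => {
            warn!("register `{type_name}` in the AppTypeRegistry first");
        }
        Err(err) => warn!("scene clone failed: {err}"),
    }
}
```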
Function bevy_scene::scene_spawner
===

```
pub fn scene_spawner(
    commands: Commands<'_, '_>,
    scene_to_spawn: Query<'_, '_, (Entity, &Handle<Scene>, Option<&mut SceneInstance>), (Changed<Handle<Scene>>, Without<Handle<DynamicScene>>)>,
    dynamic_scene_to_spawn: Query<'_, '_, (Entity, &Handle<DynamicScene>, Option<&mut SceneInstance>), (Changed<Handle<DynamicScene>>, Without<Handle<Scene>>)>,
    scene_spawner: ResMut<'_, SceneSpawner>
)
```

System that will spawn scenes from `SceneBundle`.

Function bevy_scene::serialize_ron
===

```
pub fn serialize_ron<S>(serialize: S) -> Result<String, Error> where S: Serialize
```

Serialize a given Rust data structure into Rust Object Notation (RON).
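Any `serde::Serialize` value can be passed in; a self-contained sketch (it assumes serde's derive feature is enabled, and `Config` is a made-up type):

```
use bevy::scene::serialize_ron;
use serde::Serialize;

#[derive(Serialize)]
struct Config {
    name: String,
    retries: u32,
}

fn main() {
    let cfg = Config { name: "demo".into(), retries: 3 };
    // Renders the value as RON text, e.g. `(name: "demo", retries: 3)`.
    let ron_text = serialize_ron(&cfg).expect("RON serialization failed");
    println!("{ron_text}");
}
```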
Package ‘mclust’ October 31, 2022

Version 6.0.0
Date 2022-10-31
Title Gaussian Mixture Modelling for Model-Based Clustering, Classification, and Density Estimation
Description Gaussian finite mixture models fitted via EM algorithm for model-based clustering, classification, and density estimation, including Bayesian regularization, dimension reduction for visualisation, and resampling-based inference.
Depends R (>= 3.0)
Imports stats, utils, graphics, grDevices
Suggests knitr (>= 1.12), rmarkdown (>= 0.9), mix (>= 1.0), geometry (>= 0.3-6), MASS
License GPL (>= 2)
URL https://mclust-org.github.io/mclust/
VignetteBuilder knitr
Repository CRAN
ByteCompile true
NeedsCompilation yes
LazyData yes
Encoding UTF-8
Author <NAME> [aut], <NAME> [aut] (<https://orcid.org/0000-0002-6589-301X>), <NAME> [aut, cre] (<https://orcid.org/0000-0003-3826-0484>), <NAME> [ctb] (<https://orcid.org/0000-0002-5668-7046>), <NAME> [ctb] (<https://orcid.org/0000-0003-3936-2757>)
Maintainer <NAME> <<EMAIL>>
Date/Publication 2022-10-31 10:57:37 UTC

R topics documented: mclust-package, acidity, adjustedRandIndex, banknote, Baudry_etal_2010_JCGS_examples, bic, BrierScore, cdens, cdensE, cdfMclust, chevron, classError, classPriorProbs, clPairs, clustCombi, clustCombiOptim, combiPlot, combiTree, combMat, coordProj, covw, crimcoords, cross, cvMclustDA, decomp2sigma, defaultPrior, dens, densityMclust, densityMclust.diagnostic, diabetes, dmvnorm, dupPartition, em, emControl, emE, entPlot, errorBars, estep, estepE, EuroUnemployment, gmmhd, GvHD, hc, hcE, hclass, hcRandomPairs, hdrlevels, hypvol, icl, imputeData, imputePairs, logLik.Mclust, logLik.MclustDA, majorityVote, map, mapClass, Mclust, mclust-deprecated, mclust.options, mclust1Dplot, mclust2Dplot, mclustBIC, mclustBICupdate, MclustBootstrap, mclustBootstrapLRT, MclustDA, MclustDR, MclustDRsubsel, mclustICL, mclustLoglik, mclustModel, mclustModelNames, MclustSSC, mclustVariance, me, me.weighted, meE, mstep, mstepE, mvn, mvnE, nMclustParams, nVarParams, partconv, partuni, plot.clustCombi, plot.densityMclust, plot.hc, plot.Mclust, plot.mclustBIC, plot.MclustBootstrap, plot.MclustDA, plot.MclustDR, plot.mclustICL, plot.MclustSSC, predict.densityMclust, predict.Mclust, predict.MclustDA, predict.MclustDR, predict.MclustSSC, priorControl, randomOrthogonalMatrix, randProj, sigma2decomp, sim, simE, summary.Mclust, summary.mclustBIC, summary.MclustBootstrap, summary.MclustDA, summary.MclustDR, summary.MclustSSC, surfacePlot, thyroid, uncerPlot, unmap, wdbc, wreath.

mclust-package    Gaussian Mixture Modelling for Model-Based Clustering, Classification, and Density Estimation

Description

Gaussian finite mixture models estimated via EM algorithm for model-based clustering, classification, and density estimation, including Bayesian regularization and dimension reduction.

Details

For a quick introduction to mclust see the vignette A quick tour of mclust.
See also:

* Mclust for clustering;
* MclustDA for supervised classification;
* MclustSSC for semi-supervised classification;
* densityMclust for density estimation.

Author(s)

<NAME>, <NAME> and <NAME>.
Maintainer: <NAME> <<EMAIL>>

References

<NAME>., <NAME>., <NAME>. and <NAME>. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, The R Journal, 8/1, pp. 289-317.

<NAME>. and <NAME>. (2002) Model-based clustering, discriminant analysis and density estimation, Journal of the American Statistical Association, 97/458, pp. 611-631.

<NAME>., <NAME>., <NAME>. and <NAME>. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington.

Examples

# Clustering
mod1 <- Mclust(iris[,1:4])
summary(mod1)
plot(mod1, what = c("BIC", "classification"))

# Classification
data(banknote)
mod2 <- MclustDA(banknote[,2:7], banknote$Status)
summary(mod2)
plot(mod2)

# Density estimation
mod3 <- densityMclust(faithful$waiting)
summary(mod3)

acidity    Acidity data

Description

Acidity index measured in a sample of 155 lakes in the Northeastern United States. Following Crawford et al. (1992, 1994), the data are expressed as log(ANC+50), where ANC is the acidity neutralising capacity value. The data were also used to fit mixtures of Gaussian distributions by Richardson and Green (1997), and by McLachlan and Peel (2000, Sec. 6.6.2).

Usage

data(acidity)

Source

https://www.stats.bris.ac.uk/~peter/mixdata

References

<NAME>. (1994) An application of the Laplace method to finite mixture distributions. Journal of the American Statistical Association, 89, 259–267.

<NAME>., <NAME>., <NAME>., and <NAME>. (1994) Modeling lake chemistry distributions: Approximate Bayesian methods for estimating a finite mixture model. Technometrics, 34, 441–453.

<NAME>. and <NAME>. (2000) Finite Mixture Models. Wiley, New York.

<NAME>. and <NAME>. (1997) On Bayesian analysis of mixtures with unknown number of components (with discussion). Journal of the Royal Statistical Society, Series B, 59, 731–792.

adjustedRandIndex    Adjusted Rand Index

Description

Computes the adjusted Rand index comparing two classifications.

Usage

adjustedRandIndex(x, y)

Arguments

x    A numeric or character vector of class labels.
y    A numeric or character vector of class labels. The length of y should be the same as that of x.

Value

The adjusted Rand index comparing the two partitions (a scalar). This index has zero expected value in the case of random partition, and it is bounded above by 1 in the case of perfect agreement between two partitions.

References

<NAME> and <NAME> (1985) Comparing Partitions, Journal of Classification, 2, pp. 193-218.

See Also

classError, mapClass, table

Examples

a <- rep(1:3, 3)
a
b <- rep(c("A", "B", "C"), 3)
b
adjustedRandIndex(a, b)

a <- sample(1:3, 9, replace = TRUE)
a
b <- sample(c("A", "B", "C"), 9, replace = TRUE)
b
adjustedRandIndex(a, b)

a <- rep(1:3, 4)
a
b <- rep(c("A", "B", "C", "D"), 3)
b
adjustedRandIndex(a, b)

irisHCvvv <- hc(modelName = "VVV", data = iris[,-5])
cl3 <- hclass(irisHCvvv, 3)
adjustedRandIndex(cl3, iris[,5])

irisBIC <- mclustBIC(iris[,-5])
adjustedRandIndex(summary(irisBIC, iris[,-5])$classification, iris[,5])
adjustedRandIndex(summary(irisBIC, iris[,-5], G=3)$classification, iris[,5])

banknote    Swiss banknotes data

Description

The data set contains six measurements made on 100 genuine and 100 counterfeit old-Swiss 1000-franc bank notes.
Usage

data(banknote)

Format

A data frame with the following variables:

Status    the status of the banknote: genuine or counterfeit
Length    Length of bill (mm)
Left    Width of left edge (mm)
Right    Width of right edge (mm)
Bottom    Bottom margin width (mm)
Top    Top margin width (mm)
Diagonal    Length of diagonal (mm)

Source

<NAME>. and <NAME>. (1988). Multivariate Statistics: A practical approach. London: Chapman & Hall, Tables 1.1 and 1.2, pp. 5-8.

Baudry_etal_2010_JCGS_examples    Simulated Example Datasets From Baudry et al. (2010)

Description

Simulated datasets used in Baudry et al. (2010) to illustrate the proposed mixture components combining method for clustering. Please see the cited article for a detailed presentation of these datasets. The data frame with name exN.M is presented in Section N.M in the paper. Test1D (not in the article) has been simulated from a Gaussian mixture distribution in R. ex4.1 and ex4.2 have been simulated from a Gaussian mixture distribution in R^2. ex4.3 has been simulated from a mixture of a uniform distribution on a square and a spherical Gaussian distribution in R^2. ex4.4.1 has been simulated from a Gaussian mixture model in R^2. ex4.4.2 has been simulated from a mixture of two uniform distributions in R^3.

Usage

data(Baudry_etal_2010_JCGS_examples)

Format

ex4.1 is a data frame with 600 observations on 2 real variables.
ex4.2 is a data frame with 600 observations on 2 real variables.
ex4.3 is a data frame with 200 observations on 2 real variables.
ex4.4.1 is a data frame with 800 observations on 2 real variables.
ex4.4.2 is a data frame with 300 observations on 3 real variables.
Test1D is a data frame with 200 observations on 1 real variable.

References

<NAME>, <NAME>, <NAME>, <NAME> and <NAME> (2010). Combining mixture components for clustering. Journal of Computational and Graphical Statistics, 19(2):332-353.

Examples

data(Baudry_etal_2010_JCGS_examples)
output <- clustCombi(data = ex4.4.1)
output # is of class clustCombi
# plots the hierarchy of combined solutions, then some "entropy plots" which
# may help one to select the number of classes
plot(output)

bic    BIC for Parameterized Gaussian Mixture Models

Description

Computes the BIC (Bayesian Information Criterion) for parameterized mixture models given the loglikelihood, the dimension of the data, and the number of mixture components in the model.

Usage

bic(modelName, loglik, n, d, G, noise=FALSE, equalPro=FALSE, ...)

Arguments

modelName    A character string indicating the model. The help file for mclustModelNames describes the available models.
loglik    The log-likelihood for a data set with respect to the Gaussian mixture model specified in the modelName argument.
n    The number of observations in the data used to compute loglik.
d    The dimension of the data used to compute loglik.
G    The number of components in the Gaussian mixture model used to compute loglik.
noise    A logical variable indicating whether or not the model includes an optional Poisson noise component. The default is to assume no noise component.
equalPro    A logical variable indicating whether or not the components in the model are assumed to be present in equal proportion. The default is to assume unequal mixing proportions.
...    Catches unused arguments in an indirect or list call via do.call.

Value

The BIC or Bayesian Information Criterion for the given input arguments.

See Also

mclustBIC, nVarParams, mclustModelNames.

Examples

n <- nrow(iris)
d <- ncol(iris)-1
G <- 3
emEst <- me(modelName="VVI", data=iris[,-5], unmap(iris[,5]))
names(emEst)

args(bic)
bic(modelName="VVI", loglik=emEst$loglik, n=n, d=d, G=G)
# do.call("bic", emEst) ## alternative call
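For reference, the bic entry above leaves the formula implicit. Under the sign convention used throughout mclust, where larger values are preferred, the returned quantity is, writing k for the number of independent parameters implied by modelName, d, G, noise and equalPro:

\mathrm{BIC} = 2\,\mathrm{loglik} - k \log(n)

This convention explains why mclust selects the model that maximizes BIC rather than minimizing it; see mclustBIC for details.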
BrierScore    Brier score to assess the accuracy of probabilistic predictions

Description

The Brier score is a proper score function that measures the accuracy of probabilistic predictions.

Usage

BrierScore(z, class)

Arguments

z    a matrix containing the predicted probabilities of each observation to be classified in one of the classes. Thus, the number of rows must match the length of class, and the number of columns the number of known classes.
class    a numeric, character vector or factor containing the known class labels for each observation. If class is a factor, the number of classes is nlevels(class) with classes levels(class). If class is a numeric or character vector, the number of classes is equal to the number of classes obtained via unique(class).

Details

The Brier Score is the mean square difference between the true classes and the predicted probabilities. This function implements the original multi-class definition by Brier (1950), normalized to [0, 1] as in Kruppa et al (2014). The formula is the following:

BS = \frac{1}{2n} \sum_{i=1}^{n} \sum_{k=1}^{K} (C_{ik} - p_{ik})^2

where n is the number of observations, K the number of classes, C_{ik} \in \{0, 1\} the indicator of class k for observation i, and p_{ik} the predicted probability of observation i to belong to class k. The above formulation is applicable to multi-class predictions, including the binary case. A small value of the Brier Score indicates high prediction accuracy. The Brier Score is a strictly proper score (Gneiting and Raftery, 2007), which means that it takes its minimal value only when the predicted probabilities match the empirical probabilities.

References

<NAME>. (1950) Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78 (1): 1-3.

<NAME>. and <NAME>. (2007) Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102 (477): 359-378.

<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2014) Probability estimation with machine learning methods for dichotomous and multicategory outcome: Applications. Biometrical Journal, 56 (4): 564-583.
See Also

cvMclustDA

Examples

# multi-class case
class <- factor(c(5,5,5,2,5,3,1,2,1,1), levels = 1:5)
probs <- matrix(c(0.15, 0.01, 0.08, 0.23, 0.01, 0.23, 0.59, 0.02, 0.38, 0.45,
                  0.36, 0.05, 0.30, 0.46, 0.15, 0.13, 0.06, 0.19, 0.27, 0.17,
                  0.40, 0.34, 0.18, 0.04, 0.47, 0.34, 0.32, 0.01, 0.03, 0.11,
                  0.04, 0.04, 0.09, 0.05, 0.28, 0.27, 0.02, 0.03, 0.12, 0.25,
                  0.05, 0.56, 0.35, 0.22, 0.09, 0.03, 0.01, 0.75, 0.20, 0.02),
                nrow = 10, ncol = 5)
cbind(class, probs, map = map(probs))
BrierScore(probs, class)

# two-class case
class <- factor(c(1,1,1,2,2,1,1,2,1,1), levels = 1:2)
probs <- matrix(c(0.91, 0.4, 0.56, 0.27, 0.37, 0.7, 0.97, 0.22, 0.68, 0.43,
                  0.09, 0.6, 0.44, 0.73, 0.63, 0.3, 0.03, 0.78, 0.32, 0.57),
                nrow = 10, ncol = 2)
cbind(class, probs, map = map(probs))
BrierScore(probs, class)

# two-class case when predicted probabilities are constrained to be equal to
# 0 or 1, then the (normalized) Brier Score is equal to the classification
# error rate
probs <- ifelse(probs > 0.5, 1, 0)
cbind(class, probs, map = map(probs))
BrierScore(probs, class)
classError(map(probs), class)$errorRate

# plot Brier score for predicted probabilities in range [0,1]
class <- factor(rep(1, each = 100), levels = 0:1)
prob <- seq(0, 1, by = 0.01)
brier <- sapply(prob, function(p)
{
  z <- matrix(c(1-p,p), nrow = length(class), ncol = 2, byrow = TRUE)
  BrierScore(z, class)
})
plot(prob, brier, type = "l", main = "Scoring all one class",
     xlab = "Predicted probability", ylab = "Brier score")

# brier score for predicting balanced data with constant prob
class <- factor(rep(c(1,0), each = 50), levels = 0:1)
prob <- seq(0, 1, by = 0.01)
brier <- sapply(prob, function(p)
{
  z <- matrix(c(1-p,p), nrow = length(class), ncol = 2, byrow = TRUE)
  BrierScore(z, class)
})
plot(prob, brier, type = "l", main = "Scoring balanced classes",
     xlab = "Predicted probability", ylab = "Brier score")

# brier score for predicting unbalanced data with constant prob
class <- factor(rep(c(0,1), times = c(90,10)), levels = 0:1)
prob <- seq(0, 1, by = 0.01)
brier <- sapply(prob, function(p)
{
  z <- matrix(c(1-p,p), nrow = length(class), ncol = 2, byrow = TRUE)
  BrierScore(z, class)
})
plot(prob, brier, type = "l", main = "Scoring unbalanced classes",
     xlab = "Predicted probability", ylab = "Brier score")

cdens    Component Density for Parameterized MVN Mixture Models

Description

Computes component densities for observations in MVN mixture models parameterized by eigenvalue decomposition.

Usage

cdens(data, modelName, parameters, logarithm = FALSE, warn = NULL, ...)

Arguments

data    A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
modelName    A character string indicating the model. The help file for mclustModelNames describes the available models.
parameters    The parameters of the model:
    mean    The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
    variance    A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
logarithm    A logical value indicating whether or not the logarithm of the component densities should be returned. The default is to return the component densities, obtained from the log component densities by exponentiation.
warn    A logical value indicating whether or not a warning should be issued when computations fail. The default is warn=FALSE.
...    Catches unused arguments in indirect or list calls via do.call.

Value

A numeric matrix whose [i,k]th entry is the density or log density of observation i in component k. The densities are not scaled by mixing proportions.

Note

When one or more component densities are very large in magnitude, it may be possible to compute the logarithm of the component densities but not the component densities themselves due to overflow.

See Also

cdensE, ..., cdensVVV, dens, estep, mclustModelNames, mclustVariance, mclust.options, do.call

Examples

z2 <- unmap(hclass(hcVVV(faithful),2)) # initial value for 2 class case

model <- me(modelName = "EEE", data = faithful, z = z2)
cdens(modelName = "EEE", data = faithful, logarithm = TRUE,
      parameters = model$parameters)[1:5,]

data(cross)
odd <- seq(1, nrow(cross), by = 2)
oddBIC <- mclustBIC(cross[odd,-1])
oddModel <- mclustModel(cross[odd,-1], oddBIC) ## best parameter estimates
names(oddModel)

even <- odd + 1
densities <- cdens(modelName = oddModel$modelName, data = cross[even,-1],
                   parameters = oddModel$parameters)
cbind(class = cross[even,1], densities)[1:5,]

cdensE    Component Density for a Parameterized MVN Mixture Model

Description

Computes component densities for points in a parameterized MVN mixture model.

Usage

cdensE(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensV(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensX(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEII(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVII(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEEI(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVEI(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEVI(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVVI(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEEE(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEEV(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVEV(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVVV(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEVE(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEVV(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVEE(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVVE(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensXII(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensXXI(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensXXX(data, logarithm = FALSE, parameters, warn = NULL, ...)

Arguments

data    A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
logarithm    A logical value indicating whether or not the logarithm of the component densities should be returned. The default is to return the component densities, obtained from the log component densities by exponentiation.
parameters    The parameters of the model:
    mean    The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
    variance    A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
    pro    Mixing proportions for the components of the mixture.
cdensE                   Component Density for a Parameterized MVN Mixture Model

Description

Computes component densities for points in a parameterized MVN mixture model.

Usage

cdensE(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensV(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensX(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEII(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVII(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEEI(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVEI(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEVI(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVVI(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEEE(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEEV(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVEV(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVVV(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEVE(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensEVV(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVEE(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensVVE(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensXII(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensXXI(data, logarithm = FALSE, parameters, warn = NULL, ...)
cdensXXX(data, logarithm = FALSE, parameters, warn = NULL, ...)

Arguments

data        A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
logarithm   A logical value indicating whether or not the logarithm of the component densities should be returned. The default is to return the component densities, obtained from the log component densities by exponentiation.
parameters  The parameters of the model:
            mean      The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
            variance  A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
            pro       Mixing proportions for the components of the mixture. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components.
warn        A logical value indicating whether or not a warning should be issued when computations fail. The default is warn=FALSE.
...         Catches unused arguments in indirect or list calls via do.call.

Value

A numeric matrix whose [i,j]th entry is the density of observation i in component j. The densities are not scaled by mixing proportions.

Note

When one or more component densities are very large in magnitude, it may be possible to compute the logarithm of the component densities but not the component densities themselves due to overflow.

See Also

cdens, dens, mclustVariance, mstep, mclust.options, do.call.

Examples

z2 <- unmap(hclass(hcVVV(faithful), 2)) # initial value for 2 class case
model <- meVVV(data = faithful, z = z2)
cdensVVV(data = faithful, logarithm = TRUE, parameters = model$parameters)

data(cross)
z2 <- unmap(cross[,1])
model <- meEEV(data = cross[,-1], z = z2)
EEVdensities <- cdensEEV(data = cross[,-1], parameters = model$parameters)
cbind(cross[,-1], map(EEVdensities))

cdfMclust                Cumulative Distribution and Quantiles for a Univariate Gaussian Mixture Distribution

Description

Computes the cumulative distribution function (CDF) or the quantiles of an estimated one-dimensional Gaussian mixture fitted using densityMclust.

Usage

cdfMclust(object, data, ngrid = 100, ...)
quantileMclust(object, p, ...)

Arguments

object  a densityMclust model object.
data    a numeric vector of evaluation points.
ngrid   the number of points in a regular grid to be used as evaluation points if no data are provided.
p       a numeric vector of probabilities.
...     further arguments passed to or from other methods.

Details

The CDF is evaluated at points given by the optional argument data. If not provided, a regular grid of length ngrid is used for the evaluation points. The quantiles are computed using a bisection linear search algorithm.

Value

cdfMclust returns a list of x and y values providing, respectively, the evaluation points and the estimated CDF.
quantileMclust returns a vector of quantiles.

Author(s)

<NAME>

See Also

densityMclust, plot.densityMclust.

Examples

x <- c(rnorm(100), rnorm(100, 3, 2))
dens <- densityMclust(x, plot = FALSE)
summary(dens, parameters = TRUE)
cdf <- cdfMclust(dens)
str(cdf)
q <- quantileMclust(dens, p = c(0.01, 0.1, 0.5, 0.9, 0.99))
cbind(quantile = q, cdf = cdfMclust(dens, q)$y)
plot(cdf, type = "l", xlab = "x", ylab = "CDF")
points(q, cdfMclust(dens, q)$y, pch = 20, col = "red3")

par(mfrow = c(2,2))
dens.waiting <- densityMclust(faithful$waiting)
plot(cdfMclust(dens.waiting), type = "l",
     xlab = dens.waiting$varname, ylab = "CDF")
dens.eruptions <- densityMclust(faithful$eruptions)
plot(cdfMclust(dens.eruptions), type = "l",
     xlab = dens.eruptions$varname, ylab = "CDF")
par(mfrow = c(1,1))

chevron                  Simulated minefield data

Description

A set of simulated bivariate minefield data (1104 observations).

Usage

data(chevron)

References

<NAME> and <NAME> (1998). Detecting features in spatial point processes with clutter via model-based clustering. Journal of the American Statistical Association 93:294-302.
<NAME> and <NAME> (1998). Computer Journal 41:578-588.
<NAME> and <NAME> (2000). Finite Mixture Models, Wiley, pages 110-112.

classError               Classification error

Description

Computes the error rate of a given classification relative to the known classes, and the location of misclassified data points.
Usage

classError(classification, class)

Arguments

classification  A numeric or character vector, or a factor, specifying the predicted class labels. Must have the same length as class.
class           A numeric or character vector, or a factor, of known true class labels. Must have the same length as classification.

Details

If more than one mapping between the predicted classification and the known truth corresponds to the minimum number of classification errors, only one possible set of misclassified observations is returned.

Value

A list with the following two components:
misclassified  The indexes of the misclassified data points in a minimum error mapping between the predicted classification and the known true classes.
errorRate      The error rate corresponding to a minimum error mapping between the predicted classification and the known true classes.

See Also

map, mapClass, table

Examples

(a <- rep(1:3, 3))
(b <- rep(c("A", "B", "C"), 3))
classError(a, b)

(a <- sample(1:3, 9, replace = TRUE))
(b <- sample(c("A", "B", "C"), 9, replace = TRUE))
classError(a, b)

class <- factor(c(5,5,5,2,5,3,1,2,1,1), levels = 1:5)
probs <- matrix(c(0.15, 0.01, 0.08, 0.23, 0.01, 0.23, 0.59, 0.02, 0.38, 0.45,
                  0.36, 0.05, 0.30, 0.46, 0.15, 0.13, 0.06, 0.19, 0.27, 0.17,
                  0.40, 0.34, 0.18, 0.04, 0.47, 0.34, 0.32, 0.01, 0.03, 0.11,
                  0.04, 0.04, 0.09, 0.05, 0.28, 0.27, 0.02, 0.03, 0.12, 0.25,
                  0.05, 0.56, 0.35, 0.22, 0.09, 0.03, 0.01, 0.75, 0.20, 0.02),
                nrow = 10, ncol = 5)
cbind(class, probs, map = map(probs))
classError(map(probs), class)

classPriorProbs          Estimation of class prior probabilities by EM algorithm

Description

A simple procedure to improve the estimation of class prior probabilities when the training data does not reflect the true a priori probabilities of the target classes. The EM algorithm used is described in Saerens et al (2002).

Usage

classPriorProbs(object, newdata = object$data, itmax = 1e3,
                eps = sqrt(.Machine$double.eps))

Arguments

object   an object of class 'MclustDA' resulting from a call to MclustDA.
newdata  a data frame or matrix giving the data. If missing, the training data obtained from the call to MclustDA are used.
itmax    an integer value specifying the maximal number of EM iterations.
eps      a scalar specifying the tolerance associated with deciding when to terminate the EM iterations.

Details

The estimation procedure employs an EM algorithm as described in Saerens et al (2002).

Value

A vector of class prior estimates, which can then be used in predict.MclustDA to improve predictions.

References

<NAME>., <NAME>. and <NAME>. (2002) Adjusting the outputs of a classifier to new a priori probabilities: a simple procedure, Neural computation, 14 (1), 21-41.

See Also

MclustDA, predict.MclustDA

Examples

# generate data from a mixture f(x) = 0.9 * N(0,1) + 0.1 * N(3,1)
n <- 10000
mixpro <- c(0.9, 0.1)
class <- factor(sample(0:1, size = n, prob = mixpro, replace = TRUE))
x <- ifelse(class == 1, rnorm(n, mean = 3, sd = 1),
                        rnorm(n, mean = 0, sd = 1))
hist(x[class==0], breaks = 11, xlim = range(x), main = "", xlab = "x",
     col = adjustcolor("dodgerblue2", alpha.f = 0.5), border = "white")
hist(x[class==1], breaks = 11, add = TRUE,
     col = adjustcolor("red3", alpha.f = 0.5), border = "white")
box()

# generate training data from a balanced case-control sample, i.e.
# f(x) = 0.5 * N(0,1) + 0.5 * N(3,1)
n_train <- 1000
class_train <- factor(sample(0:1, size = n_train, prob = c(0.5, 0.5),
                             replace = TRUE))
x_train <- ifelse(class_train == 1, rnorm(n_train, mean = 3, sd = 1),
                                    rnorm(n_train, mean = 0, sd = 1))
hist(x_train[class_train==0], breaks = 11, xlim = range(x_train),
     main = "", xlab = "x",
     col = adjustcolor("dodgerblue2", alpha.f = 0.5), border = "white")
hist(x_train[class_train==1], breaks = 11, add = TRUE,
     col = adjustcolor("red3", alpha.f = 0.5), border = "white")
box()

# fit a MclustDA model
mod <- MclustDA(x_train, class_train)
summary(mod, parameters = TRUE)

# test set performance
pred <- predict(mod, newdata = x)
classError(pred$classification, class)$errorRate
BrierScore(pred$z, class)

# compute performance over a grid of prior probs
priorProp <- seq(0.01, 0.99, by = 0.01)
CE <- BS <- rep(as.double(NA), length(priorProp))
for(i in seq(priorProp))
{
  pred <- predict(mod, newdata = x, prop = c(1-priorProp[i], priorProp[i]))
  CE[i] <- classError(pred$classification, class = class)$errorRate
  BS[i] <- BrierScore(pred$z, class)
}

# estimate the optimal class prior probs
(priorProbs <- classPriorProbs(mod, x))
pred <- predict(mod, newdata = x, prop = priorProbs)
# compute performance at the estimated class prior probs
classError(pred$classification, class = class)$errorRate
BrierScore(pred$z, class)

matplot(priorProp, cbind(CE, BS), type = "l", lty = 1, lwd = 2,
        xlab = "Class prior probability", ylab = "",
        ylim = c(0, max(CE, BS)),
        panel.first =
          { abline(h = seq(0, 1, by = 0.05), col = "grey", lty = 3)
            abline(v = seq(0, 1, by = 0.05), col = "grey", lty = 3) })
abline(v = mod$prop[2], lty = 2)            # training prop
abline(v = mean(class==1), lty = 4)         # test prop (usually unknown)
abline(v = priorProbs[2], lty = 3, lwd = 2) # estimated prior probs
legend("topleft", legend = c("ClassError", "BrierScore"),
       col = 1:2, lty = 1, lwd = 2, inset = 0.02)

# Summary of results:
priorProp[which.min(CE)] # best prior of class 1 according to classification error
priorProp[which.min(BS)] # best prior of class 1 according to Brier score
priorProbs               # optimal estimated class prior probabilities
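The EM iteration used by classPriorProbs() is short enough to sketch directly. The following is an illustrative re-implementation of the update described in Saerens et al (2002), not the package code: z holds the posterior probabilities of the new data computed under the training priors prop (for example, predict(mod, newdata = x)$z and mod$prop from the example above), and each pass re-weights and renormalizes the posteriors until the implied priors stabilize.

# illustrative sketch of the prior-adjustment EM update (Saerens et al, 2002)
adjustPriors <- function(z, prop, itmax = 1000, eps = sqrt(.Machine$double.eps))
{
  p <- prop
  for(it in seq_len(itmax))
  {
    zw <- sweep(z, 2, p/prop, "*") # re-weight posteriors by prior ratios
    zw <- zw/rowSums(zw)           # renormalize (E-step)
    p.new <- colMeans(zw)          # update the prior estimates (M-step)
    if(max(abs(p.new - p)) < eps) break
    p <- p.new
  }
  p
}
adjustPriors(predict(mod, newdata = x)$z, mod$prop)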
clPairs                  Pairwise Scatter Plots showing Classification

Description

Creates a scatter plot for each pair of variables in given data. Observations in different classes are represented by different colors and symbols.

Usage

clPairs(data, classification, symbols = NULL, colors = NULL, cex = NULL,
        labels = dimnames(data)[[2]], cex.labels = 1.5, gap = 0.2,
        grid = FALSE, ...)
clPairsLegend(x, y, class, col, pch, cex, box = TRUE, ...)

Arguments

data            A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
classification  A numeric or character vector representing a classification of observations (rows) of data.
symbols         Either an integer or character vector assigning a plotting symbol to each unique class in classification. Elements in symbols correspond to classes in order of appearance in the sequence of observations (the order used by the function unique). The default is given by mclust.options("classPlotSymbols").
colors          Either an integer or character vector assigning a color to each unique class in classification. Elements in colors correspond to classes in order of appearance in the sequence of observations (the order used by the function unique). The default is given by mclust.options("classPlotColors").
cex             A vector of numerical values specifying the size of the plotting symbol for each unique class in classification. Values in cex correspond to classes in order of appearance in the sequence of observations (the order used by the function unique). By default cex = 1 is used for all classes.
labels          A vector of character strings for labelling the variables. The default is to use the column dimension names of data.
cex.labels      A numerical value specifying the size of the text labels.
gap             An argument specifying the distance between subplots (see pairs).
grid            A logical specifying if grid lines should be added to panels (see grid).
x,y             The x and y coordinates with respect to a graphic device having plotting region coordinates par(usr = c(0,1,0,1)).
class           The class labels.
box             A logical, if TRUE then a box is drawn around the current plot figure.
col, pch        The colors and plotting symbols appearing in the legend.
...             For a clPairs call, additional arguments to be passed to pairs. For a clPairsLegend call, additional arguments to be passed to legend.

Details

The function clPairs() draws scatter plots on the current graphics device for each combination of variables in data. Observations of different classifications are labeled with different symbols. The function clPairsLegend() can be used to add a legend. See the examples below.

Value

The function clPairs() invisibly returns a list with the following components:
class  A character vector of class labels.
col    A vector of colors used for each class.
pch    A vector of plotting symbols used for each class.

See Also

pairs, coordProj, mclust.options

Examples

clPairs(iris[,1:4], classification = iris$Species)
clp <- clPairs(iris[,1:4], classification = iris$Species, lower.panel = NULL)
clPairsLegend(0.1, 0.4, class = clp$class, col = clp$col, pch = clp$pch,
              title = "Iris data")

clustCombi               Combining Gaussian Mixture Components for Clustering

Description

Provides a hierarchy of combined clusterings from the EM/BIC Gaussian mixture solution to one class, following the methodology proposed in the article cited in the references.

Usage

clustCombi(object = NULL, data = NULL, ...)

Arguments

object  An object returned by Mclust giving the optimal (according to BIC) parameters, conditional probabilities, and log-likelihood, together with the associated classification and its uncertainty. If not provided, the data argument must be specified.
data    A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. If the object argument is not provided, the function Mclust is applied to the given data to fit a mixture model.
...     Optional arguments to be passed to called functions. Notably, any argument (such as the numbers of components for which the BIC is computed, the models to be fitted by EM, initialization parameters for the EM algorithm, etc.) to be passed to Mclust in case object = NULL. Please see the Mclust documentation for more details.

Details

Mclust provides a Gaussian mixture fitted to the data by maximum likelihood through the EM algorithm, for the model and number of components selected according to BIC. The corresponding components are hierarchically combined according to an entropy criterion, following the methodology described in the article cited in the references section. The solutions with numbers of classes between the one selected by BIC and one are returned as a clustCombi class object.
Value

A list of class clustCombi giving the hierarchy of combined solutions from the number of components selected by BIC down to one. The details of the output components are as follows:
classification  A list of the data classifications obtained for each combined solution of the hierarchy through a MAP assignment.
combiM          A list of matrices. combiM[[K]] is the matrix used to combine the components of the (K+1)-classes solution to get the K-classes solution. Please see the examples.
combiz          A list of matrices. combiz[[K]] is a matrix whose [i,k]th entry is the probability that observation i in the data belongs to the kth class according to the K-classes combined solution.
MclustOutput    A list of class Mclust. Output of a call to the Mclust function (as provided by the user or the result of a call to the Mclust function) used to initiate the combined solutions hierarchy: please see the Mclust function documentation for details.

Author(s)

<NAME>, <NAME>, <NAME>

References

<NAME>, <NAME>, <NAME>, <NAME> and <NAME> (2010). Combining mixture components for clustering. Journal of Computational and Graphical Statistics, 19(2):332-353.

See Also

plot.clustCombi

Examples

data(Baudry_etal_2010_JCGS_examples)

# run Mclust using provided data
output <- clustCombi(data = ex4.1)
# or run Mclust and then clustCombi on the returned object
mod <- Mclust(ex4.1)
output <- clustCombi(mod)

output
summary(output)

# run Mclust using provided data and any further optional argument provided
output <- clustCombi(data = ex4.1, modelName = "EEV", G = 1:15)

# plot the hierarchy of combined solutions
plot(output, what = "classification")
# plot some "entropy plots" which may help one to select the number of classes
plot(output, what = "entropy")
# plot the tree structure obtained from combining mixture components
plot(output, what = "tree")

# the selected model and number of components obtained from Mclust using BIC
output$MclustOutput
# the matrix whose [i,k]th entry is the probability that the ith observation in
# the data belongs to the kth class according to the BIC solution
head(output$combiz[[output$MclustOutput$G]])
# the matrix whose [i,k]th entry is the probability that the ith observation in
# the data belongs to the kth class according to the first combined solution
head(output$combiz[[output$MclustOutput$G-1]])
# the matrix describing how to merge the 6-classes solution to get the
# 5-classes solution
output$combiM[[5]]
# for example, the following code returns the label of the class (in the
# 5-classes combined solution) to which the 4th class (in the 6-classes
# solution) is assigned. Only two classes in the (K+1)-classes solution
# are assigned the same class in the K-classes solution: the two which
# are merged at this step.
output$combiM[[5]] %*% c(0,0,0,1,0,0)
# recover the 5-classes soft clustering from the 6-classes soft clustering
# and the 6 -> 5 combining matrix
all(output$combiz[[5]] == t(output$combiM[[5]] %*% t(output$combiz[[6]])))
# the hard clustering under the 5-classes solution
head(output$classification[[5]])

clustCombiOptim          Optimal number of clusters obtained by combining mixture components

Description

Returns the optimal number of clusters obtained by combining mixture components, based on the entropy method discussed in the reference given below.

Usage

clustCombiOptim(object, reg = 2, plot = FALSE, ...)

Arguments

object  An object of class 'clustCombi' resulting from a call to clustCombi.
reg     The number of parts of the piecewise linear regression for the entropy plots.
Choose 2 for a two-segment piecewise linear regression model (i.e. 1 change-point), and 3 for a three-segment piecewise linear regression model (i.e. 2 change-points).
plot    Logical, if TRUE an entropy plot is also produced.
...     Further arguments passed to or from other methods.

Value

The function returns a list with the following components:
numClusters.combi  The estimated number of clusters.
z.combi            A matrix whose [i,k]th entry is the probability that observation i in the data belongs to the kth cluster.
cluster.combi      The clustering labels.

Author(s)

<NAME>, <NAME>, <NAME>

References

<NAME>, <NAME>, <NAME>, <NAME> and <NAME> (2010). Combining mixture components for clustering. Journal of Computational and Graphical Statistics, 19(2):332-353.

See Also

combiPlot, entPlot, clustCombi

Examples

data(Baudry_etal_2010_JCGS_examples)
output <- clustCombi(data = ex4.1)
combiOptim <- clustCombiOptim(output)
str(combiOptim)

# plot optimal clustering with alpha color transparency proportional to uncertainty
zmax <- apply(combiOptim$z.combi, 1, max)
col <- mclust.options("classPlotColors")[combiOptim$cluster.combi]
vadjustcolor <- Vectorize(adjustcolor)
alphacol <- (zmax - 1/combiOptim$numClusters.combi) /
            (1 - 1/combiOptim$numClusters.combi)
col <- vadjustcolor(col, alpha.f = alphacol)
plot(ex4.1, col = col,
     pch = mclust.options("classPlotSymbols")[combiOptim$cluster.combi])

combiPlot                Plot Classifications Corresponding to Successive Combined Solutions

Description

Plots classifications corresponding to successive combined solutions.

Usage

combiPlot(data, z, combiM, ...)

Arguments

data    The data.
z       A matrix whose [i,k]th entry is the probability that observation i in the data belongs to the kth class, for the initial solution (i.e. before any combining). Typically, the one returned by Mclust/BIC.
combiM  A "combining matrix" (as provided by clustCombi), i.e. a matrix whose kth row contains zeros, except in the columns corresponding to the labels of the classes in the initial solution to be merged together to get the combined solution.
...     Other arguments to be passed to the Mclust plot functions.

Value

Plots the classifications obtained by MAP from the matrix t(combiM %*% t(z)), which is the matrix whose [i,k]th entry is the probability that observation i in the data belongs to the kth class, according to the combined solution obtained by merging (according to combiM) the initial solution described by z.

Author(s)

<NAME>, <NAME>, <NAME>

References

<NAME>, <NAME>, <NAME>, <NAME> and <NAME> (2010). Combining mixture components for clustering. Journal of Computational and Graphical Statistics, 19(2):332-353.
See Also

clustCombi, combMat

Examples

data(Baudry_etal_2010_JCGS_examples)
MclustOutput <- Mclust(ex4.1)
MclustOutput$G # Mclust/BIC selected 6 classes

par(mfrow = c(2,2))

combiM0 <- diag(6) # the identity matrix
# no merging: plot the initial solution, given by z
combiPlot(ex4.1, MclustOutput$z, combiM0, cex = 3)
title("No combining")

combiM1 <- combMat(6, 1, 2) # merge classes labeled 1 and 2
combiM1
combiPlot(ex4.1, MclustOutput$z, combiM1)
title("Combine 1 and 2")

# merge classes labeled 1 and 2, and then components labeled (in this
# new 5-classes combined solution) 1 and 2
combiM2 <- combMat(5, 1, 2) %*% combMat(6, 1, 2)
combiM2
combiPlot(ex4.1, MclustOutput$z, combiM2)
title("Combine 1, 2 and then 1 and 2 again")

plot(0, 0, type = "n", xlab = "", ylab = "", axes = FALSE)
legend("center", legend = 1:6,
       col = mclust.options("classPlotColors"),
       pch = mclust.options("classPlotSymbols"),
       title = "Class labels:")

combiTree                Tree structure obtained from combining mixture components

Description

The method implemented in clustCombi can be used for combining Gaussian mixture components for clustering. This provides a hierarchical structure which can be graphically represented as a tree.

Usage

combiTree(object, type = c("triangle", "rectangle"),
          yaxis = c("entropy", "step"),
          edgePar = list(col = "darkgray", lwd = 2), ...)

Arguments

object   An object of class 'clustCombi' resulting from a call to clustCombi.
type     A string specifying the dendrogram's type. Possible values are "triangle" (default) and "rectangle".
yaxis    A string specifying the quantity used to draw the vertical axis. Possible values are "entropy" (default) and "step".
edgePar  A list of plotting parameters. See dendrogram.
...      Further arguments passed to or from other methods.

Value

The function always draws a tree and invisibly returns an object of class 'dendrogram' for fine tuning.

Author(s)

<NAME>

See Also

clustCombi

Examples

data(Baudry_etal_2010_JCGS_examples)
output <- clustCombi(data = ex4.1)
combiTree(output)
combiTree(output, type = "rectangle")
combiTree(output, yaxis = "step")
combiTree(output, type = "rectangle", yaxis = "step")

combMat                  Combining Matrix

Description

Creates a combining matrix.

Usage

combMat(K, l1, l2)

Arguments

K   The original number of classes: the matrix will define a combining from K to (K-1) classes.
l1  Label of one of the two classes to be combined.
l2  Label of the other class to be combined.

Value

If z is a vector (length K) whose kth entry is the probability that an observation belongs to the kth class in a K-classes classification, then combiM %*% z is the vector (length K-1) whose kth entry is the probability that the observation belongs to the kth class in the (K-1)-classes classification obtained by merging classes l1 and l2 in the initial classification.

Author(s)

<NAME>, <NAME>, <NAME>

See Also

clustCombi, combiPlot
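Since combMat() has no examples of its own, the following small illustration of the Value description above may help: merging classes 1 and 3 of a four-class soft assignment simply adds the corresponding probabilities.

# merge classes 1 and 3 out of 4: the combining matrix has 3 rows and 4 columns
combiM <- combMat(4, 1, 3)
combiM
z <- c(0.1, 0.2, 0.3, 0.4) # class probabilities under the 4-class solution
combiM %*% z               # 3-class probabilities; the merged class gets 0.1 + 0.3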
coordProj                Coordinate projections of multidimensional data modeled by an MVN mixture

Description

Plots coordinate projections given multidimensional data and parameters of an MVN mixture model for the data.

Usage

coordProj(data, dimens = c(1,2), parameters = NULL, z = NULL,
          classification = NULL, truth = NULL, uncertainty = NULL,
          what = c("classification", "error", "uncertainty"),
          addEllipses = TRUE, fillEllipses = mclust.options("fillEllipses"),
          symbols = NULL, colors = NULL, scale = FALSE,
          xlim = NULL, ylim = NULL, cex = 1, PCH = ".", main = FALSE, ...)

Arguments

data            A numeric matrix or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
dimens          A vector of length 2 giving the integer dimensions of the desired coordinate projections. The default is c(1,2), in which the first dimension is plotted against the second.
parameters      A named list giving the parameters of an MCLUST model, used to superimpose ellipses on the plot. The relevant components are as follows:
                mean      The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
                variance  A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
z               A matrix in which the [i,k]th entry gives the probability of observation i belonging to the kth class. Used to compute classification and uncertainty if those arguments aren't available.
classification  A numeric or character vector representing a classification of observations (rows) of data. If present, the argument z will be ignored.
truth           A numeric or character vector giving a known classification of each data point. If classification or z is also present, this is used for displaying classification errors.
uncertainty     A numeric vector of values in (0,1) giving the uncertainty of each data point. If present, the argument z will be ignored.
what            Choose from one of the following three options: "classification" (default), "error", "uncertainty".
addEllipses     A logical indicating whether or not to add ellipses with axes corresponding to the within-cluster covariances in case of "classification" or "uncertainty" plots.
fillEllipses    A logical specifying whether or not to fill ellipses with transparent colors when addEllipses = TRUE.
symbols         Either an integer or character vector assigning a plotting symbol to each unique class in classification. Elements in symbols correspond to classes in order of appearance in the sequence of observations (the order used by the function unique). The default is given by mclust.options("classPlotSymbols").
colors          Either an integer or character vector assigning a color to each unique class in classification. Elements in colors correspond to classes in order of appearance in the sequence of observations (the order used by the function unique). The default is given by mclust.options("classPlotColors").
scale           A logical variable indicating whether or not the two chosen dimensions should be plotted on the same scale, and thus preserve the shape of the distribution. Default: scale = FALSE.
xlim, ylim      Arguments specifying bounds for the abscissa and ordinate of the plot. This may be useful when comparing plots.
cex             A numerical value specifying the size of the plotting symbols. The default value is 1.
PCH             An argument specifying the symbol to be used when a classification has not been specified for the data. The default value is a small dot ".".
main            A logical variable or NULL indicating whether or not to add a title to the plot identifying the dimensions used.
...             Other graphics parameters.

Value

A plot showing a two-dimensional coordinate projection of the data, together with the location of the mixture components, classification, uncertainty, and/or classification errors.
See Also

clPairs, randProj, mclust2Dplot, mclust.options

Examples

est <- meVVV(iris[,-5], unmap(iris[,5]))
par(pty = "s", mfrow = c(1,1))
coordProj(iris[,-5], dimens = c(2,3), parameters = est$parameters, z = est$z,
          what = "classification", main = TRUE)
coordProj(iris[,-5], dimens = c(2,3), parameters = est$parameters, z = est$z,
          truth = iris[,5], what = "error", main = TRUE)
coordProj(iris[,-5], dimens = c(2,3), parameters = est$parameters, z = est$z,
          what = "uncertainty", main = TRUE)

covw                     Weighted means, covariance and scattering matrices conditioning on a weighted matrix

Description

Computes efficiently (via Fortran code) the means, covariance and scattering matrices conditioning on a weight or indicator matrix.

Usage

covw(X, Z, normalize = TRUE)

Arguments

X          A (n x p) data matrix, with n observations on p variables.
Z          A (n x G) matrix of weights, with G the number of groups.
normalize  A logical indicating if rows of Z should be normalized to sum to one.

Value

A list with the following components:
mean  A (p x G) matrix of weighted means.
S     A (p x p x G) array of weighted covariance matrices.
W     A (p x p x G) array of weighted scattering matrices.

Author(s)

<NAME> and <NAME>

Examples

# Z as an indicator matrix
X <- iris[,1:4]
Z <- unmap(iris$Species)
str(covw(X, Z))
# Z as a matrix of weights
mod <- Mclust(X, G = 3, modelNames = "VVV")
str(covw(X, mod$z))
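As a reading aid for the Value section above, this sketch (not part of the package examples) checks covw() against direct group-wise computations when Z is a hard indicator matrix: the first column of mean should equal the sample mean of the first group, and W[,,1] its scattering (centered cross-product) matrix.

# check covw() against direct computation for a hard partition
X <- as.matrix(iris[,1:4])
Z <- unmap(iris$Species)
cw <- covw(X, Z)
g1 <- X[iris$Species == levels(iris$Species)[1], ]
all.equal(c(cw$mean[,1]), colMeans(g1), check.attributes = FALSE)
all.equal(cw$W[,,1], crossprod(scale(g1, center = TRUE, scale = FALSE)),
          check.attributes = FALSE)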
crimcoords               Discriminant coordinates data projection

Description

Computes the discriminant coordinates or crimcoords obtained by projecting the observed data from multiple groups onto the discriminant subspace. The optimal projection subspace is given by the linear transformation of the original variables that maximizes the ratio of the between-groups covariance (which represents groups separation) to the pooled within-group covariance (which represents within-group dispersion).

Usage

crimcoords(data, classification, numdir = NULL, unbiased = FALSE, ...)
## S3 method for class 'crimcoords'
summary(object, numdir, ...)
## S3 method for class 'crimcoords'
plot(x, ...)

Arguments

data            A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
classification  A vector (numerical, character string, or factor) giving the groups classification (either the known class labels or the estimated clusters) for the observed data.
numdir          An integer value specifying the number of directions of the discriminant subspace to return. If not provided, the maximal number of directions is returned (given by the number of non-null eigenvalues, the minimum of the number of variables and the number of groups minus one). However, since the effectiveness of the discriminant coordinates in highlighting the separation of groups is decreasing, it might be useful to provide a smaller value, say 2 or 3.
unbiased        A logical specifying if unbiased estimates should be used for the between-groups and within-groups covariances. By default unbiased = FALSE, so MLE estimates are used. Note that the use of unbiased or MLE estimates only changes the eigenvalues and eigenvectors of the generalized eigendecomposition by a constant of proportionality, so the discriminant coordinates or crimcoords are essentially the same.
object, x       An object of class crimcoords as returned by the crimcoords() function.
...             further arguments passed to or from other methods.

Value

A list of class crimcoords with the following components:
means           A matrix of within-groups means.
B               The between-groups covariance matrix.
W               The pooled within-groups covariance matrix.
evalues         A vector of eigenvalues.
basis           A matrix of eigenvectors specifying the basis of the discriminant subspace.
projection      A matrix of projected data points onto the discriminant subspace.
classification  A vector giving the groups classification.

Author(s)

<NAME> <<EMAIL>>

References

Gnanadesikan, R. (1977) Methods for Statistical Data Analysis of Multivariate Observations. John Wiley & Sons, Sec. 4.2.
Flury, B. (1997) A First Course in Multivariate Statistics. Springer, Sec. 7.3.

See Also

MclustDR, clPairs.

Examples

# discriminant coordinates for the iris data using known classes
data("iris")
CRIMCOORDS = crimcoords(iris[,-5], iris$Species)
summary(CRIMCOORDS)
plot(CRIMCOORDS)

# banknote data
data("banknote")
# discriminant coordinates on known classes
CRIMCOORDS = crimcoords(banknote[,-1], banknote$Status)
summary(CRIMCOORDS)
plot(CRIMCOORDS)
# discriminant coordinates on estimated clusters
mod = Mclust(banknote[,-1])
CRIMCOORDS = crimcoords(banknote[,-1], mod$classification)
summary(CRIMCOORDS)
plot(CRIMCOORDS)
plot(CRIMCOORDS$projection, type = "n")
text(CRIMCOORDS$projection, cex = 0.8,
     labels = strtrim(banknote$Status, 2),
     col = mclust.options("classPlotColors")[1:mod$G][mod$classification])
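The optimal projection described above is the solution of a generalized eigenproblem, which can be sketched by hand from the B and W matrices returned by crimcoords(). The eigenvalues of W^{-1} B should agree with the evalues component, up to numerical error and the proportionality constant discussed under the unbiased argument.

# discriminant directions as eigenvectors of W^{-1} B, computed by hand
CRIMCOORDS <- crimcoords(iris[,-5], iris$Species)
e <- eigen(solve(CRIMCOORDS$W) %*% CRIMCOORDS$B)
zapsmall(Re(e$values)) # min(d, G-1) non-null eigenvalues
CRIMCOORDS$evalues     # should agree with the leading values above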
cross                    Simulated Cross Data

Description

A 500 by 3 matrix in which the first column is the classification and the remaining two columns are data from a simulation of two crossed elliptical Gaussians.

Usage

data(cross)

Examples

# This dataset was created as follows
n <- 250
set.seed(0)
cross <- rbind(matrix(rnorm(n*2), n, 2) %*% diag(c(1,9)),
               matrix(rnorm(n*2), n, 2) %*% diag(c(1,9))[,2:1])
cross <- cbind(c(rep(1,n), rep(2,n)), cross)

cvMclustDA               MclustDA cross-validation

Description

V-fold cross-validation for classification models based on Gaussian finite mixture modelling.

Usage

cvMclustDA(object, nfold = 10, prop = object$prop,
           verbose = interactive(), ...)

Arguments

object   An object of class 'MclustDA' resulting from a call to MclustDA.
nfold    An integer specifying the number of folds (by default 10-fold CV is used).
prop     A vector of class prior probabilities, which if not provided default to the class proportions in the training data.
verbose  A logical controlling if a text progress bar is displayed during the cross-validation procedure. By default it is TRUE if the session is interactive, and FALSE otherwise.
...      Further arguments passed to or from other methods.

Details

The function implements V-fold cross-validation for classification models fitted by MclustDA. Classification error and Brier score are the metrics returned, but other metrics can be computed using the output returned by this function (see the Examples section below).

Value

The function returns a list with the following components:
classification  a factor of cross-validated class labels.
z               a matrix containing the cross-validated probabilities for class assignment.
ce              the cross-validation classification error.
se.ce           the standard error of the cross-validated classification error.
brier           the cross-validation Brier score.
se.brier        the standard error of the cross-validated Brier score.

Author(s)

<NAME>

See Also

MclustDA, predict.MclustDA, classError, BrierScore

Examples

# Iris data
Class <- iris$Species
X <- iris[,1:4]
## EDDA model with common covariance (essentially equivalent to linear discriminant analysis)
irisEDDA <- MclustDA(X, Class, modelType = "EDDA", modelNames = "EEE")
cv <- cvMclustDA(irisEDDA) # 10-fold CV (default)
str(cv)
cv <- cvMclustDA(irisEDDA, nfold = length(Class)) # LOO-CV
str(cv)
## MclustDA model selected by BIC
irisMclustDA <- MclustDA(X, Class)
cv <- cvMclustDA(irisMclustDA) # 10-fold CV (default)
str(cv)

# Banknote data
data("banknote")
Class <- banknote$Status
X <- banknote[,2:7]
## EDDA model selected by BIC
banknoteEDDA <- MclustDA(X, Class, modelType = "EDDA")
cv <- cvMclustDA(banknoteEDDA) # 10-fold CV (default)
str(cv)
(ConfusionMatrix <- table(Pred = cv$classification, Class))
TP <- ConfusionMatrix[1,1]
FP <- ConfusionMatrix[1,2]
FN <- ConfusionMatrix[2,1]
TN <- ConfusionMatrix[2,2]
(Sensitivity <- TP/(TP+FN))
(Specificity <- TN/(FP+TN))

decomp2sigma             Convert mixture component covariances to matrix form

Description

Converts covariances from a parameterization by eigenvalue decomposition or Cholesky factorization to representation as a 3-D array.

Usage

decomp2sigma(d, G, scale, shape, orientation, ...)

Arguments

d            The dimension of the data.
G            The number of components in the mixture model.
scale        Either a G-vector giving the scale of the covariance (the dth root of its determinant) for each component in the mixture model, or a single numeric value if the scale is the same for each component.
shape        Either a G by d matrix in which the kth column is the shape of the covariance matrix (normalized to have determinant 1) for the kth component, or a d-vector giving a common shape for all components.
orientation  Either a d by d by G array whose [,,k]th entry is the orthonormal matrix whose columns are the eigenvectors of the covariance matrix of the kth component, or a d by d orthonormal matrix if the mixture components have a common orientation. The orientation component of decomp can be omitted in spherical and diagonal models, for which the principal components are parallel to the coordinate axes so that the orientation matrix is the identity.
...          Catches unused arguments from an indirect or list call via do.call.

Value

A 3-D array whose [,,k]th component is the covariance matrix of the kth component in an MVN mixture model.

See Also

sigma2decomp

Examples

meEst <- meVEV(iris[,-5], unmap(iris[,5]))
names(meEst)
meEst$parameters$variance
dec <- meEst$parameters$variance
decomp2sigma(d = dec$d, G = dec$G, shape = dec$shape, scale = dec$scale,
             orientation = dec$orientation)
do.call("decomp2sigma", dec) # alternative call

defaultPrior             Default conjugate prior for Gaussian mixtures

Description

Default conjugate prior specification for Gaussian mixtures.

Usage

defaultPrior(data, G, modelName, ...)

Arguments

data  A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
G     The number of mixture components.
modelName  A character string indicating the model:
           "E"    equal variance (univariate)
           "V"    variable variance (univariate)
           "EII"  spherical, equal volume
           "VII"  spherical, unequal volume
           "EEI"  diagonal, equal volume and shape
           "VEI"  diagonal, varying volume, equal shape
           "EVI"  diagonal, equal volume, varying shape
           "VVI"  diagonal, varying volume and shape
           "EEE"  ellipsoidal, equal volume, shape, and orientation
           "EEV"  ellipsoidal, equal volume and equal shape
           "VEV"  ellipsoidal, equal shape
           "VVV"  ellipsoidal, varying volume, shape, and orientation
           A description of the models above is provided in the help of mclustModelNames. Note that in the multivariate case only 10 out of 14 models may be used in conjunction with a prior, i.e. those available in MCLUST up to version 4.4.
...        One or more of the following:
           dof        The degrees of freedom for the prior on the variance. The default is d + 2, where d is the dimension of the data.
           scale      The scale parameter for the prior on the variance. The default is var(data)/G^(2/d), where d is the dimension of the data.
           shrinkage  The shrinkage parameter for the prior on the mean. The default value is 0.01. If 0 or NA, no prior is assumed for the mean.
           mean       The mean parameter for the prior. The default value is colMeans(data).

Details

defaultPrior is a function whose default is to output the default prior specification for EM within MCLUST.
Furthermore, defaultPrior can be used as a template to specify alternative parameters for a conjugate prior.

Value

A list giving the prior degrees of freedom, scale, shrinkage, and mean.

References

<NAME> and <NAME> (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association 97:611-631.
<NAME> and <NAME> (2005, revised 2009). Bayesian regularization for normal mixture estimation and model-based clustering. Technical Report, Department of Statistics, University of Washington.
<NAME> and <NAME> (2007). Bayesian regularization for normal mixture estimation and model-based clustering. Journal of Classification 24:155-181.

See Also

mclustBIC, me, mstep, priorControl

Examples

# default prior
irisBIC <- mclustBIC(iris[,-5], prior = priorControl())
summary(irisBIC, iris[,-5])

# equivalent to previous example
irisBIC <- mclustBIC(iris[,-5],
                     prior = priorControl(functionName = "defaultPrior"))
summary(irisBIC, iris[,-5])

# no prior on the mean; default prior on variance
irisBIC <- mclustBIC(iris[,-5], prior = priorControl(shrinkage = 0))
summary(irisBIC, iris[,-5])

# equivalent to previous example
irisBIC <- mclustBIC(iris[,-5],
                     prior = priorControl(functionName = "defaultPrior",
                                          shrinkage = 0))
summary(irisBIC, iris[,-5])

defaultPrior(iris[-5], G = 3, modelName = "VVV")

dens                     Density for Parameterized MVN Mixtures

Description

Computes densities of observations in parameterized MVN mixtures.

Usage

dens(data, modelName, parameters, logarithm = FALSE, warn = NULL, ...)

Arguments

data        A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
modelName   A character string indicating the model. The help file for mclustModelNames describes the available models.
parameters  The parameters of the model:
            pro   The vector of mixing proportions for the components of the mixture.
            mean  The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
            variance  A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
logarithm   A logical value indicating whether or not the logarithm of the component densities should be returned. The default is to return the component densities, obtained from the log component densities by exponentiation.
warn        A logical value indicating whether or not a warning should be issued when computations fail. The default is warn=FALSE.
...         Catches unused arguments in indirect or list calls via do.call.

Value

A numeric vector whose ith component is the density of the ith observation in data in the MVN mixture specified by parameters.

See Also

cdens, mclust.options, do.call

Examples

faithfulModel <- Mclust(faithful)
Dens <- dens(modelName = faithfulModel$modelName, data = faithful,
             parameters = faithfulModel$parameters)
Dens
do.call("dens", faithfulModel) # alternative call

densityMclust            Density Estimation via Model-Based Clustering

Description

Produces a density estimate for each data point using a Gaussian finite mixture model from Mclust.

Usage

densityMclust(data, ..., plot = TRUE)

Arguments

data  A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
...   Additional arguments for the Mclust function. In particular, setting the arguments G and modelNames allows one to specify the number of mixture components and the type of model to be fitted. By default an "optimal" model is selected based on the BIC criterion.
plot  A logical value specifying if the estimated density should be plotted. For more controls on the resulting graph see the associated plot.densityMclust method.

Value

An object of class densityMclust, which inherits from Mclust. This contains all the components described in Mclust and the additional element:
density  The density evaluated at the input data computed from the estimated model.

Author(s)

Revised version by <NAME> based on the original code by <NAME> and <NAME>.

References

<NAME>., <NAME>., <NAME>. and <NAME>. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, The R Journal, 8/1, pp. 289-317.
<NAME>. and <NAME>. (2002) Model-based clustering, discriminant analysis and density estimation, Journal of the American Statistical Association, 97/458, pp. 611-631.
<NAME>., <NAME>., <NAME>. and <NAME>. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington.

See Also

plot.densityMclust, Mclust, summary.Mclust, predict.densityMclust.
Examples

dens <- densityMclust(faithful$waiting)
summary(dens)
summary(dens, parameters = TRUE)
plot(dens, what = "BIC", legendArgs = list(x = "topright"))
plot(dens, what = "density", data = faithful$waiting)

dens <- densityMclust(faithful, modelNames = "EEE", G = 3, plot = FALSE)
summary(dens)
summary(dens, parameters = TRUE)
plot(dens, what = "density", data = faithful,
     drawlabels = FALSE, points.pch = 20)
plot(dens, what = "density", type = "hdr")
plot(dens, what = "density", type = "hdr", prob = c(0.1, 0.9))
plot(dens, what = "density", type = "hdr", data = faithful)
plot(dens, what = "density", type = "persp")

dens <- densityMclust(iris[,1:4], G = 2)
summary(dens, parameters = TRUE)
plot(dens, what = "density", data = iris[,1:4],
     col = "slategrey", drawlabels = FALSE, nlevels = 7)
plot(dens, what = "density", type = "hdr", data = iris[,1:4])
plot(dens, what = "density", type = "persp", col = grey(0.9))

densityMclust.diagnostic Diagnostic plots for mclustDensity estimation

Description

Diagnostic plots for density estimation. Only available for the one-dimensional case.

Usage

densityMclust.diagnostic(object, type = c("cdf", "qq"),
                         col = c("black", "black"),
                         lwd = c(2,1), lty = c(1,1),
                         legend = TRUE, grid = TRUE, ...)

Arguments

object  An object of class 'densityMclust' obtained from a call to the densityMclust function.
type    The type of graph requested: "cdf" = a plot of the estimated CDF versus the empirical distribution function; "qq" = a Q-Q plot of sample quantiles versus the quantiles obtained from the inverse of the estimated CDF.
col     A pair of values for the colors to be used for plotting, respectively, the estimated CDF and the empirical CDF.
lwd     A pair of values for the line widths to be used for plotting, respectively, the estimated CDF and the empirical CDF.
lty     A pair of values for the line types to be used for plotting, respectively, the estimated CDF and the empirical CDF.
legend  A logical indicating if a legend must be added to the plot of the fitted CDF vs the empirical CDF.
grid    A logical indicating if a grid should be added to the plot.
...     Additional arguments.

Details

The two diagnostic plots for density estimation in the one-dimensional case are discussed in Loader (1999, pp. 87-90).

Author(s)

<NAME>

References

Loader C. (1999), Local Regression and Likelihood. New York, Springer.
<NAME>, <NAME>, <NAME> and <NAME> (2012). mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington.

See Also

densityMclust, plot.densityMclust.

Examples

x <- faithful$waiting
dens <- densityMclust(x, plot = FALSE)
plot(dens, x, what = "diagnostic")
# or
densityMclust.diagnostic(dens, type = "cdf")
densityMclust.diagnostic(dens, type = "qq")

diabetes                 Diabetes Data (flawed)

Description

The data set contains three measurements made on 145 non-obese adult patients classified into three groups.

Usage

data(diabetes)

Format

A data frame with the following variables:
class    The type of diabetes: Normal, Overt, and Chemical.
glucose  Area under the plasma glucose curve after a three-hour oral glucose tolerance test (OGTT).
insulin  Area under the plasma insulin curve after a three-hour oral glucose tolerance test (OGTT).
sspg     Steady state plasma glucose.

Details

This dataset is flawed (compare with the reference) and is provided here only for backward compatibility. A 5-variable version of the Reaven and Miller data is available in package rrcov.
The glucose and sspg columns in this dataset are identical to the fpg and insulin columns, respectively, in the rrcov version. The insulin column in this dataset differs from the glucose column in the rrcov version in one entry: observation 104 has the value 45 in the insulin column here, and 455 in the corresponding glucose column of the rrcov version.

Source

<NAME>. and <NAME>. (1979). An attempt to define the nature of chemical diabetes using a multidimensional analysis. Diabetologia 16:17-24.

dmvnorm                  Density of multivariate Gaussian distribution

Description

Efficiently computes the density of observations for a generic multivariate Gaussian distribution.

Usage

dmvnorm(data, mean, sigma, log = FALSE)

Arguments

data   A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
mean   A vector of means for each variable.
sigma  A positive definite covariance matrix.
log    A logical value indicating whether or not the logarithm of the densities should be returned.

Value

A numeric vector whose ith element gives the density of the ith observation in data for the multivariate Gaussian distribution with parameters mean and sigma.

See Also

dnorm, dens

Examples

# univariate
ngrid <- 101
x <- seq(-5, 5, length = ngrid)
dens <- dmvnorm(x, mean = 1, sigma = 5)
plot(x, dens, type = "l")

# bivariate
ngrid <- 101
x1 <- x2 <- seq(-5, 5, length = ngrid)
mu <- c(1,0)
sigma <- matrix(c(1,0.5,0.5,2), 2, 2)
dens <- dmvnorm(as.matrix(expand.grid(x1, x2)), mu, sigma)
dens <- matrix(dens, ngrid, ngrid)
image(x1, x2, dens)
contour(x1, x2, dens, add = TRUE)

dupPartition             Partition the data by grouping together duplicated data

Description

Duplicated data are grouped together to form a basic partition that can be used to start hierarchical agglomeration.

Usage

dupPartition(data)

Arguments

data  A numeric vector, matrix, or data frame of observations. If a matrix or data frame, rows correspond to observations (n) and columns correspond to variables (d).

Value

A vector of indices indicating the partition.

See Also

hc

Examples

dupPartition(iris[,1:4])
dupPartition(iris)
dupPartition(iris$Species)

em                       EM algorithm starting with E-step for parameterized Gaussian mixture models

Description

Implements the EM algorithm for parameterized Gaussian mixture models, starting with the expectation step.

Usage

em(data, modelName, parameters, prior = NULL, control = emControl(),
   warn = NULL, ...)

Arguments

data        A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
modelName   A character string indicating the model. The help file for mclustModelNames describes the available models.
parameters  A named list giving the parameters of the model. The components are as follows:
            pro       Mixing proportions for the components of the mixture. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components.
            mean      The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
            variance  A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
            Vinv      An estimate of the reciprocal hypervolume of the data region.
If set to NULL or a negative value, the default is determined by applying the function hypvol to the data. Used only when pro includes an additional mixing proportion for a noise component.
prior       Specification of a conjugate prior on the means and variances. The default assumes no prior.
control     A list of control parameters for EM. The defaults are set by the call emControl().
warn        A logical value indicating whether or not a warning should be issued when computations fail. The default is warn=FALSE.
...         Catches unused arguments in indirect or list calls via do.call.

Value

A list including the following components:
modelName   A character string identifying the model (same as the input argument).
n           The number of observations in the data.
d           The dimension of the data.
G           The number of mixture components.
z           A matrix whose [i,k]th entry is the conditional probability of the ith observation belonging to the kth component of the mixture.
parameters  pro       A vector whose kth component is the mixing proportion for the kth component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components.
            mean      The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
            variance  A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
            Vinv      The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model.
loglik      The log likelihood for the data in the mixture model.
control     The list of control parameters for EM used.
prior       The specification of a conjugate prior on the means and variances used, NULL if no prior is used.
Attributes:
            "info"     Information on the iteration.
            "WARNING"  An appropriate warning if problems are encountered in the computations.

See Also

emE, ..., emVVV, estep, me, mstep, mclust.options, do.call

Examples

msEst <- mstep(modelName = "EEE", data = iris[,-5], z = unmap(iris[,5]))
names(msEst)
em(modelName = msEst$modelName, data = iris[,-5],
   parameters = msEst$parameters)
do.call("em", c(list(data = iris[,-5]), msEst)) # alternative call

emControl                Set control values for use with the EM algorithm

Description

Supplies a list of values, including tolerances for singularity and convergence assessment, for use with functions involving EM within MCLUST.

Usage

emControl(eps, tol, itmax, equalPro)

Arguments

eps  A scalar tolerance associated with deciding when to terminate computations due to computational singularity in covariances. Smaller values of eps allow computations to proceed nearer to singularity. The default is the relative machine precision .Machine$double.eps, which is approximately 2e-16 on IEEE-compliant machines.
tol  A vector of length two giving relative convergence tolerances for the log-likelihood and for parameter convergence in the inner loop for models with iterative M-step ("VEI", "VEE", "EVE", "VVE", "VEV"), respectively. The default is c(1.e-5, sqrt(.Machine$double.eps)). If only one number is supplied, it is used as the tolerance for the outer iterations, and the tolerance for the inner iterations is as in the default.
itmax     A vector of length two giving integer limits on the number of EM iterations and on the number of iterations in the inner loop for models with iterative M-step ("VEI", "VEE", "EVE", "VVE", "VEV"), respectively. The default is c(.Machine$integer.max, .Machine$integer.max), allowing termination to be completely governed by tol. If only one number is supplied, it is used as the iteration limit for the outer iteration only.
equalPro  Logical variable indicating whether or not the mixing proportions are equal in the model. Default: equalPro = FALSE.

Details

emControl is provided for assigning values and defaults for EM within MCLUST.

Value

A named list in which the names are the names of the arguments and the values are the values supplied to the arguments.

See Also

em, estep, me, mstep, mclustBIC

Examples

irisBIC <- mclustBIC(iris[,-5], control = emControl(tol = 1.e-6))
summary(irisBIC, iris[,-5])
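A small illustration of the conventions described above: a single number supplied to itmax caps only the outer EM loop, and equalPro constrains the mixing proportions to be equal across components.

# cap the outer EM loop at 100 iterations and force equal mixing proportions
ctrl <- emControl(itmax = 100, equalPro = TRUE)
irisBIC <- mclustBIC(iris[,-5], control = ctrl)
summary(irisBIC, iris[,-5])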
prior    The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function priorControl.

control    A list of control parameters for EM. The defaults are set by the call emControl().

warn    A logical value indicating whether or not a warning should be issued whenever a singularity is encountered. The default is given in mclust.options("warn").

...    Catches unused arguments in indirect or list calls via do.call.

Value

A list including the following components:

modelName    A character string identifying the model (same as the input argument).

z    A matrix whose [i,k]th entry is the conditional probability of the ith observation belonging to the kth component of the mixture.

parameters
  pro    A vector whose kth component is the mixing proportion for the kth component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components.
  mean    The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
  variance    A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
  Vinv    The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model.

loglik    The log likelihood for the data in the mixture model.

Attributes:
  "info"    Information on the iteration.
  "WARNING"    An appropriate warning if problems are encountered in the computations.

See Also

me, mstep, mclustVariance, mclust.options.

Examples

msEst <- mstepEEE(data = iris[,-5], z = unmap(iris[,5]))
names(msEst)
emEEE(data = iris[,-5], parameters = msEst$parameters)

entPlot    Plot Entropy Plots

Description

Plot "entropy plots" to help select the number of classes from a hierarchy of combined clusterings.

Usage

entPlot(z, combiM, abc = c("standard", "normalized"), reg = 2, ...)

Arguments

z    A matrix whose [i,k]th entry is the probability that observation i in the data belongs to the kth class, for the initial solution (i.e. before any combining). Typically, the one returned by Mclust/BIC.

combiM    A list of "combining matrices" (as provided by clustCombi), i.e. combiM[[K]] is the matrix whose kth row contains zeros everywhere except in the columns corresponding to the labels of the classes in the (K+1)-classes solution to be merged to get the K-classes combined solution. combiM must contain matrices from K = number of classes in z down to one.

abc    Choose one or more of: "standard", "normalized", to specify whether or not the number of observations involved in each combining step should be taken into account to scale the plots.

reg    The number of parts of the piecewise linear regression for the entropy plots. Choose one or more of: 2 (for 1 change-point), 3 (for 2 change-points).

...    Other graphical arguments to be passed to the plot functions.

Details

Please see the article cited in the references for more details. A clear elbow in the "entropy plot" should suggest that the user consider the corresponding number(s) of class(es).

Value

If abc = "standard", plots the entropy against the number of clusters and the difference between the entropy of successive combined solutions against the number of clusters.
If abc = "normalized", plots the entropy against the cumulated number of observations involved in the successive combining steps and the difference between the entropy of successive combined solutions divided by the number of observations involved in the corresponding combining step against the number of clusters.

Author(s)

<NAME>, <NAME>, <NAME>

References

<NAME>, <NAME>, <NAME>, <NAME> and <NAME> (2010). Combining mixture components for clustering. Journal of Computational and Graphical Statistics, 19(2):332-353.

See Also

plot.clustCombi, combiPlot, clustCombi

Examples

data(Baudry_etal_2010_JCGS_examples)
# run Mclust to get the MclustOutput
output <- clustCombi(data = ex4.2, modelNames = "VII")
entPlot(output$MclustOutput$z, output$combiM, reg = c(2,3))
# legend: in red, the single-change-point piecewise linear regression;
# in blue, the two-change-point piecewise linear regression.

errorBars    Draw error bars on a plot

Description

Draw error bars at x from upper to lower. If horizontal = FALSE (default) bars are drawn vertically, otherwise horizontally.

Usage

errorBars(x, upper, lower, width = 0.1, code = 3, angle = 90, horizontal = FALSE, ...)

Arguments

x    A vector of values where the bars must be drawn.

upper    A vector of upper values where the bars must end.

lower    A vector of lower values where the bars must start.

width    A value specifying the width of the end-point segment.

code    An integer code specifying the kind of arrows to be drawn. For details see arrows.

angle    A value specifying the angle at the arrow edge. For details see arrows.

horizontal    A logical specifying if bars should be drawn vertically (default) or horizontally.

...    Further arguments are passed to arrows.

Examples

par(mfrow=c(2,2))

# Create a simple example dataset
x <- 1:5
n <- c(10, 15, 12, 6, 3)
se <- c(1, 1.2, 2, 1, .5)

# upper and lower bars
b <- barplot(n, ylim = c(0, max(n)*1.5))
errorBars(b, lower = n-se, upper = n+se, lwd = 2, col = "red3")

# one side bars
b <- barplot(n, ylim = c(0, max(n)*1.5))
errorBars(b, lower = n, upper = n+se, lwd = 2, col = "red3", code = 1)

#
plot(x, n, ylim = c(0, max(n)*1.5), pch = 0)
errorBars(x, lower = n-se, upper = n+se, lwd = 2, col = "red3")

#
dotchart(n, labels = x, pch = 19, xlim = c(0, max(n)*1.5))
errorBars(x, lower = n-se, upper = n+se, col = "red3", horizontal = TRUE)

estep    E-step for parameterized Gaussian mixture models

Description

Implements the expectation step of the EM algorithm for parameterized Gaussian mixture models.

Usage

estep(data, modelName, parameters, warn = NULL, ...)

Arguments

data    A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.

modelName    A character string indicating the model. The help file for mclustModelNames describes the available models.

parameters    A named list giving the parameters of the model. The components are as follows:
  pro    Mixing proportions for the components of the mixture. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components.
  mean    The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
  variance    A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
  Vinv    An estimate of the reciprocal hypervolume of the data region.
If set to NULL or a negative value, the default is determined by applying function hypvol to the data. Used only when pro includes an additional mixing proportion for a noise component.

warn    A logical value indicating whether or not a warning should be issued when computations fail. The default is warn=FALSE.

...    Catches unused arguments in indirect or list calls via do.call.

Value

A list including the following components:

modelName    A character string identifying the model (same as the input argument).

z    A matrix whose [i,k]th entry is the conditional probability of the ith observation belonging to the kth component of the mixture.

parameters    The input parameters.

loglik    The log-likelihood for the data in the mixture model.

Attributes
  "WARNING"    An appropriate warning if problems are encountered in the computations.

See Also

estepE, . . . , estepVVV, em, mstep, mclust.options, mclustVariance

Examples

msEst <- mstep(modelName = "VVV", data = iris[,-5], z = unmap(iris[,5]))
names(msEst)
estep(modelName = msEst$modelName, data = iris[,-5], parameters = msEst$parameters)

estepE    E-step in the EM algorithm for a parameterized Gaussian mixture model

Description

Implements the expectation step in the EM algorithm for a parameterized Gaussian mixture model.

Usage

estepE(data, parameters, warn = NULL, ...)
estepV(data, parameters, warn = NULL, ...)
estepEII(data, parameters, warn = NULL, ...)
estepVII(data, parameters, warn = NULL, ...)
estepEEI(data, parameters, warn = NULL, ...)
estepVEI(data, parameters, warn = NULL, ...)
estepEVI(data, parameters, warn = NULL, ...)
estepVVI(data, parameters, warn = NULL, ...)
estepEEE(data, parameters, warn = NULL, ...)
estepEEV(data, parameters, warn = NULL, ...)
estepVEV(data, parameters, warn = NULL, ...)
estepVVV(data, parameters, warn = NULL, ...)
estepEVE(data, parameters, warn = NULL, ...)
estepEVV(data, parameters, warn = NULL, ...)
estepVEE(data, parameters, warn = NULL, ...)
estepVVE(data, parameters, warn = NULL, ...)

Arguments

data    A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.

parameters    The parameters of the model:
  pro    Mixing proportions for the components of the mixture. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components.
  mu    The mean for each component. If there is more than one component, this is a matrix whose columns are the means of the components.
  variance    A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
  Vinv    An estimate of the reciprocal hypervolume of the data region. If not supplied or set to a negative value, the default is determined by applying function hypvol to the data. Used only when pro includes an additional mixing proportion for a noise component.

warn    A logical value indicating whether or not certain warnings should be issued. The default is given by mclust.options("warn").

...    Catches unused arguments in indirect or list calls via do.call.

Value

A list including the following components:

modelName    Character string identifying the model.

z    A matrix whose [i,k]th entry is the conditional probability of the ith observation belonging to the kth component of the mixture.

parameters    The input parameters.

loglik    The log-likelihood for the data in the mixture model.
Attribute "WARNING": An appropriate warning if problems are encountered in the compu- tations. See Also estep, em, mstep, do.call, mclustVariance, mclust.options. Examples msEst <- mstepEII(data = iris[,-5], z = unmap(iris[,5])) names(msEst) estepEII(data = iris[,-5], parameters = msEst$parameters) EuroUnemployment Unemployment data for European countries in 2014 Description The data set contains unemployment rates for 31 European countries for the year 2014. Usage data(EuroUnemployment) Format A data frame with the following variables: TUR Total unemployment rate, i.e. percentage of unemployed persons aged 15-74 in the econom- ically active population. YUR Youth unemployment rate, i.e. percentage of unemployed persons aged 15-24 in the eco- nomically active population. LUR Long-term unemployment rate, i.e. percentage of unemployed persons who have been un- employed for 12 months or more. Source Dataset from EUROSTAT available at https://ec.europa.eu/eurostat/web/lfs/data/database. For conditions of use see https://ec.europa.eu/eurostat/about/policies/copyright. gmmhd Identifying Connected Components in Gaussian Finite Mixture Models for Clustering Description Starting with the density estimate obtained from a fitted Gaussian finite mixture model, cluster cores are identified from the connected components at a given density level. Once cluster cores are identified, the remaining observations are allocated to those cluster cores for which the probability of cluster membership is the highest. Usage gmmhd(object, ngrid = min(round((log(nrow(data)))*10), nrow(data)), dr = list(d = 3, lambda = 1, cumEvalues = NULL, mindir = 2), classify = list(G = 1:5, modelNames = mclust.options("emModelNames")[-c(8, 10)]), ...) ## S3 method for class 'gmmhd' plot(x, what = c("mode", "cores", "clusters"), ...) Arguments object An object returned by Mclust. ngrid An integer specifying the number of grid points used to compute the density levels. dr A list of parameters used in the dimension reduction step. classify A list of parameters used in the classification step. x An object of class 'gmmhd' as returned by the function gmmhd. what A string specifying the type of plot to be produced. See Examples section. ... further arguments passed to or from other methods. Details Model-based clustering associates each component of a finite mixture distribution to a group or cluster. An underlying implicit assumption is that a one-to-one correspondence exists between mixture components and clusters. However, a single Gaussian density may not be sufficient, and two or more mixture components could be needed to reasonably approximate the distribution within a homogeneous group of observations. This function implements the methodology proposed by Scrucca (2016) based on the identification of high density regions of the underlying density function. Starting with an estimated Gaussian finite mixture model, the corresponding density estimate is used to identify the cluster cores, i.e. those data points which form the core of the clusters. These cluster cores are obtained from the connected components at a given density level c. A mode function gives the number of connected components as the level c is varied. Once cluster cores are identified, the remaining observations are allocated to those cluster cores for which the probability of cluster membership is the highest. The method usually improves the identification of non-Gaussian clusters compared to a fully para- metric approach. 
Furthermore, it enables the identification of clusters which cannot be obtained by merging mixture components, and it can be straightforwardly extended to cases of higher dimensionality.

Value

A list of class gmmhd with the following components:

Mclust    The input object of class "Mclust" representing an estimated Gaussian finite mixture model.

MclustDA    An object of class "MclustDA" containing the model used for the classification step.

MclustDR    An object of class "MclustDR" containing the dimension reduction step if performed, otherwise NULL.

x    The data used in the algorithm. This can be the input data or a projection if a preliminary dimension reduction step is performed.

density    The density estimated from the input Gaussian finite mixture model evaluated at the input data.

con    A list of connected components at each step.

nc    A vector giving the number of connected components (i.e. modes) at each step.

pn    Vector of values over a uniform grid of proportions of length ngrid.

qn    Vector of density quantiles corresponding to proportions pn.

pc    Vector of empirical proportions corresponding to quantiles qn.

clusterCores    Vector of cluster cores numerical labels; NAs indicate that an observation does not belong to any cluster core.

clusters    Vector of numerical labels giving the final clustering.

numClusters    An integer giving the number of clusters.

Author(s)

<NAME> <<EMAIL>>

References

Scrucca, L. (2016) Identifying connected components in Gaussian finite mixture models for clustering. Computational Statistics & Data Analysis, 93, 5-17.

See Also

Mclust

Examples

data(faithful)
mod <- Mclust(faithful)
summary(mod)

plot(as.densityMclust(mod), faithful, what = "density",
     points.pch = mclust.options("classPlotSymbols")[mod$classification],
     points.col = mclust.options("classPlotColors")[mod$classification])

GMMHD <- gmmhd(mod)
summary(GMMHD)

plot(GMMHD, what = "mode")
plot(GMMHD, what = "cores")
plot(GMMHD, what = "clusters")

GvHD    GvHD Dataset

Description

GvHD (Graft-versus-Host Disease) data of Brinkman et al. (2007). Two samples of this flow cytometry data, one from a patient with GvHD, and the other from a control patient. The GvHD positive and control samples consist of 9083 and 6809 observations, respectively. Both samples include four biomarker variables, namely, CD4, CD8b, CD3, and CD8. The objective of the analysis is to identify CD3+ CD4+ CD8b+ cell sub-populations present in the GvHD positive sample.

A treatment of this data by combining mixtures is proposed in Baudry et al. (2010).

Usage

data(GvHD)

Format

GvHD.pos (positive patient) is a data frame with 9083 observations on the following 4 variables, which are biomarker measurements: CD4, CD8b, CD3, CD8.

GvHD.control (control patient) is a data frame with 6809 observations on the following 4 variables, which are biomarker measurements: CD4, CD8b, CD3, CD8.

References

<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> and <NAME> (2007). High-content flow cytometry and temporal data analysis for defining a cellular signature of Graft-versus-Host Disease. Biology of Blood and Marrow Transplantation, 13: 691-700.

<NAME>, <NAME>, <NAME> (2008). Automated gating of flow cytometry data via robust model-based clustering. Cytometry A, 73: 321-332.

<NAME>, <NAME>, <NAME>, <NAME> and <NAME> (2010). Combining mixture components for clustering. Journal of Computational and Graphical Statistics, 19(2):332-353.
Examples

data(GvHD)
dat <- GvHD.pos[1:500,]   # only a few lines for a quick example
output <- clustCombi(data = dat)
output   # is of class clustCombi

# plot the hierarchy of combined solutions
plot(output, what = "classification")
# plot some "entropy plots" which may help one to select the number of classes
plot(output, what = "entropy")
# plot the tree structure obtained from combining mixture components
plot(output, what = "tree")

hc    Model-based Agglomerative Hierarchical Clustering

Description

Agglomerative hierarchical clustering based on maximum likelihood criteria for Gaussian mixture models parameterized by eigenvalue decomposition.

Usage

hc(data, modelName = "VVV", use = "VARS",
   partition = dupPartition(data), minclus = 1, ...)

Arguments

data    A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations (n) and columns correspond to variables (d).

modelName    A character string indicating the model to be used in model-based agglomerative hierarchical clustering. Possible models are:
  "E"    equal variance (one-dimensional);
  "V"    spherical, variable variance (one-dimensional);
  "EII"    spherical, equal volume;
  "VII"    spherical, unequal volume;
  "EEE"    ellipsoidal, equal volume, shape, and orientation;
  "VVV"    ellipsoidal, varying volume, shape, and orientation (default).
If hc() is used for initialization of the EM algorithm then the default is taken from mclust.options("hcModelName"). See mclust.options.

use    A character string specifying the type of input variables/data transformation to be used for model-based agglomerative hierarchical clustering. Possible values are:
  "VARS"    original variables (default);
  "STD"    standardized variables (centered and scaled);
  "SPH"    sphered variables (centered, scaled and uncorrelated) computed using SVD;
  "PCS"    principal components computed using SVD on centered variables (i.e. using the covariance matrix);
  "PCR"    principal components computed using SVD on standardized (centered and scaled) variables (i.e. using the correlation matrix);
  "SVD"    scaled SVD transformation.
If hc() is used for initialization of the EM algorithm then the default is taken from mclust.options("hcUse"). See mclust.options. For further details see Scrucca and Raftery (2015).

partition    A numeric or character vector representing a partition of observations (rows) of data. If provided, group merges will start with this partition. Otherwise, each observation is assumed to be in a cluster by itself at the start of agglomeration. Starting with version 5.4.8, by default the function dupPartition is used to start with all duplicated observations in the same group, thereby keeping duplicates in the same group throughout the modelling process.

minclus    A number indicating the number of clusters at which to stop the agglomeration. The default is to stop when all observations have been merged into a single cluster.

...    Arguments for the method-specific hc functions. See for example hcE.

Details

Most models have memory usage of the order of the square of the number of groups in the initial partition for fast execution. Some models, such as equal variance or "EEE", do not admit a fast algorithm under the usual agglomerative hierarchical clustering paradigm. These use less memory but are much slower to execute.

Value

The function hc() returns a numeric two-column matrix in which the ith row gives the minimum index for observations in each of the two clusters merged at the ith stage of agglomerative hierarchical clustering.
Additional information is also returned as attributes. The plotting method plot.hc() draws a dendrogram, which can be based on either the classification loglikelihood or the merge level (number of clusters). For details, see the associated help file.

Note

If modelName = "E" (univariate with equal variances) or modelName = "EII" (multivariate with equal spherical covariances), then the underlying model is the same as that for Ward's method for hierarchical clustering.

References

<NAME>. and <NAME>. (1993). Model-based Gaussian and non-Gaussian Clustering. Biometrics, 49:803-821.

<NAME>. (1998). Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing, 20:270-281.

<NAME>. and <NAME>. (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association, 97:611-631.

<NAME>. and <NAME>. (2015). Improved initialisation of model-based clustering using Gaussian hierarchical partitions. Advances in Data Analysis and Classification, 9/4:447-460.

See Also

hcE, . . . , hcVVV, plot.hc, hclass, mclust.options

Examples

hcTree <- hc(modelName = "VVV", data = iris[,-5])
hcTree
cl <- hclass(hcTree,c(2,3))
table(cl[,"2"])
table(cl[,"3"])
clPairs(iris[,-5], classification = cl[,"2"])
clPairs(iris[,-5], classification = cl[,"3"])

hcE    Model-based Hierarchical Clustering

Description

Agglomerative hierarchical clustering based on maximum likelihood for a Gaussian mixture model parameterized by eigenvalue decomposition.

Usage

hcE(data, partition = NULL, minclus = 1, ...)
hcV(data, partition = NULL, minclus = 1, alpha = 1, ...)
hcEII(data, partition = NULL, minclus = 1, ...)
hcVII(data, partition = NULL, minclus = 1, alpha = 1, ...)
hcEEE(data, partition = NULL, minclus = 1, ...)
hcVVV(data, partition = NULL, minclus = 1, alpha = 1, beta = 1, ...)

Arguments

data    A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.

partition    A numeric or character vector representing a partition of observations (rows) of data. If provided, group merges will start with this partition. Otherwise, each observation is assumed to be in a cluster by itself at the start of agglomeration.

minclus    A number indicating the number of clusters at which to stop the agglomeration. The default is to stop when all observations have been merged into a single cluster.

alpha, beta    Additional tuning parameters needed for initialization in some models. For details, see Fraley 1998. The defaults provided are usually adequate.

...    Catch unused arguments from a do.call call.

Details

Most models have memory usage of the order of the square of the number of groups in the initial partition for fast execution. Some models, such as equal variance or "EEE", do not admit a fast algorithm under the usual agglomerative hierarchical clustering paradigm. These use less memory but are much slower to execute.

Value

A numeric two-column matrix in which the ith row gives the minimum index for observations in each of the two clusters merged at the ith stage of agglomerative hierarchical clustering.

References

<NAME> and <NAME> (1993). Model-based Gaussian and non-Gaussian Clustering. Biometrics 49:803-821.

<NAME> (1998). Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20:270-281.

<NAME> and <NAME> (2002). Model-based clustering, discriminant analysis, and density estimation.
Journal of the American Statistical Association 97:611-631.

See Also

hc, hclass, hcRandomPairs

Examples

hcTree <- hcEII(data = iris[,-5])
cl <- hclass(hcTree,c(2,3))

par(pty = "s", mfrow = c(1,1))
clPairs(iris[,-5],cl=cl[,"2"])
clPairs(iris[,-5],cl=cl[,"3"])

par(mfrow = c(1,2))
dimens <- c(1,2)
coordProj(iris[,-5], classification=cl[,"2"], dimens=dimens)
coordProj(iris[,-5], classification=cl[,"3"], dimens=dimens)

hclass    Classifications from Hierarchical Agglomeration

Description

Determines the classifications corresponding to different numbers of groups given merge pairs from hierarchical agglomeration.

Usage

hclass(hcPairs, G)

Arguments

hcPairs    A numeric two-column matrix in which the ith row gives the minimum index for observations in each of the two clusters merged at the ith stage of agglomerative hierarchical clustering.

G    An integer or vector of integers giving the number of clusters for which the corresponding classifications are wanted.

Value

A matrix with length(G) columns, each column corresponding to a classification. Columns are indexed by the character representation of the integers in G.

See Also

hc, hcE

Examples

hcTree <- hc(modelName="VVV", data = iris[,-5])
cl <- hclass(hcTree,c(2,3))

par(pty = "s", mfrow = c(1,1))
clPairs(iris[,-5],cl=cl[,"2"])
clPairs(iris[,-5],cl=cl[,"3"])

hcRandomPairs    Random hierarchical structure

Description

Create a hierarchical structure using a random hierarchical partition of the data.

Usage

hcRandomPairs(data, seed = NULL, ...)

Arguments

data    A numeric matrix or data frame of observations. If a matrix or data frame, rows correspond to observations and columns correspond to variables.

seed    Optional single value, interpreted as an integer, specifying the seed for the random partition.

...    Catches unused arguments in indirect or list calls via do.call.

Value

A numeric two-column matrix in which the ith row gives the minimum index for observations in each of the two clusters merged at the ith stage of a random agglomerative hierarchical clustering.

See Also

hc, hclass, hcVVV

Examples

data <- iris[,1:4]
randPairs <- hcRandomPairs(data)
str(randPairs)
# start model-based clustering from a random partition
mod <- Mclust(data, initialization = list(hcPairs = randPairs))
summary(mod)

hdrlevels    Highest Density Region (HDR) Levels

Description

Compute the levels of Highest Density Regions (HDRs) for any density and probability levels.

Usage

hdrlevels(density, prob)

Arguments

density    A vector of density values computed on a set of (observed) evaluation points.

prob    A vector of probability levels in the range [0, 1].

Details

From Hyndman (1996), let f(x) be the density function of a random variable X. Then the 100(1 − α)% HDR is the subset R(fα) of the sample space of X such that

    R(fα) = {x : f(x) ≥ fα}

where fα is the largest constant such that Pr(X ∈ R(fα)) ≥ 1 − α.

Value

The function returns a vector of density values corresponding to HDRs at the given probability levels.

Author(s)

<NAME>

References

<NAME> (1996) Computing and Graphing Highest Density Regions. The American Statistician, 50(2):120-126.
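As a supplementary note (not part of the original manual), the following minimal sketch illustrates how hdrlevels can be combined with a density estimate produced by densityMclust; the use of the density component of the fitted object follows the densityMclust documentation, and the chosen probability levels are arbitrary:

dens <- densityMclust(faithful$waiting)
# density values at the observed points are stored in dens$density
(f_hdr <- hdrlevels(dens$density, prob = c(0.25, 0.5, 0.75)))
# observations falling inside the 50% highest density region
range(faithful$waiting[dens$density >= f_hdr[2]])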
See Also

plot.densityMclust

Examples

# Example: univariate Gaussian
x <- rnorm(1000)
f <- dnorm(x)
a <- c(0.5, 0.25, 0.1)
(f_a <- hdrlevels(f, prob = 1-a))

plot(x, f)
abline(h = f_a, lty = 2)
text(max(x), f_a, labels = paste0("f_", a), pos = 3)

mean(f > f_a[1])
range(x[which(f > f_a[1])])
qnorm(1-a[1]/2)

mean(f > f_a[2])
range(x[which(f > f_a[2])])
qnorm(1-a[2]/2)

mean(f > f_a[3])
range(x[which(f > f_a[3])])
qnorm(1-a[3]/2)

# Example 2: univariate Gaussian mixture
set.seed(1)
cl <- sample(1:2, size = 1000, prob = c(0.7, 0.3), replace = TRUE)
x <- ifelse(cl == 1,
            rnorm(1000, mean = 0, sd = 1),
            rnorm(1000, mean = 4, sd = 1))
f <- 0.7*dnorm(x, mean = 0, sd = 1) + 0.3*dnorm(x, mean = 4, sd = 1)

a <- 0.25
(f_a <- hdrlevels(f, prob = 1-a))

plot(x, f)
abline(h = f_a, lty = 2)
text(max(x), f_a, labels = paste0("f_", a), pos = 3)

mean(f > f_a)

# find the regions of HDR
ord <- order(x)
f <- f[ord]
x <- x[ord]
x_a <- x[f > f_a]
j <- which.max(diff(x_a))
region1 <- x_a[c(1,j)]
region2 <- x_a[c(j+1,length(x_a))]

plot(x, f, type = "l")
abline(h = f_a, lty = 2)
abline(v = region1, lty = 3, col = 2)
abline(v = region2, lty = 3, col = 3)

hypvol    Approximate Hypervolume for Multivariate Data

Description

Computes a simple approximation to the hypervolume of a multivariate data set.

Usage

hypvol(data, reciprocal=FALSE)

Arguments

data    A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.

reciprocal    A logical variable indicating whether or not the reciprocal hypervolume is desired rather than the hypervolume itself. The default is to return the hypervolume.

Value

Returns the minimum of the hypervolume computed from simple variable bounds and that computed from variable bounds of the principal component scores. Used for the default hypervolume parameter for the noise component when observations are designated as noise in Mclust and mclustBIC.

References

<NAME> and <NAME> (1998). Detecting features in spatial point processes with clutter via model-based clustering. Journal of the American Statistical Association 93:294-302.

<NAME> and <NAME> (1998). How many clusters? Which clustering method? Answers via model-based cluster analysis. The Computer Journal 41:578-588.

<NAME> and <NAME> (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association 97:611-631.

See Also

mclustBIC

Examples

hypvol(iris[,-5])

icl    ICL for an estimated Gaussian Mixture Model

Description

Computes the ICL (Integrated Complete-data Likelihood) criterion for a Gaussian Mixture Model fitted by Mclust.

Usage

icl(object, ...)

Arguments

object    An object of class 'Mclust' resulting from a call to Mclust.

...    Further arguments passed to or from other methods.

Value

The ICL for the given input MCLUST model.

References

<NAME>., <NAME>., <NAME>. (2000). Assessing a mixture model for clustering with the integrated completed likelihood. IEEE Trans. Pattern Analysis and Machine Intelligence, 22 (7), 719-725.

See Also

Mclust, mclustBIC, mclustICL, bic.

Examples

mod <- Mclust(iris[,1:4])
icl(mod)

imputeData    Missing data imputation via the mix package

Description

Imputes missing data using the mix package.

Usage

imputeData(data, categorical = NULL, seed = NULL, verbose = interactive())

Arguments

data    A numeric vector, matrix, or data frame of observations containing missing values. Categorical variables are allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
categorical    A logical vector whose ith entry is TRUE if the ith variable or column of data is to be interpreted as categorical and FALSE otherwise. The default is to assume that a variable is to be interpreted as categorical only if it is a factor.

seed    A seed for the function rngseed that is used to initialize the random number generator in mix. By default, a seed is chosen uniformly in the interval (.Machine$integer.max/1024, .Machine$integer.max).

verbose    A logical, if TRUE reports information about the iterations of the algorithm.

Value

A dataset of the same dimensions as data with missing values filled in.

References

<NAME>. (1997). Analysis of Incomplete Multivariate Data, Chapman and Hall.

See Also

imputePairs

Examples

# Note that package 'mix' must be installed
data(stlouis, package = "mix")

# impute the continuous variables in the stlouis data
stlimp <- imputeData(stlouis[,-(1:3)])

# plot imputed values
imputePairs(stlouis[,-(1:3)], stlimp)

imputePairs    Pairwise Scatter Plots showing Missing Data Imputations

Description

Creates a scatter plot for each pair of variables in the given data, allowing the display of imputations for missing values in colors and symbols different from those used for nonmissing values.

Usage

imputePairs(data, dataImp,
            symbols = c(1,16), colors = c("black", "red"), labels,
            panel = points, ..., lower.panel = panel, upper.panel = panel,
            diag.panel = NULL, text.panel = textPanel,
            label.pos = 0.5 + has.diag/3, cex.labels = NULL,
            font.labels = 1, row1attop = TRUE, gap = 0.2)

Arguments

data    A numeric vector, matrix, or data frame of observations containing missing values. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.

dataImp    The dataset data with missing values imputed.

symbols    Either an integer or character vector assigning plotting symbols to the nonmissing data and imputed values, respectively. The default is an open circle for the nonmissing data and a filled circle for the imputed values.

colors    Either an integer or character vector assigning colors to the nonmissing data and imputed values, respectively. The default is black for the nonmissing data and red for the imputed values.

labels    As in function pairs.
panel    As in function pairs.
...    As in function pairs.
lower.panel    As in function pairs.
upper.panel    As in function pairs.
diag.panel    As in function pairs.
text.panel    As in function pairs.
label.pos    As in function pairs.
cex.labels    As in function pairs.
font.labels    As in function pairs.
row1attop    As in function pairs.
gap    As in function pairs.

Value

A pairs plot displaying the location of missing and nonmissing values.

References

<NAME>. (1997). Analysis of Incomplete Multivariate Data, Chapman and Hall.

See Also

pairs, imputeData

Examples

# Note that package 'mix' must be installed
data(stlouis, package = "mix")

# impute the continuous variables in the stlouis data
stlimp <- imputeData(stlouis[,-(1:3)])

# plot imputed values
imputePairs(stlouis[,-(1:3)], stlimp)

logLik.Mclust    Log-Likelihood of a Mclust object

Description

Returns the log-likelihood for a 'Mclust' object.

Usage

## S3 method for class 'Mclust'
logLik(object, ...)

Arguments

object    An object of class 'Mclust' resulting from a call to Mclust.

...    Further arguments passed to or from other methods.

Value

Returns an object of class 'logLik' with an element providing the maximized log-likelihood, and further arguments giving the number of (estimated) parameters in the model ("df") and the sample size ("nobs").
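As a supplementary note (not part of the original manual), a minimal sketch of how the "df" and "nobs" quantities mentioned above can be retrieved from the returned object, using the standard attribute accessors for 'logLik' objects:

ll <- logLik(Mclust(iris[,1:4]))
attr(ll, "df")     # number of estimated parameters
attr(ll, "nobs")   # sample size
as.numeric(ll)     # the maximized log-likelihood itself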
Author(s)

<NAME>

See Also

Mclust.

Examples

irisMclust <- Mclust(iris[,1:4])
summary(irisMclust)
logLik(irisMclust)

logLik.MclustDA    Log-Likelihood of a MclustDA object

Description

Returns the log-likelihood for a MclustDA object.

Usage

## S3 method for class 'MclustDA'
logLik(object, data, ...)

Arguments

object    An object of class 'MclustDA' resulting from a call to MclustDA.

data    The data for which the log-likelihood must be computed. If missing, the observed data from the 'MclustDA' object is used.

...    Further arguments passed to or from other methods.

Value

Returns an object of class 'logLik' with an element providing the maximized log-likelihood, and further arguments giving the number of (estimated) parameters in the model ("df") and the sample size ("nobs").

Author(s)

<NAME>

See Also

MclustDA.

Examples

irisMclustDA <- MclustDA(iris[,1:4], iris$Species)
summary(irisMclustDA)
logLik(irisMclustDA)

majorityVote    Majority vote

Description

A function to compute the majority vote (some would say plurality) label in a vector of labels, breaking ties at random.

Usage

majorityVote(x)

Arguments

x    A vector of values, either numerical or not.

Value

A list with the following components:

table    A table of votes for each unique value of x.

ind    An integer specifying which unique value of x corresponds to the majority vote.

majority    A string specifying the majority vote label.

Author(s)

<NAME>

Examples

x <- c("A", "C", "A", "B", "C", "B", "A")
majorityVote(x)

map    Classification given Probabilities

Description

Converts a matrix in which each row sums to 1 to an integer vector specifying for each row the column index of the maximum.

Usage

map(z, warn = mclust.options("warn"), ...)

Arguments

z    A matrix (for example a matrix of conditional probabilities in which each row sums to 1 as produced by the E-step of the EM algorithm).

warn    A logical variable indicating whether or not a warning should be issued when there are some columns of z for which no row attains a maximum.

...    Provided so that lists with elements other than the arguments can be passed in indirect or list calls with do.call.

Value

An integer vector with one entry for each row of z, in which the ith value is the column index at which the ith row of z attains a maximum.

See Also

unmap, estep, em, me.

Examples

emEst <- me(modelName = "VVV", data = iris[,-5], z = unmap(iris[,5]))
map(emEst$z)

mapClass    Correspondence between classifications

Description

Best correspondence between classes given two vectors viewed as alternative classifications of the same object.

Usage

mapClass(a, b)

Arguments

a    A numeric or character vector of class labels.

b    A numeric or character vector of class labels. Must have the same length as a.

Value

A list with two named elements, aTOb and bTOa, which are themselves lists. The aTOb list has a component corresponding to each unique element of a, which gives the element or elements of b that result in the closest class correspondence. The bTOa list has a component corresponding to each unique element of b, which gives the element or elements of a that result in the closest class correspondence.

See Also

classError, table

Examples

a <- rep(1:3, 3)
a
b <- rep(c("A", "B", "C"), 3)
b
mapClass(a, b)

a <- sample(1:3, 9, replace = TRUE)
a
b <- sample(c("A", "B", "C"), 9, replace = TRUE)
b
mapClass(a, b)

Mclust    Model-Based Clustering

Description

Model-based clustering based on parameterized finite Gaussian mixture models. Models are estimated by the EM algorithm initialized by hierarchical model-based agglomerative clustering.
The optimal model is then selected according to BIC.

Usage

Mclust(data, G = NULL, modelNames = NULL, prior = NULL,
       control = emControl(), initialization = NULL,
       warn = mclust.options("warn"), x = NULL,
       verbose = interactive(), ...)

Arguments

data    A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations (n) and columns correspond to variables (d).

G    An integer vector specifying the numbers of mixture components (clusters) for which the BIC is to be calculated. The default is G=1:9.

modelNames    A vector of character strings indicating the models to be fitted in the EM phase of clustering. The default is:
• for univariate data (d = 1): c("E", "V")
• for multivariate data (n > d): all the models available in mclust.options("emModelNames")
• for multivariate data (n <= d): the spherical and diagonal models, i.e. c("EII", "VII", "EEI", "EVI", "VEI", "VVI")
The help file for mclustModelNames describes the available models.

prior    The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function priorControl. Note that, as described in defaultPrior, in the multivariate case only 10 out of 14 models may be used in conjunction with a prior, i.e. those available in MCLUST up to version 4.4.

control    A list of control parameters for EM. The defaults are set by the call emControl().

initialization    A list containing zero or more of the following components:
  hcPairs    A matrix of merge pairs for hierarchical clustering such as produced by function hc. For multivariate data, the default is to compute a hierarchical agglomerative clustering tree by applying function hc with model specified by mclust.options("hcModelName"), and data transformation set by mclust.options("hcUse"). All the input or a subset as indicated by the subset argument is used for initial clustering. The hierarchical clustering results are then used to start the EM algorithm from a given partition. For univariate data, the default is to use quantiles to start the EM algorithm. However, hierarchical clustering could also be used by calling hc with model specified as "V" or "E".
  subset    A logical or numeric vector specifying a subset of the data to be used in the initial hierarchical clustering phase. No subset is used unless the number of observations exceeds the value specified by mclust.options("subset"), which by default is set to 2000 (see mclust.options). Note that in this case, to guarantee exact reproducibility of results, a seed must be specified (see set.seed).
  noise    A logical or numeric vector indicating an initial guess as to which observations are noise in the data. If numeric, the entries should correspond to row indexes of the data. If supplied, a noise term will be added to the model in the estimation.

warn    A logical value indicating whether or not certain warnings (usually related to singularity) should be issued. The default is controlled by mclust.options.

x    An object of class 'mclustBIC'. If supplied, BIC values for models that have already been computed and are available in x are not recomputed. All arguments, with the exception of data, G and modelName, are ignored and their values are set as specified in the attributes of x. Defaults for G and modelNames are taken from x.

verbose    A logical controlling if a text progress bar is displayed during the fitting procedure. By default it is TRUE if the session is interactive, and FALSE otherwise.

...
Catches unused arguments in indirect or list calls via do.call.

Value

An object of class 'Mclust' providing the optimal (according to BIC) mixture model estimation. The details of the output components are as follows:

call    The matched call.

data    The input data matrix.

modelName    A character string denoting the model at which the optimal BIC occurs.

n    The number of observations in the data.

d    The dimension of the data.

G    The optimal number of mixture components.

BIC    All BIC values.

loglik    The log-likelihood corresponding to the optimal BIC.

df    The number of estimated parameters.

bic    BIC value of the selected model.

icl    ICL value of the selected model.

hypvol    The hypervolume parameter for the noise component if required, otherwise set to NULL (see hypvol).

parameters    A list with the following components:
  pro    A vector whose kth component is the mixing proportion for the kth component of the mixture model. If missing, equal proportions are assumed.
  mean    The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
  variance    A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.

z    A matrix whose [i,k]th entry is the probability that observation i in the test data belongs to the kth class.

classification    The classification corresponding to z, i.e. map(z).

uncertainty    The uncertainty associated with the classification.

References

<NAME>., <NAME>., <NAME>. and <NAME>. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, The R Journal, 8/1, pp. 289-317.

<NAME>. and <NAME>. (2002) Model-based clustering, discriminant analysis and density estimation, Journal of the American Statistical Association, 97/458, pp. 611-631.

<NAME>., <NAME>., <NAME>. and <NAME>. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington.

<NAME> and <NAME> (2007) Bayesian regularization for normal mixture estimation and model-based clustering. Journal of Classification, 24, 155-181.

See Also

summary.Mclust, plot.Mclust, priorControl, emControl, hc, mclustBIC, mclustModelNames, mclust.options

Examples

mod1 <- Mclust(iris[,1:4])
summary(mod1)

mod2 <- Mclust(iris[,1:4], G = 3)
summary(mod2, parameters = TRUE)

# Using prior
mod3 <- Mclust(iris[,1:4], prior = priorControl())
summary(mod3)

mod4 <- Mclust(iris[,1:4], prior = priorControl(functionName="defaultPrior", shrinkage=0.1))
summary(mod4)

# Clustering of faithful data with some artificial noise added
nNoise <- 100
set.seed(0)   # to make it reproducible
Noise <- apply(faithful, 2,
               function(x) runif(nNoise, min = min(x)-.1, max = max(x)+.1))
data <- rbind(faithful, Noise)
plot(faithful)
points(Noise, pch = 20, cex = 0.5, col = "lightgrey")
set.seed(0)
NoiseInit <- sample(c(TRUE,FALSE), size = nrow(faithful)+nNoise,
                    replace = TRUE, prob = c(3,1)/4)
mod5 <- Mclust(data, initialization = list(noise = NoiseInit))
summary(mod5, parameter = TRUE)
plot(mod5, what = "classification")

mclust-deprecated    Deprecated Functions in mclust package

Description

These functions are provided for compatibility with older versions of the mclust package only, and may be removed eventually.

Usage

cv.MclustDA(...)
cv1EMtrain(data, labels, modelNames=NULL)
bicEMtrain(data, labels, modelNames=NULL)

Arguments

...
Pass arguments down.

data    A numeric vector or matrix of observations.

labels    Labels for each element or row in the dataset.

modelNames    Vector of model names that should be tested. The default is to select all available model names.

See Also

deprecated

mclust.options    Default values for use with MCLUST package

Description

Set or retrieve default values for use with the MCLUST package.

Usage

mclust.options(...)

Arguments

...    One or more arguments provided in the name = value form, or no argument at all may be given. Available arguments are described in the Details section below.

Details

mclust.options() is provided for assigning or retrieving default values used by various functions in MCLUST.

Available options are:

emModelNames    A vector of 3-character strings that are associated with multivariate models for which EM estimation is available in MCLUST. The current default is all of the multivariate mixture models supported in MCLUST. The help file for mclustModelNames describes the available models.

hcModelName    A character string specifying the multivariate model to be used in model-based agglomerative hierarchical clustering for initialization of the EM algorithm. The available models are the following:
  "EII"    spherical, equal volume;
  "EEE"    ellipsoidal, equal volume, shape, and orientation;
  "VII"    spherical, unequal volume;
  "VVV"    ellipsoidal, varying volume, shape, and orientation (default).

hcUse    A character string specifying the type of input variables/transformation to be used in model-based agglomerative hierarchical clustering for initialization of the EM algorithm. Possible values are:
  "VARS"    original variables;
  "STD"    standardized variables (centered and scaled);
  "SPH"    sphered variables (centered, scaled and uncorrelated) computed using SVD;
  "PCS"    principal components computed using SVD on centered variables (i.e. using the covariance matrix);
  "PCR"    principal components computed using SVD on standardized (centered and scaled) variables (i.e. using the correlation matrix);
  "SVD"    scaled SVD transformation (default);
  "RND"    no transformation is applied but a random hierarchical structure is returned (see hcRandomPairs).
For further details see Scrucca and Raftery (2015), Scrucca et al. (2016).

subset    A value specifying the maximal sample size to be used in the model-based hierarchical clustering to start the EM algorithm. If the data sample size exceeds this value, a random sample is drawn of size specified by subset.

fillEllipses    A logical value specifying whether or not to fill with transparent colors ellipses corresponding to the within-cluster covariances in case of "classification" plot for 'Mclust' objects, or "scatterplot" graphs for 'MclustDA' objects.

bicPlotSymbols    A vector whose entries correspond to graphics symbols for plotting the BIC values output from Mclust and mclustBIC. These are displayed in the legend which appears at the lower right of the BIC plots.

bicPlotColors    A vector whose entries correspond to colors for plotting the BIC curves from output from Mclust and mclustBIC. These are displayed in the legend which appears at the lower right of the BIC plots.

classPlotSymbols    A vector whose entries are either integers corresponding to graphics symbols or single characters for indicating classifications when plotting data. Classes are assigned symbols in the given order.

classPlotColors    A vector whose entries correspond to colors for indicating classifications when plotting data. Classes are assigned colors in the given order.
warn    A logical value indicating whether or not to issue certain warnings. Most of these warnings have to do with situations in which singularities are encountered. The default is warn = FALSE.

The parameter values set via a call to this function will remain in effect for the rest of the session, affecting the subsequent behaviour of the functions for which the given parameters are relevant.

Value

If the argument list is empty the function returns the current list of values. If the argument list is not empty, the returned list is invisible.

References

<NAME>. and <NAME>. (2015) Improved initialisation of model-based clustering using Gaussian hierarchical partitions. Advances in Data Analysis and Classification, 9/4, pp. 447-460.

<NAME>., <NAME>., <NAME>. and <NAME>. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, The R Journal, 8/1, pp. 289-317.

See Also

Mclust, MclustDA, densityMclust, emControl

Examples

opt <- mclust.options()   # save default values
irisBIC <- mclustBIC(iris[,-5])
summary(irisBIC, iris[,-5])

mclust.options(emModelNames = c("EII", "EEI", "EEE"))
irisBIC <- mclustBIC(iris[,-5])
summary(irisBIC, iris[,-5])

mclust.options(opt)   # restore default values
mclust.options()

oldpar <- par(mfrow = c(2,1), no.readonly = TRUE)
n <- with(mclust.options(),
          max(sapply(list(bicPlotSymbols, bicPlotColors), length)))
plot(seq(n), rep(1,n), ylab = "", xlab = "", yaxt = "n",
     pch = mclust.options("bicPlotSymbols"),
     col = mclust.options("bicPlotColors"))
title("mclust.options(\"bicPlotSymbols\") \n mclust.options(\"bicPlotColors\")")
n <- with(mclust.options(),
          max(sapply(list(classPlotSymbols, classPlotColors), length)))
plot(seq(n), rep(1,n), ylab = "", xlab = "", yaxt = "n",
     pch = mclust.options("classPlotSymbols"),
     col = mclust.options("classPlotColors"))
title("mclust.options(\"classPlotSymbols\") \n mclust.options(\"classPlotColors\")")
par(oldpar)

mclust1Dplot    Plot one-dimensional data modeled by an MVN mixture

Description

Plot one-dimensional data given parameters of an MVN mixture model for the data.

Usage

mclust1Dplot(data, parameters = NULL, z = NULL,
             classification = NULL, truth = NULL, uncertainty = NULL,
             what = c("classification", "density", "error", "uncertainty"),
             symbols = NULL, colors = NULL, ngrid = length(data),
             xlab = NULL, ylab = NULL, xlim = NULL, ylim = NULL,
             cex = 1, main = FALSE, ...)

Arguments

data    A numeric vector of observations. Categorical variables are not allowed.

parameters    A named list giving the parameters of an MCLUST model, used to produce superimposing ellipses on the plot. The relevant components are as follows:
  pro    Mixing proportions for the components of the mixture. There should be one more mixing proportion than the number of Gaussian components if the mixture model includes a Poisson noise term.
  mean    The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
  variance    A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.

z    A matrix in which the [i,k]th entry gives the probability of observation i belonging to the kth class. Used to compute classification and uncertainty if those arguments aren't available.

classification    A numeric or character vector representing a classification of observations (rows) of data. If present, the argument z will be ignored.
truth    A numeric or character vector giving a known classification of each data point. If classification or z is also present, this is used for displaying classification errors.

uncertainty    A numeric vector of values in (0,1) giving the uncertainty of each data point. If present, the argument z will be ignored.

what    Choose from one of the following options: "classification" (default), "density", "error", "uncertainty".

symbols    Either an integer or character vector assigning a plotting symbol to each unique class in classification. Elements in symbols correspond to classes in classification in order of appearance in the observations (the order used by the function unique). The default is to use a single plotting symbol |. Classes are delineated by showing them in separate lines above the whole of the data.

colors    Either an integer or character vector assigning a color to each unique class in classification. Elements in colors correspond to classes in order of appearance in the observations (the order used by the function unique). The default is given by mclust.options("classPlotColors").

ngrid    Number of grid points to use for density computation over the interval spanned by the data. The default is the length of the data set.

xlab, ylab    An argument specifying a label for the axes.

xlim, ylim    An argument specifying bounds of the plot. This may be useful when comparing plots.

cex    An argument specifying the size of the plotting symbols. The default value is 1.

main    A logical variable or NULL indicating whether or not to add a title to the plot identifying the dimensions used.

...    Other graphics parameters.

Value

A plot showing the location of the mixture components, classification, uncertainty, density and/or classification errors. Points in the different classes are shown in separate levels above the whole of the data.

See Also

mclust2Dplot, clPairs, coordProj

Examples

n <- 250   ## create artificial data
set.seed(1)
y <- c(rnorm(n,-5), rnorm(n,0), rnorm(n,5))
yclass <- c(rep(1,n), rep(2,n), rep(3,n))

yModel <- Mclust(y)

mclust1Dplot(y, parameters = yModel$parameters, z = yModel$z,
             what = "classification")
mclust1Dplot(y, parameters = yModel$parameters, z = yModel$z,
             what = "error", truth = yclass)
mclust1Dplot(y, parameters = yModel$parameters, z = yModel$z,
             what = "density")
mclust1Dplot(y, z = yModel$z, parameters = yModel$parameters,
             what = "uncertainty")

mclust2Dplot    Plot two-dimensional data modelled by an MVN mixture

Description

Plot two-dimensional data given parameters of an MVN mixture model for the data.

Usage

mclust2Dplot(data, parameters = NULL, z = NULL,
             classification = NULL, truth = NULL, uncertainty = NULL,
             what = c("classification", "uncertainty", "error"),
             addEllipses = TRUE, fillEllipses = mclust.options("fillEllipses"),
             symbols = NULL, colors = NULL,
             xlim = NULL, ylim = NULL, xlab = NULL, ylab = NULL,
             scale = FALSE, cex = 1, PCH = ".",
             main = FALSE, swapAxes = FALSE, ...)

Arguments

data    A numeric matrix or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. In this case the data are two dimensional, so there are two columns.

parameters    A named list giving the parameters of an MCLUST model, used to produce superimposing ellipses on the plot. The relevant components are as follows:
  pro    Mixing proportions for the components of the mixture. There should be one more mixing proportion than the number of Gaussian components if the mixture model includes a Poisson noise term.
  mean    The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
  variance    A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.

z    A matrix in which the [i,k]th entry gives the probability of observation i belonging to the kth class. Used to compute classification and uncertainty if those arguments aren't available.

classification    A numeric or character vector representing a classification of observations (rows) of data. If present, the argument z will be ignored.

truth    A numeric or character vector giving a known classification of each data point. If classification or z is also present, this is used for displaying classification errors.

uncertainty    A numeric vector of values in (0,1) giving the uncertainty of each data point. If present, the argument z will be ignored.

what    Choose from one of the following three options: "classification" (default), "error", "uncertainty".

addEllipses    A logical indicating whether or not to add ellipses with axes corresponding to the within-cluster covariances.

fillEllipses    A logical specifying whether or not to fill ellipses with transparent colors when addEllipses = TRUE.

symbols    Either an integer or character vector assigning a plotting symbol to each unique class in classification. Elements in symbols correspond to classes in order of appearance in the sequence of observations (the order used by the function unique). The default is given by mclust.options("classPlotSymbols").

colors    Either an integer or character vector assigning a color to each unique class in classification. Elements in colors correspond to classes in order of appearance in the sequence of observations (the order used by the function unique). The default is given by mclust.options("classPlotColors").

xlim, ylim    Optional arguments specifying bounds for the ordinate and abscissa of the plot. This may be useful when comparing plots.

xlab, ylab    Optional arguments specifying labels for the x-axis and y-axis.

scale    A logical variable indicating whether or not the two chosen dimensions should be plotted on the same scale, and thus preserve the shape of the distribution. Default: scale=FALSE

cex    An argument specifying the size of the plotting symbols. The default value is 1.

PCH    An argument specifying the symbol to be used when a classification has not been specified for the data. The default value is a small dot ".".

main    A logical variable or NULL indicating whether or not to add a title to the plot identifying the dimensions used.

swapAxes    A logical variable indicating whether or not the axes should be swapped for the plot.

...    Other graphics parameters.

Value

A plot showing the data, together with the location of the mixture components, classification, uncertainty, and/or classification errors.

See Also

surfacePlot, clPairs, coordProj, mclust.options

Examples

faithfulModel <- Mclust(faithful)

mclust2Dplot(faithful, parameters=faithfulModel$parameters,
             z=faithfulModel$z, what = "classification", main = TRUE)

mclust2Dplot(faithful, parameters=faithfulModel$parameters,
             z=faithfulModel$z, what = "uncertainty", main = TRUE)

mclustBIC    BIC for Model-Based Clustering

Description

BIC for parameterized Gaussian mixture models fitted by the EM algorithm initialized by model-based hierarchical clustering.
Usage mclustBIC(data, G = NULL, modelNames = NULL, prior = NULL, control = emControl(), initialization = list(hcPairs = NULL, subset = NULL, noise = NULL), Vinv = NULL, warn = mclust.options("warn"), x = NULL, verbose = interactive(), ...) Arguments data A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. G An integer vector specifying the numbers of mixture components (clusters) for which the BIC is to be calculated. The default is G=1:9, unless the argument x is specified, in which case the default is taken from the values associated with x. modelNames A vector of character strings indicating the models to be fitted in the EM phase of clustering. The help file for mclustModelNames describes the available models. The default is: c("E", "V") for univariate data; mclust.options("emModelNames") for multivariate data (n > d); c("EII", "VII", "EEI", "EVI", "VEI", "VVI"), the spherical and diagonal models, for multivariate data (n <= d); unless the argument x is specified, in which case the default is taken from the values associated with x. prior The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function priorControl. control A list of control parameters for EM. The defaults are set by the call emControl(). initialization A list containing zero or more of the following components: hcPairs A matrix of merge pairs for hierarchical clustering such as produced by function hc. For multivariate data, the default is to compute a hierarchical agglomerative clustering tree by applying function hc with model specified by mclust.options("hcModelName"), and data transformation set by mclust.options("hcUse"). All the input, or a subset as indicated by the subset argument, is used for initial clustering. The hierarchical clustering results are then used to start the EM algorithm from a given partition. For univariate data, the default is to use quantiles to start the EM algorithm. However, hierarchical clustering could also be used by calling hc with model specified as "V" or "E". subset A logical or numeric vector specifying a subset of the data to be used in the initial hierarchical clustering phase. By default no subset is used unless the number of observations exceeds the value specified by mclust.options("subset"). The subset argument is ignored if hcPairs are provided. Note that to guarantee exact reproducibility of results a seed must be specified (see set.seed). noise A logical or numeric vector indicating an initial guess as to which observations are noise in the data. If numeric the entries should correspond to row indexes of the data. If supplied, a noise term will be added to the model in the estimation. Vinv An estimate of the reciprocal hypervolume of the data region. The default is determined by applying function hypvol to the data. Used only if an initial guess as to which observations are noise is supplied. warn A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when estimation fails. The default is controlled by mclust.options. x An object of class 'mclustBIC'. If supplied, mclustBIC will use the settings in x to produce another object of class 'mclustBIC', but with G and modelNames as specified in the arguments. Models that have already been computed in x are not recomputed.
All arguments to mclustBIC except data, G and modelName are ignored and their values are set as specified in the attributes of x. Defaults for G and modelNames are taken from x. verbose A logical controlling if a text progress bar is displayed during the fitting procedure. By default it is TRUE if the session is interactive, and FALSE otherwise. ... Catches unused arguments in indirect or list calls via do.call. Value Returns an object of class 'mclustBIC' containing the Bayesian Information Criterion for the specified mixture models and numbers of clusters. Auxiliary information is returned as attributes. The corresponding print method shows the matrix of values and the top models according to the BIC criterion. References <NAME>., <NAME>., <NAME>. and <NAME>. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, The R Journal, 8/1, pp. 289-317. <NAME>. and <NAME>. (2002) Model-based clustering, discriminant analysis and density estimation, Journal of the American Statistical Association, 97/458, pp. 611-631. <NAME>., <NAME>., <NAME>. and <NAME>. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington. See Also summary.mclustBIC, priorControl, emControl, mclustModel, hc, me, mclustModelNames, mclust.options Examples irisBIC <- mclustBIC(iris[,-5]) irisBIC plot(irisBIC) subset <- sample(1:nrow(iris), 100) irisBIC <- mclustBIC(iris[,-5], initialization=list(subset = subset)) irisBIC plot(irisBIC) irisBIC1 <- mclustBIC(iris[,-5], G=seq(from=1,to=9,by=2), modelNames=c("EII", "EEI", "EEE")) irisBIC1 plot(irisBIC1) irisBIC2 <- mclustBIC(iris[,-5], G=seq(from=2,to=8,by=2), modelNames=c("VII", "VVI", "VVV"), x= irisBIC1) irisBIC2 plot(irisBIC2) nNoise <- 450 set.seed(0) poissonNoise <- apply(apply( iris[,-5], 2, range), 2, function(x, n) runif(n, min = x[1]-.1, max = x[2]+.1), n = nNoise) set.seed(0) noiseInit <- sample(c(TRUE,FALSE),size=nrow(iris)+nNoise,replace=TRUE, prob=c(3,1)) irisNdata <- rbind(iris[,-5], poissonNoise) irisNbic <- mclustBIC(data = irisNdata, G = 1:5, initialization = list(noise = noiseInit)) irisNbic plot(irisNbic) mclustBICupdate Update BIC values for parameterized Gaussian mixture models Description Update the BIC (Bayesian Information Criterion) for parameterized Gaussian mixture models by taking the best from BIC results as returned by mclustBIC. Usage mclustBICupdate(BIC, ...) Arguments BIC Object of class 'mclustBIC' containing the BIC values as returned by a call to mclustBIC. ... Further objects of class 'mclustBIC' to be merged. Value An object of class 'mclustBIC' containing the best values obtained from merging the input arguments. Attributes are also updated according to the best BIC found, so calling Mclust on the resulting output will return the corresponding best model (see example). See Also mclustBIC, Mclust.
Examples data(galaxies, package = "MASS") galaxies <- galaxies / 1000 # use several random starting points BIC <- NULL for(j in 1:100) { rBIC <- mclustBIC(galaxies, verbose = FALSE, initialization = list(hcPairs = hcRandomPairs(galaxies))) BIC <- mclustBICupdate(BIC, rBIC) } pickBIC(BIC) plot(BIC) mod <- Mclust(galaxies, x = BIC) summary(mod) MclustBootstrap Resampling-based Inference for Gaussian finite mixture models Description Bootstrap or jackknife estimation of standard errors and percentile bootstrap confidence intervals for the parameters of a Gaussian mixture model. Usage MclustBootstrap(object, nboot = 999, type = c("bs", "wlbs", "pb", "jk"), max.nonfit = 10*nboot, verbose = interactive(), ...) Arguments object An object of class 'Mclust' or 'densityMclust' providing an estimated Gaussian mixture model. nboot The number of bootstrap replications. type A character string specifying the type of resampling to use: "bs" nonparametric bootstrap; "wlbs" weighted likelihood bootstrap; "pb" parametric bootstrap; "jk" jackknife. max.nonfit The maximum number of non-estimable models allowed. verbose A logical controlling if a text progress bar is displayed during the resampling procedure. By default it is TRUE if the session is interactive, and FALSE otherwise. ... Further arguments passed to or from other methods. Details For a fitted Gaussian mixture model with object$G mixture components and covariances parameterisation object$modelName, this function returns either the bootstrap distribution or the jackknife distribution of the mixture parameters. In the former case, either the nonparametric bootstrap or the weighted likelihood bootstrap approach can be used; the bootstrap procedure generates nboot bootstrap samples of the same size as the original data by resampling with replacement from the observed data. In the jackknife case, the procedure considers all the samples obtained by omitting one observation at a time. The resulting resampling distribution can then be used to obtain standard errors and percentile confidence intervals by the use of the summary.MclustBootstrap function. Value An object of class 'MclustBootstrap' with the following components: n The number of observations in the data. d The dimension of the data. G A value specifying the number of mixture components. modelName A character string specifying the mixture model covariances parameterisation (see mclustModelNames). parameters A list of estimated parameters for the mixture components with the following components: pro a vector of mixing proportions; mean a matrix of means for each component; variance an array of covariance matrices for each component. nboot The number of bootstrap replications if type = "bs" or type = "wlbs". The sample size if type = "jk". type The type of resampling approach used. nonfit The number of resamples that did not converge during the procedure. pro A matrix of dimension (nboot x G) containing the bootstrap distribution for the mixing proportions. mean An array of dimension (nboot x d x G), where d is the dimension of the data, containing the bootstrap distribution for the component means. variance An array of dimension (nboot x d x d x G), where d is the dimension of the data, containing the bootstrap distribution for the component covariances. References <NAME>. and <NAME>. (1997) Bootstrap Methods and Their Applications. Cambridge University Press. <NAME>. and <NAME>. (2000) Finite Mixture Models. Wiley. <NAME>., <NAME>., <NAME>. and <NAME>.
(2015) On Estimation of Parameter Uncertainty in Model-Based Clustering. Submitted to Computational Statistics. See Also summary.MclustBootstrap, plot.MclustBootstrap, Mclust, densityMclust. Examples data(diabetes) X <- diabetes[,-1] modClust <- Mclust(X) bootClust <- MclustBootstrap(modClust) summary(bootClust, what = "se") summary(bootClust, what = "ci") data(acidity) modDens <- densityMclust(acidity, plot = FALSE) modDens <- MclustBootstrap(modDens) summary(modDens, what = "se") summary(modDens, what = "ci") mclustBootstrapLRT Bootstrap Likelihood Ratio Test for the Number of Mixture Components Description Perform the likelihood ratio test (LRT) for assessing the number of mixture components in a specific finite mixture model parameterisation. The observed significance is approximated by using the (parametric) bootstrap for the likelihood ratio test statistic (LRTS). Usage mclustBootstrapLRT(data, modelName = NULL, nboot = 999, level = 0.05, maxG = NULL, verbose = interactive(), ...) ## S3 method for class 'mclustBootstrapLRT' print(x, ...) ## S3 method for class 'mclustBootstrapLRT' plot(x, G = 1, hist.col = "grey", hist.border = "lightgrey", breaks = "Scott", col = "forestgreen", lwd = 2, lty = 3, main = NULL, ...) Arguments data A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. modelName A character string indicating the mixture model to be fitted. The help file for mclustModelNames describes the available models. nboot The number of bootstrap replications to use (by default 999). level The significance level to be used to terminate the sequential bootstrap procedure. maxG The maximum number of mixture components G to test. If not provided the procedure is stopped when a test is not significant at the specified level. verbose A logical controlling if a text progress bar is displayed during the bootstrap procedure. By default it is TRUE if the session is interactive, and FALSE otherwise. ... Further arguments passed to or from other methods. In particular, see the optional arguments in mclustBIC. x An 'mclustBootstrapLRT' object. G A value specifying the number of components for which to plot the bootstrap distribution. hist.col The colour to be used to fill the bars of the histogram. hist.border The color of the border around the bars of the histogram. breaks See the argument in function hist. col, lwd, lty The color, line width and line type to be used to represent the observed LRT statistic. main The title for the graph. Details The implemented algorithm for computing the LRT observed significance using the bootstrap is the following. Let G0 be the number of mixture components under the null hypothesis versus G1 = G0 + 1 under the alternative. Bootstrap samples are drawn by simulating data under the null hypothesis. Then, the p-value may be approximated using eq. (13) in McLachlan and Rathnayake (2014). Equivalently, using the notation of Davison and Hinkley (1997), it may be computed as

p-value = (1 + #{LRT*_b >= LRT_obs}) / (B + 1)

where B is the number of bootstrap samples, LRT_obs is the LRTS computed on the observed data, and LRT*_b is the LRTS computed on the bth bootstrap sample. Value An object of class 'mclustBootstrapLRT' with the following components: G A vector of number of components tested under the null hypothesis. modelName A character string specifying the mixture model as provided in the function call (see above). obs The observed values of the LRTS.
boot A matrix of dimension nboot x the number of components tested containing the bootstrap values of LRTS. p.value A vector of p-values. References <NAME>. and <NAME>. (1997) Bootstrap Methods and Their Applications. Cambridge University Press. <NAME>. (1987) On bootstrapping the likelihood ratio test statistic for the number of components in a normal mixture. Applied Statistics, 36, 318-324. <NAME>. and <NAME>. (2000) Finite Mixture Models. Wiley. <NAME>. and <NAME>. (2014) On the number of components in a Gaussian mixture model. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 4(5), pp. 341-355. See Also mclustBIC, mclustICL, Mclust Examples data(faithful) faithful.boot = mclustBootstrapLRT(faithful, modelName = "VVV") faithful.boot plot(faithful.boot, G = 1) plot(faithful.boot, G = 2) MclustDA MclustDA discriminant analysis Description Discriminant analysis based on Gaussian finite mixture modeling. Usage MclustDA(data, class, G = NULL, modelNames = NULL, modelType = c("MclustDA", "EDDA"), prior = NULL, control = emControl(), initialization = NULL, warn = mclust.options("warn"), verbose = interactive(), ...) Arguments data A data frame or matrix giving the training data. class A vector giving the known class labels (either a numerical value or a character string) for the observations in the training data. G An integer vector specifying the numbers of mixture components (clusters) for which the BIC is to be calculated within each class. The default is G = 1:5. A different set of mixture components for each class can be specified by providing this argument with a list of integers for each class. See the examples below. modelNames A vector of character strings indicating the models to be fitted by EM within each class (see the description in mclustModelNames). A different set of mixture models for each class can be specified by providing this argument with a list of character strings. See the examples below. modelType A character string specifying whether the models given in modelNames should fit a different number of mixture components and covariance structures for each class ("MclustDA", the default) or should be constrained to have a single component for each class with the same covariance structure among classes ("EDDA"). See the Details section and the examples below. prior The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function priorControl. control A list of control parameters for EM. The defaults are set by the call emControl(). initialization A list containing zero or more of the following components: hcPairs A matrix of merge pairs for hierarchical clustering such as produced by function hc. The default is to compute a hierarchical clustering tree by applying function hc with modelName = "E" to univariate data and modelName = "VVV" to multivariate data or a subset as indicated by the subset argument. The hierarchical clustering results are used as starting values for EM. subset A logical or numeric vector specifying a subset of the data to be used in the initial hierarchical clustering phase. warn A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when estimation fails. The default is controlled by mclust.options. verbose A logical controlling if a text progress bar is displayed during the fitting procedure. By default it is TRUE if the session is interactive, and FALSE otherwise. ...
Further arguments passed to or from other methods. Details The "EDDA" method for discriminant analysis is described in Bensmail and Celeux (1996), while "MclustDA" in Fraley and Raftery (2002). Value An object of class 'MclustDA' providing the optimal (according to BIC) mixture model. The details of the output components are as follows: call The matched call. data The input data matrix. class The input class labels. type A character string specifying the modelType estimated. models A list of Mclust objects containing information on the fitted model for each class. n The total number of observations in the data. d The dimension of the data. bic Optimal BIC value. loglik Log-likelihood for the selected model. df Number of estimated parameters. Author(s) <NAME> References <NAME>., <NAME>., <NAME>. and <NAME>. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, The R Journal, 8/1, pp. 289-317. <NAME>. and <NAME>. (2002) Model-based clustering, discriminant analysis and density estimation, Journal of the American Statistical Association, 97/458, pp. 611-631. <NAME>., <NAME>., <NAME>. and <NAME>. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington. <NAME>., and <NAME>. (1996) Regularized Gaussian Discriminant Analysis Through Eigenvalue Decomposition. Journal of the American Statistical Association, 91, 1743-1748. See Also summary.MclustDA, plot.MclustDA, predict.MclustDA, classError Examples odd <- seq(from = 1, to = nrow(iris), by = 2) even <- odd + 1 X.train <- iris[odd,-5] Class.train <- iris[odd,5] X.test <- iris[even,-5] Class.test <- iris[even,5] # common EEE covariance structure (which is essentially equivalent to linear discriminant analysis) irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA", modelNames = "EEE") summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) # common covariance structure selected by BIC irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA") summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) # general covariance structure selected by BIC irisMclustDA <- MclustDA(X.train, Class.train) summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) plot(irisMclustDA) plot(irisMclustDA, dimens = 3:4) plot(irisMclustDA, dimens = 4) plot(irisMclustDA, what = "classification") plot(irisMclustDA, what = "classification", newdata = X.test) plot(irisMclustDA, what = "classification", dimens = 3:4) plot(irisMclustDA, what = "classification", newdata = X.test, dimens = 3:4) plot(irisMclustDA, what = "classification", dimens = 4) plot(irisMclustDA, what = "classification", dimens = 4, newdata = X.test) plot(irisMclustDA, what = "train&test", newdata = X.test) plot(irisMclustDA, what = "train&test", newdata = X.test, dimens = 3:4) plot(irisMclustDA, what = "train&test", newdata = X.test, dimens = 4) plot(irisMclustDA, what = "error") plot(irisMclustDA, what = "error", dimens = 3:4) plot(irisMclustDA, what = "error", dimens = 4) plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test) plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test, dimens = 3:4) plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test, dimens = 4) # simulated 1D data n <- 250
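# (the following block simulates a three-component univariate mixture;
# odd-indexed observations form the training set, even-indexed ones the test set)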
set.seed(1) triModal <- c(rnorm(n,-5), rnorm(n,0), rnorm(n,5)) triClass <- c(rep(1,n), rep(2,n), rep(3,n)) odd <- seq(from = 1, to = length(triModal), by = 2) even <- odd + 1 triMclustDA <- MclustDA(triModal[odd], triClass[odd]) summary(triMclustDA, parameters = TRUE) summary(triMclustDA, newdata = triModal[even], newclass = triClass[even]) plot(triMclustDA, what = "scatterplot") plot(triMclustDA, what = "classification") plot(triMclustDA, what = "classification", newdata = triModal[even]) plot(triMclustDA, what = "train&test", newdata = triModal[even]) plot(triMclustDA, what = "error") plot(triMclustDA, what = "error", newdata = triModal[even], newclass = triClass[even]) # simulated 2D cross data data(cross) odd <- seq(from = 1, to = nrow(cross), by = 2) even <- odd + 1 crossMclustDA <- MclustDA(cross[odd,-1], cross[odd,1]) summary(crossMclustDA, parameters = TRUE) summary(crossMclustDA, newdata = cross[even,-1], newclass = cross[even,1]) plot(crossMclustDA, what = "scatterplot") plot(crossMclustDA, what = "classification") plot(crossMclustDA, what = "classification", newdata = cross[even,-1]) plot(crossMclustDA, what = "train&test", newdata = cross[even,-1]) plot(crossMclustDA, what = "error") plot(crossMclustDA, what = "error", newdata =cross[even,-1], newclass = cross[even,1]) MclustDR Dimension reduction for model-based clustering and classification Description A dimension reduction method for visualizing the clustering or classification structure obtained from a finite mixture of Gaussian densities. Usage MclustDR(object, lambda = 1, normalized = TRUE, Sigma, tol = sqrt(.Machine$double.eps)) Arguments object An object of class 'Mclust' or 'MclustDA' resulting from a call to, respectively, Mclust or MclustDA. lambda A tuning parameter in the range [0,1] as described in Scrucca (2014). The directions that mostly separate the estimated clusters or classes are recovered using the default value 1. Users can set this parameter to balance the relative importance of information derived from cluster/class means and covariances. For instance, a value of 0.5 gives equal importance to differences in means and covariances among clusters/classes. normalized Logical. If TRUE directions are normalized to unit norm. Sigma Marginal covariance matrix of the data. If not provided, it is estimated by the MLE of the observed data. tol A tolerance value. Details The method aims at reducing the dimensionality by identifying a set of linear combinations, ordered by importance as quantified by the associated eigenvalues, of the original features which capture most of the clustering or classification structure contained in the data. Information on the dimension reduction subspace is obtained from the variation on group means and, depending on the estimated mixture model, on the variation on group covariances (see Scrucca, 2010). Observations may then be projected onto such a reduced subspace, thus providing summary plots which help to visualize the underlying structure. The method has been extended to the supervised case, i.e. when the true classification is known (see Scrucca, 2014). This implementation doesn’t provide a formal procedure for the selection of dimensionality. A future release will include one or more methods. Value An object of class 'MclustDR' with the following components: call The matched call type A character string specifying the type of model for which the dimension reduction is computed. Currently, possible values are "Mclust" for clustering, and "MclustDA" or "EDDA" for classification.
x The data matrix. Sigma The covariance matrix of the data. mixcomp A numeric vector specifying the mixture component of each data observation. class A factor specifying the classification of each data observation. For model-based clustering this is equivalent to the corresponding mixture component. For model-based classification this is the known classification. G The number of mixture components. modelName The name of the parameterization of the estimated mixture model(s). See mclustModelNames. mu A matrix of means for each mixture component. sigma An array of covariance matrices for each mixture component. pro The estimated prior for each mixture component. M The kernel matrix. lambda The tuning parameter. evalues The eigenvalues from the generalized eigen-decomposition of the kernel matrix. raw.evectors The raw eigenvectors from the generalized eigen-decomposition of the kernel matrix, ordered according to the eigenvalues. basis The basis of the estimated dimension reduction subspace. std.basis The basis of the estimated dimension reduction subspace standardized to variables having unit standard deviation. numdir The dimension of the projection subspace. dir The estimated directions, i.e. the data projected onto the estimated dimension reduction subspace. Author(s) <NAME> References Scrucca, L. (2010) Dimension reduction for model-based clustering. Statistics and Computing, 20(4), pp. 471-484. Scrucca, L. (2014) Graphical Tools for Model-based Mixture Discriminant Analysis. Advances in Data Analysis and Classification, 8(2), pp. 147-165. See Also summary.MclustDR, plot.MclustDR, Mclust, MclustDA. Examples # clustering data(diabetes) mod <- Mclust(diabetes[,-1]) summary(mod) dr <- MclustDR(mod) summary(dr) plot(dr, what = "scatterplot") plot(dr, what = "evalues") dr <- MclustDR(mod, lambda = 0.5) summary(dr) plot(dr, what = "scatterplot") plot(dr, what = "evalues") # classification data(banknote) da <- MclustDA(banknote[,2:7], banknote$Status, modelType = "EDDA") dr <- MclustDR(da) summary(dr) da <- MclustDA(banknote[,2:7], banknote$Status) dr <- MclustDR(da) summary(dr) MclustDRsubsel Subset selection for GMMDR directions based on BIC Description Implements a subset selection method for selecting the relevant directions spanning the dimension reduction subspace for visualizing the clustering or classification structure obtained from a finite mixture of Gaussian densities. Usage MclustDRsubsel(object, G = 1:9, modelNames = mclust.options("emModelNames"), ..., bic.stop = 0, bic.cutoff = 0, mindir = 1, verbose = interactive()) Arguments object An object of class 'MclustDR' resulting from a call to MclustDR. G An integer vector specifying the numbers of mixture components or clusters. modelNames A vector of character strings indicating the models to be fitted. See mclustModelNames for a description of the available models. ... Further arguments passed through Mclust or MclustDA. bic.stop A criterion to terminate the search. If the maximal BIC difference is less than bic.stop then the algorithm stops. Two typical values are: 0: the algorithm stops when the BIC difference becomes negative (default); -Inf: the algorithm continues until all directions have been selected. bic.cutoff A value specifying how to select the simplest "best" model within bic.cutoff from the maximum value achieved. Setting this to 0 (default) simply selects the model with the largest BIC difference. mindir An integer value specifying the minimum number of directions to be estimated.
verbose A logical or integer value specifying if and how much detailed information should be reported during the iterations of the algorithm. Possible values are: 0 or FALSE: no trace info is shown; 1 or TRUE: trace info is shown at each step of the search; 2: more detailed trace info is shown. Details The GMMDR method aims at reducing the dimensionality by identifying a set of linear combinations, ordered by importance as quantified by the associated eigenvalues, of the original features which capture most of the clustering or classification structure contained in the data. This is implemented in MclustDR. The MclustDRsubsel function implements the greedy forward search algorithm discussed in Scrucca (2010) to prune the set of all GMMDR directions. The criterion used to select the relevant directions is based on the BIC difference between a clustering model and a model in which the feature proposal has no clustering relevance. The steps are the following: 1. Select the first feature to be the one which maximizes the BIC difference between the best clustering model and the model which assumes no clustering, i.e. a single component. 2. Select the next feature amongst those not previously included, to be the one which maximizes the BIC difference. 3. Iterate the previous step until all the BIC differences for the inclusion of a feature become less than bic.stop. At each step, the search over the model space is performed with respect to the model parametrisation and the number of clusters. Value An object of class 'MclustDRsubsel' which inherits from 'MclustDR', so it has the same components of the latter plus the following: basisx The basis of the estimated dimension reduction subspace expressed in terms of the original variables. std.basisx The basis of the estimated dimension reduction subspace expressed in terms of the original variables standardized to have unit standard deviation. Author(s) <NAME> References Scrucca, L. (2010) Dimension reduction for model-based clustering. Statistics and Computing, 20(4), pp. 471-484. Scrucca, L. (2014) Graphical Tools for Model-based Mixture Discriminant Analysis. Advances in Data Analysis and Classification, 8(2), pp. 147-165. See Also MclustDR, Mclust, MclustDA. Examples # clustering data(crabs, package = "MASS") x <- crabs[,4:8] class <- paste(crabs$sp, crabs$sex, sep = "|") mod <- Mclust(x) table(class, mod$classification) dr <- MclustDR(mod) summary(dr) plot(dr) drs <- MclustDRsubsel(dr) summary(drs) table(class, drs$classification) plot(drs, what = "scatterplot") plot(drs, what = "pairs") plot(drs, what = "contour") plot(drs, what = "boundaries") plot(drs, what = "evalues") # classification data(banknote) da <- MclustDA(banknote[,2:7], banknote$Status) table(banknote$Status, predict(da)$class) dr <- MclustDR(da) summary(dr) drs <- MclustDRsubsel(dr) summary(drs) table(banknote$Status, predict(drs)$class) plot(drs, what = "scatterplot") plot(drs, what = "classification") plot(drs, what = "boundaries") mclustICL ICL Criterion for Model-Based Clustering Description ICL (Integrated Complete-data Likelihood) for parameterized Gaussian mixture models fitted by the EM algorithm initialized by model-based hierarchical clustering. Usage mclustICL(data, G = NULL, modelNames = NULL, initialization = list(hcPairs = NULL, subset = NULL, noise = NULL), x = NULL, ...) ## S3 method for class 'mclustICL' summary(object, G, modelNames, ...) Arguments data A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed.
If a matrix or data frame, rows correspond to observations and columns correspond to variables. G An integer vector specifying the numbers of mixture components (clusters) for which the criteria should be calculated. The default is G = 1:9. modelNames A vector of character strings indicating the models to be fitted in the EM phase of clustering. The help file for mclustModelNames describes the available models. The default is: c("E", "V") for univariate data; mclust.options("emModelNames") for multivariate data (n > d); c("EII", "VII", "EEI", "EVI", "VEI", "VVI"), the spherical and diagonal models, for multivariate data (n <= d). initialization A list containing zero or more of the following components: hcPairs A matrix of merge pairs for hierarchical clustering such as produced by function hc. For multivariate data, the default is to compute a hierarchical clustering tree by applying function hc with modelName = "VVV" to the data or a subset as indicated by the subset argument. The hierarchical clustering results are used to start EM. For univariate data, the default is to use quantiles to start EM. subset A logical or numeric vector specifying a subset of the data to be used in the initial hierarchical clustering phase. x An object of class 'mclustICL'. If supplied, mclustICL will use the settings in x to produce another object of class 'mclustICL', but with G and modelNames as specified in the arguments. Models that have already been computed in x are not recomputed. All arguments to mclustICL except data, G and modelName are ignored and their values are set as specified in the attributes of x. Defaults for G and modelNames are taken from x. ... Further arguments used in the call to Mclust. See also mclustBIC. object An object of class 'mclustICL' resulting from a call to mclustICL. Value Returns an object of class 'mclustICL' containing the ICL criterion for the specified mixture models and numbers of clusters. The corresponding print method shows the matrix of values and the top models according to the ICL criterion. The summary method shows only the top models. References <NAME>., <NAME>., <NAME>. (2000). Assessing a mixture model for clustering with the integrated completed likelihood. IEEE Trans. Pattern Analysis and Machine Intelligence, 22 (7), 719-725. <NAME>., <NAME>., <NAME>. and <NAME>. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, The R Journal, 8/1, pp. 289-317. See Also plot.mclustICL, Mclust, mclustBIC, mclustBootstrapLRT, bic, icl Examples data(faithful) faithful.ICL <- mclustICL(faithful) faithful.ICL summary(faithful.ICL) plot(faithful.ICL) # compare with faithful.BIC <- mclustBIC(faithful) faithful.BIC plot(faithful.BIC) mclustLoglik Log-likelihood from a table of BIC values for parameterized Gaussian mixture models Description Compute the maximal log-likelihood from a table of BIC values contained in a 'mclustBIC' object as returned by function mclustBIC. Usage mclustLoglik(object, ...) Arguments object An object of class 'mclustBIC' containing the BIC values as returned by a call to mclustBIC. ... Catches unused arguments in an indirect or list call via do.call. Value An object of class 'mclustLoglik' containing the maximal log-likelihood values for the Gaussian mixture models provided as input. See Also mclustBIC.
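As a quick consistency check, here is an illustrative sketch (added here, not from the original manual) showing that the log-likelihoods returned by mclustLoglik invert mclust's BIC convention BIC = 2 log L - npar log(n); nMclustParams() supplies the parameter count, and the matrix indexing by G and model name below is assumed to follow the printed layout of the BIC table:

BIC <- mclustBIC(iris[,1:4])
LL <- mclustLoglik(BIC)
npar <- nMclustParams("VVV", d = 4, G = 2)
2 * LL["2", "VVV"] - npar * log(nrow(iris))  # should match BIC["2", "VVV"]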
Examples BIC <- mclustBIC(iris[,1:4]) mclustLoglik(BIC) mclustModel Best model based on BIC Description Determines the best model from clustering via mclustBIC for a given set of model parameterizations and numbers of components. Usage mclustModel(data, BICvalues, G, modelNames, ...) Arguments data The matrix or vector of observations used to generate 'object'. BICvalues An 'mclustBIC' object, which is the result of applying mclustBIC to data. G A vector of integers giving the numbers of mixture components (clusters) from which the best model according to BIC will be selected (as.character(G) must be a subset of the row names of BICvalues). The default is to select the best model for all numbers of mixture components used to obtain BICvalues. modelNames A vector of character strings giving the model parameterizations from which the best model according to BIC will be selected (as.character(model) must be a subset of the column names of BICvalues). The default is to select the best model for all parameterizations used to obtain BICvalues. ... Not used. For generic/method consistency. Value A list giving the optimal (according to BIC) parameters, conditional probabilities z, and log-likelihood, together with the associated classification and its uncertainty. The details of the output components are as follows: modelName A character string indicating the model. The help file for mclustModelNames describes the available models. n The number of observations in the data. d The dimension of the data. G The number of components in the Gaussian mixture model corresponding to the optimal BIC. bic The optimal BIC value. loglik The log-likelihood corresponding to the optimal BIC. parameters A list with the following components: pro A vector whose kth component is the mixing proportion for the kth component of the mixture model. If missing, equal proportions are assumed. mean The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model. variance A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details. Vinv The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. z A matrix whose [i,k]th entry is the probability that observation i in the test data belongs to the kth class. See Also mclustBIC Examples irisBIC <- mclustBIC(iris[,-5]) mclustModel(iris[,-5], irisBIC) mclustModel(iris[,-5], irisBIC, G = 1:6, modelNames = c("VII", "VVI", "VVV")) mclustModelNames MCLUST Model Names Description Description of model names used in the MCLUST package. Usage mclustModelNames(model) Arguments model A string specifying the model.
Details The following models are available in package mclust:
univariate mixture
"E" equal variance (one-dimensional)
"V" variable/unequal variance (one-dimensional)
multivariate mixture
"EII" spherical, equal volume
"VII" spherical, unequal volume
"EEI" diagonal, equal volume and shape
"VEI" diagonal, varying volume, equal shape
"EVI" diagonal, equal volume, varying shape
"VVI" diagonal, varying volume and shape
"EEE" ellipsoidal, equal volume, shape, and orientation
"VEE" ellipsoidal, equal shape and orientation (*)
"EVE" ellipsoidal, equal volume and orientation (*)
"VVE" ellipsoidal, equal orientation (*)
"EEV" ellipsoidal, equal volume and equal shape
"VEV" ellipsoidal, equal shape
"EVV" ellipsoidal, equal volume (*)
"VVV" ellipsoidal, varying volume, shape, and orientation
single component
"X" univariate normal
"XII" spherical multivariate normal
"XXI" diagonal multivariate normal
"XXX" ellipsoidal multivariate normal
(*) new models in mclust version >= 5.0.0.
Value Returns a list with the following components: model a character string indicating the model (as in input). type the description of the indicated model (see Details section). See Also Mclust, mclustBIC Examples mclustModelNames("E") mclustModelNames("EEE") mclustModelNames("VVV") mclustModelNames("XXI") MclustSSC MclustSSC semi-supervised classification Description Semi-supervised classification based on Gaussian finite mixture modeling. Usage MclustSSC(data, class, G = NULL, modelNames = NULL, prior = NULL, control = emControl(), warn = mclust.options("warn"), verbose = interactive(), ...) Arguments data A data frame or matrix giving the training data. class A vector giving the known class labels (either a numerical value or a character string) for the observations in the training data. Observations with unknown class are encoded as NA. G An integer value specifying the number of mixture components or classes. By default it is set equal to the number of known classes. See the examples below. modelNames A vector of character strings indicating the models to be fitted by EM (see the description in mclustModelNames). See the examples below. prior The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function priorControl. control A list of control parameters for EM. The defaults are set by the call emControl(). warn A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when estimation fails. The default is controlled by mclust.options. verbose A logical controlling if a text progress bar is displayed during the fitting procedure. By default it is TRUE if the session is interactive, and FALSE otherwise. ... Further arguments passed to or from other methods. Details The semi-supervised approach implemented in MclustSSC() is a simple Gaussian mixture model for classification where at the first M-step only observations with known class labels are used for parameter estimation. Then, a standard EM algorithm is used for updating the probability of class membership for unlabelled data while keeping fixed the probabilities for labelled data. Value An object of class 'MclustSSC' providing the optimal (according to BIC) Gaussian mixture model for semi-supervised classification. The details of the output components are as follows: call The matched call. data The input data matrix. class The input class labels (including NAs for unknown labels).
modelName A character string specifying the "best" estimated model. G A numerical value specifying the number of mixture components or classes of the "best" estimated model. n The total number of observations in the data. d The dimension of the data. BIC All BIC values. loglik Log-likelihood for the selected model. df Number of estimated parameters. bic Optimal BIC value. parameters A list with the following components: pro A vector whose kth component is the mixing proportion for the kth component of the mixture model. mean The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model. variance A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details. z A matrix whose [i,k]th entry is the probability that observation i in the test data belongs to the kth class. classification The classification corresponding to z, i.e. map(z). prior The prior used (if any). control A list of control parameters used in the EM algorithm. Author(s) <NAME> References <NAME>., <NAME>., <NAME>. and <NAME>. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, The R Journal, 8/1, pp. 289-317. See Also summary.MclustSSC, plot.MclustSSC, predict.MclustSSC Examples # Simulate two overlapping groups n <- 200 pars <- list(pro = c(0.5, 0.5), mean = matrix(c(-1,1), nrow = 2, ncol = 2, byrow = TRUE), variance = mclustVariance("EII", d = 2, G = 2)) pars$variance$sigmasq <- 1 data <- sim("EII", parameters = pars, n = n, seed = 12) class <- data[,1] X <- data[,-1] clPairs(X, class, symbols = c(1,2), main = "Full classified data") # Randomly remove labels cl <- class; cl[sample(1:n, size = 195)] <- NA table(cl, useNA = "ifany") clPairs(X, ifelse(is.na(cl), 0, class), symbols = c(0, 16, 17), colors = c("grey", 4, 2), main = "Partially classified data") # Fit semi-supervised classification model mod_SSC <- MclustSSC(X, cl) summary(mod_SSC, parameters = TRUE) pred_SSC <- predict(mod_SSC) table(Predicted = pred_SSC$classification, Actual = class) ngrid <- 50 xgrid <- seq(-3, 3, length.out = ngrid) ygrid <- seq(-4, 4.5, length.out = ngrid) xygrid <- expand.grid(xgrid, ygrid) pred_SSC <- predict(mod_SSC, newdata = xygrid) col <- mclust.options("classPlotColors")[class] pch <- class pch[!is.na(cl)] = ifelse(cl[!is.na(cl)] == 1, 19, 17) plot(X, pch = pch, col = col) contour(xgrid, ygrid, matrix(pred_SSC$z[,1], ngrid, ngrid), add = TRUE, levels = 0.5, drawlabels = FALSE, lty = 2, lwd = 2) mclustVariance Template for variance specification for parameterized Gaussian mixture models Description Specification of variance parameters for the various types of Gaussian mixture models. Usage mclustVariance(modelName, d = NULL, G = 2) Arguments modelName A character string specifying the model. d An integer specifying the dimension of the data. G An integer specifying the number of components in the mixture model. Details The variance component in the parameters list from the output of e.g. me or mstep, or the input to e.g. estep, may contain one or more of the following arguments, depending on the model: modelName A character string indicating the model. d The dimension of the data. G The number of components in the mixture model. sigmasq for the one-dimensional models ("E", "V") and spherical models ("EII", "VII").
This is either a vector whose kth component is the variance for the kth component in the mixture model ("V" and "VII"), or a scalar giving the common variance for all components in the mixture model ("E" and "EII"). Sigma For the equal variance models "EII", "EEI", and "EEE". A d by d matrix giving the common covariance for all components of the mixture model. cholSigma For the equal variance model "EEE". A d by d upper triangular matrix giving the Cholesky factor of the common covariance for all components of the mixture model. sigma For all multidimensional mixture models. A d by d by G matrix array whose [,,k]th entry is the covariance matrix for the kth component of the mixture model. cholsigma For the unconstrained covariance mixture model "VVV". A d by d by G matrix array whose [,,k]th entry is the upper triangular Cholesky factor of the covariance matrix for the kth component of the mixture model. scale For diagonal models "EEI", "EVI", "VEI", "VVI" and constant-shape models "EEV" and "VEV". Either a G-vector giving the scale of the covariance (the dth root of its determinant) for each component in the mixture model, or a single numeric value if the scale is the same for each component. shape For diagonal models "EEI", "EVI", "VEI", "VVI" and constant-shape models "EEV" and "VEV". Either a G by d matrix in which the kth column is the shape of the covariance matrix (normalized to have determinant 1) for the kth component, or a d-vector giving a common shape for all components. orientation For the constant-shape models "EEV" and "VEV". Either a d by d by G array whose [,,k]th entry is the orthonormal matrix whose columns are the eigenvectors of the covariance matrix of the kth component, or a d by d orthonormal matrix if the mixture components have a common orientation. The orientation component is not needed in spherical and diagonal models, since the principal components are parallel to the coordinate axes so that the orientation matrix is the identity. In all cases, the value -1 is used as a placeholder for unknown nonzero entries. me EM algorithm starting with M-step for parameterized MVN mixture models Description Implements the EM algorithm for MVN mixture models parameterized by eigenvalue decomposition, starting with the maximization step. Usage me(data, modelName, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) Arguments data A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. modelName A character string indicating the model. The help file for mclustModelNames describes the available models. z A matrix whose [i,k]th entry is an initial estimate of the conditional probability of the ith observation belonging to the kth component of the mixture. prior Specification of a conjugate prior on the means and variances. See the help file for priorControl for further information. The default assumes no prior. control A list of control parameters for EM. The defaults are set by the call emControl(). Vinv If the model is to include a noise term, Vinv is an estimate of the reciprocal hypervolume of the data region. If set to a negative value or 0, the model will include a noise term with the reciprocal hypervolume estimated by the function hypvol. The default is not to assume a noise term in the model through the setting Vinv=NULL.
warn A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is set in mclust.options("warn"). ... Catches unused arguments in indirect or list calls via do.call. Value A list including the following components: modelName A character string identifying the model (same as the input argument). n The number of observations in the data. d The dimension of the data. G The number of mixture components. z A matrix whose [i,k]th entry is the conditional probability of the ith observa- tion belonging to the kth component of the mixture. parameters pro A vector whose kth component is the mixing proportion for the kth compo- nent of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. mean The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model. variance A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details. Vinv The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. loglik The log likelihood for the data in the mixture model. control The list of control parameters for EM used. prior The specification of a conjugate prior on the means and variances used, NULL if no prior is used. Attributes: "info" Information on the iteration. "WARNING" An appropriate warning if problems are encountered in the compu- tations. See Also meE, . . . , meVVV, em, mstep, estep, priorControl, mclustModelNames, mclustVariance, mclust.options Examples me(modelName = "VVV", data = iris[,-5], z = unmap(iris[,5])) me.weighted EM algorithm with weights starting with M-step for parameterized MVN mixture models Description Implements the EM algorithm for fitting MVN mixture models parameterized by eigenvalue de- composition, when observations have weights, starting with the maximization step. Usage me.weighted(data, modelName, z, weights = NULL, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) Arguments data A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. modelName A character string indicating the model. The help file for mclustModelNames describes the available models. z A matrix whose [i,k]th entry is an initial estimate of the conditional probability of the ith observation belonging to the kth component of the mixture. weights A vector of positive weights, where the [i]th entry is the weight for the ith observation. If any of the weights are greater than one, then they are scaled so that the maximum weight is one. prior Specification of a conjugate prior on the means and variances. See the help file for priorControl for further information. The default assumes no prior. control A list of control parameters for EM. The defaults are set by the call emControl. Vinv If the model is to include a noise term, Vinv is an estimate of the reciprocal hypervolume of the data region. If set to a negative value or 0, the model will include a noise term with the reciprocal hypervolume estimated by the function hypvol. 
The default is not to assume a noise term in the model through the setting Vinv=NULL. warn A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is set by warn using mclust.options. ... Catches unused arguments in indirect or list calls via do.call. Value A list including the following components: modelName A character string identifying the model (same as the input argument). z A matrix whose [i,k]th entry is the conditional probability of the ith observa- tion belonging to the kth component of the mixture. parameters pro A vector whose kth component is the mixing proportion for the kth compo- nent of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. mean The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model. variance A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details. Vinv The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. loglik The log likelihood for the data in the mixture model. Attributes: "info" Information on the iteration. "WARNING" An appropriate warning if problems are encountered in the compu- tations. Author(s) <NAME> See Also me, meE, . . . , meVVV, em, mstep, estep, priorControl, mclustModelNames, mclustVariance, mclust.options Examples w <- rep(1,150) w[1] <- 0 me.weighted(data = iris[,-5], modelName = "VVV", z = unmap(iris[,5]), weights = w) meE EM algorithm starting with M-step for a parameterized Gaussian mix- ture model Description Implements the EM algorithm for a parameterized Gaussian mixture model, starting with the max- imization step. Usage meE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meX(data, prior = NULL, warn = NULL, ...) meEII(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meVII(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meEEI(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meVEI(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meEVI(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meVVI(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meEEE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meVEE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meEVE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meVVE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meEEV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meVEV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meEVV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meVVV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, ...) meXII(data, prior = NULL, warn = NULL, ...) meXXI(data, prior = NULL, warn = NULL, ...) meXXX(data, prior = NULL, warn = NULL, ...) Arguments data A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. 
If a matrix or data frame, rows correspond to observations and columns correspond to variables. z A matrix whose [i,k]th entry is the conditional probability of the ith observa- tion belonging to the kth component of the mixture. prior Specification of a conjugate prior on the means and variances. The default as- sumes no prior. control A list of control parameters for EM. The defaults are set by the call emControl(). Vinv An estimate of the reciprocal hypervolume of the data region, when the model is to include a noise term. Set to a negative value or zero if a noise term is desired, but an estimate is unavailable — in that case function hypvol will be used to obtain the estimate. The default is not to assume a noise term in the model through the setting Vinv=NULL. warn A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is given by mclust.options("warn"). ... Catches unused arguments in indirect or list calls via do.call. Value A list including the following components: modelName A character string identifying the model (same as the input argument). z A matrix whose [i,k]th entry is the conditional probability of the ith observa- tion belonging to the kth component of the mixture. parameters pro A vector whose kth component is the mixing proportion for the kth compo- nent of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. mean The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model. variance A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details. Vinv The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. loglik The log likelihood for the data in the mixture model. Attributes: "info" Information on the iteration. "WARNING" An appropriate warning if problems are encountered in the compu- tations. See Also em, me, estep, mclust.options Examples meVVV(data = iris[,-5], z = unmap(iris[,5])) mstep M-step for parameterized Gaussian mixture models Description Maximization step in the EM algorithm for parameterized Gaussian mixture models. Usage mstep(data, modelName, z, prior = NULL, warn = NULL, ...) Arguments data A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. modelName A character string indicating the model. The help file for mclustModelNames describes the available models. z A matrix whose [i,k]th entry is the conditional probability of the ith observa- tion belonging to the kth component of the mixture. In analyses involving noise, this should not include the conditional probabilities for the noise component. prior Specification of a conjugate prior on the means and variances. The default as- sumes no prior. warn A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is given by mclust.options("warn"). ... Catches unused arguments in indirect or list calls via do.call. 
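Since the M-step and E-step functions are designed to alternate, here is a brief sketch (added for illustration, not from the original manual) of one hand-rolled EM cycle on the iris data, using the unmap helper to build an initial z matrix:

ms <- mstep(modelName = "VVV", data = iris[,-5], z = unmap(iris[,5]))
# E-step: update conditional membership probabilities from the new parameters
es <- estep(modelName = "VVV", data = iris[,-5], parameters = ms$parameters)
head(es$z)   # refreshed conditional probabilities
es$loglik    # log-likelihood after one M/E cycle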
mstep    M-step for parameterized Gaussian mixture models
Description
Maximization step in the EM algorithm for parameterized Gaussian mixture models.
Usage
mstep(data, modelName, z, prior = NULL, warn = NULL, ...)
Arguments
data  A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
modelName  A character string indicating the model. The help file for mclustModelNames describes the available models.
z  A matrix whose [i,k]th entry is the conditional probability of the ith observation belonging to the kth component of the mixture. In analyses involving noise, this should not include the conditional probabilities for the noise component.
prior  Specification of a conjugate prior on the means and variances. The default assumes no prior.
warn  A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is given by mclust.options("warn").
...  Catches unused arguments in indirect or list calls via do.call.
Value
A list including the following components:
modelName  A character string identifying the model (same as the input argument).
parameters
  pro  A vector whose kth component is the mixing proportion for the kth component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components.
  mean  The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
  variance  A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
Attributes:
  "info"  For those models with iterative M-steps ("VEI" and "VEV"), information on the iteration.
  "WARNING"  An appropriate warning if problems are encountered in the computations.
Note
This function computes the M-step only for MVN mixtures, so in analyses involving noise, the conditional probabilities input should exclude those for the noise component.
In contrast to me for the EM algorithm, computations in mstep are carried out unless failure due to overflow would occur. To impose stricter tolerances on a single mstep, use me with the itmax component of the control argument set to 1.
See Also
mstepE, ..., mstepVVV, emControl, me, estep, mclust.options.
Examples
mstep(modelName = "VII", data = iris[,-5], z = unmap(iris[,5]))
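Following the Note above, a single constrained EM pass can be obtained through me() by limiting the iteration count. A hedged sketch, assuming emControl()'s itmax component behaves as documented:
# One EM iteration (M-step included) via me() with itmax = 1
oneStep <- me(data = iris[,-5], modelName = "VVV", z = unmap(iris[,5]),
              control = emControl(itmax = 1))
oneStep$loglik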
mstepE    M-step for a parameterized Gaussian mixture model
Description
Maximization step in the EM algorithm for a parameterized Gaussian mixture model.
Usage
mstepE(data, z, prior = NULL, warn = NULL, ...)
mstepV(data, z, prior = NULL, warn = NULL, ...)
mstepEII(data, z, prior = NULL, warn = NULL, ...)
mstepVII(data, z, prior = NULL, warn = NULL, ...)
mstepEEI(data, z, prior = NULL, warn = NULL, ...)
mstepVEI(data, z, prior = NULL, warn = NULL, control = NULL, ...)
mstepEVI(data, z, prior = NULL, warn = NULL, ...)
mstepVVI(data, z, prior = NULL, warn = NULL, ...)
mstepEEE(data, z, prior = NULL, warn = NULL, ...)
mstepEEV(data, z, prior = NULL, warn = NULL, ...)
mstepVEV(data, z, prior = NULL, warn = NULL, control = NULL, ...)
mstepVVV(data, z, prior = NULL, warn = NULL, ...)
mstepEVE(data, z, prior = NULL, warn = NULL, control = NULL, ...)
mstepEVV(data, z, prior = NULL, warn = NULL, ...)
mstepVEE(data, z, prior = NULL, warn = NULL, control = NULL, ...)
mstepVVE(data, z, prior = NULL, warn = NULL, control = NULL, ...)
Arguments
data  A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
z  A matrix whose [i,k]th entry is the conditional probability of the ith observation belonging to the kth component of the mixture. In analyses involving noise, this should not include the conditional probabilities for the noise component.
prior  Specification of a conjugate prior on the means and variances. The default assumes no prior.
warn  A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is given by mclust.options("warn").
control  Values controlling termination for models "VEI" and "VEV" that have an iterative M-step. This should be a list with components named itmax and tol. These components can be of length 1 or 2; in the latter case, mstep will use the second value, under the assumption that the first applies to an outer iteration (as in the function me). The default uses the default values from the function emControl, which sets no limit on the number of iterations, and a relative tolerance of sqrt(.Machine$double.eps) on successive iterates.
...  Catches unused arguments in indirect or list calls via do.call.
Value
A list including the following components:
modelName  A character string identifying the model (same as the input argument).
parameters
  pro  A vector whose kth component is the mixing proportion for the kth component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components.
  mean  The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
  variance  A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
Attributes:
  "info"  For those models with iterative M-steps ("VEI" and "VEV"), information on the iteration.
  "WARNING"  An appropriate warning if problems are encountered in the computations.
Note
This function computes the M-step only for MVN mixtures, so in analyses involving noise, the conditional probabilities input should exclude those for the noise component.
In contrast to me for the EM algorithm, computations in mstep are carried out unless failure due to overflow would occur. To impose stricter tolerances on a single mstep, use me with the itmax component of the control argument set to 1.
See Also
mstep, me, estep, mclustVariance, priorControl, emControl.
Examples
mstepVII(data = iris[,-5], z = unmap(iris[,5]))
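For the models with an iterative M-step, the inner loop can be tuned on its own. The sketch below is hedged on the control description above (the second elements of itmax and tol are assumed to apply to the inner iteration, and the "info" attribute to carry its iteration record):
ms <- mstepVEI(data = iris[,-5], z = unmap(iris[,5]),
               control = emControl(itmax = c(.Machine$integer.max, 10),
                                   tol = c(1e-5, 1e-8)))
attr(ms, "info")  # iteration information for the inner M-step, per Value above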
mvn    Univariate or Multivariate Normal Fit
Description
Computes the mean, covariance, and log-likelihood from fitting a single Gaussian to given data (univariate or multivariate normal).
Usage
mvn(modelName, data, prior = NULL, warn = NULL, ...)
Arguments
modelName  A character string representing a model name. This can be either "Spherical", "Diagonal", or "Ellipsoidal", or else "X" for one-dimensional data, "XII" for a spherical Gaussian, "XXI" for a diagonal Gaussian, or "XXX" for a general ellipsoidal Gaussian.
data  A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
prior  Specification of a conjugate prior on the means and variances. The default assumes no prior.
warn  A logical value indicating whether or not a warning should be issued whenever a singularity is encountered. The default is given by mclust.options("warn").
...  Catches unused arguments in indirect or list calls via do.call.
Value
A list including the following components:
modelName  A character string identifying the model (same as the input argument).
parameters
  mean  The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
  variance  A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
loglik  The log likelihood for the data in the mixture model.
Attributes:
  "WARNING"  An appropriate warning if problems are encountered in the computations.
See Also
mvnX, mvnXII, mvnXXI, mvnXXX, mclustModelNames
Examples
n <- 1000
set.seed(0)
x <- rnorm(n, mean = -1, sd = 2)
mvn(modelName = "X", x)
mu <- c(-1, 0, 1)
set.seed(0)
x <- sweep(matrix(rnorm(n*3), n, 3) %*% (2*diag(3)),
           MARGIN = 2, STATS = mu, FUN = "+")
mvn(modelName = "XII", x)
mvn(modelName = "Spherical", x)
set.seed(0)
x <- sweep(matrix(rnorm(n*3), n, 3) %*% diag(1:3),
           MARGIN = 2, STATS = mu, FUN = "+")
mvn(modelName = "XXI", x)
mvn(modelName = "Diagonal", x)
Sigma <- matrix(c(9,-4,1,-4,9,4,1,4,9), 3, 3)
set.seed(0)
x <- sweep(matrix(rnorm(n*3), n, 3) %*% chol(Sigma),
           MARGIN = 2, STATS = mu, FUN = "+")
mvn(modelName = "XXX", x)
mvn(modelName = "Ellipsoidal", x)
mvnX    Univariate or Multivariate Normal Fit
Description
Computes the mean, covariance, and log-likelihood from fitting a single Gaussian (univariate or multivariate normal).
Usage
mvnX(data, prior = NULL, warn = NULL, ...)
mvnXII(data, prior = NULL, warn = NULL, ...)
mvnXXI(data, prior = NULL, warn = NULL, ...)
mvnXXX(data, prior = NULL, warn = NULL, ...)
Arguments
data  A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
prior  Specification of a conjugate prior on the means and variances. The default assumes no prior.
warn  A logical value indicating whether or not a warning should be issued whenever a singularity is encountered. The default is given by mclust.options("warn").
...  Catches unused arguments in indirect or list calls via do.call.
Details
mvnXII computes the best fitting Gaussian with the covariance restricted to be a multiple of the identity. mvnXXI computes the best fitting Gaussian with the covariance restricted to be diagonal. mvnXXX computes the best fitting Gaussian with ellipsoidal (unrestricted) covariance.
Value
A list including the following components:
modelName  A character string identifying the model (same as the input argument).
parameters
  mean  The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
  variance  A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
loglik  The log likelihood for the data in the mixture model.
Attributes:
  "WARNING"  An appropriate warning if problems are encountered in the computations.
See Also
mvn, mstepE
Examples
n <- 1000
set.seed(0)
x <- rnorm(n, mean = -1, sd = 2)
mvnX(x)
mu <- c(-1, 0, 1)
set.seed(0)
x <- sweep(matrix(rnorm(n*3), n, 3) %*% (2*diag(3)),
           MARGIN = 2, STATS = mu, FUN = "+")
mvnXII(x)
set.seed(0)
x <- sweep(matrix(rnorm(n*3), n, 3) %*% diag(1:3),
           MARGIN = 2, STATS = mu, FUN = "+")
mvnXXI(x)
Sigma <- matrix(c(9,-4,1,-4,9,4,1,4,9), 3, 3)
set.seed(0)
x <- sweep(matrix(rnorm(n*3), n, 3) %*% chol(Sigma),
           MARGIN = 2, STATS = mu, FUN = "+")
mvnXXX(x)
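Since these covariance restrictions are nested (multiple of the identity within diagonal within ellipsoidal), the maximized log-likelihoods are ordered. A hedged check on simulated data, not part of the original examples:
set.seed(1)
x <- matrix(rnorm(500 * 3), 500, 3)
c(XII = mvnXII(x)$loglik, XXI = mvnXXI(x)$loglik, XXX = mvnXXX(x)$loglik)
# expect loglik(XII) <= loglik(XXI) <= loglik(XXX)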
nMclustParams    Number of Estimated Parameters in Gaussian Mixture Models
Description
Gives the number of estimated parameters for parameterizations of the Gaussian mixture model that are used in MCLUST.
Usage
nMclustParams(modelName, d, G, noise = FALSE, equalPro = FALSE, ...)
Arguments
modelName  A character string indicating the model. The help file for mclustModelNames describes the available models.
d  The dimension of the data. Not used for models in which neither the shape nor the orientation varies.
G  The number of components in the Gaussian mixture model used to compute loglik.
noise  A logical variable indicating whether or not the model includes an optional Poisson noise component.
equalPro  A logical variable indicating whether or not the components in the model are assumed to be present in equal proportion.
...  Catches unused arguments in indirect or list calls via do.call.
Details
To get the total number of parameters in the model, add G*d for the means and G-1 for the mixing proportions if they are unequal.
Value
The number of variance parameters in the corresponding Gaussian mixture model.
See Also
bic, nVarParams.
Examples
mapply(nMclustParams, mclust.options("emModelNames"), d = 2, G = 3)
nVarParams    Number of Variance Parameters in Gaussian Mixture Models
Description
Gives the number of variance parameters for parameterizations of the Gaussian mixture model that are used in MCLUST.
Usage
nVarParams(modelName, d, G, ...)
Arguments
modelName  A character string indicating the model. The help file for mclustModelNames describes the available models.
d  The dimension of the data. Not used for models in which neither the shape nor the orientation varies.
G  The number of components in the Gaussian mixture model used to compute loglik.
...  Catches unused arguments in indirect or list calls via do.call.
Details
To get the total number of parameters in the model, add G*d for the means and G-1 for the mixing proportions if they are unequal.
Value
The number of variance parameters in the corresponding Gaussian mixture model.
References
<NAME> and <NAME> (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association 97:611-631.
<NAME>, <NAME>, <NAME> and <NAME> (2012). mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington.
See Also
bic, nMclustParams.
Examples
mapply(nVarParams, mclust.options("emModelNames"), d = 2, G = 3)
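As a worked check of the Details above: for the unconstrained model "VVV" each of the G components contributes d(d+1)/2 free covariance entries, so with d = 2 and G = 3 one expects 9 variance parameters, and 9 + G*d + (G-1) = 17 parameters in total. A small hedged illustration:
nVarParams("VVV", d = 2, G = 3)            # 3 * 2*(2+1)/2 = 9
nVarParams("VVV", d = 2, G = 3) + 3*2 + 2  # total parameter count: 17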
partconv    Numeric Encoding of a Partitioning
Description
Converts a vector interpreted as a classification or partitioning into a numeric vector.
Usage
partconv(x, consec=TRUE)
Arguments
x  A vector interpreted as a classification or partitioning.
consec  Logical value indicating whether or not consecutive class numbers should be used.
Value
Numeric encoding of x. When consec = TRUE, the distinct values in x are numbered by the order in which they appear. When consec = FALSE, each distinct value in x is numbered by the index corresponding to its first appearance in x.
See Also
partuniq
Examples
partconv(iris[,5])
set.seed(0)
cl <- sample(LETTERS[1:9], 25, replace=TRUE)
partconv(cl, consec=FALSE)
partconv(cl, consec=TRUE)
partuniq    Classifies Data According to Unique Observations
Description
Gives a one-to-one mapping from unique observations to rows of a data matrix.
Usage
partuniq(x)
Arguments
x  Matrix of observations.
Value
A vector of length nrow(x) with integer entries. An observation k is assigned an integer i whenever observation i is the first row of x that is identical to observation k (note that i <= k).
See Also
partconv
Examples
set.seed(0)
mat <- data.frame(lets = sample(LETTERS[1:2],9,TRUE), nums = sample(1:2,9,TRUE))
mat
ans <- partuniq(mat)
ans
partconv(ans,consec=TRUE)
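A small hedged illustration of how the two encodings in partconv differ, following the Value description above (expected outputs shown as comments):
x <- c("b", "a", "b", "c")
partconv(x, consec = TRUE)   # numbered in order of appearance: 1 2 1 3
partconv(x, consec = FALSE)  # numbered by index of first appearance: 1 2 1 4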
"diagnostic" = diagnostic plots (only available for the one-dimensional case, see densityMclust.diagnostic) col The color to be used to draw the density line in 1-dimension or contours in higher dimensions. hist.col The color to be used to fill the bars of the histogram. hist.border The color of the border around the bars of the histogram. breaks See the argument in function hist. points.pch, points.col, points.cex The character symbols, colors, and magnification to be used for plotting data points. nlevels An integer, the number of levels to be used in plotting contour densities. levels A vector of density levels at which to draw the contour lines. prob A vector of probability levels for computing HDR. Only used if type = "hdr" and supersede previous nlevels and levels arguments. gap Distance between subplots, in margin lines, for the matrix of pairwise scatter- plots. ... Additional arguments passed to surfacePlot. Details The function plot.densityMclust allows to obtain the plot of estimated density or the graph of BIC values for evaluated models. If what = "density" the produced plot dependes on the dimensionality of the data. For one-dimensional data a call with no data provided produces a plot of the estimated density over a sensible range of values. If data is provided the density is over-plotted on a histogram for the observed data. For two-dimensional data further arguments available are those accepted by the surfacePlot function. In particular, the density can be represented through "contour", "hdr", "image", and "persp" type of graph. For type = "hdr" Highest Density Regions (HDRs) are plotted for proba- bility levels prob. See hdrlevels for details. For higher dimensionality a scatterplot matrix of pairwise projected densities is drawn. Author(s) <NAME> See Also densityMclust, surfacePlot, densityMclust.diagnostic, Mclust. Examples dens <- densityMclust(faithful$waiting, plot = FALSE) summary(dens) summary(dens, parameters = TRUE) plot(dens, what = "BIC", legendArgs = list(x = "topright")) plot(dens, what = "density", data = faithful$waiting) dens <- densityMclust(faithful, plot = FALSE) summary(dens) summary(dens, parameters = TRUE) plot(dens, what = "density", data = faithful, drawlabels = FALSE, points.pch = 20) plot(dens, what = "density", type = "hdr") plot(dens, what = "density", type = "hdr", prob = seq(0.1, 0.9, by = 0.1)) plot(dens, what = "density", type = "hdr", data = faithful) plot(dens, what = "density", type = "persp") dens <- densityMclust(iris[,1:4], plot = FALSE) summary(dens, parameters = TRUE) plot(dens, what = "density", data = iris[,1:4], col = "slategrey", drawlabels = FALSE, nlevels = 7) plot(dens, what = "density", type = "hdr", data = iris[,1:4]) plot(dens, what = "density", type = "persp", col = grey(0.9)) plot.hc Dendrograms for Model-based Agglomerative Hierarchical Cluster- ing Description Display two types for dendrograms for model-based hierarchical clustering objects. Usage ## S3 method for class 'hc' plot(x, what=c("loglik","merge"), maxG=NULL, labels=FALSE, hang=0, ...) Arguments x An object of class 'hc'. what A character string indicating the type of dendrogram to be displayed. Possible options are: "loglik" Distances between dendrogram levels are based on the classification likelihood. "merge" Distances between dendrogram levels are uniform, so that levels cor- respond to the number of clusters. maxG The maximum number of clusters for the dendrogram. For what = "merge", the default is the number of clusters in the initial partition. 
plot.hc    Dendrograms for Model-based Agglomerative Hierarchical Clustering
Description
Display two types of dendrograms for model-based hierarchical clustering objects.
Usage
## S3 method for class 'hc'
plot(x, what=c("loglik","merge"), maxG=NULL, labels=FALSE, hang=0, ...)
Arguments
x  An object of class 'hc'.
what  A character string indicating the type of dendrogram to be displayed. Possible options are:
  "loglik"  Distances between dendrogram levels are based on the classification likelihood.
  "merge"  Distances between dendrogram levels are uniform, so that levels correspond to the number of clusters.
maxG  The maximum number of clusters for the dendrogram. For what = "merge", the default is the number of clusters in the initial partition. For what = "loglik", the default is the minimum of the maximum number of clusters for which the classification log-likelihood can be computed in most cases, and the maximum number of clusters for which the classification likelihood increases with increasing numbers of clusters.
labels  A logical variable indicating whether or not to display leaf (observation) labels for the dendrogram (row names of the data). These are likely to be useful only if the number of observations is fairly small, since otherwise the labels will be too crowded to read. The default is not to display the leaf labels.
hang  For hclust objects, this argument is the fraction of the plot height by which labels should hang below the rest of the plot. A negative value will cause the labels to hang down from 0. Because model-based hierarchical clustering does not share all of the properties of hclust, the hang argument won't work in many instances.
...  Additional plotting arguments.
Details
The plotting input does not share all of the properties of hclust objects, hence not all plotting arguments associated with hclust can be expected to work here.
Value
A dendrogram is drawn, with distances based on either the classification likelihood or the merge level (number of clusters).
Note
If modelName = "E" (univariate with equal variances) or modelName = "EII" (multivariate with equal spherical covariances), then the underlying model is the same as for Ward's method for hierarchical clustering.
References
<NAME> and <NAME> (1993). Model-based Gaussian and non-Gaussian Clustering. Biometrics 49:803-821.
C. Fraley (1998). Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20:270-281.
<NAME> and <NAME> (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association 97:611-631.
See Also
hc
Examples
data(EuroUnemployment)
hcTree <- hc(modelName = "VVV", data = EuroUnemployment)
plot(hcTree, what = "loglik")
plot(hcTree, what = "loglik", labels = TRUE)
plot(hcTree, what = "loglik", maxG = 5, labels = TRUE)
plot(hcTree, what = "merge")
plot(hcTree, what = "merge", labels = TRUE)
plot(hcTree, what = "merge", labels = TRUE, hang = 0.1)
plot(hcTree, what = "merge", labels = TRUE, hang = -1)
plot(hcTree, what = "merge", labels = TRUE, maxG = 5)
plot.Mclust    Plotting method for Mclust model-based clustering
Description
Plots for model-based clustering results, such as BIC, classification, uncertainty and density.
Usage
## S3 method for class 'Mclust'
plot(x, what = c("BIC", "classification", "uncertainty", "density"),
     dimens = NULL, xlab = NULL, ylab = NULL,
     addEllipses = TRUE, main = FALSE, ...)
Arguments
x  Output from Mclust.
what  A string specifying the type of graph requested. Available choices are:
  "BIC" = a plot of BIC values used for choosing the number of clusters.
  "classification" = a plot showing the clustering. For data in more than two dimensions a pairs plot is produced, followed by a coordinate projection plot using the specified dimens. Ellipses corresponding to covariances of mixture components are also drawn if addEllipses = TRUE.
  "uncertainty" = a plot of classification uncertainty. For data in more than two dimensions a coordinate projection plot is drawn using the specified dimens.
  "density" = a plot of estimated density. For data in more than two dimensions a matrix of contours for coordinate projection plots is drawn using the specified dimens.
  If not specified, in interactive sessions a menu of choices is proposed.
dimens  A vector of integers specifying the dimensions of the coordinate projections in case of "classification", "uncertainty", or "density" plots.
xlab, ylab  Optional labels for the x-axis and the y-axis.
addEllipses  A logical indicating whether or not to add ellipses with axes corresponding to the within-cluster covariances in case of "classification" or "uncertainty" plots.
main  A logical or NULL indicating whether or not to add a title to the plot identifying the type of plot drawn.
...  Other graphics parameters.
Details
For more flexibility in plotting, use mclust1Dplot, mclust2Dplot, surfacePlot, coordProj, or randProj.
See Also
Mclust, plot.mclustBIC, plot.mclustICL, mclust1Dplot, mclust2Dplot, surfacePlot, coordProj, randProj.
Examples
precipMclust <- Mclust(precip)
plot(precipMclust)
faithfulMclust <- Mclust(faithful)
plot(faithfulMclust)
irisMclust <- Mclust(iris[,-5])
plot(irisMclust)
plot.mclustBIC    BIC Plot for Model-Based Clustering
Description
Plots the BIC values returned by the mclustBIC function.
Usage
## S3 method for class 'mclustBIC'
plot(x, G = NULL, modelNames = NULL, symbols = NULL, colors = NULL,
     xlab = NULL, ylab = "BIC",
     legendArgs = list(x = "bottomright", ncol = 2, cex = 1, inset = 0.01),
     ...)
Arguments
x  Output from mclustBIC.
G  One or more numbers of components corresponding to models fit in x. The default is to plot the BIC for all of the numbers of components fit.
modelNames  One or more model names corresponding to models fit in x. The default is to plot the BIC for all of the models fit.
symbols  Either an integer or character vector assigning a plotting symbol to each unique class in classification. Elements in symbols correspond to classes in order of appearance in the sequence of observations (the order used by the function unique). The default is given by mclust.options("classPlotSymbols").
colors  Either an integer or character vector assigning a color to each unique class in classification. Elements in colors correspond to classes in order of appearance in the sequence of observations (the order used by the function unique). The default is given by mclust.options("classPlotColors").
xlab  Optional label for the horizontal axis of the BIC plot.
ylab  Label for the vertical axis of the BIC plot.
legendArgs  Arguments to pass to the legend function. Set to NULL for no legend.
...  Other graphics parameters.
Value
A plot of the BIC values.
See Also
mclustBIC
Examples
plot(mclustBIC(precip), legendArgs = list(x = "bottomleft"))
plot(mclustBIC(faithful))
plot(mclustBIC(iris[,-5]))
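A hedged sketch (not from the original examples) showing how the G and modelNames arguments restrict which BIC curves are drawn:
bic <- mclustBIC(iris[,-5])
# only components 1-5 and three of the fitted parameterizations
plot(bic, G = 1:5, modelNames = c("EII", "EEE", "VVV"),
     legendArgs = list(x = "bottomleft"))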
plot.MclustBootstrap    Plot of bootstrap distributions for mixture model parameters
Description
Plots the bootstrap distribution of parameters as returned by the MclustBootstrap function.
Usage
## S3 method for class 'MclustBootstrap'
plot(x, what = c("pro", "mean", "var"), show.parest = TRUE,
     show.confint = TRUE, hist.col = "grey", hist.border = "lightgrey",
     breaks = "Sturges", col = "forestgreen", lwd = 2, lty = 3,
     xlab = NULL, xlim = NULL, ylim = NULL, ...)
Arguments
x  Object returned by MclustBootstrap.
what  Character string specifying if mixing proportions ("pro"), component means ("mean") or component variances ("var") should be drawn.
show.parest  A logical specifying if the parameter estimate should be drawn as a vertical line.
show.confint  A logical specifying if the resampling-based confidence interval should be drawn at the bottom of the graph. The confidence level can be provided as the further argument conf.level; see summary.MclustBootstrap.
hist.col  The color to be used to fill the bars of the histograms.
hist.border  The color of the border around the bars of the histograms.
breaks  See the argument in function hist.
col, lwd, lty  The color, line width and line type to be used to represent the estimated parameters and confidence intervals.
xlab  Optional label for the horizontal axis.
xlim, ylim  Two-value vectors giving the axis ranges for the horizontal and vertical axis, respectively.
...  Other graphics parameters.
Value
A plot for each variable/component of the selected parameters.
See Also
MclustBootstrap
Examples
data(diabetes)
X <- diabetes[,-1]
modClust <- Mclust(X, G = 3, modelNames = "VVV")
bootClust <- MclustBootstrap(modClust, nboot = 99)
par(mfrow = c(1,3), mar = c(4,2,2,0.5))
plot(bootClust, what = "pro")
par(mfrow = c(3,3), mar = c(4,2,2,0.5))
plot(bootClust, what = "mean")
plot.MclustDA    Plotting method for MclustDA discriminant analysis
Description
Plots for model-based mixture discriminant analysis results, such as scatterplots of training and test data, classification of training and test data, and errors.
Usage
## S3 method for class 'MclustDA'
plot(x, what = c("scatterplot", "classification", "train&test", "error"),
     newdata, newclass, dimens = NULL, symbols, colors, main = NULL, ...)
Arguments
x  An object of class 'MclustDA' resulting from a call to MclustDA.
what  A string specifying the type of graph requested. Available choices are:
  "scatterplot" = a plot of training data with points marked based on the known classification. Ellipses corresponding to covariances of mixture components are also drawn.
  "classification" = a plot of data with points marked based on the predicted classification; if newdata is provided then the test set is shown, otherwise the training set.
  "train&test" = a plot of training and test data with points marked according to the type of set.
  "error" = a plot of the training set (or the test set if newdata and newclass are provided) with misclassified points marked.
  If not specified, in interactive sessions a menu of choices is proposed.
newdata  A data frame or matrix for test data.
newclass  A vector giving the class labels for the observations in the test data (if known).
dimens  A vector of integers giving the dimensions of the desired coordinate projections for multivariate data. The default is to take all the available dimensions for plotting.
symbols  Either an integer or character vector assigning a plotting symbol to each unique class. Elements in symbols correspond to classes in order of appearance in the sequence of observations (the order used by the function factor). The default is given by mclust.options("classPlotSymbols").
colors  Either an integer or character vector assigning a color to each unique class in classification. Elements in colors correspond to classes in order of appearance in the sequence of observations (the order used by the function factor). The default is given by mclust.options("classPlotColors").
main  A logical, a character string, or NULL (default) for the main title. If NULL or FALSE no title is added to a plot. If TRUE a default title is added identifying the type of plot drawn. If a character string is provided, this is used for the title.
...  further arguments passed to or from other methods.
Details
For more flexibility in plotting, use mclust1Dplot, mclust2Dplot, surfacePlot, coordProj, or randProj.
Author(s)
<NAME>
See Also
MclustDA, surfacePlot, coordProj, randProj
Examples
odd <- seq(from = 1, to = nrow(iris), by = 2)
even <- odd + 1
X.train <- iris[odd,-5]
Class.train <- iris[odd,5]
X.test <- iris[even,-5]
Class.test <- iris[even,5]
# common EEE covariance structure (which is essentially equivalent to
# linear discriminant analysis)
irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA", modelNames = "EEE")
summary(irisMclustDA, parameters = TRUE)
summary(irisMclustDA, newdata = X.test, newclass = Class.test)
# common covariance structure selected by BIC
irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA")
summary(irisMclustDA, parameters = TRUE)
summary(irisMclustDA, newdata = X.test, newclass = Class.test)
# general covariance structure selected by BIC
irisMclustDA <- MclustDA(X.train, Class.train)
summary(irisMclustDA, parameters = TRUE)
summary(irisMclustDA, newdata = X.test, newclass = Class.test)
plot(irisMclustDA)
plot(irisMclustDA, dimens = 3:4)
plot(irisMclustDA, dimens = 4)
plot(irisMclustDA, what = "classification")
plot(irisMclustDA, what = "classification", newdata = X.test)
plot(irisMclustDA, what = "classification", dimens = 3:4)
plot(irisMclustDA, what = "classification", newdata = X.test, dimens = 3:4)
plot(irisMclustDA, what = "classification", dimens = 4)
plot(irisMclustDA, what = "classification", dimens = 4, newdata = X.test)
plot(irisMclustDA, what = "train&test", newdata = X.test)
plot(irisMclustDA, what = "train&test", newdata = X.test, dimens = 3:4)
plot(irisMclustDA, what = "train&test", newdata = X.test, dimens = 4)
plot(irisMclustDA, what = "error")
plot(irisMclustDA, what = "error", dimens = 3:4)
plot(irisMclustDA, what = "error", dimens = 4)
plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test)
plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test, dimens = 3:4)
plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test, dimens = 4)
# simulated 1D data
n <- 250
set.seed(1)
triModal <- c(rnorm(n,-5), rnorm(n,0), rnorm(n,5))
triClass <- c(rep(1,n), rep(2,n), rep(3,n))
odd <- seq(from = 1, to = length(triModal), by = 2)
even <- odd + 1
triMclustDA <- MclustDA(triModal[odd], triClass[odd])
summary(triMclustDA, parameters = TRUE)
summary(triMclustDA, newdata = triModal[even], newclass = triClass[even])
plot(triMclustDA)
plot(triMclustDA, what = "classification")
plot(triMclustDA, what = "classification", newdata = triModal[even])
plot(triMclustDA, what = "train&test", newdata = triModal[even])
plot(triMclustDA, what = "error")
plot(triMclustDA, what = "error", newdata = triModal[even], newclass = triClass[even])
# simulated 2D cross data
data(cross)
odd <- seq(from = 1, to = nrow(cross), by = 2)
even <- odd + 1
crossMclustDA <- MclustDA(cross[odd,-1], cross[odd,1])
summary(crossMclustDA, parameters = TRUE)
summary(crossMclustDA, newdata = cross[even,-1], newclass = cross[even,1])
plot(crossMclustDA)
plot(crossMclustDA, what = "classification")
plot(crossMclustDA, what = "classification", newdata = cross[even,-1])
plot(crossMclustDA, what = "train&test", newdata = cross[even,-1])
plot(crossMclustDA, what = "error")
plot(crossMclustDA, what = "error", newdata = cross[even,-1], newclass = cross[even,1])
plot.MclustDR    Plotting method for dimension reduction for model-based clustering and classification
Description
Graphs data projected onto the estimated subspace for model-based clustering and classification.
Usage
## S3 method for class 'MclustDR'
plot(x, dimens, what = c("scatterplot", "pairs", "contour", "classification",
                         "boundaries", "density", "evalues"),
     symbols, colors, col.contour = gray(0.7), col.sep = grey(0.4),
     ngrid = 200, nlevels = 5, asp = NULL, ...)
Arguments
x  An object of class 'MclustDR' resulting from a call to MclustDR.
dimens  A vector of integers giving the dimensions of the desired coordinate projections for multivariate data.
what  The type of graph requested:
  "scatterplot" = a two-dimensional plot of data projected onto the first two directions specified by dimens and with data points marked according to the corresponding mixture component. By default, the first two directions are selected for plotting.
  "pairs" = a scatterplot matrix of data projected onto the estimated subspace and with data points marked according to the corresponding mixture component. By default, all the available directions are used, unless they have been specified by dimens.
  "contour" = a two-dimensional plot of data projected onto the first two directions specified by dimens (by default, the first two directions) with density contours for classes or clusters and data points marked according to the corresponding mixture component.
  "classification" = a two-dimensional plot of data projected onto the first two directions specified by dimens (by default, the first two directions) with classification regions and data points marked according to the corresponding mixture component.
  "boundaries" = a two-dimensional plot of data projected onto the first two directions specified by dimens (by default, the first two directions) with uncertainty boundaries and data points marked according to the corresponding mixture component. The uncertainty is shown using a greyscale with darker regions indicating higher uncertainty.
  "density" = a one-dimensional plot of estimated density for the first direction specified by dimens (by default, the first one). A set of box-plots for each estimated cluster or known class are also shown at the bottom of the graph.
symbols  Either an integer or character vector assigning a plotting symbol to each unique mixture component. Elements in symbols correspond to classes in order of appearance in the sequence of observations (the order used by the function factor). The default is given by mclust.options("classPlotSymbols").
colors  Either an integer or character vector assigning a color to each unique cluster or known class. Elements in colors correspond to classes in order of appearance in the sequence of observations (the order used by the function factor). The default is given by mclust.options("classPlotColors").
col.contour  The color of contours in case what = "contour".
col.sep  The color of classification boundaries in case what = "classification".
ngrid  An integer specifying the number of grid points to use in evaluating the classification regions.
nlevels  The number of levels to use in case what = "contour".
asp  For scatterplots the y/x aspect ratio, see plot.window.
...  further arguments passed to or from other methods.
Author(s)
<NAME>
References
<NAME>. (2010) Dimension reduction for model-based clustering. Statistics and Computing, 20(4), pp. 471-484.
See Also
MclustDR
Examples
mod <- Mclust(iris[,1:4], G = 3)
dr <- MclustDR(mod, lambda = 0.5)
plot(dr, what = "evalues")
plot(dr, what = "pairs")
plot(dr, what = "scatterplot", dimens = c(1,3))
plot(dr, what = "contour")
plot(dr, what = "classification", ngrid = 200)
plot(dr, what = "boundaries", ngrid = 200)
plot(dr, what = "density")
plot(dr, what = "density", dimens = 2)
data(banknote)
da <- MclustDA(banknote[,2:7], banknote$Status, G = 1:3)
dr <- MclustDR(da)
plot(dr, what = "evalues")
plot(dr, what = "pairs")
plot(dr, what = "contour")
plot(dr, what = "classification", ngrid = 200)
plot(dr, what = "boundaries", ngrid = 200)
plot(dr, what = "density")
plot(dr, what = "density", dimens = 2)
plot.mclustICL    ICL Plot for Model-Based Clustering
Description
Plots the ICL values returned by the mclustICL function.
Usage
## S3 method for class 'mclustICL'
plot(x, ylab = "ICL", ...)
Arguments
x  Output from mclustICL.
ylab  Label for the vertical axis of the plot.
...  Further arguments passed to the plot.mclustBIC function.
Value
A plot of the ICL values.
See Also
mclustICL
Examples
data(faithful)
faithful.ICL = mclustICL(faithful)
plot(faithful.ICL)
plot.MclustSSC    Plotting method for MclustSSC semi-supervised classification
Description
Plots for semi-supervised classification based on Gaussian finite mixture models.
Usage
## S3 method for class 'MclustSSC'
plot(x, what = c("BIC", "classification", "uncertainty"), ...)
Arguments
x  An object of class 'MclustSSC' resulting from a call to MclustSSC.
what  A string specifying the type of graph requested. Available choices are:
  "BIC" = a plot of BIC values used for model selection, i.e. for choosing the model class covariances.
  "classification" = a plot of data with points marked based on the known and the predicted classification.
  "uncertainty" = a plot of classification uncertainty.
  If not specified, in interactive sessions a menu of choices is proposed.
...  further arguments passed to or from other methods. See plot.Mclust.
Author(s)
<NAME>
See Also
MclustSSC
Examples
X <- iris[,1:4]
class <- iris$Species
# randomly remove class labels
set.seed(123)
class[sample(1:length(class), size = 120)] <- NA
table(class, useNA = "ifany")
clPairs(X, ifelse(is.na(class), 0, class),
        symbols = c(0, 16, 17, 18), colors = c("grey", 4, 2, 3),
        main = "Partially classified data")
# Fit semi-supervised classification model
mod_SSC <- MclustSSC(X, class)
summary(mod_SSC, parameters = TRUE)
pred_SSC <- predict(mod_SSC)
table(Predicted = pred_SSC$classification, Actual = class, useNA = "ifany")
plot(mod_SSC, what = "BIC")
plot(mod_SSC, what = "classification")
plot(mod_SSC, what = "uncertainty")
predict.densityMclust    Density estimate of multivariate observations by Gaussian finite mixture modeling
Description
Compute density estimates for multivariate observations based on Gaussian finite mixture models estimated by densityMclust.
Usage
## S3 method for class 'densityMclust'
predict(object, newdata, what = c("dens", "cdens", "z"), logarithm = FALSE, ...)
Arguments
object  an object of class 'densityMclust' resulting from a call to densityMclust.
newdata  a vector, a data frame or matrix giving the data. If missing the density is computed for the input data obtained from the call to densityMclust.
what  a character string specifying what to retrieve: "dens" returns a vector of values for the mixture density; "cdens" returns a matrix of component densities for each mixture component (along the columns); "z" returns a matrix of conditional probabilities of each data point belonging to a mixture component.
logarithm  A logical value indicating whether or not the logarithm of the density or component densities should be returned.
...  further arguments passed to or from other methods.
Value
Returns a vector or a matrix of densities evaluated at newdata depending on the argument what (see above).
Author(s)
<NAME>
See Also
Mclust.
Examples
x <- faithful$waiting
dens <- densityMclust(x, plot = FALSE)
x0 <- seq(50, 100, by = 10)
d0 <- predict(dens, x0)
plot(dens, what = "density")
points(x0, d0, pch = 20)
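The three what options are linked: the mixture density is the sum of the component densities weighted by the mixing proportions. A hedged check (not from the original examples), assuming "cdens" returns unweighted component densities:
dens <- densityMclust(faithful$waiting, plot = FALSE)
d  <- predict(dens, faithful$waiting, what = "dens")
cd <- predict(dens, faithful$waiting, what = "cdens")
all.equal(as.vector(cd %*% dens$parameters$pro), as.vector(d))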
predict.Mclust    Cluster multivariate observations by Gaussian finite mixture modeling
Description
Cluster prediction for multivariate observations based on Gaussian finite mixture models estimated by Mclust.
Usage
## S3 method for class 'Mclust'
predict(object, newdata, ...)
Arguments
object  an object of class 'Mclust' resulting from a call to Mclust.
newdata  a data frame or matrix giving the data. If missing the clustering data obtained from the call to Mclust are classified.
...  further arguments passed to or from other methods.
Value
Returns a list with the following components:
classification  a factor of predicted cluster labels for newdata.
z  a matrix whose [i,k]th entry is the probability that observation i in newdata belongs to the kth cluster.
Author(s)
<NAME>
See Also
Mclust.
Examples
model <- Mclust(faithful)
# predict cluster for the observed data
pred <- predict(model)
str(pred)
pred$z              # equal to model$z
pred$classification # equal to
plot(faithful, col = pred$classification, pch = pred$classification)
# predict cluster over a grid
grid <- apply(faithful, 2, function(x) seq(min(x), max(x), length = 50))
grid <- expand.grid(eruptions = grid[,1], waiting = grid[,2])
pred <- predict(model, grid)
plot(grid, col = mclust.options("classPlotColors")[pred$classification],
     pch = 15, cex = 0.5)
points(faithful, pch = model$classification)
predict.MclustDA    Classify multivariate observations by Gaussian finite mixture modeling
Description
Classify multivariate observations based on Gaussian finite mixture models estimated by MclustDA.
Usage
## S3 method for class 'MclustDA'
predict(object, newdata, prop = object$prop, ...)
Arguments
object  an object of class 'MclustDA' resulting from a call to MclustDA.
newdata  a data frame or matrix giving the data. If missing the training data obtained from the call to MclustDA are classified.
prop  the class proportions or prior class probabilities for each class; by default, this is set to the class proportions in the training data.
...  further arguments passed to or from other methods.
Value
Returns a list with the following components:
classification  a factor of predicted class labels for newdata.
z  a matrix whose [i,k]th entry is the probability that observation i in newdata belongs to the kth class.
Author(s)
<NAME>
See Also
MclustDA.
Examples
odd <- seq(from = 1, to = nrow(iris), by = 2)
even <- odd + 1
X.train <- iris[odd,-5]
Class.train <- iris[odd,5]
X.test <- iris[even,-5]
Class.test <- iris[even,5]
irisMclustDA <- MclustDA(X.train, Class.train)
predTrain <- predict(irisMclustDA)
predTrain
predTest <- predict(irisMclustDA, X.test)
predTest
predict.MclustDR    Classify multivariate observations on a dimension reduced subspace by Gaussian finite mixture modeling
Description
Classify multivariate observations on a dimension reduced subspace estimated from a Gaussian finite mixture model.
Usage
## S3 method for class 'MclustDR'
predict(object, dim = 1:object$numdir, newdata, eval.points, ...)
Arguments
object  an object of class 'MclustDR' resulting from a call to MclustDR.
dim  the dimensions of the reduced subspace used for prediction.
newdata  a data frame or matrix giving the data. If missing the data obtained from the call to MclustDR are used.
eval.points  a data frame or matrix giving the data projected on the reduced subspace. If provided, newdata is not used.
...  further arguments passed to or from other methods.
Value
Returns a list with the following components:
dir  a matrix containing the data projected onto the dim dimensions of the reduced subspace.
density  densities from the mixture model for each data point.
z  a matrix whose [i,k]th entry is the probability that observation i in newdata belongs to the kth class.
uncertainty  The uncertainty associated with the classification.
classification  A vector of values giving the MAP classification.
Author(s)
<NAME>
References
Scrucca, L. (2010) Dimension reduction for model-based clustering. Statistics and Computing, 20(4), pp. 471-484.
See Also
MclustDR.
Examples
mod = Mclust(iris[,1:4])
dr = MclustDR(mod)
pred = predict(dr)
str(pred)
data(banknote)
mod = MclustDA(banknote[,2:7], banknote$Status)
dr = MclustDR(mod)
pred = predict(dr)
str(pred)
predict.MclustSSC    Classification of multivariate observations by semi-supervised Gaussian finite mixtures
Description
Classify multivariate observations based on Gaussian finite mixture models estimated by MclustSSC.
Usage
## S3 method for class 'MclustSSC'
predict(object, newdata, ...)
Arguments
object  an object of class 'MclustSSC' resulting from a call to MclustSSC.
newdata  a data frame or matrix giving the data. If missing the training data obtained from the call to MclustSSC are classified.
...  further arguments passed to or from other methods.
Value
Returns a list with the following components:
classification  a factor of predicted class labels for newdata.
z  a matrix whose [i,k]th entry is the probability that observation i in newdata belongs to the kth class.
Author(s)
<NAME>
See Also
MclustSSC.
Examples
X <- iris[,1:4]
class <- iris$Species
# randomly remove class labels
set.seed(123)
class[sample(1:length(class), size = 120)] <- NA
table(class, useNA = "ifany")
clPairs(X, ifelse(is.na(class), 0, class),
        symbols = c(0, 16, 17, 18), colors = c("grey", 4, 2, 3),
        main = "Partially classified data")
# Fit semi-supervised classification model
mod_SSC <- MclustSSC(X, class)
pred_SSC <- predict(mod_SSC)
table(Predicted = pred_SSC$classification, Actual = class, useNA = "ifany")
X_new = data.frame(Sepal.Length = c(5, 8), Sepal.Width = c(3.1, 4),
                   Petal.Length = c(2, 5), Petal.Width = c(0.5, 2))
predict(mod_SSC, newdata = X_new)
priorControl    Conjugate Prior for Gaussian Mixtures
Description
Specify a conjugate prior for Gaussian mixtures.
Usage
priorControl(functionName = "defaultPrior", ...)
Arguments
functionName  The name of the function specifying the conjugate prior. By default the function defaultPrior is used, and this can also be used as a template for alternative specifications.
...  Optional named arguments to the function specified in functionName, together with their values.
Details
The function priorControl is used to specify a conjugate prior for EM within MCLUST.
Note that, as described in defaultPrior, in the multivariate case only 10 out of 14 models may be used in conjunction with a prior, i.e. those available in MCLUST up to version 4.4.
Value
A list with the function name as the first component. The remaining components (if any) consist of a list of arguments to the function with assigned values.
References
<NAME> and <NAME> (2007). Bayesian regularization for normal mixture estimation and model-based clustering. Journal of Classification 24:155-181.
See Also
mclustBIC, me, mstep, defaultPrior
Examples
# default prior
irisBIC <- mclustBIC(iris[,-5], prior = priorControl())
summary(irisBIC, iris[,-5])
# no prior on the mean; default prior on variance
irisBIC <- mclustBIC(iris[,-5], prior = priorControl(shrinkage = 0))
summary(irisBIC, iris[,-5])
randomOrthogonalMatrix    Random orthogonal matrix
Description
Generate a random orthogonal basis matrix of dimension (nrow x ncol) using the method in Heiberger (1978).
Usage
randomOrthogonalMatrix(nrow, ncol, n = nrow, d = ncol, seed = NULL)
Arguments
nrow  the number of rows of the resulting orthogonal matrix.
ncol  the number of columns of the resulting orthogonal matrix.
n  deprecated. See nrow above.
d  deprecated. See ncol above.
seed  an optional integer argument to use in set.seed() for reproducibility. By default the current seed will be used. Reproducibility can also be achieved by calling set.seed() before calling this function.
Details
The use of arguments n and d is deprecated and they will be removed in the future.
Value
An orthogonal matrix of dimension nrow x ncol such that each column is orthogonal to the others and has unit length. Because of the latter, it is also called orthonormal.
References
<NAME>. (1978) Generation of random orthogonal matrices. Journal of the Royal Statistical Society. Series C (Applied Statistics), 27(2), 199-206.
See Also
coordProj
Examples
B <- randomOrthogonalMatrix(10,3)
zapsmall(crossprod(B))
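A hedged follow-up to the example above, checking the two orthonormality properties named under Value directly:
B <- randomOrthogonalMatrix(10, 3, seed = 1)
zapsmall(crossprod(B))  # t(B) %*% B ~ 3 x 3 identity: columns are orthogonal
colSums(B^2)            # ~ rep(1, 3): columns have unit length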
randProj    Random projections of multidimensional data modeled by an MVN mixture
Description
Plots random projections given multidimensional data and parameters of an MVN mixture model for the data.
Usage
randProj(data, seeds = NULL, parameters = NULL, z = NULL,
         classification = NULL, truth = NULL, uncertainty = NULL,
         what = c("classification", "error", "uncertainty"),
         quantiles = c(0.75, 0.95), addEllipses = TRUE,
         fillEllipses = mclust.options("fillEllipses"),
         symbols = NULL, colors = NULL, scale = FALSE,
         xlim = NULL, ylim = NULL, xlab = NULL, ylab = NULL,
         cex = 1, PCH = ".", main = FALSE, ...)
Arguments
data  A numeric matrix or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
seeds  An integer value or a vector of integer values to be used as seeds for random number generation. If multiple values are provided, then each seed should produce a different projection. By default, a single seed is drawn randomly, so each call of randProj() produces different projections.
parameters  A named list giving the parameters of an MCLUST model, used to produce superimposed ellipses on the plot. The relevant components are as follows:
  mean  The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
  variance  A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
z  A matrix in which the [i,k]th entry gives the probability of observation i belonging to the kth class. Used to compute classification and uncertainty if those arguments aren't available.
classification  A numeric or character vector representing a classification of observations (rows) of data. If present argument z will be ignored.
truth  A numeric or character vector giving a known classification of each data point. If classification or z is also present, this is used for displaying classification errors.
uncertainty  A numeric vector of values in (0,1) giving the uncertainty of each data point. If present argument z will be ignored.
what  Choose from one of the following three options: "classification" (default), "error", "uncertainty".
quantiles  A vector of length 2 giving quantiles used in plotting uncertainty. The smallest symbols correspond to the smallest quantile (lowest uncertainty), medium-sized (open) symbols to points falling between the given quantiles, and large (filled) symbols to those in the largest quantile (highest uncertainty). The default is (0.75, 0.95).
addEllipses  A logical indicating whether or not to add ellipses with axes corresponding to the within-cluster covariances in case of "classification" or "uncertainty" plots.
fillEllipses  A logical specifying whether or not to fill ellipses with transparent colors when addEllipses = TRUE.
symbols  Either an integer or character vector assigning a plotting symbol to each unique class in classification. Elements in symbols correspond to classes in order of appearance in the sequence of observations (the order used by the function unique). The default is given by mclust.options("classPlotSymbols").
colors  Either an integer or character vector assigning a color to each unique class in classification. Elements in colors correspond to classes in order of appearance in the sequence of observations (the order used by the function unique). The default is given by mclust.options("classPlotColors").
scale  A logical variable indicating whether or not the two chosen dimensions should be plotted on the same scale, and thus preserve the shape of the distribution. Default: scale=FALSE.
xlim, ylim  Optional arguments specifying bounds for the abscissa and ordinate of the plot. This may be useful when comparing plots.
xlab, ylab  Optional arguments specifying the labels for, respectively, the horizontal and vertical axis.
cex  A numerical value specifying the size of the plotting symbols. The default value is 1.
PCH  An argument specifying the symbol to be used when a classification has not been specified for the data. The default value is a small dot ".".
main  A logical variable or NULL indicating whether or not to add a title to the plot identifying the dimensions used.
...  Other graphics parameters.
Value
A plot showing a random two-dimensional projection of the data, together with the location of the mixture components, classification, uncertainty, and/or classification errors.
The function also returns an invisible list with components basis, the randomly generated basis of the projection subspace; data, a matrix of projected data; and mu and sigma, the component parameters transformed to the projection subspace.
See Also
clPairs, coordProj, mclust2Dplot, mclust.options
Examples
est <- meVVV(iris[,-5], unmap(iris[,5]))
par(pty = "s", mfrow = c(1,1))
randProj(iris[,-5], seeds=1:3, parameters = est$parameters, z = est$z,
         what = "classification", main = TRUE)
randProj(iris[,-5], seeds=1:3, parameters = est$parameters, z = est$z,
         truth = iris[,5], what = "error", main = TRUE)
randProj(iris[,-5], seeds=1:3, parameters = est$parameters, z = est$z,
         what = "uncertainty", main = TRUE)
sigma2decomp    Convert mixture component covariances to decomposition form
Description
Converts a set of covariance matrices from representation as a 3-D array to a parameterization by eigenvalue decomposition.
Usage
sigma2decomp(sigma, G = NULL, tol = sqrt(.Machine$double.eps), ...)
Arguments
sigma  Either a 3-D array whose [,,k]th component is the covariance matrix for the kth component in an MVN mixture model, or a single covariance matrix in the case that all components have the same covariance.
G  The number of components in the mixture. When sigma is a 3-D array, the number of components can be inferred from its dimensions.
tol  Tolerance for determining whether or not the covariances have equal volume, shape, and/or orientation. The default is the square root of the relative machine precision, sqrt(.Machine$double.eps), which is about 1.e-8.
...  Catches unused arguments from an indirect or list call via do.call.
Value
The covariance matrices for the mixture components in decomposition form, including the following components:
modelName  A character string indicating the inferred model. The help file for mclustModelNames describes the available models.
d  The dimension of the data.
G  The number of components in the mixture model.
scale  Either a G-vector giving the scale of the covariance (the dth root of its determinant) for each component in the mixture model, or a single numeric value if the scale is the same for each component.
shape  Either a G by d matrix in which the kth column is the shape of the covariance matrix (normalized to have determinant 1) for the kth component, or a d-vector giving a common shape for all components.
orientation  Either a d by d by G array whose [,,k]th entry is the orthonormal matrix whose columns are the eigenvectors of the covariance matrix of the kth component, or a d by d orthonormal matrix if the mixture components have a common orientation. The orientation component of decomp can be omitted in spherical and diagonal models, for which the principal components are parallel to the coordinate axes so that the orientation matrix is the identity.
See Also
decomp2sigma
Examples
meEst <- meEEE(iris[,-5], unmap(iris[,5]))
names(meEst$parameters$variance)
meEst$parameters$variance$Sigma
sigma2decomp(meEst$parameters$variance$Sigma, G = length(unique(iris[,5])))
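The decomposition can be inverted with decomp2sigma(); a hedged round-trip sketch, assuming the components listed under Value match decomp2sigma()'s arguments:
meEst <- meEEE(iris[,-5], unmap(iris[,5]))
dec <- sigma2decomp(meEst$parameters$variance$Sigma, G = 3)
# rebuild the covariance array from scale, shape, and orientation
sig <- decomp2sigma(d = dec$d, G = dec$G, scale = dec$scale,
                    shape = dec$shape, orientation = dec$orientation)
all.equal(as.vector(sig[,,1]), as.vector(meEst$parameters$variance$Sigma))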
mean The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
variance A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
n An integer specifying the number of data points to be simulated.
seed An optional integer argument to set.seed for reproducible random class assignment. By default the current seed will be used. Reproducibility can also be achieved by calling set.seed before calling sim.
... Catches unused arguments in indirect or list calls via do.call.

Details
This function can be used with an indirect or list call using do.call, allowing the output of e.g. mstep, em, me, Mclust to be passed directly without the need to specify individual parameters as arguments.

Value
A matrix in which the first column is the classification and the remaining columns are the n observations simulated from the specified MVN mixture model.
Attributes: "modelName" A character string indicating the variance model used for the simulation.

See Also
simE, ..., simVVV, Mclust, mstep, do.call

Examples
irisBIC <- mclustBIC(iris[,-5])
irisModel <- mclustModel(iris[,-5], irisBIC)
names(irisModel)
irisSim <- sim(modelName = irisModel$modelName, parameters = irisModel$parameters, n = nrow(iris))
do.call("sim", irisModel) # alternative call
par(pty = "s", mfrow = c(1,2))
dimnames(irisSim) <- list(NULL, c("dummy", (dimnames(iris)[[2]])[-5]))
dimens <- c(1,2)
lim1 <- apply(iris[,dimens], 2, range)
lim2 <- apply(irisSim[,dimens+1], 2, range)
lims <- apply(rbind(lim1, lim2), 2, range)
xlim <- lims[,1]
ylim <- lims[,2]
coordProj(iris[,-5], parameters = irisModel$parameters, classification = map(irisModel$z),
          dimens = dimens, xlim = xlim, ylim = ylim)
coordProj(iris[,-5], parameters = irisModel$parameters, classification = map(irisModel$z),
          truth = irisSim[,-1], dimens = dimens, xlim = xlim, ylim = ylim)
irisModel3 <- mclustModel(iris[,-5], irisBIC, G = 3)
irisSim3 <- sim(modelName = irisModel3$modelName, parameters = irisModel3$parameters, n = 500, seed = 1)
irisModel3$n <- NULL
irisSim3 <- do.call("sim", c(list(n = 500, seed = 1), irisModel3)) # alternative call
clPairs(irisSim3[,-1], cl = irisSim3[,1])

simE
Simulate from a Parameterized MVN Mixture Model

Description
Simulate data from a parameterized MVN mixture model.

Usage
simE(parameters, n, seed = NULL, ...)
simV(parameters, n, seed = NULL, ...)
simEII(parameters, n, seed = NULL, ...)
simVII(parameters, n, seed = NULL, ...)
simEEI(parameters, n, seed = NULL, ...)
simVEI(parameters, n, seed = NULL, ...)
simEVI(parameters, n, seed = NULL, ...)
simVVI(parameters, n, seed = NULL, ...)
simEEE(parameters, n, seed = NULL, ...)
simVEE(parameters, n, seed = NULL, ...)
simEVE(parameters, n, seed = NULL, ...)
simVVE(parameters, n, seed = NULL, ...)
simEEV(parameters, n, seed = NULL, ...)
simVEV(parameters, n, seed = NULL, ...)
simEVV(parameters, n, seed = NULL, ...)
simVVV(parameters, n, seed = NULL, ...)

Arguments
parameters A list with the following components:
pro A vector whose kth component is the mixing proportion for the kth component of the mixture model. If missing, equal proportions are assumed.
mean The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
variance A list of variance parameters for the model. The components of this list depend on the model specification.
See the help file for mclustVariance for details.
n An integer specifying the number of data points to be simulated.
seed An optional integer argument to set.seed() for reproducible random class assignment. By default the current seed will be used. Reproducibility can also be achieved by calling set.seed before calling sim.
... Catches unused arguments in indirect or list calls via do.call.

Details
This function can be used with an indirect or list call using do.call, allowing the output of e.g. mstep, em, me, Mclust to be passed directly without the need to specify individual parameters as arguments.

Value
A matrix in which the first column is the classification and the remaining columns are the n observations simulated from the specified MVN mixture model.
Attributes: "modelName" A character string indicating the variance model used for the simulation.

See Also
sim, Mclust, mstepE, mclustVariance.

Examples
d <- 2
G <- 2
scale <- 1
shape <- c(1, 9)
O1 <- diag(2)
O2 <- diag(2)[,c(2,1)]
O <- array(cbind(O1,O2), c(2, 2, 2))
O
variance <- list(d = d, G = G, scale = scale, shape = shape, orientation = O)
mu <- matrix(0, d, G) ## center at the origin
simdat <- simEEV(n = 200,
                 parameters = list(pro = c(1,1), mean = mu, variance = variance),
                 seed = NULL)
cl <- simdat[,1]
sigma <- array(apply(O, 3, function(x,y) crossprod(x*y), y = sqrt(scale*shape)), c(2,2,2))
paramList <- list(mu = mu, sigma = sigma)
coordProj(simdat, paramList = paramList, classification = cl)

summary.Mclust
Summarizing Gaussian Finite Mixture Model Fits

Description
Summary method for class "Mclust".

Usage
## S3 method for class 'Mclust'
summary(object, classification = TRUE, parameters = FALSE, ...)
## S3 method for class 'summary.Mclust'
print(x, digits = getOption("digits"), ...)

Arguments
object An object of class 'Mclust' resulting from a call to Mclust or densityMclust.
x An object of class 'summary.Mclust', usually the result of a call to summary.Mclust.
classification Logical; if TRUE, a table of MAP classification/clustering of observations is printed.
parameters Logical; if TRUE, the parameters of mixture components are printed.
digits The number of significant digits to use when printing.
... Further arguments passed to or from other methods.

Author(s)
<NAME>

See Also
Mclust, densityMclust.

Examples
mod1 = Mclust(iris[,1:4])
summary(mod1)
summary(mod1, parameters = TRUE, classification = FALSE)
mod2 = densityMclust(faithful, plot = FALSE)
summary(mod2)
summary(mod2, parameters = TRUE)

summary.mclustBIC
Summary function for model-based clustering via BIC

Description
Optimal model characteristics and classification for model-based clustering via mclustBIC.

Usage
## S3 method for class 'mclustBIC'
summary(object, data, G, modelNames, ...)

Arguments
object An 'mclustBIC' object, which is the result of applying mclustBIC to data.
data The matrix or vector of observations used to generate ‘object’.
G A vector of integers giving the numbers of mixture components (clusters) from which the best model according to BIC will be selected (as.character(G) must be a subset of the row names of object). The default is to select the best model for all numbers of mixture components used to obtain object.
modelNames A vector of character strings giving the model parameterizations from which the best model according to BIC will be selected (as.character(model) must be a subset of the column names of object). The default is to select the best model for the parameterizations used to obtain object.
... Not used. For generic/method consistency.
Value
A list giving the optimal (according to BIC) parameters, conditional probabilities z, and log-likelihood, together with the associated classification and its uncertainty. The details of the output components are as follows:
modelName A character string denoting the model corresponding to the optimal BIC.
n The number of observations in the data.
d The dimension of the data.
G The number of mixture components in the model corresponding to the optimal BIC.
bic The optimal BIC value.
loglik The log-likelihood corresponding to the optimal BIC.
parameters A list with the following components:
pro A vector whose kth component is the mixing proportion for the kth component of the mixture model. If missing, equal proportions are assumed.
mean The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
variance A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for mclustVariance for details.
z A matrix whose [i,k]th entry is the probability that observation i in the data belongs to the kth class.
classification map(z): the classification corresponding to z.
uncertainty The uncertainty associated with the classification.
Attributes:
"bestBICvalues" Some of the best BIC values for the analysis.
"prior" The prior as specified in the input.
"control" The control parameters for EM as specified in the input.
"initialization" The parameters used to initialize EM for computing the maximum likelihood values used to obtain the BIC.

See Also
mclustBIC, mclustModel

Examples
irisBIC <- mclustBIC(iris[,-5])
summary(irisBIC, iris[,-5])
summary(irisBIC, iris[,-5], G = 1:6, modelNames = c("VII", "VVI", "VVV"))

summary.MclustBootstrap
Summary Function for Bootstrap Inference for Gaussian Finite Mixture Models

Description
Summary of the bootstrap distribution for the parameters of a Gaussian mixture model, providing either standard errors or percentile bootstrap confidence intervals.

Usage
## S3 method for class 'MclustBootstrap'
summary(object, what = c("se", "ci", "ave"), conf.level = 0.95, ...)

Arguments
object An object of class 'MclustBootstrap' as returned by MclustBootstrap.
what A character string: "se" for the standard errors; "ci" for the confidence intervals; "ave" for the averages.
conf.level A value specifying the confidence level of the interval.
... Further arguments passed to or from other methods.

Details
For details about the procedure used to obtain the bootstrap distribution see MclustBootstrap.

See Also
MclustBootstrap.

Examples
data(diabetes)
X = diabetes[,-1]
modClust = Mclust(X)
bootClust = MclustBootstrap(modClust)
summary(bootClust, what = "se")
summary(bootClust, what = "ci")
data(acidity)
modDens = densityMclust(acidity, plot = FALSE)
modDens = MclustBootstrap(modDens)
summary(modDens, what = "se")
summary(modDens, what = "ci")

summary.MclustDA
Summarizing discriminant analysis based on Gaussian finite mixture modeling

Description
Summary method for class "MclustDA".

Usage
## S3 method for class 'MclustDA'
summary(object, parameters = FALSE, newdata, newclass, ...)
## S3 method for class 'summary.MclustDA'
print(x, digits = getOption("digits"), ...)

Arguments
object An object of class 'MclustDA' resulting from a call to MclustDA.
x An object of class 'summary.MclustDA', usually the result of a call to summary.MclustDA.
parameters Logical; if TRUE, the parameters of mixture components are printed.
newdata A data frame or matrix giving the test data.
newclass A vector giving the class labels for the observations in the test data.
digits The number of significant digits to use when printing.
... Further arguments passed to or from other methods.

Value
The function summary.MclustDA computes and returns a list of summary statistics of the estimated MclustDA or EDDA model for classification.

Author(s)
<NAME>

See Also
MclustDA, plot.MclustDA.

Examples
mod = MclustDA(data = iris[,1:4], class = iris$Species)
summary(mod)
summary(mod, parameters = TRUE)

summary.MclustDR
Summarizing dimension reduction method for model-based clustering and classification

Description
Summary method for class "MclustDR".

Usage
## S3 method for class 'MclustDR'
summary(object, numdir, std = FALSE, ...)
## S3 method for class 'summary.MclustDR'
print(x, digits = max(5, getOption("digits") - 3), ...)

Arguments
object An object of class 'MclustDR' resulting from a call to MclustDR.
x An object of class 'summary.MclustDR', usually the result of a call to summary.MclustDR.
numdir An integer providing the number of basis directions to be printed.
std If TRUE, the basis coefficients are scaled such that all predictors have unit standard deviation.
digits The number of significant digits to use when printing.
... Further arguments passed to or from other methods.

Author(s)
<NAME>

See Also
MclustDR, plot.MclustDR

summary.MclustSSC
Summarizing semi-supervised classification model based on Gaussian finite mixtures

Description
Summary method for class "MclustSSC".

Usage
## S3 method for class 'MclustSSC'
summary(object, parameters = FALSE, ...)
## S3 method for class 'summary.MclustSSC'
print(x, digits = getOption("digits"), ...)

Arguments
object An object of class 'MclustSSC' resulting from a call to MclustSSC.
x An object of class 'summary.MclustSSC', usually the result of a call to summary.MclustSSC.
parameters Logical; if TRUE, the parameters of mixture components are printed.
digits The number of significant digits to use when printing.
... Further arguments passed to or from other methods.

Value
The function summary.MclustSSC computes and returns a list of summary statistics of the estimated MclustSSC model for semi-supervised classification.

Author(s)
<NAME>

See Also
MclustSSC, plot.MclustSSC.

surfacePlot
Density or uncertainty surface for bivariate mixtures

Description
Plots a density or uncertainty surface given bivariate data and parameters of an MVN mixture model for the data.

Usage
surfacePlot(data, parameters, what = c("density", "uncertainty"),
            type = c("contour", "hdr", "image", "persp"),
            transformation = c("none", "log", "sqrt"),
            grid = 200, nlevels = 11, levels = NULL,
            prob = c(0.25, 0.5, 0.75),
            col = gray(0.5),
            col.palette = function(...) hcl.colors(..., "blues", rev = TRUE),
            hdr.palette = blue2grey.colors,
            xlim = NULL, ylim = NULL, xlab = NULL, ylab = NULL,
            main = FALSE, scale = FALSE, swapAxes = FALSE,
            verbose = FALSE, ...)

Arguments
data A matrix or data frame of bivariate observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
parameters A named list giving the parameters of an MCLUST model, used to produce superimposed ellipses on the plot. The relevant components are as follows:
mean The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the kth component of the mixture model.
variance A list of variance parameters for the model.
The components of this list depend on the model specification. See the help file for mclustVariance for details.
what One of "density" (default) or "uncertainty", indicating what to plot.
type One of "contour" (default), "hdr", "image", or "persp", indicating the plot type.
transformation One of "none" (default), "log", or "sqrt", indicating a transformation to be applied before plotting.
grid The number of grid points (evenly spaced on each axis). The mixture density and uncertainty are computed at grid x grid points to produce the surface plot. Default: 200.
nlevels The number of levels to use for a contour plot. Default: 11.
levels A vector of levels at which to draw the lines in a contour plot.
prob A vector of probability levels for computing HDR. Only used if type = "hdr"; supersedes the nlevels and levels arguments.
col A string specifying the colour to be used for type = "contour" and type = "persp" plots.
col.palette A function which defines a palette of colours to be used for type = "image" plots.
hdr.palette A function which defines a palette of colours to be used for type = "hdr" plots.
xlim, ylim Optional arguments specifying bounds for, respectively, the abscissa and ordinate of the plot. This may be useful when comparing plots.
xlab, ylab Optional arguments specifying labels for the x-axis and y-axis.
main A logical variable or NULL indicating whether or not to add a title to the plot identifying the dimensions used.
scale A logical variable indicating whether or not the two dimensions should be plotted on the same scale, and thus preserve the shape of the distribution. The default is not to scale.
swapAxes A logical variable indicating whether or not the axes should be swapped for the plot.
verbose A logical variable indicating whether or not to print a message while the function computes values at the grid points, which can take some time to complete.
... Other graphics parameters.

Details
For an image plot, a color scheme may need to be selected on the display device in order to view the plot.

Value
A plot showing (a transformation of) the density or uncertainty for the given mixture model and data. The function also returns an invisible list with components x, y, and z in which x and y are the values used to define the grid and z is the transformed density or uncertainty at the grid points.

References
<NAME> and <NAME> (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association 97:611-631.
<NAME>, <NAME>, <NAME> and <NAME> (2012). mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington.

See Also
mclust2Dplot

Examples
faithfulModel <- Mclust(faithful)
surfacePlot(faithful, parameters = faithfulModel$parameters,
            type = "contour", what = "density", transformation = "none",
            drawlabels = FALSE)
surfacePlot(faithful, parameters = faithfulModel$parameters,
            type = "persp", what = "density", transformation = "log")
surfacePlot(faithful, parameters = faithfulModel$parameters,
            type = "contour", what = "uncertainty", transformation = "log")

thyroid
UCI Thyroid Gland Data

Description
Data on five laboratory tests administered to a sample of 215 patients.
The tests are used to predict whether a patient's thyroid can be classified as euthyroidism (normal thyroid gland function), hypothyroidism (underactive thyroid not producing enough thyroid hormone), or hyperthyroidism (overactive thyroid producing and secreting excessive amounts of the free thyroid hormones T3 and/or thyroxine T4). Diagnosis of thyroid operation was based on a complete medical record, including anamnesis, scan, etc.

Usage
data(thyroid)

Format
A data frame with the following variables:
Diagnosis Diagnosis of thyroid operation: Hypo, Normal, and Hyper.
RT3U T3-resin uptake test (percentage).
T4 Total serum thyroxin as measured by the isotopic displacement method.
T3 Total serum triiodothyronine as measured by radioimmunoassay.
TSH Basal thyroid-stimulating hormone (TSH) as measured by radioimmunoassay.
DTSH Maximal absolute difference of the TSH value after injection of 200 micrograms of thyrotropin-releasing hormone as compared to the basal value.

Source
One of several databases in the Thyroid Disease Data Set (new-thyroid.data, new-thyroid.names) of the UCI Machine Learning Repository https://archive.ics.uci.edu/ml/datasets/thyroid+disease. Please note the UCI conditions of use.

References
<NAME>., Broeckaert, <NAME>. and <NAME>. (1983). Comparison of Multivariate Discriminant Techniques for Clinical Data - Application to the Thyroid Functional State. Meth. Inform. Med. 22, pp. 93-101.
<NAME>. and <NAME> (1986). Potential Pattern Recognition in Chemical and Medical Decision Making. Research Studies Press, Letchworth, England.

uncerPlot
Uncertainty Plot for Model-Based Clustering

Description
Displays the uncertainty in converting a conditional probability from EM to a classification in model-based clustering.

Usage
uncerPlot(z, truth, ...)

Arguments
z A matrix whose [i,k]th entry is the conditional probability of the ith observation belonging to the kth component of the mixture.
truth A numeric or character vector giving the true classification of the data.
... Provided so that lists with elements other than the arguments can be passed in indirect or list calls with do.call.

Details
When truth is provided and the number of classes is compatible with z, the function compareClass is used to find the best correspondence between the classes in truth and z.

Value
A plot of the uncertainty profile of the data, with uncertainties in increasing order of magnitude. If truth is supplied and the number of classes is the same as the number of columns of z, the uncertainty of the misclassified data is marked by vertical lines on the plot.

See Also
mclustBIC, em, me, mapClass

Examples
irisModel3 <- Mclust(iris[,-5], G = 3)
uncerPlot(z = irisModel3$z)
uncerPlot(z = irisModel3$z, truth = iris[,5])

unmap
Indicator Variables given Classification

Description
Converts a classification into a matrix of indicator variables.

Usage
unmap(classification, groups=NULL, noise=NULL, ...)

Arguments
classification A numeric or character vector. Typically the distinct entries of this vector would represent a classification of observations in a data set.
groups A numeric or character vector indicating the groups from which classification is drawn. If not supplied, the default is assumed to be the unique entries of classification.
noise A single numeric or character value used to indicate the value of groups corresponding to noise.
... Catches unused arguments in indirect or list calls via do.call.
Value
An n by m matrix of (0,1) indicator variables, where n is the length of classification and m is the number of unique values or symbols in classification. Columns are labeled by the unique values in classification, and the [i,j]th entry is 1 if classification[i] is the jth unique value or symbol (in sorted order) of classification. If a noise value or symbol is designated, the corresponding indicator variables are relocated to the last column of the matrix.

See Also
map, estep, me

Examples
z <- unmap(iris[,5])
z[1:5, ]
emEst <- me(modelName = "VVV", data = iris[,-5], z = z)
emEst$z[1:5,]
map(emEst$z)

wdbc
UCI Wisconsin Diagnostic Breast Cancer Data

Description
The data set provides data for 569 patients on 30 features of the cell nuclei obtained from a digitized image of a fine needle aspirate (FNA) of a breast mass. For each patient the cancer was diagnosed as malignant or benign.

Usage
data(wdbc)

Format
A data frame with 569 observations on the following variables:
ID ID number
Diagnosis cancer diagnosis: M = malignant, B = benign
Radius_mean a numeric vector
Texture_mean a numeric vector
Perimeter_mean a numeric vector
Area_mean a numeric vector
Smoothness_mean a numeric vector
Compactness_mean a numeric vector
Concavity_mean a numeric vector
Nconcave_mean a numeric vector
Symmetry_mean a numeric vector
Fractaldim_mean a numeric vector
Radius_se a numeric vector
Texture_se a numeric vector
Perimeter_se a numeric vector
Area_se a numeric vector
Smoothness_se a numeric vector
Compactness_se a numeric vector
Concavity_se a numeric vector
Nconcave_se a numeric vector
Symmetry_se a numeric vector
Fractaldim_se a numeric vector
Radius_extreme a numeric vector
Texture_extreme a numeric vector
Perimeter_extreme a numeric vector
Area_extreme a numeric vector
Smoothness_extreme a numeric vector
Compactness_extreme a numeric vector
Concavity_extreme a numeric vector
Nconcave_extreme a numeric vector
Symmetry_extreme a numeric vector
Fractaldim_extreme a numeric vector

Details
The recorded features are:
• Radius as mean of distances from center to points on the perimeter
• Texture as standard deviation of gray-scale values
• Perimeter as cell nucleus perimeter
• Area as cell nucleus area
• Smoothness as local variation in radius lengths
• Compactness as cell nucleus compactness, perimeter^2 / area - 1
• Concavity as severity of concave portions of the contour
• Nconcave as number of concave portions of the contour
• Symmetry as cell nucleus shape
• Fractaldim as fractal dimension, "coastline approximation" - 1
For each feature the recorded values are computed from each image as <feature_name>_mean, <feature_name>_se, and <feature_name>_extreme, for the mean, the standard error, and the mean of the three largest values.

Source
The Breast Cancer Wisconsin (Diagnostic) Data Set (wdbc.data, wdbc.names) from the UCI Machine Learning Repository https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic). Please note the UCI conditions of use.

References
<NAME>., <NAME>., and <NAME>. (1995). Breast cancer diagnosis and prognosis via linear programming. Operations Research, 43(4), pp. 570-577.

wreath
Data Simulated from a 14-Component Mixture

Description
A dataset consisting of 1000 observations drawn from a 14-component normal mixture in which the covariances of the components have the same size and shape but differ in orientation.

Usage
data(wreath)

References
<NAME>, <NAME> and <NAME> (2005). Incremental model-based clustering for large datasets with small clusters.
Journal of Computational and Graphical Statistics 14:1:18.
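The wreath entry above has no Examples section. As a minimal hedged sketch of loading and exploring the data (the plotting choices and the G = 1:20 search range are illustrative assumptions, not part of the original documentation):

library(mclust)
data(wreath)
# visualize the 1000 simulated points
plot(wreath, pch = 20, cex = 0.5)
# allow up to 20 components so that BIC can recover the 14 true clusters
wreathBIC <- mclustBIC(wreath, G = 1:20)
summary(wreathBIC, wreath)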
Package ‘POMADE’
December 2, 2022
Title Power for Meta-Analysis of Dependent Effects
Version 0.1.0
BugReports https://github.com/MikkelVembye/POMADE/issues
Description Provides functions to compute and plot power levels, minimum detectable effect sizes, and minimum required sample sizes for the test of the overall average effect size in meta-analysis of dependent effect sizes.
Depends R (>= 4.1.0)
License MIT + file LICENSE
Encoding UTF-8
LazyData true
RoxygenNote 7.2.1
Imports ggplot2, dplyr, magrittr, purrr, future, furrr, stats, stringr, utils, tidyr, tibble
Suggests covr, roxygen2, testthat (>= 3.0.0)
Config/testthat/edition 3
Language en-US
URL https://mikkelvembye.github.io/POMADE/
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0001-9071-0724>), <NAME> [aut] (<https://orcid.org/0000-0003-0591-9465>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-12-02 16:20:13 UTC

R topics documented: cluster_bias_adjustment, effective_sample_sizes, mdes_MADE, min_studies_MADE, plot_MADE, plot_MADE.mdes, plot_MADE.min_studies, plot_MADE.power, power_MADE, tau2_approximation, VWB22_pilot

cluster_bias_adjustment
Cluster Bias Correction

Description
Function to conduct cluster bias correction of sampling variance estimates obtained from cluster-randomized studies in which the reported variance does not account for clustering.

Usage
cluster_bias_adjustment(sigma2js, cluster_size = 22, icc = 0.2)

Arguments
sigma2js A vector of sampling variance estimates that do not account for clustering.
cluster_size A numerical value for the average cluster size.
icc Assumed intra-class correlation (proportion of total variance at the cluster level).

Value
Returns a vector of cluster-bias-adjusted variance estimates.

Examples
cbc_var <- cluster_bias_adjustment(
  sigma2js = c(0.04, 0.06, 0.08, 0.1),
  cluster_size = 15,
  icc = 0.15
)
cbc_var

effective_sample_sizes
Approximate Effective Sample Sizes

Description
Approximate effective sample sizes, adjusted for cluster-dependence.

Usage
effective_sample_sizes(
  sample_sizes_raw = NULL,
  Nt_raw = NULL,
  Nc_raw = NULL,
  cluster_size = 22,
  icc = 0.22
)

Arguments
sample_sizes_raw Vector of the raw total study sample size(s).
Nt_raw Vector of raw treatment group sample size(s).
Nc_raw Vector of raw control group sample size(s).
cluster_size Average cluster size (Default = 22, a common class size in education research studies).
icc Assumed intra-class correlation (Default = 0.22, the average ICC value in Hedges & Hedberg (2007) unconditional models).

Details
The effective sample size for study j is computed as N_j / DE, where N_j is the raw sample size and DE is the design effect for cluster-dependence (commonly DE = 1 + (cluster_size - 1) * icc).

Value
A vector of effective sample sizes, adjusted for cluster-dependence.

Examples
sample_sizes <- sample(50:1000, 50, replace = TRUE)
effective_sample_sizes(
  sample_sizes_raw = sample_sizes,
  cluster_size = 20,
  icc = 0.15
)

mdes_MADE
Minimum Detectable Effect Size (MDES) for Meta-Analysis With Dependent Effect Sizes

Description
Compute the minimum detectable effect size in a meta-analysis of dependent effect size estimates, given a specified number of studies, power level, estimation method, and further assumptions about the distribution of studies.

Usage
mdes_MADE(
  J, tau, omega, rho,
  alpha = 0.05, target_power = 0.8, d = 0,
  model = "CHE", var_df = "RVE",
  sigma2_dist = NULL, n_ES_dist = NULL,
  iterations = 100, seed = NULL, warning = TRUE,
  upper = 2, show_lower = FALSE
)

Arguments
J Number of studies. Can be one value or a vector of multiple values.
tau Between-study SD. Can be one value or a vector of multiple values.
omega Within-study SD.
Can be one value or a vector of multiple values.
rho Correlation coefficient between effect size estimates from the same study. Can be one value or a vector of multiple values.
alpha Level of statistical significance. Can be one value or a vector of multiple values. Default is 0.05.
target_power Numerical value specifying the target power level. Can be one value or a vector of multiple values.
d Contrast value. Can be one value or a vector of multiple values. Default is 0.
model Assumed working model for dependent effect sizes, either "CHE" for the correlated-and-hierarchical effects model, "CE" for the correlated effects model, or "MLMA" for the multi-level meta-analysis model. Default is "CHE". Can be one value or a vector of multiple values.
var_df Indicates the technique used to obtain the sampling variance of the average effect size estimate and the degrees of freedom, either "Model" for the model-based variance estimator with J - 1 degrees of freedom, "Satt" for the model-based variance estimator with Satterthwaite degrees of freedom, or "RVE" for the robust variance estimator with Satterthwaite degrees of freedom. Default is "RVE". Can be one value or a vector of multiple values.
sigma2_dist Distribution of sampling variance estimates from each study. Can be either a single value, a vector of plausible values, or a function that generates random values.
n_ES_dist Distribution of the number of effect sizes per study. Can be either a single value, a vector of plausible values, or a function that generates random values.
iterations Number of iterations per condition (default is 100).
seed Numerical value for a seed to ensure reproducibility of the iterated power approximations.
warning Logical indicating whether to return a warning when either sigma2_dist or n_ES_dist is based on balanced assumptions.
upper Numerical value containing the upper bound of the interval to be searched for the MDES.
show_lower Logical value indicating whether to report the lower bound of the interval searched for the MDES. Default is FALSE.

Value
Returns a tibble with information about the number of studies, the between-study and within-study variance components, the sample correlation, the contrast effect, the level of statistical significance, the target power value(s), the minimum detectable effect size, the number of iterations, the model used to handle dependent effect sizes, and the methods used to obtain sampling variance estimates as well as the number of effect sizes per study.

Examples
mdes_MADE(
  J = 30, tau = 0.05, omega = 0.02, rho = 0.2,
  model = "CHE", var_df = "RVE",
  sigma2_dist = 4 / 100, n_ES_dist = 6,
  seed = 10052510
)

min_studies_MADE
Finding the Number of Studies Needed to Obtain a Certain Amount of Power

Description
Compute the minimum number of studies needed to obtain a specified power level in a meta-analysis of dependent effect size estimates, given an effect size of practical concern, estimation method, and further assumptions about the distribution of studies.

Usage
min_studies_MADE(
  mu, tau, omega, rho,
  alpha = 0.05, target_power = 0.8, d = 0,
  model = "CHE", var_df = "RVE",
  sigma2_dist = NULL, n_ES_dist = NULL,
  iterations = 100, seed = NULL, warning = TRUE,
  upper = 100, show_lower = FALSE
)

Arguments
mu Effect size of practical concern. Can be one value or a vector of multiple values.
tau Between-study SD. Can be one value or a vector of multiple values.
omega Within-study SD. Can be one value or a vector of multiple values.
rho Correlation coefficient between effect size estimates from the same study. Can be one value or a vector of multiple values.
alpha Level of statistical significance. Can be one value or a vector of multiple values. Default is 0.05.
target_power Numerical value specifying the target power level. Can be one value or a vector of multiple values.
d Contrast value. Can be one value or a vector of multiple values. Default is 0.
model Assumed working model for dependent effect sizes, either "CHE" for the correlated-and-hierarchical effects model, "CE" for the correlated effects model, or "MLMA" for the multi-level meta-analysis model. Default is "CHE". Can be one value or a vector of multiple values.
var_df Indicates the technique used to obtain the sampling variance of the average effect size estimate and the degrees of freedom, either "Model" for the model-based variance estimator with J - 1 degrees of freedom, "Satt" for the model-based variance estimator with Satterthwaite degrees of freedom, or "RVE" for the robust variance estimator with Satterthwaite degrees of freedom. Default is "RVE". Can be one value or a vector of multiple values.
sigma2_dist Distribution of sampling variance estimates from each study. Can be either a single value, a vector of plausible values, or a function that generates random values.
n_ES_dist Distribution of the number of effect sizes per study. Can be either a single value, a vector of plausible values, or a function that generates random values.
iterations Number of iterations per condition (default is 100).
seed Numerical value for a seed to ensure reproducibility of the iterated power approximations.
warning Logical indicating whether to return a warning when either sigma2_dist or n_ES_dist is based on balanced assumptions.
upper Numerical value containing the upper bound of the interval to be searched for the minimum number of studies.
show_lower Logical value indicating whether to report the lower bound of the interval searched for the minimum number of studies. Default is FALSE.

Value
Returns a tibble with information about the effect size of practical concern, the between-study and within-study variance components, the sample correlation, the contrast effect, the level of statistical significance, the target power value(s), the number of studies needed, the number of iterations, the model used to handle dependent effect sizes, and the methods used to obtain sampling variance estimates as well as the number of effect sizes per study.

Examples
min_studies_MADE(
  mu = 0.3, tau = 0.05, omega = 0.01, rho = 0.2,
  target_power = .7, alpha = 0.05,
  model = "CE", var_df = "RVE",
  sigma2_dist = 4 / 200, n_ES_dist = 5.5,
  seed = 10052510
)

plot_MADE
Generic plot function for 'MADE' objects

Description
Create a faceted plot displaying the results of a set of power analyses. This is a generic function to make facet_grid plots, with specific methods defined for power_MADE, mdes_MADE, and min_studies_MADE objects.

Usage
plot_MADE(
  data, v_lines, legend_position, color,
  numbers, number_size, numbers_ynudge,
  caption, x_lab, x_breaks, x_limits,
  y_breaks, y_limits, y_expand = NULL,
  warning, traffic_light_assumptions, ...
)

Arguments
data Data/object for which the plot should be made.
v_lines Integer or vector specifying vertical line(s) within each plot. Default is NULL.
legend_position Character string specifying the position of the legend. Default is "bottom".
color Logical indicating whether to use color in the plot(s). Default is TRUE.
numbers Logical indicating whether to number the plots. Default is TRUE.
number_size Integer value specifying the size of the (optional) plot numbers. Default is 2.5.
numbers_ynudge Integer value for the vertical nudge of the (optional) plot numbers.
caption Logical indicating whether to include a caption with detailed information regarding the analysis. Default is TRUE.
x_lab Title for the x-axis. If NULL (the default), the x_lab is specified automatically.
x_breaks Optional vector to specify breaks on the x-axis. Default is NULL.
x_limits Optional vector of length 2 to specify the limits of the x-axis. Default is NULL, which allows limits to be determined automatically from the data.
y_breaks Optional vector to specify breaks on the y-axis.
y_limits Optional vector of length 2 to specify the limits of the y-axis.
y_expand Optional vector to expand the limits of the y-axis. Default is NULL.
warning Logical indicating whether warnings should be returned when multiple models appear in the data. Default is TRUE.
traffic_light_assumptions Optional logical to specify coloring of the strips of the facet grids to emphasize assumptions about the likelihood of the given analytical scenario. See Vembye, Pustejovsky, & Pigott (In preparation) for further details.
... Additional arguments available for some classes of objects.

Value
A ggplot object.

References
<NAME>., <NAME>., & <NAME>. (In preparation). Conducting power analysis for meta-analysis of dependent effect sizes: Common guidelines and an introduction to the POMADE R package.

See Also
plot_MADE.power, plot_MADE.mdes, plot_MADE.min_studies

Examples
power_dat <- power_MADE(
  J = c(50, 56), mu = 0.15, tau = 0.1, omega = 0.05, rho = 0,
  sigma2_dist = 4 / 200, n_ES_dist = 6
)
power_example <- plot_MADE(
  data = power_dat,
  power_min = 0.8,
  expected_studies = c(52, 54),
  warning = FALSE, caption = TRUE, color = TRUE,
  model_comparison = FALSE, numbers = FALSE
)
power_example

plot_MADE.mdes
Plot function for a 'mdes' object

Description
Creates a faceted plot for minimum detectable effect size (MDES) analyses calculated using mdes_MADE.

Usage
## S3 method for class 'mdes'
plot_MADE(
  data, v_lines = NULL, legend_position = "bottom", color = TRUE,
  numbers = TRUE, number_size = 2.5, numbers_ynudge = NULL,
  caption = TRUE, x_lab = NULL, x_breaks = NULL, x_limits = NULL,
  y_breaks = ggplot2::waiver(), y_limits = NULL, y_expand = NULL,
  warning = TRUE, traffic_light_assumptions = NULL,
  es_min = NULL, expected_studies = NULL, ...
)

Arguments
data Data/object for which the plot should be made.
v_lines Integer or vector specifying vertical line(s) within each plot. Default is NULL.
legend_position Character string specifying the position of the legend. Default is "bottom".
color Logical indicating whether to use color in the plot(s). Default is TRUE.
numbers Logical indicating whether to number the plots. Default is TRUE.
number_size Integer value specifying the size of the (optional) plot numbers. Default is 2.5.
numbers_ynudge Integer value for the vertical nudge of the (optional) plot numbers.
caption Logical indicating whether to include a caption with detailed information regarding the analysis. Default is TRUE.
x_lab Title for the x-axis. If NULL (the default), the x_lab is specified automatically.
x_breaks Optional vector to specify breaks on the x-axis. Default is NULL.
x_limits Optional vector of length 2 to specify the limits of the x-axis. Default is NULL, which allows limits to be determined automatically from the data.
y_breaks Optional vector to specify breaks on the y-axis.
y_limits Optional vector of length 2 to specify the limits of the y-axis.
y_expand Optional vector to expand the limits of the y-axis. Default is NULL.
warning Logical indicating whether warnings should be returned when multiple models appear in the data. Default is TRUE.
traffic_light_assumptions Optional logical to specify coloring of the strips of the facet grids to emphasize assumptions about the likelihood of the given analytical scenario. See Vembye, Pustejovsky, & Pigott (In preparation) for further details.
es_min Optional integer or vector to specify a horizontal line or interval, indicating a benchmark value or values for the minimum effect size of practical concern (default is NULL).
expected_studies Optional vector of length 2 specifying a range for the number of studies one expects to include in the meta-analysis. If specified, this interval will be shaded across facet_grid plots (default is NULL).
... Additional arguments available for some classes of objects.

Details
In general, it can be rather difficult to guess or approximate the true model parameters and sample characteristics a priori. Calculating the minimum detectable effect size under just a single set of assumptions can easily be misleading, even if the true model and data structure diverge only slightly from the assumed model and data. To maximize the informativeness of the analysis, Vembye, Pustejovsky, & Pigott (In preparation) suggest accommodating the uncertainty of the power approximations by reporting or plotting minimum detectable effect size estimates across a range of possible scenarios, which can be done using plot_MADE.mdes.

Value
A ggplot plot showing the minimum detectable effect size across the expected number of studies, faceted by the between-study and within-study SDs, with different colors, lines, and shapes corresponding to different values of the assumed sample correlation.

References
<NAME>., <NAME>., & <NAME>. (In preparation). Conducting power analysis for meta-analysis of dependent effect sizes: Common guidelines and an introduction to the POMADE R package.

See Also
plot_MADE

Examples
mdes_MADE(
  J = c(25, 35), tau = 0.05, omega = 0, rho = 0,
  target_power = .6, alpha = 0.1,
  sigma2_dist = 4 / 200, n_ES_dist = 8,
  seed = 10052510
) |>
  plot_MADE(expected_studies = c(28, 32), numbers = FALSE)

plot_MADE.min_studies
Plot function for a 'min_studies' object

Description
Creates a faceted plot with analyses of the minimum number of studies needed to obtain a given effect size with specified levels of power, as calculated using min_studies_MADE.

Usage
## S3 method for class 'min_studies'
plot_MADE(
  data, v_lines = NULL, legend_position = "bottom", color = TRUE,
  numbers = TRUE, number_size = 2.5, numbers_ynudge = NULL,
  caption = TRUE, x_lab = NULL, x_breaks = NULL, x_limits = NULL,
  y_breaks = ggplot2::waiver(), y_limits = NULL, y_expand = NULL,
  warning = TRUE, traffic_light_assumptions = NULL,
  v_shade = NULL, h_lines = NULL, ...
)

Arguments
data Data/object for which the plot should be made.
v_lines Integer or vector specifying vertical line(s) within each plot. Default is NULL.
legend_position Character string specifying the position of the legend. Default is "bottom".
color Logical indicating whether to use color in the plot(s). Default is TRUE.
numbers Logical indicating whether to number the plots. Default is TRUE.
number_size Integer value specifying the size of the (optional) plot numbers. Default is 2.5.
numbers_ynudge Integer value for the vertical nudge of the (optional) plot numbers.
caption Logical indicating whether to include a caption with detailed information regarding the analysis. Default is TRUE.
x_lab Title for the x-axis. If NULL (the default), the x_lab is specified automatically.
x_breaks Optional vector to specify breaks on the x-axis. Default is NULL.
x_limits Optional vector of length 2 to specify the limits of the x-axis. Default is NULL, which allows limits to be determined automatically from the data.
y_breaks Optional vector to specify breaks on the y-axis.
y_limits Optional vector of length 2 to specify the limits of the y-axis.
y_expand Optional vector to expand the limits of the y-axis. Default is NULL.
warning Logical indicating whether warnings should be returned when multiple models appear in the data. Default is TRUE.
traffic_light_assumptions Optional logical to specify coloring of the strips of the facet grids to emphasize assumptions about the likelihood of the given analytical scenario. See Vembye, Pustejovsky, & Pigott (In preparation) for further details.
v_shade Optional vector of length 2 specifying the range of the x-axis interval to be shaded in each plot.
h_lines Optional integer or vector specifying horizontal lines on each plot.
... Additional arguments available for some classes of objects.

Details
In general, it can be rather difficult to guess or approximate the true model parameters and sample characteristics a priori. Calculating the minimum number of studies needed under just a single set of assumptions can easily be misleading, even if the true model and data structure diverge only slightly from the assumed model and data. To maximize the informativeness of the analysis, Vembye, Pustejovsky, & Pigott (In preparation) suggest accommodating the uncertainty of the power approximations by reporting or plotting estimates across a range of possible scenarios, which can be done using plot_MADE.min_studies.

Value
A ggplot plot showing the minimum number of studies needed to obtain a given effect size with a certain amount of power at level alpha, faceted across levels of the within-study SD and the between-study SD, with different colors, lines, and shapes corresponding to different values of the assumed sample correlation. If length(unique(data$mu)) > 1, it returns a ggplot plot showing the minimum number of studies needed to obtain a given effect size with a certain amount of power at level alpha across effect sizes of practical concern, faceted by the between-study and within-study SDs, with different colors, lines, and shapes corresponding to different values of the assumed sample correlation.

References
<NAME>., <NAME>., & <NAME>. (In preparation). Conducting power analysis for meta-analysis of dependent effect sizes: Common guidelines and an introduction to the POMADE R package.

See Also
plot_MADE

Examples
min_studies_MADE(
  mu = c(0.25, 0.35), tau = 0.05, omega = 0.02, rho = 0.2,
  target_power = .7,
  sigma2_dist = 4 / 200, n_ES_dist = 6,
  seed = 10052510
) |>
  plot_MADE(y_breaks = seq(0, 10, 2), numbers = FALSE)

plot_MADE.power
Plot function for a 'power' object

Description
Creates a faceted plot or plots for power analyses conducted with power_MADE.
Usage
## S3 method for class 'power'
plot_MADE(
  data, v_lines = NULL, legend_position = "bottom", color = TRUE,
  numbers = TRUE, number_size = 2.5, numbers_ynudge = 0,
  caption = TRUE, x_lab = NULL, x_breaks = NULL, x_limits = NULL,
  y_breaks = seq(0, 1, 0.2), y_limits = c(0, 1), y_expand = NULL,
  warning = TRUE, traffic_light_assumptions = NULL,
  power_min = NULL, expected_studies = NULL,
  model_comparison = FALSE, ...
)

Arguments
data Data/object for which the plot should be made.
v_lines Integer or vector specifying vertical line(s) within each plot. Default is NULL.
legend_position Character string specifying the position of the legend. Default is "bottom".
color Logical indicating whether to use color in the plot(s). Default is TRUE.
numbers Logical indicating whether to number the plots. Default is TRUE.
number_size Integer value specifying the size of the (optional) plot numbers. Default is 2.5.
numbers_ynudge Integer value for the vertical nudge of the (optional) plot numbers.
caption Logical indicating whether to include a caption with detailed information regarding the analysis. Default is TRUE.
x_lab Title for the x-axis. If NULL (the default), the x_lab is specified automatically.
x_breaks Optional vector to specify breaks on the x-axis. Default is NULL.
x_limits Optional vector of length 2 to specify the limits of the x-axis. Default is NULL, which allows limits to be determined automatically from the data.
y_breaks Optional vector to specify breaks on the y-axis.
y_limits Optional vector of length 2 to specify the limits of the y-axis.
y_expand Optional vector to expand the limits of the y-axis. Default is NULL.
warning Logical indicating whether warnings should be returned when multiple models appear in the data. Default is TRUE.
traffic_light_assumptions Optional logical to specify coloring of the strips of the facet grids to emphasize assumptions about the likelihood of the given analytical scenario. See Vembye, Pustejovsky, & Pigott (In preparation) for further details.
power_min Either an integer specifying a horizontal line or a length-2 vector specifying an interval, indicating a benchmark level of power (default is NULL).
expected_studies Optional vector of length 2 specifying a range for the number of studies one expects to include in the meta-analysis. If specified, this interval will be shaded across facet_grid plots (default is NULL).
model_comparison Logical indicating whether power estimates should be plotted across different working models for dependent effect size estimates (default is FALSE) instead of across values of the sampling correlation.
... Additional arguments available for some classes of objects.

Details
In general, it can be rather difficult to guess or approximate the true model parameters and sample characteristics a priori. Calculating power under only a single set of assumptions can easily be misleading, even if the true model and data structure diverge only slightly from the assumed model and data. To maximize the informativeness of the power approximations, Vembye, Pustejovsky, & Pigott (In preparation) suggest accommodating their uncertainty by reporting or plotting power estimates across a range of possible scenarios, which can be done using plot_MADE.power.

Value
A ggplot plot showing power across the expected number of studies, faceted by the between-study and within-study SDs, with different colors, lines, and shapes corresponding to different values of the assumed sample correlation.
If model_comparison = TRUE, it returns a ggplot plot showing power across the expected number of studies, faceted by the between-study and within-study SDs, with different colors, lines, and shapes corresponding to different working models for dependent effect size estimates.

References
<NAME>., <NAME>., & <NAME>. (In preparation). Conducting power analysis for meta-analysis of dependent effect sizes: Common guidelines and an introduction to the POMADE R package.

See Also
plot_MADE

Examples
power_dat <- power_MADE(
  J = c(50, 56), mu = 0.15, tau = 0.1, omega = 0.05, rho = 0,
  sigma2_dist = 4 / 200, n_ES_dist = 6
)
power_example <- plot_MADE(
  data = power_dat,
  power_min = 0.8,
  expected_studies = c(52, 54),
  warning = FALSE, caption = TRUE, color = TRUE,
  model_comparison = FALSE, numbers = FALSE
)
power_example

power_MADE
Power Approximation for Overall Average Effects in Meta-Analysis With Dependent Effect Sizes

Description
Compute the power of the test of the overall average effect size in a meta-analysis of dependent effect size estimates, given a specified number of studies, effect size of practical concern, estimation method, and further assumptions about the distribution of studies.

Usage
power_MADE(
  J, mu, tau, omega, rho,
  alpha = 0.05, d = 0,
  model = "CHE", var_df = "RVE",
  sigma2_dist = NULL, n_ES_dist = NULL,
  iterations = 100, seed = NULL, warning = TRUE,
  average_power = TRUE
)

Arguments
J Number of studies. Can be one value or a vector of multiple values.
mu Effect size of practical concern. Can be one value or a vector of multiple values.
tau Between-study SD. Can be one value or a vector of multiple values.
omega Within-study SD. Can be one value or a vector of multiple values.
rho Correlation coefficient between effect size estimates from the same study. Can be one value or a vector of multiple values.
alpha Level of statistical significance. Can be one value or a vector of multiple values. Default is 0.05.
d Contrast value. Can be one value or a vector of multiple values. Default is 0.
model Assumed working model for dependent effect sizes, either "CHE" for the correlated-and-hierarchical effects model, "CE" for the correlated effects model, or "MLMA" for the multi-level meta-analysis model. Default is "CHE". Can be one value or a vector of multiple values.
var_df Indicates the technique used to obtain the sampling variance of the average effect size estimate and the degrees of freedom, either "Model" for the model-based variance estimator with J - 1 degrees of freedom, "Satt" for the model-based variance estimator with Satterthwaite degrees of freedom, or "RVE" for the robust variance estimator with Satterthwaite degrees of freedom. Default is "RVE". Can be one value or a vector of multiple values.
sigma2_dist Distribution of sampling variance estimates from each study. Can be either a single value, a vector of plausible values, or a function that generates random values.
n_ES_dist Distribution of the number of effect sizes per study. Can be either a single value, a vector of plausible values, or a function that generates random values.
iterations Number of iterations per condition (default is 100).
seed Numerical value for a seed to ensure reproducibility of the iterated power approximations.
warning Logical indicating whether to return a warning when either sigma2_dist or n_ES_dist is based on balanced assumptions.
average_power Logical indicating whether to calculate average power across the iterations for each condition.
Details
Background material for the power approximations can be found in Vembye, Pustejovsky, & Pigott (2022), including arguments for why it is suggested neither to conduct power analysis based on balanced assumptions about the number of effects per study and the study variance, nor to use the original power approximation assuming independence among effect sizes (Hedges & Pigott, 2001).

Value
Returns a tibble with information about the number of studies, the effect size of practical concern, the between-study and within-study variance components, the sample correlation, the contrast effect, the level of statistical significance, the sampling variance of the overall average effect size of practical concern, the degrees of freedom, the power, the Monte Carlo standard error (MCSE), the number of iterations, the model used to handle dependent effect sizes, and the methods used to obtain sampling variance estimates as well as the number of effect sizes per study.

References
<NAME>., <NAME>., & <NAME>. (2022). Power approximations for overall average effects in meta-analysis with dependent effect sizes. Journal of Educational and Behavioral Statistics, 1-33. doi:10.3102/10769986221127379
<NAME>., & <NAME>. (2001). The power of statistical tests in meta-analysis. Psychological Methods, 6(3), 203-217. doi:10.1037/1082-989X.6.3.203

Examples
power <- power_MADE(
  J = c(40, 60), mu = 0.2, tau = 0.2, omega = 0.1, rho = 0.7,
  sigma2_dist = \(x) rgamma(x, shape = 5, rate = 10),
  n_ES_dist = \(x) 1 + stats::rpois(x, 5.5 - 1),
  model = c("CHE", "MLMA", "CE"),
  var_df = c("Model", "Satt", "RVE"),
  alpha = .05,
  seed = 10052510,
  iterations = 5
)
power

tau2_approximation
Between-Study Variance Approximation

Description
Rough approximation of the between-study variance, based on assumptions about the typical sample size of the studies included in the synthesis.

Usage
tau2_approximation(sample_size = 100, es, df_minus2 = TRUE)

Arguments
sample_size Typical sample size of the studies.
es Smallest effect size of practical concern.
df_minus2 Logical indicating whether the degrees of freedom should be df - 2 or just df.

Value
A tibble with small, medium, and large magnitudes of tau2.

Examples
tau2_approximation(
  sample_size = 50,
  es = 0.1,
  df_minus2 = TRUE
)

VWB22_pilot
Co-teaching Dataset

Description
Data from a meta-analysis on the effects of collaborative models of instruction on student achievement from Vembye, Weiss, and Bhat (In press/forthcoming).

Usage
VWB22_pilot

Format
A tibble with 76 rows/studies and 9 variables:
study_year Study author and year of publication
studyid Unique study ID
esid Unique effect size ID
kj Number of effect sizes per study
N_meanj Average sample size of study
Nt_meanj Average sample size of treatment group within study
Nc_meanj Average sample size of control group within study
ESS_meanj Roughly approximated effective sample sizes
vg_ms_mean Average cluster-bias-corrected sampling variance estimates

Source
Find background material on Vembye's OSF page, and the preprint at https://osf.io/preprints/metaarxiv/mq5v7/.

References
<NAME>., <NAME>., & <NAME>. (In press/forthcoming). The Effects of Co-Teaching and Related Collaborative Models of Instruction on Student Achievement: A Systematic Review and Meta-Analysis. Review of Educational Research. Access to background material at https://osf.io/fby7w/.
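The VWB22_pilot entry has no Examples section. A hedged sketch of one plausible use, feeding the pilot data's vg_ms_mean and kj columns into power_MADE as empirical distributions (the parameter values mu, tau, omega, and rho below are illustrative assumptions, not values from the documentation):

library(POMADE)
# draw sampling variances and effect-size counts from the pilot data
sigma2_emp <- \(x) sample(VWB22_pilot$vg_ms_mean, x, replace = TRUE)
n_ES_emp <- \(x) sample(VWB22_pilot$kj, x, replace = TRUE)
power_emp <- power_MADE(
  J = seq(40, 70, 10),                            # candidate numbers of studies
  mu = 0.1, tau = 0.05, omega = 0.02, rho = 0.2,  # illustrative assumptions
  sigma2_dist = sigma2_emp,
  n_ES_dist = n_ES_emp,
  seed = 10052510
)
power_emp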
Squasher
===
[![Build Status](https://travis-ci.org/jalkoby/squasher.svg?branch=master)](https://travis-ci.org/jalkoby/squasher) [![Code Climate](https://codeclimate.com/github/jalkoby/squasher.svg)](https://codeclimate.com/github/jalkoby/squasher) [![Gem Version](https://badge.fury.io/rb/squasher.svg)](http://badge.fury.io/rb/squasher)

Squasher compresses old ActiveRecord migrations. If you work on a big project with lots of migrations, every `rake db:migrate` might take a few seconds, and creating a new database might take a few minutes. That's because ActiveRecord loads all those migration files. Squasher removes all the migrations and creates a single migration with the final database state as of the specified date (the new migration will look like a schema).

Attention
---
Prior to 0.6.2, squasher could damage your real data, as it generated "force" tables. Please upgrade to 0.6.2+ and manually remove the "force" tag from the init migration.

Installation
---
You don't have to add it to your Gemfile. Just do a standalone installation:
```
$ gem install squasher
```
**@note** if you use rbenv, don't forget to run `rbenv rehash`.

If you want to share it with your rails/sinatra/etc app, add the following:
```
# Yep, the missing group in most Gemfiles where all utilities should be!
group :tools do
  gem 'squasher', '>= 0.6.0'
  gem 'capistrano'
  gem 'rubocop'
end
```
Don't forget to run `bundle`. To integrate `squasher` with your app even further, do the following:
```
$ bundle binstub squasher
$ # and you have a runner inside the `bin` folder
$ bin/squasher
```

Usage
---
**@note** stop all preloading systems if any are present (spring, zeus, etc).

Suppose your application was created a few years ago. The `%app_root%/db/migrate` folder looks like this:
```
2012...._first_migration.rb
2012...._another_migration.rb
# and a lot of other files
2013...._adding_model_foo.rb
# few years later
2016...._removing_model_foo.rb
# and so on
```
Storing these atomic changes over time is painful and useless. It's time to archive this history. Once you install the gem, you can run the `squasher` command. For example, to compress all migrations that were created prior to the year 2017:
```
$ squasher 2017       # rails 3 & 4
$ squasher 2017 -m 5.0 # rails 5+
```
You can give `squasher` a more detailed date, for example:
```
$ squasher 2016/12    # prior to December 2016
$ squasher 2016/12/19 # prior to 19 December 2016
```

### Options

Run `squasher -h` or just `squasher` to see how you can use squasher:

* in an SQL-schema rails app
* in a rails 5+ app
* inside an engine
* in "dry" mode
* in "reuse" mode

Requirements
---
It works on, and was tested with, Ruby 2.0+ and ActiveRecord 3.1+. It also requires a valid development configuration in `config/database.yml`. If an old migration inserted data (created ActiveRecord model records), you will lose this code in the squashed migration, **BUT** `squasher` will ask you whether to keep a tmp database containing all the data that was inserted while migrating. Using this database you could add that data as another migration, or put it into `config/seed.rb` (the expected place for this stuff).

Changelog
---
All changes are located in [the changelog file](CHANGELOG.md) with contribution notes.

Contributing
---
1. Fork it
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create new Pull Request
gatoRs
cran
R
Package ‘gatoRs’                                                July 5, 2023

Type: Package
Title: Geographic and Taxonomic Occurrence R-Based Scrubbing
Version: 1.0.0
Date: 2023-07-02
Imports: ridigbio, dplyr (>= 1.1.0), rgbif, magrittr, CoordinateCleaner, raster, spThin, stringr, leaflet, parsedate, spatstat.geom
Encoding: UTF-8
VignetteBuilder: knitr
LazyData: true
RoxygenNote: 7.2.3
Suggests: knitr, rmarkdown, testthat
License: GPL-3
Description: Streamlines downloading and cleaning biodiversity data from Integrated Digitized Biocollections (iDigBio) and the Global Biodiversity Information Facility (GBIF).
Maintainer: <NAME> <<EMAIL>>
URL: https://nataliepatten.github.io/gatoRs/, https://github.com/nataliepatten/gatoRs
BugReports: https://github.com/nataliepatten/gatoRs/issues
Depends: R (>= 3.5.0)
NeedsCompilation: no
Author: <NAME> [aut, cre] (<https://orcid.org/0000-0001-8090-1324>),
  <NAME> [aut] (<https://orcid.org/0000-0002-3912-6079>),
  <NAME> [ctb] (<https://orcid.org/0000-0001-8638-4137>),
  <NAME> [ctb] (<https://orcid.org/0000-0001-9310-8659>)
Repository: CRAN
Date/Publication: 2023-07-05 13:50:02 UTC

R topics documented: basic_locality_clean, basis_clean, citation_bellow, data_chomp, filter_fix_names, fix_columns, fix_names, full_clean, gators_download, get_gbif, get_idigbio, needed_records, need_to_georeference, one_point_per_pixel, process_flagged, remove_duplicates, remove_skewed, taxa_clean, thin_points

basic_locality_clean          Locality Cleaning - Remove missing and improbable coordinates

Description

The basic_locality_clean() function cleans locality by removing missing or impossible coordinates and correcting precision. This function requires columns named 'latitude' and 'longitude'. These columns should be of type 'numeric'.

Usage

basic_locality_clean(
  df,
  latitude = "latitude",
  longitude = "longitude",
  remove.zero = TRUE,
  precision = TRUE,
  digits = 2,
  remove.skewed = TRUE,
  info.withheld = "informationWithheld"
)

Arguments

df               Data frame of occurrence records returned from gators_download().
latitude         Default = "latitude". The name of the latitude column in the data frame.
longitude        Default = "longitude". The name of the longitude column in the data frame.
remove.zero      Default = TRUE. Indicates that points at (0.00, 0.00) should be removed.
precision        Default = TRUE. Indicates that coordinates should be rounded to match the coordinate uncertainty.
digits           Default = 2. Indicates digits to round coordinates to when precision = TRUE.
remove.skewed    Default = TRUE. Utilizes the remove_skewed() function to remove skewed coordinate values.
info.withheld    Default = "informationWithheld". The name of the information withheld column in the data frame.

Details

This function requires no additional packages.

Value

Returns a data frame with specimens removed that had missing or improper coordinate values.

Examples

cleaned_data <- basic_locality_clean(data)

basis_clean          Basis Cleaning - Removes records with certain record basis

Description

The basis_clean() function removes records based on the basisOfRecord column.

Usage

basis_clean(df, basis.list = NA, basis.of.record = "basisOfRecord")

Arguments

df                 Data frame of occurrence records returned from gators_download().
basis.list         A list of bases to keep. If a list is not supplied, the filter will be interactive and users must respond to the function.
basis.of.record    Default = "basisOfRecord". The name of the basis of record column in the data frame.

Details

This function requires no additional packages.
Value

Returns a data frame with records of the desired record basis.

Examples

cleaned_data <- basis_clean(data, basis.list = c("Preserved Specimen", "Physical specimen"))

citation_bellow          Cite Data - Get GBIF citations

Description

The citation_bellow() function retrieves and returns the citation information for the data provided by GBIF in a data frame.

Usage

citation_bellow(df, id = "ID", aggregator = "aggregator")

Arguments

df            Data frame of occurrence records returned from gators_download().
id            Default = "ID". The name of the id column in the data frame, which contains unique IDs defined from GBIF or iDigBio.
aggregator    Default = "aggregator". The name of the column in the data frame that identifies the aggregator that provided the record.

Details

This function requires the rgbif package.

Value

Returns a list with citation information for the GBIF data downloaded.

Examples

citations <- citation_bellow(data)

data_chomp          Subset Data - Get species, longitude, and latitude columns

Description

The data_chomp() function "chomps" (subsets) a data frame of occurrence records to only contain the following columns: "species", "longitude", and "latitude". After using this function the data will be ready for use in Maxent, for example.

Usage

data_chomp(
  df,
  accepted.name = NA,
  longitude = "longitude",
  latitude = "latitude"
)

Arguments

df               Data frame of occurrence records returned from gators_download().
accepted.name    The accepted species name for the records.
longitude        Default = "longitude". The name of the longitude column in the data frame.
latitude         Default = "latitude". The name of the latitude column in the data frame.

Details

This function requires the package dplyr.

Value

Returns a data frame with a subset of columns ready for downstream applications such as Maxent.

Examples

chomped_data <- data_chomp(data, accepted.name = "Galax urceolata")

filter_fix_names          Used in gators_download() - Filter iDigBio results by scientific name

Description

The filter_fix_names() function filters a data frame for relevant results, based on the scientific name given. Some downloaded results from iDigBio might contain occurrences of other species that have "notes" mentioning the desired species. Hence, this function looks for relevant results that are actually occurrences of the desired species.

Usage

filter_fix_names(
  df,
  synonyms.list,
  filter = "fuzzy",
  scientific.name = "scientificName",
  accepted.name = NA
)

Arguments

df                 Data frame with name column to be fixed.
synonyms.list      A list of synonyms for a species.
filter             Default = "fuzzy". Indicates the type of filter to be used--either "exact" or "fuzzy".
scientific.name    Default = "scientificName". The name of the scientific name column in the data frame.
accepted.name      The accepted scientific name for the species. If provided, an additional column will be added to the data frame with the accepted name for further manual comparison.

Details

This function requires no additional packages.

Value

Returns a data frame with filtered results.

Examples

cleaned_data <- filter_fix_names(data, c("Galax urceolata", "Galax aphylla"), filter = "exact")
cleaned_data <- filter_fix_names(data, c("Galax urceolata", "Galax aphylla"), accepted.name = "Galax urceolata")

fix_columns          Used in gators_download() - Fill out taxonomic name columns

Description

The fix_columns() function fills out the taxonomic name columns based on available information in the data set.
For example, if a row has a name provided for the scientificName column, this information will be used to generate the respective genus, specificEpithet, and infraspecificEpithet columns for that row.

Usage

fix_columns(
  df,
  scientific.name = "scientificName",
  genus = "genus",
  species = "specificEpithet",
  infraspecific.epithet = "infraspecificEpithet"
)

Arguments

df                       Data frame of occurrence records.
scientific.name          Default = "scientificName". The name of the scientific name column in the data frame.
genus                    Default = "genus". The name of the genus column in the data frame.
species                  Default = "specificEpithet". The name of the specific epithet column in the data frame.
infraspecific.epithet    Default = "infraspecificEpithet". The name of the infraspecific epithet column in the data frame.

Details

This function requires package stringr.

Value

Returns the original data frame with the specified columns.

Examples

fixed_data <- fix_columns(data)

fix_names          Used in gators_download() - Fix taxonomic name capitalization

Description

The fix_names() function fixes the capitalization of species names in the data frame provided to align with accepted capitalization standards.

Usage

fix_names(df, scientific.name = "scientificName")

Arguments

df                 Data frame with name column to be fixed.
scientific.name    Default = "scientificName". The name of the scientific name column in the data frame.

Details

This function uses the fixAfterPeriod() function. This function requires package stringr.

Value

Returns df with fixed capitalization in the name column.

Examples

fixed_data <- fix_names(data)

full_clean          Full Cleaning - Wrapper function to speed clean

Description

The full_clean() function performs automated cleaning steps, including options for: removing duplicate data points, checking locality precision, removing points with skewed coordinates, removing plain zero records, removing records based on basis of record, and spatially thinning collection points. This function also provides the option to interactively inspect and remove types of basis of record.

Usage

full_clean(
  df,
  synonyms.list,
  event.date = "eventDate",
  year = "year",
  month = "month",
  day = "day",
  occ.id = "occurrenceID",
  remove.NA.occ.id = FALSE,
  remove.NA.date = FALSE,
  aggregator = "aggregator",
  id = "ID",
  taxa.filter = "fuzzy",
  scientific.name = "scientificName",
  accepted.name = NA,
  remove.zero = TRUE,
  precision = TRUE,
  digits = 2,
  remove.skewed = TRUE,
  basis.list = NA,
  basis.of.record = "basisOfRecord",
  latitude = "latitude",
  longitude = "longitude",
  remove.flagged = TRUE,
  thin.points = TRUE,
  distance = 5,
  reps = 100,
  one.point.per.pixel = TRUE,
  raster = NA,
  resolution = 0.5
)

Arguments

df                  Data frame of occurrence records.
synonyms.list       A list of synonyms for a species.
event.date          Default = "eventDate". The name of the event date column in the data frame.
year                Default = "year". The name of the event date year column in the data frame.
month               Default = "month". The name of the event date month column in the data frame.
day                 Default = "day". The name of the event date day column in the data frame.
occ.id              Default = "occurrenceID". The name of the occurrence ID column in the data frame.
remove.NA.occ.id    Default = FALSE. This will remove records with missing occurrence IDs when set to TRUE.
remove.NA.date      Default = FALSE. This will remove records with missing event dates when set to TRUE.
aggregator          Default = "aggregator". The name of the column in the data frame that identifies the aggregator that provided the record.
id                  Default = "ID".
                    The name of the id column in the data frame, which contains unique IDs defined from GBIF or iDigBio.
taxa.filter         The type of filter to be used--either "exact", "fuzzy", or "interactive".
scientific.name     Default = "scientificName". The name of the scientific name column in the data frame.
accepted.name       The accepted scientific name for the species. If provided, an additional column will be added to the data frame with the accepted name for further manual comparison.
remove.zero         Default = TRUE. Indicates that points at (0.00, 0.00) should be removed.
precision           Default = TRUE. Indicates that coordinates should be rounded to match the coordinate uncertainty.
digits              Default = 2. Indicates digits to round coordinates to when precision = TRUE.
remove.skewed       Default = TRUE. Utilizes the remove_skewed() function to remove skewed coordinate values.
basis.list          A list of bases to keep. If a list is not supplied, this filter will not occur.
basis.of.record     Default = "basisOfRecord". The name of the basis of record column in the data frame.
latitude            Default = "latitude". The name of the latitude column in the data frame.
longitude           Default = "longitude". The name of the longitude column in the data frame.
remove.flagged      Default = TRUE. An option to remove points with problematic locality information.
thin.points         Default = TRUE. An option to spatially thin occurrence records.
distance            Default = 5. Distance in km to separate records.
reps                Default = 100. Number of times to perform the thinning algorithm.
one.point.per.pixel Default = TRUE. An option to only retain one point per pixel.
raster              Raster object which will be used for ecological niche comparisons.
resolution          Default = 0.5. Options - 0.5, 2.5, 5, and 10 (in minutes of a degree). 0.5 minutes of a degree is equal to 30 arc sec.

Details

This function requires packages dplyr, magrittr, and raster.

Value

df is a data frame with the cleaned data.

Examples

cleaned_data <- full_clean(data, synonyms.list = c("Galax urceolata", "Galax aphylla"),
  digits = 3, basis.list = c("Preserved Specimen", "Physical specimen"),
  accepted.name = "Galax urceolata", remove.flagged = FALSE)

gators_download          Download - Download specimen data from both iDigBio and GBIF

Description

The gators_download() function downloads data from GBIF and iDigBio for your desired species.

Usage

gators_download(
  synonyms.list,
  write.file = FALSE,
  filename = NA,
  gbif.match = "fuzzy",
  gbif.prov = FALSE,
  idigbio.filter = TRUE,
  limit = 1e+05
)

Arguments

synonyms.list    A list of synonyms for your desired species. For example, synonyms.list = c("Asclepias curtissii", "Asclepias aceratoides", "Asclepias arenicola", "Oxypteryx arenicola", "Oxypteryx curtissii"). This parameter is required.
write.file       A parameter to choose whether to produce a .csv file containing search results. This parameter is not required and is assigned FALSE by default.
filename         The path and file name for the retrieved data. Note that this parameter should include the ".csv" extension as well. For example, filename = "base_folder/other_folder/my_file.csv". The file path can be entered either as relative to the current working directory (example: "../my_file.csv") or as a full path. This parameter is required if write.file = TRUE.
gbif.match       A parameter to select either search by fuzzy matching of scientific name or search by species code. For example, gbif.match = "fuzzy" will search by fuzzy match and gbif.match = "code" will search by code. This parameter is not required and is assigned "fuzzy" by default.
gbif.prov        A parameter to obtain the provider/verbatim columns from GBIF. This parameter is optional and is assigned FALSE by default.
idigbio.filter   A parameter to remove less relevant search results from iDigBio. Based on the search input, results may include data points for a different species that mention the desired species in the locality information, for example. Choosing idigbio.filter = TRUE will return the data frame with rows in which the name column fuzzy matches a name on the synonym list. This parameter is not required and is assigned TRUE by default.
limit            Default = 100,000 (maximum). Set limit to the number of records requested for each element in synonyms.list.

Details

This function uses the get_idigbio(), get_gbif(), fix_columns(), fix_names(), and filter_fix_names() functions. This function requires packages magrittr, rgbif, dplyr, ridigbio, and stringr.

Value

Returns a data frame and writes a csv file as specified in the input. This csv file will contain search results for the desired species from the GBIF and iDigBio databases. The columns are as follows:

• scientificName
• genus
• specificEpithet
• infraspecificEpithet
• ID (contains unique IDs defined from GBIF or iDigBio)
• occurrenceID
• basisOfRecord
• eventDate
• year
• month
• day
• institutionCode
• recordedBy
• informationWithheld
• country
• county
• stateProvince
• locality
• latitude
• longitude
• coordinateUncertaintyInMeters
• habitat
• aggregator (either GBIF or iDigBio)

Examples

df <- gators_download(synonyms.list = c("Galax urceolata", "Galax aphylla"), limit = 10)
df <- gators_download(synonyms.list = "Galax urceolata", gbif.match = "code", idigbio.filter = FALSE, limit = 10)

get_gbif          Used in gators_download() - Download data from the Global Biodiversity Information Facility

Description

The get_gbif() function queries the Global Biodiversity Information Facility (GBIF) for your desired species. Limited to 100,000 record downloads.

Usage

get_gbif(synonyms.list, gbif.match = "fuzzy", gbif.prov = FALSE, limit = 1e+05)

Arguments

synonyms.list    A list of affiliated names for your query.
gbif.match       Default = "fuzzy". Either "fuzzy" for fuzzy matching of the name or "code" to search by species code.
gbif.prov        Default = FALSE. A parameter to obtain the provider/verbatim columns from GBIF.
limit            Default = 100,000 (maximum). Set limit to the number of records requested for each element in synonyms.list.

Details

This function uses the correct_class() function. This function requires the packages rgbif, magrittr, and dplyr.

Value

Returns a data frame with desired columns from GBIF.

Examples

df <- get_gbif(c("Galax urceolata", "Galax aphylla"), limit = 5)
df <- get_gbif(c("Galax urceolata", "Galax aphylla"), gbif.match = "code", limit = 5)
df <- get_gbif(c("Galax urceolata", "Galax aphylla"), gbif.prov = TRUE, limit = 5)

get_idigbio          Used in gators_download() - Download data from Integrated Digitized Biocollections

Description

The get_idigbio() function queries iDigBio for your desired species. Limited to 100,000 record downloads.

Usage

get_idigbio(synonyms.list, limit = 1e+05)

Arguments

synonyms.list    A list of affiliated names for your query.
limit            Default = 100,000 (maximum). Set limit to the number of records requested for each element in synonyms.list.

Details

This function uses the correct_class() function. This function requires the packages ridigbio, magrittr, and dplyr.

Value

A data frame with desired columns from iDigBio.
Examples

df <- get_idigbio(c("Galax urceolata", "Galax aphylla"), limit = 100)

needed_records          Identify Missing Information - Find records with redacted or missing data

Description

The needed_records() function identifies records with flags. This indicates that information is withheld from these records due to endangered species status, for example. Accessing this information may require a permit. Or, these records can be removed from the data set.

Usage

needed_records(df, info.withheld = "informationWithheld")

Arguments

df               A data frame downloaded with gators_download().
info.withheld    Default = "informationWithheld". The name of the information withheld column in the data frame.

Details

This function requires no additional packages.

Value

A data frame with only records for which locality was flagged.

Examples

need_info <- needed_records(data)

need_to_georeference          Identify Missing Information - Find records which lack coordinate information

Description

The need_to_georeference() function allows you to find records that are missing coordinates but contain locality information. These records can then be manually georeferenced.

Usage

need_to_georeference(
  df,
  longitude = "longitude",
  latitude = "latitude",
  locality = "locality"
)

Arguments

df           A data frame downloaded with gators_download().
longitude    Default = "longitude". The name of the longitude column in the data frame.
latitude     Default = "latitude". The name of the latitude column in the data frame.
locality     Default = "locality". The name of the locality column in the data frame.

Details

This function requires no additional packages.

Value

Returns a data frame of the points that need to be georeferenced. For more information about this data frame, see gators_download().

Examples

need_coords <- need_to_georeference(data)

one_point_per_pixel          Spatial Correction - One point per pixel

Description

The one_point_per_pixel() function retains only one point per raster pixel. This function is useful for creating presence-absence models.

Usage

one_point_per_pixel(
  df,
  raster = NA,
  resolution = 0.5,
  precision = TRUE,
  digits = 2,
  longitude = "longitude",
  latitude = "latitude"
)

Arguments

df           Data frame of occurrence records.
raster       Raster object which will be used for ecological niche comparisons.
resolution   Default = 0.5. Options - 0.5, 2.5, 5, and 10 (in minutes of a degree). 0.5 minutes of a degree is equal to 30 arc sec.
precision    Default = TRUE. Indicates that coordinates should be rounded to match the coordinate uncertainty.
digits       Default = 2. Indicates digits to round coordinates to when precision = TRUE.
longitude    Default = "longitude". The name of the longitude column in the data frame.
latitude     Default = "latitude". The name of the latitude column in the data frame.

Details

This function requires packages raster and spatstat.geom.

Value

df is a data frame with only one point per pixel.

Examples

ready_data <- one_point_per_pixel(data)

process_flagged          Locality Cleaning - Find possibly problematic occurrence records

Description

The process_flagged() function allows you to find and map possibly problematic points and manually inspect and remove these points, if desired. When running the function interactively you can hover over a point to see the record's scientific name, and click on a point to see the record's coordinates.

Usage

process_flagged(
  df,
  interactive = TRUE,
  latitude = "latitude",
  longitude = "longitude",
  scientific.name = "scientificName"
)

Arguments

df                 Data frame of occurrence records returned from gators_download().
interactive        Default = TRUE. The interactive option allows for a visual display of possibly problematic points and the ability to manually remove these points. Setting interactive = FALSE will automatically remove these points from the data frame.
latitude           Default = "latitude". The name of the latitude column in the data frame.
longitude          Default = "longitude". The name of the longitude column in the data frame.
scientific.name    Default = "scientificName". The name of the scientific name column in the data frame.

Details

This function requires packages CoordinateCleaner, leaflet, and magrittr. This function requires interactive user input.

Value

Returns a cleaned data frame.

Examples

cleaned_data <- process_flagged(data, interactive = FALSE)

remove_duplicates          Remove Duplicates - Remove records with identical event dates and coordinates

Description

The remove_duplicates() function removes records with identical event dates and occurrence IDs. Prior to utilizing this function, longitude and latitude columns should be rounded to match the coordinate uncertainty using the basic_locality_clean() function.

Usage

remove_duplicates(
  df,
  event.date = "eventDate",
  aggregator = "aggregator",
  id = "ID",
  occ.id = "occurrenceID",
  year = "year",
  month = "month",
  day = "day",
  latitude = "latitude",
  longitude = "longitude",
  remove.NA.occ.id = FALSE,
  remove.NA.date = FALSE,
  remove.unparseable = FALSE
)

Arguments

df                    Data frame of occurrence records returned from gators_download().
event.date            Default = "eventDate". The name of the event date column in the data frame.
aggregator            Default = "aggregator". The name of the column in the data frame that identifies the aggregator that provided the record.
id                    Default = "ID". The name of the id column in the data frame, which contains unique IDs defined from GBIF or iDigBio.
occ.id                Default = "occurrenceID". The name of the occurrence ID column in the data frame.
year                  Default = "year". The name of the event date year column in the data frame.
month                 Default = "month". The name of the event date month column in the data frame.
day                   Default = "day". The name of the event date day column in the data frame.
latitude              Default = "latitude". The name of the latitude column in the data frame.
longitude             Default = "longitude". The name of the longitude column in the data frame.
remove.NA.occ.id      Default = FALSE. This will remove records with missing occurrence IDs when set to TRUE.
remove.NA.date        Default = FALSE. This will remove records with missing event dates when set to TRUE.
remove.unparseable    Default = FALSE. If the event date cannot be parsed into individual year, month, and day categories, the user can specify these manually. Otherwise, if set to TRUE, these rows will simply be removed.

Details

This function requires the parsedate and dplyr packages. This function will ignore missing occurrence ID and year, month, day columns if not provided in the data set.

Value

Returns a data frame with duplicates removed.

Examples

cleaned_data <- remove_duplicates(data)
cleaned_data <- remove_duplicates(data, remove.NA.occ.id = TRUE, remove.NA.date = TRUE)
cleaned_data <- remove_duplicates(data, remove.unparseable = TRUE)

remove_skewed          Used in basic_locality_clean() - Remove skewed locality

Description

The remove_skewed() function identifies and removes records where the locality has been skewed.

Usage

remove_skewed(df, info.withheld = "informationWithheld")

Arguments

df               A data frame downloaded with gators_download().
info.withheld    Default = "informationWithheld".
                 The name of the information withheld column in the data frame.

Details

This function requires no additional packages.

Value

A data frame with the records removed for which locality was skewed.

Examples

cleaned_data <- remove_skewed(data)

taxa_clean          Taxonomic Cleaning - Filter and resolve taxon names

Description

The taxa_clean() function filters a data frame for relevant results, based on the scientific name given. Filtering can be done with scripts by exact or fuzzy match. Or, for a more controlled approach, this function provides interactive filtering by providing the user with prompts. The interactive method allows the user to manually determine whether they wish to keep results containing certain scientific names.

Usage

taxa_clean(
  df,
  synonyms.list,
  taxa.filter = "fuzzy",
  scientific.name = "scientificName",
  accepted.name = NA
)

Arguments

df                 Data frame of occurrence records returned from gators_download().
synonyms.list      A list of synonyms for a species.
taxa.filter        The type of filter to be used--either "exact", "fuzzy", or "interactive".
scientific.name    Default = "scientificName". The name of the scientific name column in the data frame.
accepted.name      The accepted scientific name for the species. If provided, an additional column will be added to the data frame with the accepted name for further manual comparison.

Details

This function requires no additional packages.

Value

Returns a data frame with filtered results and a new column with the accepted name, labeled "accepted_name".

Examples

cleaned_data <- taxa_clean(data, c("Galax urceolata", "Galax aphylla"), taxa.filter = "exact")
cleaned_data <- taxa_clean(data, c("Galax urceolata", "Galax aphylla"), accepted.name = "Galax urceolata")

thin_points          Spatial Correction - Spatially thin records

Description

The thin_points() function returns records based on coordinate thinning.

Usage

thin_points(
  df,
  accepted.name = NA,
  distance = 5,
  reps = 100,
  latitude = "latitude",
  longitude = "longitude"
)

Arguments

df               Data frame of occurrence records.
accepted.name    Accepted name of your species. This argument is not required if the data frame already contains an accepted_name column.
distance         Default = 5. Distance in km to separate records.
reps             Default = 100. Number of times to perform the thinning algorithm.
latitude         Default = "latitude". The name of the latitude column in the data frame.
longitude        Default = "longitude". The name of the longitude column in the data frame.

Details

This function requires package spThin.

Value

df is a data frame with the cleaned data.

Examples

thinned_data <- thin_points(data, accepted.name = "Galax urceolata")
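Taken together, the functions above form a download-then-clean pipeline. Below is a minimal sketch assembled from the documented signatures and the package's own examples (it reuses their species names and mirrors their settings; it is not an official vignette):

```r
# A minimal end-to-end sketch: download, speed-clean, then subset for Maxent.
library(gatoRs)

# 1. Download records from GBIF and iDigBio (small limit for illustration).
raw <- gators_download(
  synonyms.list = c("Galax urceolata", "Galax aphylla"),
  limit = 10
)

# 2. One-shot cleaning: taxonomy, locality, duplicates, basis, thinning.
#    remove.flagged = FALSE avoids the interactive flag-inspection step,
#    as in the package's own full_clean() example.
cleaned <- full_clean(
  raw,
  synonyms.list  = c("Galax urceolata", "Galax aphylla"),
  accepted.name  = "Galax urceolata",
  basis.list     = c("Preserved Specimen", "Physical specimen"),
  remove.flagged = FALSE
)

# 3. Keep only species/longitude/latitude for downstream applications.
ready <- data_chomp(cleaned, accepted.name = "Galax urceolata")
```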
frozen_record
ruby
Ruby
FrozenRecord === [![Build Status](https://secure.travis-ci.org/byroot/frozen_record.svg)](http://travis-ci.org/byroot/frozen_record) [![Gem Version](https://badge.fury.io/rb/frozen_record.svg)](http://badge.fury.io/rb/frozen_record) An Active Record-like interface for **read-only** access to static data files of reasonable size. Installation --- Add this line to your application's Gemfile: ``` gem 'frozen_record' ``` And then execute: ``` $ bundle ``` Or install it yourself as: ``` $ gem install frozen_record ``` Models definition --- Just like with Active Record, your models need to inherit from `FrozenRecord::Base`: ``` class Country < FrozenRecord::Base end ``` But you also have to specify the directory in which your data files are located. You can either do it globally: ``` FrozenRecord::Base.base_path = '/path/to/some/directory' ``` Or per model: ``` class Country < FrozenRecord::Base self.base_path = '/path/to/some/directory' end ``` You can also specify a custom backend. Backends are classes that know how to load records from a static file. By default FrozenRecord expects a YAML file, but this option can be changed per model: ``` class Country < FrozenRecord::Base self.backend = FrozenRecord::Backends::Json end ``` ### Custom backends A custom backend must implement the methods `filename` and `load` as follows (see the CSV sketch at the end of this README): ``` module MyCustomBackend extend self def filename(model_name) # Returns the file name as a String end def load(file_path) # Reads the file and returns records as an Array of Hash objects end end ``` Query interface --- FrozenRecord aims to replicate only the modern Active Record querying interface, and only the non-"string-typed" methods, e.g.: ``` # Supported query interfaces Country. where(region: 'Europe'). where.not(language: 'English'). order(id: :desc). limit(10). offset(2). pluck(:name) # Non-supported query interfaces Country. where('region = "Europe" AND language != "English"'). order('id DESC') ``` ### Scopes Basic `scope :symbol, lambda` syntax is now supported in addition to the class method syntax. ``` class Country scope :european, -> { where(continent: 'Europe') } def self.republics where(king: nil) end def self.part_of_nato where(nato: true) end end Country.european.republics.part_of_nato.order(id: :desc) ``` ### Supported query methods * where * where.not * order * limit * offset ### Supported finder methods * find * first * last * to_a * exists? ### Supported calculation methods * count * pluck * ids * minimum * maximum * sum * average Indexing --- Querying is implemented as a simple linear search (`O(n)`). However, if you are using Frozen Record with larger datasets, or are querying a collection repeatedly, you can define indices for faster access.
``` class Country < FrozenRecord::Base add_index :name, unique: true add_index :continent end ``` Composite index keys are not supported. The primary key isn't indexed by default. Rich Types --- The `attribute` method can be used to provide a custom class to convert an attribute to a richer type. The class must implement a `load` class method that takes the raw attribute value and returns the deserialized value (similar to [ActiveRecord serialization](https://api.rubyonrails.org/v7.0.4/classes/ActiveRecord/AttributeMethods/Serialization/ClassMethods.html#method-i-serialize)). ``` class ContinentString < String class << self alias_method :load, :new end end Size = Struct.new(:length, :width, :depth) do def self.load(value) # value is lxwxd, e.g. "23x12x5" new(*value.split('x')) end end class Country < FrozenRecord::Base attribute :continent, ContinentString attribute :size, Size end ``` Limitations --- Frozen Record is not meant to operate on large unindexed datasets. To ensure that this doesn't happen by accident, you can set `FrozenRecord::Base.max_records_scan = 500` (or whatever limit makes sense to you) in your development and test environments. This setting will cause Frozen Record to raise an error if it has to scan more than `max_records_scan` records. This property can also be set on a per-model basis. Configuration --- ### Reloading By default the YAML files are parsed once and then cached in memory. But in development you might want changes to be reflected without having to restart your application. For such cases you can set `auto_reloading` to `true`, either globally or on a per-model basis: ``` FrozenRecord::Base.auto_reloading = true # Activate reloading for all models Country.auto_reloading = true # Activate reloading for `Country` only ``` Testing --- Testing your FrozenRecord-backed models with test fixtures is made easier with: ``` require 'frozen_record/test_helper' # During test/spec setup test_fixtures_base_path = 'alternate/fixture/path' FrozenRecord::TestHelper.load_fixture(Country, test_fixtures_base_path) # During test/spec teardown FrozenRecord::TestHelper.unload_fixtures ``` Here's a Rails-specific example: ``` require "test_helper" require 'frozen_record/test_helper' class CountryTest < ActiveSupport::TestCase setup do test_fixtures_base_path = Rails.root.join('test/support/fixtures') FrozenRecord::TestHelper.load_fixture(Country, test_fixtures_base_path) end teardown do FrozenRecord::TestHelper.unload_fixtures end test "countries have a valid name" do # ...
end end ``` Contributors --- FrozenRecord is a from-scratch reimplementation of a [Shopify](https://github.com/Shopify) project from 2007 named `YamlRecord`. So thanks to: * <NAME> - [@jduff](https://github.com/jduff) * <NAME> - [@dennisoconnor](https://github.com/dennisoconnor) * <NAME> - [@csaunders](https://github.com/csaunders) * <NAME>berg - [@titanous](https://github.com/titanous) * <NAME> - [@jstorimer](https://github.com/jstorimer) * <NAME> - [@codyfauser](https://github.com/codyfauser) * <NAME> - [@tobi](https://github.com/tobi)
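As promised in the Custom backends section, here is a small CSV backend sketch. It is illustrative only: `CsvBackend` is not part of the gem, and the file-naming rule is an assumption modeled on the default YAML backend.

```ruby
# A hypothetical CSV backend implementing the interface described above:
# `filename` maps a model name to a file, `load` returns an Array of Hashes.
require 'csv'

module CsvBackend
  extend self

  def filename(model_name)
    # Assumption: pluralized snake_case, mirroring the YAML backend's
    # convention (requires ActiveSupport's inflections).
    "#{model_name.underscore.pluralize}.csv"
  end

  def load(file_path)
    # Each CSV row becomes one record hash keyed by the header row.
    CSV.read(file_path, headers: true).map(&:to_h)
  end
end

class Country < FrozenRecord::Base
  self.backend = CsvBackend
end
```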
incanter_incanter-charts
hex
Clojure
incanter.charts === This is the core charting library for Incanter. It provides basic scatter plots, histograms, box plots, xy plots, bar charts, and line charts, as well as specialized charts like trace plots and Bland-Altman plots. This library is built on the JFreeChart library (<http://www.jfree.org/jfreechart/>). --- #### add-box-plot (clj/s macro) ```
(add-box-plot chart x & options)
```

Adds an additional box to an existing box-plot, returns the modified chart object.

Options:
  :series-label (default x expression)

Examples:

  (use '(incanter core charts stats datasets))
  (doto (box-plot (sample-normal 1000) :legend true)
        view
        (add-box-plot (sample-normal 1000 :sd 2))
        (add-box-plot (sample-gamma 1000)))

  (with-data (get-dataset :iris)
    (doto (box-plot :Sepal.Length :legend true)
      (add-box-plot :Petal.Length)
      (add-box-plot :Sepal.Width)
      (add-box-plot :Petal.Width)
      view))

References:
  http://www.jfree.org/jfreechart/api/javadoc/
  http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L435)

---

#### add-box-plot* (clj)

```
(add-box-plot* chart x & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L420)

---

#### add-categories (clj/s macro)

```
(add-categories chart categories values & options)
```

Adds additional categories to an existing bar-chart or line-chart, returns the modified chart object.
Options:
  :group-by
  :series-label

Examples:

  (use '(incanter core charts stats datasets))

  (def seasons (mapcat identity (repeat 3 ["winter" "spring" "summer" "fall"])))
  (def years (mapcat identity (repeat 4 [2007 2008 2009])))
  (def x (sample-uniform 12 :integers true :max 100))
  (def plot (bar-chart years x :group-by seasons :legend true))
  (view plot)
  (add-categories plot years [10 20 40] :series-label "winter-break")
  (add-categories plot (plus 3 years) (sample-uniform 12 :integers true :max 100) :group-by seasons)

  (def plot2 (line-chart years x :group-by seasons :legend true))
  (view plot2)
  (add-categories plot2 (plus 3 years) (sample-uniform 12 :integers true :max 100) :group-by seasons)

  (with-data (get-dataset :iris)
    (doto (line-chart :Species :Sepal.Length
                      :data ($rollup mean :Sepal.Length :Species)
                      :legend true)
      (add-categories :Species :Sepal.Width :data ($rollup mean :Sepal.Width :Species))
      (add-categories :Species :Petal.Length :data ($rollup mean :Petal.Length :Species))
      (add-categories :Species :Petal.Width :data ($rollup mean :Petal.Width :Species))
      view))

References:
  http://www.jfree.org/jfreechart/api/javadoc/
  http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L498)

---

#### add-categories* (clj)

```
(add-categories* chart categories values & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L471)

---

#### add-function (clj/s macro)

```
(add-function chart function min-range max-range & options)
```

Adds an xy-plot of the given function to the given chart, returning a modified version of the chart.
Options:
  :series-label (default x expression)
  :step-size (default (/ (- max-range min-range) 500))

See also:
  function-plot, view, save, add-function, add-points, add-lines

Examples:

  (use '(incanter core stats charts))

  ;; plot the sine and cosine functions
  (doto (function-plot sin (- Math/PI) Math/PI)
        (add-function cos (- Math/PI) Math/PI)
        view)

  ;; plot two normal pdf functions
  (doto (function-plot pdf-normal -3 3 :legend true)
        (add-function (fn [x] (pdf-normal x :mean 0.5 :sd 0.5)) -3 3)
        view)

  ;; plot a user defined function and its derivative
  (use '(incanter core charts optimize))

  ;; define the function, x^3 + 2x^2 + 2x + 3
  (defn cubic [x] (+ (* x x x) (* 2 x x) (* 2 x) 3))

  ;; use the derivative function to get a function
  ;; that approximates its derivative
  (def deriv-cubic (derivative cubic))

  ;; plot the cubic function and its derivative
  (doto (function-plot cubic -10 10)
        (add-function deriv-cubic -10 10)
        view)

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L703)

---

#### add-function* (clj)

```
(add-function* chart function min-range max-range & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L692)

---

#### add-histogram (clj/s macro)

```
(add-histogram chart x & options)
```

Adds a histogram to an existing histogram plot, returns the modified chart object.
Options:
  :nbins (default 10) number of bins for histogram
  :series-label (default x expression)

Examples:

  (use '(incanter core charts stats datasets))
  (doto (histogram (sample-normal 1000) :legend true)
        view
        (add-histogram (sample-normal 1000 :sd 0.5)))

  (with-data (get-dataset :iris)
    (doto (histogram :Sepal.Length :legend true)
      (add-histogram :Petal.Length)
      view))

References:
  http://www.jfree.org/jfreechart/api/javadoc/
  http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L384)

---

#### add-histogram* (clj)

```
(add-histogram* chart x & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L368)

---

#### add-image (clj)

```
(add-image chart x y img & options)
```

Adds an image to the chart at the given coordinates.

Arguments:
  chart -- the chart to add the image to
  x, y -- the coordinates to place the image
  img -- a java.awt.Image object

Examples:

  (use '(incanter core charts latex))

  (doto (function-plot sin -10 10)
    (add-image 0 0 (latex "\\frac{(a+b)^2} {(a-b)^2}"))
    view)

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3260)

---

#### add-lines (clj/s macro)

```
(add-lines chart x y & options)
```

Plots lines on the given scatter or line plot (xy-plot) of the (x,y) points. Equivalent to R's lines function, returns the modified chart object.

Options:
  :series-label (default x expression)
  :points (default false)
  :auto-sort (default true) sort data by x

Examples:

  (use '(incanter core stats io datasets charts))
  (def cars (to-matrix (get-dataset :cars)))
  (def y (sel cars :cols 0))
  (def x (sel cars :cols 1))
  (def plot1 (scatter-plot x y :legend true))
  (view plot1)

  ;; add regression line to scatter plot
  (def lm1 (linear-model y x))
  (add-lines plot1 x (:fitted lm1))

  ;; model the data without an intercept
  (def lm2 (linear-model y x :intercept false))
  (add-lines plot1 x (:fitted lm2))

  ;; Clojure's doto macro can be used to build a chart
  (doto (histogram (sample-normal 1000) :density true)
        (add-lines (range -3 3 0.05) (pdf-normal (range -3 3 0.05)))
        view)

  (with-data (get-dataset :iris)
    (doto (xy-plot :Sepal.Width :Sepal.Length :legend true)
      (add-lines :Petal.Width :Petal.Length)
      view))

References:
  http://www.jfree.org/jfreechart/api/javadoc/
  http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html
[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L635)

---

#### add-lines* (clj multimethod)

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L552)

---

#### add-parametric (clj/s macro)

```
(add-parametric chart function min-range max-range & options)
```

Adds an xy-plot of the given parametric function to the given chart, returning a modified version of the chart. The function takes one argument t and returns a point [x y].

Options:
  :series-label (default function expression)
  :step-size (default (/ (- max-range min-range) 500))

See also:
  parametric-plot, view, save, add-function, add-points, add-lines

Examples:

  (use '(incanter core charts))

  ;;; Plot square with circle inside.
  (defn circle [t] [(cos t) (sin t)])
  (doto (xy-plot [1 -1 -1 1 1] [1 1 -1 -1 1] :auto-sort false)
    (add-parametric circle 0 (* 2 Math/PI))
    (view))

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L769)

---

#### add-parametric* (clj)

```
(add-parametric* chart function min-range max-range & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L757)

---

#### add-pointer (clj)

```
(add-pointer chart x y & options)
```

Adds an arrow annotation to the given chart.
Arguments:
  chart -- the chart to annotate
  x, y -- the coordinate to add the annotation

Options:
  :text -- (default "") text to include at the end of the arrow
  :angle -- (default :nw) either a number indicating the angle of the arrow
            or a keyword indicating a direction (:north :nw :west :sw :south :se :east :ne)

Examples:

  (use '(incanter core charts))

  (def x (range (* -2 Math/PI) (* 2 Math/PI) 0.01))
  (def plot (xy-plot x (sin x)))
  (view plot)

  ;; annotate the plot
  (doto plot
    (add-pointer (- Math/PI) (sin (- Math/PI)) :text "(-pi, (sin -pi))")
    (add-pointer Math/PI (sin Math/PI) :text "(pi, (sin pi))" :angle :ne)
    (add-pointer (* 1/2 Math/PI) (sin (* 1/2 Math/PI)) :text "(pi/2, (sin pi/2))" :angle :south))

  ;; try the different angle options
  (add-pointer plot 0 0 :text "north" :angle :north)
  (add-pointer plot 0 0 :text "nw" :angle :nw)
  (add-pointer plot 0 0 :text "ne" :angle :ne)
  (add-pointer plot 0 0 :text "west" :angle :west)
  (add-pointer plot 0 0 :text "east" :angle :east)
  (add-pointer plot 0 0 :text "south" :angle :south)
  (add-pointer plot 0 0 :text "sw" :angle :sw)
  (add-pointer plot 0 0 :text "se" :angle :se)

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3066)

---

#### add-points (clj/s macro)

```
(add-points chart x y & options)
```

Plots points on the given scatter-plot or xy-plot of the (x,y) points. Equivalent to R's points function, returns the modified chart object.
Options:
  :series-label (default x expression)

Examples:

  (use '(incanter core stats io datasets charts))
  (def cars (to-matrix (get-dataset :cars)))
  (def y (sel cars :cols 0))
  (def x (sel cars :cols 1))

  ;; add regression line to scatter plot
  (def lm1 (linear-model y x))
  ;; model the data without an intercept
  (def lm2 (linear-model y x :intercept false))

  (doto (xy-plot x (:fitted lm1) :legend true)
        view
        (add-points x y)
        (add-lines x (:fitted lm2)))

  (with-data (get-dataset :iris)
    (doto (scatter-plot :Sepal.Length :Sepal.Width :data ($where {:Species "setosa"}))
          (add-points :Sepal.Length :Sepal.Width :data ($where {:Species "versicolor"}))
          (add-points :Sepal.Length :Sepal.Width :data ($where {:Species "virginica"}))
          view))

  ;; of course this chart can be achieved in a single line:
  (view (scatter-plot :Sepal.Length :Sepal.Width :group-by :Species :data (get-dataset :iris)))

References:
  http://www.jfree.org/jfreechart/api/javadoc/
  http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L827)

---

#### add-points* (clj)

```
(add-points* chart x y & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L802)

---

#### add-polygon (clj)

```
(add-polygon chart coords & options)
```

Adds a polygon outline defined by the given coordinates. The last coordinate will close with the first. If only two points are given, it will plot a line.

Arguments:
  chart -- the chart to add the polygon to
  coords -- a list of coords (an n-by-2 matrix can also be used)

Examples:

  (use '(incanter core stats charts))
  (def x (range -3 3 0.01))
  (def plot (xy-plot x (pdf-normal x)))
  (view plot)

  ;; add polygon to the chart
  (add-polygon plot [[-1.96 0] [1.96 0] [1.96 0.4] [-1.96 0.4]])
  ;; the coordinates can also be passed in a matrix
  ;; (def points (matrix [[-1.96 0] [1.96 0] [1.96 0.4] [-1.96 0.4]]))
  ;; (add-polygon plot points)
  ;; add a text annotation
  (add-text plot -1.25 0.35 "95% Conf Interval")

  ;; PCA chart example
  (use '(incanter core stats charts datasets))
  ;; load the iris dataset
  (def iris (to-matrix (get-dataset :iris)))
  ;; run the pca
  (def pca (principal-components (sel iris :cols (range 4))))
  ;; extract the first two principal components
  (def pc1 (sel (:rotation pca) :cols 0))
  (def pc2 (sel (:rotation pca) :cols 1))

  ;; project the first four dimensions of the iris data onto the first
  ;; two principal components
  (def x1 (mmult (sel iris :cols (range 4)) pc1))
  (def x2 (mmult (sel iris :cols (range 4)) pc2))

  ;; now plot the transformed data, coloring each species a different color
  (def plot (scatter-plot x1 x2 :group-by (sel iris :cols 4)
                          :x-label "PC1" :y-label "PC2" :title "Iris PCA"))
  (view plot)

  ;; put box around the first group
  (add-polygon plot [[-3.2 -6.3] [-2 -6.3] [-2 -3.78] [-3.2 -3.78]])
  ;; add some text annotations
  (add-text plot -2.5 -6.5 "Setosa")
  (add-text plot -5 -5.5 "Versicolor")
  (add-text plot -8 -5.5 "Virginica")

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3198)

---

#### add-subtitle (clj multimethod)

Adds a JFreeChart title object to a chart as a subtitle.
Examples:

```
(use '(incanter core charts latex))

(doto (function-plot sin -10 10)
  (add-subtitle "subtitle")
  (add-subtitle (latex " \\frac{(a+b)^2} {(a-b)^2}"))
  view)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L4010)

---

#### add-text clj

```
(add-text chart x y text & options)
```

Adds a text annotation centered at the given coordinates.

Arguments:
  chart -- the chart to annotate
  x, y -- the coordinates to center the text
  text -- the text to add

Examples:

```
;; PCA chart example
(use '(incanter core stats charts datasets))
;; load the iris dataset
(def iris (to-matrix (get-dataset :iris)))
;; run the pca
(def pca (principal-components (sel iris :cols (range 4))))
;; extract the first two principal components
(def pc1 (sel (:rotation pca) :cols 0))
(def pc2 (sel (:rotation pca) :cols 1))
;; project the first four dimensions of the iris data onto the first
;; two principal components
(def x1 (mmult (sel iris :cols (range 4)) pc1))
(def x2 (mmult (sel iris :cols (range 4)) pc2))
;; now plot the transformed data, coloring each species a different color
(def plot (scatter-plot x1 x2 :group-by (sel iris :cols 4)
                        :x-label "PC1" :y-label "PC2" :title "Iris PCA"))
(view plot)
;; add some text annotations
(add-text plot -2.5 -6.5 "Setosa")
(add-text plot -5 -5.5 "Versicolor")
(add-text plot -8 -5.5 "Virginica")
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3153)

---

#### area-chart clj/s macro

```
(area-chart categories values & options)
```

Returns a JFreeChart object representing an area-chart of the given data. Use the 'view' function to display the chart, or the 'save' function to write it to a file.

Arguments:
  categories -- a sequence of categories
  values -- a sequence of numeric values

Options:
  :title (default '') main title
  :x-label (default 'Categories')
  :y-label (default 'Value')
  :series-label
  :legend (default false) prints legend
  :vertical (default true) the orientation of the plot
  :group-by (default nil) -- a vector of values used to group the values into series within each category.
See also:
  view and save

Examples:

```
(use '(incanter core stats charts datasets))

(with-data (get-dataset :co2)
  (view (area-chart :Type :uptake
                    :title "CO2 Uptake"
                    :group-by :Treatment
                    :x-label "Grass Types" :y-label "Uptake"
                    :legend true)))

(def data (get-dataset :airline-passengers))
(view (area-chart :year :passengers :group-by :month :legend true :data data))

(with-data (get-dataset :airline-passengers)
  (view (area-chart :month :passengers :group-by :year :legend true)))

(def data (get-dataset :austres))
(view data)
(def plot (area-chart :year :population :group-by :quarter :legend true :data data))
(view plot)
(save plot "/tmp/austres_plot.png" :width 1000)
(view "file:///tmp/austres_plot.png")

(def seasons (mapcat identity (repeat 3 ["winter" "spring" "summer" "fall"])))
(def years (mapcat identity (repeat 4 [2007 2008 2009])))
(def values (sample-uniform 12 :integers true :max 100))
(view (area-chart years values :group-by seasons :legend true))

(view (area-chart ["a" "b" "c"] [10 20 30]))
(view (area-chart ["a" "a" "b" "b" "c" "c"] [10 20 30 10 40 20]
                  :legend true
                  :group-by ["I" "II" "I" "II" "I" "II"]))

;; add a series label
(def plot (area-chart ["a" "b" "c"] [10 20 30] :legend true :series-label "s1"))
(view plot)
(add-categories plot ["a" "b" "c"] [5 25 40] :series-label "s2")

(view (area-chart (sample "abcdefghij" :size 10 :replacement true)
                  (sample-uniform 10 :max 50) :legend true))
```

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2234)
---

#### area-chart* clj

```
(area-chart* categories values & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2191)

---

#### bar-chart clj/s macro

```
(bar-chart categories values & options)
```

Returns a JFreeChart object representing a bar-chart of the given data. Use the 'view' function to display the chart, or the 'save' function to write it to a file.

Arguments:
  categories -- a sequence of categories
  values -- a sequence of numeric values

Options:
  :title (default '') main title
  :x-label (default 'Categories')
  :y-label (default 'Value')
  :series-label
  :legend (default false) prints legend
  :vertical (default true) the orientation of the plot
  :group-by (default nil) -- a vector of values used to group the values into series within each category.
See also:
  view and save

Examples:

```
(use '(incanter core stats charts datasets))

(with-data (get-dataset :co2)
  (view (bar-chart :Type :uptake
                   :title "CO2 Uptake"
                   :group-by :Treatment
                   :x-label "Grass Types" :y-label "Uptake"
                   :legend true)))

(def data (get-dataset :airline-passengers))
(view (bar-chart :year :passengers :group-by :month :legend true :data data))

(with-data (get-dataset :airline-passengers)
  (view (bar-chart :month :passengers :group-by :year :legend true)))

(def data (get-dataset :austres))
(view data)
(def plot (bar-chart :year :population :group-by :quarter :legend true :data data))
(view plot)
(save plot "/tmp/austres_plot.png" :width 1000)
(view "file:///tmp/austres_plot.png")

(def seasons (mapcat identity (repeat 3 ["winter" "spring" "summer" "fall"])))
(def years (mapcat identity (repeat 4 [2007 2008 2009])))
(def values (sample-uniform 12 :integers true :max 100))
(view (bar-chart years values :group-by seasons :legend true))

(view (bar-chart ["a" "b" "c"] [10 20 30]))
(view (bar-chart ["a" "a" "b" "b" "c" "c"] [10 20 30 10 40 20]
                 :legend true
                 :group-by ["I" "II" "I" "II" "I" "II"]))

;; add a series label
(def plot (bar-chart ["a" "b" "c"] [10 20 30] :legend true :series-label "s1"))
(view plot)
(add-categories plot ["a" "b" "c"] [5 25 40] :series-label "s2")

(view (bar-chart (sample "abcdefghij" :size 10 :replacement true)
                 (sample-uniform 10 :max 50) :legend true))
```

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2095)
---

#### bar-chart* clj

```
(bar-chart* categories values & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2052)

---

#### bland-altman-plot clj

```
(bland-altman-plot x1 x2 & options)
```

Options:
  :data (default nil) If the :data option is provided a dataset, column names can be used instead of sequences of data for arguments x1 and x2.

Examples:

```
(use '(incanter core datasets charts))

(def flow-meter (to-matrix (get-dataset :flow-meter)))
(def x1 (sel flow-meter :cols 1))
(def x2 (sel flow-meter :cols 3))
(view (bland-altman-plot x1 x2))

(with-data (get-dataset :flow-meter)
  (view (bland-altman-plot (keyword "Wright 1st PEFR")
                           (keyword "Mini Wright 1st PEFR"))))
```

References:
<http://en.wikipedia.org/wiki/Bland-Altman_plot>
<http://www-users.york.ac.uk/~mb55/meas/ba.htm>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3377)

---

#### box-plot clj/s macro

```
(box-plot x & options)
```

Returns a JFreeChart object representing a box-plot of the given data. Use the 'view' function to display the chart, or the 'save' function to write it to a file.
Options:
  :title (default '') main title
  :x-label (default x expression)
  :y-label (default 'Frequency')
  :legend (default false) prints legend
  :series-label (default x expression)
  :group-by (default nil) -- a vector of values used to group the x values into series.

See also:
  view and save

Examples:

```
(use '(incanter core stats charts))

(def gamma-box-plot (box-plot (sample-gamma 1000 :shape 1 :scale 2)
                              :title "Gamma Boxplot" :legend true))
(view gamma-box-plot)
(add-box-plot gamma-box-plot (sample-gamma 1000 :shape 2 :scale 2))
(add-box-plot gamma-box-plot (sample-gamma 1000 :shape 3 :scale 2))

;; use the group-by options
(use '(incanter core stats datasets charts))
(with-data (get-dataset :iris)
  (view (box-plot :Petal.Length :group-by :Species :legend true))
  (view (box-plot :Petal.Width :group-by :Species :legend true))
  (view (box-plot :Sepal.Length :group-by :Species :legend true))
  (view (box-plot :Sepal.Width :group-by :Species :legend true)))

;; see INCANTER_HOME/examples/probability_plots.clj for more examples of plots
```

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2711)

---

#### box-plot* clj

```
(box-plot* x & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2667)

---

#### candle-stick-plot clj/s macro

```
(candle-stick-plot & options)
```

Produces a candle stick chart.

Options:
  :data (default nil) If the :data option is provided a dataset, column names can be used instead of sequences of data as arguments to xy-plot.
  :date Key for accessing the underlying date series (defaults to :date)
  :high Key for accessing high value data (defaults to :high)
  :low Key for accessing low value data (defaults to :low)
  :open Key for accessing open value data (defaults to :open)
  :close Key for accessing close value data (defaults to :close)
  :volume Key for accessing volume data (defaults to :volume). Volume data is optional.
  :title (default 'Candle Stick Plot') main title
  :time-label (default empty)
  :value-label (default empty)
  :legend (default false) prints legend
  :series-label (default empty)

Example:

```
;; use default mappings, so the dataset must have
;; :date, :high, :low, :open, :close and :volume keys
(candle-stick-plot :data <dataset>)

;; more customization
(candle-stick-plot
  :data dataset
  :high :HighPrice
  :low :LowPrice
  :open :StartOfDay
  :close :CoB
  :volume :TransactionVolume
  :legend true
  :time-label "CoB date"
  :value-label "Price"
  :series-label "Price time series"
  :title "Price information")
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1318)
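The docstring's first example leaves `<dataset>` as a placeholder. A minimal runnable sketch, assuming the default column mappings and that the :date column holds java.util.Date values (which `#inst` literals produce); the price figures are made up for illustration:

```
(use '(incanter core charts))

;; a tiny hand-rolled OHLC dataset; real data would normally come
;; from read-dataset or a database
(def prices
  (dataset [:date :open :high :low :close :volume]
           [[#inst "2012-01-02" 10.0 12.5  9.5 12.0 1000]
            [#inst "2012-01-03" 12.0 13.0 10.5 11.0 1500]
            [#inst "2012-01-04" 11.0 14.0 11.0 13.5  900]]))

;; the default mappings pick up :date, :high, :low, :open, :close and :volume
(view (candle-stick-plot :data prices :title "Toy price series"))
```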
---

#### candle-stick-plot* clj

```
(candle-stick-plot* & opts)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1289)

---

#### clear-background clj

```
(clear-background chart)
```

Sets the alpha level (transparency) of the plot's background to zero, removing the default grid; returns the modified chart object.

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1009)
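The docstring has no example; a minimal sketch (the sample-normal data is arbitrary):

```
(use '(incanter core stats charts))

;; draw a scatter plot on a plain background instead of
;; the default grey grid
(doto (scatter-plot (sample-normal 100) (sample-normal 100))
  clear-background
  view)
```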
---

#### dynamic-scatter-plot clj/s macro

```
(dynamic-scatter-plot [& slider-bindings] expression & options)
```

Returns a scatter-plot bound to sliders (which tend to appear behind the chart). See the sliders macro for more information.

Examples:

```
(use '(incanter core stats charts))

(let [x (range -3 3 0.1)]
  (view (dynamic-scatter-plot [mean (range -3 3 0.1)
                               sd (range 0.1 10 0.1)]
          [x (pdf-normal x :mean mean :sd sd)]
          :title "Normal PDF Plot")))

(let [x (range -3 3 0.1)]
  (view (dynamic-scatter-plot [mean (range -3 3 0.1)
                               sd (range 0.1 10 0.1)]
          (for [xi x] [xi (pdf-normal xi :mean mean :sd sd)])
          :title "Normal PDF Plot")))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3668)

---

#### dynamic-xy-plot clj/s macro

```
(dynamic-xy-plot [& slider-bindings] expression & options)
```

Returns an xy-plot bound to sliders (which tend to appear behind the chart). See the sliders macro for more information.

Examples:

```
(use '(incanter core stats charts))

(let [x (range -3 3 0.1)]
  (view (dynamic-xy-plot [mean (range -3 3 0.1)
                          sd (range 0.1 10 0.1)]
          [x (pdf-normal x :mean mean :sd sd)]
          :title "Normal PDF Plot")))

(let [x (range -3 3 0.1)]
  (view (dynamic-xy-plot [mean (range -3 3 0.1)
                          sd (range 0.1 10 0.1)]
          (for [xi x] [xi (pdf-normal xi :mean mean :sd sd)])
          :title "Normal PDF Plot")))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3635)

---

#### extend-line clj

```
(extend-line chart x y & options)
```

Adds a new data set to an existing series if it already exists; otherwise the data set will be added to a newly created series.

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L582)
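The docstring has no example; a minimal sketch, assuming the :series-label option (which the add-lines/add-points family also takes) is what selects the target series:

```
(use '(incanter core charts))

(def chart (xy-plot [1 2 3] [1 4 9] :series-label "squares" :legend true))
(view chart)

;; appends to the existing "squares" series ...
(extend-line chart [4 5] [16 25] :series-label "squares")

;; ... while an unknown label starts a brand-new series
(extend-line chart [1 2 3 4 5] [1 8 27 64 125] :series-label "cubes")
```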
---

#### function-plot clj/s macro

```
(function-plot function min-range max-range & options)
```

Returns an xy-plot object of the given function over the range indicated by the min-range and max-range arguments. Use the 'view' function to display the chart, or the 'save' function to write it to a file.

Options:
  :title (default '') main title
  :x-label (default x expression)
  :y-label (default 'Frequency')
  :legend (default false) prints legend
  :series-label (default x expression)
  :step-size (default (/ (- max-range min-range) 500))

See also:
  view, save, add-points, add-lines

Examples:

```
(use '(incanter core stats charts))

(view (function-plot sin (- Math/PI) Math/PI))
(view (function-plot pdf-normal -3 3))

(defn cubic [x] (+ (* x x x) (* 2 x x) (* 2 x) 3))
(view (function-plot cubic -10 10))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2790)

---

#### function-plot* clj

```
(function-plot* function min-range max-range & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2771)

---

#### get-series clj

```
(get-series chart)
```

```
(get-series chart series-idx)
```

get-series

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3459)

---

#### has-series? clj

```
(has-series? chart series-label)
```

Tests whether the chart has a series with the given series-label.

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L359)
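Neither get-series nor has-series? ships with an example; a minimal sketch (the series data is arbitrary, and get-series is assumed to return the chart's underlying JFreeChart series objects, per the source):

```
(use '(incanter core charts))

(def chart (xy-plot [1 2 3] [3 6 9] :series-label "s1" :legend true))

(has-series? chart "s1")  ;=> true
(has-series? chart "s2")  ;=> false

(get-series chart)    ; all series of the chart
(get-series chart 0)  ; only the series at index 0
```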
---

#### heat-map clj/s macro

```
(heat-map function x-min x-max y-min y-max & options)
```

Usage: (heat-map function x-min x-max y-min y-max & options)

Returns a JFreeChart object representing a heat map of the function across the given x and y ranges. Use the 'view' function to display the chart, or the 'save' function to write it to a file. Callers may define the number of samples in each direction, and select if they want a sparser representation by disabling :auto-scale?. By default, the heat-map will try to scale the 'blocks' or sampled pixels to cover the ranges specified. Depending on the number of samples, this may result in a pixelated but performant look. Disabling :auto-scale? will keep the 'blocks' a constant size, leading to potentially sparsely sampled points on the surface surrounded by blank regions.

Arguments:
  function -- a function that takes two scalar arguments and returns a scalar
  x-min -- lower bound for the first value of the function
  x-max -- upper bound for the first value of the function
  y-min -- lower bound for the second value of the function
  y-max -- upper bound for the second value of the function

Options:
  :title
  :x-label (default 'x-min < x < x-max')
  :y-label (default 'y-min < y < y-max')
  :z-label -- defaults to function's name
  :color? (default true) -- should the plot be in color or not?
  :include-zero? (default true) -- should the plot include the origin if it is not in the ranges specified?
  :x-res (default 100) -- amount of samples to take in the x range
  :y-res (default 100) -- amount of samples to take in the y range
  :auto-scale? (default true) -- automatically scale the block width/height to provide a continuous surface

Examples:

```
(use '(incanter core charts))

(defn f [x y] (sin (sqrt (plus (sq x) (sq y)))))
(view (heat-map f -10 10 -15 15))
(view (heat-map f -10 10 -10 10 :color? false))
(view (heat-map f 5 10 5 10 :include-zero? false))

(defn f2 [x y] (plus (sq x) (sq y)))
(view (heat-map f2 -10 10 -10 10))
(view (heat-map f2 -10 10 -10 10 :color? false))

(use 'incanter.stats)
(defn f3 [x y] (pdf-normal (sqrt (plus (sq x) (sq y)))))
(view (heat-map f3 -3 3 -3 3 :x-label "x1" :y-label "x2" :z-label "pdf"))
(view (heat-map f3 -3 3 -3 3 :color? false))

(defn f4 [x y] (minus (sq x) (sq y)))
(view (heat-map f4 -10 10 -10 10))
(view (heat-map f4 -10 10 -10 10 :color? false))

(use '(incanter core stats charts))
(let [data [[0 5 1 2] [0 10 1.9 1] [15 0 0.5 1.5] [18 10 4.5 2.1]]
      diffusion (fn [x y]
                  (sum (map #(pdf-normal (euclidean-distance [x y] (take 2 %))
                                         :mean (nth % 2) :sd (last %))
                            data)))]
  (view (heat-map diffusion -5 20 -5 20)))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2980)
---

#### heat-map* clj

```
(heat-map* function x-min x-max y-min y-max & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2899)

---

#### histogram clj/s macro

```
(histogram x & options)
```

Returns a JFreeChart object representing the histogram of the given data. Use the 'view' function to display the chart, or the 'save' function to write it to a file.
Options:
  :nbins (default 10) number of bins
  :density (default false) if false, plots frequency, otherwise density
  :title (default 'Histogram') main title
  :x-label (default x expression)
  :y-label (default 'Frequency')
  :legend (default false) prints legend
  :series-label (default x expression)

See also:
  view, save, add-histogram

Examples:

```
(use '(incanter core charts stats))

(view (histogram (sample-normal 1000)))

;; plot a density histogram
(def hist (histogram (sample-normal 1000) :density true))
(view hist)
;; add a normal density line to the plot
(def x (range -4 4 0.01))
(add-lines hist x (pdf-normal x))

;; plot some gamma data
(def gam-hist (histogram (sample-gamma 1000) :density true :nbins 30))
(view gam-hist)
(def x (range 0 8 0.01))
(add-lines gam-hist x (pdf-gamma x))

(use 'incanter.datasets)
(def iris (get-dataset :iris))
(view (histogram :Sepal.Width :data iris))

(with-data (get-dataset :iris)
  (view (histogram :Petal.Length)))
```

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1848)

---

#### histogram* clj

```
(histogram* x & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1817)
---

#### line-chart clj/s macro

```
(line-chart categories values & options)
```

Returns a JFreeChart object representing a line-chart of the given values and categories. Use the 'view' function to display the chart, or the 'save' function to write it to a file.

Arguments:
  categories -- a sequence of categories
  values -- a sequence of numeric values

Options:
  :title (default '') main title
  :x-label (default 'Categories')
  :y-label (default 'Value')
  :legend (default false) prints legend
  :series-label
  :group-by (default nil) -- a vector of values used to group the values into series within each category.
  :gradient? (default false) -- use gradient on bars

See also:
  view and save

Examples:

```
(use '(incanter core stats charts datasets))

(def data (get-dataset :airline-passengers))
(def years (sel data :cols 0))
(def months (sel data :cols 2))
(def passengers (sel data :cols 1))
(view (line-chart years passengers :group-by months :legend true))
(view (line-chart months passengers :group-by years :legend true))

(def seasons (mapcat identity (repeat 3 ["winter" "spring" "summer" "fall"])))
(def years (mapcat identity (repeat 4 [2007 2008 2009])))
(def x (sample-uniform 12 :integers true :max 100))
(view (line-chart years x :group-by seasons :legend true))

(view (line-chart ["a" "b" "c" "d" "e" "f"] [10 20 30 10 40 20]))

(view (line-chart (sample "abcdefghij" :size 10 :replacement true)
                  (sample-uniform 10 :max 50) :legend true))

;; add a series label
(def plot (line-chart ["a" "b" "c"] [10 20 30] :legend true :series-label "s1"))
(view plot)
(add-categories plot ["a" "b" "c"] [5 25 40] :series-label "s2")

(view (line-chart :year :passengers :group-by :month :legend true :data data))
(view (line-chart :month :passengers :group-by :year :legend true :data data))

(with-data data
  (view (line-chart :month :passengers :group-by :year :legend true)))

(with-data (->> ($rollup :sum :passengers :year (get-dataset :airline-passengers))
                ($order :year :asc))
  (view (line-chart :year :passengers)))

(with-data (->> ($rollup :sum :passengers :month (get-dataset :airline-passengers))
                ($order :passengers :asc))
  (view (line-chart :month :passengers)))

(with-data ($rollup :sum :passengers :month (get-dataset :airline-passengers))
  (view (line-chart :month :passengers)))
```

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1954)
---

#### line-chart* clj

```
(line-chart* categories values & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1912)

---

#### log-axis clj

```
(log-axis & options)
```

Creates a logarithmic axis.

Note: By default, values smaller than 0.5 are rounded to 0.5 to prevent strange behavior that happens for values close to 0.

Options:
  :base (default 10) base of the logarithm; typically 2 or 10
  :label (default none) the label of the axis
  :int-ticks? (default true) if true, use normal numbers instead of <base>^<exponent>, i.e. 1 instead of e.g. 10^0.0
  :smallest-value (default 0.5) Set the smallest value represented by the axis; set to 0.0 to 'reset'

See also:
  set-axis

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/axis/LogAxis.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L877)
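log-axis itself has no example (set-axis shows one further below); a minimal sketch using a base-2 axis instead:

```
(use '(incanter core charts))

;; 2^x on a base-2 log axis renders as a straight line
(view (doto (function-plot #(Math/pow 2 %) 1 10)
        (set-axis :y (log-axis :base 2 :label "log2(y)"))))
```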
---

#### parametric-plot clj/s macro

```
(parametric-plot function min-range max-range & options)
```

Returns an xy-plot object of the given parametric function over the range indicated by the min-range and max-range arguments. Use the 'view' function to display the chart, or the 'save' function to write it to a file. The function must take 1 argument - the parameter t - and return a point [x y].

Options:
  :title (default '') main title
  :x-label (default 'min-x < x < max-x')
  :y-label (default 'min-y < y < max-y')
  :legend (default false) prints legend
  :series-label (default function expression)
  :step-size (default (/ (- max-range min-range) 500))

See also:
  view, save, add-parametric, function-plot

Examples:

```
(use '(incanter core charts))

(defn circle [t] [(cos t) (sin t)])
(view (parametric-plot circle (- Math/PI) Math/PI))

(defn spiral [t] [(* t (cos t)) (* t (sin t))])
(view (parametric-plot spiral 0 (* 6 Math/PI)))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2857)

---

#### parametric-plot* clj

```
(parametric-plot* function min-range max-range & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2836)

---

#### pie-chart clj/s macro

```
(pie-chart categories values & options)
```

Returns a JFreeChart object representing a pie-chart of the given data. Use the 'view' function to display the chart, or the 'save' function to write it to a file.

Arguments:
  categories -- a sequence of categories
  values -- a sequence of numeric values

Options:
  :title (default '') main title
  :legend (default false) prints legend

See also:
  view and save

Examples:

```
(use '(incanter core stats charts datasets))

(view (pie-chart ["a" "b" "c"] [10 20 30]))

(view (pie-chart (sample "abcdefghij" :size 10 :replacement true)
                 (sample-uniform 10 :max 50) :legend true))

(with-data (->> (get-dataset :hair-eye-color)
                ($rollup :sum :count [:hair :eye]))
  (view $data)
  (view (pie-chart :hair :count :title "Hair Color"))
  (view (pie-chart :eye :count :title "Eye Color")))
```

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2616)
---

#### pie-chart* clj

```
(pie-chart* categories values & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2593)

---

#### qq-plot clj

```
(qq-plot x & options)
```

Returns a QQ-Plot object. Use the 'view' function to display it.

Options:
  :data (default nil) If the :data option is provided a dataset, a column name can be used instead of a sequence of data for argument x.

References:
<http://en.wikipedia.org/wiki/QQ_plot>

Examples:

```
(use '(incanter core stats charts datasets))

(view (qq-plot (sample-normal 100)))
(view (qq-plot (sample-exp 100)))
(view (qq-plot (sample-gamma 100)))

(with-data (get-dataset :iris)
  (view (qq-plot :Sepal.Length)))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3337)

---

#### remove-series clj

```
(remove-series chart series-label)
```

Removes an existing series specified by series-label. If the series does not exist, it returns nil.

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L348)
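The docstring has no example; a minimal sketch (the data is arbitrary):

```
(use '(incanter core charts))

(def chart (xy-plot [1 2 3] [1 2 3] :series-label "keep" :legend true))
(add-lines chart [1 2 3] [3 2 1] :series-label "drop")
(view chart)

;; remove the second series again; returns nil if no such series exists
(remove-series chart "drop")
```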
---

#### scatter-plot clj/s macro

```
(scatter-plot)
```

```
(scatter-plot x y & options)
```

Returns a JFreeChart object representing a scatter-plot of the given data. Use the 'view' function to display the chart, or the 'save' function to write it to a file.

Options:
  :title (default '') main title
  :x-label (default x expression)
  :y-label (default 'Frequency')
  :legend (default false) prints legend
  :series-label (default x expression)
  :group-by (default nil) -- a vector of values used to group the x and y values into series.
  :density? (default false) -- chart will represent density instead of frequency.
  :nbins (default 10) -- number of bins (i.e. bars)
  :gradient? (default false) -- use gradient on bars

See also:
  view, save, add-points, add-lines

Examples:

```
(use '(incanter core stats charts datasets))

;; create some data
(def mvn-samp (sample-mvn 1000 :mean [7 5] :sigma (matrix [[2 1.5] [1.5 3]])))
;; create scatter-plot of points
(def mvn-plot (scatter-plot (sel mvn-samp :cols 0) (sel mvn-samp :cols 1)))
(view mvn-plot)

;; add regression line to scatter plot
(def x (sel mvn-samp :cols 0))
(def y (sel mvn-samp :cols 1))
(def lm (linear-model y x))
(add-lines mvn-plot x (:fitted lm))

;; use :group-by option
(use '(incanter core stats datasets charts))
;; load the :iris dataset
(def iris (get-dataset :iris))
;; plot the first two columns grouped by the fifth column
(view (scatter-plot ($ :Sepal.Width iris) ($ :Sepal.Length iris)
                    :group-by ($ :Species iris)))

(view (scatter-plot :Sepal.Length :Sepal.Width :data (get-dataset :iris)))

(view (scatter-plot :Sepal.Length :Sepal.Width :group-by :Species :data (get-dataset :iris)))

(with-data (get-dataset :iris)
  (view (scatter-plot :Sepal.Length :Sepal.Width)))

(with-data (get-dataset :iris)
  (view (scatter-plot :Sepal.Length :Sepal.Width :group-by :Species)))
```

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1487)
---

#### scatter-plot* clj

```
(scatter-plot* x y & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1432)

---

#### scatter-plot-matrix clj

```
(scatter-plot-matrix & opts)
```

Returns a JFreeChart object displaying a scatter plot matrix for the given data. Use the 'view' function to display the chart or 'save' to write it to a file.

Usage:
  (scatter-plot-matrix & options)
  (scatter-plot-matrix data & options)

Options:
  :data data (default $data) the data set for the plot.
  :title s (default "Scatter Plot Matrix").
  :nbins n (default 10) number of bins (i.e. bars) in histogram.
  :group-by grp (default nil) name of the column for grouping data.
  :only-first n (default 6) show only the first n most correlating columns of the data set.
  :only-triangle b (default false) shows only the upper triangle of the plot matrix.
Examples:

```
(use '(incanter core stats charts datasets pdf))

(view (scatter-plot-matrix (get-dataset :iris) :nbins 20 :group-by :Species))

(with-data (get-dataset :iris)
  (view (scatter-plot-matrix :nbins 20 :group-by :Species)))

(view (scatter-plot-matrix (get-dataset :chick-weight) :group-by :Diet :nbins 20))

;;; Input examples for Iris
;; Input dataset examples: Incanter data repo, local file, remote file (url)
(def iris (get-dataset :iris))
(def iris (read-dataset "data/iris.dat" :delim \space :header true)) ; relative to project home
(def iris (read-dataset "https://raw.githubusercontent.com/incanter/incanter/master/data/iris.dat"
                        :delim \space :header true))
;; Filter dataset to specific columns only
(def iris ($ [:Sepal.Length :Sepal.Width :Petal.Length :Petal.Width :Species] (get-dataset :iris)))
(def iris (sel (get-dataset :iris)
               :cols [:Sepal.Length :Sepal.Width :Petal.Length :Petal.Width :Species]))

;;; Scatter plot matrix examples
;; Using default options
(def iris-spm (scatter-plot-matrix iris :group-by :Species))
;; filter to metrics only, no categorical dimension for grouping
(def iris-spm (scatter-plot-matrix
                :data ($ [:Sepal.Length :Sepal.Width :Petal.Length :Petal.Width] iris)))
;; Using more options
(def iris-spm (scatter-plot-matrix iris
                                   :title "Iris Scatter Plot Matrix"
                                   :bins 20 ; number of histogram bars
                                   :group-by :Species
                                   :only-first 4 ; most correlating columns
                                   :only-triangle false))

;;; Output examples
;; View on Display
(view iris-spm :width 1280 :height 800)
;; Save as PDF
(save-pdf iris-spm "out/iris-spm.pdf" :width 2560 :height 1600)
;; Save as PNG
(save iris-spm "out/iris-spm.png" :width 2560 :height 1600)

;; Airline dataset
(def airline ($ [:year :passengers :month]
                (read-dataset "https://raw.github.com/liebke/incanter/master/data/airline_passengers.csv"
                              :header true)))
(def airline-spm (scatter-plot-matrix airline :group-by :month :bins 20
                                      :title "Airline Scatter Plot Matrix"))
(view airline-spm)

;; Chick-weight dataset
(view (scatter-plot-matrix (get-dataset :chick-weight) :group-by :Diet :bins 20
                           :title "Chick-weight Scatter Plot Matrix"))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1752)
---

#### scatter-plot-matrix* clj

```
(scatter-plot-matrix* & {:keys [data group-by title nbins only-first only-triangle]
                         :or {data $data
                              group-by nil
                              title "Scatter Plot Matrix"
                              nbins 10
                              only-first 6
                              only-triangle false}})
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1567)

---

#### set-alpha clj

```
(set-alpha chart alpha)
```

Sets the alpha level (transparency) of the plot's foreground; returns the modified chart object.

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L980)
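The docstring has no example; a minimal sketch where semi-transparent points make overplotted regions show up as darker areas (the sample size is arbitrary):

```
(use '(incanter core stats charts))

;; with alpha at 0.3, dense clusters of points appear darker
(doto (scatter-plot (sample-normal 5000) (sample-normal 5000))
  (set-alpha 0.3)
  view)
```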
---

#### set-axis (multimethod)

Set the selected axis of the chart, returning the chart. (Beware: the axis' label will replace any axis label set previously on the chart.)

Arguments:
chart - the JFreeChart object whose axis to change
dimension - depends on the plot type; for plots with multiple axes, e.g. :x or :y for an XYPlot (x is the domain axis, y the range one)
axis - the axis to set, an instance of ValueAxis

See also: log-axis

Note: Not applicable to DialPlot, MeterPlot, PiePlot, MultiplePiePlot, CompassPlot, WaferMapPlot, SpiderWebPlot

Examples:

```
(use '(incanter core charts))

(view (doto (function-plot #(Math/pow 10 %) 0 5)
        (set-axis :x (log-axis :base 10, :label "log(x)"))))
```

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/axis/ValueAxis.html>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/plot/XYPlot.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L915)

---

#### set-background-alpha

```
(set-background-alpha chart alpha)
```

Sets the alpha level (transparency) of the plot's background; returns the modified chart object.

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L994)
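Like set-alpha, this function ships without an example; a minimal sketch along the same lines (the 0.2 value is an illustrative choice):

```
(use '(incanter core charts datasets))

;; fade the plot background while leaving the data series opaque
(doto (scatter-plot :speed :dist :data (get-dataset :cars))
  (set-background-alpha 0.2) ; illustrative value
  view)
```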
---

#### set-background-default (multimethod)

Examples:

```
(use '(incanter core stats charts datasets))

(doto (histogram (sample-normal 1000) :title (str :Test-Title)) set-theme-bw view)

(doto (histogram (sample-normal 1000))
  set-background-default
  (add-histogram (sample-normal 1000 :mean 1))
  view)

(doto (scatter-plot :speed :dist :data (get-dataset :cars)) set-theme-bw view)

(doto (scatter-plot :speed :dist :data (get-dataset :cars))
  set-theme-bw
  (set-stroke :dash 5)
  (add-points (plus ($ :speed (get-dataset :cars)) 5)
              (plus ($ :dist (get-dataset :cars)) 10))
  view)

(doto (scatter-plot :speed :dist :data (get-dataset :cars))
  set-background-default
  (set-stroke :dash 5)
  (add-function sin 0 25)
  view)

(doto (xy-plot :speed :dist :data (get-dataset :cars) :legend true) set-background-default view)

(doto (scatter-plot :speed :dist :data (get-dataset :cars)) set-background-default view)

(doto (box-plot (sample-gamma 1000 :shape 1 :scale 2) :legend true)
  view
  set-background-default
  (add-box-plot (sample-gamma 1000 :shape 2 :scale 2))
  (add-box-plot (sample-gamma 1000 :shape 3 :scale 2)))

(doto (bar-chart [:a :b :c] [10 20 30] :legend true)
  view
  set-background-default
  (add-categories [:a :b :c] [5 25 40]))

(doto (line-chart [:a :b :c] [10 20 30] :legend true)
  view
  set-background-default
  (add-categories [:a :b :c] [5 25 40]))

;; time-series-plot
(def epoch 0)
(defn num-years-to-milliseconds [x]
  (* 365 24 60 60 1000 x))
(def dates (map num-years-to-milliseconds (range 100)))
(def chart1 (time-series-plot dates (range 100)))
(def cw1 (view chart1))
(add-lines chart1 dates (mult 1/2 (range 100)))

(def chart2 (time-series-plot (take 10 dates) (mult 1/2 (range 10))))
(def cw2 (view chart2))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L78)

---

#### set-point-size

```
(set-point-size chart point-size & {:keys [series dataset] :or {series :all dataset 0}})
```

Set the point size of a scatter plot. Use the series option to apply the point size to only one series.

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3777)
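No example accompanies set-point-size; a minimal sketch (the point sizes and the cars dataset are illustrative choices):

```
(use '(incanter core charts datasets))

;; enlarge all points, then resize just one series
(doto (scatter-plot :speed :dist :data (get-dataset :cars))
  (set-point-size 6)           ; applies to all series by default
  (set-point-size 3 :series 0) ; applies only to the given series
  view)
```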
---

#### set-stroke

```
(set-stroke chart & options)
```

Examples:

```
(use '(incanter core charts))

(doto (line-chart [:a :b :c :d] [10 20 5 35])
  (set-stroke :width 4 :dash 5)
  view)

(doto (line-chart [:a :b :c :d] [10 20 5 35])
  (add-categories [:a :b :c :d] [20 5 30 15])
  (set-stroke :width 4 :dash 5)
  (set-stroke :series 1 :width 2 :dash 10)
  view)

(doto (function-plot sin -10 10 :step-size 0.1)
  (set-stroke :width 3 :dash 5)
  view)

(doto (line-chart [:a :b :c :d] [10 20 5 35])
  (add-categories [:a :b :c :d] [20 5 30 15])
  (set-stroke :series 0 :width 4 :dash 5)
  (set-stroke :series 1 :width 4 :dash 5 :cap java.awt.BasicStroke/CAP_SQUARE))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3704)

---

#### set-stroke-color

```
(set-stroke-color chart color & options)
```

Examples:

```
(use '(incanter core charts))

(doto (line-chart [:a :b :c :d] [10 20 5 35])
  (set-stroke :width 4 :dash 5)
  (set-stroke-color java.awt.Color/blue)
  view)

(doto (xy-plot [1 2 3] [4 5 6])
  (add-points [1 2 3] [4.1 5.1 6.1])
  (set-stroke-color java.awt.Color/black :series 0)
  (set-stroke-color java.awt.Color/red :series 1))

(doto (function-plot sin -10 10 :step-size 0.1)
  (set-stroke :width 3 :dash 5)
  (set-stroke-color java.awt.Color/gray)
  view)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3749)

---

#### set-theme

```
(set-theme chart theme)
```

Changes the chart theme.

Arguments:
chart -- an Incanter/JFreeChart object
theme -- either a keyword indicating one of the built-in themes, or a JFreeChart ChartTheme object.

Built-in Themes: :default :dark

Examples:

```
(use '(incanter core charts))
(def chart (function-plot sin -4 4))
(view chart)

;; change the theme of chart to :dark
(set-theme chart :dark)

;; change it back to the default
(set-theme chart :default)

;; Example using JFreeTheme
(use '(incanter core stats charts datasets))

(import '(org.jfree.chart StandardChartTheme)
        '(org.jfree.chart.plot DefaultDrawingSupplier)
        '(java.awt Color))

(def all-red-theme
  (doto (StandardChartTheme/createJFreeTheme)
    (.setDrawingSupplier
     (proxy [DefaultDrawingSupplier] []
       (getNextPaint [] Color/red)))))

(def data (get-dataset :airline-passengers))
(def chart (bar-chart :month :passengers :group-by :year :legend true :data data))

(doto chart
  ;; has no effect
  (set-theme all-red-theme)
  view)
```

References:
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/StandardChartTheme.html>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/ChartTheme.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L226)

---

#### set-theme-bw (multimethod)

Examples:

```
(use '(incanter core stats charts datasets))

(doto (histogram (sample-normal 1000)) set-theme-bw view)

(doto (histogram (sample-normal 1000))
  set-theme-bw
  (add-histogram (sample-normal 1000 :mean 1))
  view)

(doto (scatter-plot :speed :dist :data (get-dataset :cars)) set-theme-bw view)

(doto (scatter-plot :speed :dist :data (get-dataset :cars))
  set-theme-bw
  (set-stroke :dash 5)
  (add-points (plus ($ :speed (get-dataset :cars)) 5)
              (plus ($ :dist (get-dataset :cars)) 10))
  view)

(doto (scatter-plot :speed :dist :data (get-dataset :cars))
  set-theme-bw
  (set-stroke :dash 5)
  (add-function sin 0 25)
  view)

(doto (xy-plot :speed :dist :data (get-dataset :cars)) set-theme-bw view)

(doto (scatter-plot :speed :dist :data (get-dataset :cars))
  set-theme-bw
  (add-lines :speed :dist :data (get-dataset :cars))
  view)

(doto (box-plot (sample-gamma 1000 :shape 1 :scale 2) :legend true)
  view
  (add-box-plot (sample-gamma 1000 :shape 2 :scale 2))
  (add-box-plot (sample-gamma 1000 :shape 3 :scale 2))
  set-theme-bw)

(doto (bar-chart [:a :b :c] [10 20 30] :legend true)
  view
  set-theme-bw
  (add-categories [:a :b :c] [5 25 40]))

(doto (line-chart [:a :b :c] [10 20 30] :legend true)
  view
  set-theme-bw
  (add-categories [:a :b :c] [5 25 40]))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L158)

---

#### set-theme-default (multimethod)

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L154)

---

#### set-title

```
(set-title chart title)
```

Sets the main title of the plot; returns the modified chart object.

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1023)

---

#### set-x-label

```
(set-x-label chart label)
```

Sets the label of the x-axis; returns the modified chart object.

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1037)

---

#### set-x-range

```
(set-x-range chart lower upper)
```

Sets the range of the x-axis on the given chart.

Examples:

```
(use '(incanter core charts datasets))

(def chart (xy-plot :speed :dist :data (get-dataset :cars)))
(view chart)
(set-x-range chart 10 20)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1065)

---

#### set-y-label

```
(set-y-label chart label)
```

Sets the label of the y-axis; returns the modified chart object.

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1051)
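None of set-title, set-x-label, or set-y-label comes with an example; a minimal combined sketch (the chart and the label strings are illustrative), exploiting the fact that each setter returns the modified chart and so chains cleanly under doto:

```
(use '(incanter core charts datasets))

;; chain the setters; each one returns the modified chart
(doto (scatter-plot :speed :dist :data (get-dataset :cars))
  (set-title "Stopping Distance vs. Speed") ; illustrative labels
  (set-x-label "Speed (mph)")
  (set-y-label "Stopping distance (ft)")
  view)
```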
---

#### set-y-range

```
(set-y-range chart lower upper)
```

Sets the range of the y-axis on the given chart.

Examples:

```
(use '(incanter core charts datasets))

(def chart (xy-plot :speed :dist :data (get-dataset :cars)))
(view chart)
(set-y-range chart 10 60)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1086)

---

#### slider

```
(slider updater-fn slider-values)
(slider updater-fn slider-values slider-label)
```

Examples:

```
(use '(incanter core stats charts))

(def pdf-chart (function-plot pdf-normal -3 3))
(view pdf-chart)
(add-function pdf-chart pdf-normal -3 3)

(let [x (range -3 3 0.1)]
  (slider #(set-data pdf-chart [x (pdf-normal x :sd %)]) (range 0.1 10 0.1)))

(let [x (range -3 3 0.1)]
  (slider #(set-data pdf-chart [x (pdf-normal x :sd %)]) (range 0.1 10 0.1) "sd"))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3499)

---

#### sliders (macro)

```
(sliders [& slider-bindings] body)
```

Creates one slider control for each of the given sequence bindings. Each slider calls the given expression when manipulated.

Examples:

```
(use '(incanter core stats charts))

;; manipulate a normal pdf
(let [x (range -3 3 0.1)]
  (def pdf-chart (xy-plot))
  (view pdf-chart)
  (sliders [mean (range -3 3 0.1)
            stdev (range 0.1 10 0.1)]
    (set-data pdf-chart [x (pdf-normal x :mean mean :sd stdev)])))

;; manipulate a gamma pdf
(let [x (range 0 20 0.1)]
  (def pdf-chart (xy-plot))
  (view pdf-chart)
  (sliders [scale (range 0.1 10 0.1)
            shape (range 0.1 10 0.1)]
    (set-data pdf-chart [x (pdf-gamma x :scale scale :shape shape)])))

;; find the start values of a non-linear model function
(use '(incanter core charts datasets))

;; create the model function used in the following data-sorcery post:
;; http://data-sorcery.org/2009/06/06/fitting-non-linear-models/
(defn f [theta x]
  (let [[b1 b2 b3] theta]
    (div (exp (mult (minus b1) x))
         (plus b2 (mult b3 x)))))

(with-data (get-dataset :chwirut)
  (view $data)
  (def chart (scatter-plot ($ :x) ($ :y)))
  (view chart)
  (add-lines chart ($ :x) (f [0 0.01 0] ($ :x)))

  ;; manipulate the model line to find some good start values.
  ;; give the index of the line data (i.e. 1) to set-data.
  (let [x ($ :x)]
    (sliders [b1 (range 0 2 0.01)
              b2 (range 0.01 2 0.01)
              b3 (range 0 2 0.01)]
      (set-data chart [x (f [b1 b2 b3] x)] 1))))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3574)

---

#### sliders*

```
(sliders* f [& slider-values])
(sliders* f [& slider-values] [& slider-labels])
```

Examples:

```
(use '(incanter core stats charts))

(let [x (range -3 3 0.1)]
  (def pdf-chart (xy-plot x (pdf-normal x :mean -3 :sd 0.1)))
  (view pdf-chart)
  (sliders* #(set-data pdf-chart [x (pdf-normal x :mean %1 :sd %2)])
            [(range -3 3 0.1) (range 0.1 10 0.1)]
            ["mean" "sd"]))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3544)
---

#### stacked-area-chart (macro)

```
(stacked-area-chart categories values & options)
```

Returns a JFreeChart object representing a stacked-area-chart of the given data. Use the 'view' function to display the chart, or the 'save' function to write it to a file.

Arguments:
categories -- a sequence of categories
values -- a sequence of numeric values

Options:
:title (default '') main title
:x-label (default 'Categories')
:y-label (default 'Value')
:series-label
:legend (default false) prints legend
:vertical (default true) the orientation of the plot
:group-by (default nil) -- a vector of values used to group the values into series within each category.

See also: view and save

Examples:

```
(use '(incanter core stats charts datasets))

(with-data (get-dataset :co2)
  (view (stacked-area-chart :Type :uptake
                            :title "CO2 Uptake"
                            :group-by :Treatment
                            :x-label "Grass Types" :y-label "Uptake"
                            :legend true)))

(def data (get-dataset :airline-passengers))
(view (stacked-area-chart :year :passengers :group-by :month :legend true :data data))

(with-data (get-dataset :airline-passengers)
  (view (stacked-area-chart :month :passengers :group-by :year :legend true)))

(def data (get-dataset :austres))
(view data)
(def plot (stacked-area-chart :year :population :group-by :quarter :legend true :data data))
(view plot)
(save plot "/tmp/austres_plot.png" :width 1000)
(view "file:///tmp/austres_plot.png")

(def seasons (mapcat identity (repeat 3 ["winter" "spring" "summer" "fall"])))
(def years (mapcat identity (repeat 4 [2007 2008 2009])))
(def values (sample-uniform 12 :integers true :max 100))
(view (stacked-area-chart years values :group-by seasons :legend true))

(view (stacked-area-chart ["a" "a" "b" "b" "c" "c"] [10 20 30 10 40 20]
                          :legend true
                          :group-by ["I" "II" "I" "II" "I" "II"]))
```

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2371)

---

#### stacked-area-chart*

```
(stacked-area-chart* categories values & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2328)
---

#### stacked-bar-chart (macro)

```
(stacked-bar-chart categories values & options)
```

Returns a JFreeChart object representing a stacked-bar-chart of the given data. Use the 'view' function to display the chart, or the 'save' function to write it to a file.

Arguments:
categories -- a sequence of categories
values -- a sequence of numeric values

Options:
:title (default '') main title
:x-label (default 'Categories')
:y-label (default 'Value')
:series-label
:legend (default false) prints legend
:vertical (default true) the orientation of the plot
:group-by (default nil) -- a vector of values used to group the values into series within each category.

See also: view and save

Examples:

```
(use '(incanter core stats charts datasets))

(with-data (get-dataset :co2)
  (view (stacked-bar-chart :Type :uptake
                           :title "CO2 Uptake"
                           :group-by :Treatment
                           :x-label "Grass Types" :y-label "Uptake"
                           :legend true)))

(def data (get-dataset :airline-passengers))
(view (stacked-bar-chart :year :passengers :group-by :month :legend true :data data))

(with-data (get-dataset :airline-passengers)
  (view (stacked-bar-chart :month :passengers :group-by :year :legend true)))

(def data (get-dataset :austres))
(view data)
(def plot (stacked-bar-chart :year :population :group-by :quarter :legend true :data data))
(view plot)
(save plot "/tmp/austres_plot.png" :width 1000)
(view "file:///tmp/austres_plot.png")

(def seasons (mapcat identity (repeat 3 ["winter" "spring" "summer" "fall"])))
(def years (mapcat identity (repeat 4 [2007 2008 2009])))
(def values (sample-uniform 12 :integers true :max 100))
(view (stacked-bar-chart years values :group-by seasons :legend true))

(view (stacked-bar-chart ["a" "b" "c"] [10 20 30]))

(view (stacked-bar-chart ["a" "a" "b" "b" "c" "c"] [10 20 30 10 40 20]
                         :legend true
                         :group-by ["I" "II" "I" "II" "I" "II"]))

;; add a series label
(def plot (stacked-bar-chart ["a" "b" "c"] [10 20 30] :legend true :series-label "s1"))
(view plot)
(add-categories plot ["a" "b" "c"] [5 25 40] :series-label "s2")

(view (stacked-bar-chart (sample "abcdefghij" :size 10 :replacement true)
                         (sample-uniform 10 :max 50) :legend true))
```

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2498)

---

#### stacked-bar-chart*

```
(stacked-bar-chart* categories values & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L2455)

---

#### time-series-plot (macro)

```
(time-series-plot x y & options)
```

Returns a JFreeChart object representing a time series plot of the given data. Use the 'view' function to display the chart, or the 'save' function to write it to a file. The sequence passed in for the x axis should be numbers of milliseconds from the epoch (1 January 1970).

Options:
:data (default nil) If the :data option is provided a dataset, column names can be used instead of sequences of data as arguments to xy-plot.
:title (default '') main title
:x-label (default x expression)
:y-label (default y expression)
:legend (default false) prints legend
:series-label (default x expression)
:group-by (default nil) -- a vector of values used to group the x and y values into series.

See also: view, save, add-points, add-lines

Examples:

```
(use '(incanter core stats charts))
(require '[clj-time.core :refer [date-time]])

;; plot numbers against years starting with 1900
(def dates (map #(-> (date-time (+ 1900 %)) .getMillis) (range 100)))
(def y (range 100))
(view (time-series-plot dates y :x-label "Year"))
```

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1373)
---

#### time-series-plot*

```
(time-series-plot* x y & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1370)

---

#### trace-plot

```
(trace-plot x & options)
```

Returns a trace-plot object; use the 'view' function to display it.

Options:
:data (default nil) If the :data option is provided a dataset, a column name can be used instead of a sequence of data for argument x.
:title (default 'Trace Plot') main title
:x-label (default 'Iteration')
:y-label (default 'Value')
:series-label (default 'Value')

Examples:

```
(use '(incanter core datasets stats bayes charts))

(def ols-data (to-matrix (get-dataset :survey)))
(def x (sel ols-data (range 0 2313) (range 1 10)))
(def y (sel ols-data (range 0 2313) 10))
(def sample-params (sample-model-params 5000 (linear-model y x :intercept false)))
(view (trace-plot (:var sample-params)))

(view (trace-plot (sel (:coefs sample-params) :cols 0)))
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L3288)

---

#### xy-plot (macro)

```
(xy-plot)
(xy-plot x y & options)
```

Returns a JFreeChart object representing an xy-plot of the given data. Use the 'view' function to display the chart, or the 'save' function to write it to a file.

Options:
:data (default nil) If the :data option is provided a dataset, column names can be used instead of sequences of data as arguments to xy-plot.
:title (default 'XY Plot') main title
:x-label (default x expression)
:y-label (default 'Frequency')
:legend (default false) prints legend
:series-label (default x expression)
:group-by (default nil) -- a vector of values used to group the x and y values into series.
:points (default false) includes point-markers
:auto-sort (default true) sort data by x

See also: view, save, add-points, add-lines

Examples:

```
(use '(incanter core stats charts))

;; plot the cosine function
(def x (range -1 5 0.01))
(def y (cos (mult 2 Math/PI x)))
(view (xy-plot x y))

;; plot gamma pdf with different parameters
(def x2 (range 0 20 0.1))
(def gamma-plot (xy-plot x2 (pdf-gamma x2 :shape 1 :scale 2)
                         :legend true
                         :title "Gamma PDF"
                         :y-label "Density"))
(view gamma-plot)
(add-lines gamma-plot x2 (pdf-gamma x2 :shape 2 :scale 2))
(add-lines gamma-plot x2 (pdf-gamma x2 :shape 3 :scale 2))
(add-lines gamma-plot x2 (pdf-gamma x2 :shape 5 :scale 1))
(add-lines gamma-plot x2 (pdf-gamma x2 :shape 9 :scale 0.5))

;; use :group-by option
(use '(incanter core charts datasets))
(with-data (get-dataset :chick-weight)
  (view (xy-plot :Time :weight :group-by :Chick)))

;; see INCANTER_HOME/examples/probability_plots.clj for more examples of plots
```

References:
<http://www.jfree.org/jfreechart/api/javadoc/>
<http://www.jfree.org/jfreechart/api/javadoc/org/jfree/chart/JFreeChart.html>

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1212)

---

#### xy-plot*

```
(xy-plot* x y & options)
```

[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-charts/src/incanter/charts.clj#L1209)
---
Package ‘rlog’ October 14, 2022

Title: A Simple, Opinionated Logging Utility
Version: 0.1.0
Description: A very lightweight package that writes out log messages in an opinionated way. Simpler and lighter than other logging packages, 'rlog' provides a compact feature set that focuses on getting the job done in a Unix-like way.
URL: https://github.com/sellorm/rlog
BugReports: https://github.com/sellorm/rlog/issues
License: MIT + file LICENSE
Encoding: UTF-8
LazyData: true
RoxygenNote: 7.1.1
Suggests: testthat (>= 3.0.0)
Config/testthat/edition: 3
NeedsCompilation: no
Author: <NAME> [aut, cre]
Maintainer: <NAME> <<EMAIL>>
Repository: CRAN
Date/Publication: 2021-02-24 09:20:05 UTC

R topics documented: log_debug, log_error, log_fatal, log_info, log_trace, log_warn

log_debug -- Log a debug message

Description: Log messages will only be emitted if the log priority matches or is higher than the priority of your message.

Usage: log_debug(message)

Arguments: message -- your message to log

Value: invisibly returns TRUE/FALSE

Examples:
## Not run:
log_debug("This is a debug message")
Sys.setenv("LOG_LEVEL" = "TRACE")
log_debug("This is a debug message")
## End(Not run)

log_error -- Log an error message

Description: Log messages will only be emitted if the log priority matches or is higher than the priority of your message.

Usage: log_error(message)

Arguments: message -- your message to log

Value: invisibly returns TRUE/FALSE

Examples:
## Not run:
log_error("This is an error message")
Sys.setenv("LOG_LEVEL" = "TRACE")
log_error("This is an error message")
## End(Not run)

log_fatal -- Log a fatal message

Description: Log messages will only be emitted if the log priority matches or is higher than the priority of your message.

Usage: log_fatal(message)

Arguments: message -- your message to log

Value: invisibly returns TRUE/FALSE

Examples:
## Not run:
log_fatal("This is a fatal message")
Sys.setenv("LOG_LEVEL" = "TRACE")
log_fatal("This is a fatal message")
## End(Not run)

log_info -- Log an info message

Description: Log messages will only be emitted if the log priority matches or is higher than the priority of your message.

Usage: log_info(message)

Arguments: message -- your message to log

Value: invisibly returns TRUE/FALSE

Examples:
## Not run:
log_info("This is an info message")
Sys.setenv("LOG_LEVEL" = "TRACE")
log_info("This is an info message")
## End(Not run)

log_trace -- Log a trace message

Description: Log messages will only be emitted if the log priority matches or is higher than the priority of your message.

Usage: log_trace(message)

Arguments: message -- your message to log

Value: invisibly returns TRUE/FALSE

Examples:
## Not run:
log_trace("This is a trace message")
Sys.setenv("LOG_LEVEL" = "TRACE")
log_trace("This is a trace message")
## End(Not run)

log_warn -- Log a warning message

Description: Log messages will only be emitted if the log priority matches or is higher than the priority of your message.

Usage: log_warn(message)

Arguments: message -- your message to log

Value: invisibly returns TRUE/FALSE

Examples:
## Not run:
log_warn("This is a warning message")
Sys.setenv("LOG_LEVEL" = "TRACE")
log_warn("This is a warning message")
## End(Not run)
---
Package ‘wfe’ October 12, 2022 Type Package Title Weighted Linear Fixed Effects Regression Models for Causal Inference Version 1.9.1 Date 2019-04-17 Description Provides a computationally efficient way of fitting weighted linear fixed effects estimators for causal inference with various weighting schemes. Weighted linear fixed effects estimators can be used to estimate the average treatment effects under different identification strategies. This includes stratified randomized experiments, matching and stratification for observational studies, first differencing, and difference-in-differences. The package implements methods described in Imai and Kim (2017) ``When should We Use Linear Fixed Effects Regression Models for Causal Inference with Longitudinal Data?'', available at <https://imai.fas.harvard.edu/research/FEmatch.html>. License GPL (>= 2) Imports utils, arm, Matrix, MASS, methods Depends R (>= 3.2.0) Encoding UTF-8 LazyData true BugReports https://github.com/insongkim/wfe/issues NeedsCompilation yes Author In <NAME> [aut, cre], <NAME> [aut] Maintainer In <NAME> <<EMAIL>> Repository CRAN Date/Publication 2019-04-17 21:50:03 UTC R topics documented: pwf... 2 wf... 7 pwfe Fitting the Weighted Fixed Effects Model with Propensity Score Weighting Description pwfe is used to fit weighted fixed effects model for causal inference after transforming outcome variable based on estimated propensity score. pwfe also derives the regression weights for different causal quantity of interest. Usage pwfe(formula, treat = "treat.name", outcome, data, pscore = NULL, unit.index, time.index = NULL, method = "unit", within.unit = TRUE, qoi = c("ate", "att"), estimator = NULL, C.it = NULL, White = TRUE, White.alpha = 0.05, hetero.se = TRUE, auto.se = TRUE, unbiased.se = FALSE, verbose = TRUE) Arguments formula a symbolic description of the model for estimating propensity score. The for- mula should not include dummmies for fixed effects. The details of model spec- ifications are given under ‘Details’. treat a character string indicating the name of treatment variable used in the models. The treatment should be binary indicator (integer with 0 for the control group and 1 for the treatment group). outcome a character string indicating the name of outcome variable. data data frame containing the variables in the model. pscore an optional character string indicating the name of estimated propensity score. Note that pre-specified propensity score should be bounded away from zero and one. unit.index a character string indicating the name of unit variable used in the models. The index of unit should be factor. time.index a character string indicating the name of time variable used in the models. The index of time should be factor. method method for weighted fixed effects regression, either unit for unit fixed effects; time for time fixed effects. The default is unit. within.unit a logical value indicating whether propensity score is estimated within unit. The default is TRUE. qoi one of "ate" or "att". The default is "ate". "fd" and "did" are not compati- ble with pwfe. estimator an optional character string "fd" indicating whether the first-difference estima- tor will be used. C.it an optional non-negative numeric vector specifying relative weights for each unit of analysis. White a logical value indicating whether White misspecification statistics should be calculated. The default is TRUE. White.alpha level of functional specification test. See White (1980) and Imai . The default is 0.05. 
hetero.se a logical value indicating whether heteroskedasticity across units is allowed in calculating standard errors. The default is TRUE. auto.se a logical value indicating whether arbitrary autocorrelation is allowed in calcu- lating standard errors. The default is TRUE. unbiased.se logical. If TRUE, bias-asjusted heteroskedasticity-robust standard errors are used. See Stock and Watson (2008). Should be used only for balanced panel. The default is FALSE. verbose logical. If TRUE, helpful messages along with a progress report of the weight calculation are printed on the screen. The default is TRUE. Details To fit the weighted unit (time) fixed effects model with propensity score weighting, use the syntax for the formula, ~ x1 + x2, where x1 and x2 are unit (time) varying covariates. One can provide his/her own estimated pscore which can be used to transform the outcome vari- albe. If so, one does not need to specify formula. If pscore is not provided, bayesglm will be used to estimate propensity scores. If within.unit = TRUE, propensity score will be separately estimated within time (unit) when method is unit (time). Otherwise, propensity score will be estimated on entire data at once. The estimated propensity scores will be used to transform the outcome variable as described in Imai and Kim (2018). pwfe calculates weights based on different underlying causal quantity of interest: Average Treat- ment Effect (qoi = "ate") or Average Treatment Effect for the Treated (qoi = "att"). One can further set estimating methods: First-Difference (estimator ="fd") or Difference-in- differences (estimator = "did"). To specify different ex-ante weights for each unit of analysis, use non-negative weights C.it. For instance, using the survey weights for C.it enables the estimation fo the average treatement effect for the target population. Value pwfe returns an object of class "pwfe", a list that contains the components listed below. The function summary (i.e., summary.pwfe) can be used to obtain a table of the results. coefficients a named vector of coefficients residuals the residuals, that is respons minus fitted values df the degree of freedom W weight matrix calculated from the model. Row and column indices can be found from unit.name, time.name. call the matched call causal causal quantity of interest estimator the estimating method unit.name a vector containing unique unit names unit.index a vector containing unique unit index number time.name a vector containing unique time names time.index a vector containing unique time index number method call of the method used vcov the variance covariance matrix White.alpha the alpha level for White specification test White.pvalue the p-value for White specification test White.stat the White statistics x the design matrix y the response vector mf the model frame Author(s) <NAME>, Massachusetts Institute of Technology, <<EMAIL>> and <NAME>, Prince- ton University, <<EMAIL>> References Imai, Kosuke and <NAME>. (2018) “When Should We Use Unit Fixed Effects Regression Mod- els for Causal Inference with Longitudinal Data?" American Journal of Political Science, Forthcom- ing. Stock, James and <NAME>. (2008) “Heteroskedasticity-Robust Standard Errors for Fixed Ef- fect Panel Data Regression” Econometrica, 76, 1. White, Halbert. (1980) ‘Using Least Squares to Approximate Unknown Regression Functions.” International Economic Review, 21, 1, 149–170. See Also wfe for fitting weighted fixed effect models. 
Examples ### NOTE: this example illustrates the use of wfe function with randomly ### generated panel data with arbitrary number of units and time. ## generate panel data with number of units = N, number of time = Time ## Not run: N <- 10 # number of distinct units Time <- 15 # number of distinct time ## generate treatment variable treat <- matrix(rbinom(N*Time, size = 1, 0.25), ncol = N) ## make sure at least one observation is treated for each unit while ((sum(apply(treat, 2, mean) == 0) > 0) | (sum(apply(treat, 2, mean) == 1) > 0) | (sum(apply(treat, 1, mean) == 0) > 0) | (sum(apply(treat, 1, mean) == 1) > 0)) { treat <- matrix(rbinom(N*Time, size = 1, 0.25), ncol = N) } treat.vec <- c(treat) ## unit fixed effects alphai <- rnorm(N, mean = apply(treat, 2, mean)) ## geneate two random covariates x1 <- matrix(rnorm(N*Time, 0.5,1), ncol=N) x2 <- matrix(rbeta(N*Time, 5,1), ncol=N) pscore <- matrix(runif(N*Time, 0,1), ncol=N) x1.vec <- c(x1) x2.vec <- c(x2) pscore <- c(pscore) ## generate outcome variable y <- matrix(NA, ncol = N, nrow = Time) for (i in 1:N) { y[, i] <- alphai[i] + treat[, i] + x1[,i] + x2[,i] + rnorm(Time) } y.vec <- c(y) ## generate unit and time index unit.index <- rep(1:N, each = Time) time.index <- rep(1:Time, N) Data.str <- as.data.frame(cbind(y.vec, treat.vec, unit.index, x1.vec, x2.vec)) colnames(Data.str) <- c("y", "tr", "strata.id", "x1", "x2") Data.obs <- as.data.frame(cbind(y.vec, treat.vec, unit.index, time.index, x1.vec, x2.vec, pscore)) colnames(Data.obs) <- c("y", "tr", "unit", "time", "x1", "x2", "pscore") ############################################################ # Example 1: Stratified Randomized Experiments ############################################################ ## run the weighted fixed effect regression with strata fixed effect. ## Note: the quantity of interest is Average Treatment Effect ("ate") ## and the standard errors allow heteroskedasticity and arbitrary ## autocorrelation. ### Average Treatment Effect ps.ate <- pwfe(~ x1+x2, treat = "tr", outcome = "y", data = Data.str, unit.index = "strata.id", method = "unit", within.unit = TRUE, qoi = "ate", hetero.se=TRUE, auto.se=TRUE) ## summarize the results summary(ps.ate) ### Average Treatment Effect for the Treated ps.att <- pwfe(~ x1+x2, treat = "tr", outcome = "y", data = Data.str, unit.index = "strata.id", method = "unit", within.unit = TRUE, qoi = "att", hetero.se=TRUE, auto.se=TRUE) ## summarize the results summary(ps.att) ############################################################ # Example 2: Observational Studies with Unit Fixed-effects ############################################################ ## run the weighted fixed effect regression with unit fixed effect. ## Note: the quantity of interest is Average Treatment Effect ("ate") ## and the standard errors allow heteroskedasticity and arbitrary ## autocorrelation. 
### Average Treatment Effect ps.obs <- pwfe(~ x1+x2, treat = "tr", outcome = "y", data = Data.obs, unit.index = "unit", time.index = "time", method = "unit", within.unit = TRUE, qoi = "ate", hetero.se=TRUE, auto.se=TRUE) ## summarize the results summary(ps.obs) ## extracting weigths summary(ps.obs)$Weights ### Average Treatment Effect with First-difference ps.fd <- pwfe(~ x1+x2, treat = "tr", outcome = "y", data = Data.obs, unit.index = "unit", time.index = "time", method = "unit", within.unit = TRUE, qoi = "ate", estimator = "fd", hetero.se=TRUE, auto.se=TRUE) ## summarize the results summary(ps.fd) ############################################################ # Example 3: Estimation with pre-specified propensity score ############################################################ ### Average Treatment Effect with Pre-specified Propensity Scores mod.ps <- pwfe(treat = "tr", outcome = "y", data = Data.obs, pscore = "pscore", unit.index = "unit", time.index = "time", method = "unit", within.unit = TRUE, qoi = "ate", hetero.se=TRUE, auto.se=TRUE) ## summarize the results summary(mod.ps) ## End(Not run) wfe Fitting the Weighted Fixed Effects Model for Causal Inference Description wfe is used to fit weighted fixed effects model for causal inference. wfe also derives the regression weights for different causal quantity of interest. Usage wfe(formula, data, treat = "treat.name", unit.index, time.index = NULL, method = "unit", dyad1.index = NULL, dyad2.index = NULL, qoi = "ate", estimator = NULL, C.it = NULL, hetero.se = TRUE, auto.se = TRUE, dyad.se = FALSE, White = TRUE, White.alpha = 0.05, verbose = TRUE, unbiased.se = FALSE, unweighted = FALSE, store.wdm = FALSE, maxdev.did = NULL, tol = sqrt(.Machine$double.eps)) Arguments formula a symbolic description of the model to be fitted. The formula should not include dummmies for fixed effects. The details of model specifications are given under ‘Details’. data data frame containing the variables in the model. treat a character string indicating the name of treatment variable used in the models. The treatment should be binary indicator (integer with 0 for the control group and 1 for the treatment group). unit.index a character string indicating the name of unit variable used in the models. The index of unit should be factor. time.index a character string indicating the name of time variable used in the models. The index of time should be factor. method method for weighted fixed effects regression, either unit for unit fixed effects; time for time fixed effects. The default is unit. For two-way weighted fixed effects regression models, set method to the default value unit. dyad1.index a character string indicating the variable name of first unit of a given dyad. The default is NULL. This is required to calculate robust standard errors with dyadic data. dyad2.index a character string indicating the variable name of second unit of a given dyad. The default is NULL. This is required to calculate robust standard errors with dyadic data. qoi one of "ate" or "att". The default is "ate". If set to "att" in implement- ing "fd" and "did" estimators, the comparison of the treated observation is restricted to the control observation from the previous time period but not with the control observation from the next time period. estimator an optional character string indicating the estimating method. One of "fd", "did", or "Mdid". "fd" is for First-Difference Design. "did" is for multi- period Difference-in-Differences design. The default is NULL. 
Setting estimator to be "Mdid" implements the Difference-in-Differences design with Matching on the pretreatment outcome variables. C.it an optional non-negative numeric vector specifying relative weights for each unit of analysis. If not specified, the weights will be calculated based on the estimator and quantity of interest. hetero.se a logical value indicating whether heteroskedasticity across units is allowed in calculating standard errors. The default is TRUE. auto.se a logical value indicating whether arbitrary autocorrelation is allowed in calcu- lating standard errors. The default is TRUE. dyad.se a logical value indicating whether correlations across dyads exist. The default is FALSE. White a logical value indicating whether White misspecification statistics should be calculated. The default is TRUE. White.alpha level of functional specification test. See White (1980) and Imai and Kim (2018). The default is 0.05. verbose logical. If TRUE, helpful messages along with a progress report of the weight calculation are printed on the screen. The default is TRUE. unbiased.se logical. If TRUE, bias-asjusted heteroskedasticity-robust standard errors are used. See Stock and Watson (2008). Should be used only for balanced panel. The default is FALSE. unweighted logical. If TRUE, standard unweighted fixed effects model is estimated. The de- fault is FALSE. Note: users do not need to specify qoi when unweighted=TRUE. For standard two-way fixed effects model (unit and time), set estimator="did" and unweighted="TRUE". store.wdm logical. If TRUE, weighted demeaned dataframe will be stored. The default is FALSE. maxdev.did an optional positive numeric value specifying the maximum deviation in pre- treatment outcome when "Mdid" is implemented. The default is NULL, which implements nearest-neighbor matching. tol a relative tolerance to detect zero singular values for generalized inverse. The default is sqrt(.Machine$double.eps) Details To fit the weighted unit (time) fixed effects model, use the syntax for the formula, y ~ x1 + x2, where y is a dependent variable and x1 and x2 are unit (time) varying covariates. wfe calculates weights based on different underlying causal quantity of interest: Average Treatment Effect (qoi = "ate") or Average Treatment Effect for the Treated (qoi = "att"). One can further set estimating methods: First-Difference (estimator ="fd") or Difference-in- differences (estimator = "did"). For the two-way fixed effects model, set estimator = "did" To specify different ex-ante weights for each unit of analysis, use non-negative weights C.it. For instance, using the survey weights for C.it enables the estimation fo the average treatement effect for the target population. An object of class "wfe" contains vectors of unique unit(time) names and unique unit(time) indices. Value wfe returns an object of class "wfe", a list that contains the components listed below. The function summary (i.e., summary.wfe) can be used to obtain a table of the results. coefficients a named vector of coefficients residuals the residuals, that is respons minus fitted values df the degree of freedom W a dataframe containing unit and time indices along with the weights used for the observation. If method=unit, integer numbers corresponding to the order of input data will be used for generating time index. 
Num.nonzero  the number of observations with non-zero weights
call         the matched call
causal       the causal quantity of interest
estimator    the estimating method
units        a dataframe containing the unit names used for W
times        a dataframe containing the time names used for W
method       call of the method used
vcov         the variance-covariance matrix
White.alpha  the alpha level for the White specification test
White.pvalue the p-value for the White specification test
White.stat   the White statistic
X            the design matrix
Y            the response vector
X.wdm        the demeaned design matrix
Y.wdm        the demeaned response vector
mf           the model frame, where the last column is the weights used for the analysis

Author(s)

<NAME>, Massachusetts Institute of Technology, <<EMAIL>> and <NAME>, Princeton University, <<EMAIL>>

References

Imai, Kosuke and <NAME>. (2018) "When Should We Use Unit Fixed Effects Regression Models for Causal Inference with Longitudinal Data?" American Journal of Political Science, Forthcoming.

Aronow, <NAME>., <NAME>, and <NAME> (2015) "Cluster-robust Variance Estimation for Dyadic Data." Political Analysis 23, no. 4, 564-577.

Stock, James and <NAME>. (2008) "Heteroskedasticity-Robust Standard Errors for Fixed Effect Panel Data Regression." Econometrica, 76, 1.

White, Halbert. (1980) "Using Least Squares to Approximate Unknown Regression Functions." International Economic Review, 21, 1, 149-170.

See Also

pwfe for fitting weighted fixed effects models with propensity score weighting

Examples

### NOTE: this example illustrates the use of the wfe function with randomly
### generated panel data with an arbitrary number of units and time periods.

## generate panel data with number of units = N, number of time periods = Time
N <- 10    # number of distinct units
Time <- 15 # number of distinct time periods

## treatment effect
beta <- 1

## generate treatment variable
treat <- matrix(rbinom(N*Time, size = 1, 0.25), ncol = N)
## make sure at least one observation is treated for each unit
while ((sum(apply(treat, 2, mean) == 0) > 0) | (sum(apply(treat, 2, mean) == 1) > 0) |
       (sum(apply(treat, 1, mean) == 0) > 0) | (sum(apply(treat, 1, mean) == 1) > 0)) {
  treat <- matrix(rbinom(N*Time, size = 1, 0.25), ncol = N)
}
treat.vec <- c(treat)

## unit fixed effects
alphai <- rnorm(N, mean = apply(treat, 2, mean))

## generate two random covariates
x1 <- matrix(rnorm(N*Time, 0.5,1), ncol=N)
x2 <- matrix(rbeta(N*Time, 5,1), ncol=N)
x1.vec <- c(x1)
x2.vec <- c(x2)

## generate outcome variable
y <- matrix(NA, ncol = N, nrow = Time)
for (i in 1:N) {
  y[, i] <- alphai[i] + treat[, i] + x1[,i] + x2[,i] + rnorm(Time)
}
y.vec <- c(y)

## generate unit and time index
unit.index <- rep(1:N, each = Time)
time.index <- rep(1:Time, N)

Data.str <- as.data.frame(cbind(y.vec, treat.vec, unit.index, x1.vec, x2.vec))
colnames(Data.str) <- c("y", "tr", "strata.id", "x1", "x2")
Data.obs <- as.data.frame(cbind(y.vec, treat.vec, unit.index, time.index, x1.vec, x2.vec))
colnames(Data.obs) <- c("y", "tr", "unit", "time", "x1", "x2")

############################################################
# Example 1: Stratified Randomized Experiments
############################################################

## run the weighted fixed effects regression with strata fixed effects.
## Note: the quantity of interest is the Average Treatment Effect ("ate")
## and the standard errors allow heteroskedasticity and arbitrary
## autocorrelation.
### Average Treatment Effect
mod.ate <- wfe(y~ tr+x1+x2, data = Data.str, treat = "tr",
               unit.index = "strata.id", method = "unit",
               qoi = "ate", hetero.se=TRUE, auto.se=TRUE)
## summarize the results
summary(mod.ate)

### Average Treatment Effect for the Treated
mod.att <- wfe(y~ tr+x1+x2, data = Data.str, treat = "tr",
               unit.index = "strata.id", method = "unit",
               qoi = "att", hetero.se=TRUE, auto.se=TRUE)
## summarize the results
summary(mod.att)

############################################################
# Example 2: Observational Studies with Unit Fixed-effects
############################################################

## run the weighted fixed effects regression with unit fixed effects.
## Note: the quantity of interest is the Average Treatment Effect ("ate")
## and the standard errors allow heteroskedasticity and arbitrary
## autocorrelation.

mod.obs <- wfe(y~ tr+x1+x2, data = Data.obs, treat = "tr",
               unit.index = "unit", time.index = "time", method = "unit",
               qoi = "ate", hetero.se=TRUE, auto.se=TRUE,
               White = TRUE, White.alpha = 0.05)
## summarize the results
summary(mod.obs)
## extracting weights
summary(mod.obs)$W

## Not run:
###################################################################
# Example 3: Observational Studies with Difference-in-Differences
###################################################################

## run the difference-in-differences estimator.
## Note: the quantity of interest is the Average Treatment Effect ("ate")
## and the standard errors allow heteroskedasticity and arbitrary
## autocorrelation.

mod.did <- wfe(y~ tr+x1+x2, data = Data.obs, treat = "tr",
               unit.index = "unit", time.index = "time", method = "unit",
               qoi = "ate", estimator ="did", hetero.se=TRUE, auto.se=TRUE,
               White = TRUE, White.alpha = 0.05, verbose = TRUE)
## summarize the results
summary(mod.did)
## extracting weights
summary(mod.did)$W

#########################################################################
# Example 4: DID with Matching on Pre-treatment Outcomes
#########################################################################

## implements matching on pre-treatment outcomes where the maximum
## deviation is specified as 0.5

mod.Mdid <- wfe(y~ tr+x1+x2, data = Data.obs, treat = "tr",
                unit.index = "unit", time.index = "time", method = "unit",
                qoi = "ate", estimator ="Mdid", hetero.se=TRUE, auto.se=TRUE,
                White = TRUE, White.alpha = 0.05, maxdev.did = 0.5, verbose = TRUE)
## summarize the results
summary(mod.Mdid)

## Note: setting the maximum deviation to infinity (or any value
## bigger than the maximum pair-wise difference in the outcome) will
## return the same result as Example 3.
dev <- 1000+max(Data.obs$y)-min(Data.obs$y)
mod.did2 <- wfe(y~ tr+x1+x2, data = Data.obs, treat = "tr",
                unit.index = "unit", time.index = "time", method = "unit",
                qoi = "ate", estimator ="Mdid", hetero.se=TRUE, auto.se=TRUE,
                White = TRUE, White.alpha = 0.05, maxdev.did = dev, verbose = TRUE)
## summarize the results
summary(mod.did2)
mod.did2$coef[1] == mod.did$coef[1]

## End(Not run)
github.com/hahahrfool/v2ray_simple
go
Go
README [¶](#section-readme)
---

![GoVersion](https://img.shields.io/github/go-mod/go-version/hahahrfool/v2ray_simple?style=flat-square) [![GoDoc](https://godoc.org/github.com/hahahrfool/v2ray_simple?status.svg)](https://godoc.org/github.com/hahahrfool/v2ray_simple) [![MIT licensed](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/hahahrfool/v2ray_simple/blob/v1.1.8/LICENSE) [![Go Report Card](https://goreportcard.com/badge/github.com/hahahrfool/v2ray_simple)](https://goreportcard.com/report/github.com/hahahrfool/v2ray_simple) [![Downloads](https://img.shields.io/github/downloads/hahahrfool/v2ray_simple/total.svg)](https://github.com/hahahrfool/v2ray_simple/releases/latest) [![release](https://img.shields.io/github/release/hahahrfool/v2ray_simple/all.svg?style=flat-square)](https://github.com/hahahrfool/v2ray_simple/releases/latest)

Documentation [¶](#section-documentation)
---

### Overview [¶](#pkg-overview)

* [Config Format](#hdr-Config_Format______)

Package main reads the configuration file, converts its contents into proxy.Client and proxy.Server objects, and then performs the proxy forwarding. Use --help to see the details of the command-line flags. If something can only be set on the command line and not in the standard configuration, it is an advanced option, a discouraged option, or a feature still under development.

#### Config Format [¶](#hdr-Config_Format______)

There are three configuration formats: verysimple mode, standard mode, and compatible mode.

In "verysimple mode" there is exactly one inbound and one outbound, and both are configured with the URL format of a share link. Standard mode uses the TOML format. Compatible mode is compatible with v2ray's existing JSON format (not yet implemented). The idea behind verysimple mode is to keep the configuration file as short and concise as possible.

There is also a command-line mode, which simply puts the verysimple-mode URLs into the command-line arguments, for example:

```
verysimple -L socks5://sfdfsaf -D direct://
```

Structure of the project:

```
main -> proxy.Standard (config file) -> netLayer -> tlsLayer -> httpLayer -> advLayer -> proxy.

In main.go, the configuration file is read and the Dial, Listen, RoutePolicy,
Fallback and other objects are generated; then listening starts, optionally
also enabling the interactive mode and the apiServer.

The call chain of the actual forwarding process is:

listenSer -> handleNewIncomeConnection -> handshakeInserver_and_passToOutClient
 -> handshakeInserver -> passToOutClient ( -> checkfallback)
 -> dialClient_andRelay -> dialClient ( -> dialInnerMux ),
 netLayer.Relay / netLayer.RelayUDP

netLayer handles routing, tlsLayer sniffs TLS, and httpLayer handles fallback,
optionally passing through ws/grpc; once everything is set up, the connection
is handed to proxy and forwarding begins.

For a "super protocol" such as quic, which itself handles every stage from the
transport layer up to the advanced layer, the connection is handed directly to
quic after routing, and once the quic handshake is established it is passed
straight to proxy.
```
segen
cran
R
Package ‘segen’
October 14, 2022

Type Package
Title Sequence Generalization Through Similarity Network
Version 1.1.0
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Proposes an application for sequence prediction generalizing the similarity within the network of previous sequences.
License GPL-3
Encoding UTF-8
LazyData true
RoxygenNote 7.1.1
Depends R (>= 3.6)
Imports purrr (>= 0.3.4), ggplot2 (>= 3.3.5), readr (>= 2.1.2), lubridate (>= 1.7.10), imputeTS (>= 3.2), fANCOVA (>= 0.6-1), scales (>= 1.1.1), tictoc (>= 1.0.1), modeest (>= 2.4.0), moments (>= 0.14), greybox (>= 1.0.1), philentropy (>= 0.5.0), entropy (>= 1.3.1), Rfast (>= 2.0.6), narray (>= 0.4.1.1), fastDummies (>= 1.6.3)
URL https://rpubs.com/giancarlo_vercellino/segen
NeedsCompilation no
Repository CRAN
Date/Publication 2022-08-15 19:30:02 UTC

R topics documented:
segen
time_features

segen

Description

Sequence Generalization Through Similarity Network

Usage

segen(
  df,
  seq_len = NULL,
  similarity = NULL,
  dist_method = NULL,
  rescale = NULL,
  smoother = FALSE,
  ci = 0.8,
  error_scale = "naive",
  error_benchmark = "naive",
  n_windows = 10,
  n_samp = 30,
  dates = NULL,
  seed = 42
)

Arguments

df               A data frame with time features on columns. They could be numeric
                 variables or categorical, but not both.
seq_len          Positive integer. Time-step number of the forecasting sequence. Default:
                 NULL (automatic selection between 2 and the max limit).
similarity       Positive numeric. Degree of similarity between two sequences, based on
                 quantile conversion of distance. Default: NULL (automatic selection
                 between 0.01, maximal difference, and 0.99, minimal difference).
dist_method      String. Method for calculating the distance among sequences. Available
                 options are: "euclidean", "manhattan", "maximum", "minkowski". Default:
                 NULL (random search).
rescale          Logical. Flag to TRUE for min-max scaling of distances. Default: NULL
                 (random search).
smoother         Logical. Flag to TRUE for loess smoothing. Default: FALSE.
ci               Confidence interval for prediction. Default: 0.8.
error_scale      String. Scale for the scaled error metrics (for continuous variables).
                 Two options: "naive" (average of the naive one-step absolute error for
                 the historical series) or "deviation" (standard error of the historical
                 series). Default: "naive".
error_benchmark  String. Benchmark for the relative error metrics (for continuous
                 variables). Two options: "naive" (sequential extension of the last
                 value) or "average" (mean value of the true sequence). Default: "naive".
n_windows        Positive integer. Number of validation windows to test prediction
                 error. Default: 10.
n_samp           Positive integer. Number of samples for random search. Default: 30.
dates            Date. Vector with dates for time features.
seed             Positive integer. Random seed. Default: 42.
Value

This function returns a list including:

• exploration: list of all not-null models, complete with predictions and error metrics
• history: a table with the sampled models, hyper-parameters, and validation errors
• best_model: results for the best selected model according to the weighted average rank, including:
  – predictions: for continuous variables, min, max, q25, q50, q75, quantiles at the selected ci, mean, sd, mode, skewness, kurtosis, IQR to range, risk ratio, upside probability and divergence for each point of the predicted sequences; for factor variables, min, max, q25, q50, q75, quantiles at the selected ci, proportions, difformity (deviation of proportions normalized over the maximum possible deviation), entropy, upgrade probability and divergence for each point of the predicted sequences
  – testing_errors: testing errors for each time feature for the best selected model (for continuous variables: me, mae, mse, rmsse, mpe, mape, rmae, rrmse, rame, mase, smse, sce, gmrae; for factor variables: czekanowski, tanimoto, cosine, hassebrook, jaccard, dice, canberra, gower, lorentzian, clark)
  – plots: standard plots with confidence intervals for each time feature
• time_log

Author(s)

<NAME> <<EMAIL>>

See Also

Useful links:
• https://rpubs.com/giancarlo_vercellino/segen

Examples

segen(time_features[, 1, drop = FALSE], seq_len = 30, similarity = 0.7,
      n_windows = 3, n_samp = 1)

time_features    time features example: IBM and Microsoft Close Prices

Description

A data frame with daily prices for IBM and Microsoft since April 2020.

Usage

time_features

Format

A data frame with 2 columns and 1324 rows.

Source

finance.yahoo.com
github.com/flyteorg/flyte/flyteadmin
go
Go
README [¶](#section-readme)
---

### FlyteAdmin

[![Current Release](https://img.shields.io/github/release/flyteorg/flyteadmin.svg)](https://github.com/flyteorg/flyteadmin/releases/latest) ![Master](https://github.com/flyteorg/flyteadmin/workflows/Master/badge.svg) [![GoDoc](https://godoc.org/github.com/flyteorg/flyteadmin?status.svg)](https://pkg.go.dev/mod/github.com/flyteorg/flyteadmin) [![License](https://img.shields.io/badge/LICENSE-Apache2.0-ff69b4.svg)](http://www.apache.org/licenses/LICENSE-2.0.html) [![CodeCoverage](https://img.shields.io/codecov/c/github/flyteorg/flyteadmin.svg)](https://codecov.io/gh/flyteorg/flyteadmin) [![Go Report Card](https://goreportcard.com/badge/github.com/flyteorg/flyteadmin)](https://goreportcard.com/report/github.com/flyteorg/flyteadmin) ![Commit activity](https://img.shields.io/github/commit-activity/w/flyteorg/flyteadmin.svg?style=plastic) ![Commit since last release](https://img.shields.io/github/commits-since/flyteorg/flyteadmin/latest.svg?style=plastic) [![Slack](https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social)](https://slack.flyte.org)
@jsonak/hr-ui
npm
JavaScript
cube-ui
===

> A fantastic mobile UI lib implemented with Vue.

### Links

* [Home](https://didi.github.io/cube-ui/)
* [Docs](https://didi.github.io/cube-ui/#/en-US/docs)
* [Example](https://didi.github.io/cube-ui/example/)
* [Application Guide](https://github.com/cube-ui/cube-application-guide)

### New cube-ui project?

We recommend using the [CLI tools](https://github.com/cube-ui/cube-template), based on [vue-cli](https://github.com/vuejs/vue-cli), to initialize the config and base code:

```
$ vue init cube-ui/cube-template projectname
```

### Install

```
npm install cube-ui --save
```

### Usage

```
import Vue from 'vue'
import Cube from 'cube-ui'

Vue.use(Cube)
```

#### Use modularized cube-ui

```
import Vue from 'vue'
import {
  /* eslint-disable no-unused-vars */
  Style,
  Button,
  ActionSheet
} from 'cube-ui'

Vue.use(Button)
Vue.use(ActionSheet)
```

For more information, please refer to [Quick Start](https://didi.github.io/cube-ui/#/en-US/docs/quick-start)

### ToDo

* More components
* Support theme

### Development

```
git clone <EMAIL>:didi/cube-ui.git
cd cube-ui
npm install
npm run dev
# or run document development
npm run doc-dev
```

### Changelog

Detailed changes for each release are documented in the [release notes](https://github.com/didi/cube-ui/releases).

### Keywords

* @jsonak/hr-ui
* vue
* components
django-plotly-dash
readthedoc
Python
django-plotly-dash documentation

[django-plotly-dash](index.html#document-index)
---

django-plotly-dash[¶](#django-plotly-dash)
===

[Plotly Dash](https://dash.plot.ly/) applications served up in Django templates using tags.

Contents[¶](#contents)
---

### Introduction[¶](#introduction)

The purpose of `django-plotly-dash` is to enable [Plotly Dash](https://dash.plot.ly) applications to be served up as part of a [Django](https://www.djangoproject.com/) application, in order to provide these features:

* Multiple dash applications can be used on a single page
* Separate instances of a dash application can persist, along with internal state
* Leverage user management, access control, and other parts of the Django infrastructure
* Consolidate into a single server process to simplify scaling

There is nothing here that cannot be achieved through expanding the Flask app around Plotly Dash, or indeed by using an alternative web framework. The purpose of this project is to enable the above features, given that the choice to use Django has already been made.

The source code can be found in [this github repository](https://github.com/GibbsConsulting/django-plotly-dash). This repository also includes a self-contained demo application, which can also be viewed [online](https://djangoplotlydash.com).

#### Overview[¶](#overview)

`django_plotly_dash` works by wrapping around the `dash.Dash` object. The http endpoints exposed by the `Dash` application are mapped to Django ones, and an application is embedded into a webpage through the use of a template tag. Multiple `Dash` applications can be used in a single page.

A subset of the internal state of a `Dash` application can be persisted as a standard Django model instance, and the application with this internal state is then available at its own URL. This can then be embedded into one or more pages in the same manner as described above for stateless applications.

Also, an enhanced version of the `Dash` callback is provided, giving the callback access to the current User, the current session, and also the model instance associated with the application's internal state.

This package is compatible with version 2.0 onwards of Django. Use of the [live updating](index.html#updating) feature requires the Django Channels extension; in turn this requires a suitable messaging backend such as Redis.

### Installation[¶](#installation)

The package requires version 3.2 or greater of Django, and a minimum Python version of 3.8.

Use `pip` to install the package, preferably to a local `virtualenv`:

```
pip install django_plotly_dash
```

Then, add `django_plotly_dash` to `INSTALLED_APPS` in the Django `settings.py` file:

```
INSTALLED_APPS = [
    ...
    'django_plotly_dash.apps.DjangoPlotlyDashConfig',
    ...
]
```

The project directory name `django_plotly_dash` can also be used on its own if preferred, but this will stop the use of readable application names in the Django admin interface.

Also, enable the use of frames within HTML documents by adding to the `settings.py` file:

```
X_FRAME_OPTIONS = 'SAMEORIGIN'
```

Further, if the [header and footer](index.html#plotly-header-footer) tags are in use then `django_plotly_dash.middleware.BaseMiddleware` should be added to `MIDDLEWARE` in the same file. This can be safely added now even if not used.
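For example, a minimal sketch of that `MIDDLEWARE` entry, assuming the standard Django middleware stack is already in place:

```
MIDDLEWARE = [
    ...
    # Added for django-plotly-dash header/footer support; harmless if unused
    'django_plotly_dash.middleware.BaseMiddleware',
]
```

A fuller middleware listing, including the optional external-redirection middleware, is shown in the configuration section.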
If assets are being served locally, through the use of the global `serve_locally` setting or on a per-app basis, then `django_plotly_dash.middleware.ExternalRedirectionMiddleware` should be added, along with the `whitenoise` package, whose middleware should also be added as per the instructions for that package. In addition, `dpd_static_support` should be added to the `INSTALLED_APPS` setting.

The application's routes need to be registered within the routing structure by an appropriate `include` statement in a `urls.py` file:

```
urlpatterns = [
    ...
    path('django_plotly_dash/', include('django_plotly_dash.urls')),
]
```

The name within the URL is not important and can be changed.

For the final installation step, a migration is needed to update the database:

```
./manage.py migrate
```

It is important to ensure that any applications are registered using the `DjangoDash` class. This means that any python module containing the registration code has to be known to Django and loaded at the appropriate time.

Note: An easy way to register the Plotly app is to import it into `views.py` or `urls.py`, as in the following example, which assumes the `plotly_app` module (`plotly_app.py`) is located in the same folder as `views.py`:

```
from . import plotly_app
```

Once your Plotly app is registered, the `plotly_app` tag in the `plotly_dash` tag library can then be used to render it as a dash component. See [Simple usage](index.html#simple-use) for a simple example.

#### Extra steps for live state[¶](#extra-steps-for-live-state)

The live updating of application state uses the Django [Channels](https://channels.readthedocs.io/en/latest/index.html) project and a suitable message-passing backend. The included demonstration uses `Redis`:

```
pip install channels daphne redis django-redis channels-redis
```

A standard installation of the Redis package is required. Assuming the use of `docker` and the current production version:

```
docker pull redis:4
docker run -p 6379:6379 -d redis
```

The `prepare_redis` script in the root of the repository performs these steps. This will launch a container running on the localhost.

Following the channels documentation, as well as adding `channels` to the `INSTALLED_APPS` list, a `CHANNEL_LAYERS` entry in `settings.py` is also needed:

```
INSTALLED_APPS = [
    ...
    'django_plotly_dash.apps.DjangoPlotlyDashConfig',
    'channels',
    ...
]

CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            'hosts': [('127.0.0.1', 6379),],
        },
    },
}
```

The host and port entries in `hosts` should be adjusted to match the network location of the Redis instance.

#### Further configuration[¶](#further-configuration)

Further configuration options can be specified through the optional `PLOTLY_DASH` settings variable. The available options are detailed in the [configuration](index.html#configuration) section. This includes arranging for Dash assets to be served using the Django `staticfiles` functionality.

A checklist for using `dash-bootstrap-components` can be found in the [bootstrap](index.html#bootstrap) section.

#### Source code and demo[¶](#source-code-and-demo)

The source code repository contains a [simple demo](index.html#demo-notes) application.
To install and run it:

```
git clone https://github.com/GibbsConsulting/django-plotly-dash.git

cd django-plotly-dash

./make_env      # sets up a virtual environment
                # with direct use of the source
                # code for the package

./prepare_redis # downloads a redis docker container
                # and launches it with default settings
                # *THIS STEP IS OPTIONAL*

./prepare_demo  # prepares and launches the demo
                # using the Django debug server
                # at http://localhost:8000
```

This will launch a simple Django application. A superuser account is also configured, with both username and password set to `admin`.

If the `prepare_redis` step is skipped then the fourth demo page, exhibiting live updating, will not work.

More details on setting up a development environment, which is also sufficient for running the demo, can be found in the [development](index.html#development) section. Note that the current demo, along with the codebase, is in a prerelease and very raw form. An overview can be found in the [demonstration application](index.html#demo-notes) section.

### Simple usage[¶](#simple-usage)

To use existing dash applications, first register them using the `DjangoDash` class. This replaces the `Dash` class from the `dash` package.

Taking a simple example inspired by the excellent [getting started](https://dash.plot.ly/getting-started-part-2) guide:

```
import dash
from dash import dcc, html

from django_plotly_dash import DjangoDash

app = DjangoDash('SimpleExample')   # replaces dash.Dash

app.layout = html.Div([
    dcc.RadioItems(
        id='dropdown-color',
        options=[{'label': c, 'value': c.lower()} for c in ['Red', 'Green', 'Blue']],
        value='red'
    ),
    html.Div(id='output-color'),
    dcc.RadioItems(
        id='dropdown-size',
        options=[{'label': i, 'value': j} for i, j in [('L','large'), ('M','medium'), ('S','small')]],
        value='medium'
    ),
    html.Div(id='output-size')
])

@app.callback(
    dash.dependencies.Output('output-color', 'children'),
    [dash.dependencies.Input('dropdown-color', 'value')])
def callback_color(dropdown_value):
    return "The selected color is %s." % dropdown_value

@app.callback(
    dash.dependencies.Output('output-size', 'children'),
    [dash.dependencies.Input('dropdown-color', 'value'),
     dash.dependencies.Input('dropdown-size', 'value')])
def callback_size(dropdown_color, dropdown_size):
    return "The chosen T-shirt is a %s %s one." % (dropdown_size, dropdown_color)
```

Note that the `DjangoDash` constructor requires a name to be specified. This name is then used to identify the dash app in [templates](index.html#template-tags):

```
{%load plotly_dash%}
{%plotly_app name="SimpleExample"%}
```

Direct use in this manner, without any application state or use of live updating, is equivalent to inserting an `iframe` containing the URL of a `Dash` application.

Note: The registration code needs to be in a location that will be imported into the Django process before any model or template tag attempts to use it. The example Django application in the demo subdirectory achieves this through an import in the main `urls.py` file, but any `views.py` would also be sufficient.

### Django models and application state[¶](#django-models-and-application-state)

The `django_plotly_dash` application defines `DashApp` and `StatelessApp` models.

#### The `StatelessApp` model[¶](#the-statelessapp-model)

An instance of the `StatelessApp` model represents a single dash application.
Every instantiation of a `DjangoDash` object is registered, and any object that is referenced through the `DashApp` model - this includes all template access as well as model instances themselves - causes a `StatelessApp` model instance to be created if one does not already exist.

```
class StatelessApp(models.Model):
    '''
    A stateless Dash app. An instance of this model represents a dash app
    without any specific state
    '''

    app_name = models.CharField(max_length=100, blank=False, null=False, unique=True)
    slug = models.SlugField(max_length=110, unique=True, blank=True)

    def as_dash_app(self):
        '''
        Return a DjangoDash instance of the dash application
        '''
```

The main role of a `StatelessApp` instance is to manage access to the associated `DjangoDash` object, as exposed through the `as_dash_app` member function.

In the Django admin, an action is provided to check all of the known stateless instances. Those that cannot be instantiated are logged; this is a useful quick check to see which apps are available. Also, in the same admin an additional button is provided to create `StatelessApp` instances for any known instance that does not have an ORM entry.

#### The `DashApp` model[¶](#the-dashapp-model)

An instance of the `DashApp` model represents an instance of application state.

```
class DashApp(models.Model):
    '''
    An instance of this model represents a Dash application and its internal state
    '''

    stateless_app = models.ForeignKey(StatelessApp, on_delete=models.PROTECT,
                                      unique=False, null=False, blank=False)
    instance_name = models.CharField(max_length=100, unique=True, blank=True, null=False)
    slug = models.SlugField(max_length=110, unique=True, blank=True)
    base_state = models.TextField(null=False, default="{}")
    creation = models.DateTimeField(auto_now_add=True)
    update = models.DateTimeField(auto_now=True)
    save_on_change = models.BooleanField(null=False, default=False)

    ... methods, mainly for managing the Dash application state ...

    def current_state(self):
        '''
        Return the current internal state of the model instance
        '''

    def update_current_state(self, wid, key, value):
        '''
        Update the current internal state, ignoring non-tracked objects
        '''

    def populate_values(self):
        '''
        Add values from the underlying dash layout configuration
        '''
```

The `stateless_app` references an instance of the `StatelessApp` model described above. The `slug` field provides a unique identifier that is used in URLs to identify the instance of an application, and also its associated server-side state.

The persisted state of the instance is contained, serialised as JSON, in the `base_state` variable. This is an arbitrary subset of the internal state of the object. Whenever a `Dash` application requests its state (through the `<app slug>_dash-layout` url), any values from the underlying application that are present in `base_state` are overwritten with the persisted values.

The `populate_values` member function can be used to insert all possible initial values into `base_state`. This functionality is also exposed in the Django admin for these model instances, as a `Populate app` action.

From callback code, the `update_current_state` method can be called to change the initial value of any variable tracked within the `base_state`. Variables not tracked will be ignored. This function is automatically called for any callback argument and return value.
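As an illustrative sketch (the component ids `my-dropdown` and `my-output` are hypothetical, and the `dash_app` extra argument, described in the next section, is only supplied for stateful apps), a callback could persist an additional tracked value through the model instance:

```
@app.callback(
    dash.dependencies.Output('my-output', 'children'),
    [dash.dependencies.Input('my-dropdown', 'value')])
def callback_persist(value, dash_app=None, **kwargs):
    # dash_app is the DashApp model instance; update_current_state
    # only has an effect for variables tracked within base_state
    if dash_app is not None:
        dash_app.update_current_state('my-dropdown', 'value', value)
    return "Selected %s" % value
```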
Finally, after any callback has finished, and after any result stored through `update_current_state`, the application model instance will be persisted by means of a call to its `save` method, if any changes have been detected and the `save_on_change` flag is `True`.

### Extended callback syntax[¶](#extended-callback-syntax)

The `DjangoDash` class allows callbacks to request extra arguments when registered. To do this, simply add to your callback function the extra arguments you would like to receive after the usual parameters for your `Input` and `State`. This will cause these callbacks registered with this application to receive extra parameters in addition to their usual callback parameters.

If you specify a `kwargs` in your callback, it will receive all possible extra parameters (see below for a list). If you explicitly specify extra parameters from the list below, only these will be passed to your callback.

For example, the `plotly_apps.py` example contains this dash application:

```
import dash
from dash import dcc, html

from django_plotly_dash import DjangoDash

a2 = DjangoDash("Ex2")

a2.layout = html.Div([
    dcc.RadioItems(id="dropdown-one",
                   options=[{'label':i,'value':j} for i,j in [
                       ("O2","Oxygen"),("N2","Nitrogen"),("CO2","Carbon Dioxide")]],
                   value="Oxygen"),
    html.Div(id="output-one")
])

@a2.callback(
    dash.dependencies.Output('output-one','children'),
    [dash.dependencies.Input('dropdown-one','value')]
)
def callback_c(*args, **kwargs):
    da = kwargs['dash_app']
    return "Args are [%s] and kwargs are %s" % (",".join(args), kwargs)
```

The additional arguments, which are reported as the `kwargs` content in this example, include:

* `callback_context`: The `Dash` callback context. See the [documentation](https://dash.plotly.com/advanced-callbacks) on the content of this variable. This variable is provided as an argument to the callback as well as through the `dash.callback_context` global variable.
* `dash_app`: For stateful applications, the `DashApp` model instance.
* `dash_app_id`: The application identifier. For stateless applications, this is the (slugified) name given to the `DjangoDash` constructor. For stateful applications, it is the (slugified) unique identifier for the associated model instance.
* `request`: The Django request object.
* `session_state`: A dictionary of information, unique to this user session. Any changes made to its content during the callback are persisted as part of the Django session framework.
* `user`: The Django User instance.

Possible alternatives to `kwargs`:

```
@a2.callback(
    dash.dependencies.Output('output-one','children'),
    [dash.dependencies.Input('dropdown-one','value')]
)
def callback_c(*args, dash_app):
    return "Args are [%s] and the extra parameter dash_app is %s" % (",".join(args), dash_app)

@a2.callback(
    dash.dependencies.Output('output-one','children'),
    [dash.dependencies.Input('dropdown-one','value')]
)
def callback_c(*args, dash_app, **kwargs):
    return "Args are [%s], the extra parameter dash_app is %s and kwargs are %s" % (",".join(args), dash_app, kwargs)
```

The `DashApp` model instance can also be configured to persist itself on any change. This is discussed in the [Django models and application state](index.html#models-and-state) section.

The `callback_context` argument is provided in addition to the `dash.callback_context` global variable. As a rule, the use of global variables should generally be avoided.
The context provided by `django-plotly-dash` is not the same object as the one provided by the underlying `Dash` library, although its property values are the same, and code that uses the content of this variable should work unchanged. The use of this global variable in any asynchronous or multithreaded application is not supported, and the use of the callback argument is strongly recommended for all use cases.

#### Using session state[¶](#id1)

The [walkthrough](index.html#session-example) of the session state example details how the demo interacts with a `Django` session.

Unless an explicit pipe is created, changes to the session state and other server-side objects are not automatically propagated to an application. Something in the front-end UI has to invoke a callback; at this point the latest version of these objects will be provided to the callback. The same considerations as in other Dash [live updates](https://dash.plot.ly/live-updates) apply. The [live updating](index.html#updating) section discusses how `django-plotly-dash` provides an explicit pipe that directly enables the updating of applications.

### Live updating[¶](#live-updating)

Live updating is supported using additional `Dash` [components](index.html#dash-components) and leveraging [Django Channels](https://channels.readthedocs.io/en/latest/) to provide websocket endpoints.

Server-initiated messages are sent to all interested clients. The content of the message is then injected into the application from the client, and from that point it is handled like any other value passed to a callback function.

The messages are constrained to be JSON serialisable, as that is how they are transmitted to and from the clients, and should also be as small as possible given that they travel from the server, to each interested client, and then back to the server again as an argument to one or more callback functions.

The round-trip of the message is a deliberate design choice, in order to enable the value within the message to be treated as much as possible like any other piece of data within a `Dash` application. This data is essentially stored on the client side of the client-server split, and passed to the server when each callback is invoked; note that this also encourages designs that keep the size of in-application data small. An alternative approach, such as directly invoking a callback in the server, would require the server to maintain its own copy of the application state.

Live updating requires a server setup that is considerably more complex than the alternative, namely use of the built-in [Interval](https://dash.plot.ly/live-updates) component. However, live updating can be used to reduce server load (as callbacks are only made when needed) and application latency (as callbacks are invoked as needed, not on the tempo of the Interval component).

#### Message channels[¶](#message-channels)

Messages are passed through named channels, and each message consists of a `label` and `value` pair. A [Pipe](index.html#pipe-component) component is provided that listens for messages and makes them available to `Dash` callbacks.

Each message is sent through a message channel to all `Pipe` components that have registered their interest in that channel, and in turn the components will select messages by `label`. A message channel exists as soon as a component signals that it is listening for messages on it.

The message delivery requirement is 'hopefully at least once'.
In other words, applications should be robust against both the failure of a message to be delivered, and also against a message being delivered multiple times. A design approach that has messages of the form 'you should look at X and see if something should be done' is strongly encouraged. The accompanying demo has messages of the form 'button X at time T', for example.

#### Sending messages from within Django[¶](#sending-messages-from-within-django)

Messages can be easily sent from within Django, provided that the code is running within the ASGI server.

```
from django_plotly_dash.consumers import send_to_pipe_channel

# Send a message
#
# This function may return *before* the message has been sent
# to the pipe channel.
#
send_to_pipe_channel(channel_name="live_button_counter",
                     label="named_counts",
                     value=value)

# Send a message asynchronously
#
await async_send_to_pipe_channel(channel_name="live_button_counter",
                                 label="named_counts",
                                 value=value)
```

In general, making assumptions about the ordering of code between message sending and receiving is unsafe. The `send_to_pipe` function uses the Django Channels `async_to_sync` wrapper around a call to `async_send_to_pipe` and therefore may return before the asynchronous call is made (perhaps on a different thread). Furthermore, the transit of the message through the channels backend introduces another indeterminacy.

#### HTTP Endpoint[¶](#http-endpoint)

There is an HTTP endpoint, [configured](index.html#configuration) with the `http_route` option, that allows direct insertion of messages into a message channel. It is a direct equivalent of calling the `send_to_pipe_channel` function, and expects the `channel_name`, `label` and `value` arguments to be provided in a JSON-encoded dictionary.

```
curl -d '{"channel_name":"live_button_counter", "label":"named_counts", "value":{"click_colour":"cyan"}}' http://localhost:8000/dpd/views/poke/
```

This will cause the (JSON-encoded) `value` argument to be sent on the `channel_name` channel with the given `label`. A Python equivalent of this invocation is sketched at the end of this section.

The provided endpoint skips any CSRF checks and does not perform any security checks such as authentication or authorisation, and should be regarded as a starting point for a more complete implementation if exposing this functionality is desired. On the other hand, if this endpoint is restricted so that it is only available from trusted sources such as the server itself, it does provide a mechanism for Django code running outside of the ASGI server, such as in a WSGI process or Celery worker, to push a message out to running applications.

The `http_poke_enabled` flag controls the availability of the endpoint. If false, then it is not registered at all and all requests will receive a 404 HTTP error code.

#### Deployment[¶](#deployment)

The live updating feature needs both Redis, as it is the only supported backend at present for v2.0 and up of Channels, and Daphne or any other ASGI server for production use. It is also good practice to place the server(s) behind a reverse proxy such as Nginx; this can then also be configured to serve Django's static files.

A further consideration is the use of a WSGI server, such as Gunicorn, to serve the non-asynchronous subset of the http routes, albeit at the expense of having to separately manage ASGI and WSGI servers. This can be easily achieved through selective routing at the reverse proxy level, and is the driver behind the `ws_route` configuration option.

In passing, note that the demo also uses Redis as the caching backend for Django.
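As referenced above, the same poke can also be issued from Python code; this sketch assumes the third-party `requests` package and the default `http_route` setting:

```
import requests

# Equivalent of the curl invocation shown in the HTTP Endpoint
# subsection; the endpoint URL assumes the default configuration.
requests.post(
    "http://localhost:8000/dpd/views/poke/",
    json={"channel_name": "live_button_counter",
          "label": "named_counts",
          "value": {"click_colour": "cyan"}},
)
```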
### Template tags[¶](#template-tags)

Template tags are provided in the `plotly_dash` library:

```
{%load plotly_dash%}
```

#### The `plotly_app` template tag[¶](#the-plotly-app-template-tag)

Importing the `plotly_dash` library provides the `plotly_app` template tag:

```
{%load plotly_dash%}
{%plotly_app name="SimpleExample"%}
```

This tag inserts a `DjangoDash` app within a page as a responsive `iframe` element.

The tag arguments are:

* `name = None`: The name of the application, as passed to a `DjangoDash` constructor.
* `slug = None`: The slug of an existing `DashApp` instance.
* `da = None`: An existing `django_plotly_dash.models.DashApp` model instance.
* `ratio = 0.1`: The ratio of height to width. The container will inherit its width as 100% of its parent, and then rely on this ratio to set its height.
* `use_frameborder = "0"`: HTML element property of the iframe containing the application.
* `initial_arguments = None`: Initial arguments overriding app defaults and saved state.

At least one of `da`, `slug` or `name` must be provided. An object identified by `slug` will always be used, otherwise any identified by `name` will be. If either of these arguments are provided, they must resolve to valid objects even if not used. If neither are provided, then the model instance in `da` will be used.

The `initial_arguments` are specified as a python dictionary. This can be the actual `dict` object, or a JSON-encoded string representation. Each entry in the dictionary has the `id` as key, and the corresponding value is a dictionary mapping property name keys to initial values.

#### The `plotly_app_bootstrap` template tag[¶](#the-plotly-app-bootstrap-template-tag)

This is a variant of the `plotly_app` template tag for use with responsive layouts using the Bootstrap library:

```
{%load plotly_dash%}
{%plotly_app_bootstrap name="SimpleExample" aspect="16by9"%}
```

The tag arguments are similar to the `plotly_app` ones:

* `name = None`: The name of the application, as passed to a `DjangoDash` constructor.
* `slug = None`: The slug of an existing `DashApp` instance.
* `da = None`: An existing `django_plotly_dash.models.DashApp` model instance.
* `aspect = "4by3"`: The aspect ratio of the app. Should be one of 21by9, 16by9, 4by3 or 1by1.
* `initial_arguments = None`: Initial arguments overriding app defaults and saved state.

At least one of `da`, `slug` or `name` must be provided. An object identified by `slug` will always be used, otherwise any identified by `name` will be. If either of these arguments are provided, they must resolve to valid objects even if not used. If neither are provided, then the model instance in `da` will be used.

The aspect ratio has to be one of the available ones from the [Bootstrap](https://getbootstrap.com/docs/4.3/utilities/borders/) framework.

The `initial_arguments` are specified as a python dictionary. This can be the actual `dict` object, or a JSON-encoded string representation. Each entry in the dictionary has the `id` as key, and the corresponding value is a dictionary mapping property name keys to initial values.

#### The `plotly_direct` template tag[¶](#the-plotly-direct-template-tag)

This template tag allows the direct insertion of html into a template, instead of embedding it in an iframe.

```
{%load plotly_dash%}
{%plotly_direct name="SimpleExample"%}
```

The tag arguments are:

* `name = None`: The name of the application, as passed to a `DjangoDash` constructor.
* `slug = None`: The slug of an existing `DashApp` instance.
* `da = None`: An existing `django_plotly_dash.models.DashApp` model instance.

These arguments are equivalent to the same ones for the `plotly_app` template tag. Note that `initial_arguments` are not currently supported, and as the app is directly injected into the page there are no arguments to control the size of the iframe.

This tag should not appear more than once on a page. This rule however is not enforced at present.

If this tag is used, then the [header and footer](#plotly-header-footer) tags should also be added to the template. Note that these tags in turn have middleware requirements.

#### The `plotly_header` and `plotly_footer` template tags[¶](#the-plotly-header-and-plotly-footer-template-tags)

`DjangoDash` allows you to inject the html generated by `Dash` directly into the DOM of the page, without wrapping it in an iframe. To include the app CSS and JS, two tags should be included in the template, namely `plotly_header` and `plotly_footer`, as follows:

```
<!-- templates/base.html -->
<!DOCTYPE html>
<html>
  <head>
    ...
    {% load plotly_dash %}
    ...
    {% plotly_header %}
    ...
  </head>
  <body>
    ...
    {%plotly_direct name="SimpleExample"%}
    ...
  </body>
  ...
  {% plotly_footer %}
</html>
```

This part is mandatory if you want to use the [plotly_direct](#plotly-direct) tag, and these two tags can safely be included on any page that has loaded the `plotly_dash` template tag library with minimal overhead, making them suitable for inclusion in a base template. Neither tag has any arguments.

Note that if you are using any functionality that needs the use of these tags, then the associated middleware should be added in `settings.py`:

```
MIDDLEWARE = [
    ...
    'django_plotly_dash.middleware.BaseMiddleware',
]
```

This middleware should appear low down the middleware list.

#### The `plotly_message_pipe` template tag[¶](#the-plotly-message-pipe-template-tag)

This template tag has to be inserted on every page that uses live updating:

```
{%load plotly_dash%}

{%plotly_app ... DjangoDash instances using live updating ... %}

{%plotly_message_pipe%}
```

The tag inserts the javascript needed for the [Pipe](index.html#pipe-component) component to operate. It can be inserted anywhere on the page, and its ordering relative to the `Dash` instances using updating is not important, so placing it in the page footer - to avoid delaying the main page load - along with other scripts is generally advisable.

#### The `plotly_app_identifier` template tag[¶](#the-plotly-app-identifier-template-tag)

This tag provides an identifier for an app, in a form that is suitable for use as a classname or identifier in HTML:

```
{%load plotly_dash%}
{%plotly_app_identifier name="SimpleExample"%}
{%plotly_app_identifier slug="liveoutput-2" postfix="A"%}
```

The identifier, if the tag is not passed a `slug`, is the result of passing the identifier of the app through the `django.utils.text.slugify` function.

The tag arguments are:

* `name = None`: The name of the application, as passed to a `DjangoDash` constructor.
* `slug = None`: The slug of an existing `DashApp` instance.
* `da = None`: An existing `django_plotly_dash.models.DashApp` model instance.
* `postfix = None`: An optional string; if specified it is appended to the identifier with a hyphen.

The validity rules for these arguments are the same as those for the `plotly_app` template tag. If supplied, the `postfix` argument should already be in a slug-friendly form, as no processing is performed on it.
#### The `plotly_class` template tag[¶](#the-plotly-class-template-tag)

Generate a string of class names, suitable for a `div` or other element that wraps around `django-plotly-dash` template content:

```
{%load plotly_dash%}
<div class="{%plotly_class slug="liveoutput-2" postfix="A"%}">
  {%plotly_app slug="liveoutput-2" ratio="0.5" %}
</div>
```

The identifier, if the tag is not passed a `slug`, is the result of passing the identifier of the app through the `django.utils.text.slugify` function.

The tag arguments are:

* `name = None`: The name of the application, as passed to a `DjangoDash` constructor.
* `slug = None`: The slug of an existing `DashApp` instance.
* `da = None`: An existing `django_plotly_dash.models.DashApp` model instance.
* `prefix = None`: Optional prefix to use in place of the text `django-plotly-dash` in each class name.
* `postfix = None`: An optional string; if specified it is appended to the app-specific identifier with a hyphen.
* `template_type = None`: Optional text to use in place of `iframe` in the template-specific class name.

The tag inserts a string with three class names in it. One is just the `prefix` argument, one has the `template_type` appended, and the final one has the app identifier (as generated by the `plotly_app_identifier` tag) and any `postfix` appended.

The validity rules for these arguments are the same as those for the `plotly_app` and `plotly_app_identifier` template tags. Note that none of the `prefix`, `postfix` and `template_type` arguments are modified, and they should already be in a slug-friendly form, or otherwise fit for their intended purpose.

### Dash components[¶](#dash-components)

The `dpd-components` package contains `Dash` components. This package is installed as a dependency of `django-plotly-dash`.

#### The `Pipe` component[¶](#the-pipe-component)

Each `Pipe` component instance listens for messages on a single channel. The `value` member of any message on that channel whose `label` matches that of the component will be used to update the `value` property of the component. This property can then be used in callbacks like any other `Dash` component property.

An example, from the demo application:

```
import dpd_components as dpd

app.layout = html.Div([
    ...
    dpd.Pipe(id="named_count_pipe",               # ID in callback
             value=None,                          # Initial value prior to any message
             label="named_counts",                # Label used to identify relevant messages
             channel_name="live_button_counter"), # Channel whose messages are to be examined
    ...
])
```

The `value` of the message is sent from the server to all front ends with `Pipe` components listening on the given `channel_name`. This means that this part of the message should be small, and it must be JSON serialisable.

Also, there is no guarantee that any callbacks will be executed in the same Python process as the one that initiated the initial message from server to front end.

The `Pipe` properties can be persisted like any other `DashApp` instance, although continued persistence of state on each update of this component is unlikely to be useful.

This component requires a bidirectional connection, such as a websocket, to the server. Inserting a `plotly_message_pipe` [template tag](index.html#plotly-message-pipe) is sufficient.

### Configuration options[¶](#configuration-options)

The `PLOTLY_DASH` settings variable is used for configuring `django-plotly-dash`. Default values are shown below.
```
PLOTLY_DASH = {

    # Route used for the message pipe websocket connection
    "ws_route" :   "dpd/ws/channel",

    # Route used for direct http insertion of pipe messages
    "http_route" : "dpd/views",

    # Flag controlling existence of http poke endpoint
    "http_poke_enabled" : True,

    # Insert data for the demo when migrating
    "insert_demo_migrations" : False,

    # Timeout for caching of initial arguments in seconds
    "cache_timeout_initial_arguments": 60,

    # Name of view wrapping function
    "view_decorator": None,

    # Flag to control location of initial argument storage
    "cache_arguments": True,

    # Flag controlling local serving of assets
    "serve_locally": False,
}
```

Defaults are inserted for missing values. It is also permissible to not have any `PLOTLY_DASH` entry in the Django settings file.

The Django staticfiles infrastructure is used to serve all local static files for the Dash apps. This requires adding a setting for the specification of additional static file finders:

```
# Staticfiles finders for locating dash app assets and related files
STATICFILES_FINDERS = [
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    'django_plotly_dash.finders.DashAssetFinder',
    'django_plotly_dash.finders.DashComponentFinder',
    'django_plotly_dash.finders.DashAppDirectoryFinder',
]
```

and also providing a list of the components used:

```
# Plotly components containing static content that should
# be handled by the Django staticfiles infrastructure
PLOTLY_COMPONENTS = [

    # Common components (ie within dash itself) are automatically added

    # django-plotly-dash components
    'dpd_components',

    # static support if serving local assets
    'dpd_static_support',

    # Other components, as needed
    'dash_bootstrap_components',
]
```

This list should be extended with any additional components that the applications use, where the components have files that have to be served locally. The components that are part of the core `dash` package are automatically included and do not need to be provided in this list.

Furthermore, middleware should be added for the redirection of external assets from underlying packages, such as `dash-bootstrap-components`. With the standard Django middleware, along with `whitenoise`, the entry within the `settings.py` file will look something like:

```
# Standard Django middleware with the addition of both
# whitenoise and django_plotly_dash items
MIDDLEWARE = [

    'django.middleware.security.SecurityMiddleware',

    'whitenoise.middleware.WhiteNoiseMiddleware',

    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',

    'django_plotly_dash.middleware.BaseMiddleware',
    'django_plotly_dash.middleware.ExternalRedirectionMiddleware',

    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
```

Individual apps can set their `serve_locally` flag. However, it is recommended to use the equivalent global `PLOTLY_DASH` setting to provide a common approach for all static assets. See [Local assets](index.html#local-assets) for more information on how local assets are configured and served as part of the standard Django staticfiles approach, along with details on the integration of other components and some known issues.

#### Endpoints[¶](#endpoints)

The websocket and direct http message endpoints are separately configurable.
The configuration options exist to satisfy two requirements:

* Isolate paths that require serving with ASGI. This allows the asynchronous routes - essentially the websocket connections - to be served using `daphne` or similar, and the bulk of the (synchronous) routes to be served using a WSGI server such as `gunicorn`.
* Isolate direct http posting of messages to restrict their use. The motivation behind this http endpoint is to provide a private service that allows other parts of the overall application to send notifications to `Dash` applications, rather than expose this functionality as part of the public API.

A reverse proxy front end, such as `nginx`, can route appropriately according to URL.

#### View decoration[¶](#view-decoration)

Each view delegated through to `plotly_dash` can be wrapped using a view decoration function. This enables access to be restricted to logged-in users, or restricted using desired conditions based on the user and session state.

To restrict all access to logged-in users, use the `login_required` wrapper:

```
PLOTLY_DASH = {
    ...
    # Name of view wrapping function
    "view_decorator": "django_plotly_dash.access.login_required",
    ...
}
```

More information can be found in the [view decoration](index.html#access-control) section.

#### Initial arguments[¶](#initial-arguments)

Initial arguments are stored within the server between the specification of an app in a template tag and the invocation of the view functions for the app. This storage is transient and can be efficiently performed using Django's caching framework.

In some situations, however, a suitably configured cache is not available. For this use case, setting the `cache_arguments` flag to `False` will cause initial arguments to be placed inside the Django session.

### Local assets[¶](#local-assets)

Local plotly dash assets are integrated into the standard Django staticfiles structure. This requires additional settings for both staticfiles finders and middleware, and also providing a list of the components used. The specific steps are listed in the [Configuration options](index.html#configuration) section.

Individual applications can set a `serve_locally` flag, but the use of the global setting in the `PLOTLY_DASH` variable is recommended.

#### Additional components[¶](#additional-components)

Some components, such as `dash-bootstrap-components`, require external packages such as Bootstrap to be supplied. In turn this can be achieved using, for example, the `bootstrap4` Django application. As a consequence, dependencies on external URLs are introduced.

This can be avoided by use of the `dpd-static-support` package, which supplies mappings to locally served versions of these assets. Installation is through the standard `pip` approach:

```
pip install dpd-static-support
```

and then the package should be added as both an installed app and to the `PLOTLY_COMPONENTS` list in `settings.py`, along with the associated middleware:

```
INSTALLED_APPS = [
    ...
    'dpd_static_support',
]

MIDDLEWARE = [
    ...
    'django_plotly_dash.middleware.ExternalRedirectionMiddleware',
]

PLOTLY_COMPONENTS = [
    ...
    'dpd_static_support'
]
```

Note that the middleware can be safely added even if the `serve_locally` functionality is not in use.

#### Known issues[¶](#known-issues)

Absolute paths to assets will not work correctly.
For example:

```
app.layout = html.Div([html.Img(src=localState.get_asset_url('image_one.png')),
                       html.Img(src='assets/image_two.png'),
                       html.Img(src='/assets/image_three.png'),
                      ])
```

Of these three images, both `image_one.png` and `image_two.png` will be served up - through the static files infrastructure - from the `assets` subdirectory relative to the code defining the `app` object. However, when rendered the application will attempt to load `image_three.png` using an absolute path. This is unlikely to be the desired result, but does permit the use of absolute URLs within the server.

### Using Bootstrap[¶](#using-bootstrap)

The `django-plotly-dash` package is frequently used with the `dash-bootstrap-components` package, and this requires a number of steps to set up correctly. This section is a checklist of the required configuration steps.

* install the package as described in the [installation](index.html#installation) section
* install the static support package with `pip install dpd-static-support`
* add the various settings in the [configuration](index.html#configuration) section, particularly the STATICFILES_FINDERS, PLOTLY_COMPONENTS and MIDDLEWARE ones
* install django-bootstrap4 with `pip install django-bootstrap4` and add `bootstrap4` to INSTALLED_APPS in the project's `settings.py` file
* make sure that the settings for serving static files are set correctly, particularly STATIC_ROOT, as described in the Django [documentation](https://docs.djangoproject.com/en/3.0/howto/static-files/)
* use the `prepare_demo` script or perform the equivalent steps, particularly the `migrate` and `collectstatic` steps
* make sure `add_bootstrap_links=True` is set for apps using `dash-bootstrap-components`
* the Django documentation [deployment](https://docs.djangoproject.com/en/3.0/howto/static-files/deployment/) section covers setting up the serving of static files for production

### Demonstration application[¶](#demonstration-application)

There are a number of pages in the demo application in the source repository:

1. Direct insertion of one or more dash applications
2. Initial state storage within Django
3. Enhanced callbacks
4. Live updating
5. Injection without using an iframe
6. Simple html injection
7. Bootstrap components
8. Session state storage
9. Local serving of assets
10. Multiple callback values

The templates that drive each of these can be found in the [github repository](https://github.com/GibbsConsulting/django-plotly-dash/tree/master/demo/demo/templates).

There is a more detailed walkthrough of the [session state storage](#session-example) example. This example also shows the use of [dash bootstrap components](https://pypi.org/project/dash-bootstrap-components/). The demo application can also be viewed [online](https://djangoplotlydash.com).

#### Session state example walkthrough[¶](#session-state-example-walkthrough)

The session state example has three separate components in the demo application:

* A template to render the application
* The `django-plotly-dash` application itself
* A view to render the template, having initialised the session state if needed

The first of these is a standard Django template, containing instructions to render the Dash application:

```
{%load plotly_dash%}

...

<div class="{%plotly_class name="DjangoSessionState"%}">
  {%plotly_app name="DjangoSessionState" ratio=0.3 %}
</div>
```

The view sets up the initial state of the application prior to rendering.
For this example we have a simple variant of rendering a template view: ``` def session_state_view(request, template_name, **kwargs): # Set up a context dict here context = { ... values for template go here, see below ... } return render(request, template_name=template_name, context=context) ``` and it suffices to register this view at a convenient URL as it does not use any parameters: ``` ... url('^demo-eight', session_state_view, {'template_name':'demo_eight.html'}, name="demo-eight"), ... ``` In passing, we note that accepting parameters as part of the URL and passing them as initial parameters to the app through the template is a straightforward extension of this example. The session state can be accessed in the app as well as the view. The app is essentially formed from a layout function and a number of callbacks. In this particular example, [dash-bootstrap-components](https://dash-bootstrap-components.opensource.asidatascience.com/) are used to form the layout: ``` dis = DjangoDash("DjangoSessionState", add_bootstrap_links=True) dis.layout = html.Div( [ dbc.Alert("This is an alert", id="base-alert", color="primary"), dbc.Alert(children="Danger", id="danger-alert", color="danger"), dbc.Button("Update session state", id="update-button", color="warning"), ] ) ``` Within the [extended callback](index.html#extended-callbacks), the session state is passed as an extra argument compared to the standard `Dash` callback: ``` @dis.callback( dash.dependencies.Output("danger-alert", 'children'), [dash.dependencies.Input('update-button', 'n_clicks'),] ) def session_demo_danger_callback(n_clicks, session_state=None, **kwargs): if session_state is None: raise NotImplementedError("Cannot handle a missing session state") csf = session_state.get('bootstrap_demo_state', None) if not csf: csf = dict(clicks=0) session_state['bootstrap_demo_state'] = csf else: csf['clicks'] = n_clicks return "Button has been clicked %s times since the page was rendered" %n_clicks ``` The session state is also set during the view: ``` def session_state_view(request, template_name, **kwargs): session = request.session demo_count = session.get('django_plotly_dash', {}) ind_use = demo_count.get('ind_use', 0) ind_use += 1 demo_count['ind_use'] = ind_use session['django_plotly_dash'] = demo_count # Use some of the information during template rendering context = {'ind_use' : ind_use} return render(request, template_name=template_name, context=context) ``` Reloading the demonstration page will cause the page render count to be incremented, and the button click count to be reset. Loading the page in a different session, for example by using a different browser or machine, will have an independent render count. ### View decoration[¶](#view-decoration) The `django-plotly-dash` views, as served by Django, can be wrapped with an arbitrary decoration function. This allows the use of the Django [login_required](https://docs.djangoproject.com/en/2.1/topics/auth/default/#the-login-required-decorator) view decorator as well as enabling more specialised and fine-grained control. #### The `login_required` decorator[¶](#the-login-required-decorator) The `login_required` decorator from the Django authentication system can be used as a view decorator. A wrapper function is provided in `django_plotly_dash.access`. ``` PLOTLY_DASH = { ... # Name of view wrapping function "view_decorator": "django_plotly_dash.access.login_required", ... } ``` Note that the view wrapping is on all of the `django-plotly-dash` views. 
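For illustration, the effect of this setting can be pictured as a small wrapping function. The sketch below is not the packaged implementation; it simply applies Django's standard `login_required` decorator and ignores the extra registration arguments that are described in the next section:

```
# Hypothetical sketch of a view-wrapping function suitable for the
# "view_decorator" setting; the shipped wrapper lives in
# django_plotly_dash.access. The extra keyword arguments supplied at
# registration (route name, url fragment, view type) are accepted but unused.
from django.contrib.auth.decorators import login_required as require_login

def wrap_view(view_function, name=None, **kwargs):
    # Delegate to Django's standard decorator, so anonymous users are
    # redirected to the login page before the wrapped view is invoked
    return require_login(view_function)
```

A function like this is referenced by its dotted import path in the `view_decorator` entry of the `PLOTLY_DASH` settings dictionary.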
#### Fine-grained control[¶](#fine-grained-control) The view decoration function is called for each variant exposed in the `django_plotly_dash.urls` file. As well as the underlying view function, each call to the decorator is given the name of the route, as used by `django.urls.reverse`, the specific url fragment for the view, and a name describing the type of view. From this information, it is possible to implement view-specific wrapping of the view functions, and in turn the wrapper functions can then use the request content, along with other information, to control access to the underlying view function. ``` from django.views.decorators.csrf import csrf_exempt def check_access_permitted(request, **kwargs): # See if access is allowed; if so return True # This function is called on each request ... return True def user_app_control(view_function, name=None, **kwargs): # This function is called on the registration of each django-plotly-dash view # name is one of: main, component-suites, routes, layout, dependencies, update-component def wrapped_view(request, *args, **kwargs): is_permitted = check_access_permitted(request, **kwargs) if not is_permitted: # Access not permitted, so raise error or generate an appropriate response ... else: return view_function(request, *args, **kwargs) if getattr(view_function, "csrf_exempt", False): return csrf_exempt(wrapped_view) return wrapped_view ``` The above sketch highlights how access can be controlled based on each request. Note that the `csrf_exempt` property of any wrapped view is preserved by the decoration function, and this approach needs to be extended to other properties if needed. Also, this sketch only passes `kwargs` to the permission function. ### FAQ[¶](#faq) * What Dash versions are supported? Dash v2.0 onwards is supported. The non-backwards-compatible changes of Dash make supporting earlier versions hard. Note that v1.7.2 is the last version to support (and require) Dash versions prior to v2.0. * What environment versions are supported? At least v3.8 of Python and v2.2 of Django are needed. * Is a `virtualenv` mandatory? No, but it is strongly recommended for any Python work. * What about Windows? The python package should work anywhere that Python does. Related applications, such as Redis, have their own requirements but are accessed using standard network protocols. * How do I report a bug or other issue? Create a [github issue](https://github.com/GibbsConsulting/django-plotly-dash/issues). See [bug reporting](index.html#bug-reporting) for details on what makes a good bug report. * Where should `Dash` layout and callback functions be placed? In general, the only constraint on the files containing these functions is that they should be imported into the file containing the `DjangoDash` instantiation. This is discussed in the [Installation](index.html#installation) section and also in this github [issue](https://github.com/GibbsConsulting/django-plotly-dash/issues/58). * Can per-user or other fine-grained access control be used? Yes. See the [View decoration](index.html#view-decoration) configuration setting and [View decoration](index.html#access-control) section. * What settings are needed to run the server in debug mode? The `prepare_demo` script in the root of the git repository contains the full set of commands for running the server in debug mode. In particular, the debug server is launched with the `--nostatic` option. 
This will cause the staticfiles to be served from the collected files in the `STATIC_ROOT` location rather than the normal `runserver` behaviour of serving directly from the various locations in the `STATICFILES_DIRS` list. * Is use of the `get_asset_url` function optional for including static assets? No, it is needed. Consider this example (it is part of `demo-nine`): ``` localState = DjangoDash("LocalState", serve_locally=True) localState.layout = html.Div([html.Img(src=localState.get_asset_url('image_one.png')), html.Img(src='/assets/image_two.png'), ]) ``` The first `Img` will have its source file correctly served up by Django as a standard static file. However, the second image will not be rendered, as the path will be incorrect. See the [Local assets](index.html#local-assets) section for more information on configuration with local assets. * Is there a live demo available? Yes. It can be found [here](https://djangoplotlydash.com). ### Development[¶](#development) The application and demo are developed, built and tested in a virtualenv environment, supported by a number of `bash` shell scripts. The resultant package should work on any Python installation that meets the requirements. Automatic builds have been set up on [Travis-CI](https://travis-ci.org/GibbsConsulting/django-plotly-dash), including running tests and reporting code coverage. #### Environment setup[¶](#environment-setup) To set up a development environment, first clone the repository, and then use the `make_env` script: ``` git clone https://github.com/GibbsConsulting/django-plotly-dash.git cd django-plotly-dash ./make_env ``` The script creates a virtual environment and uses `pip` to install the package requirements from the `requirements.txt` file, and then also the extra packages for development listed in the `dev_requirements.txt` file. It also installs `django-plotly-dash` as a development package. Redis is an optional dependency, and is used for live updates of Dash applications through channels endpoints. The `prepare_redis` script can be used to install Redis using Docker. It essentially pulls the container and launches it: ``` # prepare_redis content: docker pull redis:4 docker run -p 6379:6379 -d redis ``` The use of Docker is not mandatory, and any method to install Redis can be used provided that the [configuration](index.html#configuration) of the host and port for channels is set correctly in the `settings.py` for the Django project. During development, it can be convenient to serve the `Dash` components locally. Whilst passing `serve_locally=True` to a `DjangoDash` constructor will cause all of the css and javascript files for the components in that application to be served from the local server, it is recommended to use the global `serve_locally` configuration setting. Note that it is not good practice to serve static content in production through Django. #### Coding and testing[¶](#coding-and-testing) The `pylint` and `pytest` packages are important tools in the development process. The global configuration used for `pylint` is in the `pylintrc` file in the root directory of the codebase. Tests of the package are contained within the `django_plotly_dash/tests.py` file, and are invoked using the Django settings for the demo. Running the tests from the perspective of the demo also enables code coverage for both the application and the demo to be measured together, simplifying the bookkeeping. 
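For reference, a direct invocation of the test suite from an activated environment looks like the sketch below; the settings module name `demo.settings` is an assumption based on the repository layout, and the helper scripts described next encapsulate the exact commands:

```
source env/bin/activate
cd demo
DJANGO_SETTINGS_MODULE=demo.settings pytest ../django_plotly_dash/tests.py
```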
Two helper scripts are provided for running the linter and test code: ``` # Run pylint on django-plotly-dash module ./check_code_dpd # Run pylint on the demo code, and then execute the test suite ./check_code_demo ``` It is also possible to run all of these actions together: ``` # Run all of the checks ./check_code ``` The goal is for complete code coverage within the test suite and for maximal (‘ten out of ten’) marks from the linter. Perfection is however very hard and expensive to achieve, so the working requirement is for every release to keep the linter score above 9.5, and ideally improve it, and for the level of code coverage of the tests to increase. #### Documentation[¶](#documentation) Documentation lives in the `docs` subdirectory as reStructuredText and is built using the `sphinx` toolchain. Automatic local building of the documentation is possible with the development environment: ``` source env/bin/activate cd docs && sphinx-autobuild . _build/html ``` In addition, the `grip` tool can be used to serve a rendered version of the `README` file: ``` source env/bin/activate grip ``` The online documentation is automatically built by the `readthedocs` infrastructure when a release is formed in the main `github` repository. #### Release builds[¶](#release-builds) This section contains the recipe for building a release of the project. First, update the version number appropriately in `django_plotly_dash/version.py`, and then ensure that the checks and tests have been run: ``` ./check_code ``` Next, construct the `pip` packages and push them to [pypi](https://pypi.org/project/django-plotly-dash/): ``` source env/bin/activate python setup.py sdist python setup.py bdist_wheel twine upload dist/* ``` Committing a new release to the main github repository will invoke a build of the online documentation, but first a snapshot of the development environment used for the build should be generated: ``` pip freeze > frozen_dev.txt git add frozen_dev.txt git add django_plotly_dash/version.py git commit -m" ... suitable commit message for this release ..." # Create PR, merge into main repo, check content on PYPI and RTD ``` This preserves the state used for building and testing for future reference. #### Bug reports and other issues[¶](#bug-reports-and-other-issues) The ideal bug report is a pull request containing the addition of a failing test exhibiting the problem to the test suite. However, this rarely happens in practice! The essential requirement of a bug report is that it contains enough information to characterise the issue, and ideally also provides some way of replicating it. Issues that cannot be replicated within a virtualenv are unlikely to get much attention, if any. To report a bug, create a [github issue](https://github.com/GibbsConsulting/django-plotly-dash/issues). ### License[¶](#license) The `django-plotly-dash` package is made available under the MIT license. The license text can be found in the LICENSE file in the root directory of the source code, along with a CONTRIBUTIONS.md file that includes a list of the contributors to the codebase. 
A copy of the license, correct at the time of writing of this documentation, follows: MIT License Copyright (c) 2018 <NAME> and others - see CONTRIBUTIONS.md Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
bpk-component-autosuggest
npm
JavaScript
bpk-component-autosuggest === > Backpack autosuggest component. Installation --- ``` npm install bpk-component-autosuggest --save-dev ``` Usage --- ``` import React, { Component } from 'react'; import BpkLabel from 'bpk-component-label'; import { withRtlSupport } from 'bpk-component-icon'; import FlightIcon from 'bpk-component-icon/lg/flight'; import BpkAutosuggest, { BpkAutosuggestSuggestion } from 'bpk-component-autosuggest'; const BpkFlightIcon = withRtlSupport(FlightIcon); const offices = [ { name: 'Barcelona', code: 'BCN', country: 'Spain', }, ... ]; const getSuggestions = (value) => { const inputValue = value.trim().toLowerCase(); const inputLength = inputValue.length; return inputLength === 0 ? [] : offices.filter(office => office.name.toLowerCase().indexOf(inputValue) !== -1, ); }; const getSuggestionValue = ({ name, code }) => `${name} (${code})`; const renderSuggestion = suggestion => ( <BpkAutosuggestSuggestion value={getSuggestionValue(suggestion)} subHeading={suggestion.country} tertiaryLabel="Airport" indent={suggestion.indent} icon={BpkFlightIcon} /> ); class MyComponent extends Component { constructor() { super(); this.state = { value: '', suggestions: [], }; } onChange = (e, { newValue }) => { this.setState({ value: newValue, }); } onSuggestionsFetchRequested = ({ value }) => { this.setState({ suggestions: getSuggestions(value), }); } onSuggestionsClearRequested = () => { this.setState({ suggestions: [], }); } render() { const { value, suggestions } = this.state; const inputProps = { id: 'my-autosuggest', name: 'my-autosuggest', placeholder: 'Enter an office name', value, onChange: this.onChange, }; return ( <div> <BpkLabel htmlFor="my-autosuggest">Office</BpkLabel> <BpkAutosuggest suggestions={suggestions} onSuggestionsFetchRequested={this.onSuggestionsFetchRequested} onSuggestionsClearRequested={this.onSuggestionsClearRequested} getSuggestionValue={getSuggestionValue} renderSuggestion={renderSuggestion} inputProps={inputProps} /> </div> ); } } ``` Props --- *BpkAutosuggest:* [Please refer to `react-autosuggest`'s documentation for a full list of props](https://github.com/moroshko/react-autosuggest#props). **Note:** The `inputProps` object is passed directly to a [`BpkInput`](https://backpack.github.io/components/text-input?platform=web) component, so its prop types apply also. *BpkAutosuggestSuggestion:* | Property | PropType | Required | Default Value | | --- | --- | --- | --- | | value | string | true | - | | subHeading | string | false | null | | tertiaryLabel | string | false | null | | icon | func | false | null | | indent | bool | false | false | | className | string | false | null |
processanimateR
cran
R
Package ‘processanimateR’ October 14, 2022 Type Package Title Process Map Token Replay Animation Version 1.0.5 Date 2022-07-14 Description Provides animated process maps based on the 'processmapR' package. Cases stored in event logs created with the 'bupaR' S3 class eventlog() are rendered as tokens (SVG shapes) and animated according to their occurrence times on top of the process map. SVG animations ('SMIL') and the 'htmlwidgets' package are used for rendering. License MIT + file LICENSE Encoding UTF-8 LazyData true Depends R (>= 2.10) Imports bupaR, processmapR (>= 0.3.1), rlang, magrittr, dplyr, tidyr, htmlwidgets, DiagrammeR (>= 1.0.0), grDevices, stringr, htmltools Suggests eventdataR, edeaR, testthat, knitr, rmarkdown, shiny, RColorBrewer, lubridate RoxygenNote 7.2.1 URL https://github.com/bupaverse/processanimateR/ BugReports https://github.com/bupaverse/processanimateR/issues VignetteBuilder knitr NeedsCompilation no Author <NAME> [aut, cre], <NAME> [ctb] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2022-07-20 12:40:03 UTC R topics documented: activity_select_decoration, animate_process, attribution_osm, example_log, icon_circle, icon_marker, processanimaterOutput, renderer_graphviz, renderer_leaflet, renderProcessanimater, token_aes, token_scale, token_select_decoration activity_select_decoration Decoration callback for activity selection Description Decoration callback for activity selection Usage activity_select_decoration( stroke_dasharray = "2", stroke_width = "2", stroke = "black" ) Arguments stroke_dasharray Sets the ‘stroke-dasharray‘ attribute for selected activities. stroke_width Sets the ‘stroke-width‘ attribute for selected activities. stroke Sets the ‘stroke‘ attribute for selected activities. Value A JavaScript callback function called when activity selection changes. See Also animate_process Examples # Create a decoration callback that increases the activity stroke width activity_select_decoration(stroke_width = "5") animate_process Animate cases on a process map Description This function animates the cases stored in a ‘bupaR‘ event log on top of a process model. Each case is represented by a token that travels through the process model according to the waiting and processing times of activities. Currently, animation is only supported for process models created by process_map of the ‘processmapR‘ package. The animation will be rendered as an SVG animation (SMIL) using the ‘htmlwidgets‘ framework. Each token is an SVG shape and is customizable. 
Usage animate_process( eventlog, processmap = process_map(eventlog, render = F, ...), renderer = renderer_graphviz(), mode = c("absolute", "relative", "off"), duration = 60, jitter = 0, timeline = TRUE, legend = NULL, initial_state = c("playing", "paused"), initial_time = 0, repeat_count = 1, repeat_delay = 0.5, epsilon_time = duration/1000, mapping = token_aes(), token_callback_onclick = c("function(svg_root, svg_element, case_id) {", "}"), token_callback_select = token_select_decoration(), activity_callback_onclick = c("function(svg_root, svg_element, activity_id) {", "}"), activity_callback_select = activity_select_decoration(), elementId = NULL, preRenderHook = NULL, width = NULL, height = NULL, sizingPolicy = htmlwidgets::sizingPolicy(browser.fill = TRUE, viewer.fill = TRUE, knitr.figure = FALSE, knitr.defaultWidth = "100%", knitr.defaultHeight = "300"), ... ) Arguments eventlog The ‘bupaR‘ event log object that should be animated processmap A process map created with ‘processmapR‘ (process_map) on which the event log will be animated. If not provided, a standard process map will be generated from the supplied event log. renderer Whether to use Graphviz (renderer_graphviz) to lay out and render the process map, or to render the process map using Leaflet (renderer_leaflet) on a geographical map. mode Whether to animate the cases according to their actual time of occurrence (‘absolute‘) or to start all cases at once (‘relative‘). duration The overall duration of the animation; all times are scaled according to this overall duration. jitter The magnitude of a random coordinate translation, known as jitter in scatterplots, which is added to each token. Adding jitter can help to disambiguate tokens drawn on top of each other. timeline Whether to render a timeline slider in supported browsers (Works only on recent versions of Chrome and Firefox). legend Whether to show a legend for the ‘size‘ or the ‘color‘ scale. The default is not to show a legend. initial_state Whether the initial playback state is ‘playing‘ or ‘paused‘. The default is ‘playing‘. initial_time Sets the initial time of the animation. The default value is ‘0‘. repeat_count The number of times the process animation is repeated. repeat_delay The seconds to wait before one repetition of the animation. epsilon_time A (small) time to be added to every animation to ensure that tokens are visible. mapping A list of aesthetic mappings from event log attributes to certain visual parameters of the tokens. Use token_aes to create a suitable mapping list. token_callback_onclick A JavaScript function that is called when a token is clicked. The function is parsed by JS and receives three parameters: ‘svg_root‘, ’svg_element’, and ’case_id’. token_callback_select A JavaScript callback function called when token selection changes. activity_callback_onclick A JavaScript function that is called when an activity is clicked. The function is parsed by JS and receives three parameters: ’svg_root’, ’svg_element’, and ’activity_id’. activity_callback_select A JavaScript callback function called when activity selection changes. elementId passed through to createWidget. A custom elementId is useful to capture the selection events via input$elementId_tokens and input$elementId_activities when used in Shiny. preRenderHook passed through to createWidget. width, height Fixed size for widget (in css units). The default is NULL, which results in intelligent automatic sizing based on the widget’s container. 
sizingPolicy Options that govern how the widget is sized in various containers (e.g. a standalone browser, the RStudio Viewer, a knitr figure, or a Shiny output binding). These options can be specified by calling the sizingPolicy function. ... Options passed on to process_map. See Also process_map, token_aes Examples data(example_log) # Animate the process with default options (absolute time and 60s duration) animate_process(example_log) # Animate the process with default options (relative time, with jitter, infinite repeat) animate_process(example_log, mode = "relative", jitter = 10, repeat_count = Inf) attribution_osm Standard attribution Description This is the standard attribution advised for OpenStreetMap tiles. Usage attribution_osm() Value The attribution character vector. Examples attribution_osm() example_log Example event log used in documentation Description Example event log used in documentation Usage example_log Format A bupaR event log icon_circle Standard circle marker Description The marker is based on Material Design (Apache 2.0 License): https://material.io/ Usage icon_circle() Value SVG code for a map marker. Examples icon_circle() icon_marker Standard map marker Description The marker is based on Material Design (Apache 2.0 License): https://material.io/ Usage icon_marker() Value SVG code for a map marker. Examples icon_marker() processanimaterOutput Create a process animation output element Description Renders a renderProcessanimater within an application page. A ‘shiny‘ usage sketch combining the two is given at the end of this manual. Usage processanimaterOutput(outputId, width = "100%", height = "400px") Arguments outputId Output variable to read the animation from width, height Must be a valid CSS unit (like 100, which will be coerced to a string and have ‘px‘ appended). renderer_graphviz Render as a plain graph Description This renderer uses viz.js to render the process map using the DOT layout. Usage renderer_graphviz( svg_fit = TRUE, svg_contain = FALSE, svg_resize_fit = TRUE, zoom_controls = TRUE, zoom_initial = NULL ) Arguments svg_fit Whether to scale the process map to fully fit in its container. If set to ‘TRUE‘ the process map will be scaled to be fully visible and may appear very small. svg_contain Whether to scale the process map to use all available space (contain) from its container. If ‘svg_fit‘ is also set, that option takes precedence. svg_resize_fit Whether to (re)-fit the process map to its container upon resize. zoom_controls Whether to show zoom controls. zoom_initial The initial zoom level to use. Value A rendering function to be used with animate_process See Also animate_process Examples data(example_log) # Animate the process with the default GraphViz DOT renderer animate_process(example_log, renderer = renderer_graphviz()) renderer_leaflet Render as graph on a geographical map Description This renderer uses Leaflet to draw the nodes and edges of the process map on a geographical map. Usage renderer_leaflet( node_coordinates, edge_coordinates = data.frame(act_from = character(0), act_to = character(0), lat = numeric(0), lng = numeric(0), stringsAsFactors = FALSE), layer = c(paste0("new L.TileLayer('", tile, "',"), paste0("{ attribution : '", attribution_osm(), "'})")), tile = "http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png", options = list(), grayscale = TRUE, icon_act = icon_marker(), icon_start = icon_circle(), icon_end = icon_circle(), scale_max = 4, scale_min = 0.25 ) Arguments node_coordinates A data frame with node coordinates in the format ‘act‘, ‘lat‘, ‘lng‘. 
edge_coordinates A data frame with additional edge coordinates in the format ‘act_from‘, ‘act_to‘, ‘lat‘, ‘lng‘. layer The JavaScript code used to create a Leaflet layer. A TileLayer is used as the default value. tile The URL to be used for the standard Leaflet TileLayer. options A named list of leaflet options, such as the center point of the map and the initial zoom level. grayscale Whether to apply a grayscale filter to the map. icon_act The SVG code used for the activity icon. icon_start The SVG code used for the start icon. icon_end The SVG code used for the end icon. scale_max The maximum factor to be used to scale the process map with when zooming out. scale_min The minimum factor to be used to scale the process map with when zooming in. Value A rendering function to be used with animate_process See Also animate_process Examples data(example_log) # Animate the example process with activities placed in some locations animate_process(example_log, renderer = renderer_leaflet( node_coordinates = data.frame( act = c("A", "B", "C", "D", "ARTIFICIAL_START", "ARTIFICIAL_END"), lat = c(63.443680, 63.426925, 63.409207, 63.422336, 63.450950, 63.419706), lng = c(10.383625, 10.396972, 10.406418, 10.432119, 10.383368, 10.252347), stringsAsFactors = FALSE), edge_coordinates = data.frame( act_from = c("B"), act_to = c("C"), lat = c(63.419207), lng = c(10.386418), stringsAsFactors = FALSE), options = list(center = c(63.412273, 10.399590), zoom = 12)), duration = 5, repeat_count = Inf) renderProcessanimater Renders process animation output Description Renders an SVG process animation suitable to be used by processanimaterOutput. Usage renderProcessanimater(expr, env = parent.frame(), quoted = FALSE) Arguments expr The expression generating a process animation (animate_process). env The environment in which to evaluate expr. quoted Is expr a quoted expression (with quote())? This is useful if you want to save an expression in a variable. token_aes Tokens aesthetics mapping Description Tokens aesthetics mapping Usage token_aes( size = token_scale(), color = token_scale(), image = token_scale(), opacity = token_scale(), shape = "circle", attributes = list() ) Arguments size The scale used for the token size. color The scale used for the token color. image The scale used for the token image. opacity The scale used for the token opacity. shape The (fixed) SVG shape to be used to draw tokens. Can be either ’circle’ (default), ’rect’ or ’image’. In the latter case the image URL needs to be specified as parameter ’token_image’. attributes A list of additional (fixed) SVG attributes to be added to each token. Value An aesthetics mapping for ‘animate_process‘. 
See Also animate_process, token_scale Examples data(example_log) # Change default token sizes / shape animate_process(example_log, mapping = token_aes(size = token_scale(12), shape = "rect")) # Change default token color animate_process(example_log, mapping = token_aes(color = token_scale("red"))) # Change default token opacity animate_process(example_log, mapping = token_aes(opacity = token_scale("0.2"))) # Change default token image (GIFs work too) animate_process(example_log, mapping = token_aes(shape = "image", size = token_scale(10), image = token_scale("https://upload.wikimedia.org/wikipedia/en/5/5f/Pacman.gif"))) # A more elaborate example with a secondary data frame library(eventdataR) data(traffic_fines) # Change token color based on a numeric attribute, here the nonsensical 'time' of an event animate_process(edeaR::filter_trace_frequency(bupaR::sample_n(traffic_fines,1000),percentage=0.95), legend = "color", mode = "relative", mapping = token_aes(color = token_scale("amount", scale = "linear", range = c("yellow","red")))) token_scale Token scale mapping values to aesthetics Description Creates a ‘list‘ of parameters suitable to be used as a token scale in token_aes for mapping values to certain aesthetics of the tokens in a process map animation. Refer to the d3-scale documentation (https://github.com/d3/d3-scale) for more information about how to set ‘domain‘ and ‘range‘ properly. Usage token_scale( attribute = NULL, scale = c("identity", "linear", "sqrt", "log", "quantize", "ordinal", "time"), domain = NULL, range = NULL ) Arguments attribute This may be (1) the name of the event attribute to be used as values, (2) a data frame with three columns (case, time, value) in which the values in the case column are matching the case identifier of the supplied event log, or (3) a constant value that does not change over time. scale Which D3 scale function to be used out of ‘identity‘, ‘linear‘, ‘sqrt‘, ‘log‘, ‘quantize‘, ‘ordinal‘, or ‘time‘. domain The domain of the D3 scale function. Can be left NULL in which case it will be automatically determined based on the values. range The range of the D3 scale function. Should be a vector of two or more numerical values. Value A scale to be used with ‘token_mapping‘ See Also animate_process, token_aes Examples data(example_log) # (1) Change token color based on a factor attribute animate_process(example_log, legend = "color", mapping = token_aes(color = token_scale("res", scale = "ordinal", range = RColorBrewer::brewer.pal(8, "Paired")))) # (2) Change token color based on second data frame x <- data.frame(case = as.character(rep(c(1,2,3), 2)), time = seq(from = as.POSIXct("2018-10-03 03:41:00"), to = as.POSIXct("2018-10-03 06:00:00"), length.out = 6), value = rep(c("orange", "green"), 3), stringsAsFactors = FALSE) animate_process(example_log, mode = "relative", jitter = 10, legend = "color", mapping = token_aes(color = token_scale(x))) # (3) Constant token color animate_process(example_log, legend = "color", mapping = token_aes(color = token_scale("red"))) token_select_decoration Decoration callback for token selection Description Decoration callback for token selection Usage token_select_decoration(stroke = "black") Arguments stroke Sets the ‘stroke‘ attribute of selected tokens. Value A JavaScript callback function called when token selection changes. See Also animate_process Examples # Create a decoration callback that paints tokens red token_select_decoration("red")
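To round off this manual, the output and render functions documented above can be combined in a small ‘shiny‘ application. The following is an illustrative sketch rather than an excerpt from the package documentation; it uses only the functions documented above and the bundled example_log.

# Minimal Shiny app embedding a process animation (illustrative sketch)
library(shiny)
library(processanimateR)
data(example_log)

ui <- fluidPage(
  processanimaterOutput("process", height = "500px")
)

server <- function(input, output, session) {
  # Wrap the animation widget for the "process" output slot; see the
  # elementId argument of animate_process for capturing token and
  # activity selections in Shiny
  output$process <- renderProcessanimater({
    animate_process(example_log, duration = 30)
  })
}

shinyApp(ui, server)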
FMStable
cran
R
Package ‘FMStable’ October 12, 2022 Type Package Title Finite Moment Stable Distributions Version 0.1-4 Date 2022-06-03 Author <NAME> Maintainer <NAME> <<EMAIL>> Description Some basic procedures for dealing with log maximally skew stable distributions, which are also called finite moment log stable distributions. License GPL-3 NeedsCompilation yes Repository CRAN Date/Publication 2022-06-06 20:10:26 UTC R topics documented: Estable, FMstable, Gstable, impliedVolatility, moments, optionValues, stableParameters Estable Extremal or Maximally Skew Stable Distributions Description Density function, distribution function, quantile function and random generation for stable distributions which are maximally skewed to the right. These distributions are called Extremal by Zolotarev (1986). Usage dEstable(x, stableParamObj, log=FALSE) pEstable(x, stableParamObj, log=FALSE, lower.tail=TRUE) qEstable(p, stableParamObj, log=FALSE, lower.tail=TRUE) tailsEstable(x, stableParamObj) Arguments x Vector of quantiles. stableParamObj An object of class stableParameters which describes a maximally skew stable distribution. It may, for instance, have been created by setParam or setMomentsFMstable. p Vector of tail probabilities. log Logical; if TRUE, the log density or log tail probability is returned by functions dEstable and pEstable; and logarithms of probabilities are input to function qEstable. lower.tail Logical; if TRUE, the lower tail probability is returned. Otherwise, the upper tail probability. Details The values are worked out by interpolation, with several different interpolation formulae in various regions. Value dEstable gives the density function; pEstable gives the distribution function or its complement; qEstable gives quantiles; tailsEstable returns a list with the following components which are all the same length as x: density The probability density function. F The probability distribution function. i.e. the probability of being less than or equal to x. righttail The probability of being larger than x. logdensity The logarithm of the probability density function. logF The logarithm of the probability of being less than or equal to x. logrighttail The logarithm of the probability of being larger than x. References <NAME>., <NAME>. and <NAME>. (1976). A method for simulating stable random variables. Journal of the American Statistical Association, 71, 340–344. See Also If x has an extremal stable distribution then exp(−x) has a finite moment log stable distribution. The left hand tail probability computed using pEstable should be the same as the corresponding right hand tail probability computed using pFMstable. Aspects of extremal stable distributions may also be computed (though more slowly) using tailsGstable with beta=1. Functions for generation of random variables having stable distributions are available in package stabledist. 
Examples tailsEstable(-2:3, setMomentsFMstable(mean=1, sd=1.5, alpha=1.7)) # Compare Estable and FMstable obj <- setMomentsFMstable(1.7, mean=.5, sd=.2) x <- c(.001, 1, 10) pFMstable(x, obj, lower.tail=TRUE, log=TRUE) pEstable(-log(x), obj, lower.tail=FALSE, log=TRUE) x <- seq(from=-5, to=10, length=30) plot(x, dEstable(x, setMomentsFMstable(alpha=1.5)), type="l", log="y", ylab="Log(density) for stable distribution", main="log stable distribution with alpha=1.5, mean=1, sd=1" ) x <- seq(from=-2, to=5, length=30) plot(x, x, ylim=c(0,1), type="n", ylab="Distribution function") for (i in 0:2)lines(x, pEstable(x, setParam(location=0, logscale=-.5, alpha=1.5, pm=i)), col=i+1) legend("bottomright", legend=paste("S", 0:2, sep=""), lty=rep(1,3), col=1:3) p <- c(1.e-10, .01, .1, .2, .5, .99, 1-1.e-10) obj <- setMomentsFMstable(alpha=1.95) result <- qEstable(p, obj) pEstable(result, obj) - p # Plot to illustrate continuity near alpha=1 y <- seq(from=-36, to=0, length=30) logprob <- -exp(-y) plot(0, 0, type="n", xlim=c(-25,0), ylim=c(-35, -1), xlab="x (M parametrization)", ylab="-log(-log(distribution function))") for (oneminusalpha in seq(from=-.2, to=0.2, by=.02)){ obj <- setParam(oneminusalpha=oneminusalpha, location=0, logscale=0, pm=0) type <- if(oneminusalpha==0) 2 else 1 lines(qEstable(logprob, obj, log=TRUE), y, lty=type, lwd=type) } FMstable Finite Moment Log Stable Distributions Description Density function, distribution function, and quantile function for a log stable distribution with location, scale and shape parameters. For such families of distributions all moments are finite. Carr and Wu (2003) refer to such distributions as “finite moment log stable processes”. The finite moment log stable distribution is well-defined for α = 0, when the distribution is discrete with probability concentrated at x=0 and at one other point. The distribution function may be computed by pFMstable.alpha0. Usage dFMstable(x, stableParamObj, log=FALSE) pFMstable(x, stableParamObj, log=FALSE, lower.tail=TRUE) pFMstable.alpha0(x, mean=1, sd=1, lower.tail=TRUE) qFMstable(p, stableParamObj, lower.tail=TRUE) tailsFMstable(x, stableParamObj) Arguments x Vector of quantiles. stableParamObj An object of class stableParameters which describes a maximally skew stable distribution. It may, for instance, have been created by setMomentsFMstable or fitGivenQuantile. mean Mean of logstable distribution. sd Standard deviation of logstable distribution. p Vector of tail probabilities. log Logical; if TRUE, the log density or log tail probability is returned by functions dFMstable and pFMstable; and logarithms of probabilities are input to function qFMstable. lower.tail Logical; if TRUE, the lower tail probability is returned. Otherwise, the upper tail probability. Details The values are worked out by interpolation, with several different interpolation formulae in various regions. Value dFMstable gives the density function; pFMstable gives the distribution function or its complement; qFMstable gives quantiles; tailsFMstable returns a list with the following components which are all the same length as x: density The probability density function. F The probability distribution function. i.e. the probability of being less than or equal to x. righttail The probability of being larger than x. logdensity The logarithm of the probability density function. logF The logarithm of the probability of being less than or equal to x. logrighttail The logarithm of the probability of being larger than x. References <NAME>. and <NAME>. (2003). 
The Finite Moment Log Stable Process and Option Pricing. Journal of Finance, American Finance Association, vol. 58(2), pages 753-778. See Also If a random variable X has a finite moment stable distribution then log(X) has the corresponding extremal stable distribution. The density of log(X) can be found using dEstable. Option prices can be found using callFMstable and putFMstable. Examples tailsFMstable(1:10, setMomentsFMstable(3, 1.5, alpha=1.7)) x <- c(-1, 0, 1.e-5, .001, .01, .03, seq(from=.1, to=4.5, length=100)) plot(x, pFMstable(x, setMomentsFMstable(1, 1.5, 2)), type="l" ,xlim=c(0, 4.3), ylim=c(0,1), ylab="Distribution function") for (alpha in c(.03, 1:19/10)) lines(x, pFMstable(x, setMomentsFMstable(1, 1.5, alpha)), col=2) lines(x, pFMstable.alpha0(x, mean=1, sd=1.5), col=3) p <- c(1.e-10, .01, .1, .2, .5, .99, 1-1.e-10) obj <- setMomentsFMstable(alpha=1.95) result <- qFMstable(p, obj) OK <- result > 0 pFMstable(result[OK], obj) - p[OK] Gstable General Stable Distributions Description A procedure based on the R function integrate for computing the distribution function for stable distributions which may be skew but have standard location and scale parameters. This computation is not fast. It is not designed to work for alpha near to 1. Usage tailsGstable(x, logabsx, alpha, oneminusalpha, twominusalpha, beta, betaplus1, betaminus1, parametrization, lower.tail=TRUE) Arguments x Value (scalar). logabsx Logarithm of absolute value of x. Must be used when x is outside the range over which numbers can be stored. (e.g. 1.e-5000) alpha Value of parameter of stable distribution. oneminusalpha Value of 1 - alpha. This should be specified when alpha is near to 1 so that the difference from 1 is specified accurately. twominusalpha Value of 2 - alpha. This should be specified when alpha is near to 2 so that the difference from 2 is specified accurately. beta Value of parameter of stable distribution. betaplus1 Value of beta + 1. This should be specified when beta is near to -1 so that the difference from -1 is specified accurately. betaminus1 Value of beta - 1. This should be specified when beta is near to 1 so that the difference from 1 is specified accurately. parametrization Parametrization: 0 for Zolotarev’s M = Nolan S0, 1 for Zolotarev’s A = Nolan S1 and 2 for Zolotarev’s C = Chambers, Mallows and Stuck. lower.tail Logical: Whether the lower tail of the distribution is of primary interest. This parameter affects whether numerical integration is used for the lower or upper tail. The other tail is computed by subtraction. Value Returns a list with the following components: left.tail.prob The probability distribution function. I.e. the probability of being less than or equal to x. right.tail.prob The probability of being larger than x. est.error An estimate of the computational error in the previous two numbers. message A message produced by R’s standard integrate routine. Note This code is included mainly as an illustration of a way to deal with the problem that different parametrizations are useful in different regions. It is also of some value for checking other code, particularly since it was not used as the basis for the interpolation tables. For the C parametrization for alpha greater than 1, the parameter beta needs to be set to -1 for the distribution to be skewed to the right. References <NAME>., <NAME>. and <NAME>. (1976). A method for simulating stable random variables. Journal of the American Statistical Association, 71, 340–344. 
Examples # Check relationship between maximally skew and other stable distributions # in paper by <NAME>, C.L. Mallows and <NAME> alpha <- 1.9 beta <- -.5 k <- 1- abs(1-alpha) denom <- sin(pi*k) p <- (sin(.5*pi*k * (1+beta))/denom)^(1/alpha) q <- (sin(.5*pi*k * (1-beta))/denom)^(1/alpha) # Probability that p S1 - q S2 < x S1 <- setParam(alpha=1.9, location=0, logscale =log(p), pm="C") S2 <- setParam(alpha=1.9, location=0, logscale =log(q), pm="C") S3 <- setParam(alpha=1.9, location=0, logscale =0, pm="C") xgiven <- 1 f <- function(x) dEstable(x, S1) * pEstable(xgiven + x, S2) print(integrate(f, lower=-Inf, upper=Inf, rel.tol=1.e-12)$value, digits=16) f <- function(x) dEstable(x, S3) * pEstable((xgiven + p*x)/q, S3) print(integrate(f, lower=-Inf, upper=Inf, rel.tol=1.e-8)$value, digits=16) direct <- tailsGstable(x=xgiven, logabsx=log(xgiven),alpha=alpha, beta=beta, parametrization=2) print(direct$left.tail.prob, digits=16) # Compare Estable and Gstable # List fractional discrepancies disc <- function(tol){ for(pm in pms) for (a in alphas) for(x in xs) { lx <- log(abs(x)) beta <- if(pm==2 && a > 1) -1 else 1 if(x > 0 || a > 1){ a1 <- pEstable(x, setParam(alpha=a, location=0, logscale=0, pm=pm)) a2 <- tailsGstable(x=x, logabsx=lx, alpha=a, beta=beta, parametrization=pm)$left.tail.prob print(paste("parametrization=", pm, "alpha=", a,"x=", x, "Frac disc=", a1/a2-1), quote=FALSE) } } } alphas <- c(.3, .8, 1.1, 1.5, 1.9) pms <- 0:2 xs <- c(-2, .01, 4.3) disc() impliedVolatility Computations Regarding Value of Options for Log Normal Distributions Description Computes values of European-style call and put options over assets whose future price is expected to follow a log normal distribution. Usage BSOptionValue(spot, strike, expiry, volatility, intRate=0, carryCost=0, Call=TRUE) ImpliedVol(spot, strike, expiry, price, intRate=0, carryCost=0, Call=TRUE, ImpliedVolLowerBound=.01, ImpliedVolUpperBound=1, tol=1.e-9) lnorm.param(mean, sd) Arguments spot The current price of a security. strike The strike price for an option. expiry The time when an option may be exercised. (We are only dealing with European options which have a single date on which they may be exercised.) volatility The volatility of the price of a security per unit time. This is the standard deviation of the logarithm of price. price The price for an option. This is used as an input parameter when computing the implied volatility. intRate The interest rate. carryCost The carrying cost for a security. This may be negative when a security is expected to pay a dividend. Call Logical: Whether the option for which a price is given is a call option. ImpliedVolLowerBound Lower bound used when searching for the implied volatility. ImpliedVolUpperBound Upper bound used when searching for the implied volatility. tol Tolerance specifying accuracy of search for implied volatility. mean The mean of a quantity which has a lognormal distribution. sd The standard deviation of a quantity which has a lognormal distribution. Details The lognormal distribution is the limit of finite moment log stable distributions as alpha tends to 2. The function lnorm.param finds the mean and standard deviation of a lognormal distribution on the log scale given the mean and standard deviation on the raw scale. The function BSOptionValue finds the value of a European call or put option. The function ImpliedVol allows computation of the implied volatility, which is the volatility on the logarithmic scale which matches the value of an option to a specified price. 
Value impVol returns the implied volatility when the value of options is computed using a finite moment log stable distribution. approx.impVol returns an approximation to the implied volatility. lnorm.param returns the mean and standard deviation of the underlying normal distribution. See Also Option prices computed using the log normal model can be compared to those computed for the finite moment log stable model using putFMstable and callFMstable. Examples lnorm.param(mean=5, sd=.8) BSOptionValue(spot=4, strike=c(4, 4.5), expiry=.5, volatility=.15) ImpliedVol(spot=4, strike=c(4, 4.5), expiry=.5, price=c(.18,.025)) moments Convolutions of Finite Moment Log Stable Distributions and the Moments of such Distributions Description If X1, ..., Xn are independent random variables with the same stable distribution then X1 + ... + Xn has a stable distribution with the same alpha. The function iidcombine allows the parameters of the resulting stable distribution to be computed. Because stable distributions are infinitely divisible, it is also easy to find the parameters describing the distribution of X1 from the parameters describing the distribution of X1 + ... + Xn. Convolutions of maximally skew stable distributions correspond to products of logstable distributions. The raw moments of these distributions (i.e. moments about zero, not moments about the mean) can be readily computed using the function moments. Note that the raw moments of the convolution of two independent distributions are the products of the corresponding moments of the component distributions, so the accuracy of iidcombine can be checked by using moments. Usage iidcombine(n, stableParamObj) moments(powers, stableParamObj, log=FALSE) Arguments n Number of random variables to be convoluted. May be any positive number. powers Raw moments of logstable distributions to be computed. stableParamObj An object of class stableParameters which describes a maximally skew stable distribution. log Logical; if TRUE, the logarithms of moments are returned. Value The value returned by iidcombine is another object of class stableParameters. The value returned by moments is a numeric vector giving the values of the specified raw moments. See Also Objects of class stableParameters can be created using functions such as setParam. The taking of convolutions is sometimes associated with the computing of values of options using functions such as callFMstable. Examples yearDsn <- fitGivenQuantile(mean=1, sd=2, prob=.7, value=.1) upper <- exp(-yearDsn$location) # Only sensible for alpha<.5 x <- exp(seq(from=log(.0001), to=log(upper), length=50)) plot(x, pFMstable(x, yearDsn), type="l", ylim=c(.2,1), lwd=2, xlab="Price", ylab="Distribution function of future price") half <- iidcombine(.5, yearDsn) lines(x, pFMstable(x, half), lty=2, lwd=2) quarter <- iidcombine(.25, yearDsn) lines(x, pFMstable(x, quarter), lty=3, lwd=2) legend("bottomright", legend=paste(c("1","1/2","1/4"),"year"), lty=c(1,2,3), lwd=c(2,2,2)) moments(1:2, yearDsn) moments(1:2, half) moments(1:2, quarter) # Check logstable against lognormal iidcombine(2, setMomentsFMstable(.5, .2, alpha=2)) p <- lnorm.param(.5, .2) 2*p$meanlog # Gives the mean log(p$sdlog) # Gives the logscale optionValues Values of Options over Finite Moment Log Stable Distributions Description Computes values of European-style call and put options over assets whose future price is expected to follow a finite moment log stable distribution. 
Usage putFMstable(strike, paramObj, rel.tol=1.e-10) callFMstable(strike, paramObj, rel.tol=1.e-10) optionsFMstable(strike, paramObj, rel.tol=1.e-10) Arguments strike The strike price for an option. paramObj An object of class stableParameters which describes a maximally skew stable distribution. This is the distribution which describes possible prices for the underlying security at the time of expiry. rel.tol The relative tolerance used for numerical integration for finding option values. Value optionsFMstable returns a list containing the values of put options and the values of call options. Note When comparing option values based on finite moment log stable distributions with ones based on log normal distributions, remember that the interest rate and holding cost have been ignored here. Rather than using functions putFMstable and callFMstable for options that are extremely in-the-money (i.e. the options are almost certain to be exercised), the values of such options can be computed more accurately by first computing the value of the out-of-the-money option and then using the relationship spot + put = call + strike. This is done by function optionsFMstable. See Also An example of how an object of class stableParameters may be created is by setParam. Procedures for dealing with the log normal model for options pricing include BSOptionValue. Examples paramObj <- setMomentsFMstable(mean=10, sd=1.5, alpha=1.8) putFMstable(c(10,7), paramObj) callFMstable(c(10,7), paramObj) optionsFMstable(8:12, paramObj) # Note that call - put = mean - strike # Values of some extreme put options paramObj <- setMomentsFMstable(mean=1, sd=1.5, alpha=0.02) putFMstable(1.e-200, paramObj) putFMstable(1.e-100, paramObj) pFMstable(1.e-100, paramObj) putFMstable(1.e-50, paramObj) # Asymptotic behaviour logmlogx <- seq(from=2, to=6, length=30) logx <- -exp(logmlogx) x <- exp(logx) plot(logmlogx , putFMstable(x, paramObj)/(x*pFMstable(x, paramObj)), type="l") # Work out the values of some options using FMstable model spot <- 20 strikes <- c(15,18:20, 20:24, 18:20, 20:23) isCall <- rep(c(FALSE,TRUE,FALSE,TRUE), c(4,5,3,4)) expiry <- rep(c(.2, .5), c(9,7)) # Distributions for 0.2 and 0.5 of a year given distribution describing # multiplicative change in price over a year: annual <- fitGivenQuantile(mean=1, sd=.2, prob=2.e-4, value=.01) timep2 <- iidcombine(.2, annual) timep5 <- iidcombine(.5, annual) imp.vols <- prices <- rep(NA, length(strikes)) use <- isCall & expiry==.2 prices[use] <- callFMstable(strikes[use]/spot, timep2) * spot use <- !isCall & expiry==.2 prices[use] <- putFMstable(strikes[use]/spot, timep2) * spot use <- isCall & expiry==.5 prices[use] <- callFMstable(strikes[use]/spot, timep5) * spot use <- !isCall & expiry==.5 prices[use] <- putFMstable(strikes[use]/spot, timep5) * spot # Compute implied volatilities. imp.vols[isCall] <- ImpliedVol(spot=spot, strike=strikes[isCall], expiry=expiry[isCall], price=prices[isCall], Call=TRUE) imp.vols[!isCall] <- ImpliedVol(spot=spot, strike=strikes[!isCall], expiry=expiry[!isCall], price=prices[!isCall], Call=FALSE) # List values of options cbind(strikes, expiry, isCall, prices, imp.vols) # Can the distribution be recovered from the values of the options? 
discrepancy <- function(alpha, cv){ annual.fit <- setMomentsFMstable(mean=1, sd=cv, alpha=alpha) timep2.fit <- iidcombine(.2, annual.fit) timep5.fit <- iidcombine(.5, annual.fit) prices.fit <- rep(NA, length(strikes)) use <- isCall & expiry==.2 prices.fit[use] <- callFMstable(strikes[use]/spot, timep2.fit) * spot use <- !isCall & expiry==.2 prices.fit[use] <- putFMstable(strikes[use]/spot, timep2.fit) * spot use <- isCall & expiry==.5 prices.fit[use] <- callFMstable(strikes[use]/spot, timep5.fit) * spot use <- !isCall & expiry==.5 prices.fit[use] <- putFMstable(strikes[use]/spot, timep5.fit) * spot return(sum((prices.fit - prices)^2)) } # Search on scales of log(2-alpha) and log(cv) d <- function(param) discrepancy(2-exp(param[1]), exp(param[2])) system.time(result <- nlm(d, p=c(-2,-1.5))) # Estimated alpha 2-exp(result$estimate[1]) # Estimated cv exp(result$estimate[2]) # Searching just for best alpha d <- function(param) discrepancy(param, .2) system.time(result <- optimize(d, lower=1.6, upper=1.98)) # Estimated alpha result$minimum stableParameters Setting up Parameters to Describe both Extremal Stable Distributions and Finite Moment Log Stable Distributions Description Functions which create stable distributions having specified properties. Each of these functions takes scalar arguments and produces a description of a single stable distribution. Usage setParam(alpha, oneminusalpha, twominusalpha, location, logscale, pm) setMomentsFMstable(mean=1, sd=1, alpha, oneminusalpha, twominusalpha) fitGivenQuantile(mean, sd, prob, value, tol=1.e-10) matchQuartiles(quartiles, alpha, oneminusalpha, twominusalpha, tol=1.e-10) Arguments alpha Stable distribution parameter which must be a single value satisfying 0 < α <= 2. oneminusalpha Alternative specification of stable distribution parameter: Specify 1-alpha. twominusalpha Alternative specification of stable distribution parameter: Specify 2-alpha. location Location parameter of stable distribution. logscale Logarithm of scale parameter of stable distribution. pm Parametrization used in specifying stable distribution which is maximally skewed to the right. Allowable values are 0, "S0", "M", 1, "S1", "A", 2, "CMS" or "C" for some common parametrizations. mean Mean of logstable distribution. sd Standard deviation of logstable distribution. value, prob Required probability distribution function (> 0) for a logstable distribution at a value (> 0). quartiles Vector of two quartiles to be matched by a logstable distribution. tol Tolerance for matching of quantile or quartiles. Details The parametrizations used internally by this package are Nolan’s "S0" (or Zolotarev’s "M") parametrization when alpha >= 0.5, and Zolotarev’s "C" parametrization (which was used by Chambers, Mallows and Stuck (1976)) when alpha < 0.5. By using objects of class stableParameters to store descriptions of stable distributions, it will generally be possible to write code in a way which is not affected by the internal representation. Such usage is encouraged. Value Each of the functions described here produces an object of class stableParameters which describes a maximally skew stable distribution. Its components include at least the shape parameter alpha, a location parameter referred to as location and the logarithm of a scale parameter referred to as logscale. Currently objects of this class also store information about how they were created, as well as storing the numbers 1-alpha and 2-alpha in order to improve computational precision. References <NAME>., <NAME>. 
and <NAME>. (1976). A method for simulating stable random variables. Journal of the American Statistical Association, Vol. 71, 340–344. <NAME>. (2012). Stable Distributions. ISBN 9780817641597 Z<NAME>. (1986). One-Dimensional Stable Distributions. Amer. Math. Soc. Transl. of Math. Monographs, Vol. 65. Amer Math. Soc., Providence, RI. (Original Russian version was published in 1983.) See Also Extremal stable distributions with parameters set up using these procedures can be used by functions such as dEstable. The corresponding finite moment log stable distributions can be dealt with using functions such as dFMstable. Examples setParam(alpha=1.5, location=1, logscale=-.6, pm="M") setParam(alpha=.4, location=1, logscale=-.6, pm="M") setMomentsFMstable(alpha=1.7, mean=.5, sd=.2) fitGivenQuantile(mean=5, sd=1, prob=.001, value=.01, tol=1.e-10) fitGivenQuantile(mean=20, sd=1, prob=1.e-20, value=1, tol=1.e-24) matchQuartiles(quartiles=c(9,11), alpha=1.8)
condusco
cran
R
### Package ‘condusco’

October 12, 2022

* Type: Package
* Title: Query-Driven Pipeline Execution and Query Templates
* Version: 0.1.0
* Author: <NAME>
* Maintainer: <NAME> <<EMAIL>>
* Description: Runs a function iteratively over each row of either a dataframe or the results of a query. Use the 'BigQuery' and 'DBI' wrappers to iteratively pass each row of query results to a function. If a field contains a 'JSON' string, it will be converted to an object. This is helpful for queries that return 'JSON' strings that represent objects. These fields can then be treated as objects by the pipeline.
* License: GPL-3
* URL: https://github.com/ras44/condusco
* BugReports: https://github.com/ras44/condusco/issues
* Encoding: UTF-8
* LazyData: true
* Suggests: knitr, rmarkdown, whisker, testthat, RSQLite
* VignetteBuilder: knitr
* Depends: R (>= 3.3.2), jsonlite, assertthat, bigrquery, DBI
* RoxygenNote: 6.0.1.9000
* NeedsCompilation: no
* Repository: CRAN
* Date/Publication: 2017-11-08 19:30:17 UTC

R topics documented:

* run_pipeline
* run_pipeline_dbi
* run_pipeline_gbq

### run_pipeline: Runs a user-provided pipeline for each row of arguments in parameters, converting any JSON strings to objects

#### Description

Runs a user-provided pipeline for each row of arguments in `parameters`, converting any JSON strings to objects.

#### Usage

```r
run_pipeline(pipeline, parameters)
```

#### Arguments

* `pipeline`: User-provided function with one argument, a dataframe.
* `parameters`: A dataframe of fields to convert to JSON.

#### Examples

```r
library(whisker)

run_pipeline(
  function(params){
    query <- "SELECT result FROM {{table_prefix}}_results;"
    whisker.render(query, params)
  },
  data.frame(
    table_prefix = c('batman', 'robin')
  )
)
```

### run_pipeline_dbi: A wrapper for running pipelines with a DBI connection invocation query

#### Description

A wrapper for running pipelines with a DBI connection invocation query.

#### Usage

```r
run_pipeline_dbi(pipeline, query, con, ...)
```

#### Arguments

* `pipeline`: User-provided function with one argument, one row of query results.
* `query`: A query to execute via the DBI connection.
* `con`: The DBI connection.
* `...`: Additional arguments passed to dbSendQuery() and dbFetch().

#### Examples

```r
## Not run:
library(whisker)
library(RSQLite)

con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(con, "mtcars", mtcars)

# For each cylinder count, count the number of top-5 hps it has
pipeline <- function(params){
  query <- "SELECT
    {{#list}}
    SUM(CASE WHEN hp='{{val}}' THEN 1 ELSE 0 END )as n_hp_{{val}},
    {{/list}}
    cyl
    FROM mtcars
    GROUP BY cyl
  ;"
  dbGetQuery(
    con,
    whisker.render(query, params)
  )
}

# Pass the top 5 most common hps as val params
run_pipeline_dbi(
  pipeline,
  '
  SELECT "[" || GROUP_CONCAT("{ ""val"": """ || hp || """ }") || "]" AS list
  FROM (
    SELECT CAST(hp as INTEGER) as HP, count(hp) as cnt
    FROM mtcars
    GROUP BY hp
    ORDER BY cnt DESC
    LIMIT 5
  )
  ',
  con
)

dbDisconnect(con)
## End(Not run)
```

### run_pipeline_gbq: A wrapper for running pipelines with a BigQuery invocation query

#### Description

A wrapper for running pipelines with a BigQuery invocation query.

#### Usage

```r
run_pipeline_gbq(pipeline, query, project, ...)
```

#### Arguments

* `pipeline`: User-provided function with one argument, one row of query results.
* `query`: A query to execute in Google BigQuery.
* `project`: The Google BigQuery project to bill.
* `...`: Additional arguments passed to query_exec().

#### Examples

```r
## Not run:
library(whisker)

# Set GBQ project
project <- ''

# Set the following options for GBQ authentication on a cloud instance
options("httr_oauth_cache" = "~/.httr-oauth")
options(httr_oob_default=TRUE)

# Run the below query to authenticate and write credentials to .httr-oauth file
query_exec("SELECT 'foo' as bar", project=project);

pipeline <- function(params){
  query <- "
    SELECT
    {{#list}}
    SUM(CASE WHEN author.name ='{{name}}' THEN 1 ELSE 0 END) as n_{{name_clean}},
    {{/list}}
    repo_name
    FROM `bigquery-public-data.github_repos.sample_commits`
    GROUP BY repo_name
  ;"
  res <- query_exec(
    whisker.render(query, params),
    project=project,
    use_legacy_sql = FALSE
  );
  print(res)
}

run_pipeline_gbq(pipeline, "
  SELECT CONCAT('[',
    STRING_AGG(
      CONCAT('{\"name\":\"',name,'\",'
        ,'\"name_clean\":\"', REGEXP_REPLACE(name, r'[^[:alpha:]]', ''),'\"}'
      )
    ),
  ']') as list
  FROM (
    SELECT author.name, COUNT(commit) n_commits
    FROM `bigquery-public-data.github_repos.sample_commits`
    GROUP BY 1
    ORDER BY 2 DESC
    LIMIT 10
  )
  ",
  project,
  use_legacy_sql = FALSE
)
## End(Not run)
```
github.com/prometheus/tsdb
go
Go
README [¶](#section-readme) --- ### TSDB [![Build Status](https://travis-ci.org/prometheus/tsdb.svg?branch=master)](https://travis-ci.org/prometheus/tsdb) [![GoDoc](https://godoc.org/github.com/prometheus/tsdb?status.svg)](https://godoc.org/github.com/prometheus/tsdb) [![Go Report Card](https://goreportcard.com/badge/github.com/prometheus/tsdb)](https://goreportcard.com/report/github.com/prometheus/tsdb) This repository contains the Prometheus storage layer that is used in its 2.x releases. A writeup of its design can be found [here](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/). Based on the Gorilla TSDB [white papers](http://www.vldb.org/pvldb/vol8/p1816-teller.pdf). Video: [Storing 16 Bytes at Scale](https://youtu.be/b_pEevMAC3I) from [PromCon 2017](https://promcon.io/2017-munich/). See also the [format documentation](https://github.com/prometheus/tsdb/blob/v0.10.0/docs/format/README.md). Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) Package tsdb implements a time series storage for float64 sample data. ### Index [¶](#pkg-index) * [Constants](#pkg-constants) * [Variables](#pkg-variables) * [func DeleteCheckpoints(dir string, maxIndex int) error](#DeleteCheckpoints) * [func ExponentialBlockRanges(minSize int64, steps, stepSize int) []int64](#ExponentialBlockRanges) * [func LastCheckpoint(dir string) (string, int, error)](#LastCheckpoint) * [func MigrateWAL(logger log.Logger, dir string) (err error)](#MigrateWAL) * [func PostingsForMatchers(ix IndexReader, ms ...labels.Matcher) (index.Postings, error)](#PostingsForMatchers) * [type Appendable](#Appendable) * [type Appender](#Appender) * [type Block](#Block) * + [func OpenBlock(logger log.Logger, dir string, pool chunkenc.Pool) (pb *Block, err error)](#OpenBlock) * + [func (pb *Block) Chunks() (ChunkReader, error)](#Block.Chunks) + [func (pb *Block) CleanTombstones(dest string, c Compactor) (*ulid.ULID, error)](#Block.CleanTombstones) + [func (pb *Block) Close() error](#Block.Close) + [func (pb *Block) Delete(mint, maxt int64, ms ...labels.Matcher) error](#Block.Delete) + [func (pb *Block) Dir() string](#Block.Dir) + [func (pb *Block) GetSymbolTableSize() uint64](#Block.GetSymbolTableSize) + [func (pb *Block) Index() (IndexReader, error)](#Block.Index) + [func (pb *Block) LabelNames() ([]string, error)](#Block.LabelNames) + [func (pb *Block) MaxTime() int64](#Block.MaxTime) + [func (pb *Block) Meta() BlockMeta](#Block.Meta) + [func (pb *Block) MinTime() int64](#Block.MinTime) + [func (pb *Block) OverlapsClosedInterval(mint, maxt int64) bool](#Block.OverlapsClosedInterval) + [func (pb *Block) Size() int64](#Block.Size) + [func (pb *Block) Snapshot(dir string) error](#Block.Snapshot) + [func (pb *Block) String() string](#Block.String) + [func (pb *Block) Tombstones() (TombstoneReader, error)](#Block.Tombstones) * [type BlockDesc](#BlockDesc) * [type BlockMeta](#BlockMeta) * [type BlockMetaCompaction](#BlockMetaCompaction) * [type BlockReader](#BlockReader) * [type BlockStats](#BlockStats) * [type CheckpointStats](#CheckpointStats) * + [func Checkpoint(w *wal.WAL, from, to int, keep func(id uint64) bool, mint int64) (*CheckpointStats, error)](#Checkpoint) * [type ChunkReader](#ChunkReader) * [type ChunkSeriesSet](#ChunkSeriesSet) * + [func LookupChunkSeries(ir IndexReader, tr TombstoneReader, ms ...labels.Matcher) (ChunkSeriesSet, error)](#LookupChunkSeries) * [type ChunkWriter](#ChunkWriter) * [type 
Compactor](#Compactor) * [type DB](#DB) * + [func Open(dir string, l log.Logger, r prometheus.Registerer, opts *Options) (db *DB, err error)](#Open) * + [func (db *DB) Appender() Appender](#DB.Appender) + [func (db *DB) Blocks() []*Block](#DB.Blocks) + [func (db *DB) CleanTombstones() (err error)](#DB.CleanTombstones) + [func (db *DB) Close() error](#DB.Close) + [func (db *DB) Delete(mint, maxt int64, ms ...labels.Matcher) error](#DB.Delete) + [func (db *DB) Dir() string](#DB.Dir) + [func (db *DB) DisableCompactions()](#DB.DisableCompactions) + [func (db *DB) EnableCompactions()](#DB.EnableCompactions) + [func (db *DB) Head() *Head](#DB.Head) + [func (db *DB) Querier(mint, maxt int64) (Querier, error)](#DB.Querier) + [func (db *DB) Snapshot(dir string, withHead bool) error](#DB.Snapshot) + [func (db *DB) String() string](#DB.String) * [type DBReadOnly](#DBReadOnly) * + [func OpenDBReadOnly(dir string, l log.Logger) (*DBReadOnly, error)](#OpenDBReadOnly) * + [func (db *DBReadOnly) Blocks() ([]BlockReader, error)](#DBReadOnly.Blocks) + [func (db *DBReadOnly) Close() error](#DBReadOnly.Close) + [func (db *DBReadOnly) Querier(mint, maxt int64) (Querier, error)](#DBReadOnly.Querier) * [type Head](#Head) * + [func NewHead(r prometheus.Registerer, l log.Logger, wal *wal.WAL, chunkRange int64) (*Head, error)](#NewHead) * + [func (h *Head) Appender() Appender](#Head.Appender) + [func (h *Head) Chunks() (ChunkReader, error)](#Head.Chunks) + [func (h *Head) Close() error](#Head.Close) + [func (h *Head) Delete(mint, maxt int64, ms ...labels.Matcher) error](#Head.Delete) + [func (h *Head) Index() (IndexReader, error)](#Head.Index) + [func (h *Head) Init(minValidTime int64) error](#Head.Init) + [func (h *Head) MaxTime() int64](#Head.MaxTime) + [func (h *Head) Meta() BlockMeta](#Head.Meta) + [func (h *Head) MinTime() int64](#Head.MinTime) + [func (h *Head) NumSeries() uint64](#Head.NumSeries) + [func (h *Head) Tombstones() (TombstoneReader, error)](#Head.Tombstones) + [func (h *Head) Truncate(mint int64) (err error)](#Head.Truncate) * [type IndexReader](#IndexReader) * [type IndexWriter](#IndexWriter) * [type Interval](#Interval) * [type Intervals](#Intervals) * [type LeveledCompactor](#LeveledCompactor) * + [func NewLeveledCompactor(ctx context.Context, r prometheus.Registerer, l log.Logger, ranges []int64, ...) 
(*LeveledCompactor, error)](#NewLeveledCompactor) * + [func (c *LeveledCompactor) Compact(dest string, dirs []string, open []*Block) (uid ulid.ULID, err error)](#LeveledCompactor.Compact) + [func (c *LeveledCompactor) Plan(dir string) ([]string, error)](#LeveledCompactor.Plan) + [func (c *LeveledCompactor) Write(dest string, b BlockReader, mint, maxt int64, parent *BlockMeta) (ulid.ULID, error)](#LeveledCompactor.Write) * [type Options](#Options) * [type Overlaps](#Overlaps) * + [func OverlappingBlocks(bm []BlockMeta) Overlaps](#OverlappingBlocks) * + [func (o Overlaps) String() string](#Overlaps.String) * [type Querier](#Querier) * + [func NewBlockQuerier(b BlockReader, mint, maxt int64) (Querier, error)](#NewBlockQuerier) * [type RecordDecoder](#RecordDecoder) * + [func (d *RecordDecoder) Samples(rec []byte, samples []RefSample) ([]RefSample, error)](#RecordDecoder.Samples) + [func (d *RecordDecoder) Series(rec []byte, series []RefSeries) ([]RefSeries, error)](#RecordDecoder.Series) + [func (d *RecordDecoder) Tombstones(rec []byte, tstones []Stone) ([]Stone, error)](#RecordDecoder.Tombstones) + [func (d *RecordDecoder) Type(rec []byte) RecordType](#RecordDecoder.Type) * [type RecordEncoder](#RecordEncoder) * + [func (e *RecordEncoder) Samples(samples []RefSample, b []byte) []byte](#RecordEncoder.Samples) + [func (e *RecordEncoder) Series(series []RefSeries, b []byte) []byte](#RecordEncoder.Series) + [func (e *RecordEncoder) Tombstones(tstones []Stone, b []byte) []byte](#RecordEncoder.Tombstones) * [type RecordType](#RecordType) * [type RefSample](#RefSample) * [type RefSeries](#RefSeries) * [type SegmentWAL](#SegmentWAL) * + [func OpenSegmentWAL(dir string, logger log.Logger, flushInterval time.Duration, ...) (*SegmentWAL, error)](#OpenSegmentWAL) * + [func (w *SegmentWAL) Close() error](#SegmentWAL.Close) + [func (w *SegmentWAL) LogDeletes(stones []Stone) error](#SegmentWAL.LogDeletes) + [func (w *SegmentWAL) LogSamples(samples []RefSample) error](#SegmentWAL.LogSamples) + [func (w *SegmentWAL) LogSeries(series []RefSeries) error](#SegmentWAL.LogSeries) + [func (w *SegmentWAL) Reader() WALReader](#SegmentWAL.Reader) + [func (w *SegmentWAL) Sync() error](#SegmentWAL.Sync) + [func (w *SegmentWAL) Truncate(mint int64, keep func(uint64) bool) error](#SegmentWAL.Truncate) * [type Series](#Series) * [type SeriesIterator](#SeriesIterator) * [type SeriesSet](#SeriesSet) * + [func EmptySeriesSet() SeriesSet](#EmptySeriesSet) + [func NewMergedSeriesSet(a, b SeriesSet) SeriesSet](#NewMergedSeriesSet) + [func NewMergedVerticalSeriesSet(a, b SeriesSet) SeriesSet](#NewMergedVerticalSeriesSet) * [type Stone](#Stone) * [type StringTuples](#StringTuples) * [type TimeRange](#TimeRange) * [type TombstoneReader](#TombstoneReader) * [type WAL](#WAL) * [type WALEntryType](#WALEntryType) * [type WALReader](#WALReader) ### Constants [¶](#pkg-constants) ``` const ( // WALMagic is a 4 byte number every WAL segment file starts with. WALMagic = [uint32](/builtin#uint32)(0x43AF00EF) // WALFormatDefault is the version flag for the default outer segment file format. WALFormatDefault = [byte](/builtin#byte)(1) ) ``` ``` const ( // MagicTombstone is 4 bytes at the head of a tombstone file. MagicTombstone = 0x0130BA30 ) ``` ### Variables [¶](#pkg-variables) ``` var ( // ErrNotFound is returned if a looked up resource was not found. 
ErrNotFound = [errors](/github.com/pkg/errors).[Errorf](/github.com/pkg/errors#Errorf)("not found") // ErrOutOfOrderSample is returned if an appended sample has a // timestamp smaller than the most recent sample. ErrOutOfOrderSample = [errors](/github.com/pkg/errors).[New](/github.com/pkg/errors#New)("out of order sample") // ErrAmendSample is returned if an appended sample has the same timestamp // as the most recent sample but a different value. ErrAmendSample = [errors](/github.com/pkg/errors).[New](/github.com/pkg/errors#New)("amending sample") // ErrOutOfBounds is returned if an appended sample is out of the // writable time range. ErrOutOfBounds = [errors](/github.com/pkg/errors).[New](/github.com/pkg/errors#New)("out of bounds") ) ``` ``` var DefaultOptions = &[Options](#Options){ WALSegmentSize: [wal](/github.com/prometheus/[email protected]/wal).[DefaultSegmentSize](/github.com/prometheus/[email protected]/wal#DefaultSegmentSize), RetentionDuration: 15 * 24 * 60 * 60 * 1000, BlockRanges: [ExponentialBlockRanges](#ExponentialBlockRanges)([int64](/builtin#int64)(2*[time](/time).[Hour](/time#Hour))/1e6, 3, 5), NoLockfile: [false](/builtin#false), AllowOverlappingBlocks: [false](/builtin#false), WALCompression: [false](/builtin#false), } ``` DefaultOptions used for the DB. They are sane for setups using millisecond precision timestamps. ``` var ErrClosed = [errors](/github.com/pkg/errors).[New](/github.com/pkg/errors#New)("db already closed") ``` ErrClosed is returned when the db is closed. ``` var ErrClosing = [errors](/github.com/pkg/errors).[New](/github.com/pkg/errors#New)("block is closing") ``` ErrClosing is returned when a block is in the process of being closed. ### Functions [¶](#pkg-functions) #### func [DeleteCheckpoints](https://github.com/prometheus/tsdb/blob/v0.10.0/checkpoint.go#L70) [¶](#DeleteCheckpoints) ``` func DeleteCheckpoints(dir [string](/builtin#string), maxIndex [int](/builtin#int)) [error](/builtin#error) ``` DeleteCheckpoints deletes all checkpoints in a directory below a given index. #### func [ExponentialBlockRanges](https://github.com/prometheus/tsdb/blob/v0.10.0/compact.go#L41) [¶](#ExponentialBlockRanges) ``` func ExponentialBlockRanges(minSize [int64](/builtin#int64), steps, stepSize [int](/builtin#int)) [][int64](/builtin#int64) ``` ExponentialBlockRanges returns the time ranges based on the stepSize. #### func [LastCheckpoint](https://github.com/prometheus/tsdb/blob/v0.10.0/checkpoint.go#L45) [¶](#LastCheckpoint) ``` func LastCheckpoint(dir [string](/builtin#string)) ([string](/builtin#string), [int](/builtin#int), [error](/builtin#error)) ``` LastCheckpoint returns the directory name and index of the most recent checkpoint. If dir does not contain any checkpoints, ErrNotFound is returned. #### func [MigrateWAL](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L1236) [¶](#MigrateWAL) ``` func MigrateWAL(logger [log](/github.com/go-kit/kit/log).[Logger](/github.com/go-kit/kit/log#Logger), dir [string](/builtin#string)) (err [error](/builtin#error)) ``` MigrateWAL rewrites the deprecated write ahead log into the new format. 
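Since `DefaultOptions` (shown above) is a plain pointer to an `Options` value, a caller can copy it and override individual fields before calling `Open`. The sketch below is illustrative rather than from the package docs: the directory name and field values are made up, and it assumes (as in v0.10.0) that `Open` substitutes no-op implementations when the logger and registerer are nil.

```go
package main

import (
	"log"

	"github.com/prometheus/tsdb"
)

func main() {
	// Copy the package defaults, then override selected fields.
	// Durations in Options are expressed in milliseconds.
	opts := *tsdb.DefaultOptions
	opts.RetentionDuration = 30 * 24 * 60 * 60 * 1000 // keep ~30 days of data
	opts.WALCompression = true                        // Snappy-compress WAL records

	// Assumption: nil logger/registerer are replaced by no-ops internally.
	db, err := tsdb.Open("data", nil, nil, &opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```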
#### func [PostingsForMatchers](https://github.com/prometheus/tsdb/blob/v0.10.0/querier.go#L328) [¶](#PostingsForMatchers) ``` func PostingsForMatchers(ix [IndexReader](#IndexReader), ms ...[labels](/github.com/prometheus/[email protected]/labels).[Matcher](/github.com/prometheus/[email protected]/labels#Matcher)) ([index](/github.com/prometheus/[email protected]/index).[Postings](/github.com/prometheus/[email protected]/index#Postings), [error](/builtin#error)) ``` PostingsForMatchers assembles a single postings iterator against the index reader based on the given matchers. ### Types [¶](#pkg-types) #### type [Appendable](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L146) [¶](#Appendable) ``` type Appendable interface { // Appender returns a new Appender against an underlying store. Appender() [Appender](#Appender) } ``` Appendable defines an entity to which data can be appended. #### type [Appender](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L93) [¶](#Appender) ``` type Appender interface { // Add adds a sample pair for the given series. A reference number is // returned which can be used to add further samples in the same or later // transactions. // Returned reference numbers are ephemeral and may be rejected in calls // to AddFast() at any point. Adding the sample via Add() returns a new // reference number. // If the reference is 0 it must not be used for caching. Add(l [labels](/github.com/prometheus/[email protected]/labels).[Labels](/github.com/prometheus/[email protected]/labels#Labels), t [int64](/builtin#int64), v [float64](/builtin#float64)) ([uint64](/builtin#uint64), [error](/builtin#error)) // AddFast adds a sample pair for the referenced series. It is generally // faster than adding a sample by providing its full label set. AddFast(ref [uint64](/builtin#uint64), t [int64](/builtin#int64), v [float64](/builtin#float64)) [error](/builtin#error) // Commit submits the collected samples and purges the batch. Commit() [error](/builtin#error) // Rollback rolls back all modifications made in the appender so far. Rollback() [error](/builtin#error) } ``` Appender allows appending a batch of data. It must be completed with a call to Commit or Rollback and must not be reused afterwards. Operations on the Appender interface are not goroutine-safe. #### type [Block](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L267) [¶](#Block) ``` type Block struct { // contains filtered or unexported fields } ``` Block represents a directory of time series data covering a continuous time range. #### func [OpenBlock](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L293) [¶](#OpenBlock) ``` func OpenBlock(logger [log](/github.com/go-kit/kit/log).[Logger](/github.com/go-kit/kit/log#Logger), dir [string](/builtin#string), pool [chunkenc](/github.com/prometheus/[email protected]/chunkenc).[Pool](/github.com/prometheus/[email protected]/chunkenc#Pool)) (pb *[Block](#Block), err [error](/builtin#error)) ``` OpenBlock opens the block in the directory. It can be passed a chunk pool, which is used to instantiate chunk structs. #### func (*Block) [Chunks](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L406) [¶](#Block.Chunks) ``` func (pb *[Block](#Block)) Chunks() ([ChunkReader](#ChunkReader), [error](/builtin#error)) ``` Chunks returns a new ChunkReader against the block data. 
#### func (*Block) [CleanTombstones](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L574) [¶](#Block.CleanTombstones) ``` func (pb *[Block](#Block)) CleanTombstones(dest [string](/builtin#string), c [Compactor](#Compactor)) (*[ulid](/github.com/oklog/ulid).[ULID](/github.com/oklog/ulid#ULID), [error](/builtin#error)) ``` CleanTombstones will remove the tombstones and rewrite the block (only if there are any tombstones). If there was a rewrite, then it returns the ULID of the new block written, else nil. #### func (*Block) [Close](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L346) [¶](#Block.Close) ``` func (pb *[Block](#Block)) Close() [error](/builtin#error) ``` Close closes the on-disk block. It blocks as long as there are readers reading from the block. #### func (*Block) [Delete](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L505) [¶](#Block.Delete) ``` func (pb *[Block](#Block)) Delete(mint, maxt [int64](/builtin#int64), ms ...[labels](/github.com/prometheus/[email protected]/labels).[Matcher](/github.com/prometheus/[email protected]/labels#Matcher)) [error](/builtin#error) ``` Delete matching series between mint and maxt in the block. #### func (*Block) [Dir](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L367) [¶](#Block.Dir) ``` func (pb *[Block](#Block)) Dir() [string](/builtin#string) ``` Dir returns the directory of the block. #### func (*Block) [GetSymbolTableSize](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L422) [¶](#Block.GetSymbolTableSize) ``` func (pb *[Block](#Block)) GetSymbolTableSize() [uint64](/builtin#uint64) ``` GetSymbolTableSize returns the Symbol Table Size in the index of this block. #### func (*Block) [Index](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L398) [¶](#Block.Index) ``` func (pb *[Block](#Block)) Index() ([IndexReader](#IndexReader), [error](/builtin#error)) ``` Index returns a new IndexReader against the block data. #### func (*Block) [LabelNames](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L644) [¶](#Block.LabelNames) ``` func (pb *[Block](#Block)) LabelNames() ([][string](/builtin#string), [error](/builtin#error)) ``` LabelNames returns all the unique label names present in the Block in sorted order. #### func (*Block) [MaxTime](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L376) [¶](#Block.MaxTime) added in v0.5.0 ``` func (pb *[Block](#Block)) MaxTime() [int64](/builtin#int64) ``` MaxTime returns the max time of the meta. #### func (*Block) [Meta](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L370) [¶](#Block.Meta) ``` func (pb *[Block](#Block)) Meta() [BlockMeta](#BlockMeta) ``` Meta returns meta information about the block. #### func (*Block) [MinTime](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L373) [¶](#Block.MinTime) added in v0.5.0 ``` func (pb *[Block](#Block)) MinTime() [int64](/builtin#int64) ``` MinTime returns the min time of the meta. #### func (*Block) [OverlapsClosedInterval](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L637) [¶](#Block.OverlapsClosedInterval) ``` func (pb *[Block](#Block)) OverlapsClosedInterval(mint, maxt [int64](/builtin#int64)) [bool](/builtin#bool) ``` OverlapsClosedInterval returns true if the block overlaps [mint, maxt]. #### func (*Block) [Size](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L379) [¶](#Block.Size) added in v0.4.0 ``` func (pb *[Block](#Block)) Size() [int64](/builtin#int64) ``` Size returns the number of bytes that the block takes up. 
#### func (*Block) [Snapshot](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L597) [¶](#Block.Snapshot) ``` func (pb *[Block](#Block)) Snapshot(dir [string](/builtin#string)) [error](/builtin#error) ``` Snapshot creates a snapshot of the block into dir. #### func (*Block) [String](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L362) [¶](#Block.String) ``` func (pb *[Block](#Block)) String() [string](/builtin#string) ``` #### func (*Block) [Tombstones](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L414) [¶](#Block.Tombstones) ``` func (pb *[Block](#Block)) Tombstones() ([TombstoneReader](#TombstoneReader), [error](/builtin#error)) ``` Tombstones returns a new TombstoneReader against the block data. #### type [BlockDesc](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L180) [¶](#BlockDesc) ``` type BlockDesc struct { ULID [ulid](/github.com/oklog/ulid).[ULID](/github.com/oklog/ulid#ULID) `json:"ulid"` MinTime [int64](/builtin#int64) `json:"minTime"` MaxTime [int64](/builtin#int64) `json:"maxTime"` } ``` BlockDesc describes a block by ULID and time range. #### type [BlockMeta](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L152) [¶](#BlockMeta) ``` type BlockMeta struct { // Unique identifier for the block and its contents. Changes on compaction. ULID [ulid](/github.com/oklog/ulid).[ULID](/github.com/oklog/ulid#ULID) `json:"ulid"` // MinTime and MaxTime specify the time range all samples // in the block are in. MinTime [int64](/builtin#int64) `json:"minTime"` MaxTime [int64](/builtin#int64) `json:"maxTime"` // Stats about the contents of the block. Stats [BlockStats](#BlockStats) `json:"stats,omitempty"` // Information on compactions the block was created from. Compaction [BlockMetaCompaction](#BlockMetaCompaction) `json:"compaction"` // Version of the index format. Version [int](/builtin#int) `json:"version"` } ``` BlockMeta provides meta information about a block. #### type [BlockMetaCompaction](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L187) [¶](#BlockMetaCompaction) ``` type BlockMetaCompaction struct { // Maximum number of compaction cycles any source block has // gone through. Level [int](/builtin#int) `json:"level"` // ULIDs of all source head blocks that went into the block. Sources [][ulid](/github.com/oklog/ulid).[ULID](/github.com/oklog/ulid#ULID) `json:"sources,omitempty"` // Indicates that compaction resulted in a block without any samples, // so it should be deleted on the next reload. Deletable [bool](/builtin#bool) `json:"deletable,omitempty"` // Short descriptions of the direct blocks that were used to create // this block. Parents [][BlockDesc](#BlockDesc) `json:"parents,omitempty"` Failed [bool](/builtin#bool) `json:"failed,omitempty"` } ``` BlockMetaCompaction holds information about compactions a block went through. #### type [BlockReader](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L131) [¶](#BlockReader) ``` type BlockReader interface { // Index returns an IndexReader over the block's data. Index() ([IndexReader](#IndexReader), [error](/builtin#error)) // Chunks returns a ChunkReader over the block's data. Chunks() ([ChunkReader](#ChunkReader), [error](/builtin#error)) // Tombstones returns a TombstoneReader over the block's deleted data. Tombstones() ([TombstoneReader](#TombstoneReader), [error](/builtin#error)) // Meta provides meta information about the block reader. Meta() [BlockMeta](#BlockMeta) } ``` BlockReader provides reading access to a data block. 
#### type [BlockStats](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L172) [¶](#BlockStats) ``` type BlockStats struct { NumSamples [uint64](/builtin#uint64) `json:"numSamples,omitempty"` NumSeries [uint64](/builtin#uint64) `json:"numSeries,omitempty"` NumChunks [uint64](/builtin#uint64) `json:"numChunks,omitempty"` NumTombstones [uint64](/builtin#uint64) `json:"numTombstones,omitempty"` } ``` BlockStats contains stats about contents of a block. #### type [CheckpointStats](https://github.com/prometheus/tsdb/blob/v0.10.0/checkpoint.go#L34) [¶](#CheckpointStats) ``` type CheckpointStats struct { DroppedSeries [int](/builtin#int) DroppedSamples [int](/builtin#int) DroppedTombstones [int](/builtin#int) TotalSeries [int](/builtin#int) // Processed series including dropped ones. TotalSamples [int](/builtin#int) // Processed samples including dropped ones. TotalTombstones [int](/builtin#int) // Processed tombstones including dropped ones. } ``` CheckpointStats returns stats about a created checkpoint. #### func [Checkpoint](https://github.com/prometheus/tsdb/blob/v0.10.0/checkpoint.go#L102) [¶](#Checkpoint) ``` func Checkpoint(w *[wal](/github.com/prometheus/[email protected]/wal).[WAL](/github.com/prometheus/[email protected]/wal#WAL), from, to [int](/builtin#int), keep func(id [uint64](/builtin#uint64)) [bool](/builtin#bool), mint [int64](/builtin#int64)) (*[CheckpointStats](#CheckpointStats), [error](/builtin#error)) ``` Checkpoint creates a compacted checkpoint of segments in range [first, last] in the given WAL. It includes the most recent checkpoint if it exists. All series not satisfying keep and samples below mint are dropped. The checkpoint is stored in a directory named checkpoint.N in the same segmented format as the original WAL itself. This makes it easy to read it through the WAL package and concatenate it with the original WAL. #### type [ChunkReader](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L122) [¶](#ChunkReader) ``` type ChunkReader interface { // Chunk returns the series data chunk with the given reference. Chunk(ref [uint64](/builtin#uint64)) ([chunkenc](/github.com/prometheus/[email protected]/chunkenc).[Chunk](/github.com/prometheus/[email protected]/chunkenc#Chunk), [error](/builtin#error)) // Close releases all underlying resources of the reader. Close() [error](/builtin#error) } ``` ChunkReader provides reading access of serialized time series data. #### type [ChunkSeriesSet](https://github.com/prometheus/tsdb/blob/v0.10.0/querier.go#L671) [¶](#ChunkSeriesSet) ``` type ChunkSeriesSet interface { Next() [bool](/builtin#bool) At() ([labels](/github.com/prometheus/[email protected]/labels).[Labels](/github.com/prometheus/[email protected]/labels#Labels), [][chunks](/github.com/prometheus/[email protected]/chunks).[Meta](/github.com/prometheus/[email protected]/chunks#Meta), [Intervals](#Intervals)) Err() [error](/builtin#error) } ``` ChunkSeriesSet exposes the chunks and intervals of a series instead of the actual series itself. #### func [LookupChunkSeries](https://github.com/prometheus/tsdb/blob/v0.10.0/querier.go#L692) [¶](#LookupChunkSeries) ``` func LookupChunkSeries(ir [IndexReader](#IndexReader), tr [TombstoneReader](#TombstoneReader), ms ...[labels](/github.com/prometheus/[email protected]/labels).[Matcher](/github.com/prometheus/[email protected]/labels#Matcher)) ([ChunkSeriesSet](#ChunkSeriesSet), [error](/builtin#error)) ``` LookupChunkSeries retrieves all series for the given matchers and returns a ChunkSeriesSet over them. 
It drops chunks based on tombstones in the given reader. #### type [ChunkWriter](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L109) [¶](#ChunkWriter) ``` type ChunkWriter interface { // WriteChunks writes several chunks. The Chunk field of the ChunkMetas // must be populated. // After returning successfully, the Ref fields in the ChunkMetas // are set and can be used to retrieve the chunks from the written data. WriteChunks(chunks ...[chunks](/github.com/prometheus/[email protected]/chunks).[Meta](/github.com/prometheus/[email protected]/chunks#Meta)) [error](/builtin#error) // Close writes any required finalization and closes the resources // associated with the underlying writer. Close() [error](/builtin#error) } ``` ChunkWriter serializes a time block of chunked series data. #### type [Compactor](https://github.com/prometheus/tsdb/blob/v0.10.0/compact.go#L54) [¶](#Compactor) ``` type Compactor interface { // Plan returns a set of directories that can be compacted concurrently. // The directories can be overlapping. // Results returned when compactions are in progress are undefined. Plan(dir [string](/builtin#string)) ([][string](/builtin#string), [error](/builtin#error)) // Write persists a Block into a directory. // No Block is written when the resulting Block has 0 samples; an empty ulid.ULID{} is returned. Write(dest [string](/builtin#string), b [BlockReader](#BlockReader), mint, maxt [int64](/builtin#int64), parent *[BlockMeta](#BlockMeta)) ([ulid](/github.com/oklog/ulid).[ULID](/github.com/oklog/ulid#ULID), [error](/builtin#error)) // Compact runs compaction against the provided directories. Must // only be called concurrently with results of Plan(). // Can optionally pass a list of already open blocks, // to avoid having to reopen them. // When the resulting Block has 0 samples: // * No block is written. // * The source dirs are marked Deletable. // * Returns empty ulid.ULID{}. Compact(dest [string](/builtin#string), dirs [][string](/builtin#string), open []*[Block](#Block)) ([ulid](/github.com/oklog/ulid).[ULID](/github.com/oklog/ulid#ULID), [error](/builtin#error)) } ``` Compactor provides compaction against an underlying storage of time series data. #### type [DB](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L116) [¶](#DB) ``` type DB struct { // contains filtered or unexported fields } ``` DB handles reads and writes of time series falling into a hashed partition of a series database. #### func [Open](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L426) [¶](#Open) ``` func Open(dir [string](/builtin#string), l [log](/github.com/go-kit/kit/log).[Logger](/github.com/go-kit/kit/log#Logger), r [prometheus](/github.com/prometheus/client_golang/prometheus).[Registerer](/github.com/prometheus/client_golang/prometheus#Registerer), opts *[Options](#Options)) (db *[DB](#DB), err [error](/builtin#error)) ``` Open returns a new DB in the given directory. #### func (*DB) [Appender](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L565) [¶](#DB.Appender) ``` func (db *[DB](#DB)) Appender() [Appender](#Appender) ``` Appender opens a new appender against the database. #### func (*DB) [Blocks](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L1037) [¶](#DB.Blocks) ``` func (db *[DB](#DB)) Blocks() []*[Block](#Block) ``` Blocks returns the database's persisted blocks. 
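As a usage sketch for the `Open`/`Appender` workflow documented above (not from the package docs: the directory, metric name, and the `labels.FromStrings` helper from `github.com/prometheus/tsdb/labels` are assumptions):

```go
package main

import (
	"log"
	"time"

	"github.com/prometheus/tsdb"
	"github.com/prometheus/tsdb/labels"
)

func main() {
	db, err := tsdb.Open("data", nil, nil, tsdb.DefaultOptions)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	app := db.Appender()
	ts := time.Now().Unix() * 1000 // timestamps are in milliseconds

	// Add returns an ephemeral reference number for the series.
	lset := labels.FromStrings("__name__", "demo_metric", "job", "demo")
	ref, err := app.Add(lset, ts, 1.0)
	if err != nil {
		log.Fatal(err)
	}

	// Subsequent samples for the same series can take the cheaper AddFast
	// path; fall back to Add if the reference has been rejected.
	if err := app.AddFast(ref, ts+15000, 2.0); err != nil {
		if _, err := app.Add(lset, ts+15000, 2.0); err != nil {
			log.Fatal(err)
		}
	}

	// Commit makes the batch visible to queriers; Rollback would discard it.
	if err := app.Commit(); err != nil {
		log.Fatal(err)
	}
}
```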
#### func (*DB) [CleanTombstones](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L1214) [¶](#DB.CleanTombstones) ``` func (db *[DB](#DB)) CleanTombstones() (err [error](/builtin#error)) ``` CleanTombstones re-writes any blocks with tombstones. #### func (*DB) [Close](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L1050) [¶](#DB.Close) ``` func (db *[DB](#DB)) Close() [error](/builtin#error) ``` Close the partition. #### func (*DB) [Delete](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L1191) [¶](#DB.Delete) ``` func (db *[DB](#DB)) Delete(mint, maxt [int64](/builtin#int64), ms ...[labels](/github.com/prometheus/[email protected]/labels).[Matcher](/github.com/prometheus/[email protected]/labels#Matcher)) [error](/builtin#error) ``` Delete implements deletion of metrics. It only has atomicity guarantees on a per-block basis. #### func (*DB) [Dir](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L521) [¶](#DB.Dir) ``` func (db *[DB](#DB)) Dir() [string](/builtin#string) ``` Dir returns the directory of the database. #### func (*DB) [DisableCompactions](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L1077) [¶](#DB.DisableCompactions) ``` func (db *[DB](#DB)) DisableCompactions() ``` DisableCompactions disables auto compactions. #### func (*DB) [EnableCompactions](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L1086) [¶](#DB.EnableCompactions) ``` func (db *[DB](#DB)) EnableCompactions() ``` EnableCompactions enables auto compactions. #### func (*DB) [Head](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L1045) [¶](#DB.Head) ``` func (db *[DB](#DB)) Head() *[Head](#Head) ``` Head returns the database's head. #### func (*DB) [Querier](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L1138) [¶](#DB.Querier) ``` func (db *[DB](#DB)) Querier(mint, maxt [int64](/builtin#int64)) ([Querier](#Querier), [error](/builtin#error)) ``` Querier returns a new querier over the data partition for the given time range. A goroutine must not handle more than one open Querier. #### func (*DB) [Snapshot](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L1096) [¶](#DB.Snapshot) ``` func (db *[DB](#DB)) Snapshot(dir [string](/builtin#string), withHead [bool](/builtin#bool)) [error](/builtin#error) ``` Snapshot writes the current data to the directory. If withHead is set to true it will create a new block containing all data that's currently in the memory buffer/WAL. #### func (*DB) [String](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L1032) [¶](#DB.String) ``` func (db *[DB](#DB)) String() [string](/builtin#string) ``` #### type [DBReadOnly](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L259) [¶](#DBReadOnly) added in v0.10.0 ``` type DBReadOnly struct { // contains filtered or unexported fields } ``` DBReadOnly provides APIs for read-only operations on a database. The current implementation doesn't support concurrency, so all API calls should happen in the same goroutine. #### func [OpenDBReadOnly](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L267) [¶](#OpenDBReadOnly) added in v0.10.0 ``` func OpenDBReadOnly(dir [string](/builtin#string), l [log](/github.com/go-kit/kit/log).[Logger](/github.com/go-kit/kit/log#Logger)) (*[DBReadOnly](#DBReadOnly), [error](/builtin#error)) ``` OpenDBReadOnly opens DB in the given directory for read-only operations. 
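A read-only query sketch against an existing data directory (assumptions beyond the docs above: the directory name, the metric name, the `labels.NewEqualMatcher` helper from `github.com/prometheus/tsdb/labels`, and that a nil logger is tolerated):

```go
package main

import (
	"fmt"
	"log"
	"math"

	"github.com/prometheus/tsdb"
	"github.com/prometheus/tsdb/labels"
)

func main() {
	// Open the directory without taking write ownership of it.
	db, err := tsdb.OpenDBReadOnly("data", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Query across the whole time range.
	q, err := db.Querier(math.MinInt64, math.MaxInt64)
	if err != nil {
		log.Fatal(err)
	}
	defer q.Close()

	set, err := q.Select(labels.NewEqualMatcher("__name__", "demo_metric"))
	if err != nil {
		log.Fatal(err)
	}
	for set.Next() {
		s := set.At()
		it := s.Iterator()
		for it.Next() {
			t, v := it.At()
			fmt.Println(s.Labels(), t, v)
		}
		if err := it.Err(); err != nil {
			log.Fatal(err)
		}
	}
	if err := set.Err(); err != nil {
		log.Fatal(err)
	}
}
```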
#### func (*DBReadOnly) [Blocks](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L350) [¶](#DBReadOnly.Blocks) added in v0.10.0 ``` func (db *[DBReadOnly](#DBReadOnly)) Blocks() ([][BlockReader](#BlockReader), [error](/builtin#error)) ``` Blocks returns a slice of block readers for persisted blocks. #### func (*DBReadOnly) [Close](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L409) [¶](#DBReadOnly.Close) added in v0.10.0 ``` func (db *[DBReadOnly](#DBReadOnly)) Close() [error](/builtin#error) ``` Close all block readers. #### func (*DBReadOnly) [Querier](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L285) [¶](#DBReadOnly.Querier) added in v0.10.0 ``` func (db *[DBReadOnly](#DBReadOnly)) Querier(mint, maxt [int64](/builtin#int64)) ([Querier](#Querier), [error](/builtin#error)) ``` Querier loads the WAL and returns a new querier over the data partition for the given time range. The current implementation doesn't support multiple Queriers. #### type [Head](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L61) [¶](#Head) ``` type Head struct { // contains filtered or unexported fields } ``` Head handles reads and writes of time series data within a time window. #### func [NewHead](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L230) [¶](#NewHead) ``` func NewHead(r [prometheus](/github.com/prometheus/client_golang/prometheus).[Registerer](/github.com/prometheus/client_golang/prometheus#Registerer), l [log](/github.com/go-kit/kit/log).[Logger](/github.com/go-kit/kit/log#Logger), wal *[wal](/github.com/prometheus/[email protected]/wal).[WAL](/github.com/prometheus/[email protected]/wal#WAL), chunkRange [int64](/builtin#int64)) (*[Head](#Head), [error](/builtin#error)) ``` NewHead opens the head block in dir. #### func (*Head) [Appender](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L762) [¶](#Head.Appender) ``` func (h *[Head](#Head)) Appender() [Appender](#Appender) ``` Appender returns a new Appender on the database. #### func (*Head) [Chunks](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L1117) [¶](#Head.Chunks) ``` func (h *[Head](#Head)) Chunks() ([ChunkReader](#ChunkReader), [error](/builtin#error)) ``` Chunks returns a ChunkReader against the block. #### func (*Head) [Close](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L1166) [¶](#Head.Close) ``` func (h *[Head](#Head)) Close() [error](/builtin#error) ``` Close flushes the WAL and closes the head. #### func (*Head) [Delete](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L956) [¶](#Head.Delete) ``` func (h *[Head](#Head)) Delete(mint, maxt [int64](/builtin#int64), ms ...[labels](/github.com/prometheus/[email protected]/labels).[Matcher](/github.com/prometheus/[email protected]/labels#Matcher)) [error](/builtin#error) ``` Delete all samples in the range of [mint, maxt] for series that satisfy the given label matchers. #### func (*Head) [Index](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L1105) [¶](#Head.Index) ``` func (h *[Head](#Head)) Index() ([IndexReader](#IndexReader), [error](/builtin#error)) ``` Index returns an IndexReader against the block. #### func (*Head) [Init](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L500) [¶](#Head.Init) ``` func (h *[Head](#Head)) Init(minValidTime [int64](/builtin#int64)) [error](/builtin#error) ``` Init loads data from the write ahead log and prepares the head for writes. It should be called before using an appender so that it limits the ingested samples to the head's min valid time. 
#### func (*Head) [MaxTime](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L1154) [¶](#Head.MaxTime) ``` func (h *[Head](#Head)) MaxTime() [int64](/builtin#int64) ``` MaxTime returns the highest timestamp seen in data of the head. #### func (*Head) [Meta](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L1135) [¶](#Head.Meta) added in v0.10.0 ``` func (h *[Head](#Head)) Meta() [BlockMeta](#BlockMeta) ``` Meta returns meta information about the head. The head is dynamic so will return dynamic results. #### func (*Head) [MinTime](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L1149) [¶](#Head.MinTime) ``` func (h *[Head](#Head)) MinTime() [int64](/builtin#int64) ``` MinTime returns the lowest time bound on visible data in the head. #### func (*Head) [NumSeries](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L1129) [¶](#Head.NumSeries) added in v0.10.0 ``` func (h *[Head](#Head)) NumSeries() [uint64](/builtin#uint64) ``` NumSeries returns the number of active series in the head. #### func (*Head) [Tombstones](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L1100) [¶](#Head.Tombstones) ``` func (h *[Head](#Head)) Tombstones() ([TombstoneReader](#TombstoneReader), [error](/builtin#error)) ``` Tombstones returns a new reader over the head's tombstones #### func (*Head) [Truncate](https://github.com/prometheus/tsdb/blob/v0.10.0/head.go#L564) [¶](#Head.Truncate) ``` func (h *[Head](#Head)) Truncate(mint [int64](/builtin#int64)) (err [error](/builtin#error)) ``` Truncate removes old data before mint from the head. #### type [IndexReader](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L66) [¶](#IndexReader) ``` type IndexReader interface { // Symbols returns a set of string symbols that may occur in series' labels // and indices. Symbols() (map[[string](/builtin#string)]struct{}, [error](/builtin#error)) // LabelValues returns the possible label values. LabelValues(names ...[string](/builtin#string)) ([index](/github.com/prometheus/[email protected]/index).[StringTuples](/github.com/prometheus/[email protected]/index#StringTuples), [error](/builtin#error)) // Postings returns the postings list iterator for the label pair. // The Postings here contain the offsets to the series inside the index. // Found IDs are not strictly required to point to a valid Series, e.g. during // background garbage collections. Postings(name, value [string](/builtin#string)) ([index](/github.com/prometheus/[email protected]/index).[Postings](/github.com/prometheus/[email protected]/index#Postings), [error](/builtin#error)) // SortedPostings returns a postings list that is reordered to be sorted // by the label set of the underlying series. SortedPostings([index](/github.com/prometheus/[email protected]/index).[Postings](/github.com/prometheus/[email protected]/index#Postings)) [index](/github.com/prometheus/[email protected]/index).[Postings](/github.com/prometheus/[email protected]/index#Postings) // Series populates the given labels and chunk metas for the series identified // by the reference. // Returns ErrNotFound if the ref does not resolve to a known series. Series(ref [uint64](/builtin#uint64), lset *[labels](/github.com/prometheus/[email protected]/labels).[Labels](/github.com/prometheus/[email protected]/labels#Labels), chks *[][chunks](/github.com/prometheus/[email protected]/chunks).[Meta](/github.com/prometheus/[email protected]/chunks#Meta)) [error](/builtin#error) // LabelIndices returns a list of string tuples for which a label value index exists. 
// NOTE: This is deprecated. Use `LabelNames()` instead. LabelIndices() ([][][string](/builtin#string), [error](/builtin#error)) // LabelNames returns all the unique label names present in the index in sorted order. LabelNames() ([][string](/builtin#string), [error](/builtin#error)) // Close releases the underlying resources of the reader. Close() [error](/builtin#error) } ``` IndexReader provides reading access of serialized index data. #### type [IndexWriter](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L39) [¶](#IndexWriter) ``` type IndexWriter interface { // AddSymbols registers all string symbols that are encountered in series // and other indices. AddSymbols(sym map[[string](/builtin#string)]struct{}) [error](/builtin#error) // AddSeries populates the index writer with a series and its offsets // of chunks that the index can reference. // Implementations may require series to be inserted in increasing order by // their labels. // The reference numbers are used to resolve entries in postings lists that // are added later. AddSeries(ref [uint64](/builtin#uint64), l [labels](/github.com/prometheus/[email protected]/labels).[Labels](/github.com/prometheus/[email protected]/labels#Labels), chunks ...[chunks](/github.com/prometheus/[email protected]/chunks).[Meta](/github.com/prometheus/[email protected]/chunks#Meta)) [error](/builtin#error) // WriteLabelIndex serializes an index from label names to values. // The passed-in values are chained tuples of strings of the length of names. WriteLabelIndex(names [][string](/builtin#string), values [][string](/builtin#string)) [error](/builtin#error) // WritePostings writes a postings list for a single label pair. // The Postings here contain refs to the series that were added. WritePostings(name, value [string](/builtin#string), it [index](/github.com/prometheus/[email protected]/index).[Postings](/github.com/prometheus/[email protected]/index#Postings)) [error](/builtin#error) // Close writes any finalization and closes the resources associated with // the underlying writer. Close() [error](/builtin#error) } ``` IndexWriter serializes the index for a block of series data. The methods must be called in the order they are specified in. #### type [Interval](https://github.com/prometheus/tsdb/blob/v0.10.0/tombstones.go#L238) [¶](#Interval) ``` type Interval struct { Mint, Maxt [int64](/builtin#int64) } ``` Interval represents a single time-interval. #### type [Intervals](https://github.com/prometheus/tsdb/blob/v0.10.0/tombstones.go#L257) [¶](#Intervals) ``` type Intervals [][Interval](#Interval) ``` Intervals represents a set of increasing and non-overlapping time-intervals. #### type [LeveledCompactor](https://github.com/prometheus/tsdb/blob/v0.10.0/compact.go#L76) [¶](#LeveledCompactor) ``` type LeveledCompactor struct { // contains filtered or unexported fields } ``` LeveledCompactor implements the Compactor interface. 
#### func [NewLeveledCompactor](https://github.com/prometheus/tsdb/blob/v0.10.0/compact.go#L145) [¶](#NewLeveledCompactor) ``` func NewLeveledCompactor(ctx [context](/context).[Context](/context#Context), r [prometheus](/github.com/prometheus/client_golang/prometheus).[Registerer](/github.com/prometheus/client_golang/prometheus#Registerer), l [log](/github.com/go-kit/kit/log).[Logger](/github.com/go-kit/kit/log#Logger), ranges [][int64](/builtin#int64), pool [chunkenc](/github.com/prometheus/[email protected]/chunkenc).[Pool](/github.com/prometheus/[email protected]/chunkenc#Pool)) (*[LeveledCompactor](#LeveledCompactor), [error](/builtin#error)) ``` NewLeveledCompactor returns a LeveledCompactor. #### func (*LeveledCompactor) [Compact](https://github.com/prometheus/tsdb/blob/v0.10.0/compact.go#L373) [¶](#LeveledCompactor.Compact) ``` func (c *[LeveledCompactor](#LeveledCompactor)) Compact(dest [string](/builtin#string), dirs [][string](/builtin#string), open []*[Block](#Block)) (uid [ulid](/github.com/oklog/ulid).[ULID](/github.com/oklog/ulid#ULID), err [error](/builtin#error)) ``` Compact creates a new block in the compactor's directory from the blocks in the provided directories. #### func (*LeveledCompactor) [Plan](https://github.com/prometheus/tsdb/blob/v0.10.0/compact.go#L170) [¶](#LeveledCompactor.Plan) ``` func (c *[LeveledCompactor](#LeveledCompactor)) Plan(dir [string](/builtin#string)) ([][string](/builtin#string), [error](/builtin#error)) ``` Plan returns a list of compactable blocks in the provided directory. #### func (*LeveledCompactor) [Write](https://github.com/prometheus/tsdb/blob/v0.10.0/compact.go#L466) [¶](#LeveledCompactor.Write) ``` func (c *[LeveledCompactor](#LeveledCompactor)) Write(dest [string](/builtin#string), b [BlockReader](#BlockReader), mint, maxt [int64](/builtin#int64), parent *[BlockMeta](#BlockMeta)) ([ulid](/github.com/oklog/ulid).[ULID](/github.com/oklog/ulid#ULID), [error](/builtin#error)) ``` #### type [Options](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L58) [¶](#Options) ``` type Options struct { // Segments (wal files) max size. // WALSegmentSize = 0, segment size is default size. // WALSegmentSize > 0, segment size is WALSegmentSize. // WALSegmentSize < 0, wal is disabled. WALSegmentSize [int](/builtin#int) // Duration of persisted data to keep. RetentionDuration [uint64](/builtin#uint64) // Maximum number of bytes in blocks to be retained. // 0 or less means disabled. // NOTE: For proper storage calculations need to consider // the size of the WAL folder which is not added when calculating // the current size of the database. MaxBytes [int64](/builtin#int64) // The sizes of the Blocks. BlockRanges [][int64](/builtin#int64) // NoLockfile disables creation and consideration of a lock file. NoLockfile [bool](/builtin#bool) // Overlapping blocks are allowed if AllowOverlappingBlocks is true. // This in-turn enables vertical compaction and vertical query merge. AllowOverlappingBlocks [bool](/builtin#bool) // WALCompression will turn on Snappy compression for records on the WAL. WALCompression [bool](/builtin#bool) } ``` Options of the DB storage. #### type [Overlaps](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L938) [¶](#Overlaps) ``` type Overlaps map[[TimeRange](#TimeRange)][][BlockMeta](#BlockMeta) ``` Overlaps contains overlapping blocks aggregated by overlapping range. 
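To make the `Plan`/`Compact` contract shown above concrete, here is a sketch of a manual compaction pass. Assumptions not confirmed by the docs: the data directory name, and that a nil registerer and nil chunk pool are acceptable (with defaults substituted internally in v0.10.0); a no-op go-kit logger is passed explicitly to stay on the safe side.

```go
package main

import (
	"context"
	stdlog "log"
	"time"

	kitlog "github.com/go-kit/kit/log"
	"github.com/prometheus/tsdb"
)

func main() {
	// The same exponential ranges the DB uses by default:
	// 2h base (in ms), 3 steps, factor 5 -> 2h, 10h, 50h.
	ranges := tsdb.ExponentialBlockRanges(int64(2*time.Hour)/1e6, 3, 5)

	c, err := tsdb.NewLeveledCompactor(context.Background(), nil, kitlog.NewNopLogger(), ranges, nil)
	if err != nil {
		stdlog.Fatal(err)
	}

	// Plan picks block directories that may be compacted together.
	dirs, err := c.Plan("data")
	if err != nil {
		stdlog.Fatal(err)
	}
	if len(dirs) == 0 {
		stdlog.Println("nothing to compact")
		return
	}

	// Compact merges the planned blocks into a new block under "data".
	uid, err := c.Compact("data", dirs, nil)
	if err != nil {
		stdlog.Fatal(err)
	}
	stdlog.Println("wrote block", uid)
}
```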
#### func [OverlappingBlocks](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L966) [¶](#OverlappingBlocks) ``` func OverlappingBlocks(bm [][BlockMeta](#BlockMeta)) [Overlaps](#Overlaps) ``` OverlappingBlocks returns all overlapping blocks from the given meta files. #### func (Overlaps) [String](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L941) [¶](#Overlaps.String) ``` func (o [Overlaps](#Overlaps)) String() [string](/builtin#string) ``` String returns a human-readable string form of the overlapped blocks. #### type [Querier](https://github.com/prometheus/tsdb/blob/v0.10.0/querier.go#L32) [¶](#Querier) ``` type Querier interface { // Select returns a set of series that matches the given label matchers. Select(...[labels](/github.com/prometheus/[email protected]/labels).[Matcher](/github.com/prometheus/[email protected]/labels#Matcher)) ([SeriesSet](#SeriesSet), [error](/builtin#error)) // LabelValues returns all potential values for a label name. LabelValues([string](/builtin#string)) ([][string](/builtin#string), [error](/builtin#error)) // LabelValuesFor returns all potential values for a label name // under the constraint of another label. LabelValuesFor([string](/builtin#string), [labels](/github.com/prometheus/[email protected]/labels).[Label](/github.com/prometheus/[email protected]/labels#Label)) ([][string](/builtin#string), [error](/builtin#error)) // LabelNames returns all the unique label names present in the block in sorted order. LabelNames() ([][string](/builtin#string), [error](/builtin#error)) // Close releases the resources of the Querier. Close() [error](/builtin#error) } ``` Querier provides querying access over time series data of a fixed time range. #### func [NewBlockQuerier](https://github.com/prometheus/tsdb/blob/v0.10.0/querier.go#L178) [¶](#NewBlockQuerier) ``` func NewBlockQuerier(b [BlockReader](#BlockReader), mint, maxt [int64](/builtin#int64)) ([Querier](#Querier), [error](/builtin#error)) ``` NewBlockQuerier returns a querier against the reader. #### type [RecordDecoder](https://github.com/prometheus/tsdb/blob/v0.10.0/record.go#L42) [¶](#RecordDecoder) ``` type RecordDecoder struct { } ``` RecordDecoder decodes series, sample, and tombstone records. The zero value is ready to use. #### func (*RecordDecoder) [Samples](https://github.com/prometheus/tsdb/blob/v0.10.0/record.go#L91) [¶](#RecordDecoder.Samples) ``` func (d *[RecordDecoder](#RecordDecoder)) Samples(rec [][byte](/builtin#byte), samples [][RefSample](#RefSample)) ([][RefSample](#RefSample), [error](/builtin#error)) ``` Samples appends samples in rec to the given slice. #### func (*RecordDecoder) [Series](https://github.com/prometheus/tsdb/blob/v0.10.0/record.go#L59) [¶](#RecordDecoder.Series) ``` func (d *[RecordDecoder](#RecordDecoder)) Series(rec [][byte](/builtin#byte), series [][RefSeries](#RefSeries)) ([][RefSeries](#RefSeries), [error](/builtin#error)) ``` Series appends series in rec to the given slice. #### func (*RecordDecoder) [Tombstones](https://github.com/prometheus/tsdb/blob/v0.10.0/record.go#L126) [¶](#RecordDecoder.Tombstones) ``` func (d *[RecordDecoder](#RecordDecoder)) Tombstones(rec [][byte](/builtin#byte), tstones [][Stone](#Stone)) ([][Stone](#Stone), [error](/builtin#error)) ``` Tombstones appends tombstones in rec to the given slice. 
#### func (*RecordDecoder) [Type](https://github.com/prometheus/tsdb/blob/v0.10.0/record.go#L47) [¶](#RecordDecoder.Type) ``` func (d *[RecordDecoder](#RecordDecoder)) Type(rec [][byte](/builtin#byte)) [RecordType](#RecordType) ``` Type returns the type of the record. Return RecordInvalid if no valid record type is found. #### type [RecordEncoder](https://github.com/prometheus/tsdb/blob/v0.10.0/record.go#L151) [¶](#RecordEncoder) ``` type RecordEncoder struct { } ``` RecordEncoder encodes series, sample, and tombstones records. The zero value is ready to use. #### func (*RecordEncoder) [Samples](https://github.com/prometheus/tsdb/blob/v0.10.0/record.go#L172) [¶](#RecordEncoder.Samples) ``` func (e *[RecordEncoder](#RecordEncoder)) Samples(samples [][RefSample](#RefSample), b [][byte](/builtin#byte)) [][byte](/builtin#byte) ``` Samples appends the encoded samples to b and returns the resulting slice. #### func (*RecordEncoder) [Series](https://github.com/prometheus/tsdb/blob/v0.10.0/record.go#L155) [¶](#RecordEncoder.Series) ``` func (e *[RecordEncoder](#RecordEncoder)) Series(series [][RefSeries](#RefSeries), b [][byte](/builtin#byte)) [][byte](/builtin#byte) ``` Series appends the encoded series to b and returns the resulting slice. #### func (*RecordEncoder) [Tombstones](https://github.com/prometheus/tsdb/blob/v0.10.0/record.go#L196) [¶](#RecordEncoder.Tombstones) ``` func (e *[RecordEncoder](#RecordEncoder)) Tombstones(tstones [][Stone](#Stone), b [][byte](/builtin#byte)) [][byte](/builtin#byte) ``` Tombstones appends the encoded tombstones to b and returns the resulting slice. #### type [RecordType](https://github.com/prometheus/tsdb/blob/v0.10.0/record.go#L27) [¶](#RecordType) ``` type RecordType [uint8](/builtin#uint8) ``` RecordType represents the data type of a record. ``` const ( // RecordInvalid is returned for unrecognised WAL record types. RecordInvalid [RecordType](#RecordType) = 255 // RecordSeries is used to match WAL records of type Series. RecordSeries [RecordType](#RecordType) = 1 // RecordSamples is used to match WAL records of type Samples. RecordSamples [RecordType](#RecordType) = 2 // RecordTombstones is used to match WAL records of type Tombstones. RecordTombstones [RecordType](#RecordType) = 3 ) ``` #### type [RefSample](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L115) [¶](#RefSample) ``` type RefSample struct { Ref [uint64](/builtin#uint64) T [int64](/builtin#int64) V [float64](/builtin#float64) // contains filtered or unexported fields } ``` RefSample is a timestamp/value pair associated with a reference to a series. #### type [RefSeries](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L109) [¶](#RefSeries) ``` type RefSeries struct { Ref [uint64](/builtin#uint64) Labels [labels](/github.com/prometheus/[email protected]/labels).[Labels](/github.com/prometheus/[email protected]/labels#Labels) } ``` RefSeries is the series labels with the series ID. #### type [SegmentWAL](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L162) [¶](#SegmentWAL) ``` type SegmentWAL struct { // contains filtered or unexported fields } ``` SegmentWAL is a write ahead log for series data. DEPRECATED: use wal pkg combined with the record coders instead. 
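A round-trip sketch for the record coders documented above (the sample values are arbitrary; the zero-value encoder and decoder are ready to use, as stated):

```go
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/tsdb"
)

func main() {
	var enc tsdb.RecordEncoder
	var dec tsdb.RecordDecoder

	// Encode one samples record; a nil buffer lets the encoder allocate.
	rec := enc.Samples([]tsdb.RefSample{{Ref: 1, T: 1000, V: 1.5}}, nil)

	// The decoder first classifies the raw record...
	if dec.Type(rec) != tsdb.RecordSamples {
		log.Fatal("unexpected record type")
	}

	// ...then appends the decoded samples to the supplied slice.
	samples, err := dec.Samples(rec, nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range samples {
		fmt.Println(s.Ref, s.T, s.V)
	}
}
```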
#### func [OpenSegmentWAL](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L185) [¶](#OpenSegmentWAL)

```
func OpenSegmentWAL(dir [string](/builtin#string), logger [log](/github.com/go-kit/kit/log).[Logger](/github.com/go-kit/kit/log#Logger), flushInterval [time](/time).[Duration](/time#Duration), r [prometheus](/github.com/prometheus/client_golang/prometheus).[Registerer](/github.com/prometheus/client_golang/prometheus#Registerer)) (*[SegmentWAL](#SegmentWAL), [error](/builtin#error))
```

OpenSegmentWAL opens or creates a write ahead log in the given directory. The WAL must be read completely before new data is written.

#### func (*SegmentWAL) [Close](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L712) [¶](#SegmentWAL.Close)

```
func (w *[SegmentWAL](#SegmentWAL)) Close() [error](/builtin#error)
```

Close syncs all data and closes the underlying resources.

#### func (*SegmentWAL) [LogDeletes](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L483) [¶](#SegmentWAL.LogDeletes)

```
func (w *[SegmentWAL](#SegmentWAL)) LogDeletes(stones [][Stone](#Stone)) [error](/builtin#error)
```

LogDeletes writes a batch of new deletes to the log.

#### func (*SegmentWAL) [LogSamples](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L457) [¶](#SegmentWAL.LogSamples)

```
func (w *[SegmentWAL](#SegmentWAL)) LogSamples(samples [][RefSample](#RefSample)) [error](/builtin#error)
```

LogSamples writes a batch of new samples to the log.

#### func (*SegmentWAL) [LogSeries](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L430) [¶](#SegmentWAL.LogSeries)

```
func (w *[SegmentWAL](#SegmentWAL)) LogSeries(series [][RefSeries](#RefSeries)) [error](/builtin#error)
```

LogSeries writes a batch of new series labels to the log. The series have to be ordered.

#### func (*SegmentWAL) [Reader](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L285) [¶](#SegmentWAL.Reader)

```
func (w *[SegmentWAL](#SegmentWAL)) Reader() [WALReader](#WALReader)
```

Reader returns a new reader over the write ahead log data. It must be completely consumed before writing to the WAL.

#### func (*SegmentWAL) [Sync](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L628) [¶](#SegmentWAL.Sync)

```
func (w *[SegmentWAL](#SegmentWAL)) Sync() [error](/builtin#error)
```

Sync flushes the changes to disk.

#### func (*SegmentWAL) [Truncate](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L307) [¶](#SegmentWAL.Truncate)

```
func (w *[SegmentWAL](#SegmentWAL)) Truncate(mint [int64](/builtin#int64), keep func([uint64](/builtin#uint64)) [bool](/builtin#bool)) [error](/builtin#error)
```

Truncate deletes the values prior to mint and the series which the keep function does not indicate to preserve.

#### type [Series](https://github.com/prometheus/tsdb/blob/v0.10.0/querier.go#L51) [¶](#Series)

```
type Series interface {
	// Labels returns the complete set of labels identifying the series.
	Labels() [labels](/github.com/prometheus/tsdb@v0.10.0/labels).[Labels](/github.com/prometheus/tsdb@v0.10.0/labels#Labels)

	// Iterator returns a new iterator of the data of the series.
	Iterator() [SeriesIterator](#SeriesIterator)
}
```

Series exposes a single time series.

#### type [SeriesIterator](https://github.com/prometheus/tsdb/blob/v0.10.0/querier.go#L880) [¶](#SeriesIterator)

```
type SeriesIterator interface {
	// Seek advances the iterator forward to the given timestamp.
	// If there's no value exactly at t, it advances to the first value
	// after t.
	Seek(t [int64](/builtin#int64)) [bool](/builtin#bool)

	// At returns the current timestamp/value pair.
	At() (t [int64](/builtin#int64), v [float64](/builtin#float64))

	// Next advances the iterator by one.
	Next() [bool](/builtin#bool)

	// Err returns the current error.
	Err() [error](/builtin#error)
}
```

SeriesIterator iterates over the data of a time series.

#### type [SeriesSet](https://github.com/prometheus/tsdb/blob/v0.10.0/querier.go#L520) [¶](#SeriesSet)

```
type SeriesSet interface {
	Next() [bool](/builtin#bool)
	At() [Series](#Series)
	Err() [error](/builtin#error)
}
```

SeriesSet contains a set of series.

#### func [EmptySeriesSet](https://github.com/prometheus/tsdb/blob/v0.10.0/querier.go#L529) [¶](#EmptySeriesSet)

```
func EmptySeriesSet() [SeriesSet](#SeriesSet)
```

EmptySeriesSet returns a series set that's always empty.

#### func [NewMergedSeriesSet](https://github.com/prometheus/tsdb/blob/v0.10.0/querier.go#L546) [¶](#NewMergedSeriesSet)

```
func NewMergedSeriesSet(a, b [SeriesSet](#SeriesSet)) [SeriesSet](#SeriesSet)
```

NewMergedSeriesSet merges two series sets into a single series set. The input series sets must be sorted and sequential in time, i.e. if they have the same label set, the datapoints of a must be before the datapoints of b.

#### func [NewMergedVerticalSeriesSet](https://github.com/prometheus/tsdb/blob/v0.10.0/querier.go#L612) [¶](#NewMergedVerticalSeriesSet) added in v0.5.0

```
func NewMergedVerticalSeriesSet(a, b [SeriesSet](#SeriesSet)) [SeriesSet](#SeriesSet)
```

NewMergedVerticalSeriesSet merges two series sets into a single series set. The input series sets must be sorted, and the time ranges of the series can be overlapping.

#### type [Stone](https://github.com/prometheus/tsdb/blob/v0.10.0/tombstones.go#L131) [¶](#Stone)

```
type Stone struct {
	// contains filtered or unexported fields
}
```

Stone holds the information on the posting and the time range that is deleted.

#### type [StringTuples](https://github.com/prometheus/tsdb/blob/v0.10.0/block.go#L101) [¶](#StringTuples)

```
type StringTuples interface {
	// Len returns the total number of tuples in the list.
	Len() [int](/builtin#int)
	// At returns the tuple at position i.
	At(i [int](/builtin#int)) ([][string](/builtin#string), [error](/builtin#error))
}
```

StringTuples provides access to a sorted list of string tuples.

#### type [TimeRange](https://github.com/prometheus/tsdb/blob/v0.10.0/db.go#L933) [¶](#TimeRange)

```
type TimeRange struct {
	Min, Max [int64](/builtin#int64)
}
```

TimeRange specifies a minTime and maxTime range.

#### type [TombstoneReader](https://github.com/prometheus/tsdb/blob/v0.10.0/tombstones.go#L43) [¶](#TombstoneReader)

```
type TombstoneReader interface {
	// Get returns deletion intervals for the series with the given reference.
	Get(ref [uint64](/builtin#uint64)) ([Intervals](#Intervals), [error](/builtin#error))

	// Iter calls the given function for each encountered interval.
	Iter(func([uint64](/builtin#uint64), [Intervals](#Intervals)) [error](/builtin#error)) [error](/builtin#error)

	// Total returns the total count of tombstones.
	Total() [uint64](/builtin#uint64)

	// Close releases any underlying resources.
	Close() [error](/builtin#error)
}
```

TombstoneReader gives access to tombstone intervals by series reference.
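Putting the query-side interfaces above together, the sketch below drains every matching series from a Querier. It is a minimal illustration using only the documented signatures; constructing matchers (from the labels package) and obtaining the Querier itself are assumed to happen elsewhere:

```
package queryutil

import (
	"fmt"

	"github.com/prometheus/tsdb"
	"github.com/prometheus/tsdb/labels"
)

// printSeries walks Select -> SeriesSet -> Series -> SeriesIterator,
// printing every matching label set and its timestamp/value pairs.
func printSeries(q tsdb.Querier, matchers ...labels.Matcher) error {
	set, err := q.Select(matchers...)
	if err != nil {
		return err
	}
	for set.Next() {
		s := set.At()
		fmt.Println(s.Labels())

		it := s.Iterator()
		for it.Next() {
			t, v := it.At()
			fmt.Printf("  t=%d v=%g\n", t, v)
		}
		if err := it.Err(); err != nil {
			return err
		}
	}
	return set.Err()
}
```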
#### type [WAL](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L90) [¶](#WAL)

```
type WAL interface {
	Reader() [WALReader](#WALReader)
	LogSeries([][RefSeries](#RefSeries)) [error](/builtin#error)
	LogSamples([][RefSample](#RefSample)) [error](/builtin#error)
	LogDeletes([][Stone](#Stone)) [error](/builtin#error)
	Truncate(mint [int64](/builtin#int64), keep func([uint64](/builtin#uint64)) [bool](/builtin#bool)) [error](/builtin#error)
	Close() [error](/builtin#error)
}
```

WAL is a write ahead log that can log new series labels and samples. It must be completely read before new entries are logged. DEPRECATED: use the wal pkg combined with the record coders instead.

#### type [WALEntryType](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L41) [¶](#WALEntryType)

```
type WALEntryType [uint8](/builtin#uint8)
```

WALEntryType indicates what data a WAL entry contains.

```
const (
	WALEntrySymbols [WALEntryType](#WALEntryType) = 1
	WALEntrySeries  [WALEntryType](#WALEntryType) = 2
	WALEntrySamples [WALEntryType](#WALEntryType) = 3
	WALEntryDeletes [WALEntryType](#WALEntryType) = 4
)
```

Entry types in a segment file.

#### type [WALReader](https://github.com/prometheus/tsdb/blob/v0.10.0/wal.go#L100) [¶](#WALReader)

```
type WALReader interface {
	Read(
		seriesf func([][RefSeries](#RefSeries)),
		samplesf func([][RefSample](#RefSample)),
		deletesf func([][Stone](#Stone)),
	) [error](/builtin#error)
}
```

WALReader reads entries from a WAL.
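To illustrate the read-before-write contract stated above, here is a hedged sketch of opening and replaying a deprecated SegmentWAL. It assumes that OpenSegmentWAL tolerates a nil logger and registerer, which the signature alone does not guarantee:

```
package walutil

import (
	"time"

	"github.com/prometheus/tsdb"
)

// replayWAL opens a segmented WAL and consumes it completely before
// returning it, so that new entries may then be logged safely.
func replayWAL(dir string) (*tsdb.SegmentWAL, error) {
	// nil logger/registerer are assumed to be accepted here.
	w, err := tsdb.OpenSegmentWAL(dir, nil, 5*time.Second, nil)
	if err != nil {
		return nil, err
	}
	err = w.Reader().Read(
		func(series []tsdb.RefSeries) { /* restore series records */ },
		func(samples []tsdb.RefSample) { /* restore sample records */ },
		func(stones []tsdb.Stone) { /* restore tombstone records */ },
	)
	if err != nil {
		w.Close()
		return nil, err
	}
	return w, nil // safe to call LogSeries/LogSamples/LogDeletes now
}
```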
Package ‘BayesXsrc’

February 3, 2023

Version 3.0-4
Date 2023-02-03
Title Distribution of the 'BayesX' C++ Sources
Description 'BayesX' performs Bayesian inference in structured additive regression (STAR) models. The R package BayesXsrc provides the 'BayesX' command line tool for easy installation. A convenient R interface is provided in package R2BayesX.
Depends R (>= 2.8.0)
Suggests R2BayesX
SystemRequirements GNU make, C++14
License GPL-2 | GPL-3
URL https://www.uni-goettingen.de/de/bayesx/550513.html
NeedsCompilation yes
Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-2160-9803>), <NAME> [aut], <NAME> [aut], <NAME> [aut], <NAME> [aut] (<https://orcid.org/0000-0003-0918-3766>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-02-03 13:02:31 UTC

run.bayesx: Run BayesX

Description

Run BayesX program files from R.

Usage

run.bayesx(prg = NULL, verbose = TRUE, ...)

Arguments

prg: a file path to a BayesX program file. If set to NULL, BayesX will start in batch mode.
verbose: should output be printed to the R console during the runtime of BayesX?
...: further arguments to be passed to system.

Details

The function uses system to run BayesX within an R session.

Value

If a prg file is provided, the function returns a list containing information on whether BayesX was successfully launched and how long the process was running.

Author(s)

<NAME>, <NAME>, <NAME>, <NAME>, <NAME>.

Examples

## Not run:
## create a temporary directory for this example
dir <- tempdir()
prg <- file.path(dir, "demo.prg")

## generate some data
set.seed(111)
n <- 200

## regressor
dat <- data.frame(x = runif(n, -3, 3))

## response
dat$y <- with(dat, 1.5 + sin(x) + rnorm(n, sd = 0.6))

## write data to dir
write.table(dat, file.path(dir, "data.raw"), quote = FALSE, row.names = FALSE)

## create the .prg file
writeLines("
bayesreg b
dataset d
d.infile using data.raw
b.outfile = mcmc
b.regress y = x(psplinerw2,nrknots=20,degree=3), family=gaussian predict using d
b.getsample", prg)

## run the .prg file from R
run.bayesx(prg)
## End(Not run)
[@aws-sdk/client-application-insights](#aws-sdkclient-application-insights)
===

[Description](#description)
---

AWS SDK for JavaScript ApplicationInsights Client for Node.js, Browser and React Native.

Amazon CloudWatch Application Insights is a service that helps you detect common problems with your applications. It enables you to pinpoint the source of issues in your applications (built with technologies such as Microsoft IIS, .NET, and Microsoft SQL Server), by providing key insights into detected problems.

After you onboard your application, CloudWatch Application Insights identifies, recommends, and sets up metrics and logs. It continuously analyzes and correlates your metrics and logs for unusual behavior to surface actionable problems with your application. For example, if your application is slow and unresponsive and leading to HTTP 500 errors in your Application Load Balancer (ALB), Application Insights informs you that a memory pressure problem with your SQL Server database is occurring. It bases this analysis on impactful metrics and log errors.

[Installing](#installing)
---

To install this package, simply type add or install @aws-sdk/client-application-insights using your favorite package manager:

* `npm install @aws-sdk/client-application-insights`
* `yarn add @aws-sdk/client-application-insights`
* `pnpm add @aws-sdk/client-application-insights`

[Getting Started](#getting-started)
---

### [Import](#import)

The AWS SDK is modularized by clients and commands. To send a request, you only need to import the `ApplicationInsightsClient` and the commands you need, for example `ListApplicationsCommand`:

```
// ES5 example
const { ApplicationInsightsClient, ListApplicationsCommand } = require("@aws-sdk/client-application-insights");
```

```
// ES6+ example
import { ApplicationInsightsClient, ListApplicationsCommand } from "@aws-sdk/client-application-insights";
```

### [Usage](#usage)

To send a request, you:

* Initiate client with configuration (e.g. credentials, region).
* Initiate command with input parameters.
* Call `send` operation on client with command object as input.
* If you are using a custom http handler, you may call `destroy()` to close open connections.

```
// a client can be shared by different commands.
const client = new ApplicationInsightsClient({ region: "REGION" });

const params = { /** input parameters */ };
const command = new ListApplicationsCommand(params);
```

#### [Async/await](#asyncawait)

We recommend using the [await](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await) operator to wait for the promise returned by the send operation as follows:

```
// async/await.
try {
  const data = await client.send(command);
  // process data.
} catch (error) {
  // error handling.
} finally {
  // finally.
}
```

Async-await is clean, concise, intuitive, easy to debug and has better error handling as compared to using Promise chains or callbacks.

#### [Promises](#promises)

You can also use [Promise chaining](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises#chaining) to execute the send operation.

```
client.send(command).then(
  (data) => {
    // process data.
  },
  (error) => {
    // error handling.
  }
);
```

Promises can also be called using `.catch()` and `.finally()` as follows:

```
client
  .send(command)
  .then((data) => {
    // process data.
  })
  .catch((error) => {
    // error handling.
  })
  .finally(() => {
    // finally.
  });
```

#### [Callbacks](#callbacks)

We do not recommend using callbacks because of [callback hell](http://callbackhell.com/), but they are supported by the send operation.

```
// callbacks.
client.send(command, (err, data) => {
  // process err and data.
});
```

#### [v2 compatible style](#v2-compatible-style)

The client can also send requests using the v2 compatible style. However, it results in a bigger bundle size and may be dropped in the next major version. More details are in the blog post on [modular packages in AWS SDK for JavaScript](https://aws.amazon.com/blogs/developer/modular-packages-in-aws-sdk-for-javascript/).

```
import * as AWS from "@aws-sdk/client-application-insights";
const client = new AWS.ApplicationInsights({ region: "REGION" });

// async/await.
try {
  const data = await client.listApplications(params);
  // process data.
} catch (error) {
  // error handling.
}

// Promises.
client
  .listApplications(params)
  .then((data) => {
    // process data.
  })
  .catch((error) => {
    // error handling.
  });

// callbacks.
client.listApplications(params, (err, data) => {
  // process err and data.
});
```

### [Troubleshooting](#troubleshooting)

When the service returns an exception, the error will include the exception information, as well as response metadata (e.g. request id).

```
try {
  const data = await client.send(command);
  // process data.
} catch (error) {
  const { requestId, cfId, extendedRequestId } = error.$metadata;
  console.log({ requestId, cfId, extendedRequestId });
  /**
   * The keys within exceptions are also parsed.
   * You can access them by specifying exception names:
   * if (error.name === 'SomeServiceException') {
   *   const value = error.specialKeyInException;
   * }
   */
}
```

[Getting Help](#getting-help)
---

Please use these community resources for getting help. We use the GitHub issues for tracking bugs and feature requests, but have limited bandwidth to address them.

* Visit [Developer Guide](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/welcome.html) or [API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/index.html).
* Check out the blog posts tagged with [`aws-sdk-js`](https://aws.amazon.com/blogs/developer/tag/aws-sdk-js/) on AWS Developer Blog.
* Ask a question on [StackOverflow](https://stackoverflow.com/questions/tagged/aws-sdk-js) and tag it with `aws-sdk-js`.
* Join the AWS JavaScript community on [gitter](https://gitter.im/aws/aws-sdk-js-v3).
* If it turns out that you may have found a bug, please [open an issue](https://github.com/aws/aws-sdk-js-v3/issues/new/choose).

To test your universal JavaScript code in Node.js, browser and react-native environments, visit our [code samples repo](https://github.com/aws-samples/aws-sdk-js-tests).

[Contributing](#contributing)
---

This client code is generated automatically. Any modifications will be overwritten the next time the `@aws-sdk/client-application-insights` package is updated. To contribute to the client you can check our [generate clients scripts](https://github.com/aws/aws-sdk-js-v3/tree/main/scripts/generate-clients).

[License](#license)
---

This SDK is distributed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0), see LICENSE for more information.
[Client Commands (Operations List)](#client-commands-operations-list)
---

* AddWorkload: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/addworkloadcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/addworkloadcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/addworkloadcommandoutput.html)
* CreateApplication: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/createapplicationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/createapplicationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/createapplicationcommandoutput.html)
* CreateComponent: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/createcomponentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/createcomponentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/createcomponentcommandoutput.html)
* CreateLogPattern: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/createlogpatterncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/createlogpatterncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/createlogpatterncommandoutput.html)
* DeleteApplication: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/deleteapplicationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/deleteapplicationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/deleteapplicationcommandoutput.html)
* DeleteComponent: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/deletecomponentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/deletecomponentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/deletecomponentcommandoutput.html)
* DeleteLogPattern: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/deletelogpatterncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/deletelogpatterncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/deletelogpatterncommandoutput.html)
* DescribeApplication: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/describeapplicationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describeapplicationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describeapplicationcommandoutput.html)
* DescribeComponent: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/describecomponentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describecomponentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describecomponentcommandoutput.html)
* DescribeComponentConfiguration: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/describecomponentconfigurationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describecomponentconfigurationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describecomponentconfigurationcommandoutput.html)
* DescribeComponentConfigurationRecommendation: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/describecomponentconfigurationrecommendationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describecomponentconfigurationrecommendationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describecomponentconfigurationrecommendationcommandoutput.html)
* DescribeLogPattern: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/describelogpatterncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describelogpatterncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describelogpatterncommandoutput.html)
* DescribeObservation: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/describeobservationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describeobservationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describeobservationcommandoutput.html)
* DescribeProblem: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/describeproblemcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describeproblemcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describeproblemcommandoutput.html)
* DescribeProblemObservations: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/describeproblemobservationscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describeproblemobservationscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describeproblemobservationscommandoutput.html)
* DescribeWorkload: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/describeworkloadcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describeworkloadcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/describeworkloadcommandoutput.html)
* ListApplications: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/listapplicationscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listapplicationscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listapplicationscommandoutput.html)
* ListComponents: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/listcomponentscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listcomponentscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listcomponentscommandoutput.html)
* ListConfigurationHistory: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/listconfigurationhistorycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listconfigurationhistorycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listconfigurationhistorycommandoutput.html)
* ListLogPatterns: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/listlogpatternscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listlogpatternscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listlogpatternscommandoutput.html)
* ListLogPatternSets: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/listlogpatternsetscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listlogpatternsetscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listlogpatternsetscommandoutput.html)
* ListProblems: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/listproblemscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listproblemscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listproblemscommandoutput.html)
* ListTagsForResource: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/listtagsforresourcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listtagsforresourcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listtagsforresourcecommandoutput.html)
* ListWorkloads: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/listworkloadscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listworkloadscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/listworkloadscommandoutput.html)
* RemoveWorkload: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/removeworkloadcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/removeworkloadcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/removeworkloadcommandoutput.html)
* TagResource: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/tagresourcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/tagresourcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/tagresourcecommandoutput.html)
* UntagResource: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/untagresourcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/untagresourcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/untagresourcecommandoutput.html)
* UpdateApplication: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/updateapplicationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/updateapplicationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/updateapplicationcommandoutput.html)
* UpdateComponent: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/updatecomponentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/updatecomponentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/updatecomponentcommandoutput.html)
* UpdateComponentConfiguration: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/updatecomponentconfigurationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/updatecomponentconfigurationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/updatecomponentconfigurationcommandoutput.html)
* UpdateLogPattern: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/updatelogpatterncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/updatelogpatterncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/updatelogpatterncommandoutput.html)
* UpdateProblem: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/updateproblemcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/updateproblemcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/updateproblemcommandoutput.html)
* UpdateWorkload: [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/classes/updateworkloadcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/updateworkloadcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-application-insights/interfaces/updateworkloadcommandoutput.html)
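To tie the pieces above together, here is a hedged end-to-end sketch that pages through ListApplications. The `ApplicationInfoList` and `NextToken` response fields are assumptions taken from the service API shape, not something this README documents:

```
import {
  ApplicationInsightsClient,
  ListApplicationsCommand,
} from "@aws-sdk/client-application-insights";

const client = new ApplicationInsightsClient({ region: "REGION" });

// Collect every application across pages by following NextToken.
const listAllApplications = async () => {
  const applications = [];
  let NextToken;
  do {
    const page = await client.send(new ListApplicationsCommand({ NextToken }));
    applications.push(...(page.ApplicationInfoList ?? []));
    NextToken = page.NextToken;
  } while (NextToken);
  return applications;
};
```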
*gatsby-tinacms-json*
===

A Gatsby/Tina plugin for **editing JSON files stored in git**.

Installation
---

```
yarn add gatsby-plugin-tinacms gatsby-tinacms-git gatsby-tinacms-json
```

Setup
---

Include `gatsby-plugin-tinacms`, `gatsby-tinacms-git`, and `gatsby-tinacms-json` in your config:

**gatsby-config.js**

```
module.exports = {
  // ...
  plugins: [
    // ...
    {
      resolve: 'gatsby-plugin-tinacms',
      options: {
        plugins: ['gatsby-tinacms-git', 'gatsby-tinacms-json'],
      },
    },
  ],
}
```

Creating JSON Forms
---

There are two approaches to registering JSON forms with Tina. The approach you choose depends on whether the React template is a class or a function.

1. [`useJsonForm`](#useJsonForm): A [Hook](https://reactjs.org/docs/hooks-intro.html) used when the template is a function.
2. [`JsonForm`](#JsonForm): A [Render Props](https://reactjs.org/docs/render-props.html#use-render-props-for-cross-cutting-concerns) component to use when the template is a class component.

#### Note: required query data

In order for the JSON forms to work, you must include the following fields in your `dataJson` graphql query:

* `rawJson`
* `fileRelativePath`

An example `dataQuery` in your template might look like this:

```
query DataQuery($slug: String!) {
  dataJson(fields: { slug: { eq: $slug } }) {
    id
    firstName
    lastName
    rawJson
    fileRelativePath
  }
}
```

Additionally, any fields that are **not** queried will be deleted when saving content via the CMS.

### useJsonForm

This is a [React Hook](https://reactjs.org/docs/hooks-intro.html) for creating JSON forms. This is the recommended approach if your template is a Function Component.

In order to use a form you must register it with the CMS. There are two main approaches to register forms in Tina: page forms and screen plugins. Please refer to the [form concepts](https://github.com/tinacms/tinacms/blob/HEAD/packages/gatsby-tinacms-json/docs/forms) doc to get clarity on the differences.

**Interface**

```
useJsonForm(data): [values, form]
```

**Arguments**

* `data`: The data returned from a Gatsby `dataJson` query.

**Return**

* `[values, form]`
  + `values`: The current values to be displayed. This has the same shape as the `data` argument.
  + `form`: A reference to the [CMS Form](https://github.com/tinacms/tinacms/blob/HEAD/packages/gatsby-tinacms-json/docs/forms) object. The `form` is rarely needed in the template.

#### Example

**src/templates/blog-post.js**

```
import { usePlugin } from 'tinacms'
import { useJsonForm } from 'gatsby-tinacms-json'

function BlogPostTemplate(props) {
  // Create the form
  const [data, form] = useJsonForm(props.data.dataJson)

  // Register it with the CMS
  usePlugin(form)

  return <h1>{data.firstName}</h1>
}
```

### JsonForm

`JsonForm` is a [Render Props](https://reactjs.org/docs/render-props.html#use-render-props-for-cross-cutting-concerns) based component for accessing [CMS Forms](https://github.com/tinacms/tinacms/blob/HEAD/packages/gatsby-tinacms-json/docs/forms).

This Component is a thin wrapper of `useJsonForm` and `usePlugin`. Since [React Hooks](https://reactjs.org/docs/hooks-intro.html) are only available within Function Components you will need to use `JsonForm` if your template is a Class Component.

**Props**

* `data`: The data returned from a Gatsby `dataJson` query.
* `render({ data, form }): JSX.Element`: A function that returns JSX elements
  + `data`: The current values to be displayed. This has the same shape as the data in the `Json` prop.
  + `form`: A reference to the [CMS Form](https://github.com/tinacms/tinacms/blob/HEAD/packages/gatsby-tinacms-json/docs/forms) object. The `form` is rarely needed in the template.

**src/templates/blog-post.js**

```
import { JsonForm } from 'gatsby-tinacms-json'

class DataTemplate extends React.Component {
  render() {
    return (
      <JsonForm
        data={this.props.data.dataJson}
        render={({ data }) => {
          return <h1>{data.firstName}</h1>
        }}
      />
    )
  }
}
```

Content Creator
---

`JsonCreatorPlugin`: constructs a `content-creator` plugin for JSON files.

```
interface JsonCreatorPlugin {
  label: string
  fields: Field[]
  filename(form: any): Promise<string>
  data?(form: any): Promise<any>
}
```

**Example**

```
import { JsonCreatorPlugin } from 'gatsby-tinacms-json'

const CreatePostPlugin = new JsonCreatorPlugin({
  label: 'New JSON File',
  filename: form => {
    return form.filename
  },
  fields: [
    {
      name: 'filename',
      component: 'text',
      label: 'Filename',
      placeholder: 'content/data/puppies.json',
      description: 'The full path to the new JSON file, relative to the repository root.',
    },
  ],
})
```

Readme
---

### Keywords

* tinacms
* cms
* gastby
* json
* react
github.com/goproxyio/goproxy/v2
README [¶](#section-readme)
---

### GOPROXY

[![CircleCI](https://circleci.com/gh/goproxyio/goproxy.svg?style=svg)](https://circleci.com/gh/goproxyio/goproxy) [![Go Report Card](https://goreportcard.com/badge/github.com/goproxyio/goproxy)](https://goreportcard.com/report/github.com/goproxyio/goproxy) [![GoDoc](https://godoc.org/github.com/goproxyio/goproxy?status.svg)](https://godoc.org/github.com/goproxyio/goproxy)

A global proxy for Go modules. See: <https://goproxy.io>

#### Requirements

It invokes the local go command to answer requests. The default cacheDir is GOPATH; you can set it up yourself according to the situation.

#### Build

```
git clone https://github.com/goproxyio/goproxy.git
cd goproxy
make
```

#### Getting started

##### Proxy mode

```
./bin/goproxy -listen=0.0.0.0:80 -cacheDir=/tmp/test
```

If you run `go get -v pkg` on the proxy machine, you should set a new GOPATH that is different from the proxy's GOPATH, or the fetch may deadlock. See the file test/get_test.sh.

##### Router mode

```
./bin/goproxy -listen=0.0.0.0:80 -proxy https://goproxy.io
```

Use the -proxy flag to switch to "Router mode", which implements a route filter for routing private or public modules.

```
                direct
                +---> private repo
                |
  match pattern |
go get +---> goproxy +---> goproxy.io +---> golang.org/x/net
         router mode        proxy mode
```

In Router mode, use the -exclude flag to set a pattern; requests whose module path matches the pattern go direct to the repo. Patterns are matched against the full path specified, not only the host component.

```
./bin/goproxy -listen=0.0.0.0:80 -cacheDir=/tmp/test -proxy https://goproxy.io -exclude "*.corp.example.com,rsc.io/private"
```

#### Use the Docker image

```
docker run -d -p80:8081 goproxy/goproxy
```

Use the -v flag to persist the proxy module data (change ***cacheDir*** to your own dir):

```
docker run -d -p80:8081 -v cacheDir:/go goproxy/goproxy
```

#### Docker Compose

```
docker-compose up
```

#### Appendix

1. Set `export GOPROXY=http://localhost` to enable your goproxy.
2. Set `export GOPROXY=direct` to disable it.

Documentation [¶](#section-documentation)
---

### Overview [¶](#pkg-overview)

Copyright 2019 The Go Authors. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.

Usage:

```
goproxy [-listen [host]:port] [-cacheDir /tmp]
```

goproxy serves the Go module proxy HTTP protocol at the given address (default 0.0.0.0:8081). It invokes the local go command to answer requests and therefore reuses the current GOPATH's module download cache and configuration (GOPROXY, GOSUMDB, and so on).

While the proxy is running, setting GOPROXY=<http://host:port> will instruct the go command to use it. Note that the module proxy cannot share a GOPATH with its own clients or else fetches will deadlock. (The client will lock the entry as "being downloaded" before sending the request to the proxy, which will then wait for the apparently-in-progress download to finish.)
Package ‘sentopics’

May 28, 2023

Type Package
Title Tools for Joint Sentiment and Topic Analysis of Textual Data
Version 0.7.2
Date 2023-05-27
Maintainer <NAME> <<EMAIL>>
Description A framework that joins topic modeling and sentiment analysis of textual data. The package implements a fast Gibbs sampling estimation of Latent Dirichlet Allocation (Griffiths and Steyvers (2004) <doi:10.1073/pnas.0307752101>) and Joint Sentiment/Topic Model (Lin, He, Everson and Ruger (2012) <doi:10.1109/TKDE.2011.48>). It offers a variety of helpers and visualizations to analyze the result of topic modeling. The framework also allows enriching topic models with dates and externally computed sentiment measures. A flexible aggregation scheme enables the creation of time series of sentiment or topical proportions from the enriched topic models. Moreover, a novel method jointly aggregates topic proportions and sentiment measures to derive time series of topical sentiment.
License GPL (>= 3)
BugReports https://github.com/odelmarcelle/sentopics/issues
URL https://github.com/odelmarcelle/sentopics
Encoding UTF-8
Depends R (>= 3.5.0)
Imports Rcpp (>= 1.0.4.6), methods, quanteda (>= 3.2.0), data.table (>= 1.13.6), RcppHungarian
Suggests ggplot2, ggridges, plotly, RColorBrewer, xts, zoo, future, future.apply, progressr, progress, testthat, covr, stm, lda, topicmodels, seededlda, keyATM, LDAvis, servr, textcat, stringr, sentometrics, spacyr, knitr, rmarkdown, webshot
LinkingTo Rcpp, RcppArmadillo, RcppProgress
RcppModules model_module
RoxygenNote 7.2.3
LazyData true
VignetteBuilder knitr
NeedsCompilation yes
Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-4347-070X>), <NAME> [ctb] (<https://orcid.org/0000-0001-9533-1870>), <NAME> [cph] (Original JST implementation), <NAME> [cph] (Original JST implementation), <NAME> [cph] (Original JST implementation), <NAME> [cph] (Implementation of reorder_within()), <NAME> [cph] (Implementation of reorder_within(), <https://orcid.org/0000-0002-3671-836X>)
Repository CRAN
Date/Publication 2023-05-28 09:50:02 UTC

sentopics-package: Tools for joining sentiment and topic analysis (sentopics)

Description

sentopics provides functions to easily estimate a range of topic models and process their output. In particular, it facilitates the integration of topic analysis with a time dimension through time-series generating functions. In addition, sentopics interacts with sentiment analysis to compute the sentiment conveyed by topics. Finally, the package implements a number of visualizations that help interpret the results of topic models.

Usage

Please refer to the vignettes for a comprehensive introduction to the package functions.
• Basic usage: Introduction to topic model estimation with sentopics
• Topical time series: Integrate topic analysis with sentiment analysis along a time dimension

For further details, you may browse the package documentation.

Note

Please cite the package in publications. Use citation("sentopics").

Author(s)

Maintainer: <NAME> <<EMAIL>> (ORCID)

Other contributors:
• <NAME> <<EMAIL>> (ORCID) [contributor]
• <NAME> <<EMAIL>> (Original JST implementation) [copyright holder]
• <NAME> <<EMAIL>> (Original JST implementation) [copyright holder]
• <NAME> (Original JST implementation) [copyright holder]
• <NAME> <<EMAIL>> (Implementation of reorder_within()) [copyright holder]
• <NAME> <<EMAIL>> (ORCID) (Implementation of reorder_within()) [copyright holder]

See Also

Useful links:
• https://github.com/odelmarcelle/sentopics
• Report bugs at https://github.com/odelmarcelle/sentopics/issues

as.LDA: Conversions from other packages to LDA

Description

These functions convert estimated models from other topic modeling packages to the format used by sentopics.

Usage

as.LDA(x, ...)

## S3 method for class 'STM'
as.LDA(x, docs, ...)

## S3 method for class 'LDA_Gibbs'
as.LDA(x, docs, ...)

## S3 method for class 'LDA_VEM'
as.LDA(x, docs, ...)

## S3 method for class 'textmodel_lda'
as.LDA(x, ...)

as.LDA_lda(list, docs, alpha, eta)

## S3 method for class 'keyATM_output'
as.LDA(x, docs, ...)

Arguments

x: an estimated topic model from stm, topicmodels or seededlda.
...: arguments passed to other methods.
docs: for some objects, the documents used to initialize the model.
list: the list containing an estimated model from lda.
alpha: for lda models, the document-topic mixture hyperparameter. If missing, the hyperparameter will be set to 50/K.
eta: for lda models, the topic-word mixture hyperparameter. Other packages refer to this hyperparameter as beta. If missing, the hyperparameter will be set to 0.01.

Details

Some models do not store the topic assignment of each word (for example, estimated through variational inference). For these, the conversion is limited and some functionalities of sentopics will be disabled. The list of affected functions is subject to change and currently includes grow(), mergeTopics() and rJST.LDA().

Since models from the lda package are simply lists of outputs, the function as.LDA_lda() is not related to the other methods and should be applied directly on lists containing a model.

Value

An S3 list of class LDA, as if it was created and estimated using LDA() and grow().
Examples

## stm
library("stm")
stm <- stm(poliblog5k.docs, poliblog5k.voc, K=25, prevalence=~rating, data=poliblog5k.meta, max.em.its=2, init.type="Spectral")
as.LDA(stm, docs = poliblog5k.docs)

## lda
library("lda")
data("cora.documents")
data("cora.vocab")
lda <- lda.collapsed.gibbs.sampler(cora.documents, 5, ## Num clusters
  cora.vocab, 100, ## Num iterations
  0.1, 0.1)
LDA <- as.LDA_lda(lda, docs = cora.documents, alpha = .1, eta = .1)

## topicmodels
data("AssociatedPress", package = "topicmodels")
lda <- topicmodels::LDA(AssociatedPress[1:20,], control = list(alpha = 0.1), k = 2)
LDA <- as.LDA(lda, docs = AssociatedPress[1:20,])

## seededlda
library("seededlda")
lda <- textmodel_lda(dfm(ECB_press_conferences_tokens), k = 6, max_iter = 100)
LDA <- as.LDA(lda)

## keyATM
library("keyATM")
data(keyATM_data_bills, package = "keyATM")
keyATM_docs <- keyATM_read(keyATM_data_bills$doc_dfm)
out <- keyATM(docs = keyATM_docs, model = "base", no_keyword_topics = 5, keywords = keyATM_data_bills$keywords)
LDA <- as.LDA(out, docs = keyATM_docs)

as.tokens.dfm: Convert back a dfm to a tokens object

Description

Convert back a dfm to a tokens object.

Usage

## S3 method for class 'dfm'
as.tokens(
  x,
  concatenator = NULL,
  tokens = NULL,
  ignore_list = NULL,
  case_insensitive = FALSE,
  padding = TRUE,
  ...
)

Arguments

x: quanteda::dfm to be coerced
concatenator: only used for consistency with the generic
tokens: optionally, the tokens from which the dfm was created. Providing the initial tokens will ensure that the word order will be respected in the coerced object.
ignore_list: a character vector of words that should not be removed from the initial tokens object. Useful to avoid removing some lexicon words following the usage of quanteda::dfm_trim().
case_insensitive: only used when the tokens argument is provided. Defaults to FALSE. This function removes words in the initial tokens based on the remaining features in the dfm object. This check is case-sensitive by default, and can be relaxed by setting this argument to TRUE.
padding: if TRUE, leaves an empty string where the removed tokens previously existed. The use of padding is encouraged to improve the behavior of the coherence metrics (see coherence()) that rely on word positions.
...: unused

Value

a quanteda quanteda::tokens object.

See Also

quanteda::as.tokens() quanteda::dfm()

Examples

library("quanteda")
dfm <- dfm(ECB_press_conferences_tokens, tolower = FALSE)
dfm <- dfm_trim(dfm, min_termfreq = 200)
as.tokens(dfm)
as.tokens(dfm, tokens = ECB_press_conferences_tokens)
as.tokens(dfm, tokens = ECB_press_conferences_tokens, padding = FALSE)

chainsDistances: Distances between topic models (chains)

Description

Computes the distance between different estimates of a topic model. Since the estimation of a topic model is random, the results may largely differ as the process is repeated. This function allows one to compute the distance between distinct realizations of the estimation process. Estimates are referred to as chains.

Usage

chainsDistances(
  x,
  method = c("euclidean", "hellinger", "cosine", "minMax", "naiveEuclidean", "invariantEuclidean"),
  ...
)

Arguments

x: a valid multiChains object, obtained through the estimation of a topic model using grow() and the argument nChains greater than 1.
method: the method used to measure the distance between chains.
...: further arguments passed to internal distance functions.

Details

The method argument determines how distances are computed.

• euclidean finds the pairs of topics that minimize the total Euclidean distance and returns that distance.
• hellinger does the same but based on the Hellinger distance.
• cosine does the same but based on the Cosine distance.
• minMax computes the maximum distance among the best pairs of distances. Inspired by the minimum-matching distance from Tang et al. (2014).
• naiveEuclidean computes the Euclidean distance without searching for the best pairs of topics.
• invariantEuclidean computes the best pairs of topics for all allowed permutations of topic indices. For JST and reversed-JST models, the two-level hierarchy of document-sentiment-topic leads some permutations of indices to represent a drastically different outcome. This setting restricts the set of permutations to the ones that do not change the interpretation of the model. Equivalent to euclidean for LDA models.

Value

A matrix of distances between the elements of x.

Author(s)

<NAME>

References

<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2014). Understanding the Limiting Factors of Topic Modeling via Posterior Contraction Analysis. In Proceedings of the 31st International Conference on Machine Learning, 32, 190–198.

See Also

plot.multiChains() chainsScores()

Examples

model <- LDA(ECB_press_conferences_tokens)
model <- grow(model, 10, nChains = 5)
chainsDistances(model)

chainsScores: Compute scores of topic models (chains)

Description

Compute various scores (log likelihood, coherence) for a list of topic models.

Usage

chainsScores(x, window = 110, nWords = 10)

Arguments

x: a valid multiChains object, obtained through the estimation of a topic model using grow() and the argument nChains greater than 1.
window: optional. If NULL, use the default window for each coherence metric (10 for C_NPMI and 110 for C_V). It is possible to override these default windows by providing an integer or "boolean" to this argument, determining a new window size for all measures.
nWords: the number of words used to compute coherence. See coherence().

Value

A data.table with some statistics about each chain. For the coherence metrics, the value shown is the mean coherence across all topics of a chain.

Parallelism

When nChains > 1, the function can take advantage of future.apply::future_lapply (if installed) to spread the computation over multiple processes. This requires the specification of a parallel strategy using future::plan(). See the examples below.

See Also

chainsDistances() coherence()

Examples

model <- LDA(ECB_press_conferences_tokens[1:10])
model <- grow(model, 10, nChains = 5)
chainsScores(model, window = 5)
chainsScores(model, window = "boolean")

# -- Parallel computation --
require(future.apply)
future::plan("multisession", workers = 2) # Set up 2 workers
chainsScores(model, window = "boolean")
future::plan("sequential") # Shut down workers

coherence: Coherence of estimated topics

Description

Computes various coherence based metrics for topic models. It assesses the quality of estimated topics based on co-occurrences of words. For best results, consider cleaning the initial tokens object with padding = TRUE.

Usage

coherence(
  x,
  nWords = 10,
  method = c("C_NPMI", "C_V"),
  window = NULL,
  NPMIs = NULL
)

Arguments

x: a model created from the LDA(), JST() or rJST() function and estimated with grow()
nWords: the number of words in each topic used for evaluation.
method: the coherence method used.
window: optional. If NULL, use the default window for each coherence metric (10 for C_NPMI and 110 for C_V). It is possible to override these default windows by providing an integer or "boolean" to this argument, determining a new window size for all measures.
This has no effect if the NPMIs argument is also provided.
NPMIs: optional NPMI matrix. If provided, skips the computation of NPMI between words, substantially decreasing computing time.

Details

Currently, only C_NPMI and C_V are documented. The implementation follows Röder et al. (2015). For C_NPMI, the sliding window is 10, whereas it is 110 for C_V.

Value

A vector or matrix containing the coherence score of each topic.

Author(s)

<NAME>

References

<NAME>., <NAME>., & <NAME>. (2015). Exploring the Space of Topic Coherence Measures. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, 399–408.

compute_PicaultRenault_scores: Compute scores using the Picault-Renault lexicon

Description

Computes Monetary Policy and Economic Condition scores using the Picault-Renault lexicon for central bank communication.

Usage

compute_PicaultRenault_scores(x, min_ngram = 2, return_dfm = FALSE)

Arguments

x: a quanteda::corpus object.
min_ngram: the minimum length of n-grams considered in the computation
return_dfm: if TRUE, returns the scaled word-per-document score under two dfms, one for the Monetary Policy and one for the Economic Condition categories. If FALSE, returns the sum of all word scores per document.

Details

The computation is done on a per-document basis, such that each document is scored with a value between -1 and 1. This is relevant to the computation of the denominator of the score.

It is possible to compute the score for paragraphs and sentences for a quanteda::corpus segmented using quanteda::corpus_reshape. Segmenting a corpus using quanteda's helpers retains track of which document each paragraph/sentence belongs to. However, in that case, it is possible that paragraphs or sentences are scored outside the (-1,1) interval. In any case, the paragraph/sentence scores averaged over documents will be contained in the (-1,1) interval.

Value

A matrix with two columns, indicating respectively the MP (Monetary Policy) and EC (Economic Condition) scores of each document.

References

<NAME>. & <NAME>. (2017). Words are not all created equal: A new measure of ECB communication. Journal of International Money and Finance, 79, 136–156.

See Also

PicaultRenault

Examples

# on documents
docs <- quanteda::corpus_reshape(ECB_press_conferences, "documents")
compute_PicaultRenault_scores(docs)

# on paragraphs
compute_PicaultRenault_scores(ECB_press_conferences)

ECB_press_conferences: Corpus of press conferences from the European Central Bank

Description

A corpus of 260 ECB press conferences, split into 4224 paragraphs. The corpus contains a number of docvars indicating the date of the press conference and a measured sentiment based on the Loughran-McDonald lexicon.

Usage

ECB_press_conferences

Format

A quanteda::corpus object.

Source

https://www.ecb.europa.eu/press/key/date/html/index.en.html.

See Also

ECB_press_conferences_tokens

Examples

docvars(ECB_press_conferences)

ECB_press_conferences_tokens: Tokenized press conferences

Description

The pre-processed and tokenized version of the ECB_press_conferences corpus of press conferences. The processing involved the following steps:

• Subset of paragraphs shorter than 10 words
• Removal of stop words
• Part-of-speech tagging, following which only nouns, proper nouns and adjectives were retained.
• Detection and merging of frequent compound words
• Frequency-based cleaning of rare and very common words

Usage

ECB_press_conferences_tokens

Format

A quanteda::tokens object.

Source

https://www.ecb.europa.eu/press/key/date/html/index.en.html.
See Also

ECB_press_conferences

Examples

LDA(ECB_press_conferences_tokens)

get_ECB_press_conferences: Download press conferences from the European Central Bank

Description

This helper function automatically retrieves the full data set of press conferences made available by the ECB. It implements a number of pre-processing steps used to remove the Q&A section from the text.

Usage

get_ECB_press_conferences(
  years = 1998:2021,
  language = "en",
  data.table = TRUE
)

Arguments

years: the years for which press conferences should be retrieved.
language: the language in which press conferences should be retrieved.
data.table: if TRUE, returns a data.table. Otherwise, returns a list in which each element is a press conference.

Value

Depending on the arguments, returns either a data.table or a list containing the press conferences of the ECB.

get_ECB_speeches: Download and pre-process speeches from the European Central Bank

Description

This helper function automatically retrieves the full data set of speeches made available by the ECB. In addition, it implements a number of pre-processing steps that may be turned on or off as needed.

Usage

get_ECB_speeches(
  filter_english = TRUE,
  clean_footnotes = TRUE,
  compute_sentiment = TRUE,
  tokenize_w_POS = FALSE
)

Arguments

filter_english: if TRUE, attempts to select English speeches only using textcat::textcat().
clean_footnotes: if TRUE, attempts to clean footnotes from speech texts using some regex patterns.
compute_sentiment: if TRUE, computes the sentiment of each speech using sentometrics::compute_sentiment() with the Loughran & McDonald lexicon.
tokenize_w_POS: if TRUE, tokenizes and applies Part-Of-Speech tagging with spacyr::spacy_parse(). Nouns, adjectives and proper nouns are then extracted from the parsed speeches to form a tokens object.

Value

Depending on the arguments, returns either a data.frame or a quanteda::tokens object containing speeches of the ECB.

grow: Estimate a topic model

Description

This function is used to estimate a topic model created by LDA(), JST() or rJST(). In essence, this function iterates a Gibbs sampler MCMC.

Usage

grow(
  x,
  iterations = 100,
  nChains = 1,
  displayProgress = TRUE,
  computeLikelihood = TRUE,
  seed = NULL
)

Arguments

x: a model created with the LDA(), JST() or rJST() function.
iterations: the number of iterations by which the model should be grown.
nChains: if set above 1, the model will be grown into multiple chains, from various starting positions. Latent variables will be re-initialized if x has not been grown before.
displayProgress: if TRUE, a progress bar will be displayed indicating the progress of the computation. When nChains is greater than 1, this requires the package progressr and optionally progress.
computeLikelihood: if set to FALSE, does not compute the likelihood at each iteration. This can slightly decrease the computing time.
seed: for reproducibility, a seed can be provided.

Value

a sentopicmodel of the relevant model class if nChains is unspecified or equal to 1. A multiChains object if nChains is greater than 1.

Parallelism

When nChains > 1, the function can take advantage of future.apply::future_lapply (if installed) to spread the computation over multiple processes. This requires the specification of a parallel strategy using future::plan(). See the examples below.
See Also

LDA(), JST(), rJST(), reset()

Examples

model <- rJST(ECB_press_conferences_tokens)
grow(model, 10)

# -- Parallel computation --
require(future.apply)
future::plan("multisession", workers = 2) # Set up 2 workers
grow(model, 10, nChains = 2)
future::plan("sequential") # Shut down workers

JST: Create a Joint Sentiment/Topic model

Description

This function initializes a Joint Sentiment/Topic model.

Usage

JST(
  x,
  lexicon = NULL,
  S = 3,
  K = 5,
  gamma = 1,
  alpha = 5,
  beta = 0.01,
  gammaCycle = 0,
  alphaCycle = 0
)

Arguments

x: tokens object containing the texts. A coercion will be attempted if x is not a tokens.
lexicon: a quanteda dictionary with positive and negative categories.
S: the number of sentiments.
K: the number of topics.
gamma: the hyperparameter of the sentiment-document distribution.
alpha: the hyperparameter of the topic-document distribution.
beta: the hyperparameter of the vocabulary distribution.
gammaCycle: integer specifying the cycle size between two updates of the hyperparameter gamma.
alphaCycle: integer specifying the cycle size between two updates of the hyperparameter alpha.

Details

The rJST.LDA methods enable the transition from a previously estimated LDA model to a sentiment-aware rJST model. The function retains the previously estimated topics and randomly assigns sentiment to every word of the corpus. The new model will retain the iteration count of the initial LDA model.

Value

An S3 list containing the model parameters and the estimated mixtures. This object corresponds to a Gibbs sampler estimator with zero iterations. The MCMC can be iterated using the grow() function.
• tokens is the tokens object used to create the model
• vocabulary contains the set of words of the corpus
• it tracks the number of Gibbs sampling iterations
• za is the list of topic assignments, aligned to the tokens object with padding removed
• logLikelihood returns the measured log-likelihood at each iteration, with a breakdown of the likelihood into hierarchical components as attribute

The topWords() function easily extracts the most probable words of each topic/sentiment.

Author(s)

<NAME>

References

<NAME>. and <NAME>. (2009). Joint sentiment/topic model for sentiment analysis. In Proceedings of the 18th ACM conference on Information and knowledge management, 375–384.
<NAME>., <NAME>., <NAME>. and <NAME>. (2012). Weakly Supervised Joint Sentiment-Topic Detection from Text. IEEE Transactions on Knowledge and Data Engineering, 24(6), 1134–1145.

See Also

Growing a model: grow(), extracting top words: topWords()
Other topic models: LDA(), rJST(), sentopicmodel()

Examples

# creating a JST model
JST(ECB_press_conferences_tokens)

# estimating a JST model including a lexicon
jst <- JST(ECB_press_conferences_tokens, lexicon = LoughranMcDonald)
jst <- grow(jst, 100)

LDA: Create a Latent Dirichlet Allocation model

Description

This function initializes a Latent Dirichlet Allocation model.

Usage

LDA(x, K = 5, alpha = 1, beta = 0.01)

Arguments

x: tokens object containing the texts. A coercion will be attempted if x is not a tokens.
K: the number of topics.
alpha: the hyperparameter of the topic-document distribution.
beta: the hyperparameter of the vocabulary distribution.

Details

The rJST.LDA methods enable the transition from a previously estimated LDA model to a sentiment-aware rJST model. The function retains the previously estimated topics and randomly assigns sentiment to every word of the corpus. The new model will retain the iteration count of the initial LDA model.
Value

An S3 list containing the model parameters and the estimated mixtures. This object corresponds to a Gibbs sampler estimator with zero iterations. The MCMC can be iterated using the grow() function.
• tokens is the tokens object used to create the model
• vocabulary contains the set of words of the corpus
• it tracks the number of Gibbs sampling iterations
• za is the list of topic assignments, aligned to the tokens object with padding removed
• logLikelihood returns the measured log-likelihood at each iteration, with a breakdown of the likelihood into hierarchical components as attribute

The topWords() function easily extracts the most probable words of each topic/sentiment.

Author(s)

<NAME>

References

<NAME>., <NAME>. and <NAME>. (2003). Latent Dirichlet Allocation. Journal of Machine Learning Research, 3, 993–1022.

See Also

Growing a model: grow(), extracting top words: topWords()
Other topic models: JST(), rJST(), sentopicmodel()

Examples

# creating a model
LDA(ECB_press_conferences_tokens, K = 5, alpha = 0.1, beta = 0.01)

# estimating an LDA model
lda <- LDA(ECB_press_conferences_tokens)
lda <- grow(lda, 100)

LDAvis: Visualize an LDA model using LDAvis

Description

This function calls LDAvis to create a dynamic visualization of an estimated topic model.

Usage

LDAvis(x, ...)

Arguments

x: an LDA model.
...: further arguments passed on to LDAvis::createJSON() and LDAvis::serVis().

Details

The CRAN release of LDAvis does not support UTF-8 characters and automatically reorders topics. To solve these two issues, please install the development version of LDAvis from GitHub (devtools::install_github("cpsievert/LDAvis")).

Value

Nothing, called for its side effects.

See Also

plot.sentopicmodel()

Examples

lda <- LDA(ECB_press_conferences_tokens)
lda <- grow(lda, 100)
LDAvis(lda)

LoughranMcDonald: Loughran-McDonald lexicon

Description

The Loughran-McDonald lexicon for financial texts, adapted for usage in sentopics. The lexicon is enhanced with two lists of valence-shifting words.

Usage

LoughranMcDonald

Format

A quanteda::dictionary containing two polarity categories (negative and positive) and two valence-shifting categories (negator and amplifier).

Source

https://sraf.nd.edu/loughranmcdonald-master-dictionary/ for the lexicon and lexicon::hash_valence_shifters for the valence shifters.

References

<NAME>. & <NAME>. (2011). When Is a Liability Not a Liability? Textual Analysis, Dictionaries, and 10-Ks. The Journal of Finance, 66(1), 35–65.

See Also

JST(), rJST()

Examples

JST(ECB_press_conferences_tokens, lexicon = LoughranMcDonald)

melt: Replacement generic for data.table::melt()

Description

As of the CRAN release of the 1.14.8 version of data.table, the data.table::melt() function is not a generic. This function aims to temporarily provide a generic to this function, so that melt.sentopicmodel() can be effectively dispatched when used. Expect this function to disappear shortly after the release of data.table 1.14.9.

Usage

melt(data, ...)

Arguments

data: an object to melt.
...: arguments passed to other methods.

Value

An unkeyed data.table containing the molten data.

See Also

data.table::melt(), melt.sentopicmodel()

melt.sentopicmodel: Melt for sentopicmodels

Description

This function extracts the estimated document mixtures from a topic model and returns them in a long data.table format.

Usage

## S3 method for class 'sentopicmodel'
melt(data, ..., include_docvars = FALSE)

Arguments

data: a model created from the LDA(), JST() or rJST() function and estimated with grow().
...: not used.
include_docvars: if TRUE, the melted result will also include the docvars stored in the tokens object provided at model initialization.

Value

A data.table in the long format, where each line is the estimated proportion of a single topic/sentiment for a document. For JST and rJST models, the probability is also decomposed into 'L1' and 'L2' layers, representing the probability at each layer of the topic-sentiment hierarchy.

Author(s)

<NAME>

See Also

topWords() for extracting representative words, data.table::melt() and data.table::dcast()

Examples

# only returns topic proportion for LDA models
lda <- LDA(ECB_press_conferences_tokens)
lda <- grow(lda, 10)
melt(lda)

# includes sentiment for JST and rJST models
jst <- JST(ECB_press_conferences_tokens)
jst <- grow(jst, 10)
melt(jst)

mergeTopics: Merge topics into fewer themes

Description

This operation is especially useful for the analysis of the model's output, by grouping together topics that share a common theme.

Usage

mergeTopics(x, merging_list)

Arguments

x: an LDA() or rJST() model.
merging_list: a list where each element is an integer vector containing the indices of topics to be merged. If named, the list's names become the labels of the aggregated themes.

Details

Topics are aggregated at the word assignment level. New document-topic and topic-word probabilities are derived from the aggregated assignments.
Note that the output of this function does not constitute an estimated topic model, but merely an aggregation to ease the analysis. It is not advised to use grow() on the merged topic model, as it will radically affect the content and proportions of the new themes.

Value

An LDA() or rJST() model with the merged topics.

See Also

sentopics_labels

Examples

lda <- LDA(ECB_press_conferences_tokens, K = 5)
lda <- grow(lda, 100)
merging_list <- list(
  c(1,5),
  2:4
)
mergeTopics(lda, merging_list)

# also possible with a named list
merging_list2 <- list(
  mytheme_1 = c(1,5),
  mytheme_2 = 2:4
)
merged <- mergeTopics(lda, merging_list2)
sentopics_labels(merged)

# implemented for rJST
rjst <- rJST(ECB_press_conferences_tokens, lexicon = LoughranMcDonald)
rjst <- grow(rjst, 100)
mergeTopics(rjst, merging_list2)

PicaultRenault: Picault-Renault lexicon

Description

The Picault-Renault lexicon, specialized in the analysis of central bank communication. The lexicon identifies a large number of n-grams and gives the probability that they belong to each of six categories:
• Monetary Policy - accommodative
• Monetary Policy - neutral
• Monetary Policy - restrictive
• Economic Condition - negative
• Economic Condition - neutral
• Economic Condition - positive

Usage

PicaultRenault

Format

A data.table object.

Source

http://www.cbcomindex.com/lexicon.php

References

<NAME>. & <NAME>. (2017). Words are not all created equal: A new measure of ECB communication. Journal of International Money and Finance, 79, 136–156.

See Also

compute_PicaultRenault_scores()

Examples

head(PicaultRenault)

PicaultRenault_data: Regression dataset based on Picault & Renault (2017)

Description

A regression dataset built to partially replicate the results of Picault & Renault.
This dataset contains, for each press conference published after 2000:
• The Main Refinancing Rate (MRR) of the ECB set following the press conference
• The change in the MRR following the press conference
• The change in the MRR observed at the previous press conference
• The Bloomberg consensus on the announced MRR
• The Surprise brought by the announcement, computed as the Bloomberg consensus minus the MRR following the conference
• The EURO STOXX 50 return on the day of the press conference
• The EURO STOXX 50 return on the day preceding the announcement

Usage

PicaultRenault_data

Format

An xts::xts object.

Source

The data was manually prepared by the author of this package.

References

<NAME>. & <NAME>. (2017). Words are not all created equal: A new measure of ECB communication. Journal of International Money and Finance, 79, 136–156.

Examples

head(PicaultRenault_data)

plot.multiChains: Plot the distances between topic models (chains)

Description

Plot the results of chainsDistances(x) using multidimensional scaling. See chainsDistances() for details on the distance computation and stats::cmdscale() for the implementation of the multidimensional scaling. A manual sketch of this computation appears below, after the print method entry.

Usage

## S3 method for class 'multiChains'
plot(
  x,
  ...,
  method = c("euclidean", "hellinger", "cosine", "minMax", "naiveEuclidean",
    "invariantEuclidean")
)

Arguments

x: a valid multiChains object, obtained through the estimation of a topic model using grow() and the argument nChains greater than 1.
...: not used.
method: the method used to measure the distance between chains.

Value

Invisibly, the coordinates of each topic model resulting from the multidimensional scaling.

See Also

chainsDistances() cmdscale()

Examples

models <- LDA(ECB_press_conferences_tokens)
models <- grow(models, 10, nChains = 5)
plot(models)

plot.sentopicmodel: Plot a topic model using Plotly

Description

Summarize and plot a sentopics model using a sunburst chart from the plotly::plotly library.

Usage

## S3 method for class 'sentopicmodel'
plot(x, nWords = 15, layers = 3, sort = FALSE, ...)

Arguments

x: a model created from the LDA(), JST() or rJST() function and estimated with grow().
nWords: the number of words per topic/sentiment to display in the outer layer of the plot.
layers: specifies the number of layers for the sunburst chart. This restricts the output to the given number of uppermost levels of the chart. For example, setting layers = 1 will only display the top level of the hierarchy (topics for an LDA model).
sort: if TRUE, sorts the plotted topics in decreasing frequency.
...: not used.

Value

A plotly sunburst chart.

See Also

topWords() LDAvis()

Examples

lda <- LDA(ECB_press_conferences_tokens)
lda <- grow(lda, 100)
plot(lda, nWords = 5)

# only displays the topic proportions
plot(lda, layers = 1)

print.sentopicmodel: Print method for sentopics models

Description

Print methods for sentopics models. Once per session (or forced by using extended = TRUE), it lists the most important functions related to sentopics models.

Usage

## S3 method for class 'sentopicmodel'
print(x, extended = FALSE, ...)

## S3 method for class 'rJST'
print(x, extended = FALSE, ...)

## S3 method for class 'LDA'
print(x, extended = FALSE, ...)

## S3 method for class 'JST'
print(x, extended = FALSE, ...)

Arguments

x: the model to be printed.
extended: if TRUE, extends the print to include some helpful related functions. Automatically displayed once per session.
...: not used.

Value

No return value, called for side effects (printing).
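For intuition on what plot.multiChains() produces, here is a minimal manual sketch. It assumes only what the entries above state: chainsDistances() supplies the pairwise distance matrix and stats::cmdscale() performs the multidimensional scaling. This is an illustration, not the package's exact internals.

models <- LDA(ECB_press_conferences_tokens)
models <- grow(models, 10, nChains = 5)
d <- chainsDistances(models)         # pairwise distances between the 5 chains
coords <- stats::cmdscale(d, k = 2)  # project the chains onto two dimensions
plot(coords, xlab = "Coordinate 1", ylab = "Coordinate 2")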
proportion_topics: Compute the topic or sentiment proportion time series

Description

Aggregate the topical or sentiment proportions at the document level into time series.

Usage

proportion_topics(
  x,
  period = c("year", "quarter", "month", "day", "identity"),
  rolling_window = 1,
  complete = TRUE,
  plot = c(FALSE, TRUE, "silent"),
  plot_ridgelines = TRUE,
  as.xts = TRUE,
  ...
)

plot_proportion_topics(
  x,
  period = c("year", "quarter", "month", "day"),
  rolling_window = 1,
  complete = TRUE,
  plot_ridgelines = TRUE,
  ...
)

Arguments

x: an LDA(), JST() or rJST() model populated with internal dates and/or internal sentiment.
period: the sampling period within which the sentiment of documents will be averaged. period = "identity" is a special case that will return document-level variables before the aggregation happens. Useful to rapidly compute topical sentiment at the document level.
rolling_window: if greater than 1, determines the rolling window to compute a moving average of sentiment. The rolling window is based on the period unit and relies on actual dates (i.e., it is not affected by unequally spaced data points).
complete: if FALSE, only compute proportions at the upper level of the topic model hierarchy (topics for rJST and sentiment for JST). No effect on LDA models.
plot: if TRUE, prints a plot of the time series and attaches it as an attribute to the returned object. If 'silent', do not print the plot but still attach it as an attribute.
plot_ridgelines: if TRUE, time series are plotted as ridgelines. Requires the ggridges package installed. If FALSE, the plot will use only standard ggplot2 functions. If the argument is missing and the package ggridges is not installed, this will quietly switch to a ggplot2 output.
as.xts: if TRUE, returns an xts::xts object. Otherwise, returns a data.frame.
...: other arguments passed on to zoo::rollapply() or mean() and sd().

Value

A time series of proportions, stored as an xts::xts object or as a data.frame.

See Also

sentopics_sentiment sentopics_date
Other series functions: sentiment_breakdown(), sentiment_series(), sentiment_topics()

Examples

lda <- LDA(ECB_press_conferences_tokens)
lda <- grow(lda, 100)
proportion_topics(lda)

# plot shortcut
plot_proportion_topics(lda, period = "month", rolling_window = 3)

# with or without ridgelines
plot_proportion_topics(lda, period = "month", plot_ridgelines = FALSE)

# also available for rJST and JST models
jst <- JST(ECB_press_conferences_tokens, lexicon = LoughranMcDonald)
jst <- grow(jst, 100)

# including both layers
proportion_topics(jst)

# or not
proportion_topics(jst, complete = FALSE)

reset: Re-initialize a topic model

Description

This function is used to re-initialize a topic model, as if it was just created from LDA(), JST() or another model function. The re-initialized model retains its original parameter specification.

Usage

reset(x)

Arguments

x: a model created from the LDA(), JST() or rJST() function and estimated with grow().

Value

a sentopicmodel of the relevant model class, with the iteration count reset to zero and re-initialized assignments of latent variables.

Author(s)

<NAME>

See Also

grow()

Examples

model <- LDA(ECB_press_conferences_tokens)
model <- grow(model, 10)
reset(model)

rJST: Create a Reversed Joint Sentiment/Topic model

Description

This function initializes a Reversed Joint Sentiment/Topic model.

Usage

rJST(x, ...)

## Default S3 method:
rJST(
  x,
  lexicon = NULL,
  K = 5,
  S = 3,
  alpha = 1,
  gamma = 5,
  beta = 0.01,
  alphaCycle = 0,
  gammaCycle = 0,
  ...
)

## S3 method for class 'LDA'
rJST(x, lexicon = NULL, S = 3, gamma = 5, ...)
Arguments

x: tokens object containing the texts. A coercion will be attempted if x is not a tokens.
...: not used.
lexicon: a quanteda dictionary with positive and negative categories.
K: the number of topics.
S: the number of sentiments.
alpha: the hyperparameter of the topic-document distribution.
gamma: the hyperparameter of the sentiment-document distribution.
beta: the hyperparameter of the vocabulary distribution.
alphaCycle: integer specifying the cycle size between two updates of the hyperparameter alpha.
gammaCycle: integer specifying the cycle size between two updates of the hyperparameter gamma.

Details

The rJST.LDA methods enable the transition from a previously estimated LDA model to a sentiment-aware rJST model. The function retains the previously estimated topics and randomly assigns sentiment to every word of the corpus. The new model will retain the iteration count of the initial LDA model.

Value

An S3 list containing the model parameters and the estimated mixtures. This object corresponds to a Gibbs sampler estimator with zero iterations. The MCMC can be iterated using the grow() function.
• tokens is the tokens object used to create the model
• vocabulary contains the set of words of the corpus
• it tracks the number of Gibbs sampling iterations
• za is the list of topic assignments, aligned to the tokens object with padding removed
• logLikelihood returns the measured log-likelihood at each iteration, with a breakdown of the likelihood into hierarchical components as attribute

The topWords() function easily extracts the most probable words of each topic/sentiment.

Author(s)

<NAME>

References

<NAME>. and <NAME>. (2009). Joint sentiment/topic model for sentiment analysis. In Proceedings of the 18th ACM conference on Information and knowledge management, 375–384.
<NAME>., <NAME>., <NAME>. and <NAME>. (2012). Weakly Supervised Joint Sentiment-Topic Detection from Text. IEEE Transactions on Knowledge and Data Engineering, 24(6), 1134–1145.

See Also

Growing a model: grow(), extracting top words: topWords()
Other topic models: JST(), LDA(), sentopicmodel()

Examples

# simple rJST model
rJST(ECB_press_conferences_tokens)

# estimating a rJST model including a lexicon
rjst <- rJST(ECB_press_conferences_tokens, lexicon = LoughranMcDonald)
rjst <- grow(rjst, 100)

# from an LDA model:
lda <- LDA(ECB_press_conferences_tokens)
lda <- grow(lda, 100)

# creating a rJST model out of it
rjst <- rJST(lda, lexicon = LoughranMcDonald)

# topic proportions remain identical
identical(lda$theta, rjst$theta)

# model should be iterated to estimate sentiment proportions
rjst <- grow(rjst, 100)

sentiment_breakdown: Breakdown the sentiment into topical components

Description

Break down the sentiment series obtained with sentiment_series() into topical components. Sentiment is broken down at the document level using estimated topic proportions, then processed to create a time series and its components.

Usage

sentiment_breakdown(
  x,
  period = c("year", "quarter", "month", "day", "identity"),
  rolling_window = 1,
  scale = TRUE,
  scaling_period = c("1900-01-01", "2099-12-31"),
  plot = c(FALSE, TRUE, "silent"),
  as.xts = TRUE,
  ...
)

plot_sentiment_breakdown(
  x,
  period = c("year", "quarter", "month", "day"),
  rolling_window = 1,
  scale = TRUE,
  scaling_period = c("1900-01-01", "2099-12-31"),
  ...
)

Arguments

x: an LDA() or rJST() model populated with internal dates and/or internal sentiment.
period: the sampling period within which the sentiment of documents will be averaged.
period = "identity" is a special case that will return document-level variables before the aggregation happens. Useful to rapidly compute topical sentiment at the document level.
rolling_window: if greater than 1, determines the rolling window to compute a moving average of sentiment. The rolling window is based on the period unit and relies on actual dates (i.e., it is not affected by unequally spaced data points).
scale: if TRUE, the resulting time series will be scaled to a mean of zero and a standard deviation of 1. This argument also has the side effect of attaching scaled sentiment values as docvars to the input object with the _scaled suffix.
scaling_period: the date range over which the scaling should be applied. Particularly useful to normalize only the beginning of the time series.
plot: if TRUE, prints a plot of the time series and attaches it as an attribute to the returned object. If 'silent', do not print the plot but still attach it as an attribute.
as.xts: if TRUE, returns an xts::xts object. Otherwise, returns a data.frame.
...: other arguments passed on to zoo::rollapply() or mean() and sd().

Details

The sentiment is broken down at the document level assuming the following composition:

s = \sum_{i=1}^{K} s_i \times \theta_i

where s_i is the sentiment of topic i and \theta_i the proportion of topic i in a given document. For an LDA model, the sentiment of each topic is considered equal to the document sentiment (i.e., s_i = s \forall i \in K). The topical sentiment attention, defined by s_i^* = s_i \times \theta_i, represents the effective sentiment conveyed by a topic in a document. The topical sentiment attentions of all documents in a period are averaged to compute the breakdown of the sentiment time series.

Value

A time series of sentiment, stored as an xts::xts object or as a data.frame.

See Also

sentopics_sentiment sentopics_date
Other series functions: proportion_topics(), sentiment_series(), sentiment_topics()

Examples

lda <- LDA(ECB_press_conferences_tokens)
lda <- grow(lda, 100)
sentiment_breakdown(lda)

# plot shortcut
plot_sentiment_breakdown(lda)

# also available for rJST models (with topic-level sentiment)
rjst <- rJST(ECB_press_conferences_tokens, lexicon = LoughranMcDonald)
rjst <- grow(rjst, 100)
sentopics_sentiment(rjst, override = TRUE)
plot_sentiment_breakdown(rjst)

sentiment_series: Compute a sentiment time series

Description

Compute a sentiment time series based on the internal sentiment and dates of a sentopicmodel. The time series computation supports multiple sampling periods and optionally allows computing a moving average.

Usage

sentiment_series(
  x,
  period = c("year", "quarter", "month", "day"),
  rolling_window = 1,
  scale = TRUE,
  scaling_period = c("1900-01-01", "2099-12-31"),
  as.xts = TRUE,
  ...
)

Arguments

x: an LDA(), JST() or rJST() model populated with internal dates and/or internal sentiment.
period: the sampling period within which the sentiment of documents will be averaged. period = "identity" is a special case that will return document-level variables before the aggregation happens. Useful to rapidly compute topical sentiment at the document level.
rolling_window: if greater than 1, determines the rolling window to compute a moving average of sentiment. The rolling window is based on the period unit and relies on actual dates (i.e., it is not affected by unequally spaced data points).
scale: if TRUE, the resulting time series will be scaled to a mean of zero and a standard deviation of 1.
This argument also has the side effect of attaching scaled sentiment values as docvars to the input object with the _scaled suffix.
scaling_period: the date range over which the scaling should be applied. Particularly useful to normalize only the beginning of the time series.
as.xts: if TRUE, returns an xts::xts object. Otherwise, returns a data.frame.
...: other arguments passed on to zoo::rollapply() or mean() and sd().

Value

A time series of sentiment, stored as an xts::xts or data.frame.

See Also

sentopics_sentiment sentopics_date
Other series functions: proportion_topics(), sentiment_breakdown(), sentiment_topics()

Examples

lda <- LDA(ECB_press_conferences_tokens)
series <- sentiment_series(lda, period = "month")

# JST and rJST models can use computed sentiment from the sentiment layer,
# but the model must be estimated first.
rjst <- rJST(ECB_press_conferences_tokens, lexicon = LoughranMcDonald)
sentiment_series(rjst)
sentopics_sentiment(rjst) <- NULL ## remove existing sentiment
rjst <- grow(rjst, 10) ## estimating the model is then needed
sentiment_series(rjst)

# note the presence of both raw and scaled sentiment values
# in the initial object
sentopics_sentiment(lda)
sentopics_sentiment(rjst)

sentiment_topics: Compute time series of topical sentiments

Description

Derive topical time series of sentiment from an LDA() or rJST() model. The time series are created by leveraging the estimated topic proportions and internal sentiment (for LDA models) or topical sentiment (for rJST models).

Usage

sentiment_topics(
  x,
  period = c("year", "quarter", "month", "day", "identity"),
  rolling_window = 1,
  scale = TRUE,
  scaling_period = c("1900-01-01", "2099-12-31"),
  plot = c(FALSE, TRUE, "silent"),
  plot_ridgelines = TRUE,
  as.xts = TRUE,
  ...
)

plot_sentiment_topics(
  x,
  period = c("year", "quarter", "month", "day"),
  rolling_window = 1,
  scale = TRUE,
  scaling_period = c("1900-01-01", "2099-12-31"),
  plot_ridgelines = TRUE,
  ...
)

Arguments

x: an LDA() or rJST() model populated with internal dates and/or internal sentiment.
period: the sampling period within which the sentiment of documents will be averaged. period = "identity" is a special case that will return document-level variables before the aggregation happens. Useful to rapidly compute topical sentiment at the document level.
rolling_window: if greater than 1, determines the rolling window to compute a moving average of sentiment. The rolling window is based on the period unit and relies on actual dates (i.e., it is not affected by unequally spaced data points).
scale: if TRUE, the resulting time series will be scaled to a mean of zero and a standard deviation of 1. This argument also has the side effect of attaching scaled sentiment values as docvars to the input object with the _scaled suffix.
scaling_period: the date range over which the scaling should be applied. Particularly useful to normalize only the beginning of the time series.
plot: if TRUE, prints a plot of the time series and attaches it as an attribute to the returned object. If 'silent', do not print the plot but still attach it as an attribute.
plot_ridgelines: if TRUE, time series are plotted as ridgelines. Requires the ggridges package installed. If FALSE, the plot will use only standard ggplot2 functions. If the argument is missing and the package ggridges is not installed, this will quietly switch to a ggplot2 output.
as.xts: if TRUE, returns an xts::xts object. Otherwise, returns a data.frame.
...: other arguments passed on to zoo::rollapply() or mean() and sd().
Details

A topical sentiment is computed at the document level for each topic. For an LDA model, the sentiment of each topic is considered equal to the document sentiment (i.e., s_i = s \forall i \in K). For an rJST model, these result from the proportions in the sentiment layer under each topic. To compute the topical time series, the topical sentiment of all documents in a period are aggregated according to their respective topic proportion. For example, for a given topic, the topical sentiment in period t is computed using:

s_t = \frac{\sum_{d=1}^{D} s_d \times \theta_d}{\sum_{d=1}^{D} \theta_d}

where s_d is the sentiment of the topic in document d and \theta_d the topic proportion in document d.

Value

An xts::xts or data.frame containing the time series of topical sentiments.

See Also

sentopics_sentiment sentopics_date
Other series functions: proportion_topics(), sentiment_breakdown(), sentiment_series()

Examples

lda <- LDA(ECB_press_conferences_tokens)
lda <- grow(lda, 100)
sentiment_topics(lda)

# plot shortcut
plot_sentiment_topics(lda, period = "month", rolling_window = 3)

# with or without ridgelines
plot_sentiment_topics(lda, period = "month", plot_ridgelines = FALSE)

# also available for rJST models with internal sentiment computation
rjst <- rJST(ECB_press_conferences_tokens, lexicon = LoughranMcDonald)
rjst <- grow(rjst, 100)
sentopics_sentiment(rjst, override = TRUE)
sentiment_topics(rjst)

sentopics_date: Internal date

Description

Extract or replace the internal dates of a sentopicmodel. The internal dates are used to create time series using the functions sentiment_series() or sentiment_topics(). Dates should be provided by using sentopics_date(x) <- value or by storing a '.date' docvar in the tokens object used to create the model.

Usage

sentopics_date(x, include_docvars = FALSE)

sentopics_date(x) <- value

Arguments

x: a sentopicmodel created from the LDA(), JST(), rJST() or sentopicmodel() function.
include_docvars: if TRUE, the function will return all docvars stored in the internal tokens object of the model.
value: a Date-coercible vector of dates to input into the model.

Value

a data.frame with the stored date per document.

Note

The internal date is stored in the docvars of the topic model. This means that dates may also be accessed through the docvars() function, although this is discouraged.

Author(s)

<NAME>

See Also

Other sentopics helpers: sentopics_labels(), sentopics_sentiment()

Examples

# example dataset already contains ".date" docvar
docvars(ECB_press_conferences_tokens)

# dates are automatically stored in the sentopicmodel object
lda <- LDA(ECB_press_conferences_tokens)
sentopics_date(lda)

# dates can be removed or modified by the assignment operator
sentopics_date(lda) <- NULL
sentopics_date(lda) <- docvars(ECB_press_conferences_tokens, ".date")

sentopics_labels: Setting topic or sentiment labels

Description

Extract or replace the labels of a sentopicmodel. The replaced labels will appear in most functions dealing with the output of the sentopicmodel.

Usage

sentopics_labels(x, flat = TRUE)

sentopics_labels(x) <- value

Arguments

x: a sentopicmodel created from the LDA(), JST(), rJST() or sentopicmodel() function.
flat: if FALSE, return a list of dimension labels instead of a character vector.
value: a list of future labels for the topic model. The list should be named and contain a character vector for each dimension to label. See the examples for a correct usage.

Value

a character vector of topic/sentiment labels.
Author(s)

<NAME>

See Also

mergeTopics
Other sentopics helpers: sentopics_date(), sentopics_sentiment()

Examples

# by default, sentopics_labels() generates standard topic names
lda <- LDA(ECB_press_conferences_tokens)
sentopics_labels(lda)

# to change labels, a named list must be provided
sentopics_labels(lda) <- list(
  topic = paste0("superTopic", 1:lda$K)
)
sentopics_labels(lda)

# using NULL removes labels
sentopics_labels(lda) <- NULL
sentopics_labels(lda)

# also works for JST/rJST models
jst <- JST(ECB_press_conferences_tokens)
sentopics_labels(jst) <- list(
  topic = paste0("superTopic", 1:jst$K),
  sentiment = c("negative", "neutral", "positive")
)
sentopics_labels(jst)

# setting flat = FALSE returns a list of labels for each dimension
sentopics_labels(jst, flat = FALSE)

sentopics_sentiment: Internal sentiment

Description

Compute, extract or replace the internal sentiment of a sentopicmodel. The internal sentiment is used to create time series using the functions sentiment_series() or sentiment_topics(). If the input model contains a sentiment layer, sentiment can be computed directly from the output of the model. Otherwise, sentiment obtained externally should be added for each document.

Usage

sentopics_sentiment(
  x,
  method = c("proportional", "proportionalPol"),
  override = FALSE,
  quiet = FALSE,
  include_docvars = FALSE
)

sentopics_sentiment(x) <- value

Arguments

x: a sentopicmodel created from the LDA(), JST(), rJST() or sentopicmodel() function.
method: the method used to compute sentiment, see "Methods" below. Ignored if an internal sentiment is already stored, unless override is TRUE.
override: by default, the function computes sentiment only if no internal sentiment is already stored within the sentopicmodel object. This avoids, for example, erasing externally provided sentiment. Set to TRUE to force computation of new sentiment values. Only useful for models with a sentiment layer.
quiet: if FALSE, print a message when internal sentiment is found.
include_docvars: if TRUE, the function will return all docvars stored in the internal tokens object of the model.
value: a numeric vector of sentiment to input into the model.

Details

The computed sentiment varies depending on the model. For LDA, sentiment computation is not possible.
For JST, the sentiment is computed on a per-document basis according to the document-level sentiment mixtures.
For an rJST model, a sentiment is computed for each topic, resulting in K sentiment values per document. In that case, the .sentiment column is an average of the K sentiment values, weighted by their respective topical proportions.

Value

A data.frame with the stored sentiment per document.

Methods

The function accepts two methods of computing sentiment:
• proportional computes the difference between the estimated positive and negative proportions for each document (and possibly each topic):

positive - negative

• proportionalPol computes the difference between positive and negative proportions, divided by the sum of positive and negative proportions. As a result, the computed sentiment lies within the (-1;1) interval:

(positive - negative) / (positive + negative)

Both methods will lead to the same result for a JST model containing only negative and positive sentiments.

Note

The internal sentiment is stored in the docvars of the topic model. This means that sentiment may also be accessed through the docvars() function, although this is discouraged.
Author(s)

<NAME>

See Also

Other sentopics helpers: sentopics_date(), sentopics_labels()

Examples

# example dataset already contains ".sentiment" docvar
docvars(ECB_press_conferences_tokens)

# sentiment is automatically stored in the sentopicmodel object
lda <- LDA(ECB_press_conferences_tokens)
sentopics_sentiment(lda)

# sentiment can be removed or modified by the assignment operator
sentopics_sentiment(lda) <- NULL
sentopics_sentiment(lda) <- docvars(ECB_press_conferences_tokens, ".sentiment")

# for JST models, sentiment can be computed from the output of the model
jst <- JST(ECB_press_conferences_tokens, lexicon = LoughranMcDonald)
jst <- grow(jst, 100)
sentopics_sentiment(jst, override = TRUE) # replace existing sentiment

## for rJST models one sentiment value is computed per topic
rjst <- rJST(ECB_press_conferences_tokens, lexicon = LoughranMcDonald)
rjst <- grow(rjst, 100)
sentopics_sentiment(rjst, override = TRUE)

topWords: Extract the most representative words from topics

Description

Extract the top words in each topic/sentiment from a sentopicmodel.

Usage

topWords(
  x,
  nWords = 10,
  method = c("frequency", "probability", "term-score", "FREX"),
  output = c("data.frame", "plot", "matrix"),
  subset,
  w = 0.5
)

plot_topWords(
  x,
  nWords = 10,
  method = c("frequency", "probability", "term-score", "FREX"),
  subset,
  w = 0.5
)

Arguments

x: a sentopicmodel created from the LDA(), JST() or rJST() function.
nWords: the number of top words to extract.
method: specifies if a re-ranking function should be applied before returning the top words.
output: determines the output of the function.
subset: allows subsetting using a logical expression, as in subset(). Particularly useful to limit the number of observations on plot outputs. The logical expression uses topic and sentiment indices rather than their labels. It is possible to subset on both topic and sentiment by adding a & operator between two expressions.
w: only used when method = "FREX". Determines the weight assigned to the frequency score at the expense of the exclusivity score.

Value

The top words of the topic model. Depending on the output chosen, can result in either a long-style data.frame, a ggplot2 object or a matrix.

Author(s)

<NAME>

See Also

melt.sentopicmodel() for extracting estimated mixtures

Examples

model <- LDA(ECB_press_conferences_tokens)
model <- grow(model, 10)
topWords(model)
topWords(model, output = "matrix")
topWords(model, method = "FREX")
plot_topWords(model)
plot_topWords(model, subset = topic %in% 1:2)

jst <- JST(ECB_press_conferences_tokens)
jst <- grow(jst, 10)
plot_topWords(jst)
plot_topWords(jst, subset = topic %in% 1:2 & sentiment == 3)
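To close these manual pages, here is a brief end-to-end sketch chaining the functions documented above. It relies only on functions and arguments described in the entries above; the theme labels are arbitrary examples, and the bundled ECB tokens already carry the '.date' and '.sentiment' docvars used by the series functions.

lda <- LDA(ECB_press_conferences_tokens, K = 5)
lda <- grow(lda, 100)                                         # estimate via Gibbs sampling
sentopics_labels(lda) <- list(topic = paste0("theme_", 1:5))  # arbitrary labels
topWords(lda)                                                 # inspect representative words
sentiment_topics(lda, period = "month", rolling_window = 3)   # topical sentiment series
plot_proportion_topics(lda, period = "quarter")               # topical attention over time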
docs_spring_io_spring_docs_current_spring-framework-reference
free_programming_book
Markdown
Spring makes it easy to create Java enterprise applications. It provides everything you need to embrace the Java language in an enterprise environment, with support for Groovy and Kotlin as alternative languages on the JVM, and with the flexibility to create many kinds of architectures depending on an application’s needs. As of Spring Framework 6.0, Spring requires Java 17+.

Spring supports a wide range of application scenarios. In a large enterprise, applications often exist for a long time and have to run on a JDK and application server whose upgrade cycle is beyond developer control. Others may run as a single jar with the server embedded, possibly in a cloud environment. Yet others may be standalone applications (such as batch or integration workloads) that do not need a server.

Spring is open source. It has a large and active community that provides continuous feedback based on a diverse range of real-world use cases. This has helped Spring to successfully evolve over a very long time.

## What We Mean by "Spring"

The term "Spring" means different things in different contexts. It can be used to refer to the Spring Framework project itself, which is where it all started. Over time, other Spring projects have been built on top of the Spring Framework. Most often, when people say "Spring", they mean the entire family of projects. This reference documentation focuses on the foundation: the Spring Framework itself.

The Spring Framework is divided into modules. Applications can choose which modules they need. At the heart are the modules of the core container, including a configuration model and a dependency injection mechanism. Beyond that, the Spring Framework provides foundational support for different application architectures, including messaging, transactional data and persistence, and web. It also includes the Servlet-based Spring MVC web framework and, in parallel, the Spring WebFlux reactive web framework.

A note about modules: Spring’s framework jars allow for deployment to JDK 9’s module path ("Jigsaw"). For use in Jigsaw-enabled applications, the Spring Framework 5 jars come with "Automatic-Module-Name" manifest entries which define stable language-level module names ("spring.core", "spring.context", etc.) independent from jar artifact names (the jars follow the same naming pattern with "-" instead of ".", e.g. "spring-core" and "spring-context"). Of course, Spring’s framework jars keep working fine on the classpath on both JDK 8 and 9+.

## History of Spring and the Spring Framework

Spring came into being in 2003 as a response to the complexity of the early J2EE specifications. While some consider Java EE and its modern-day successor Jakarta EE to be in competition with Spring, they are in fact complementary. The Spring programming model does not embrace the Jakarta EE platform specification; rather, it integrates with carefully selected individual specifications from the traditional EE umbrella: the Servlet API, the WebSocket API, Concurrency Utilities, the JSON Binding API, Bean Validation, JPA, and JMS, as well as JTA/JCA setups for transaction coordination, if necessary.

The Spring Framework also supports the Dependency Injection (JSR 330) and Common Annotations (JSR 250) specifications, which application developers may choose to use instead of the Spring-specific mechanisms provided by the Spring Framework. Originally, those were based on common `javax` packages.

As of Spring Framework 6.0, Spring has been upgraded to the Jakarta EE 9 level (e.g. Servlet 5.0+, JPA 3.0+), based on the `jakarta` namespace instead of the traditional `javax` packages.
With EE 9 as the minimum and EE 10 supported already, Spring is prepared to provide out-of-the-box support for the further evolution of the Jakarta EE APIs. Spring Framework 6.0 is fully compatible with Tomcat 10.1, Jetty 11 and Undertow 2.3 as web servers, and also with Hibernate ORM 6.1.

Over time, the role of Java/Jakarta EE in application development has evolved. In the early days of J2EE and Spring, applications were created to be deployed to an application server. Today, with the help of Spring Boot, applications are created in a devops- and cloud-friendly way, with the Servlet container embedded and trivial to change. As of Spring Framework 5, a WebFlux application does not even use the Servlet API directly and can run on servers (such as Netty) that are not Servlet containers.

Spring continues to innovate and to evolve. Beyond the Spring Framework, there are other projects, such as Spring Boot, Spring Security, Spring Data, Spring Cloud, Spring Batch, among others. It’s important to remember that each project has its own source code repository, issue tracker, and release cadence. See spring.io/projects for the complete list of Spring projects.

## Design Philosophy

When you learn about a framework, it’s important to know not only what it does but what principles it follows. Here are the guiding principles of the Spring Framework:

* Provide choice at every level. Spring lets you defer design decisions as late as possible. For example, you can switch persistence providers through configuration without changing your code. The same is true for many other infrastructure concerns and integration with third-party APIs.
* Accommodate diverse perspectives. Spring embraces flexibility and is not opinionated about how things should be done. It supports a wide range of application needs with different perspectives.
* Maintain strong backward compatibility. Spring’s evolution has been carefully managed to force few breaking changes between versions. Spring supports a carefully chosen range of JDK versions and third-party libraries to facilitate maintenance of applications and libraries that depend on Spring.
* Care about API design. The Spring team puts a lot of thought and time into making APIs that are intuitive and that hold up across many versions and many years.
* Set high standards for code quality. The Spring Framework puts a strong emphasis on meaningful, current, and accurate javadoc. It is one of very few projects that can claim clean code structure with no circular dependencies between packages.

## Feedback and Contributions

For how-to questions or diagnosing or debugging issues, we suggest using Stack Overflow. Click here for a list of the suggested tags to use on Stack Overflow. If you’re fairly certain that there is a problem in the Spring Framework or would like to suggest a feature, please use the GitHub Issues.

If you have a solution in mind or a suggested fix, you can submit a pull request on Github. However, please keep in mind that, for all but the most trivial issues, we expect a ticket to be filed in the issue tracker, where discussions take place and leave a record for future reference.

For more details see the guidelines at the CONTRIBUTING, top-level project page.

## Getting Started

If you are just getting started with Spring, you may want to begin using the Spring Framework by creating a Spring Boot-based application. Spring Boot provides a quick (and opinionated) way to create a production-ready Spring-based application.
It is based on the Spring Framework, favors convention over configuration, and is designed to get you up and running as quickly as possible. You can use start.spring.io to generate a basic project or follow one of the "Getting Started" guides, such as Getting Started Building a RESTful Web Service. As well as being easier to digest, these guides are very task focused, and most of them are based on Spring Boot. They also cover other projects from the Spring portfolio that you might want to consider when solving a particular problem.

This part of the reference documentation covers all the technologies that are absolutely integral to the Spring Framework.

Foremost amongst these is the Spring Framework’s Inversion of Control (IoC) container. A thorough treatment of the Spring Framework’s IoC container is closely followed by comprehensive coverage of Spring’s Aspect-Oriented Programming (AOP) technologies. The Spring Framework has its own AOP framework, which is conceptually easy to understand and which successfully addresses the 80% sweet spot of AOP requirements in Java enterprise programming. Coverage of Spring’s integration with AspectJ (currently the richest — in terms of features — and certainly most mature AOP implementation in the Java enterprise space) is also provided.

AOT processing can be used to optimize your application ahead-of-time. It is typically used for native image deployment using GraalVM.

# The IoC Container

This chapter covers Spring’s Inversion of Control (IoC) container.

Section Summary:

* Introduction to the Spring IoC Container and Beans
* Container Overview
* Bean Overview
* Dependencies
* Bean Scopes
* Customizing the Nature of a Bean
* Bean Definition Inheritance
* Container Extension Points
* Annotation-based Container Configuration
* Classpath Scanning and Managed Components
* Using JSR 330 Standard Annotations
* Java-based Container Configuration
* Environment Abstraction
* Registering a LoadTimeWeaver
* Additional Capabilities of the ApplicationContext
* The BeanFactory API

The `org.springframework.beans` and `org.springframework.context` packages are the basis for Spring Framework’s IoC container. The `BeanFactory` interface provides an advanced configuration mechanism capable of managing any type of object. `ApplicationContext` is a sub-interface of `BeanFactory`. It adds:

* Easier integration with Spring’s AOP features
* Message resource handling (for use in internationalization)
* Event publication
* Application-layer specific contexts such as the `WebApplicationContext` for use in web applications

In short, the `BeanFactory` provides the configuration framework and basic functionality, and the `ApplicationContext` adds more enterprise-specific functionality. The `ApplicationContext` is a complete superset of the `BeanFactory` and is used exclusively in this chapter in descriptions of Spring’s IoC container. For more information on using the `BeanFactory` instead of the `ApplicationContext`, see the section covering the `BeanFactory` API.

In Spring, the objects that form the backbone of your application and that are managed by the Spring IoC container are called beans. A bean is an object that is instantiated, assembled, and managed by a Spring IoC container. Otherwise, a bean is simply one of many objects in your application. Beans, and the dependencies among them, are reflected in the configuration metadata used by a container.

The `org.springframework.context.ApplicationContext` interface represents the Spring IoC container and is responsible for instantiating, configuring, and assembling the beans.
The container gets its instructions on what objects to instantiate, configure, and assemble by reading configuration metadata. The configuration metadata is represented in XML, Java annotations, or Java code. It lets you express the objects that compose your application and the rich interdependencies between those objects.

Several implementations of the `ApplicationContext` interface are supplied with Spring. In stand-alone applications, it is common to create an instance of `ClassPathXmlApplicationContext` or `FileSystemXmlApplicationContext`. While XML has been the traditional format for defining configuration metadata, you can instruct the container to use Java annotations or code as the metadata format by providing a small amount of XML configuration to declaratively enable support for these additional metadata formats.

In most application scenarios, explicit user code is not required to instantiate one or more instances of a Spring IoC container. For example, in a web application scenario, a simple eight (or so) lines of boilerplate web descriptor XML in the `web.xml` file of the application typically suffices (see Convenient ApplicationContext Instantiation for Web Applications). If you use the Spring Tools for Eclipse (an Eclipse-powered development environment), you can easily create this boilerplate configuration with a few mouse clicks or keystrokes.

The following diagram shows a high-level view of how Spring works. Your application classes are combined with configuration metadata so that, after the `ApplicationContext` is created and initialized, you have a fully configured and executable system or application.

## Configuration Metadata

As the preceding diagram shows, the Spring IoC container consumes a form of configuration metadata. This configuration metadata represents how you, as an application developer, tell the Spring container to instantiate, configure, and assemble the objects in your application.

Configuration metadata is traditionally supplied in a simple and intuitive XML format, which is what most of this chapter uses to convey key concepts and features of the Spring IoC container. XML-based metadata is not the only allowed form of configuration metadata. The Spring IoC container itself is totally decoupled from the format in which this configuration metadata is actually written. These days, many developers choose Java-based configuration for their Spring applications.

For information about using other forms of metadata with the Spring container, see:

* Annotation-based configuration: define beans using annotation-based configuration metadata.
* Java-based configuration: define beans external to your application classes by using Java rather than XML files. To use these features, see the `@Configuration`, `@Bean`, `@Import`, and `@DependsOn` annotations.

Spring configuration consists of at least one and typically more than one bean definition that the container must manage. XML-based configuration metadata configures these beans as `<bean/>` elements inside a top-level `<beans/>` element. Java configuration typically uses `@Bean`-annotated methods within a `@Configuration` class.

These bean definitions correspond to the actual objects that make up your application. Typically, you define service layer objects, persistence layer objects such as repositories or data access objects (DAOs), presentation objects such as Web controllers, infrastructure objects such as a JPA `EntityManagerFactory`, JMS queues, and so forth.
Typically, one does not configure fine-grained domain objects in the container, because it is usually the responsibility of repositories and business logic to create and load domain objects.

The following example shows the basic structure of XML-based configuration metadata:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.springframework.org/schema/beans
		https://www.springframework.org/schema/beans/spring-beans.xsd">

	<bean id="..." class="...">
		<!-- collaborators and configuration for this bean go here -->
	</bean>

	<bean id="..." class="...">
		<!-- collaborators and configuration for this bean go here -->
	</bean>

	<!-- more bean definitions go here -->

</beans>
```

The value of the `id` attribute can be used to refer to collaborating objects. The XML for referring to collaborating objects is not shown in this example. See Dependencies for more information.

## Instantiating a Container

The location path or paths supplied to an `ApplicationContext` constructor are resource strings that let the container load configuration metadata from a variety of external resources, such as the local file system, the Java `CLASSPATH`, and so on.

```
ApplicationContext context = new ClassPathXmlApplicationContext("services.xml", "daos.xml");
```

```
val context = ClassPathXmlApplicationContext("services.xml", "daos.xml")
```

The following example shows the service layer objects `(services.xml)` configuration file:

```
<!-- services -->

<bean id="petStore" class="org.springframework.samples.jpetstore.services.PetStoreServiceImpl">
	<property name="accountDao" ref="accountDao"/>
	<property name="itemDao" ref="itemDao"/>
	<!-- additional collaborators and configuration for this bean go here -->
</bean>
```

The following example shows the data access objects `daos.xml` file:

```
<bean id="accountDao" class="org.springframework.samples.jpetstore.dao.jpa.JpaAccountDao">
	<!-- additional collaborators and configuration for this bean go here -->
</bean>

<bean id="itemDao" class="org.springframework.samples.jpetstore.dao.jpa.JpaItemDao">
	<!-- additional collaborators and configuration for this bean go here -->
</bean>

<!-- more bean definitions for data access objects go here -->
```

In the preceding example, the service layer consists of the `PetStoreServiceImpl` class and two data access objects of the types `JpaAccountDao` and `JpaItemDao` (based on the JPA Object-Relational Mapping standard). The `property name` element refers to the name of the JavaBean property, and the `ref` element refers to the name of another bean definition. This linkage between `id` and `ref` elements expresses the dependency between collaborating objects. For details of configuring an object’s dependencies, see Dependencies.

### Composing XML-based Configuration Metadata

It can be useful to have bean definitions span multiple XML files. Often, each individual XML configuration file represents a logical layer or module in your architecture.

You can use the application context constructor to load bean definitions from all these XML fragments. This constructor takes multiple `Resource` locations, as was shown in the previous section. Alternatively, use one or more occurrences of the `<import/>` element to load bean definitions from another file or files. The following example shows how to do so:

```
<beans>
	<import resource="services.xml"/>
	<import resource="resources/messageSource.xml"/>
	<import resource="/resources/themeSource.xml"/>

	<bean id="bean1" class="..."/>
	<bean id="bean2" class="..."/>
</beans>
```

In the preceding example, external bean definitions are loaded from three files: `services.xml`, `messageSource.xml`, and `themeSource.xml`. All location paths are relative to the definition file doing the importing, so `services.xml` must be in the same directory or classpath location as the file doing the importing, while `messageSource.xml` and `themeSource.xml` must be in a `resources` location below the location of the importing file.
As you can see, a leading slash is ignored. However, given that these paths are relative, it is better form not to use the slash at all. The contents of the files being imported, including the top level `<beans/>` element, must be valid XML bean definitions, according to the Spring Schema.

The namespace itself provides the import directive feature. Further configuration features beyond plain bean definitions are available in a selection of XML namespaces provided by Spring — for example, the `context` and `util` namespaces.

### The Groovy Bean Definition DSL

As a further example of externalized configuration metadata, bean definitions can also be expressed in Spring’s Groovy Bean Definition DSL, as known from the Grails framework. Typically, such configuration lives in a ".groovy" file with the structure shown in the following example:

```
beans {
	dataSource(BasicDataSource) {
		driverClassName = "org.hsqldb.jdbcDriver"
		url = "jdbc:hsqldb:mem:grailsDB"
		username = "sa"
		password = ""
		settings = [mynew:"setting"]
	}
	sessionFactory(SessionFactory) {
		dataSource = dataSource
	}
	myService(MyService) {
		nestedBean = { AnotherBean bean ->
			dataSource = dataSource
		}
	}
}
```

This configuration style is largely equivalent to XML bean definitions and even supports Spring’s XML configuration namespaces. It also allows for importing XML bean definition files through an `importBeans` directive.

## Using the Container

The `ApplicationContext` is the interface for an advanced factory capable of maintaining a registry of different beans and their dependencies. By using the method `T getBean(String name, Class<T> requiredType)`, you can retrieve instances of your beans.

The `ApplicationContext` lets you read bean definitions and access them, as the following example shows:

```
// create and configure beans
ApplicationContext context = new ClassPathXmlApplicationContext("services.xml", "daos.xml");

// retrieve configured instance
PetStoreService service = context.getBean("petStore", PetStoreService.class);

// use configured instance
List<String> userList = service.getUsernameList();
```

```
// create and configure beans
val context = ClassPathXmlApplicationContext("services.xml", "daos.xml")

// retrieve configured instance
val service = context.getBean<PetStoreService>("petStore")

// use configured instance
var userList = service.getUsernameList()
```

With Groovy configuration, bootstrapping looks very similar. It has a different, Groovy-aware context implementation class (`GenericGroovyApplicationContext`, which also understands XML bean definitions).
The following example shows Groovy configuration:

```
ApplicationContext context = new GenericGroovyApplicationContext("services.groovy", "daos.groovy");
```

```
val context = GenericGroovyApplicationContext("services.groovy", "daos.groovy")
```

The most flexible variant is in combination with reader delegates — for example, with `XmlBeanDefinitionReader` for XML files, as the following example shows:

```
GenericApplicationContext context = new GenericApplicationContext();
new XmlBeanDefinitionReader(context).loadBeanDefinitions("services.xml", "daos.xml");
context.refresh();
```

```
val context = GenericApplicationContext()
XmlBeanDefinitionReader(context).loadBeanDefinitions("services.xml", "daos.xml")
context.refresh()
```

You can also use `GroovyBeanDefinitionReader` for Groovy files:

```
GenericApplicationContext context = new GenericApplicationContext();
new GroovyBeanDefinitionReader(context).loadBeanDefinitions("services.groovy", "daos.groovy");
context.refresh();
```

```
val context = GenericApplicationContext()
GroovyBeanDefinitionReader(context).loadBeanDefinitions("services.groovy", "daos.groovy")
context.refresh()
```

You can mix and match such reader delegates on the same `ApplicationContext`, reading bean definitions from diverse configuration sources.

You can then use `getBean` to retrieve instances of your beans. The `ApplicationContext` interface has a few other methods for retrieving beans, but, ideally, your application code should never use them. Indeed, your application code should have no calls to the `getBean()` method at all and thus have no dependency on Spring APIs at all. For example, Spring’s integration with web frameworks provides dependency injection for various web framework components such as controllers and JSF-managed beans, letting you declare a dependency on a specific bean through metadata (such as an autowiring annotation).

A Spring IoC container manages one or more beans. These beans are created with the configuration metadata that you supply to the container (for example, in the form of XML `<bean/>` definitions).

Within the container itself, these bean definitions are represented as `BeanDefinition` objects, which contain (among other information) the following metadata:

* A package-qualified class name: typically, the actual implementation class of the bean being defined.
* Bean behavioral configuration elements, which state how the bean should behave in the container (scope, lifecycle callbacks, and so forth).
* References to other beans that are needed for the bean to do its work. These references are also called collaborators or dependencies.
* Other configuration settings to set in the newly created object — for example, the size limit of the pool or the number of connections to use in a bean that manages a connection pool.

This metadata translates to a set of properties that make up each bean definition. The following table describes these properties:

| Property | Explained in… |
| --- | --- |
| Class | Instantiating Beans |
| Name | Naming Beans |
| Scope | Bean Scopes |
| Constructor arguments | Dependency Injection |
| Properties | Dependency Injection |
| Autowiring mode | Autowiring Collaborators |
| Lazy initialization mode | Lazy-initialized Beans |
| Initialization method | Initialization Callbacks |
| Destruction method | Destruction Callbacks |

In addition to bean definitions that contain information on how to create a specific bean, the `ApplicationContext` implementations also permit the registration of existing objects that are created outside the container (by users). This is done by accessing the ApplicationContext’s `BeanFactory` through the `getBeanFactory()` method, which returns the `DefaultListableBeanFactory` implementation. `DefaultListableBeanFactory` supports this registration through the `registerSingleton(..)` and `registerBeanDefinition(..)` methods. However, typical applications work solely with beans defined through regular bean definition metadata.
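As a minimal sketch of registering an externally created object, the following example uses `registerSingleton(..)`; the bean name `systemClock` and the use of `java.time.Clock` are illustrative choices, not part of the original text:

```
import java.time.Clock;

import org.springframework.context.support.GenericApplicationContext;

public class ExternalRegistrationExample {

	public static void main(String[] args) {
		GenericApplicationContext context = new GenericApplicationContext();
		// register an object that was created outside the container
		context.getBeanFactory().registerSingleton("systemClock", Clock.systemUTC());
		context.refresh();

		// the externally created object is now retrievable like any other bean
		Clock clock = context.getBean("systemClock", Clock.class);
		System.out.println(clock.instant());
	}
}
```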
Every bean has one or more identifiers. These identifiers must be unique within the container that hosts the bean. A bean usually has only one identifier. However, if it requires more than one, the extra ones can be considered aliases.

In XML-based configuration metadata, you use the `id` attribute, the `name` attribute, or both to specify bean identifiers. The `id` attribute lets you specify exactly one `id`. Conventionally, these names are alphanumeric ('myBean', 'someService', etc.), but they can contain special characters as well. If you want to introduce other aliases for the bean, you can also specify them in the `name` attribute, separated by a comma (`,`), semicolon (`;`), or white space. Although the `id` attribute is defined as an `xsd:string` type, bean `id` uniqueness is enforced by the container, though not by XML parsers.

You are not required to supply a `name` or an `id` for a bean. If you do not supply a `name` or `id` explicitly, the container generates a unique name for that bean. However, if you want to refer to that bean by name, through the use of the `ref` element or a Service Locator style lookup, you must provide a name. Motivations for not supplying a name are related to using inner beans and autowiring collaborators.

With component scanning in the classpath, Spring generates bean names for unnamed components, following the rules described earlier: essentially, taking the simple class name and turning its initial character to lower-case. However, in the (unusual) special case when there is more than one character and both the first and second characters are upper case, the original casing gets preserved. These are the same rules as defined by `java.beans.Introspector.decapitalize` (which Spring uses here).

### Aliasing a Bean outside the Bean Definition

In a bean definition itself, you can supply more than one name for the bean, by using a combination of up to one name specified by the `id` attribute and any number of other names in the `name` attribute. These names can be equivalent aliases to the same bean and are useful for some situations, such as letting each component in an application refer to a common dependency by using a bean name that is specific to that component itself.

Specifying all aliases where the bean is actually defined is not always adequate, however. It is sometimes desirable to introduce an alias for a bean that is defined elsewhere. This is commonly the case in large systems where configuration is split amongst each subsystem, with each subsystem having its own set of object definitions. In XML-based configuration metadata, you can use the `<alias/>` element to accomplish this. The following example shows how to do so:

```
<alias name="fromName" alias="toName"/>
```

In this case, a bean (in the same container) named `fromName` may also, after the use of this alias definition, be referred to as `toName`.

For example, the configuration metadata for subsystem A may refer to a DataSource by the name of `subsystemA-dataSource`. The configuration metadata for subsystem B may refer to a DataSource by the name of `subsystemB-dataSource`. When composing the main application that uses both these subsystems, the main application refers to the DataSource by the name of `myApp-dataSource`.
To have all three names refer to the same object, you can add the following alias definitions to the configuration metadata:

```
<alias name="myApp-dataSource" alias="subsystemA-dataSource"/>
<alias name="myApp-dataSource" alias="subsystemB-dataSource"/>
```

Now each component and the main application can refer to the dataSource through a name that is unique and guaranteed not to clash with any other definition (effectively creating a namespace), yet they refer to the same bean.

## Instantiating Beans

A bean definition is essentially a recipe for creating one or more objects. The container looks at the recipe for a named bean when asked and uses the configuration metadata encapsulated by that bean definition to create (or acquire) an actual object.

If you use XML-based configuration metadata, you specify the type (or class) of object that is to be instantiated in the `class` attribute of the `<bean/>` element. This `class` attribute (which, internally, is a `Class` property on a `BeanDefinition` instance) is usually mandatory. (For exceptions, see Instantiation by Using an Instance Factory Method and Bean Definition Inheritance.) You can use the `Class` property in one of two ways:

* Typically, to specify the bean class to be constructed in the case where the container itself directly creates the bean by calling its constructor reflectively, somewhat equivalent to Java code with the `new` operator.
* To specify the actual class containing the `static` factory method that is invoked to create the object, in the less common case where the container invokes a `static` factory method on a class to create the bean. The object type returned from the invocation of the `static` factory method may be the same class or another class entirely.

### Instantiation with a Constructor

When you create a bean by the constructor approach, all normal classes are usable by and compatible with Spring. That is, the class being developed does not need to implement any specific interfaces or to be coded in a specific fashion. Simply specifying the bean class should suffice. However, depending on what type of IoC you use for that specific bean, you may need a default (empty) constructor.

The Spring IoC container can manage virtually any class you want it to manage. It is not limited to managing true JavaBeans. Most Spring users prefer actual JavaBeans with only a default (no-argument) constructor and appropriate setters and getters modeled after the properties in the container. You can also have more exotic non-bean-style classes in your container. If, for example, you need to use a legacy connection pool that absolutely does not adhere to the JavaBean specification, Spring can manage it as well.

With XML-based configuration metadata you can specify your bean class as follows:

```
<bean id="exampleBean" class="examples.ExampleBean"/>

<bean name="anotherExample" class="examples.ExampleBeanTwo"/>
```

For details about the mechanism for supplying arguments to the constructor (if required) and setting object instance properties after the object is constructed, see Injecting Dependencies.

### Instantiation with a Static Factory Method

When defining a bean that you create with a static factory method, use the `class` attribute to specify the class that contains the `static` factory method and an attribute named `factory-method` to specify the name of the factory method itself.
You should be able to call this method (with optional arguments, as described later) and return a live object, which subsequently is treated as if it had been created through a constructor. One use for such a bean definition is to call `static` factories in legacy code.

The following bean definition specifies that the bean will be created by calling a factory method. The definition does not specify the type (class) of the returned object, but rather the class containing the factory method. In this example, the `createInstance()` method must be a `static` method. The following example shows how to specify a factory method:

```
<bean id="clientService" class="examples.ClientService" factory-method="createInstance"/>
```

The following example shows a class that would work with the preceding bean definition:

```
public class ClientService {
	private static ClientService clientService = new ClientService();
	private ClientService() {}

	public static ClientService createInstance() {
		return clientService;
	}
}
```

```
class ClientService private constructor() {
	companion object {
		private val clientService = ClientService()
		@JvmStatic
		fun createInstance() = clientService
	}
}
```

For details about the mechanism for supplying (optional) arguments to the factory method and setting object instance properties after the object is returned from the factory, see Dependencies and Configuration in Detail.

### Instantiation by Using an Instance Factory Method

Similar to instantiation through a static factory method, instantiation with an instance factory method invokes a non-static method of an existing bean from the container to create a new bean. To use this mechanism, leave the `class` attribute empty and, in the `factory-bean` attribute, specify the name of a bean in the current (or parent or ancestor) container that contains the instance method that is to be invoked to create the object. Set the name of the factory method itself with the `factory-method` attribute.
The following example shows how to configure such a bean:

```
<!-- the factory bean, which contains a method called createClientServiceInstance() -->
<bean id="serviceLocator" class="examples.DefaultServiceLocator">
	<!-- inject any dependencies required by this locator bean -->
</bean>

<!-- the bean to be created via the factory bean -->
<bean id="clientService"
	factory-bean="serviceLocator"
	factory-method="createClientServiceInstance"/>
```

```
class DefaultServiceLocator {

	companion object {
		private val clientService = ClientServiceImpl()
	}

	fun createClientServiceInstance(): ClientService {
		return clientService
	}
}
```

One factory class can also hold more than one factory method, as the following example shows:

```
<bean id="serviceLocator" class="examples.DefaultServiceLocator">
	<!-- inject any dependencies required by this locator bean -->
</bean>

<bean id="clientService"
	factory-bean="serviceLocator"
	factory-method="createClientServiceInstance"/>

<bean id="accountService"
	factory-bean="serviceLocator"
	factory-method="createAccountServiceInstance"/>
```

```
public class DefaultServiceLocator {

	private static ClientService clientService = new ClientServiceImpl();

	private static AccountService accountService = new AccountServiceImpl();

	public ClientService createClientServiceInstance() {
		return clientService;
	}

	public AccountService createAccountServiceInstance() {
		return accountService;
	}
}
```

```
class DefaultServiceLocator {

	companion object {
		private val clientService = ClientServiceImpl()
		private val accountService = AccountServiceImpl()
	}

	fun createClientServiceInstance(): ClientService {
		return clientService
	}

	fun createAccountServiceInstance(): AccountService {
		return accountService
	}
}
```

This approach shows that the factory bean itself can be managed and configured through dependency injection (DI). See Dependencies and Configuration in Detail.

In Spring documentation, "factory bean" refers to a bean that is configured in the Spring container and that creates objects through an instance or static factory method. By contrast, `FactoryBean` (notice the capitalization) refers to a Spring-specific `FactoryBean` implementation class.

### Determining a Bean’s Runtime Type

The runtime type of a specific bean is non-trivial to determine. A specified class in the bean metadata definition is just an initial class reference, potentially combined with a declared factory method or being a `FactoryBean` class, which may lead to a different runtime type of the bean, or not being set at all in the case of an instance-level factory method (which is resolved via the specified `factory-bean` name instead). Additionally, AOP proxying may wrap a bean instance with an interface-based proxy with limited exposure of the target bean’s actual type (just its implemented interfaces).

The recommended way to find out about the actual runtime type of a particular bean is a `BeanFactory.getType` call for the specified bean name. This takes all of the above cases into account and returns the type of object that a `BeanFactory.getBean` call is going to return for the same bean name.

A typical enterprise application does not consist of a single object (or bean in the Spring parlance). Even the simplest application has a few objects that work together to present what the end-user sees as a coherent application. This next section explains how you go from defining a number of bean definitions that stand alone to a fully realized application where objects collaborate to achieve a goal.

Code is cleaner with the DI principle, and decoupling is more effective when objects are provided with their dependencies. The object does not look up its dependencies and does not know the location or class of the dependencies.
As a result, your classes become easier to test, particularly when the dependencies are on interfaces or abstract base classes, which allow for stub or mock implementations to be used in unit tests.

DI exists in two major variants: Constructor-based dependency injection and Setter-based dependency injection.

## Constructor-based Dependency Injection

Constructor-based DI is accomplished by the container invoking a constructor with a number of arguments, each representing a dependency. Calling a `static` factory method with specific arguments to construct the bean is nearly equivalent, and this discussion treats arguments to a constructor and to a `static` factory method similarly. The following example shows a class that can only be dependency-injected with constructor injection:

```
public class SimpleMovieLister {

	// the SimpleMovieLister has a dependency on a MovieFinder
	private final MovieFinder movieFinder;

	// a constructor so that the Spring container can inject a MovieFinder
	public SimpleMovieLister(MovieFinder movieFinder) {
		this.movieFinder = movieFinder;
	}

	// business logic that actually uses the injected MovieFinder is omitted...
}
```

```
// a constructor so that the Spring container can inject a MovieFinder
class SimpleMovieLister(private val movieFinder: MovieFinder) {
	// business logic that actually uses the injected MovieFinder is omitted...
}
```

Notice that there is nothing special about this class. It is a POJO that has no dependencies on container specific interfaces, base classes, or annotations.

### Constructor Argument Resolution

Constructor argument resolution matching occurs by using the argument’s type. If no potential ambiguity exists in the constructor arguments of a bean definition, the order in which the constructor arguments are defined in a bean definition is the order in which those arguments are supplied to the appropriate constructor when the bean is being instantiated. Consider the following class:

```
public class ThingOne {

	public ThingOne(ThingTwo thingTwo, ThingThree thingThree) {
		// ...
	}
}
```

```
class ThingOne(thingTwo: ThingTwo, thingThree: ThingThree)
```

Assuming that the `ThingTwo` and `ThingThree` classes are not related by inheritance, no potential ambiguity exists. Thus, the following configuration works fine, and you do not need to specify the constructor argument indexes or types explicitly in the `<constructor-arg/>` element.

```
<beans>
	<bean id="beanOne" class="x.y.ThingOne">
		<constructor-arg ref="beanTwo"/>
		<constructor-arg ref="beanThree"/>
	</bean>

	<bean id="beanTwo" class="x.y.ThingTwo"/>

	<bean id="beanThree" class="x.y.ThingThree"/>
</beans>
```

When another bean is referenced, the type is known, and matching can occur (as was the case with the preceding example). When a simple type is used, such as `<value>true</value>`, Spring cannot determine the type of the value, and so cannot match by type without help.
Consider the following class:

```
public class ExampleBean {

	// Number of years to calculate the Ultimate Answer
	private final int years;

	// The Answer to Life, the Universe, and Everything
	private final String ultimateAnswer;

	public ExampleBean(int years, String ultimateAnswer) {
		this.years = years;
		this.ultimateAnswer = ultimateAnswer;
	}
}
```

```
class ExampleBean(
	private val years: Int, // Number of years to calculate the Ultimate Answer
	private val ultimateAnswer: String // The Answer to Life, the Universe, and Everything
)
```

In the preceding scenario, the container can use type matching with simple types if you explicitly specify the type of the constructor argument by using the `type` attribute, as the following example shows:

```
<bean id="exampleBean" class="examples.ExampleBean">
	<constructor-arg type="int" value="7500000"/>
	<constructor-arg type="java.lang.String" value="42"/>
</bean>
```

You can use the `index` attribute to specify explicitly the index of constructor arguments, as the following example shows:

```
<bean id="exampleBean" class="examples.ExampleBean">
	<constructor-arg index="0" value="7500000"/>
	<constructor-arg index="1" value="42"/>
</bean>
```

In addition to resolving the ambiguity of multiple simple values, specifying an index resolves ambiguity where a constructor has two arguments of the same type. The index is 0-based.

You can also use the constructor parameter name for value disambiguation, as the following example shows:

```
<bean id="exampleBean" class="examples.ExampleBean">
	<constructor-arg name="years" value="7500000"/>
	<constructor-arg name="ultimateAnswer" value="42"/>
</bean>
```

Keep in mind that, to make this work out of the box, your code must be compiled with the debug flag enabled so that Spring can look up the parameter name from the constructor. If you cannot or do not want to compile your code with the debug flag, you can use the @ConstructorProperties JDK annotation to explicitly name your constructor arguments. The sample class would then have to look as follows:

```
public class ExampleBean {

	// fields omitted

	@ConstructorProperties({"years", "ultimateAnswer"})
	public ExampleBean(int years, String ultimateAnswer) {
		this.years = years;
		this.ultimateAnswer = ultimateAnswer;
	}
}
```

```
class ExampleBean @ConstructorProperties("years", "ultimateAnswer") constructor(val years: Int, val ultimateAnswer: String)
```

## Setter-based Dependency Injection

Setter-based DI is accomplished by the container calling setter methods on your beans after invoking a no-argument constructor or a no-argument `static` factory method to instantiate your bean.

The following example shows a class that can only be dependency-injected by using pure setter injection. This class is conventional Java. It is a POJO that has no dependencies on container specific interfaces, base classes, or annotations.

```
public class SimpleMovieLister {

	// the SimpleMovieLister has a dependency on the MovieFinder
	private MovieFinder movieFinder;

	// a setter method so that the Spring container can inject a MovieFinder
	public void setMovieFinder(MovieFinder movieFinder) {
		this.movieFinder = movieFinder;
	}

	// business logic that actually uses the injected MovieFinder is omitted...
}
```

```
class SimpleMovieLister {

	// a late-initialized property so that the Spring container can inject a MovieFinder
	lateinit var movieFinder: MovieFinder

	// business logic that actually uses the injected MovieFinder is omitted...
}
```

The `ApplicationContext` supports constructor-based and setter-based DI for the beans it manages. It also supports setter-based DI after some dependencies have already been injected through the constructor approach. You configure the dependencies in the form of a `BeanDefinition`, which you use in conjunction with `PropertyEditor` instances to convert properties from one format to another. However, most Spring users do not work with these classes directly (that is, programmatically) but rather with XML `bean` definitions, annotated components (that is, classes annotated with `@Component`, `@Controller`, and so forth), or `@Bean` methods in Java-based `@Configuration` classes.
These sources are then converted internally into instances of `BeanDefinition` and used to load an entire Spring IoC container instance.

## Dependency Resolution Process

The container performs bean dependency resolution as follows:

* The `ApplicationContext` is created and initialized with configuration metadata that describes all the beans. Configuration metadata can be specified by XML, Java code, or annotations.
* For each bean, its dependencies are expressed in the form of properties, constructor arguments, or arguments to the static-factory method (if you use that instead of a normal constructor). These dependencies are provided to the bean when the bean is actually created.
* Each property or constructor argument is an actual definition of the value to set, or a reference to another bean in the container.
* Each property or constructor argument that is a value is converted from its specified format to the actual type of that property or constructor argument. By default, Spring can convert a value supplied in string format to all built-in types, such as `int`, `long`, `String`, `boolean`, and so forth.

The Spring container validates the configuration of each bean as the container is created. However, the bean properties themselves are not set until the bean is actually created. Beans that are singleton-scoped and set to be pre-instantiated (the default) are created when the container is created. Scopes are defined in Bean Scopes. Otherwise, the bean is created only when it is requested. Creation of a bean potentially causes a graph of beans to be created, as the bean’s dependencies and its dependencies' dependencies (and so on) are created and assigned. Note that resolution mismatches among those dependencies may show up late — that is, on first creation of the affected bean.

You can generally trust Spring to do the right thing. It detects configuration problems, such as references to non-existent beans and circular dependencies, at container load-time. Spring sets properties and resolves dependencies as late as possible, when the bean is actually created. This means that a Spring container that has loaded correctly can later generate an exception when you request an object if there is a problem creating that object or one of its dependencies — for example, the bean throws an exception as a result of a missing or invalid property. This potentially delayed visibility of some configuration issues is why `ApplicationContext` implementations by default pre-instantiate singleton beans. At the cost of some upfront time and memory to create these beans before they are actually needed, you discover configuration issues when the `ApplicationContext` is created, not later. You can still override this default behavior so that singleton beans initialize lazily, rather than being eagerly pre-instantiated.

If no circular dependencies exist, when one or more collaborating beans are being injected into a dependent bean, each collaborating bean is totally configured prior to being injected into the dependent bean. This means that, if bean A has a dependency on bean B, the Spring IoC container completely configures bean B prior to invoking the setter method on bean A. In other words, the bean is instantiated (if it is not a pre-instantiated singleton), its dependencies are set, and the relevant lifecycle methods (such as a configured init method or the InitializingBean callback method) are invoked.
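To make the fail-fast behavior described above concrete, the following sketch reuses the `services.xml` and `daos.xml` files from earlier; the exception type caught here is indicative of what current Spring versions throw for creation problems, but treat the details as an assumption rather than a contract:

```
import org.springframework.beans.factory.BeanCreationException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class StartupValidation {

	public static void main(String[] args) {
		try {
			// All eagerly pre-instantiated singletons are created here, so a
			// missing collaborator or an invalid property fails at startup...
			ApplicationContext context =
					new ClassPathXmlApplicationContext("services.xml", "daos.xml");
		}
		catch (BeanCreationException ex) {
			// ...rather than surfacing later, on some getBean() call.
			System.err.println("Configuration problem at startup: " + ex.getMessage());
		}
	}
}
```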
## Examples of Dependency Injection

The following example uses XML-based configuration metadata for setter-based DI. A small part of a Spring XML configuration file specifies some bean definitions as follows:

```
<bean id="exampleBean" class="examples.ExampleBean">
	<!-- setter injection using the nested ref element -->
	<property name="beanOne">
		<ref bean="anotherExampleBean"/>
	</property>

	<!-- setter injection using the neater ref attribute -->
	<property name="beanTwo" ref="yetAnotherBean"/>
	<property name="integerProperty" value="1"/>
</bean>
```

```
public class ExampleBean {

	private AnotherBean beanOne;

	private YetAnotherBean beanTwo;

	private int i;

	public void setBeanOne(AnotherBean beanOne) {
		this.beanOne = beanOne;
	}

	public void setBeanTwo(YetAnotherBean beanTwo) {
		this.beanTwo = beanTwo;
	}

	public void setIntegerProperty(int i) {
		this.i = i;
	}
}
```

```
class ExampleBean {
	lateinit var beanOne: AnotherBean
	lateinit var beanTwo: YetAnotherBean
	var i: Int = 0
}
```

In the preceding example, setters are declared to match against the properties specified in the XML file. The following example uses constructor-based DI:

```
<bean id="exampleBean" class="examples.ExampleBean">
	<!-- constructor injection using the nested ref element -->
	<constructor-arg>
		<ref bean="anotherExampleBean"/>
	</constructor-arg>

	<!-- constructor injection using the neater ref attribute -->
	<constructor-arg ref="yetAnotherBean"/>

	<constructor-arg type="int" value="1"/>
</bean>
```

```
public class ExampleBean {

	private AnotherBean beanOne;

	private YetAnotherBean beanTwo;

	private int i;

	public ExampleBean(
		AnotherBean anotherBean, YetAnotherBean yetAnotherBean, int i) {
		this.beanOne = anotherBean;
		this.beanTwo = yetAnotherBean;
		this.i = i;
	}
}
```

```
class ExampleBean(
	private val beanOne: AnotherBean,
	private val beanTwo: YetAnotherBean,
	private val i: Int)
```

The constructor arguments specified in the bean definition are used as arguments to the constructor of the `ExampleBean`.

Now consider a variant of this example, where, instead of using a constructor, Spring is told to call a `static` factory method to return an instance of the object:

```
<bean id="exampleBean" class="examples.ExampleBean" factory-method="createInstance">
	<constructor-arg ref="anotherExampleBean"/>
	<constructor-arg ref="yetAnotherBean"/>
	<constructor-arg value="1"/>
</bean>
```

```
public class ExampleBean {

	// a private constructor
	private ExampleBean(...) {
		...
	}

	// a static factory method; the arguments to this method can be
	// considered the dependencies of the bean that is returned,
	// regardless of how those arguments are actually used.
	public static ExampleBean createInstance(
		AnotherBean anotherBean, YetAnotherBean yetAnotherBean, int i) {

		ExampleBean eb = new ExampleBean(...);
		// some other operations...
		return eb;
	}
}
```

```
class ExampleBean private constructor() {
	companion object {
		// a static factory method; the arguments to this method can be
		// considered the dependencies of the bean that is returned,
		// regardless of how those arguments are actually used.
		@JvmStatic
		fun createInstance(anotherBean: AnotherBean, yetAnotherBean: YetAnotherBean, i: Int): ExampleBean {
			val eb = ExampleBean(...)
			// some other operations...
			return eb
		}
	}
}
```

Arguments to the `static` factory method are supplied by `<constructor-arg/>` elements, exactly the same as if a constructor had actually been used. The type of the class being returned by the factory method does not have to be of the same type as the class that contains the `static` factory method (although, in this example, it is).
An instance (non-static) factory method can be used in an essentially identical fashion (aside from the use of the `factory-bean` attribute instead of the `class` attribute), so we do not discuss those details here.

As mentioned in the previous section, you can define bean properties and constructor arguments as references to other managed beans (collaborators) or as values defined inline. Spring’s XML-based configuration metadata supports sub-element types within its `<property/>` and `<constructor-arg/>` elements for this purpose.

## Straight Values (Primitives, Strings, and so on)

The `value` attribute of the `<property/>` element specifies a property or constructor argument as a human-readable string representation. Spring’s conversion service is used to convert these values from a `String` to the actual type of the property or argument. The following example shows various values being set:

```
<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
	<!-- results in a setDriverClassName(String) call -->
	<property name="driverClassName" value="com.mysql.jdbc.Driver"/>
	<property name="url" value="jdbc:mysql://localhost:3306/mydb"/>
	<property name="username" value="root"/>
	<property name="password" value="misterkaoli"/>
</bean>
```

The following example uses the p-namespace for even more succinct XML configuration:

```
<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource"
	destroy-method="close"
	p:driverClassName="com.mysql.jdbc.Driver"
	p:url="jdbc:mysql://localhost:3306/mydb"
	p:username="root"
	p:password="misterkaoli"/>
```

The preceding XML is more succinct. However, typos are discovered at runtime rather than design time, unless you use an IDE (such as IntelliJ IDEA or the Spring Tools for Eclipse) that supports automatic property completion when you create bean definitions. Such IDE assistance is highly recommended.

You can also configure a `java.util.Properties` instance, as follows:

```
<bean id="mappings"
	class="org.springframework.context.support.PropertySourcesPlaceholderConfigurer">

	<!-- typed as a java.util.Properties -->
	<property name="properties">
		<value>
			jdbc.driver.className=com.mysql.jdbc.Driver
			jdbc.url=jdbc:mysql://localhost:3306/mydb
		</value>
	</property>
</bean>
```

The Spring container converts the text inside the `<value/>` element into a `java.util.Properties` instance by using the JavaBeans `PropertyEditor` mechanism. This is a nice shortcut, and is one of a few places where the Spring team does favor the use of the nested `<value/>` element over the `value` attribute style.

## The `idref` element

The `idref` element is simply an error-proof way to pass the `id` (a string value - not a reference) of another bean in the container to a `<constructor-arg/>` or `<property/>` element. The following example shows how to use it:

```
<bean id="theTargetBean" class="..."/>

<bean id="theClientBean" class="...">
	<property name="targetName">
		<idref bean="theTargetBean"/>
	</property>
</bean>
```

The preceding bean definition snippet is exactly equivalent (at runtime) to the following snippet:

```
<bean id="theTargetBean" class="..." />

<bean id="client" class="...">
	<property name="targetName" value="theTargetBean"/>
</bean>
```

The first form is preferable to the second, because using the `idref` tag lets the container validate at deployment time that the referenced, named bean actually exists. In the second variation, no validation is performed on the value that is passed to the `targetName` property of the `client` bean.
Typos are only discovered (with most likely fatal results) when the `client` bean is actually instantiated. If the `client` bean is a prototype bean, this typo and the resulting exception may only be discovered long after the container is deployed.

A common place (at least in versions earlier than Spring 2.0) where the `<idref/>` element brings value is in the configuration of AOP interceptors in a `ProxyFactoryBean` bean definition. Using `<idref/>` elements when you specify the interceptor names prevents you from misspelling an interceptor ID.

## References to Other Beans (Collaborators)

The `ref` element is the final element inside a `<constructor-arg/>` or `<property/>` definition element. Here, you set the value of the specified property of a bean to be a reference to another bean (a collaborator) managed by the container. The referenced bean is a dependency of the bean whose property is to be set, and it is initialized on demand as needed before the property is set. (If the collaborator is a singleton bean, it may already be initialized by the container.) All references are ultimately a reference to another object. Scoping and validation depend on whether you specify the ID or name of the other object through the `bean` or `parent` attribute.

Specifying the target bean through the `bean` attribute of the `<ref/>` tag is the most general form and allows creation of a reference to any bean in the same container or parent container, regardless of whether it is in the same XML file. The value of the `bean` attribute may be the same as the `id` attribute of the target bean or be the same as one of the values in the `name` attribute of the target bean. The following example shows how to use a `ref` element:

```
<ref bean="someBean"/>
```

Specifying the target bean through the `parent` attribute creates a reference to a bean that is in a parent container of the current container. The value of the `parent` attribute may be the same as either the `id` attribute of the target bean or one of the values in the `name` attribute of the target bean. The target bean must be in a parent container of the current one. You should use this bean reference variant mainly when you have a hierarchy of containers and you want to wrap an existing bean in a parent container with a proxy that has the same name as the parent bean. The following pair of listings shows how to use the `parent` attribute:

```
<!-- in the parent context -->
<bean id="accountService" class="com.something.SimpleAccountService">
	<!-- insert dependencies as required here -->
</bean>
```

```
<!-- in the child (descendant) context -->
<bean id="accountService" <!-- bean name is the same as the parent bean -->
	class="org.springframework.aop.framework.ProxyFactoryBean">
	<property name="target">
		<ref parent="accountService"/> <!-- notice how we refer to the parent bean -->
	</property>
	<!-- insert other configuration and dependencies as required here -->
</bean>
```

## Inner Beans

A `<bean/>` element inside the `<property/>` or `<constructor-arg/>` elements defines an inner bean, as the following example shows:

```
<bean id="outer" class="...">
	<!-- instead of using a reference to a target bean, simply define the target bean inline -->
	<property name="target">
		<bean class="com.example.Person"> <!-- this is the inner bean -->
			<property name="name" value="<NAME>"/>
			<property name="age" value="25"/>
		</bean>
	</property>
</bean>
```

An inner bean definition does not require a defined ID or name.
If specified, the container does not use such a value as an identifier. The container also ignores the `scope` flag on creation, because inner beans are always anonymous and are always created with the outer bean. It is not possible to access inner beans independently or to inject them into collaborating beans other than into the enclosing bean.

As a corner case, it is possible to receive destruction callbacks from a custom scope — for example, for a request-scoped inner bean contained within a singleton bean. The creation of the inner bean instance is tied to its containing bean, but destruction callbacks let it participate in the request scope’s lifecycle. This is not a common scenario. Inner beans typically simply share their containing bean’s scope.

## Collections

The `<list/>`, `<set/>`, `<map/>`, and `<props/>` elements set the properties and arguments of the Java `Collection` types `List`, `Set`, `Map`, and `Properties`, respectively. The following example shows how to use them:

```
<bean id="moreComplexObject" class="example.ComplexObject">
	<!-- results in a setAdminEmails(java.util.Properties) call -->
	<property name="adminEmails">
		<props>
			<prop key="administrator">[email protected]</prop>
			<prop key="support">[email protected]</prop>
			<prop key="development">[email protected]</prop>
		</props>
	</property>
	<!-- results in a setSomeList(java.util.List) call -->
	<property name="someList">
		<list>
			<value>a list element followed by a reference</value>
			<ref bean="myDataSource" />
		</list>
	</property>
	<!-- results in a setSomeMap(java.util.Map) call -->
	<property name="someMap">
		<map>
			<entry key="an entry" value="just some string"/>
			<entry key="a ref" value-ref="myDataSource"/>
		</map>
	</property>
	<!-- results in a setSomeSet(java.util.Set) call -->
	<property name="someSet">
		<set>
			<value>just some string</value>
			<ref bean="myDataSource" />
		</set>
	</property>
</bean>
```

The value of a map key or value, or a set value, can also be any of the following elements:

```
bean | ref | idref | list | set | map | props | value | null
```

### Collection Merging

The Spring container also supports merging collections. An application developer can define a parent `<list/>`, `<map/>`, `<set/>` or `<props/>` element and have child `<list/>`, `<map/>`, `<set/>` or `<props/>` elements inherit and override values from the parent collection. That is, the child collection’s values are the result of merging the elements of the parent and child collections, with the child’s collection elements overriding values specified in the parent collection.

This section on merging discusses the parent-child bean mechanism. Readers unfamiliar with parent and child bean definitions may wish to read the relevant section before continuing.

The following example demonstrates collection merging:

```
<beans>
	<bean id="parent" abstract="true" class="example.ComplexObject">
		<property name="adminEmails">
			<props>
				<prop key="administrator">[email protected]</prop>
				<prop key="support">[email protected]</prop>
			</props>
		</property>
	</bean>
	<bean id="child" parent="parent">
		<property name="adminEmails">
			<!-- the merge is specified on the child collection definition -->
			<props merge="true">
				<prop key="sales">[email protected]</prop>
				<prop key="support">[email protected]</prop>
			</props>
		</property>
	</bean>
</beans>
```

Notice the use of the `merge=true` attribute on the `<props/>` element of the `adminEmails` property of the `child` bean definition.
When the `child` bean is resolved and instantiated by the container, the resulting instance has an `adminEmails` `Properties` collection that contains the result of merging the child’s `adminEmails` collection with the parent’s `adminEmails` collection: the parent’s `administrator` entry together with the child’s `sales` and `support` entries.

The child `Properties` collection’s value set inherits all property elements from the parent `<props/>`, and the child’s value for the `support` value overrides the value in the parent collection.

This merging behavior applies similarly to the `<list/>`, `<map/>`, and `<set/>` collection types. In the specific case of the `<list/>` element, the semantics associated with the `List` collection type (that is, the notion of an `ordered` collection of values) is maintained. The parent’s values precede all of the child list’s values. In the case of the `Map`, `Set`, and `Properties` collection types, no ordering exists. Hence, no ordering semantics are in effect for the collection types that underlie the associated `Map`, `Set`, and `Properties` implementation types that the container uses internally.

### Limitations of Collection Merging

You cannot merge different collection types (such as a `Map` and a `List`). If you do attempt to do so, an appropriate `Exception` is thrown. The `merge` attribute must be specified on the lower, inherited, child definition. Specifying the `merge` attribute on a parent collection definition is redundant and does not result in the desired merging.

### Strongly-typed collection

Thanks to Java’s support for generic types, you can use strongly typed collections. That is, it is possible to declare a `Collection` type such that it can only contain (for example) `String` elements. If you use Spring to dependency-inject a strongly-typed `Collection` into a bean, you can take advantage of Spring’s type-conversion support such that the elements of your strongly-typed `Collection` instances are converted to the appropriate type prior to being added to the `Collection`. The following Java class and bean definition show how to do so:

```
public class SomeClass {

	private Map<String, Float> accounts;

	public void setAccounts(Map<String, Float> accounts) {
		this.accounts = accounts;
	}
}
```

```
class SomeClass {
	lateinit var accounts: Map<String, Float>
}
```

```
<beans>
	<bean id="something" class="x.y.SomeClass">
		<property name="accounts">
			<map>
				<entry key="one" value="9.99"/>
				<entry key="two" value="2.75"/>
				<entry key="six" value="3.99"/>
			</map>
		</property>
	</bean>
</beans>
```

When the `accounts` property of the `something` bean is prepared for injection, the generics information about the element type of the strongly-typed `Map<String, Float>` is available by reflection. Thus, Spring’s type conversion infrastructure recognizes the various value elements as being of type `Float`, and the string values (`9.99`, `2.75`, and `3.99`) are converted into an actual `Float` type.

## Null and Empty String Values

Spring treats empty arguments for properties and the like as empty `Strings`. The following XML-based configuration metadata snippet sets the `email` property to the empty `String` value ("").

```
<bean class="ExampleBean">
	<property name="email" value=""/>
</bean>
```

The preceding example is equivalent to the following Java code:

```
exampleBean.setEmail("");
```

```
exampleBean.email = ""
```

The `<null/>` element handles `null` values.
The following listing shows an example:

```
<bean class="ExampleBean">
	<property name="email">
		<null/>
	</property>
</bean>
```

The preceding configuration is equivalent to the following Java code:

```
exampleBean.setEmail(null);
```

```
exampleBean.email = null
```

## XML Shortcut with the p-namespace

The p-namespace lets you use the `bean` element’s attributes (instead of nested `<property/>` elements) to describe your property values, collaborating beans, or both.

Spring supports extensible configuration formats with namespaces, which are based on an XML Schema definition. The `beans` configuration format discussed in this chapter is defined in an XML Schema document. However, the p-namespace is not defined in an XSD file and exists only in the core of Spring.

The following example shows two XML snippets (the first uses standard XML format and the second uses the p-namespace) that resolve to the same result:

```
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:p="http://www.springframework.org/schema/p"
	xsi:schemaLocation="http://www.springframework.org/schema/beans
		https://www.springframework.org/schema/beans/spring-beans.xsd">

	<bean name="classic" class="com.example.ExampleBean">
		<property name="email" value="[email protected]"/>
	</bean>

	<bean name="p-namespace" class="com.example.ExampleBean"
		p:email="[email protected]"/>
</beans>
```

The example shows an attribute in the p-namespace called `email` in the bean definition. This tells Spring to include a property declaration.

This next example includes two more bean definitions that both have a reference to another bean:

```
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:p="http://www.springframework.org/schema/p"
	xsi:schemaLocation="http://www.springframework.org/schema/beans
		https://www.springframework.org/schema/beans/spring-beans.xsd">

	<bean name="john-classic" class="com.example.Person">
		<property name="name" value="<NAME>"/>
		<property name="spouse" ref="jane"/>
	</bean>

	<bean name="john-modern"
		class="com.example.Person"
		p:name="<NAME>"
		p:spouse-ref="jane"/>

	<bean name="jane" class="com.example.Person">
		<property name="name" value="<NAME>"/>
	</bean>
</beans>
```

This example includes not only a property value using the p-namespace but also uses a special format to declare property references. Whereas the first bean definition uses `<property name="spouse" ref="jane"/>` to create a reference from bean `john` to bean `jane`, the second bean definition uses `p:spouse-ref="jane"` as an attribute to do the exact same thing. In this case, `spouse` is the property name, whereas the `-ref` part indicates that this is not a straight value but rather a reference to another bean.

The p-namespace is not as flexible as the standard XML format. For example, the format for declaring property references clashes with properties that end in `Ref`, whereas the standard XML format does not.

## XML Shortcut with the c-namespace

Similar to the XML Shortcut with the p-namespace, the c-namespace, introduced in Spring 3.1, allows inlined attributes for configuring the constructor arguments rather than nested `constructor-arg` elements.
The following example uses the `c:` namespace to do the same thing as the example from Constructor-based Dependency Injection:

```
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:c="http://www.springframework.org/schema/c"
	xsi:schemaLocation="http://www.springframework.org/schema/beans
		https://www.springframework.org/schema/beans/spring-beans.xsd">

	<bean id="beanTwo" class="x.y.ThingTwo"/>
	<bean id="beanThree" class="x.y.ThingThree"/>

	<!-- traditional declaration with optional argument names -->
	<bean id="beanOne" class="x.y.ThingOne">
		<constructor-arg name="thingTwo" ref="beanTwo"/>
		<constructor-arg name="thingThree" ref="beanThree"/>
		<constructor-arg name="email" value="[email protected]"/>
	</bean>

	<!-- c-namespace declaration with argument names -->
	<bean id="beanOne" class="x.y.ThingOne"
		c:thingTwo-ref="beanTwo"
		c:thingThree-ref="beanThree"
		c:email="[email protected]"/>
</beans>
```

The `c:` namespace uses the same conventions as the `p:` one (a trailing `-ref` for bean references) for setting the constructor arguments by their names. Similarly, it needs to be declared in the XML file even though it is not defined in an XSD schema (it exists inside the Spring core).

For the rare cases where the constructor argument names are not available (usually if the bytecode was compiled without debugging information), you can fall back to the argument indexes, as follows:

```
<!-- c-namespace index declaration -->
<bean id="beanOne" class="x.y.ThingOne" c:_0-ref="beanTwo" c:_1-ref="beanThree"
	c:_2="[email protected]"/>
```

Due to the XML grammar, the index notation requires the presence of the leading `_`, as XML attribute names cannot start with a number (even though some IDEs allow it).

In practice, the constructor resolution mechanism is quite efficient in matching arguments, so unless you really need to, we recommend using the name notation throughout your configuration.

## Compound Property Names

You can use compound or nested property names when you set bean properties, as long as all components of the path except the final property name are not `null`. Consider the following bean definition:

```
<bean id="something" class="things.ThingOne">
	<property name="fred.bob.sammy" value="123" />
</bean>
```

The `something` bean has a `fred` property, which has a `bob` property, which has a `sammy` property, and that final `sammy` property is being set to a value of `123`. In order for this to work, the `fred` property of `something` and the `bob` property of `fred` must not be `null` after the bean is constructed. Otherwise, a `NullPointerException` is thrown.

## Using `depends-on`

If a bean is a dependency of another bean, that usually means that one bean is set as a property of another. Typically you accomplish this with the `<ref/>` element in XML-based configuration metadata. However, sometimes dependencies between beans are less direct. An example is when a static initializer in a class needs to be triggered, such as for database driver registration. The `depends-on` attribute can explicitly force one or more beans to be initialized before the bean using this element is initialized.
The following example uses the `depends-on` attribute to express a dependency on a single bean:

```
<bean id="beanOne" class="ExampleBean" depends-on="manager"/>
<bean id="manager" class="ManagerBean" />
```

To express a dependency on multiple beans, supply a list of bean names as the value of the `depends-on` attribute (commas, whitespace, and semicolons are valid delimiters):

```
<bean id="beanOne" class="ExampleBean" depends-on="manager,accountDao">
	<property name="manager" ref="manager" />
</bean>

<bean id="manager" class="ManagerBean" />
<bean id="accountDao" class="x.y.jdbc.JdbcAccountDao" />
```

By default, `ApplicationContext` implementations eagerly create and configure all singleton beans as part of the initialization process. Generally, this pre-instantiation is desirable, because errors in the configuration or surrounding environment are discovered immediately, as opposed to hours or even days later. When this behavior is not desirable, you can prevent pre-instantiation of a singleton bean by marking the bean definition as being lazy-initialized. A lazy-initialized bean tells the IoC container to create a bean instance when it is first requested, rather than at startup.

In XML, this behavior is controlled by the `lazy-init` attribute on the `<bean/>` element, as the following example shows:

```
<bean id="lazy" class="com.something.ExpensiveToCreateBean" lazy-init="true"/>
<bean name="not.lazy" class="com.something.AnotherBean"/>
```

When the preceding configuration is consumed by an `ApplicationContext`, the `lazy` bean is not eagerly pre-instantiated when the `ApplicationContext` starts, whereas the `not.lazy` bean is eagerly pre-instantiated.

However, when a lazy-initialized bean is a dependency of a singleton bean that is not lazy-initialized, the `ApplicationContext` creates the lazy-initialized bean at startup, because it must satisfy the singleton’s dependencies. The lazy-initialized bean is injected into a singleton bean elsewhere that is not lazy-initialized.

You can also control lazy-initialization at the container level by using the `default-lazy-init` attribute on the `<beans/>` element, as the following example shows:

```
<beans default-lazy-init="true">
	<!-- no beans will be pre-instantiated... -->
</beans>
```

The Spring container can autowire relationships between collaborating beans. You can let Spring resolve collaborators (other beans) automatically for your bean by inspecting the contents of the `ApplicationContext`. Autowiring has the following advantages:

* Autowiring can significantly reduce the need to specify properties or constructor arguments. (Other mechanisms such as a bean template discussed elsewhere in this chapter are also valuable in this regard.)
* Autowiring can update a configuration as your objects evolve. For example, if you need to add a dependency to a class, that dependency can be satisfied automatically without you needing to modify the configuration. Thus autowiring can be especially useful during development, without negating the option of switching to explicit wiring when the code base becomes more stable.

When using XML-based configuration metadata (see Dependency Injection), you can specify the autowire mode for a bean definition with the `autowire` attribute of the `<bean/>` element. The autowiring functionality has four modes. You specify autowiring per bean and can thus choose which ones to autowire.
The following table describes the four autowiring modes:

| Mode | Explanation |
| --- | --- |
| `no` | (Default) No autowiring. Bean references must be defined by `ref` elements. |
| `byName` | Autowiring by property name. Spring looks for a bean with the same name as the property that needs to be autowired. |
| `byType` | Lets a property be autowired if exactly one bean of the property type exists in the container. If more than one exists, a fatal exception is thrown. If there are no matching beans, nothing happens (the property is not set). |
| `constructor` | Analogous to `byType` but applies to constructor arguments. If there is not exactly one bean of the constructor argument type in the container, a fatal error is raised. |

With `byType` or `constructor` autowiring mode, you can wire arrays and typed collections. In such cases, all autowire candidates within the container that match the expected type are provided to satisfy the dependency. You can autowire strongly-typed `Map` instances if the expected key type is `String`. An autowired `Map` instance’s values consist of all bean instances that match the expected type, and the `Map` instance’s keys contain the corresponding bean names.

## Limitations and Disadvantages of Autowiring

Autowiring works best when it is used consistently across a project. If autowiring is not used in general, it might be confusing to developers to use it to wire only one or two bean definitions.

Consider the limitations and disadvantages of autowiring:

* Explicit dependencies in `property` and `constructor-arg` settings always override autowiring. You cannot autowire simple properties such as primitives, `Strings`, and `Classes` (and arrays of such simple properties). This limitation is by-design.
* Autowiring is less exact than explicit wiring. Although, as noted in the earlier table, Spring is careful to avoid guessing in case of ambiguity that might have unexpected results, the relationships between your Spring-managed objects are no longer documented explicitly.
* Wiring information may not be available to tools that may generate documentation from a Spring container.
* Multiple bean definitions within the container may match the type specified by the setter method or constructor argument to be autowired. For arrays, collections, or `Map` instances, this is not necessarily a problem. However, for dependencies that expect a single value, this ambiguity is not arbitrarily resolved. If no unique bean definition is available, an exception is thrown.

In the latter scenario, you have several options:

* Abandon autowiring in favor of explicit wiring.
* Avoid autowiring for a bean definition by setting its `autowire-candidate` attributes to `false`, as described in the next section.
* Designate a single bean definition as the primary candidate by setting the `primary` attribute of its `<bean/>` element to `true`.
* Implement the more fine-grained control available with annotation-based configuration, as described in Annotation-based Container Configuration.

## Excluding a Bean from Autowiring

On a per-bean basis, you can exclude a bean from autowiring. In Spring’s XML format, set the `autowire-candidate` attribute of the `<bean/>` element to `false`. The container makes that specific bean definition unavailable to the autowiring infrastructure (including annotation style configurations such as `@Autowired`).

You can also limit autowire candidates based on pattern-matching against bean names. The top-level `<beans/>` element accepts one or more patterns within its `default-autowire-candidates` attribute. For example, to limit autowire candidate status to any bean whose name ends with `Repository`, provide a value of `*Repository`. To provide multiple patterns, define them in a comma-separated list. An explicit value of `true` or `false` for a bean definition’s `autowire-candidate` attribute always takes precedence. For such beans, the pattern matching rules do not apply.

These techniques are useful for beans that you never want to be injected into other beans by autowiring. It does not mean that an excluded bean cannot itself be configured by using autowiring.
In most application scenarios, most beans in the container are singletons. When a singleton bean needs to collaborate with another singleton bean, or a non-singleton bean needs to collaborate with another non-singleton bean, you typically handle the dependency by defining one bean as a property of the other. A problem arises when the bean lifecycles are different. Suppose singleton bean A needs to use non-singleton (prototype) bean B, perhaps on each method invocation on A. The container creates the singleton bean A only once, and thus only gets one opportunity to set the properties. The container cannot provide bean A with a new instance of bean B every time one is needed.

A solution is to forego some inversion of control. You can make bean A aware of the container by implementing the `ApplicationContextAware` interface, and by making a `getBean("B")` call to the container ask for (a typically new) bean B instance every time bean A needs it. The following example shows this approach:

```
// Spring-API imports
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

/**
 * A class that uses a stateful Command-style class to perform
 * some processing.
 */
public class CommandManager implements ApplicationContextAware {

    private ApplicationContext applicationContext;

    protected Command createCommand() {
        // notice the Spring API dependency!
        return this.applicationContext.getBean("command", Command.class);
    }

    public void setApplicationContext(
            ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }
}
```

```
// Spring-API imports
import org.springframework.context.ApplicationContext
import org.springframework.context.ApplicationContextAware

// A class that uses a stateful Command-style class to perform
// some processing.
class CommandManager : ApplicationContextAware {

    private lateinit var applicationContext: ApplicationContext

    // notice the Spring API dependency!
    protected fun createCommand() =
            applicationContext.getBean("command", Command::class.java)

    override fun setApplicationContext(applicationContext: ApplicationContext) {
        this.applicationContext = applicationContext
    }
}
```

The preceding is not desirable, because the business code is aware of and coupled to the Spring Framework. Method Injection, a somewhat advanced feature of the Spring IoC container, lets you handle this use case cleanly.

Lookup method injection is the ability of the container to override methods on container-managed beans and return the lookup result for another named bean in the container. The lookup typically involves a prototype bean, as in the scenario described in the preceding section. The Spring Framework implements this method injection by using bytecode generation from the CGLIB library to dynamically generate a subclass that overrides the method. In the case of the `CommandManager` class in the previous code snippet, the Spring container dynamically overrides the implementation of the `createCommand()` method.
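For context, the reworked, Spring-free `CommandManager` could look like the following sketch (the `Command` type is assumed to be defined by the application):

```
package fiona.apple;

// no more Spring imports!

public abstract class CommandManager {

    public Object process(Object commandState) {
        // grab a new instance of the appropriate Command interface
        Command command = createCommand();
        // set the state on the (hopefully brand new) Command instance
        command.setState(commandState);
        return command.execute();
    }

    // okay... but where is the implementation of this method?
    protected abstract Command createCommand();
}
```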
This reworked `CommandManager` class does not have any Spring dependencies. In the client class that contains the method to be injected (the `CommandManager` in this case), the method to be injected requires a signature of the following form:

```
<public|protected> [abstract] <return-type> theMethodName(no-arguments);
```

If the method is `abstract` , the dynamically-generated subclass implements the method. Otherwise, the dynamically-generated subclass overrides the concrete method defined in the original class. Consider the following example:

```
<!-- a stateful bean deployed as a prototype (non-singleton) -->
<bean id="myCommand" class="fiona.apple.AsyncCommand" scope="prototype">
    <!-- inject dependencies here as required -->
</bean>

<!-- commandManager uses myCommand -->
<bean id="commandManager" class="fiona.apple.CommandManager">
    <lookup-method name="createCommand" bean="myCommand"/>
</bean>
```

The bean identified as `commandManager` calls its own `createCommand()` method whenever it needs a new instance of the `myCommand` bean. You must be careful to deploy the `myCommand` bean as a prototype if that is actually what is needed. If it is a singleton, the same instance of the `myCommand` bean is returned each time.

Alternatively, within the annotation-based component model, you can declare a lookup method through the `@Lookup` annotation, as the following example shows:

```
public abstract class CommandManager {

    @Lookup("myCommand")
    protected abstract Command createCommand();
}
```

```
abstract class CommandManager {

    @Lookup("myCommand")
    protected abstract fun createCommand(): Command
}
```

Or, more idiomatically, you can rely on the target bean getting resolved against the declared return type of the lookup method:

```
public abstract class CommandManager {

    @Lookup
    protected abstract Command createCommand();
}
```

```
abstract class CommandManager {

    @Lookup
    protected abstract fun createCommand(): Command
}
```

Note that you should typically declare such annotated lookup methods with a concrete stub implementation, in order for them to be compatible with Spring's component scanning rules, where abstract classes get ignored by default. This limitation does not apply to explicitly registered or explicitly imported bean classes.

## Arbitrary Method Replacement

A less useful form of method injection than lookup method injection is the ability to replace arbitrary methods in a managed bean with another method implementation. You can safely skip the rest of this section until you actually need this functionality.

With XML-based configuration metadata, you can use the `replaced-method` element to replace an existing method implementation with another, for a deployed bean. Consider the following class, which has a method called `computeValue` that we want to override:

```
public class MyValueCalculator {

    public String computeValue(String input) {
        // some real code...
    }

    // some other methods...
}
```

```
class MyValueCalculator {

    fun computeValue(input: String): String {
        // some real code...
    }

    // some other methods...
}
```

A class that implements the `org.springframework.beans.factory.support.MethodReplacer` interface provides the new method definition, as the following example shows:

```
/**
 * meant to be used to override the existing computeValue(String)
 * implementation in MyValueCalculator
 */
public class ReplacementComputeValue implements MethodReplacer {

    public Object reimplement(Object o, Method m, Object[] args) throws Throwable {
        // get the input value, work with it, and return a computed result
        String input = (String) args[0];
        ...
        return ...;
    }
}
```

```
class ReplacementComputeValue : MethodReplacer {

    override fun reimplement(obj: Any, method: Method, args: Array<out Any>): Any {
        // get the input value, work with it, and return a computed result
        val input = args[0] as String
        ...
        return ...
    }
}
```
The bean definition to deploy the original class and specify the method override would resemble the following example:

```
<bean id="myValueCalculator" class="x.y.z.MyValueCalculator">
    <!-- arbitrary method replacement -->
    <replaced-method name="computeValue" replacer="replacementComputeValue">
        <arg-type>String</arg-type>
    </replaced-method>
</bean>

<bean id="replacementComputeValue" class="a.b.c.ReplacementComputeValue"/>
```

You can use one or more `<arg-type/>` elements within the `<replaced-method/>` element to indicate the method signature of the method being overridden. The signature for the arguments is necessary only if the method is overloaded and multiple variants exist within the class. For convenience, the type string for an argument may be a substring of the fully qualified type name. For example, the following all match `java.lang.String` :

```
java.lang.String
String
Str
```

Because the number of arguments is often enough to distinguish between each possible choice, this shortcut can save a lot of typing, by letting you type only the shortest string that matches an argument type.

When you create a bean definition, you create a recipe for creating actual instances of the class defined by that bean definition. The idea that a bean definition is a recipe is important, because it means that, as with a class, you can create many object instances from a single recipe. You can control not only the various dependencies and configuration values that are to be plugged into an object that is created from a particular bean definition but also the scope of the objects created from a particular bean definition. This approach is powerful and flexible, because you can choose the scope of the objects you create through configuration instead of having to bake in the scope of an object at the Java class level. Beans can be defined to be deployed in one of a number of scopes. The Spring Framework supports six scopes, four of which are available only if you use a web-aware `ApplicationContext` . You can also create a custom scope.

The following table describes the supported scopes:

| Scope | Description |
| --- | --- |
| `singleton` | (Default) Scopes a single bean definition to a single object instance for each Spring IoC container. |
| `prototype` | Scopes a single bean definition to any number of object instances. |
| `request` | Scopes a single bean definition to the lifecycle of a single HTTP request. That is, each HTTP request has its own instance of a bean created off the back of a single bean definition. Only valid in the context of a web-aware Spring `ApplicationContext` . |
| `session` | Scopes a single bean definition to the lifecycle of an HTTP `Session` . Only valid in the context of a web-aware Spring `ApplicationContext` . |
| `application` | Scopes a single bean definition to the lifecycle of a `ServletContext` . Only valid in the context of a web-aware Spring `ApplicationContext` . |
| `websocket` | Scopes a single bean definition to the lifecycle of a `WebSocket` . Only valid in the context of a web-aware Spring `ApplicationContext` . |

A thread scope is available but is not registered by default. For more information, see the documentation for `SimpleThreadScope` .

## The Singleton Scope

Only one shared instance of a singleton bean is managed, and all requests for beans with an ID or IDs that match that bean definition result in that one specific bean instance being returned by the Spring container. To put it another way, when you define a bean definition and it is scoped as a singleton, the Spring IoC container creates exactly one instance of the object defined by that bean definition. This single instance is stored in a cache of such singleton beans, and all subsequent requests and references for that named bean return the cached object.

Spring's concept of a singleton bean differs from the singleton pattern as defined in the Gang of Four (GoF) patterns book. The GoF singleton hard-codes the scope of an object such that one and only one instance of a particular class is created per ClassLoader. The scope of the Spring singleton is best described as being per-container and per-bean. This means that, if you define one bean for a particular class in a single Spring container, the Spring container creates one and only one instance of the class defined by that bean definition. The singleton scope is the default scope in Spring.
To define a bean as a singleton in XML, you can define a bean as shown in the following example:

```
<bean id="accountService" class="com.something.DefaultAccountService"/>

<!-- the following is equivalent, though redundant (singleton scope is the default) -->
<bean id="accountService" class="com.something.DefaultAccountService" scope="singleton"/>
```

## The Prototype Scope

The non-singleton prototype scope of bean deployment results in the creation of a new bean instance every time a request for that specific bean is made. That is, the bean is injected into another bean or you request it through a `getBean()` method call on the container. As a rule, you should use the prototype scope for all stateful beans and the singleton scope for stateless beans. (A data access object (DAO) is not typically configured as a prototype, because a typical DAO does not hold any conversational state.)

The following example defines a bean as a prototype in XML:

```
<bean id="accountService" class="com.something.DefaultAccountService" scope="prototype"/>
```

In contrast to the other scopes, Spring does not manage the complete lifecycle of a prototype bean. The container instantiates, configures, and otherwise assembles a prototype object and hands it to the client, with no further record of that prototype instance. Thus, although initialization lifecycle callback methods are called on all objects regardless of scope, in the case of prototypes, configured destruction lifecycle callbacks are not called. The client code must clean up prototype-scoped objects and release expensive resources that the prototype beans hold. To get the Spring container to release resources held by prototype-scoped beans, try using a custom bean post-processor, which holds a reference to the beans that need to be cleaned up.

In some respects, the Spring container's role in regard to a prototype-scoped bean is a replacement for the Java `new` operator. All lifecycle management past that point must be handled by the client. (For details on the lifecycle of a bean in the Spring container, see Lifecycle Callbacks.)

## Singleton Beans with Prototype-bean Dependencies

When you use singleton-scoped beans with dependencies on prototype beans, be aware that dependencies are resolved at instantiation time. Thus, if you dependency-inject a prototype-scoped bean into a singleton-scoped bean, a new prototype bean is instantiated and then dependency-injected into the singleton bean. The prototype instance is the sole instance that is ever supplied to the singleton-scoped bean.

However, suppose you want the singleton-scoped bean to acquire a new instance of the prototype-scoped bean repeatedly at runtime. You cannot dependency-inject a prototype-scoped bean into your singleton bean, because that injection occurs only once, when the Spring container instantiates the singleton bean and resolves and injects its dependencies. If you need a new instance of a prototype bean at runtime more than once, see Method Injection.

## Request, Session, Application, and WebSocket Scopes

The `request` , `session` , `application` , and `websocket` scopes are available only if you use a web-aware Spring `ApplicationContext` implementation (such as `XmlWebApplicationContext` ). If you use these scopes with regular Spring IoC containers, such as the `ClassPathXmlApplicationContext` , an `IllegalStateException` that complains about an unknown bean scope is thrown.
### Initial Web Configuration

To support the scoping of beans at the `request` , `session` , `application` , and `websocket` levels (web-scoped beans), some minor initial configuration is required before you define your beans. (This initial setup is not required for the standard scopes: `singleton` and `prototype` .) How you accomplish this initial setup depends on your particular Servlet environment.

If you access scoped beans within Spring Web MVC, in effect, within a request that is processed by the Spring `DispatcherServlet` , no special setup is necessary. `DispatcherServlet` already exposes all relevant state.

If you use a Servlet web container, with requests processed outside of Spring's `DispatcherServlet` (for example, when using JSF), you need to register the `org.springframework.web.context.request.RequestContextListener` `ServletRequestListener` . This can be done programmatically by using the `WebApplicationInitializer` interface. Alternatively, add the following declaration to your web application's `web.xml` file:

```
<web-app>
    ...
    <listener>
        <listener-class>
            org.springframework.web.context.request.RequestContextListener
        </listener-class>
    </listener>
    ...
</web-app>
```

Alternatively, if there are issues with your listener setup, consider using Spring's `RequestContextFilter` . The filter mapping depends on the surrounding web application configuration, so you have to change it as appropriate. The following listing shows the filter part of a web application:

```
<web-app>
    ...
    <filter>
        <filter-name>requestContextFilter</filter-name>
        <filter-class>org.springframework.web.filter.RequestContextFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>requestContextFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
    ...
</web-app>
```

`DispatcherServlet` , `RequestContextListener` , and `RequestContextFilter` all do exactly the same thing, namely bind the HTTP request object to the `Thread` that is servicing that request. This makes beans that are request- and session-scoped available further down the call chain.

### Request Scope

Consider the following XML configuration for a bean definition:

```
<bean id="loginAction" class="com.something.LoginAction" scope="request"/>
```

The Spring container creates a new instance of the `LoginAction` bean by using the `loginAction` bean definition for each and every HTTP request. That is, the `loginAction` bean is scoped at the HTTP request level. You can change the internal state of the instance that is created as much as you want, because other instances created from the same `loginAction` bean definition do not see these changes in state. They are particular to an individual request. When the request completes processing, the bean that is scoped to the request is discarded.

When using annotation-driven components or Java configuration, the `@RequestScope` annotation can be used to assign a component to the `request` scope.

### Session Scope

The Spring container creates a new instance of the `UserPreferences` bean by using the `userPreferences` bean definition for the lifetime of a single HTTP `Session` . In other words, the `userPreferences` bean is effectively scoped at the HTTP `Session` level. As with request-scoped beans, you can change the internal state of the instance that is created as much as you want, knowing that other HTTP `Session` instances that are also using instances created from the same `userPreferences` bean definition do not see these changes in state, because they are particular to an individual HTTP `Session` . When the HTTP `Session` is eventually discarded, the bean that is scoped to that particular HTTP `Session` is also discarded.
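For reference, the `userPreferences` bean definition discussed above would be declared like this (mirroring the request-scope example; the class name follows the surrounding examples):

```
<bean id="userPreferences" class="com.something.UserPreferences" scope="session"/>
```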
When using annotation-driven components or Java configuration, you can use the `@SessionScope` annotation to assign a component to the `session` scope.

### Application Scope

Consider the following XML configuration for a bean definition:

```
<bean id="appPreferences" class="com.something.AppPreferences" scope="application"/>
```

The Spring container creates a new instance of the `AppPreferences` bean by using the `appPreferences` bean definition once for the entire web application. That is, the `appPreferences` bean is scoped at the `ServletContext` level and stored as a regular `ServletContext` attribute. This is somewhat similar to a Spring singleton bean but differs in two important ways: it is a singleton per `ServletContext` , not per Spring `ApplicationContext` (of which there may be several in any given web application), and it is actually exposed and therefore visible as a `ServletContext` attribute.

When using annotation-driven components or Java configuration, you can use the `@ApplicationScope` annotation to assign a component to the `application` scope.

### WebSocket Scope

WebSocket scope is associated with the lifecycle of a WebSocket session and applies to STOMP over WebSocket applications. See WebSocket scope for more details.

### Scoped Beans as Dependencies

The Spring IoC container manages not only the instantiation of your objects (beans), but also the wiring up of collaborators (or dependencies). If you want to inject (for example) an HTTP request-scoped bean into another bean of a longer-lived scope, you may choose to inject an AOP proxy in place of the scoped bean. That is, you need to inject a proxy object that exposes the same public interface as the scoped object but that can also retrieve the real target object from the relevant scope (such as an HTTP request) and delegate method calls onto the real object.

The configuration in the following example is only one line, but it is important to understand the "why" as well as the "how" behind it:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/aop
        https://www.springframework.org/schema/aop/spring-aop.xsd">

    <!-- an HTTP Session-scoped bean exposed as a proxy -->
    <bean id="userPreferences" class="com.something.UserPreferences" scope="session">
        <!-- instructs the container to proxy the surrounding bean -->
        <aop:scoped-proxy/> (1)
    </bean>

    <!-- a singleton-scoped bean injected with a proxy to the above bean -->
    <bean id="userService" class="com.something.SimpleUserService">
        <!-- a reference to the proxied userPreferences bean -->
        <property name="userPreferences" ref="userPreferences"/>
    </bean>
</beans>
```

(1) The line that defines the proxy.

To create such a proxy, you insert a child `<aop:scoped-proxy/>` element into a scoped bean definition (see Choosing the Type of Proxy to Create and XML Schema-based configuration).

Why do definitions of beans scoped at the `request` , `session` , and custom-scope levels require the `<aop:scoped-proxy/>` element in common scenarios? Consider the following singleton bean definition and contrast it with what you need to define for the aforementioned scopes (note that the following `userPreferences` bean definition as it stands is incomplete):
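Reconstructed from this section's own class names, the contrasting configuration looks like the following; note that the session-scoped definition is deliberately missing the `<aop:scoped-proxy/>` element:

```
<bean id="userPreferences" class="com.something.UserPreferences" scope="session"/>

<bean id="userManager" class="com.something.UserManager">
    <property name="userPreferences" ref="userPreferences"/>
</bean>
```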
In the preceding example, the singleton bean ( `userManager` ) is injected with a reference to the HTTP `Session` -scoped bean ( `userPreferences` ). The salient point here is that the `userManager` bean is a singleton: it is instantiated exactly once per container, and its dependencies (in this case only one, the `userPreferences` bean) are also injected only once. This means that the `userManager` bean operates only on the exact same `userPreferences` object (that is, the one with which it was originally injected).

This is not the behavior you want when injecting a shorter-lived scoped bean into a longer-lived scoped bean (for example, injecting an HTTP `Session` -scoped collaborating bean as a dependency into a singleton bean). Rather, you need a single `userManager` object, and, for the lifetime of an HTTP `Session` , you need a `userPreferences` object that is specific to the HTTP `Session` . Thus, the container creates an object that exposes the exact same public interface as the `UserPreferences` class (ideally an object that is a `UserPreferences` instance), which can fetch the real `UserPreferences` object from the scoping mechanism (HTTP request, `Session` , and so forth). The container injects this proxy object into the `userManager` bean, which is unaware that this `UserPreferences` reference is a proxy. In this example, when a `UserManager` instance invokes a method on the dependency-injected `UserPreferences` object, it is actually invoking a method on the proxy. The proxy then fetches the real `UserPreferences` object from (in this case) the HTTP `Session` and delegates the method invocation onto the retrieved real `UserPreferences` object.

Thus, you need the following (correct and complete) configuration when injecting request- and session-scoped beans into collaborating objects, as the following example shows:

```
<bean id="userPreferences" class="com.something.UserPreferences" scope="session">
    <aop:scoped-proxy/>
</bean>
```

### Choosing the Type of Proxy to Create

By default, when the Spring container creates a proxy for a bean that is marked up with the `<aop:scoped-proxy/>` element, a CGLIB-based class proxy is created.

Alternatively, you can configure the Spring container to create standard JDK interface-based proxies for such scoped beans, by specifying `false` for the value of the `proxy-target-class` attribute of the `<aop:scoped-proxy/>` element. Using JDK interface-based proxies means that you do not need additional libraries in your application classpath to affect such proxying. However, it also means that the class of the scoped bean must implement at least one interface and that all collaborators into which the scoped bean is injected must reference the bean through one of its interfaces. The following example shows a proxy based on an interface:

```
<!-- DefaultUserPreferences implements the UserPreferences interface -->
<bean id="userPreferences" class="com.stuff.DefaultUserPreferences" scope="session">
    <aop:scoped-proxy proxy-target-class="false"/>
</bean>

<bean id="userManager" class="com.stuff.UserManager">
    <property name="userPreferences" ref="userPreferences"/>
</bean>
```

For more detailed information about choosing class-based or interface-based proxying, see Proxying Mechanisms.

## Custom Scopes

The bean scoping mechanism is extensible. You can define your own scopes or even redefine existing scopes, although the latter is considered bad practice and you cannot override the built-in `singleton` and `prototype` scopes.
### Creating a Custom Scope

To integrate your custom scopes into the Spring container, you need to implement the `org.springframework.beans.factory.config.Scope` interface, which is described in this section. For an idea of how to implement your own scopes, see the `Scope` implementations that are supplied with the Spring Framework itself and the `Scope` javadoc, which explains the methods you need to implement in more detail.

The `Scope` interface has four methods to get objects from the scope, remove them from the scope, and let them be destroyed.

The session scope implementation, for example, returns the session-scoped bean (if it does not exist, the method returns a new instance of the bean, after having bound it to the session for future reference). The following method returns the object from the underlying scope:

```
Object get(String name, ObjectFactory<?> objectFactory)
```

```
fun get(name: String, objectFactory: ObjectFactory<*>): Any
```

The session scope implementation, for example, removes the session-scoped bean from the underlying session. The object should be returned, but you can return `null` if the object with the specified name is not found. The following method removes the object from the underlying scope:

```
Object remove(String name)
```

```
fun remove(name: String): Any
```

The following method registers a callback that the scope should invoke when it is destroyed or when the specified object in the scope is destroyed:

```
void registerDestructionCallback(String name, Runnable destructionCallback)
```

```
fun registerDestructionCallback(name: String, destructionCallback: Runnable)
```

See the javadoc or a Spring scope implementation for more information on destruction callbacks.

The following method obtains the conversation identifier for the underlying scope:

```
String getConversationId()
```

```
fun getConversationId(): String
```

This identifier is different for each scope. For a session-scoped implementation, this identifier can be the session identifier.

### Using a Custom Scope

After you write and test one or more custom `Scope` implementations, you need to make the Spring container aware of your new scopes. The following method is the central method to register a new `Scope` with the Spring container:

```
void registerScope(String scopeName, Scope scope);
```

```
fun registerScope(scopeName: String, scope: Scope)
```

This method is declared on the `ConfigurableBeanFactory` interface, which is available through the `BeanFactory` property on most of the concrete `ApplicationContext` implementations that ship with Spring.

The first argument to the `registerScope(..)` method is the unique name associated with a scope. Examples of such names in the Spring container itself are `singleton` and `prototype` . The second argument to the `registerScope(..)` method is an actual instance of the custom `Scope` implementation that you wish to register and use.

Suppose that you write your custom `Scope` implementation, and then register it as shown in the next example. The next example uses `SimpleThreadScope` , which is included with Spring but not registered by default. The instructions would be the same for your own custom `Scope` implementations.

```
Scope threadScope = new SimpleThreadScope();
beanFactory.registerScope("thread", threadScope);
```

```
val threadScope = SimpleThreadScope()
beanFactory.registerScope("thread", threadScope)
```

You can then create bean definitions that adhere to the scoping rules of your custom `Scope` , as follows:

```
<bean id="..." class="..." scope="thread">
```
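To make the methods above concrete, the following is a minimal, hypothetical `Scope` sketch (the class name is invented; it is not part of the Spring distribution). It keeps one instance per bean name in a simple map and does not track destruction callbacks; the additional `resolveContextualObject` method may simply return `null` :

```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.config.Scope;

public class SimpleMapScope implements Scope {

    private final Map<String, Object> objects = new ConcurrentHashMap<>();

    @Override
    public Object get(String name, ObjectFactory<?> objectFactory) {
        // create the bean on first request and cache it under its name
        return this.objects.computeIfAbsent(name, key -> objectFactory.getObject());
    }

    @Override
    public Object remove(String name) {
        return this.objects.remove(name);
    }

    @Override
    public void registerDestructionCallback(String name, Runnable callback) {
        // this sketch does not support destruction callbacks
    }

    @Override
    public Object resolveContextualObject(String key) {
        return null;
    }

    @Override
    public String getConversationId() {
        // no meaningful conversation id for this simple scope
        return null;
    }
}
```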
With a custom `Scope` implementation, you are not limited to programmatic registration of the scope. You can also do the `Scope` registration declaratively, by using the `CustomScopeConfigurer` class:

```
<bean class="org.springframework.beans.factory.config.CustomScopeConfigurer">
    <property name="scopes">
        <map>
            <entry key="thread">
                <bean class="org.springframework.context.support.SimpleThreadScope"/>
            </entry>
        </map>
    </property>
</bean>

<bean id="thing2" class="x.y.Thing2" scope="thread">
    <property name="name" value="Rick"/>
    <aop:scoped-proxy/>
</bean>

<bean id="thing1" class="x.y.Thing1">
    <property name="thing2" ref="thing2"/>
</bean>
```

When you place `<aop:scoped-proxy/>` in a `FactoryBean` implementation, it is the factory bean itself that is scoped, not the object returned from `getObject()` .

The Spring Framework provides a number of interfaces you can use to customize the nature of a bean. This section groups them as follows:

## Lifecycle Callbacks

To interact with the container's management of the bean lifecycle, you can implement the Spring `InitializingBean` and `DisposableBean` interfaces. The container calls `afterPropertiesSet()` for the former and `destroy()` for the latter to let the bean perform certain actions upon initialization and destruction of your beans.

Internally, the Spring Framework uses `BeanPostProcessor` implementations to process any callback interfaces it can find and call the appropriate methods. If you need custom features or other lifecycle behavior Spring does not by default offer, you can implement a `BeanPostProcessor` yourself. For more information, see Container Extension Points.

In addition to the initialization and destruction callbacks, Spring-managed objects may also implement the `Lifecycle` interface so that those objects can participate in the startup and shutdown process, as driven by the container's own lifecycle. The lifecycle callback interfaces are described in this section.

### Initialization Callbacks

The `org.springframework.beans.factory.InitializingBean` interface lets a bean perform initialization work after the container has set all necessary properties on the bean. The `InitializingBean` interface specifies a single method:

```
void afterPropertiesSet() throws Exception;
```

We recommend that you do not use the `InitializingBean` interface, because it unnecessarily couples the code to Spring. Alternatively, we suggest using the `@PostConstruct` annotation or specifying a POJO initialization method. In the case of XML-based configuration metadata, you can use the `init-method` attribute to specify the name of the method that has a void no-argument signature. With Java configuration, you can use the `initMethod` attribute of `@Bean` . See Receiving Lifecycle Callbacks. Consider the following example:

```
<bean id="exampleInitBean" class="examples.ExampleBean" init-method="init"/>
```

The preceding example has almost exactly the same effect as the following example (which consists of two listings):

```
<bean id="exampleInitBean" class="examples.AnotherExampleBean"/>
```

```
public class AnotherExampleBean implements InitializingBean {

    @Override
    public void afterPropertiesSet() {
        // do some initialization work
    }
}
```

```
class AnotherExampleBean : InitializingBean {

    override fun afterPropertiesSet() {
        // do some initialization work
    }
}
```

However, the first of the two preceding examples does not couple the code to Spring.

### Destruction Callbacks

Implementing the `org.springframework.beans.factory.DisposableBean` interface lets a bean get a callback when the container that contains it is destroyed. The `DisposableBean` interface specifies a single method:

```
void destroy() throws Exception;
```

We recommend that you do not use the `DisposableBean` callback interface, because it unnecessarily couples the code to Spring.
Alternatively, we suggest using the `@PreDestroy` annotation or specifying a generic method that is supported by bean definitions. With XML-based configuration metadata, you can use the `destroy-method` attribute on the `<bean/>` . With Java configuration, you can use the `destroyMethod` attribute of `@Bean` . See Receiving Lifecycle Callbacks. Consider the following definition:

```
<bean id="exampleDestructionBean" class="examples.ExampleBean" destroy-method="cleanup"/>
```

The preceding definition has almost exactly the same effect as the following definition:

```
<bean id="exampleDestructionBean" class="examples.AnotherExampleBean"/>
```

```
public class AnotherExampleBean implements DisposableBean {

    @Override
    public void destroy() {
        // do some destruction work (like releasing pooled connections)
    }
}
```

```
class AnotherExampleBean : DisposableBean {

    override fun destroy() {
        // do some destruction work (like releasing pooled connections)
    }
}
```

However, the first of the two preceding definitions does not couple the code to Spring.

Note that Spring also supports inference of destroy methods, detecting a public `close` or `shutdown` method. This is the default behavior for `@Bean` methods in Java configuration classes and automatically matches `java.lang.AutoCloseable` or `java.io.Closeable` implementations, not coupling the destruction logic to Spring either. For destroy method inference with XML, you may assign the `destroy-method` attribute of a `<bean/>` element a special `(inferred)` value, which instructs Spring to automatically detect a public `close` or `shutdown` method on the specific bean class.

### Default Initialization and Destroy Methods

When you write initialization and destroy method callbacks that do not use the Spring-specific `InitializingBean` and `DisposableBean` callback interfaces, you typically write methods with names such as `init()` , `initialize()` , `dispose()` , and so on. Ideally, the names of such lifecycle callback methods are standardized across a project, so that all developers use the same method names and ensure consistency.

You can configure the Spring container to "look" for named initialization and destroy callback method names on every bean. This means that you, as an application developer, can write your application classes and use an initialization callback called `init()` , without having to configure an `init-method="init"` attribute with each bean definition. The Spring IoC container calls that method when the bean is created (and in accordance with the standard lifecycle callback contract described previously). This feature also enforces a consistent naming convention for initialization and destroy method callbacks.

Suppose that your initialization callback methods are named `init()` and your destroy callback methods are named `destroy()` . Your class then resembles the class in the following example:

```
public class DefaultBlogService implements BlogService {

    private BlogDao blogDao;

    public void setBlogDao(BlogDao blogDao) {
        this.blogDao = blogDao;
    }

    // this is (unsurprisingly) the initialization callback method
    public void init() {
        if (this.blogDao == null) {
            throw new IllegalStateException("The [blogDao] property must be set.");
        }
    }
}
```

```
class DefaultBlogService : BlogService {

    private var blogDao: BlogDao? = null

    // this is (unsurprisingly) the initialization callback method
    fun init() {
        if (blogDao == null) {
            throw IllegalStateException("The [blogDao] property must be set.")
        }
    }
}
```
You could then use that class in a bean resembling the following:

```
<beans default-init-method="init">

    <bean id="blogService" class="com.something.DefaultBlogService">
        <property name="blogDao" ref="blogDao" />
    </bean>

</beans>
```

The presence of the `default-init-method` attribute on the top-level `<beans/>` element causes the Spring IoC container to recognize a method called `init` on the bean class as the initialization method callback. When a bean is created and assembled, if the bean class has such a method, it is invoked at the appropriate time.

You can configure destroy method callbacks similarly (in XML, that is) by using the `default-destroy-method` attribute on the top-level `<beans/>` element.

Where existing bean classes already have callback methods that are named at variance with the convention, you can override the default by specifying (in XML, that is) the method name by using the `init-method` and `destroy-method` attributes of the `<bean/>` itself.

The Spring container guarantees that a configured initialization callback is called immediately after a bean is supplied with all dependencies. Thus, the initialization callback is called on the raw bean reference, which means that AOP interceptors and so forth are not yet applied to the bean. A target bean is fully created first and then an AOP proxy (for example) with its interceptor chain is applied. If the target bean and the proxy are defined separately, your code can even interact with the raw target bean, bypassing the proxy. Hence, it would be inconsistent to apply the interceptors to the `init` method, because doing so would couple the lifecycle of the target bean to its proxy or interceptors and leave strange semantics when your code interacts directly with the raw target bean.

### Combining Lifecycle Mechanisms

As of Spring 2.5, you have three options for controlling bean lifecycle behavior:

* The `InitializingBean` and `DisposableBean` callback interfaces
* Custom `init()` and `destroy()` methods
* The `@PostConstruct` and `@PreDestroy` annotations

You can combine these mechanisms to control a given bean. If multiple lifecycle mechanisms are configured for a bean and each mechanism is configured with a different method name, then each configured method is run in the order listed after this note. However, if the same method name is configured (for example, `init()` ) for more than one of these lifecycle mechanisms, that method is run only once.

Multiple lifecycle mechanisms configured for the same bean, with different initialization methods, are called as follows:

* Methods annotated with `@PostConstruct`
* `afterPropertiesSet()` as defined by the `InitializingBean` callback interface
* A custom configured `init()` method

Destroy methods are called in the same order:

* Methods annotated with `@PreDestroy`
* `destroy()` as defined by the `DisposableBean` callback interface
* A custom configured `destroy()` method

### Startup and Shutdown Callbacks

The `Lifecycle` interface defines the essential methods for any object that has its own lifecycle requirements (such as starting and stopping some background process):

```
public interface Lifecycle {

    void start();

    void stop();

    boolean isRunning();
}
```

Any Spring-managed object may implement the `Lifecycle` interface.
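As a hedged illustration (the class and its background task are hypothetical), a bean with its own start/stop requirements might implement the interface like this:

```
import org.springframework.context.Lifecycle;

public class BackgroundTaskRunner implements Lifecycle {

    private volatile boolean running = false;

    @Override
    public void start() {
        // start the background process here
        this.running = true;
    }

    @Override
    public void stop() {
        // stop the background process here
        this.running = false;
    }

    @Override
    public boolean isRunning() {
        return this.running;
    }
}
```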
Then, when the `ApplicationContext` itself receives start and stop signals (for example, for a stop/restart scenario at runtime), it cascades those calls to all `Lifecycle` implementations defined within that context. It does this by delegating to a `LifecycleProcessor` , shown in the following listing: ``` public interface LifecycleProcessor extends Lifecycle { void onRefresh(); void onClose(); } ``` Notice that the `LifecycleProcessor` is itself an extension of the `Lifecycle` interface. It also adds two other methods for reacting to the context being refreshed and closed. The order of startup and shutdown invocations can be important. If a “depends-on” relationship exists between any two objects, the dependent side starts after its dependency, and it stops before its dependency. However, at times, the direct dependencies are unknown. You may only know that objects of a certain type should start prior to objects of another type. In those cases, the `SmartLifecycle` interface defines another option, namely the `getPhase()` method as defined on its super-interface, `Phased` . The following listing shows the definition of the `Phased` interface: ``` public interface Phased { int getPhase(); } ``` The following listing shows the definition of the `SmartLifecycle` interface: ``` public interface SmartLifecycle extends Lifecycle, Phased { boolean isAutoStartup(); void stop(Runnable callback); } ``` When starting, the objects with the lowest phase start first. When stopping, the reverse order is followed. Therefore, an object that implements `SmartLifecycle` and whose `getPhase()` method returns `Integer.MIN_VALUE` would be among the first to start and the last to stop. At the other end of the spectrum, a phase value of `Integer.MAX_VALUE` would indicate that the object should be started last and stopped first (likely because it depends on other processes to be running). When considering the phase value, it is also important to know that the default phase for any “normal” `Lifecycle` object that does not implement `SmartLifecycle` is `0` . Therefore, any negative phase value indicates that an object should start before those standard components (and stop after them). The reverse is true for any positive phase value. The stop method defined by `SmartLifecycle` accepts a callback. Any implementation must invoke that callback’s `run()` method after that implementation’s shutdown process is complete. That enables asynchronous shutdown where necessary, since the default implementation of the `LifecycleProcessor` interface, ``` DefaultLifecycleProcessor ``` , waits up to its timeout value for the group of objects within each phase to invoke that callback. The default per-phase timeout is 30 seconds. You can override the default lifecycle processor instance by defining a bean named `lifecycleProcessor` within the context. If you want only to modify the timeout, defining the following would suffice: ``` <bean id="lifecycleProcessor" class="org.springframework.context.support.DefaultLifecycleProcessor"> <!-- timeout value in milliseconds --> <property name="timeoutPerShutdownPhase" value="10000"/> </bean> ``` As mentioned earlier, the `LifecycleProcessor` interface defines callback methods for the refreshing and closing of the context as well. The latter drives the shutdown process as if `stop()` had been called explicitly, but it happens when the context is closing. The 'refresh' callback, on the other hand, enables another feature of `SmartLifecycle` beans. 
When the context is refreshed (after all objects have been instantiated and initialized), that callback is invoked. At that point, the default lifecycle processor checks the boolean value returned by each `SmartLifecycle` object's `isAutoStartup()` method. If `true` , that object is started at that point rather than waiting for an explicit invocation of the context's or its own `start()` method (unlike the context refresh, the context start does not happen automatically for a standard context implementation). The `phase` value and any "depends-on" relationships determine the startup order as described earlier.

### Shutting Down the Spring IoC Container Gracefully in Non-Web Applications

If you use Spring's IoC container in a non-web application environment (for example, in a rich client desktop environment), register a shutdown hook with the JVM. Doing so ensures a graceful shutdown and calls the relevant destroy methods on your singleton beans so that all resources are released. You must still configure and implement these destroy callbacks correctly.

To register a shutdown hook, call the `registerShutdownHook()` method that is declared on the `ConfigurableApplicationContext` interface, as the following example shows:

```
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public final class Boot {

    public static void main(final String[] args) throws Exception {
        ConfigurableApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");

        // add a shutdown hook for the above context...
        ctx.registerShutdownHook();

        // app runs here...

        // main method exits, hook is called prior to the app shutting down...
    }
}
```

## `ApplicationContextAware` and `BeanNameAware`

When an `ApplicationContext` creates an object instance that implements the `org.springframework.context.ApplicationContextAware` interface, the instance is provided with a reference to that `ApplicationContext` . The following listing shows the definition of the `ApplicationContextAware` interface:

```
public interface ApplicationContextAware {

    void setApplicationContext(ApplicationContext applicationContext) throws BeansException;
}
```

Thus, beans can programmatically manipulate the `ApplicationContext` that created them, through the `ApplicationContext` interface or by casting the reference to a known subclass of this interface (such as `ConfigurableApplicationContext` , which exposes additional functionality). One use would be the programmatic retrieval of other beans. Sometimes this capability is useful. However, in general, you should avoid it, because it couples the code to Spring and does not follow the Inversion of Control style, where collaborators are provided to beans as properties. Other methods of the `ApplicationContext` provide access to file resources, publishing application events, and accessing a `MessageSource` . These additional features are described in Additional Capabilities of the `ApplicationContext` .

Autowiring is another alternative to obtain a reference to the `ApplicationContext` . The traditional `constructor` and `byType` autowiring modes (as described in Autowiring Collaborators) can provide a dependency of type `ApplicationContext` for a constructor argument or a setter method parameter, respectively. For more flexibility, including the ability to autowire fields and multiple parameter methods, use the annotation-based autowiring features. If you do, the `ApplicationContext` is autowired into a field, constructor argument, or method parameter that expects the `ApplicationContext` type if the field, constructor, or method in question carries the `@Autowired` annotation. For more information, see Using `@Autowired` .
When an `ApplicationContext` creates a class that implements the `org.springframework.beans.factory.BeanNameAware` interface, the class is provided with a reference to the name defined in its associated object definition. The following listing shows the definition of the `BeanNameAware` interface:

```
public interface BeanNameAware {

    void setBeanName(String name) throws BeansException;
}
```

The callback is invoked after population of normal bean properties but before an initialization callback such as `InitializingBean.afterPropertiesSet()` or a custom init-method.

## Other `Aware` Interfaces

Besides `ApplicationContextAware` and `BeanNameAware` (discussed earlier), Spring offers a wide range of `Aware` callback interfaces that let beans indicate to the container that they require a certain infrastructure dependency. As a general rule, the name indicates the dependency type. The following table summarizes the most important `Aware` interfaces:

| Name | Injected Dependency |
| --- | --- |
| `ApplicationContextAware` | Declaring `ApplicationContext` |
| `ApplicationEventPublisherAware` | Event publisher of the enclosing `ApplicationContext` |
| `BeanClassLoaderAware` | Class loader used to load the bean classes |
| `BeanFactoryAware` | Declaring `BeanFactory` |
| `BeanNameAware` | Name of the declaring bean |
| `LoadTimeWeaverAware` | Defined weaver for processing class definitions at load time |
| `MessageSourceAware` | Configured strategy for resolving messages (with support for parameterization and internationalization) |
| `NotificationPublisherAware` | Spring JMX notification publisher |
| `ResourceLoaderAware` | Configured loader for low-level access to resources |
| `ServletConfigAware` | Current `ServletConfig` the container runs in (valid only in a web-aware Spring `ApplicationContext` ) |
| `ServletContextAware` | Current `ServletContext` the container runs in (valid only in a web-aware Spring `ApplicationContext` ) |

Note again that using these interfaces ties your code to the Spring API and does not follow the Inversion of Control style. As a result, we recommend them for infrastructure beans that require programmatic access to the container.

A bean definition can contain a lot of configuration information, including constructor arguments, property values, and container-specific information, such as the initialization method, a static factory method name, and so on. A child bean definition inherits configuration data from a parent definition. The child definition can override some values or add others as needed. Using parent and child bean definitions can save a lot of typing. Effectively, this is a form of templating.

If you work with an `ApplicationContext` interface programmatically, child bean definitions are represented by the `ChildBeanDefinition` class. Most users do not work with them on this level. Instead, they configure bean definitions declaratively in a class such as the `ClassPathXmlApplicationContext` . When you use XML-based configuration metadata, you can indicate a child bean definition by using the `parent` attribute, specifying the parent bean as the value of this attribute. The following example shows how to do so:

```
<bean id="inheritedTestBean" abstract="true"
        class="org.springframework.beans.TestBean">
    <property name="name" value="parent"/>
    <property name="age" value="1"/>
</bean>

<bean id="inheritsWithDifferentClass"
        class="org.springframework.beans.DerivedTestBean"
        parent="inheritedTestBean" init-method="initialize">
    <property name="name" value="override"/>
    <!-- the age property value of 1 will be inherited from parent -->
</bean>
```

A child bean definition uses the bean class from the parent definition if none is specified but can also override it. In the latter case, the child bean class must be compatible with the parent (that is, it must accept the parent's property values).

A child bean definition inherits scope, constructor argument values, property values, and method overrides from the parent, with the option to add new values. Any scope, initialization method, destroy method, or `static` factory method settings that you specify override the corresponding parent settings. The remaining settings are always taken from the child definition: depends on, autowire mode, dependency check, singleton, and lazy init.

The preceding example explicitly marks the parent bean definition as abstract by using the `abstract` attribute.
If the parent definition does not specify a class, explicitly marking the parent bean definition as `abstract` is required, as the following example shows:

```
<bean id="inheritedTestBeanWithoutClass" abstract="true">
    <property name="name" value="parent"/>
    <property name="age" value="1"/>
</bean>

<bean id="inheritsWithClass" class="org.springframework.beans.DerivedTestBean"
        parent="inheritedTestBeanWithoutClass" init-method="initialize">
    <property name="name" value="override"/>
    <!-- age will inherit the value of 1 from the parent bean definition -->
</bean>
```

The parent bean cannot be instantiated on its own, because it is incomplete, and it is also explicitly marked as `abstract` . When a definition is `abstract` , it is usable only as a pure template bean definition that serves as a parent definition for child definitions. Trying to use such an `abstract` parent bean on its own, by referring to it as a ref property of another bean or doing an explicit `getBean()` call with the parent bean ID, returns an error. Similarly, the container's internal `preInstantiateSingletons()` method ignores bean definitions that are defined as abstract.

`ApplicationContext` pre-instantiates all singletons by default. Therefore, it is important (at least for singleton beans) that, if you have a (parent) bean definition which you intend to use only as a template, and this definition specifies a class, you must make sure to set the `abstract` attribute to `true` ; otherwise, the application context will actually (attempt to) pre-instantiate the abstract bean.

Typically, an application developer does not need to subclass `ApplicationContext` implementation classes. Instead, the Spring IoC container can be extended by plugging in implementations of special integration interfaces. The next few sections describe these integration interfaces.

## Customizing Beans by Using a `BeanPostProcessor`

The `BeanPostProcessor` interface defines callback methods that you can implement to provide your own (or override the container's default) instantiation logic, dependency resolution logic, and so forth. If you want to implement some custom logic after the Spring container finishes instantiating, configuring, and initializing a bean, you can plug in one or more custom `BeanPostProcessor` implementations.

You can configure multiple `BeanPostProcessor` instances, and you can control the order in which these `BeanPostProcessor` instances run by setting the `order` property. You can set this property only if the `BeanPostProcessor` implements the `Ordered` interface. If you write your own `BeanPostProcessor` , you should consider implementing the `Ordered` interface, too. For further details, see the javadoc of the `BeanPostProcessor` and `Ordered` interfaces. See also the note on programmatic registration of `BeanPostProcessor` instances.

The `org.springframework.beans.factory.config.BeanPostProcessor` interface consists of exactly two callback methods. When such a class is registered as a post-processor with the container, for each bean instance that is created by the container, the post-processor gets a callback from the container both before container initialization methods (such as `InitializingBean.afterPropertiesSet()` or any declared `init` method) are called, and after any bean initialization callbacks. The post-processor can take any action with the bean instance, including ignoring the callback completely. A bean post-processor typically checks for callback interfaces, or it may wrap a bean with a proxy. Some Spring AOP infrastructure classes are implemented as bean post-processors in order to provide proxy-wrapping logic.
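For orientation, the two callbacks look like the following (paraphrased from the Spring API; in recent Spring versions both are `default` methods that simply return the given bean unchanged):

```
public interface BeanPostProcessor {

    // invoked before afterPropertiesSet() / any declared init method
    default Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }

    // invoked after the bean's initialization callbacks have run
    default Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }
}
```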
An `ApplicationContext` automatically detects any beans that are defined in the configuration metadata that implement the `BeanPostProcessor` interface. The `ApplicationContext` registers these beans as post-processors so that they can be called later, upon bean creation. Bean post-processors can be deployed in the container in the same fashion as any other beans.

Note that, when declaring a `BeanPostProcessor` by using an `@Bean` factory method on a configuration class, the return type of the factory method should be the implementation class itself or at least the `BeanPostProcessor` interface, clearly indicating the post-processor nature of that bean. Otherwise, the `ApplicationContext` cannot autodetect it by type before fully creating it. Since a `BeanPostProcessor` needs to be instantiated early in order to apply to the initialization of other beans in the context, this early type detection is critical.

Programmatically registering `BeanPostProcessor` instances: While the recommended approach for `BeanPostProcessor` registration is through `ApplicationContext` auto-detection (as described earlier), you can register them programmatically against a `ConfigurableBeanFactory` by using the `addBeanPostProcessor` method. This can be useful when you need to evaluate conditional logic before registration or even for copying bean post-processors across contexts in a hierarchy. Note, however, that `BeanPostProcessor` instances added programmatically do not respect the `Ordered` interface. Here, it is the order of registration that dictates the order of execution. Note also that `BeanPostProcessor` instances registered programmatically are always processed before those registered through auto-detection, regardless of any explicit ordering.

`BeanPostProcessor` instances and AOP auto-proxying: Classes that implement the `BeanPostProcessor` interface are special and are treated differently by the container. Because AOP auto-proxying is implemented as a `BeanPostProcessor` itself, neither `BeanPostProcessor` instances nor the beans they directly reference are eligible for auto-proxying and, thus, do not have aspects woven into them. For any such bean, you should see an informational log message noting that the bean is not eligible for getting processed by all `BeanPostProcessor` interfaces (for example: not eligible for auto-proxying). If you have beans wired into your `BeanPostProcessor` by using autowiring or `@Resource` (which may fall back to autowiring), Spring might access unexpected beans when searching for type-matching dependency candidates and, therefore, make them ineligible for auto-proxying or other kinds of bean post-processing.

The following examples show how to write, register, and use `BeanPostProcessor` instances in an `ApplicationContext` .

### Example: Hello World, `BeanPostProcessor` -style

This first example illustrates basic usage. The example shows a custom `BeanPostProcessor` implementation that invokes the `toString()` method of each bean as it is created by the container and prints the resulting string to the system console.

The following listing shows the custom `BeanPostProcessor` implementation class definition:

```
package scripting;

import org.springframework.beans.factory.config.BeanPostProcessor;

public class InstantiationTracingBeanPostProcessor implements BeanPostProcessor {

    // simply return the instantiated bean as-is
    public Object postProcessBeforeInitialization(Object bean, String beanName) {
        return bean; // we could potentially return any object reference here...
    }

    public Object postProcessAfterInitialization(Object bean, String beanName) {
        System.out.println("Bean '" + beanName + "' created : " + bean.toString());
        return bean;
    }
}
```

```
package scripting

import org.springframework.beans.factory.config.BeanPostProcessor

class InstantiationTracingBeanPostProcessor : BeanPostProcessor {

    // simply return the instantiated bean as-is
    override fun postProcessBeforeInitialization(bean: Any, beanName: String): Any? {
        return bean // we could potentially return any object reference here...
    }

    override fun postProcessAfterInitialization(bean: Any, beanName: String): Any? {
        println("Bean '$beanName' created : $bean")
        return bean
    }
}
```
The following `beans` element uses the `InstantiationTracingBeanPostProcessor` :

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:lang="http://www.springframework.org/schema/lang"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/lang
        https://www.springframework.org/schema/lang/spring-lang.xsd">

    <lang:groovy id="messenger"
            script-source="classpath:org/springframework/scripting/groovy/Messenger.groovy">
        <lang:property name="message" value="Fiona Apple Is Just So Dreamy."/>
    </lang:groovy>

    <!-- when the above bean (messenger) is instantiated, this custom
        BeanPostProcessor implementation will output the fact to the system console -->
    <bean class="scripting.InstantiationTracingBeanPostProcessor"/>

</beans>
```

Notice how the `InstantiationTracingBeanPostProcessor` is merely defined. It does not even have a name, and, because it is a bean, it can be dependency-injected as you would any other bean. (The preceding configuration also defines a bean that is backed by a Groovy script. The Spring dynamic language support is detailed in the chapter entitled Dynamic Language Support.)

The following Java application runs the preceding code and configuration:

```
public static void main(final String[] args) throws Exception {
    ApplicationContext ctx = new ClassPathXmlApplicationContext("scripting/beans.xml");
    Messenger messenger = ctx.getBean("messenger", Messenger.class);
    System.out.println(messenger);
}
```

```
fun main() {
    val ctx = ClassPathXmlApplicationContext("scripting/beans.xml")
    val messenger = ctx.getBean<Messenger>("messenger")
    println(messenger)
}
```

The output of the preceding application resembles the following:

> Bean 'messenger' created : org.springframework.scripting.groovy.GroovyMessenger@272961
> org.springframework.scripting.groovy.GroovyMessenger@272961

Using callback interfaces or annotations in conjunction with a custom `BeanPostProcessor` implementation is a common means of extending the Spring IoC container. An example is Spring's `AutowiredAnnotationBeanPostProcessor` — a `BeanPostProcessor` implementation that ships with the Spring distribution and autowires annotated fields, setter methods, and arbitrary config methods.

## Customizing Configuration Metadata with a `BeanFactoryPostProcessor`

The next extension point that we look at is the `org.springframework.beans.factory.config.BeanFactoryPostProcessor` . The semantics of this interface are similar to those of the `BeanPostProcessor` , with one major difference: `BeanFactoryPostProcessor` operates on the bean configuration metadata. That is, the Spring IoC container lets a `BeanFactoryPostProcessor` read the configuration metadata and potentially change it before the container instantiates any beans other than `BeanFactoryPostProcessor` instances.

You can configure multiple `BeanFactoryPostProcessor` instances, and you can control the order in which these instances run by setting the `order` property. However, you can only set this property if the `BeanFactoryPostProcessor` implements the `Ordered` interface. If you write your own `BeanFactoryPostProcessor` , you should consider implementing the `Ordered` interface, too. See the javadoc of the `BeanFactoryPostProcessor` and `Ordered` interfaces for more details.

A bean factory post-processor is automatically run when it is declared inside an `ApplicationContext` , in order to apply changes to the configuration metadata that define the container. Spring includes a number of predefined bean factory post-processors, such as `PropertySourcesPlaceholderConfigurer` . You can also use a custom `BeanFactoryPostProcessor` — for example, to register custom property editors.
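As a hedged sketch (the class name and what it does are hypothetical), a custom `BeanFactoryPostProcessor` might look like this:

```
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;

public class BeanCountingPostProcessor implements BeanFactoryPostProcessor {

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory)
            throws BeansException {
        // runs after all bean definitions have been loaded, but before any beans
        // (other than BeanFactoryPostProcessors themselves) are instantiated
        System.out.println(beanFactory.getBeanDefinitionCount() + " bean definitions registered");
    }
}
```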
An `ApplicationContext` automatically detects any beans that are deployed into it that implement the `BeanFactoryPostProcessor` interface. It uses these beans as bean factory post-processors, at the appropriate time. You can deploy these post-processor beans as you would any other bean.

As with `BeanPostProcessor` instances, you typically do not want to configure `BeanFactoryPostProcessor` instances for lazy initialization. If no other bean references a `Bean(Factory)PostProcessor`, that post-processor will not get instantiated at all. Thus, marking it for lazy initialization will be ignored, and the post-processor is instantiated eagerly even if you set the `default-lazy-init` attribute to `true` on the declaration of your `<beans/>` element.

### Example: The Class Name Substitution `PropertySourcesPlaceholderConfigurer`

You can use the `PropertySourcesPlaceholderConfigurer` to externalize property values from a bean definition in a separate file by using the standard Java `Properties` format. Doing so enables the person deploying an application to customize environment-specific properties, such as database URLs and passwords, without the complexity or risk of modifying the main XML definition file or files for the container.

Consider the following XML-based configuration metadata fragment, where a `DataSource` with placeholder values is defined:

```
<bean class="org.springframework.context.support.PropertySourcesPlaceholderConfigurer">
    <property name="locations" value="classpath:com/something/jdbc.properties"/>
</bean>

<bean id="dataSource" destroy-method="close"
        class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="${jdbc.driverClassName}"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
</bean>
```

The example shows properties configured from an external `Properties` file. At runtime, a `PropertySourcesPlaceholderConfigurer` is applied to the metadata that replaces some properties of the `DataSource`. The values to replace are specified as placeholders of the form `${property-name}`, which follows the Ant and log4j and JSP EL style. The actual values come from another file in the standard Java `Properties` format:

> jdbc.driverClassName=org.hsqldb.jdbcDriver jdbc.url=jdbc:hsqldb:hsql://production:9002 jdbc.username=sa jdbc.password=root

Therefore, the `${jdbc.username}` string is replaced at runtime with the value 'sa', and the same applies for other placeholder values that match keys in the properties file. The `PropertySourcesPlaceholderConfigurer` checks for placeholders in most properties and attributes of a bean definition. Furthermore, you can customize the placeholder prefix and suffix.

With the `context` namespace introduced in Spring 2.5, you can configure property placeholders with a dedicated configuration element. You can provide one or more locations as a comma-separated list in the `location` attribute, as the following example shows:

```
<context:property-placeholder location="classpath:com/something/jdbc.properties"/>
```

The `PropertySourcesPlaceholderConfigurer` not only looks for properties in the `Properties` file you specify. By default, if it cannot find a property in the specified properties files, it checks against Spring `Environment` properties and regular Java `System` properties.

### Example: The `PropertyOverrideConfigurer`

The `PropertyOverrideConfigurer`, another bean factory post-processor, resembles the `PropertySourcesPlaceholderConfigurer`, but unlike the latter, the original definitions can have default values or no values at all for bean properties. If an overriding `Properties` file does not have an entry for a certain bean property, the default context definition is used.

Note that the bean definition is not aware of being overridden, so it is not immediately obvious from the XML definition file that the override configurer is being used. In case of multiple `PropertyOverrideConfigurer` instances that define different values for the same bean property, the last one wins, due to the overriding mechanism.

Properties file configuration lines take the following format:

> beanName.property=value

The following listing shows an example of the format:

> dataSource.driverClassName=com.mysql.jdbc.Driver dataSource.url=jdbc:mysql:mydb

This example file can be used with a container definition that contains a bean called `dataSource` that has `driver` and `url` properties.

Compound property names are also supported, as long as every component of the path except the final property being overridden is already non-null (presumably initialized by the constructors).
In the following example, the `sammy` property of the `bob` property of the `fred` property of the `tom` bean is set to the scalar value `123`:

> tom.fred.bob.sammy=123

Specified override values are always literal values. They are not translated into bean references. This convention also applies when the original value in the XML bean definition specifies a bean reference.

With the `context` namespace introduced in Spring 2.5, it is possible to configure property overriding with a dedicated configuration element, as the following example shows:

```
<context:property-override location="classpath:override.properties"/>
```

## Customizing Instantiation Logic with a `FactoryBean`

You can implement the `org.springframework.beans.factory.FactoryBean` interface for objects that are themselves factories.

The `FactoryBean` interface is a point of pluggability into the Spring IoC container's instantiation logic. If you have complex initialization code that is better expressed in Java as opposed to a (potentially) verbose amount of XML, you can create your own `FactoryBean`, write the complex initialization inside that class, and then plug your custom `FactoryBean` into the container.

The `FactoryBean<T>` interface provides three methods:

* `T getObject()`: Returns an instance of the object this factory creates. The instance can possibly be shared, depending on whether this factory returns singletons or prototypes.
* `boolean isSingleton()`: Returns `true` if this `FactoryBean` returns singletons or `false` otherwise. The default implementation of this method returns `true`.
* `Class<?> getObjectType()`: Returns the object type returned by the `getObject()` method or `null` if the type is not known in advance.

The `FactoryBean` concept and interface are used in a number of places within the Spring Framework. More than 50 implementations of the `FactoryBean` interface ship with Spring itself.

When you need to ask a container for an actual `FactoryBean` instance itself instead of the bean it produces, prefix the bean's `id` with the ampersand symbol (`&`) when calling the `getBean()` method of the `ApplicationContext`. So, for a given `FactoryBean` with an `id` of `myBean`, invoking `getBean("myBean")` on the container returns the product of the `FactoryBean`, whereas invoking `getBean("&myBean")` returns the `FactoryBean` instance itself.

# Annotation-based Container Configuration

An alternative to XML setup is provided by annotation-based configuration, which relies on bytecode metadata for wiring up components instead of XML declarations. Instead of using XML to describe a bean wiring, the developer moves the configuration into the component class itself by using annotations on the relevant class, method, or field declaration.

As mentioned earlier (in the `AutowiredAnnotationBeanPostProcessor` example), using a `BeanPostProcessor` in conjunction with annotations is a common means of extending the Spring IoC container. For example, the `@Autowired` annotation provides the same capabilities as described in Autowiring Collaborators but with more fine-grained control and wider applicability. In addition, Spring provides support for JSR-250 annotations, such as `@PostConstruct` and `@PreDestroy`, as well as support for JSR-330 (Dependency Injection for Java) annotations contained in the `jakarta.inject` package such as `@Inject` and `@Named`. Details about those annotations can be found in the relevant section.
As always, you can register the post-processors as individual bean definitions, but they can also be implicitly registered by including the `<context:annotation-config/>` tag in an XML-based Spring configuration (notice the inclusion of the `context` namespace). The `<context:annotation-config/>` element implicitly registers the following post-processors:

* `ConfigurationClassPostProcessor`
* `AutowiredAnnotationBeanPostProcessor`
* `CommonAnnotationBeanPostProcessor`
* `PersistenceAnnotationBeanPostProcessor`
* `EventListenerMethodProcessor`

# Using `@Autowired`

You can apply the `@Autowired` annotation to constructors. You can also apply it to traditional setter methods, as the following example shows:

```
class SimpleMovieLister {

    @set:Autowired
    lateinit var movieFinder: MovieFinder

    // ...
}
```

You can also apply the annotation to methods with arbitrary names and multiple arguments, as the following example shows:

```
public class MovieRecommender {

    private MovieCatalog movieCatalog;

    private CustomerPreferenceDao customerPreferenceDao;

    @Autowired
    public void prepare(MovieCatalog movieCatalog,
            CustomerPreferenceDao customerPreferenceDao) {
        this.movieCatalog = movieCatalog;
        this.customerPreferenceDao = customerPreferenceDao;
    }

    // ...
}
```

You can apply `@Autowired` to fields as well and even mix it with constructors.

You can also instruct Spring to provide all beans of a particular type from the `ApplicationContext` by adding the `@Autowired` annotation to a field or method that expects an array of that type, as the following example shows:

```
@Autowired
private MovieCatalog[] movieCatalogs;
```

```
@Autowired
private lateinit var movieCatalogs: Array<MovieCatalog>
```

The same applies for typed collections, as the following example shows:

```
private Set<MovieCatalog> movieCatalogs;

@Autowired
public void setMovieCatalogs(Set<MovieCatalog> movieCatalogs) {
    this.movieCatalogs = movieCatalogs;
}
```

```
@Autowired
lateinit var movieCatalogs: Set<MovieCatalog>
```

Even typed `Map` instances can be autowired as long as the expected key type is `String`. The map values contain all beans of the expected type, and the keys contain the corresponding bean names, as the following example shows:

```
private Map<String, MovieCatalog> movieCatalogs;

@Autowired
public void setMovieCatalogs(Map<String, MovieCatalog> movieCatalogs) {
    this.movieCatalogs = movieCatalogs;
}
```

```
@Autowired
lateinit var movieCatalogs: Map<String, MovieCatalog>
```

By default, autowiring fails when no matching candidate beans are available for a given injection point. In the case of a declared array, collection, or map, at least one matching element is expected.

The default behavior is to treat annotated methods and fields as indicating required dependencies. You can change this behavior as demonstrated in the following example, enabling the framework to skip a non-satisfiable injection point by marking it as non-required (that is, by setting the `required` attribute in `@Autowired` to `false`):

```
@Autowired(required = false)
var movieFinder: MovieFinder? = null
```

Injected constructor and factory method arguments are a special case, since the `required` attribute in `@Autowired` has a somewhat different meaning due to Spring's constructor resolution algorithm that may potentially deal with multiple constructors. Constructor and factory method arguments are effectively required by default but with a few special rules in a single-constructor scenario, such as multi-element injection points (arrays, collections, maps) resolving to empty instances if no matching beans are available. This allows for a common implementation pattern where all dependencies can be declared in a unique multi-argument constructor — for example, declared as a single public constructor without an `@Autowired` annotation (as sketched below).
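The following is a minimal sketch of that single-constructor pattern, reusing the `MovieRecommender` and `CustomerPreferenceDao` names from the earlier examples (the class body is illustrative, not taken from the Spring distribution):

```
import java.util.List;
import org.springframework.stereotype.Component;

@Component
public class MovieRecommender {

    private final CustomerPreferenceDao customerPreferenceDao;
    private final List<MovieCatalog> movieCatalogs;

    // A single public constructor: no @Autowired annotation is needed.
    // The List resolves to an empty list if no MovieCatalog beans exist,
    // while customerPreferenceDao remains effectively required.
    public MovieRecommender(CustomerPreferenceDao customerPreferenceDao,
            List<MovieCatalog> movieCatalogs) {
        this.customerPreferenceDao = customerPreferenceDao;
        this.movieCatalogs = movieCatalogs;
    }
}
```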
Alternatively, you can express the non-required nature of a particular dependency through Java 8's `java.util.Optional`, as the following example shows:

```
public class SimpleMovieLister {

    @Autowired
    public void setMovieFinder(Optional<MovieFinder> movieFinder) {
        ...
    }
}
```

As of Spring Framework 5.0, you can also use a `@Nullable` annotation (of any kind in any package — for example, `javax.annotation.Nullable` from JSR-305) or just leverage Kotlin built-in null-safety support:

```
public class SimpleMovieLister {

    @Autowired
    public void setMovieFinder(@Nullable MovieFinder movieFinder) {
        ...
    }
}
```

```
@Autowired
var movieFinder: MovieFinder? = null
```

You can also use `@Autowired` for interfaces that are well-known resolvable dependencies: `BeanFactory`, `ApplicationContext`, `Environment`, `ResourceLoader`, `ApplicationEventPublisher`, and `MessageSource`. These interfaces and their extended interfaces, such as `ConfigurableApplicationContext` or `ResourcePatternResolver`, are automatically resolved, with no special setup necessary. The following example autowires an `ApplicationContext` object:

```
@Autowired
private ApplicationContext context;
```

```
@Autowired
lateinit var context: ApplicationContext
```

# Fine-tuning Annotation-based Autowiring with `@Primary`

Because autowiring by type may lead to multiple candidates, it is often necessary to have more control over the selection process. One way to accomplish this is with Spring's `@Primary` annotation. `@Primary` indicates that a particular bean should be given preference when multiple beans are candidates to be autowired to a single-valued dependency. If exactly one primary bean exists among the candidates, it becomes the autowired value.

Consider the following configuration that defines `firstMovieCatalog` as the primary `MovieCatalog`:

```
@Configuration
public class MovieConfiguration {

    @Bean
    @Primary
    public MovieCatalog firstMovieCatalog() { ... }

    @Bean
    public MovieCatalog secondMovieCatalog() { ... }

    // ...
}
```

```
@Configuration
class MovieConfiguration {

    @Bean
    @Primary
    fun firstMovieCatalog(): MovieCatalog { ... }

    @Bean
    fun secondMovieCatalog(): MovieCatalog { ... }

    // ...
}
```

With the preceding configuration, a `MovieRecommender` whose `MovieCatalog` dependency is annotated with plain `@Autowired` is autowired with the `firstMovieCatalog`.

The corresponding bean definitions follow:

```
<bean class="example.SimpleMovieCatalog" primary="true">
    <!-- inject any dependencies required by this bean -->
</bean>

<bean class="example.SimpleMovieCatalog">
    <!-- inject any dependencies required by this bean -->
</bean>

<bean id="movieRecommender" class="example.MovieRecommender"/>
```

# Fine-tuning Annotation-based Autowiring with Qualifiers

`@Primary` is an effective way to use autowiring by type with several instances when one primary candidate can be determined. When you need more control over the selection process, you can use Spring's `@Qualifier` annotation. You can associate qualifier values with specific arguments, narrowing the set of type matches so that a specific bean is chosen for each argument. In the simplest case, this can be a plain descriptive value (such as `main`).

You can also specify the `@Qualifier` annotation on individual constructor arguments or method parameters, as shown in the following example:

```
public class MovieRecommender {

    private MovieCatalog movieCatalog;

    private CustomerPreferenceDao customerPreferenceDao;

    @Autowired
    public void prepare(@Qualifier("main") MovieCatalog movieCatalog,
            CustomerPreferenceDao customerPreferenceDao) {
        this.movieCatalog = movieCatalog;
        this.customerPreferenceDao = customerPreferenceDao;
    }
}
```

The following example shows corresponding bean definitions:

```
<bean class="example.SimpleMovieCatalog">
    <qualifier value="main"/> (1)
    <!-- inject any dependencies required by this bean -->
</bean>

<bean class="example.SimpleMovieCatalog">
    <qualifier value="action"/> (2)
    <!-- inject any dependencies required by this bean -->
</bean>
```

1. The bean with the `main` qualifier value is wired with the argument that is qualified with the same value.
2. The bean with the `action` qualifier value is wired with the argument that is qualified with the same value.

For a fallback match, the bean name is considered a default qualifier value. Thus, you can define the bean with an `id` of `main` instead of the nested qualifier element, leading to the same matching result.
However, although you can use this convention to refer to specific beans by name, `@Autowired` is fundamentally about type-driven injection with optional semantic qualifiers. This means that qualifier values, even with the bean name fallback, always have narrowing semantics within the set of type matches. They do not semantically express a reference to a unique bean `id` . Good qualifier values are `main` or `EMEA` or `persistent` , expressing characteristics of a specific component that are independent from the bean `id` , which may be auto-generated in case of an anonymous bean definition such as the one in the preceding example. Qualifiers also apply to typed collections, as discussed earlier — for example, to `Set<MovieCatalog>` . In this case, all matching beans, according to the declared qualifiers, are injected as a collection. This implies that qualifiers do not have to be unique. Rather, they constitute filtering criteria. For example, you can define multiple `MovieCatalog` beans with the same qualifier value “action”, all of which are injected into a `Set<MovieCatalog>` annotated with `@Qualifier("action")` . That said, if you intend to express annotation-driven injection by name, do not primarily use `@Autowired` , even if it is capable of selecting by bean name among type-matching candidates. Instead, use the JSR-250 `@Resource` annotation, which is semantically defined to identify a specific target component by its unique name, with the declared type being irrelevant for the matching process. `@Autowired` has rather different semantics: After selecting candidate beans by type, the specified `String` qualifier value is considered within those type-selected candidates only (for example, matching an `account` qualifier against beans marked with the same qualifier label). For beans that are themselves defined as a collection, `Map` , or array type, `@Resource` is a fine solution, referring to the specific collection or array bean by unique name. That said, as of 4.3, you can match collection, `Map` , and array types through Spring’s `@Autowired` type matching algorithm as well, as long as the element type information is preserved in `@Bean` return type signatures or collection inheritance hierarchies. In this case, you can use qualifier values to select among same-typed collections, as outlined in the previous paragraph. As of 4.3, `@Autowired` also considers self references for injection (that is, references back to the bean that is currently injected). Note that self injection is a fallback. Regular dependencies on other components always have precedence. In that sense, self references do not participate in regular candidate selection and are therefore in particular never primary. On the contrary, they always end up as lowest precedence. In practice, you should use self references as a last resort only (for example, for calling other methods on the same instance through the bean’s transactional proxy). Consider factoring out the affected methods to a separate delegate bean in such a scenario. Alternatively, you can use `@Resource` , which may obtain a proxy back to the current bean by its unique name. `@Autowired` applies to fields, constructors, and multi-argument methods, allowing for narrowing through qualifier annotations at the parameter level. In contrast, `@Resource` is supported only for fields and bean property setter methods with a single argument. As a consequence, you should stick with qualifiers if your injection target is a constructor or a multi-argument method. 
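To summarize the distinction, here is a small side-by-side sketch (the bean name `actionCatalog` and the qualifier label `action` are illustrative, following the naming used above):

```
import jakarta.annotation.Resource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;

public class MovieRecommender {

    // By-name semantics: looks up the unique bean named "actionCatalog";
    // the declared type plays no role in the matching process.
    @Resource(name = "actionCatalog")
    private MovieCatalog catalogByName;

    // By-type semantics with narrowing: first collects all MovieCatalog
    // candidates, then narrows them by the "action" qualifier label.
    @Autowired
    @Qualifier("action")
    private MovieCatalog catalogByTypeAndQualifier;
}
```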
You can create your own custom qualifier annotations. To do so, define an annotation and provide the `@Qualifier` annotation within your definition, as the following example shows:

```
@Target({ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface Genre {

    String value();
}
```

```
@Target(AnnotationTarget.FIELD, AnnotationTarget.VALUE_PARAMETER)
@Retention(AnnotationRetention.RUNTIME)
@Qualifier
annotation class Genre(val value: String)
```

Then you can provide the custom qualifier on autowired fields and parameters, as the following example shows:

```
public class MovieRecommender {

    @Autowired
    @Genre("Action")
    private MovieCatalog actionCatalog;

    private MovieCatalog comedyCatalog;

    @Autowired
    public void setComedyCatalog(@Genre("Comedy") MovieCatalog comedyCatalog) {
        this.comedyCatalog = comedyCatalog;
    }

    // ...
}
```

```
class MovieRecommender {

    @Autowired
    @Genre("Action")
    private lateinit var actionCatalog: MovieCatalog

    private lateinit var comedyCatalog: MovieCatalog

    @Autowired
    fun setComedyCatalog(@Genre("Comedy") comedyCatalog: MovieCatalog) {
        this.comedyCatalog = comedyCatalog
    }

    // ...
}
```

Next, you can provide the information for the candidate bean definitions. You can add `<qualifier/>` tags as sub-elements of the `<bean/>` tag and then specify the `type` and `value` to match your custom qualifier annotations. The type is matched against the fully-qualified class name of the annotation. Alternatively, as a convenience if no risk of conflicting names exists, you can use the short class name. The following example demonstrates both approaches:

```
<bean class="example.SimpleMovieCatalog">
    <qualifier type="Genre" value="Action"/>
    <!-- inject any dependencies required by this bean -->
</bean>

<bean class="example.SimpleMovieCatalog">
    <qualifier type="example.Genre" value="Comedy"/>
    <!-- inject any dependencies required by this bean -->
</bean>
```

In Classpath Scanning and Managed Components, you can see an annotation-based alternative to providing the qualifier metadata in XML. Specifically, see Providing Qualifier Metadata with Annotations.

In some cases, using an annotation without a value may suffice. This can be useful when the annotation serves a more generic purpose and can be applied across several different types of dependencies. For example, you may provide an offline catalog that can be searched when no Internet connection is available. First, define the simple annotation, as the following example shows:

```
@Target(AnnotationTarget.FIELD, AnnotationTarget.VALUE_PARAMETER)
@Retention(AnnotationRetention.RUNTIME)
@Qualifier
annotation class Offline
```

Then add the annotation to the field or property to be autowired, as shown in the following example:

```
@Autowired
@Offline
private MovieCatalog offlineCatalog;
```

```
@Autowired
@Offline
private lateinit var offlineCatalog: MovieCatalog
```

Now the bean definition only needs a qualifier `type`, as shown in the following example:

```
<bean class="example.SimpleMovieCatalog">
    <qualifier type="Offline"/> (1)
    <!-- inject any dependencies required by this bean -->
</bean>
```

1. This element specifies the qualifier.

You can also define custom qualifier annotations that accept named attributes in addition to or instead of the simple `value` attribute. If multiple attribute values are then specified on a field or parameter to be autowired, a bean definition must match all such attribute values to be considered an autowire candidate.
As an example, consider the following annotation definition:

```
@Target({ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface MovieQualifier {

    String genre();

    Format format();
}
```

```
@Target(AnnotationTarget.FIELD, AnnotationTarget.VALUE_PARAMETER)
@Retention(AnnotationRetention.RUNTIME)
@Qualifier
annotation class MovieQualifier(val genre: String, val format: Format)
```

In this case, `Format` is an enum with the values `VHS`, `DVD`, and `BLURAY`.

The fields to be autowired are annotated with the custom qualifier and include values for both attributes: `genre` and `format`, as the following example shows:

```
public class MovieRecommender {

    @Autowired
    @MovieQualifier(format = Format.VHS, genre = "Action")
    private MovieCatalog actionVhsCatalog;

    @Autowired
    @MovieQualifier(format = Format.VHS, genre = "Comedy")
    private MovieCatalog comedyVhsCatalog;

    @Autowired
    @MovieQualifier(format = Format.DVD, genre = "Action")
    private MovieCatalog actionDvdCatalog;

    @Autowired
    @MovieQualifier(format = Format.BLURAY, genre = "Comedy")
    private MovieCatalog comedyBluRayCatalog;

    // ...
}
```

```
class MovieRecommender {

    @Autowired
    @MovieQualifier(format = Format.VHS, genre = "Action")
    private lateinit var actionVhsCatalog: MovieCatalog

    @Autowired
    @MovieQualifier(format = Format.VHS, genre = "Comedy")
    private lateinit var comedyVhsCatalog: MovieCatalog

    @Autowired
    @MovieQualifier(format = Format.DVD, genre = "Action")
    private lateinit var actionDvdCatalog: MovieCatalog

    @Autowired
    @MovieQualifier(format = Format.BLURAY, genre = "Comedy")
    private lateinit var comedyBluRayCatalog: MovieCatalog

    // ...
}
```

Finally, the bean definitions should contain matching qualifier values. This example also demonstrates that you can use bean meta attributes instead of the `<qualifier/>` elements. If available, the `<qualifier/>` element and its attributes take precedence, but the autowiring mechanism falls back on the values provided within the `<meta/>` tags if no such qualifier is present, as in the last two bean definitions in the following example:

```
<bean class="example.SimpleMovieCatalog">
    <qualifier type="MovieQualifier">
        <attribute key="format" value="VHS"/>
        <attribute key="genre" value="Action"/>
    </qualifier>
    <!-- inject any dependencies required by this bean -->
</bean>

<bean class="example.SimpleMovieCatalog">
    <qualifier type="MovieQualifier">
        <attribute key="format" value="VHS"/>
        <attribute key="genre" value="Comedy"/>
    </qualifier>
    <!-- inject any dependencies required by this bean -->
</bean>

<bean class="example.SimpleMovieCatalog">
    <meta key="format" value="DVD"/>
    <meta key="genre" value="Action"/>
    <!-- inject any dependencies required by this bean -->
</bean>

<bean class="example.SimpleMovieCatalog">
    <meta key="format" value="BLURAY"/>
    <meta key="genre" value="Comedy"/>
    <!-- inject any dependencies required by this bean -->
</bean>
```

# Using Generics as Autowiring Qualifiers

In addition to the `@Qualifier` annotation, you can use Java generic types as an implicit form of qualification. For example, suppose you have the following configuration:

```
@Configuration
public class MyConfiguration {

    @Bean
    public StringStore stringStore() {
        return new StringStore();
    }

    @Bean
    public IntegerStore integerStore() {
        return new IntegerStore();
    }
}
```

```
@Configuration
class MyConfiguration {

    @Bean
    fun stringStore() = StringStore()

    @Bean
    fun integerStore() = IntegerStore()
}
```

Assuming that the preceding beans implement a generic interface (that is, `Store<String>` and `Store<Integer>`), you can `@Autowire` the `Store` interface and the generic is used as a qualifier, as the following example shows:

```
@Autowired
private Store<String> s1; // <String> qualifier, injects the stringStore bean

@Autowired
private Store<Integer> s2; // <Integer> qualifier, injects the integerStore bean
```

```
@Autowired
private lateinit var s1: Store<String> // <String> qualifier, injects the stringStore bean

@Autowired
private lateinit var s2: Store<Integer> // <Integer> qualifier, injects the integerStore bean
```

Generic qualifiers also apply when autowiring lists, `Map` instances, and arrays.
The following example autowires a generic `List`:

```
// Inject all Store beans as long as they have an <Integer> generic
// Store<String> beans will not appear in this list
@Autowired
private List<Store<Integer>> s;
```

```
// Inject all Store beans as long as they have an <Integer> generic
// Store<String> beans will not appear in this list
@Autowired
private lateinit var s: List<Store<Integer>>
```

# Using `CustomAutowireConfigurer`

`CustomAutowireConfigurer` is a `BeanFactoryPostProcessor` that lets you register your own custom qualifier annotation types, even if they are not annotated with Spring's `@Qualifier` annotation. The following example shows how to use `CustomAutowireConfigurer`:

```
<bean id="customAutowireConfigurer"
        class="org.springframework.beans.factory.annotation.CustomAutowireConfigurer">
    <property name="customQualifierTypes">
        <set>
            <value>example.CustomQualifier</value>
        </set>
    </property>
</bean>
```

The `AutowireCandidateResolver` determines autowire candidates by:

* The `autowire-candidate` value of each bean definition
* Any `default-autowire-candidates` patterns available on the `<beans/>` element
* The presence of `@Qualifier` annotations and any custom annotations registered with the `CustomAutowireConfigurer`

When multiple beans qualify as autowire candidates, the determination of a "primary" is as follows: If exactly one bean definition among the candidates has a `primary` attribute set to `true`, it is selected.

# Injection with `@Resource`

Spring also supports injection by using the JSR-250 `@Resource` annotation (`jakarta.annotation.Resource`) on fields or bean property setter methods. This is a common pattern in Jakarta EE: for example, in JSF-managed beans and JAX-WS endpoints. Spring supports this pattern for Spring-managed objects as well.

`@Resource` takes a name attribute. By default, Spring interprets that value as the bean name to be injected. In other words, it follows by-name semantics, as demonstrated in the following example:

```
public class SimpleMovieLister {

    private MovieFinder movieFinder;

    @Resource(name="myMovieFinder") (1)
    public void setMovieFinder(MovieFinder movieFinder) {
        this.movieFinder = movieFinder;
    }
}
```

```
class SimpleMovieLister {

    @Resource(name="myMovieFinder") (1)
    private lateinit var movieFinder: MovieFinder
}
```

1. The name provided with the annotation is resolved as a bean name by the `CommonAnnotationBeanPostProcessor`, through the `ApplicationContext` of which it is aware.

If no name is explicitly specified, the default name is derived from the field name or setter method. In case of a field, it takes the field name. In case of a setter method, it takes the bean property name. The following example is going to have the bean named `movieFinder` injected into its setter method:

```
public class SimpleMovieLister {

    private MovieFinder movieFinder;

    @Resource
    public void setMovieFinder(MovieFinder movieFinder) {
        this.movieFinder = movieFinder;
    }
}
```

In the exclusive case of `@Resource` usage with no explicit name specified, and similar to `@Autowired`, `@Resource` finds a primary type match instead of a specific named bean and resolves well-known resolvable dependencies: the `BeanFactory`, `ApplicationContext`, `ResourceLoader`, `ApplicationEventPublisher`, and `MessageSource` interfaces.
Thus, in the following example, the `customerPreferenceDao` field first looks for a bean named "customerPreferenceDao" and then falls back to a primary type match for the type `CustomerPreferenceDao`:

```
public class MovieRecommender {

    @Resource
    private CustomerPreferenceDao customerPreferenceDao;

    @Resource
    private ApplicationContext context; (1)

    // ...
}
```

```
class MovieRecommender {

    @Resource
    private lateinit var customerPreferenceDao: CustomerPreferenceDao

    @Resource
    private lateinit var context: ApplicationContext (1)

    // ...
}
```

1. The `context` field is injected based on the known resolvable dependency type: `ApplicationContext`.

# Using `@Value`

`@Value` is typically used to inject externalized properties:

```
@Component
class MovieRecommender(@Value("\${catalog.name}") private val catalog: String)
```

With the following configuration:

```
@Configuration
@PropertySource("classpath:application.properties")
public class AppConfig { }
```

```
@Configuration
@PropertySource("classpath:application.properties")
class AppConfig
```

And the following `application.properties` file:

```
catalog.name=MovieCatalog
```

In that case, the `catalog` parameter and field will be equal to the `MovieCatalog` value.

A default lenient embedded value resolver is provided by Spring. It will try to resolve the property value, and, if it cannot be resolved, the property name (for example, `${catalog.name}`) will be injected as the value. If you want to maintain strict control over nonexistent values, you should declare a `PropertySourcesPlaceholderConfigurer` bean, as the following example shows:

```
@Configuration
public class AppConfig {

    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyPlaceholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}
```

```
@Configuration
class AppConfig {

    companion object {

        @Bean
        @JvmStatic
        fun propertyPlaceholderConfigurer() = PropertySourcesPlaceholderConfigurer()
    }
}
```

When configuring a `PropertySourcesPlaceholderConfigurer` by using JavaConfig, the `@Bean` method must be `static`.

Using the above configuration ensures Spring initialization failure if any `${}` placeholder could not be resolved. It is also possible to use methods like `setPlaceholderPrefix`, `setPlaceholderSuffix`, or `setValueSeparator` to customize placeholders.

Spring Boot configures by default a `PropertySourcesPlaceholderConfigurer` bean that gets properties from `application.properties` and `application.yml` files.

Built-in converter support provided by Spring allows simple type conversion (to `Integer` or `int`, for example) to be automatically handled. Multiple comma-separated values can be automatically converted to a `String` array without extra effort.

It is possible to provide a default value as follows:

```
@Component
class MovieRecommender(@Value("\${catalog.name:defaultCatalog}") private val catalog: String)
```

A Spring `BeanPostProcessor` uses a `ConversionService` behind the scenes to handle the process for converting the `String` value in `@Value` to the target type.
If you want to provide conversion support for your own custom type, you can provide your own `ConversionService` bean instance, as the following example shows:

```
@Configuration
public class AppConfig {

    @Bean
    public ConversionService conversionService() {
        DefaultFormattingConversionService conversionService = new DefaultFormattingConversionService();
        conversionService.addConverter(new MyCustomConverter());
        return conversionService;
    }
}
```

```
@Configuration
class AppConfig {

    @Bean
    fun conversionService(): ConversionService {
        return DefaultFormattingConversionService().apply {
            addConverter(MyCustomConverter())
        }
    }
}
```

When `@Value` contains a SpEL expression, the value is dynamically computed at runtime, as the following example shows:

```
@Component
public class MovieRecommender {

    private final String catalog;

    public MovieRecommender(@Value("#{systemProperties['user.catalog'] + 'Catalog' }") String catalog) {
        this.catalog = catalog;
    }
}
```

```
@Component
class MovieRecommender(
    @Value("#{systemProperties['user.catalog'] + 'Catalog' }") private val catalog: String)
```

SpEL also enables the use of more complex data structures:

```
@Component
public class MovieRecommender {

    private final Map<String, Integer> countOfMoviesPerCatalog;

    public MovieRecommender(
            @Value("#{{'Thriller': 100, 'Comedy': 300}}") Map<String, Integer> countOfMoviesPerCatalog) {
        this.countOfMoviesPerCatalog = countOfMoviesPerCatalog;
    }
}
```

```
@Component
class MovieRecommender(
    @Value("#{{'Thriller': 100, 'Comedy': 300}}") private val countOfMoviesPerCatalog: Map<String, Int>)
```

# Using `@PostConstruct` and `@PreDestroy`

The `CommonAnnotationBeanPostProcessor` not only recognizes the `@Resource` annotation but also the JSR-250 lifecycle annotations: `jakarta.annotation.PostConstruct` and `jakarta.annotation.PreDestroy`. Introduced in Spring 2.5, the support for these annotations offers an alternative to the lifecycle callback mechanism described in initialization callbacks and destruction callbacks. Provided that the `CommonAnnotationBeanPostProcessor` is registered within the Spring `ApplicationContext`, a method carrying one of these annotations is invoked at the same point in the lifecycle as the corresponding Spring lifecycle interface method or explicitly declared callback method. In the following example, the cache is pre-populated upon initialization and cleared upon destruction:

```
public class CachingMovieLister {

    @PostConstruct
    public void populateMovieCache() {
        // populates the movie cache upon initialization...
    }

    @PreDestroy
    public void clearMovieCache() {
        // clears the movie cache upon destruction...
    }
}
```

```
class CachingMovieLister {

    @PostConstruct
    fun populateMovieCache() {
        // populates the movie cache upon initialization...
    }

    @PreDestroy
    fun clearMovieCache() {
        // clears the movie cache upon destruction...
    }
}
```

For details about the effects of combining various lifecycle mechanisms, see Combining Lifecycle Mechanisms.

# Classpath Scanning and Managed Components

Most examples in this chapter use XML to specify the configuration metadata that produces each `BeanDefinition` within the Spring container. The previous section (Annotation-based Container Configuration) demonstrates how to provide a lot of the configuration metadata through source-level annotations. Even in those examples, however, the "base" bean definitions are explicitly defined in the XML file, while the annotations drive only the dependency injection. This section describes an option for implicitly detecting the candidate components by scanning the classpath. Candidate components are classes that match against a filter criteria and have a corresponding bean definition registered with the container. This removes the need to use XML to perform bean registration. Instead, you can use annotations (for example, `@Component`), AspectJ type expressions, or your own custom filter criteria to select which classes have bean definitions registered with the container.
## `@Component` and Further Stereotype Annotations

The `@Repository` annotation is a marker for any class that fulfills the role or stereotype of a repository (also known as Data Access Object or DAO). Among the uses of this marker is the automatic translation of exceptions, as described in Exception Translation.

Spring provides further stereotype annotations: `@Component`, `@Service`, and `@Controller`. `@Component` is a generic stereotype for any Spring-managed component. `@Repository`, `@Service`, and `@Controller` are specializations of `@Component` for more specific use cases (in the persistence, service, and presentation layers, respectively). Therefore, you can annotate your component classes with `@Component`, but, by annotating them with `@Repository`, `@Service`, or `@Controller` instead, your classes are more properly suited for processing by tools or associating with aspects. For example, these stereotype annotations make ideal targets for pointcuts. `@Repository`, `@Service`, and `@Controller` can also carry additional semantics in future releases of the Spring Framework. Thus, if you are choosing between using `@Component` or `@Service` for your service layer, `@Service` is clearly the better choice. Similarly, as stated earlier, `@Repository` is already supported as a marker for automatic exception translation in your persistence layer.

## Using Meta-annotations and Composed Annotations

Many of the annotations provided by Spring can be used as meta-annotations in your own code. A meta-annotation is an annotation that can be applied to another annotation. For example, the `@Service` annotation mentioned earlier is meta-annotated with `@Component`, as the following example shows:

```
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Component (1)
public @interface Service {

    // ...
}
```

```
@Target(AnnotationTarget.TYPE)
@Retention(AnnotationRetention.RUNTIME)
@MustBeDocumented
@Component (1)
annotation class Service
```

1. The `@Component` meta-annotation causes `@Service` to be treated in the same way as `@Component`.

You can also combine meta-annotations to create "composed annotations". For example, the `@RestController` annotation from Spring MVC is composed of `@Controller` and `@ResponseBody`.

In addition, composed annotations can optionally redeclare attributes from meta-annotations to allow customization. This can be particularly useful when you want to only expose a subset of the meta-annotation's attributes. For example, Spring's `@SessionScope` annotation hard-codes the scope name to `session` but still allows customization of the `proxyMode`. The following listing shows the definition of the `SessionScope` annotation:

```
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Scope(WebApplicationContext.SCOPE_SESSION)
public @interface SessionScope {

    /**
     * Alias for {@link Scope#proxyMode}.
     * <p>Defaults to {@link ScopedProxyMode#TARGET_CLASS}.
     */
    @AliasFor(annotation = Scope.class)
    ScopedProxyMode proxyMode() default ScopedProxyMode.TARGET_CLASS;
}
```

```
@Target(AnnotationTarget.TYPE, AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
@MustBeDocumented
@Scope(WebApplicationContext.SCOPE_SESSION)
annotation class SessionScope(
    @get:AliasFor(annotation = Scope::class)
    val proxyMode: ScopedProxyMode = ScopedProxyMode.TARGET_CLASS
)
```

You can then use `@SessionScope` on a component without declaring the `proxyMode`, or you can override the `proxyMode` value by declaring the attribute explicitly (for example, `@SessionScope(proxyMode = ScopedProxyMode.INTERFACES)`).

## Automatically Detecting Classes and Registering Bean Definitions

Spring can automatically detect stereotyped classes and register corresponding `BeanDefinition` instances with the `ApplicationContext`. For example, the following two classes are eligible for such autodetection:

```
@Service
public class SimpleMovieLister {

    private MovieFinder movieFinder;

    public SimpleMovieLister(MovieFinder movieFinder) {
        this.movieFinder = movieFinder;
    }
}

@Repository
public class JpaMovieFinder implements MovieFinder {
    // implementation elided for clarity
}
```

```
@Service
class SimpleMovieLister(private val movieFinder: MovieFinder)

@Repository
class JpaMovieFinder : MovieFinder {
    // implementation elided for clarity
}
```

To autodetect these classes and register the corresponding beans, you need to add `@ComponentScan` to your `@Configuration` class, where the `basePackages` attribute is a common parent package for the two classes. (Alternatively, you can specify a comma- or semicolon- or space-separated list that includes the parent package of each class.)

For brevity, such a declaration can use the `value` attribute of the annotation (that is, `@ComponentScan("org.example")`).

The following alternative uses XML:

```
<context:component-scan base-package="org.example"/>
```

Furthermore, the `AutowiredAnnotationBeanPostProcessor` and `CommonAnnotationBeanPostProcessor` are both implicitly included when you use the component-scan element. That means that the two components are autodetected and wired together — all without any bean configuration metadata provided in XML.

You can disable the registration of `AutowiredAnnotationBeanPostProcessor` and `CommonAnnotationBeanPostProcessor` by including the `annotation-config` attribute with a value of `false`.

## Using Filters to Customize Scanning

By default, classes annotated with `@Component`, `@Repository`, `@Service`, `@Controller`, `@Configuration`, or a custom annotation that itself is annotated with `@Component` are the only detected candidate components. However, you can modify and extend this behavior by applying custom filters. Add them as `includeFilters` or `excludeFilters` attributes of the `@ComponentScan` annotation (or as `<context:include-filter />` or `<context:exclude-filter />` child elements of the `<context:component-scan>` element in XML configuration). Each filter element requires the `type` and `expression` attributes. The following table describes the filtering options:

| Filter Type | Example Expression | Description |
| --- | --- | --- |
| annotation (default) | `org.example.SomeAnnotation` | An annotation to be present or meta-present at the type level in target components. |
| assignable | `org.example.SomeClass` | A class (or interface) that the target components are assignable to (extend or implement). |
| aspectj | `org.example..*Service+` | An AspectJ type expression to be matched by the target components. |
| regex | `org\.example\.Default.*` | A regex expression to be matched by the target components' class names. |
| custom | `org.example.MyTypeFilter` | A custom implementation of the `org.springframework.core.type.TypeFilter` interface. |

The following example shows the configuration ignoring all `@Repository` annotations and using "stub" repositories instead:

```
@Configuration
@ComponentScan(basePackages = "org.example",
        includeFilters = @Filter(type = FilterType.REGEX, pattern = ".*Stub.*Repository"),
        excludeFilters = @Filter(Repository.class))
public class AppConfig {
    // ...
}
```

```
@Configuration
@ComponentScan(basePackages = ["org.example"],
        includeFilters = [Filter(type = FilterType.REGEX, pattern = [".*Stub.*Repository"])],
        excludeFilters = [Filter(Repository::class)])
class AppConfig {
    // ...
}
```

The following listing shows the equivalent XML:

```
<beans>
    <context:component-scan base-package="org.example">
        <context:include-filter type="regex" expression=".*Stub.*Repository"/>
        <context:exclude-filter type="annotation" expression="org.springframework.stereotype.Repository"/>
    </context:component-scan>
</beans>
```

You can also disable the default filters by setting `useDefaultFilters=false` on the annotation or by providing `use-default-filters="false"` as an attribute of the `<component-scan/>` element. This effectively disables automatic detection of classes annotated or meta-annotated with `@Component`, `@Repository`, `@Service`, `@Controller`, or `@Configuration`.

## Defining Bean Metadata within Components

Spring components can also contribute bean definition metadata to the container. You can do this with the same `@Bean` annotation used to define bean metadata within `@Configuration` annotated classes. The following example shows how to do so:

```
@Component
public class FactoryMethodComponent {

    @Bean
    @Qualifier("public")
    public TestBean publicInstance() {
        return new TestBean("publicInstance");
    }

    public void doWork() {
        // Component method implementation omitted
    }
}
```

```
@Component
class FactoryMethodComponent {

    @Bean
    @Qualifier("public")
    fun publicInstance() = TestBean("publicInstance")

    fun doWork() {
        // Component method implementation omitted
    }
}
```

The preceding class is a Spring component that has application-specific code in its `doWork()` method. However, it also contributes a bean definition that has a factory method referring to the method `publicInstance()`. The `@Bean` annotation identifies the factory method and other bean definition properties, such as a qualifier value through the `@Qualifier` annotation. Other method-level annotations that can be specified are `@Scope`, `@Lazy`, and custom qualifier annotations.

In addition to its role for component initialization, you can also place the `@Lazy` annotation on injection points marked with `@Autowired` or `@Inject`. In this context, it leads to the injection of a lazy-resolution proxy.

Autowired fields and methods are supported, as previously discussed, with additional support for autowiring of `@Bean` methods. The following example shows how to do so:

```
@Component
public class FactoryMethodComponent {

    private static int i;

    // use of a custom qualifier and autowiring of method parameters
    @Bean
    protected TestBean protectedInstance(
            @Qualifier("public") TestBean spouse,
            @Value("#{privateInstance.age}") String country) {
        TestBean tb = new TestBean("protectedInstance", 1);
        tb.setSpouse(spouse);
        tb.setCountry(country);
        return tb;
    }

    @Bean
    private TestBean privateInstance() {
        return new TestBean("privateInstance", i++);
    }

    @Bean
    @RequestScope
    public TestBean requestScopedInstance() {
        return new TestBean("requestScopedInstance", 3);
    }
}
```

```
@Component
class FactoryMethodComponent {

    companion object {
        private var i: Int = 0
    }

    // use of a custom qualifier and autowiring of method parameters
    @Bean
    protected fun protectedInstance(
            @Qualifier("public") spouse: TestBean,
            @Value("#{privateInstance.age}") country: String) = TestBean("protectedInstance", 1).apply {
        this.spouse = spouse
        this.country = country
    }

    @Bean
    private fun privateInstance() = TestBean("privateInstance", i++)

    @Bean
    @RequestScope
    fun requestScopedInstance() = TestBean("requestScopedInstance", 3)
}
```

The example autowires the `String` method parameter `country` to the value of the `age` property on another bean named `privateInstance`. A Spring Expression Language element defines the value of the property through the notation `#{ <expression> }`. For `@Value` annotations, an expression resolver is preconfigured to look for bean names when resolving expression text.

As of Spring Framework 4.3, you may also declare a factory method parameter of type `InjectionPoint` (or its more specific subclass: `DependencyDescriptor`) to access the requesting injection point that triggers the creation of the current bean. Note that this applies only to the actual creation of bean instances, not to the injection of existing instances. As a consequence, this feature makes most sense for beans of prototype scope.
For other scopes, the factory method only ever sees the injection point that triggered the creation of a new bean instance in the given scope (for example, the dependency that triggered the creation of a lazy singleton bean). You can use the provided injection point metadata with semantic care in such scenarios. The following example shows how to use `InjectionPoint`:

```
@Component
public class FactoryMethodComponent {

    @Bean
    @Scope("prototype")
    public TestBean prototypeInstance(InjectionPoint injectionPoint) {
        return new TestBean("prototypeInstance for " + injectionPoint.getMember());
    }
}
```

```
@Component
class FactoryMethodComponent {

    @Bean
    @Scope("prototype")
    fun prototypeInstance(injectionPoint: InjectionPoint) =
            TestBean("prototypeInstance for ${injectionPoint.member}")
}
```

The `@Bean` methods in a regular Spring component are processed differently than their counterparts inside a Spring `@Configuration` class. The difference is that `@Component` classes are not enhanced with CGLIB to intercept the invocation of methods and fields. CGLIB proxying is the means by which invoking methods or fields within `@Bean` methods in `@Configuration` classes creates bean metadata references to collaborating objects. Such methods are not invoked with normal Java semantics but rather go through the container in order to provide the usual lifecycle management and proxying of Spring beans, even when referring to other beans through programmatic calls to `@Bean` methods. In contrast, invoking a method or field in a `@Bean` method within a plain `@Component` class has standard Java semantics, with no special CGLIB processing or other constraints applying.

## Naming Autodetected Components

When a component is autodetected as part of the scanning process, its bean name is generated by the `BeanNameGenerator` strategy known to that scanner. By default, any Spring stereotype annotation (`@Component`, `@Repository`, `@Service`, and `@Controller`) that contains a name `value` thereby provides that name to the corresponding bean definition. If such an annotation contains no name `value` or for any other detected component (such as those discovered by custom filters), the default bean name generator returns the uncapitalized non-qualified class name. For example, if a `SimpleMovieLister` component declared `@Service("myMovieLister")` and a `MovieFinderImpl` component declared plain `@Repository`, the names would be `myMovieLister` and `movieFinderImpl`, respectively.

If you do not want to rely on the default bean-naming strategy, you can provide a custom bean-naming strategy. First, implement the `BeanNameGenerator` interface, and be sure to include a default no-arg constructor. Then, provide the fully qualified class name when configuring the scanner, as the following example annotation and bean definition show.

If you run into naming conflicts due to multiple autodetected components having the same non-qualified class name (i.e., classes with identical names but residing in different packages), you may need to configure a `BeanNameGenerator` that defaults to the fully qualified class name. As of Spring Framework 5.2.3, the `FullyQualifiedAnnotationBeanNameGenerator` located in package `org.springframework.context.annotation` can be used for such purposes.

```
@Configuration
@ComponentScan(basePackages = "org.example", nameGenerator = MyNameGenerator.class)
public class AppConfig {
    // ...
}
```

```
@Configuration
@ComponentScan(basePackages = ["org.example"], nameGenerator = MyNameGenerator::class)
class AppConfig {
    // ...
}
```

```
<beans>
    <context:component-scan base-package="org.example"
        name-generator="org.example.MyNameGenerator" />
</beans>
```

As a general rule, consider specifying the name with the annotation whenever other components may be making explicit references to it. On the other hand, the auto-generated names are adequate whenever the container is responsible for wiring.
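For illustration, the following is a minimal sketch of what the `MyNameGenerator` class referenced in the preceding configuration might look like, here simply using the fully qualified class name as the bean name (the implementation details are assumptions, not Spring-provided code):

```
package org.example;

import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.BeanNameGenerator;

public class MyNameGenerator implements BeanNameGenerator {

    // A default no-arg constructor is required: the scanner instantiates
    // the generator reflectively from its configured class name.
    public MyNameGenerator() {
    }

    @Override
    public String generateBeanName(BeanDefinition definition, BeanDefinitionRegistry registry) {
        String beanClassName = definition.getBeanClassName();
        if (beanClassName == null) {
            throw new IllegalStateException("No bean class name set on " + definition);
        }
        // Using the fully qualified name avoids clashes between identically
        // named classes that reside in different packages.
        return beanClassName;
    }
}
```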
## Providing a Scope for Autodetected Components

As with Spring-managed components in general, the default and most common scope for autodetected components is `singleton`. However, sometimes you need a different scope that can be specified by the `@Scope` annotation. You can provide the name of the scope within the annotation (for example, `@Scope("prototype")` on the component class).

`@Scope` annotations are only introspected on the concrete bean class (for annotated components) or the factory method (for `@Bean` methods). In contrast to XML bean definitions, there is no notion of bean definition inheritance, and inheritance hierarchies at the class level are irrelevant for metadata purposes.

For details on web-specific scopes such as "request" or "session" in a Spring context, see Request, Session, Application, and WebSocket Scopes. As with the pre-built annotations for those scopes, you may also compose your own scoping annotations by using Spring's meta-annotation approach: for example, a custom annotation meta-annotated with `@Scope("prototype")`, possibly also declaring a custom scoped-proxy mode.

To provide a custom strategy for scope resolution rather than relying on the annotation-based approach, you can implement the `ScopeMetadataResolver` interface. Be sure to include a default no-arg constructor. Then you can provide the fully qualified class name when configuring the scanner, as the following example shows:

```
@Configuration
@ComponentScan(basePackages = "org.example", scopeResolver = MyScopeResolver.class)
public class AppConfig {
    // ...
}
```

```
@Configuration
@ComponentScan(basePackages = ["org.example"], scopeResolver = MyScopeResolver::class)
class AppConfig {
    // ...
}
```

When using certain non-singleton scopes, it may be necessary to generate proxies for the scoped objects. The reasoning is described in Scoped Beans as Dependencies. For this purpose, a scoped-proxy attribute is available on the component-scan element. The three possible values are: `no`, `interfaces`, and `targetClass`. For example, the following configuration results in standard JDK dynamic proxies:

```
@Configuration
@ComponentScan(basePackages = "org.example", scopedProxy = ScopedProxyMode.INTERFACES)
public class AppConfig {
    // ...
}
```

```
@Configuration
@ComponentScan(basePackages = ["org.example"], scopedProxy = ScopedProxyMode.INTERFACES)
class AppConfig {
    // ...
}
```

## Providing Qualifier Metadata with Annotations

The `@Qualifier` annotation is discussed in Fine-tuning Annotation-based Autowiring with Qualifiers. The examples in that section demonstrate the use of the `@Qualifier` annotation and custom qualifier annotations to provide fine-grained control when you resolve autowire candidates. Because those examples were based on XML bean definitions, the qualifier metadata was provided on the candidate bean definitions by using the `qualifier` or `meta` child elements of the `bean` element in the XML. When relying upon classpath scanning for auto-detection of components, you can provide the qualifier metadata with type-level annotations on the candidate class. The following three examples demonstrate this technique:

```
@Component
@Qualifier("Action")
public class ActionMovieCatalog implements MovieCatalog {
    // ...
}
```

```
@Component
@Qualifier("Action")
class ActionMovieCatalog : MovieCatalog
```

```
@Component
@Genre("Action")
public class ActionMovieCatalog implements MovieCatalog {
    // ...
}
```

```
@Component
@Genre("Action")
class ActionMovieCatalog : MovieCatalog {
    // ...
}
```

```
@Component
@Offline
public class CachingMovieCatalog implements MovieCatalog {
    // ...
}
```

```
@Component
@Offline
class CachingMovieCatalog : MovieCatalog {
    // ...
}
```

As with most annotation-based alternatives, keep in mind that the annotation metadata is bound to the class definition itself, while the use of XML allows for multiple beans of the same type to provide variations in their qualifier metadata, because that metadata is provided per-instance rather than per-class.

## Generating an Index of Candidate Components

While classpath scanning is very fast, it is possible to improve the startup performance of large applications by creating a static list of candidates at compilation time. In this mode, all modules that are targets of component scanning must use this mechanism.

To generate the index, add an additional dependency to each module that contains components that are targets for component scan directives. The following example shows how to do so with Maven:

```
<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context-indexer</artifactId>
        <version>6.0.13</version>
        <optional>true</optional>
    </dependency>
</dependencies>
```

With Gradle 4.5 and earlier, the dependency should be declared in the `compileOnly` configuration, as shown in the following example:

```
dependencies {
    compileOnly "org.springframework:spring-context-indexer:6.0.13"
}
```

With Gradle 4.6 and later, the dependency should be declared in the `annotationProcessor` configuration, as shown in the following example:

```
dependencies {
    annotationProcessor "org.springframework:spring-context-indexer:6.0.13"
}
```

The `spring-context-indexer` artifact generates a `META-INF/spring.components` file that is included in the jar file.

When working with this mode in your IDE, the `spring-context-indexer` must be registered as an annotation processor to make sure the index is up-to-date when candidate components are updated.

The index is enabled automatically when a `META-INF/spring.components` file is found on the classpath. If an index is partially available for some libraries (or use cases) but could not be built for the whole application, you can fall back to a regular classpath arrangement (as though no index were present at all) by setting `spring.index.ignore` to `true`, either as a JVM system property or via the `SpringProperties` mechanism.

# Using JSR 330 Standard Annotations

Spring offers support for JSR-330 standard annotations (Dependency Injection). Those annotations are scanned in the same way as the Spring annotations. To use them, you need to have the relevant jars in your classpath.

## Dependency Injection with `@Inject` and `@Named`

Instead of `@Autowired`, you can use `@jakarta.inject.Inject` as follows:

```
import jakarta.inject.Inject;

public class SimpleMovieLister {

    private MovieFinder movieFinder;

    @Inject
    public void setMovieFinder(MovieFinder movieFinder) {
        this.movieFinder = movieFinder;
    }

    public void listMovies() {
        this.movieFinder.findMovies(...);
        // ...
    }
}
```

```
import jakarta.inject.Inject

class SimpleMovieLister {

    @Inject
    lateinit var movieFinder: MovieFinder

    fun listMovies() {
        movieFinder.findMovies(...)
        // ...
    }
}
```

As with `@Autowired`, you can use `@Inject` at the field level, method level and constructor-argument level. Furthermore, you may declare your injection point as a `Provider`, allowing for on-demand access to beans of shorter scopes or lazy access to other beans through a `Provider.get()` call. The following example offers a variant of the preceding example:

```
import jakarta.inject.Inject;
import jakarta.inject.Provider;

public class SimpleMovieLister {

    private Provider<MovieFinder> movieFinder;

    @Inject
    public void setMovieFinder(Provider<MovieFinder> movieFinder) {
        this.movieFinder = movieFinder;
    }

    public void listMovies() {
        this.movieFinder.get().findMovies(...);
        // ...
    }
}
```

```
import jakarta.inject.Inject
import jakarta.inject.Provider

class SimpleMovieLister {

    @Inject
    lateinit var movieFinder: Provider<MovieFinder>

    fun listMovies() {
        movieFinder.get().findMovies(...)
        // ...
    }
}
```

If you would like to use a qualified name for the dependency that should be injected, you should use the `@Named` annotation, as the following example shows:

```
public class SimpleMovieLister {

    private MovieFinder movieFinder;

    @Inject
    public void setMovieFinder(@Named("main") MovieFinder movieFinder) {
        this.movieFinder = movieFinder;
    }

    // ...
}
```

```
class SimpleMovieLister {

    private lateinit var movieFinder: MovieFinder

    @Inject
    fun setMovieFinder(@Named("main") movieFinder: MovieFinder) {
        this.movieFinder = movieFinder
    }

    // ...
}
```

As with `@Autowired`, `@Inject` can also be used with `java.util.Optional` or `@Nullable`.
This is even more applicable here, since `@Inject` does not have a `required` attribute. The following pair of examples show how to use `@Inject` and `@Nullable`:

```
public class SimpleMovieLister {

    @Inject
    public void setMovieFinder(Optional<MovieFinder> movieFinder) {
        // ...
    }
}
```

```
public class SimpleMovieLister {

    @Inject
    public void setMovieFinder(@Nullable MovieFinder movieFinder) {
        // ...
    }
}
```

```
class SimpleMovieLister {

    @Inject
    var movieFinder: MovieFinder? = null
}
```

## `@Named` and `@ManagedBean`: Standard Equivalents to the `@Component` Annotation

Instead of `@Component`, you can use `@jakarta.inject.Named` or `jakarta.annotation.ManagedBean`. It is very common to use `@Component` without specifying a name for the component. `@Named` can be used in a similar fashion. When you use `@Named` or `@ManagedBean`, you can use component scanning in the exact same way as when you use Spring annotations.

In contrast to `@Component`, the JSR-330 `@Named` and the JSR-250 `@ManagedBean` annotations are not composable. You should use Spring's stereotype model if you want to build custom component annotations.

## Limitations of JSR-330 Standard Annotations

When you work with standard annotations, you should know that some significant features are not available, as the following table shows:

| Spring | jakarta.inject.* | jakarta.inject restrictions / comments |
| --- | --- | --- |
| `@Autowired` | `@Inject` | `@Inject` has no `required` attribute. Can be used with Java 8's `Optional` instead. |
| `@Component` | `@Named` / `@ManagedBean` | JSR-330 does not provide a composable model, only a way to identify named components. |
| `@Scope("singleton")` | `@Singleton` | The JSR-330 default scope is like Spring's `prototype`. However, in order to keep it consistent with Spring's general defaults, a JSR-330 bean declared in the Spring container is a `singleton` by default. |
| `@Qualifier` | `@Qualifier` / `@Named` | `jakarta.inject.Qualifier` is just a meta-annotation for building custom qualifiers. Concrete `String` qualifiers can be associated through `jakarta.inject.Named`. |
| `@Value` | — | no equivalent |
| `@Lazy` | — | no equivalent |
| `ObjectFactory` | `Provider` | `jakarta.inject.Provider` is a direct alternative to Spring's `ObjectFactory`, only with a shorter `get()` method name. |

# Basic Concepts: `@Bean` and `@Configuration`

The central artifacts in Spring's Java configuration support are `@Configuration`-annotated classes and `@Bean`-annotated methods.

The `@Bean` annotation is used to indicate that a method instantiates, configures, and initializes a new object to be managed by the Spring IoC container. For those familiar with Spring's `<beans/>` XML configuration, the `@Bean` annotation plays the same role as the `<bean/>` element. You can use `@Bean`-annotated methods with any Spring `@Component`. However, they are most often used with `@Configuration` beans.

Annotating a class with `@Configuration` indicates that its primary purpose is as a source of bean definitions. Furthermore, `@Configuration` classes let inter-bean dependencies be defined by calling other `@Bean` methods in the same class. The simplest possible `@Configuration` class reads as follows:

```
@Configuration
public class AppConfig {

    @Bean
    public MyServiceImpl myService() {
        return new MyServiceImpl();
    }
}
```

```
@Configuration
class AppConfig {

    @Bean
    fun myService(): MyServiceImpl {
        return MyServiceImpl()
    }
}
```

The preceding `AppConfig` class is equivalent to the following Spring `<beans/>` XML:

```
<beans>
    <bean id="myService" class="com.acme.services.MyServiceImpl"/>
</beans>
```

The `@Bean` and `@Configuration` annotations are discussed in depth in the following sections. First, however, we cover the various ways of creating a Spring container by using Java-based configuration.

# Instantiating the Spring Container by Using `AnnotationConfigApplicationContext`

The following sections document Spring's `AnnotationConfigApplicationContext`, introduced in Spring 3.0. This versatile `ApplicationContext` implementation is capable of accepting not only `@Configuration` classes as input but also plain `@Component` classes and classes annotated with JSR-330 metadata.

When `@Configuration` classes are provided as input, the `@Configuration` class itself is registered as a bean definition and all declared `@Bean` methods within the class are also registered as bean definitions.

When `@Component` and JSR-330 classes are provided, they are registered as bean definitions, and it is assumed that DI metadata such as `@Autowired` or `@Inject` are used within those classes where necessary.
## Simple Construction

In much the same way that Spring XML files are used as input when instantiating a `ClassPathXmlApplicationContext`, you can use `@Configuration` classes as input when instantiating an `AnnotationConfigApplicationContext`. This allows for completely XML-free usage of the Spring container, as the following example shows:

```
public static void main(String[] args) {
    ApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
    MyService myService = ctx.getBean(MyService.class);
    myService.doStuff();
}
```

```
fun main() {
    val ctx = AnnotationConfigApplicationContext(AppConfig::class.java)
    val myService = ctx.getBean<MyService>()
    myService.doStuff()
}
```

As mentioned earlier, `AnnotationConfigApplicationContext` is not limited to working only with `@Configuration` classes. Any `@Component` or JSR-330 annotated class may be supplied as input to the constructor, as the following example shows:

```
fun main() {
    val ctx = AnnotationConfigApplicationContext(MyServiceImpl::class.java, Dependency1::class.java, Dependency2::class.java)
    val myService = ctx.getBean<MyService>()
    myService.doStuff()
}
```

The preceding example assumes that `MyServiceImpl`, `Dependency1`, and `Dependency2` use Spring dependency injection annotations such as `@Autowired`.

## Building the Container Programmatically by Using `register(Class<?>…​)`

You can instantiate an `AnnotationConfigApplicationContext` by using a no-arg constructor and then configure it by using the `register()` method. This approach is particularly useful when programmatically building an `AnnotationConfigApplicationContext`, as the following example shows:

```
fun main() {
    val ctx = AnnotationConfigApplicationContext()
    ctx.register(AppConfig::class.java, OtherConfig::class.java)
    ctx.register(AdditionalConfig::class.java)
    ctx.refresh()
    val myService = ctx.getBean<MyService>()
    myService.doStuff()
}
```

## Enabling Component Scanning with `scan(String…​)`

To enable component scanning, you can annotate your `@Configuration` class as follows:

```
@Configuration
@ComponentScan(basePackages = "com.acme") (1)
public class AppConfig {
    // ...
}
```

```
@Configuration
@ComponentScan(basePackages = ["com.acme"]) (1)
class AppConfig {
    // ...
}
```

1. This annotation enables component scanning.

In the preceding example, the `com.acme` package is scanned to look for any `@Component`-annotated classes, and those classes are registered as Spring bean definitions within the container. `AnnotationConfigApplicationContext` exposes the `scan(String…​)` method to allow for the same component-scanning functionality, as the following example shows:

```
fun main() {
    val ctx = AnnotationConfigApplicationContext()
    ctx.scan("com.acme")
    ctx.refresh()
    val myService = ctx.getBean<MyService>()
}
```

Remember that `@Configuration` classes are meta-annotated with `@Component`, so they are candidates for component-scanning. In the preceding example, assuming that `AppConfig` is declared within the `com.acme` package (or any package underneath), it is picked up during the call to `scan()`, and upon `refresh()` all its `@Bean` methods are processed and registered as bean definitions within the container.

## Support for Web Applications with `AnnotationConfigWebApplicationContext`

A `WebApplicationContext` variant of `AnnotationConfigApplicationContext` is available with `AnnotationConfigWebApplicationContext`. You can use this implementation when configuring the Spring `ContextLoaderListener` servlet listener, Spring MVC `DispatcherServlet`, and so forth. The following `web.xml` snippet configures a typical Spring MVC web application (note the use of the `contextClass` context-param and init-param):

```
<web-app>
    <!-- Configure ContextLoaderListener to use AnnotationConfigWebApplicationContext
        instead of the default XmlWebApplicationContext -->
    <context-param>
        <param-name>contextClass</param-name>
        <param-value>
            org.springframework.web.context.support.AnnotationConfigWebApplicationContext
        </param-value>
    </context-param>

    <!-- Configuration locations must consist of one or more comma- or space-delimited
        fully-qualified @Configuration classes.
        Fully-qualified packages may also be specified for component-scanning -->
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>com.acme.AppConfig</param-value>
    </context-param>

    <!-- Bootstrap the root application context as usual using ContextLoaderListener -->
    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>

    <!-- Declare a Spring MVC DispatcherServlet as usual -->
    <servlet>
        <servlet-name>dispatcher</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <!-- Configure DispatcherServlet to use AnnotationConfigWebApplicationContext
            instead of the default XmlWebApplicationContext -->
        <init-param>
            <param-name>contextClass</param-name>
            <param-value>
                org.springframework.web.context.support.AnnotationConfigWebApplicationContext
            </param-value>
        </init-param>
        <!-- Again, config locations must consist of one or more comma- or space-delimited
            and fully-qualified @Configuration classes -->
        <init-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>com.acme.web.MvcConfig</param-value>
        </init-param>
    </servlet>

    <!-- map all requests for /app/* to the dispatcher servlet -->
    <servlet-mapping>
        <servlet-name>dispatcher</servlet-name>
        <url-pattern>/app/*</url-pattern>
    </servlet-mapping>
</web-app>
```

# Using the `@Bean` Annotation

`@Bean` is a method-level annotation and a direct analog of the XML `<bean/>` element. The annotation supports some of the attributes offered by `<bean/>`, such as:

* `init-method`
* `destroy-method`
* `autowiring`
* `name`

You can use the `@Bean` annotation in a `@Configuration`-annotated class or in a `@Component`-annotated class.

## Declaring a Bean

To declare a bean, you can annotate a method with the `@Bean` annotation. You use this method to register a bean definition within an `ApplicationContext` of the type specified as the method's return value. By default, the bean name is the same as the method name. The following example shows a `@Bean` method declaration:

```
@Configuration
class AppConfig {

    @Bean
    fun transferService() = TransferServiceImpl()
}
```

The preceding configuration is exactly equivalent to the following Spring XML:

```
<beans>
    <bean id="transferService" class="com.acme.TransferServiceImpl"/>
</beans>
```

Both declarations make a bean named `transferService` available in the `ApplicationContext`, bound to an object instance of type `TransferServiceImpl`:

> transferService -> com.acme.TransferServiceImpl

You can also use default methods to define beans. This allows composition of bean configurations by implementing interfaces with bean definitions on default methods, as the following example shows:

```
public interface BaseConfig {

    @Bean
    default TransferServiceImpl transferService() {
        return new TransferServiceImpl();
    }
}

@Configuration
public class AppConfig implements BaseConfig {
}
```

You can also declare your `@Bean` method with an interface (or base class) return type, as the following example shows:

```
@Configuration
class AppConfig {

    @Bean
    fun transferService(): TransferService {
        return TransferServiceImpl()
    }
}
```

However, this limits the visibility for advance type predictions to the specified interface type (`TransferService`), with the full type (`TransferServiceImpl`) known to the container only once the affected singleton bean has been instantiated.
Non-lazy singleton beans get instantiated according to their declaration order, so you may see different type matching results depending on when another component tries to match by a non-declared type (such as `@Autowired TransferServiceImpl`, which resolves only once the `transferService` bean has been instantiated). If you consistently refer to your types by a declared service interface, your `@Bean` return types may safely join that design decision.

## Bean Dependencies

A `@Bean`-annotated method can have an arbitrary number of parameters that describe the dependencies required to build that bean. For instance, if our `TransferService` requires an `AccountRepository`, we can materialize that dependency with a method parameter, as the sketch after this paragraph shows. The resolution mechanism is pretty much identical to constructor-based dependency injection. See the relevant section for more details.
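A minimal sketch of such a parameter-injected `@Bean` method, assuming a `TransferServiceImpl` constructor that accepts an `AccountRepository` (both types appear elsewhere in this chapter):

```
@Configuration
public class AppConfig {

    // the AccountRepository bean, defined elsewhere, is injected as a method parameter
    @Bean
    public TransferService transferService(AccountRepository accountRepository) {
        return new TransferServiceImpl(accountRepository);
    }
}
```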
## Receiving Lifecycle Callbacks

Any classes defined with the `@Bean` annotation support the regular lifecycle callbacks and can use the `@PostConstruct` and `@PreDestroy` annotations from JSR-250. See JSR-250 annotations for further details.

The regular Spring lifecycle callbacks are fully supported as well. If a bean implements `InitializingBean`, `DisposableBean`, or `Lifecycle`, their respective methods are called by the container. The standard set of `*Aware` interfaces (such as `BeanFactoryAware`, `BeanNameAware`, `MessageSourceAware`, `ApplicationContextAware`, and so on) is also fully supported.

The `@Bean` annotation supports specifying arbitrary initialization and destruction callback methods, much like Spring XML's `init-method` and `destroy-method` attributes on the `bean` element, as the following example shows:

```
public class BeanOne {

    public void init() {
        // initialization logic
    }
}

public class BeanTwo {

    public void cleanup() {
        // destruction logic
    }
}

@Configuration
public class AppConfig {

    @Bean(initMethod = "init")
    public BeanOne beanOne() {
        return new BeanOne();
    }

    @Bean(destroyMethod = "cleanup")
    public BeanTwo beanTwo() {
        return new BeanTwo();
    }
}
```

```
class BeanOne {

    fun init() {
        // initialization logic
    }
}

class BeanTwo {

    fun cleanup() {
        // destruction logic
    }
}

@Configuration
class AppConfig {

    @Bean(initMethod = "init")
    fun beanOne() = BeanOne()

    @Bean(destroyMethod = "cleanup")
    fun beanTwo() = BeanTwo()
}
```

In the case of `BeanOne` from the preceding example, it would be equally valid to call the `init()` method directly during construction, as the following example shows:

```
@Configuration
public class AppConfig {

    @Bean
    public BeanOne beanOne() {
        BeanOne beanOne = new BeanOne();
        beanOne.init();
        return beanOne;
    }
}
```

```
@Configuration
class AppConfig {

    @Bean
    fun beanOne() = BeanOne().apply { init() }
}
```

When you work directly in Java, you can do anything you like with your objects and do not always need to rely on the container lifecycle.

## Specifying Bean Scope

Spring includes the `@Scope` annotation so that you can specify the scope of a bean.

### Using the `@Scope` Annotation

You can specify that your beans defined with the `@Bean` annotation should have a specific scope. You can use any of the standard scopes specified in the Bean Scopes section. The default scope is `singleton`, but you can override this with the `@Scope` annotation, as the following example shows:

```
@Configuration
public class MyConfiguration {

    @Bean
    @Scope("prototype")
    public Encryptor encryptor() {
        // ...
    }
}
```

```
@Configuration
class MyConfiguration {

    @Bean
    @Scope("prototype")
    fun encryptor(): Encryptor {
        // ...
    }
}
```

### `@Scope` and `scoped-proxy`

Spring offers a convenient way of working with scoped dependencies through scoped proxies. The easiest way to create such a proxy when using the XML configuration is the `<aop:scoped-proxy/>` element. Configuring your beans in Java with a `@Scope` annotation offers equivalent support with the `proxyMode` attribute. The default is `ScopedProxyMode.DEFAULT`, which typically indicates that no scoped proxy should be created unless a different default has been configured at the component-scan instruction level. You can specify `ScopedProxyMode.TARGET_CLASS`, `ScopedProxyMode.INTERFACES`, or `ScopedProxyMode.NO`.

If you port the scoped proxy example from the XML reference documentation (see scoped proxies) to our `@Bean` using Java, it resembles the following:

```
// an HTTP Session-scoped bean exposed as a proxy
@Bean
@SessionScope
public UserPreferences userPreferences() {
    return new UserPreferences();
}

@Bean
public Service userService() {
    UserService service = new SimpleUserService();
    // a reference to the proxied userPreferences bean
    service.setUserPreferences(userPreferences());
    return service;
}
```

```
// an HTTP Session-scoped bean exposed as a proxy
@Bean
@SessionScope
fun userPreferences() = UserPreferences()

@Bean
fun userService(): Service {
    return SimpleUserService().apply {
        // a reference to the proxied userPreferences bean
        setUserPreferences(userPreferences())
    }
}
```

## Customizing Bean Naming

By default, configuration classes use a `@Bean` method's name as the name of the resulting bean. This functionality can be overridden, however, with the `name` attribute, as the following example shows:

```
@Configuration
public class AppConfig {

    @Bean("myThing")
    public Thing thing() {
        return new Thing();
    }
}
```

```
@Configuration
class AppConfig {

    @Bean("myThing")
    fun thing() = Thing()
}
```

## Bean Aliasing

As discussed in Naming Beans, it is sometimes desirable to give a single bean multiple names, otherwise known as bean aliasing. The `name` attribute of the `@Bean` annotation accepts a String array for this purpose. The following example shows how to set a number of aliases for a bean:

```
@Configuration
public class AppConfig {

    @Bean({"dataSource", "subsystemA-dataSource", "subsystemB-dataSource"})
    public DataSource dataSource() {
        // instantiate, configure and return DataSource bean...
    }
}
```

```
@Configuration
class AppConfig {

    @Bean("dataSource", "subsystemA-dataSource", "subsystemB-dataSource")
    fun dataSource(): DataSource {
        // instantiate, configure and return DataSource bean...
    }
}
```

## Bean Description

Sometimes, it is helpful to provide a more detailed textual description of a bean. This can be particularly useful when beans are exposed (perhaps through JMX) for monitoring purposes. To add a description to a `@Bean`, you can use the `@Description` annotation, as the following example shows:

```
@Configuration
public class AppConfig {

    @Bean
    @Description("Provides a basic example of a bean")
    public Thing thing() {
        return new Thing();
    }
}
```

```
@Configuration
class AppConfig {

    @Bean
    @Description("Provides a basic example of a bean")
    fun thing() = Thing()
}
```

# Using the `@Configuration` Annotation

`@Configuration` is a class-level annotation indicating that an object is a source of bean definitions. `@Configuration` classes declare beans through `@Bean`-annotated methods. Calls to `@Bean` methods on `@Configuration` classes can also be used to define inter-bean dependencies. See Basic Concepts: `@Bean` and `@Configuration` for a general introduction.
## Injecting Inter-bean Dependencies

When beans have dependencies on one another, expressing that dependency is as simple as having one bean method call another, as the following example shows:

```
@Configuration
public class AppConfig {

    @Bean
    public BeanOne beanOne() {
        return new BeanOne(beanTwo());
    }

    @Bean
    public BeanTwo beanTwo() {
        return new BeanTwo();
    }
}
```

```
@Configuration
class AppConfig {

    @Bean
    fun beanOne() = BeanOne(beanTwo())

    @Bean
    fun beanTwo() = BeanTwo()
}
```

In the preceding example, `beanOne` receives a reference to `beanTwo` through constructor injection. This method of declaring inter-bean dependencies works only when the `@Bean` method is declared within a `@Configuration` class. You cannot declare inter-bean dependencies by using plain `@Component` classes.

## Lookup Method Injection

As noted earlier, lookup method injection is an advanced feature that you should use rarely. It is useful in cases where a singleton-scoped bean has a dependency on a prototype-scoped bean. Using Java for this type of configuration provides a natural means for implementing this pattern.
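The abstract class that the following configuration overrides is not preserved in this extract; a sketch consistent with the lookup-method-injection pattern, assuming a `Command` type with `setState()` and `execute()` methods:

```
public abstract class CommandManager {

    public Object process(Object commandState) {
        // grab a new instance of the appropriate Command interface
        Command command = createCommand();
        // set the state on the (hopefully brand-new) Command instance
        command.setState(commandState);
        return command.execute();
    }

    // overridden in the Java configuration below to return a prototype-scoped bean
    protected abstract Command createCommand();
}
```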
By using Java configuration, you can create a subclass of `CommandManager` where the abstract `createCommand()` method is overridden in such a way that it looks up a new (prototype) command object. The following example shows how to do so:

```
@Bean
@Scope("prototype")
public AsyncCommand asyncCommand() {
    AsyncCommand command = new AsyncCommand();
    // inject dependencies here as required
    return command;
}

@Bean
public CommandManager commandManager() {
    // return new anonymous implementation of CommandManager with createCommand()
    // overridden to return a new prototype Command object
    return new CommandManager() {
        protected Command createCommand() {
            return asyncCommand();
        }
    };
}
```

```
@Bean
@Scope("prototype")
fun asyncCommand(): AsyncCommand {
    val command = AsyncCommand()
    // inject dependencies here as required
    return command
}

@Bean
fun commandManager(): CommandManager {
    // return new anonymous implementation of CommandManager with createCommand()
    // overridden to return a new prototype Command object
    return object : CommandManager() {
        override fun createCommand(): Command {
            return asyncCommand()
        }
    }
}
```

## Further Information About How Java-based Configuration Works Internally

Consider the following example, which shows a `@Bean` annotated method being called twice:

```
@Configuration
class AppConfig {

    @Bean
    fun clientService1(): ClientService {
        return ClientServiceImpl().apply {
            clientDao = clientDao()
        }
    }

    @Bean
    fun clientService2(): ClientService {
        return ClientServiceImpl().apply {
            clientDao = clientDao()
        }
    }

    @Bean
    fun clientDao(): ClientDao {
        return ClientDaoImpl()
    }
}
```

`clientDao()` has been called once in `clientService1()` and once in `clientService2()`. Since this method creates a new instance of `ClientDaoImpl` and returns it, you would normally expect to have two instances (one for each service). That definitely would be problematic: in Spring, instantiated beans have a `singleton` scope by default. This is where the magic comes in: all `@Configuration` classes are subclassed at startup-time with CGLIB. In the subclass, the child method checks the container first for any cached (scoped) beans before it calls the parent method and creates a new instance.

The behavior could be different according to the scope of your bean. We are talking about singletons here.

# Composing Java-based Configurations

Spring's Java-based configuration feature lets you compose annotations, which can reduce the complexity of your configuration.

## Using the `@Import` Annotation

Much as the `<import/>` element is used within Spring XML files to aid in modularizing configurations, the `@Import` annotation allows for loading `@Bean` definitions from another configuration class, as the following example shows:

```
@Configuration
public class ConfigA {

    @Bean
    public A a() {
        return new A();
    }
}

@Configuration
@Import(ConfigA.class)
public class ConfigB {

    @Bean
    public B b() {
        return new B();
    }
}
```

```
@Configuration
class ConfigA {

    @Bean
    fun a() = A()
}

@Configuration
@Import(ConfigA::class)
class ConfigB {

    @Bean
    fun b() = B()
}
```

Now, rather than needing to specify both `ConfigA.class` and `ConfigB.class` when instantiating the context, only `ConfigB` needs to be supplied explicitly, as the following example shows:

```
fun main() {
    val ctx = AnnotationConfigApplicationContext(ConfigB::class.java)

    // both beans A and B are available...
    val a = ctx.getBean<A>()
    val b = ctx.getBean<B>()
}
```

This approach simplifies container instantiation, as only one class needs to be dealt with, rather than requiring you to remember a potentially large number of `@Configuration` classes during construction.

As of Spring Framework 4.2, `@Import` also supports references to regular component classes, analogous to the `AnnotationConfigApplicationContext.register` method. This is particularly useful if you want to avoid component scanning, by using a few configuration classes as entry points to explicitly define all your components.

### Injecting Dependencies on Imported `@Bean` Definitions

The preceding example works but is simplistic. In most practical scenarios, beans have dependencies on one another across configuration classes. When using XML, this is not an issue, because no compiler is involved, and you can declare `ref="someBean"` and trust Spring to work it out during container initialization. When using `@Configuration` classes, the Java compiler places constraints on the configuration model, in that references to other beans must be valid Java syntax.

Fortunately, solving this problem is simple. As we already discussed, a `@Bean` method can have an arbitrary number of parameters that describe the bean dependencies. Consider the following more real-world scenario with several `@Configuration` classes, each depending on beans declared in the others:

```
@Configuration
public class RepositoryConfig {

    @Bean
    public AccountRepository accountRepository(DataSource dataSource) {
        return new JdbcAccountRepository(dataSource);
    }
}
```

```
@Configuration
class RepositoryConfig {

    @Bean
    fun accountRepository(dataSource: DataSource): AccountRepository {
        return JdbcAccountRepository(dataSource)
    }
}
```

There is another way to achieve the same result. Remember that `@Configuration` classes are ultimately only another bean in the container: this means that they can take advantage of `@Autowired` and `@Value` injection and other features the same as any other bean. The following example shows how one bean can be autowired to another bean:

```
@Configuration
public class ServiceConfig {

    @Autowired
    private AccountRepository accountRepository;

    @Bean
    public TransferService transferService() {
        return new TransferServiceImpl(accountRepository);
    }
}

@Configuration
public class RepositoryConfig {

    private final DataSource dataSource;

    public RepositoryConfig(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Bean
    public AccountRepository accountRepository() {
        return new JdbcAccountRepository(dataSource);
    }
}
```

```
@Configuration
class ServiceConfig {

    @Autowired
    lateinit var accountRepository: AccountRepository

    @Bean
    fun transferService() = TransferServiceImpl(accountRepository)
}

@Configuration
class RepositoryConfig(private val dataSource: DataSource) {

    @Bean
    fun accountRepository() = JdbcAccountRepository(dataSource)
}
```

Constructor injection in `@Configuration` classes is only supported as of Spring Framework 4.3. Note also that there is no need to specify `@Autowired` if the target bean defines only one constructor.

In the preceding scenario, using `@Autowired` works well and provides the desired modularity, but determining exactly where the autowired bean definitions are declared is still somewhat ambiguous. For example, as a developer looking at `ServiceConfig`, how do you know exactly where the `@Autowired AccountRepository` bean is declared? It is not explicit in the code, and this may be just fine. Remember that the Spring Tools for Eclipse provides tooling that can render graphs showing how everything is wired, which may be all you need. Also, your Java IDE can easily find all declarations and uses of the `AccountRepository` type and quickly show you the location of `@Bean` methods that return that type.
In cases where this ambiguity is not acceptable and you wish to have direct navigation from within your IDE from one `@Configuration` class to another, consider autowiring the configuration classes themselves, as the following example shows:

```
@Configuration
public class ServiceConfig {

    @Autowired
    private RepositoryConfig repositoryConfig;

    @Bean
    public TransferService transferService() {
        // navigate 'through' the config class to the @Bean method!
        return new TransferServiceImpl(repositoryConfig.accountRepository());
    }
}
```

```
@Configuration
class ServiceConfig {

    @Autowired
    private lateinit var repositoryConfig: RepositoryConfig

    @Bean
    fun transferService(): TransferService {
        // navigate 'through' the config class to the @Bean method!
        return TransferServiceImpl(repositoryConfig.accountRepository())
    }
}
```

In the preceding situation, where `AccountRepository` is defined is completely explicit. However, `ServiceConfig` is now tightly coupled to `RepositoryConfig`. That is the tradeoff. This tight coupling can be somewhat mitigated by using interface-based or abstract class-based `@Configuration` classes. Consider the following example:

```
@Configuration
public interface RepositoryConfig {

    @Bean
    AccountRepository accountRepository();
}

@Configuration
public class DefaultRepositoryConfig implements RepositoryConfig {

    @Bean
    public AccountRepository accountRepository() {
        return new JdbcAccountRepository(...);
    }
}

@Configuration
@Import({ServiceConfig.class, DefaultRepositoryConfig.class}) // import the concrete config!
public class SystemTestConfig {

    @Bean
    public DataSource dataSource() {
        // return DataSource
    }
}

public static void main(String[] args) {
    ApplicationContext ctx = new AnnotationConfigApplicationContext(SystemTestConfig.class);
    TransferService transferService = ctx.getBean(TransferService.class);
    transferService.transfer(100.00, "A123", "C456");
}
```

```
@Configuration
interface RepositoryConfig {

    @Bean
    fun accountRepository(): AccountRepository
}

@Configuration
class DefaultRepositoryConfig : RepositoryConfig {

    @Bean
    fun accountRepository(): AccountRepository {
        return JdbcAccountRepository(...)
    }
}

@Configuration
@Import(ServiceConfig::class, DefaultRepositoryConfig::class) // import the concrete config!
class SystemTestConfig {

    @Bean
    fun dataSource(): DataSource {
        // return DataSource
    }
}

fun main() {
    val ctx = AnnotationConfigApplicationContext(SystemTestConfig::class.java)
    val transferService = ctx.getBean<TransferService>()
    transferService.transfer(100.00, "A123", "C456")
}
```

Now `ServiceConfig` is loosely coupled with respect to the concrete `DefaultRepositoryConfig`, and built-in IDE tooling is still useful: you can easily get a type hierarchy of `RepositoryConfig` implementations. In this way, navigating `@Configuration` classes and their dependencies becomes no different than the usual process of navigating interface-based code.

If you want to influence the startup creation order of certain beans, consider declaring some of them as `@Lazy` (for creation on first access instead of on startup) or as `@DependsOn` certain other beans (making sure that specific other beans are created before the current bean, beyond what the latter's direct dependencies imply).

## Conditionally Include `@Configuration` Classes or `@Bean` Methods

It is often useful to conditionally enable or disable a complete `@Configuration` class or even individual `@Bean` methods, based on some arbitrary system state. One common example of this is to use the `@Profile` annotation to activate beans only when a specific profile has been enabled in the Spring `Environment` (see Bean Definition Profiles for details).

The `@Profile` annotation is actually implemented by using a much more flexible annotation called `@Conditional`. The `@Conditional` annotation indicates specific `org.springframework.context.annotation.Condition` implementations that should be consulted before a `@Bean` is registered.
Implementations of the `Condition` interface provide a `matches(…​)` method that returns `true` or `false`. For example, the following listing shows the actual `Condition` implementation used for `@Profile`:

```
@Override
public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
    // Read the @Profile annotation attributes
    MultiValueMap<String, Object> attrs = metadata.getAllAnnotationAttributes(Profile.class.getName());
    if (attrs != null) {
        for (Object value : attrs.get("value")) {
            if (context.getEnvironment().acceptsProfiles(Profiles.of((String[]) value))) {
                return true;
            }
        }
        return false;
    }
    return true;
}
```

```
override fun matches(context: ConditionContext, metadata: AnnotatedTypeMetadata): Boolean {
    // Read the @Profile annotation attributes
    val attrs = metadata.getAllAnnotationAttributes(Profile::class.java.name)
    if (attrs != null) {
        for (value in attrs["value"]!!) {
            if (context.environment.acceptsProfiles(Profiles.of(*value as Array<String>))) {
                return true
            }
        }
        return false
    }
    return true
}
```

See the `@Conditional` javadoc for more detail.
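To make the mechanism concrete, here is a hypothetical sketch of a custom condition that gates a bean on a system property. The `OnDebugCondition` and `DebugReporter` names and the `app.debug` property are illustrative, not part of Spring:

```
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.type.AnnotatedTypeMetadata;

public class OnDebugCondition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        // register the bean only when the 'app.debug' property resolves to true
        return Boolean.parseBoolean(context.getEnvironment().getProperty("app.debug", "false"));
    }
}

@Configuration
public class DebugConfig {

    @Bean
    @Conditional(OnDebugCondition.class)
    public DebugReporter debugReporter() {
        return new DebugReporter();
    }
}
```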
## Combining Java and XML Configuration

Spring's `@Configuration` class support does not aim to be a 100% complete replacement for Spring XML. Some facilities, such as Spring XML namespaces, remain an ideal way to configure the container. In cases where XML is convenient or necessary, you have a choice: either instantiate the container in an "XML-centric" way by using, for example, `ClassPathXmlApplicationContext`, or instantiate it in a "Java-centric" way by using `AnnotationConfigApplicationContext` and the `@ImportResource` annotation to import XML as needed.

### XML-centric Use of `@Configuration` Classes

It may be preferable to bootstrap the Spring container from XML and include `@Configuration` classes in an ad-hoc fashion. For example, in a large existing codebase that uses Spring XML, it is easier to create `@Configuration` classes on an as-needed basis and include them from the existing XML files. Later in this section, we cover the options for using `@Configuration` classes in this kind of "XML-centric" situation.

Remember that `@Configuration` classes are ultimately bean definitions in the container. In this series of examples, we create a `@Configuration` class named `AppConfig` and include it within `system-test-config.xml` as a `<bean/>` definition. Because `<context:annotation-config/>` is switched on, the container recognizes the `@Configuration` annotation and processes the `@Bean` methods declared in `AppConfig` properly.

The following example shows an ordinary configuration class in Java:

```
@Configuration
public class AppConfig {

    @Autowired
    private DataSource dataSource;

    @Bean
    public AccountRepository accountRepository() {
        return new JdbcAccountRepository(dataSource);
    }

    @Bean
    public TransferService transferService() {
        return new TransferService(accountRepository());
    }
}
```

```
@Configuration
class AppConfig {

    @Autowired
    private lateinit var dataSource: DataSource

    @Bean
    fun accountRepository() = JdbcAccountRepository(dataSource)

    @Bean
    fun transferService() = TransferService(accountRepository())
}
```

The following example shows part of a sample `system-test-config.xml` file:

```
<beans>
    <!-- enable processing of annotations such as @Autowired and @Configuration -->
    <context:annotation-config/>
    <context:property-placeholder location="classpath:/com/acme/jdbc.properties"/>

    <bean class="com.acme.AppConfig"/>

    <bean class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="url" value="${jdbc.url}"/>
        <property name="username" value="${jdbc.username}"/>
        <property name="password" value="${jdbc.password}"/>
    </bean>
</beans>
```

The following example shows a possible `jdbc.properties` file:

> jdbc.url=jdbc:hsqldb:hsql://localhost/xdb
> jdbc.username=sa
> jdbc.password=

In `system-test-config.xml`, the `AppConfig` `<bean/>` does not declare an `id` attribute. While it would be acceptable to do so, it is unnecessary, given that no other bean ever refers to it and it is unlikely to be explicitly fetched from the container by name.

Because `@Configuration` is meta-annotated with `@Component`, `@Configuration`-annotated classes are automatically candidates for component scanning. Using the same scenario as described in the previous example, we can redefine `system-test-config.xml` to take advantage of component-scanning. Note that, in this case, we need not explicitly declare `<context:annotation-config/>`, because `<context:component-scan/>` enables the same functionality.

The following example shows the modified `system-test-config.xml` file:

```
<beans>
    <!-- picks up and registers AppConfig as a bean definition -->
    <context:component-scan base-package="com.acme"/>
    <context:property-placeholder location="classpath:/com/acme/jdbc.properties"/>

    <bean class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="url" value="${jdbc.url}"/>
        <property name="username" value="${jdbc.username}"/>
        <property name="password" value="${jdbc.password}"/>
    </bean>
</beans>
```

### `@Configuration` Class-centric Use of XML with `@ImportResource`

In applications where `@Configuration` classes are the primary mechanism for configuring the container, it is still likely necessary to use at least some XML. In these scenarios, you can use `@ImportResource` and define only as much XML as you need. Doing so achieves a "Java-centric" approach to configuring the container and keeps XML to a bare minimum. The following example (which includes a configuration class, an XML file that defines a bean, a properties file, and the `main` class) shows how to use the `@ImportResource` annotation to achieve "Java-centric" configuration that uses XML as needed:

```
@Configuration
@ImportResource("classpath:/com/acme/properties-config.xml")
public class AppConfig {

    @Value("${jdbc.url}")
    private String url;

    @Value("${jdbc.username}")
    private String username;

    @Value("${jdbc.password}")
    private String password;

    @Bean
    public DataSource dataSource() {
        return new DriverManagerDataSource(url, username, password);
    }
}
```

```
@Configuration
@ImportResource("classpath:/com/acme/properties-config.xml")
class AppConfig {

    @Value("\${jdbc.url}")
    private lateinit var url: String

    @Value("\${jdbc.username}")
    private lateinit var username: String

    @Value("\${jdbc.password}")
    private lateinit var password: String

    @Bean
    fun dataSource(): DataSource {
        return DriverManagerDataSource(url, username, password)
    }
}
```

```
properties-config.xml
<beans>
    <context:property-placeholder location="classpath:/com/acme/jdbc.properties"/>
</beans>
```

> jdbc.properties
> jdbc.url=jdbc:hsqldb:hsql://localhost/xdb
> jdbc.username=sa
> jdbc.password=

# Environment Abstraction

The `Environment` interface is an abstraction integrated in the container that models two key aspects of the application environment: profiles and properties.

A profile is a named, logical group of bean definitions to be registered with the container only if the given profile is active. Beans may be assigned to a profile whether defined in XML or with annotations. The role of the `Environment` object with relation to profiles is in determining which profiles (if any) are currently active, and which profiles (if any) should be active by default.

Properties play an important role in almost all applications and may originate from a variety of sources: properties files, JVM system properties, system environment variables, JNDI, servlet context parameters, ad-hoc `Properties` objects, `Map` objects, and so on. The role of the `Environment` object with relation to properties is to provide the user with a convenient service interface for configuring property sources and resolving properties from them.

## Bean Definition Profiles

Bean definition profiles provide a mechanism in the core container that allows for registration of different beans in different environments. The word "environment" can mean different things to different users, and this feature can help with many use cases, including:

* Working against an in-memory datasource in development versus looking up that same datasource from JNDI when in QA or production.
* Registering monitoring infrastructure only when deploying an application into a performance environment.
* Registering customized implementations of beans for customer A versus customer B deployments.

Consider the first use case in a practical application that requires a `DataSource`. In a test environment, the configuration might resemble the following:
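The original listing is not preserved in this extract; a sketch consistent with the `standaloneDataSource()` example shown later in this section:

```
@Configuration
public class StandaloneDataConfig {

    @Bean
    public DataSource dataSource() {
        // embedded HSQL database populated with test schema and data
        return new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.HSQL)
            .addScript("classpath:com/bank/config/sql/schema.sql")
            .addScript("classpath:com/bank/config/sql/test-data.sql")
            .build();
    }
}
```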
Now consider how this application can be deployed into a QA or production environment, assuming that the datasource for the application is registered with the production application server's JNDI directory. Our `dataSource` bean would instead perform a JNDI lookup, as the `JndiDataConfig` sketch below shows.

The problem is how to switch between using these two variations based on the current environment. Over time, Spring users have devised a number of ways to get this done, usually relying on a combination of system environment variables and XML `<import/>` statements containing `${placeholder}` tokens that resolve to the correct configuration file path depending on the value of an environment variable. Bean definition profiles is a core container feature that provides a solution to this problem.

If we generalize the use case shown in the preceding example of environment-specific bean definitions, we end up with the need to register certain bean definitions in certain contexts but not in others. You could say that you want to register a certain profile of bean definitions in situation A and a different profile in situation B. We start by updating our configuration to reflect this need.

### Using `@Profile`

The `@Profile` annotation lets you indicate that a component is eligible for registration when one or more specified profiles are active. Using our preceding example, we can rewrite the `dataSource` configuration as follows:
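A sketch of the rewritten configuration classes, mirroring the method-level `@Profile` example shown later in this section (the empty `destroyMethod` attribute, taken to avoid a destroy callback on the JNDI-managed resource, is an assumption of this sketch):

```
@Configuration
@Profile("development")
public class StandaloneDataConfig {

    @Bean
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.HSQL)
            .addScript("classpath:com/bank/config/sql/schema.sql")
            .addScript("classpath:com/bank/config/sql/test-data.sql")
            .build();
    }
}

@Configuration
@Profile("production")
public class JndiDataConfig {

    @Bean(destroyMethod = "") // do not treat the JNDI-managed DataSource as container-managed
    public DataSource dataSource() throws Exception {
        Context ctx = new InitialContext();
        return (DataSource) ctx.lookup("java:comp/env/jdbc/datasource");
    }
}
```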
As mentioned earlier, with `@Bean` methods you typically choose to use programmatic JNDI lookups, by using either Spring's `JndiTemplate`/`JndiLocatorDelegate` helpers or the straight JNDI `InitialContext` usage shown earlier, but not the `JndiObjectFactoryBean` variant, which would force you to declare the return type as the `FactoryBean` type.

The profile string may contain a simple profile name (for example, `production`) or a profile expression. A profile expression allows for more complicated profile logic to be expressed (for example, `production & us-east`). The following operators are supported in profile expressions:

* `!`: A logical `NOT` of the profile
* `&`: A logical `AND` of the profiles
* `|`: A logical `OR` of the profiles

You cannot mix the `&` and `|` operators without using parentheses. For example, `production & us-east | eu-central` is not a valid expression; it must be expressed as `production & (us-east | eu-central)`.

You can use `@Profile` as a meta-annotation for the purpose of creating a custom composed annotation. The following example defines a custom `@Production` annotation that you can use as a drop-in replacement for `@Profile("production")`:

```
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Profile("production")
public @interface Production {
}
```

```
@Target(AnnotationTarget.CLASS)
@Retention(AnnotationRetention.RUNTIME)
@Profile("production")
annotation class Production
```

If a `@Configuration` class is marked with `@Profile`, all of the `@Bean` methods and `@Import` annotations associated with that class are bypassed unless one or more of the specified profiles are active.

`@Profile` can also be declared at the method level to include only one particular bean of a configuration class (for example, for alternative variants of a particular bean), as the following example shows:

```
@Configuration
public class AppConfig {

    @Bean("dataSource")
    @Profile("development") (1)
    public DataSource standaloneDataSource() {
        return new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.HSQL)
            .addScript("classpath:com/bank/config/sql/schema.sql")
            .addScript("classpath:com/bank/config/sql/test-data.sql")
            .build();
    }

    @Bean("dataSource")
    @Profile("production") (2)
    public DataSource jndiDataSource() throws Exception {
        Context ctx = new InitialContext();
        return (DataSource) ctx.lookup("java:comp/env/jdbc/datasource");
    }
}
```

```
@Configuration
class AppConfig {

    @Bean("dataSource")
    @Profile("development") (1)
    fun standaloneDataSource(): DataSource {
        return EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.HSQL)
            .addScript("classpath:com/bank/config/sql/schema.sql")
            .addScript("classpath:com/bank/config/sql/test-data.sql")
            .build()
    }

    @Bean("dataSource")
    @Profile("production") (2)
    fun jndiDataSource() = InitialContext().lookup("java:comp/env/jdbc/datasource") as DataSource
}
```

### XML Bean Definition Profiles

The XML counterpart is the `profile` attribute of the `<beans>` element. Our preceding sample configuration can be rewritten in two XML files, as follows:

```
<beans profile="development"
    xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jdbc="http://www.springframework.org/schema/jdbc"
    xsi:schemaLocation="...">

    <jdbc:embedded-database id="dataSource">
        <jdbc:script location="classpath:com/bank/config/sql/schema.sql"/>
        <jdbc:script location="classpath:com/bank/config/sql/test-data.sql"/>
    </jdbc:embedded-database>
</beans>
```

```
<beans profile="production"
    xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jee="http://www.springframework.org/schema/jee"
    xsi:schemaLocation="...">

    <jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/datasource"/>
</beans>
```

It is also possible to avoid that split and nest `<beans/>` elements within the same file, as the following example shows:

```
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jdbc="http://www.springframework.org/schema/jdbc"
    xmlns:jee="http://www.springframework.org/schema/jee"
    xsi:schemaLocation="...">

    <!-- other bean definitions -->

    <beans profile="development">
        <jdbc:embedded-database id="dataSource">
            <jdbc:script location="classpath:com/bank/config/sql/schema.sql"/>
            <jdbc:script location="classpath:com/bank/config/sql/test-data.sql"/>
        </jdbc:embedded-database>
    </beans>

    <beans profile="production">
        <jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/datasource"/>
    </beans>
</beans>
```

The `spring-bean.xsd` has been constrained to allow such elements only as the last ones in the file. This should help provide flexibility without incurring clutter in the XML files.

### Activating a Profile

Now that we have updated our configuration, we still need to instruct Spring which profile is active. If we started our sample application right now, we would see a `NoSuchBeanDefinitionException` thrown, because the container could not find the Spring bean named `dataSource`.
Activating a profile can be done in several ways, but the most straightforward is to do it programmatically against the `Environment` API, which is available through an `ApplicationContext`. The following example shows how to do so:

```
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
ctx.getEnvironment().setActiveProfiles("development");
ctx.register(SomeConfig.class, StandaloneDataConfig.class, JndiDataConfig.class);
ctx.refresh();
```

```
val ctx = AnnotationConfigApplicationContext().apply {
    environment.setActiveProfiles("development")
    register(SomeConfig::class.java, StandaloneDataConfig::class.java, JndiDataConfig::class.java)
    refresh()
}
```

In addition, you can also declaratively activate profiles through the `spring.profiles.active` property, which may be specified through system environment variables, JVM system properties, servlet context parameters in `web.xml`, or even as an entry in JNDI (see `PropertySource` Abstraction). In integration tests, active profiles can be declared by using the `@ActiveProfiles` annotation in the `spring-test` module (see context configuration with environment profiles).

Note that profiles are not an "either-or" proposition. You can activate multiple profiles at once. Programmatically, you can provide multiple profile names to the `setActiveProfiles()` method, which accepts `String…​` varargs. The following example activates multiple profiles:

```
ctx.getEnvironment().setActiveProfiles("profile1", "profile2");
```

```
ctx.getEnvironment().setActiveProfiles("profile1", "profile2")
```

Declaratively, `spring.profiles.active` may accept a comma-separated list of profile names, as the following example shows:

> -Dspring.profiles.active="profile1,profile2"

### Default Profile

The default profile represents the profile that is enabled by default when no profile is active.
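As a sketch (consistent with the earlier embedded-database examples), a bean that is registered only when no other profile is active might look like this:

```
@Configuration
@Profile("default")
public class DefaultDataConfig {

    @Bean
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.HSQL)
            .addScript("classpath:com/bank/config/sql/schema.sql")
            .build();
    }
}
```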
If no profile is active, the `dataSource` is created. You can see this as a way to provide a default definition for one or more beans. If any profile is enabled, the default profile does not apply.

You can change the name of the default profile by using `setDefaultProfiles()` on the `Environment` or, declaratively, by using the `spring.profiles.default` property.

## `PropertySource` Abstraction

Spring's `Environment` abstraction provides search operations over a configurable hierarchy of property sources. Consider the following listing:

```
ApplicationContext ctx = new GenericApplicationContext();
Environment env = ctx.getEnvironment();
boolean containsMyProperty = env.containsProperty("my-property");
System.out.println("Does my environment contain the 'my-property' property? " + containsMyProperty);
```

```
val ctx = GenericApplicationContext()
val env = ctx.environment
val containsMyProperty = env.containsProperty("my-property")
println("Does my environment contain the 'my-property' property? $containsMyProperty")
```

In the preceding snippet, we see a high-level way of asking Spring whether the `my-property` property is defined for the current environment. To answer this question, the `Environment` object performs a search over a set of `PropertySource` objects. A `PropertySource` is a simple abstraction over any source of key-value pairs, and Spring's `StandardEnvironment` is configured with two `PropertySource` objects: one representing the set of JVM system properties (`System.getProperties()`) and one representing the set of system environment variables (`System.getenv()`).

These default property sources are present for `StandardEnvironment`, for use in standalone applications. `StandardServletEnvironment` is populated with additional default property sources, including servlet config and servlet context parameters, as well as a `JndiPropertySource` if JNDI is available.

Concretely, when you use the `StandardEnvironment`, the call to `env.containsProperty("my-property")` returns true if a `my-property` system property or `my-property` environment variable is present at runtime.

Most importantly, the entire mechanism is configurable. Perhaps you have a custom source of properties that you want to integrate into this search. To do so, implement and instantiate your own `PropertySource` and add it to the set of `PropertySources` for the current `Environment`. The following example shows how to do so:

```
ConfigurableApplicationContext ctx = new GenericApplicationContext();
MutablePropertySources sources = ctx.getEnvironment().getPropertySources();
sources.addFirst(new MyPropertySource());
```

```
val ctx = GenericApplicationContext()
val sources = ctx.environment.propertySources
sources.addFirst(MyPropertySource())
```

In the preceding code, `MyPropertySource` has been added with highest precedence in the search. If it contains a `my-property` property, the property is detected and returned, in favor of any `my-property` property in any other `PropertySource`. The `MutablePropertySources` API exposes a number of methods that allow for precise manipulation of the set of property sources.

## Using `@PropertySource`

The `@PropertySource` annotation provides a convenient and declarative mechanism for adding a `PropertySource` to Spring's `Environment`. Given a file called `app.properties` that contains the key-value pair `testbean.name=myTestBean`, the following `@Configuration` class uses `@PropertySource` in such a way that a call to `testBean.getName()` returns `myTestBean`.
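The original listing is not preserved in this extract; a sketch of such a configuration class, assuming `app.properties` is on the classpath (the resource path and the `TestBean` type, a simple bean with a `name` property, are illustrative):

```
@Configuration
@PropertySource("classpath:/com/myco/app.properties")
public class AppConfig {

    @Autowired
    Environment env;

    @Bean
    public TestBean testBean() {
        TestBean testBean = new TestBean();
        // resolves testbean.name=myTestBean from app.properties
        testBean.setName(env.getProperty("testbean.name"));
        return testBean;
    }
}
```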
Any `${…​}` placeholders present in a `@PropertySource` resource location are resolved against the set of property sources already registered against the environment, as the following example shows:

```
@Configuration
@PropertySource("classpath:/com/\${my.placeholder:default/path}/app.properties")
class AppConfig {
    // ...
}
```

Assuming that `my.placeholder` is present in one of the property sources already registered (for example, system properties or environment variables), the placeholder is resolved to the corresponding value. If not, then `default/path` is used as a default. If no default is specified and a property cannot be resolved, an `IllegalArgumentException` is thrown.

## Placeholder Resolution in `<import/>` Statements

Historically, the value of placeholders in `<import/>` elements could be resolved only against JVM system properties or environment variables. This is no longer the case. Because the `Environment` abstraction is integrated throughout the container, it is easy to route resolution of placeholders through it. This means that you may configure the resolution process in any way you like. You can change the precedence of searching through system properties and environment variables or remove them entirely. You can also add your own property sources to the mix, as appropriate.

Concretely, the following statement works regardless of where the `customer` property is defined, as long as it is available in the `Environment`:

```
<beans>
    <import resource="com/bank/service/${customer}-config.xml"/>
</beans>
```

# Registering a `LoadTimeWeaver`

The `LoadTimeWeaver` is used by Spring to dynamically transform classes as they are loaded into the Java virtual machine (JVM). To enable load-time weaving, you can add the `@EnableLoadTimeWeaving` annotation to one of your `@Configuration` classes, as the following example shows:

```
@Configuration
@EnableLoadTimeWeaving
class AppConfig
```

Alternatively, for XML configuration, you can use the `context:load-time-weaver` element:

```
<beans>
    <context:load-time-weaver/>
</beans>
```

Once configured for the `ApplicationContext`, any bean within that `ApplicationContext` may implement `LoadTimeWeaverAware`, thereby receiving a reference to the load-time weaver instance. This is particularly useful in combination with Spring's JPA support, where load-time weaving may be necessary for JPA class transformation. Consult the `LoadTimeWeaver` javadoc for more detail. For more on AspectJ load-time weaving, see Load-time Weaving with AspectJ in the Spring Framework.

# Additional Capabilities of the `ApplicationContext`

As discussed in the chapter introduction, the `org.springframework.beans.factory` package provides basic functionality for managing and manipulating beans, including in a programmatic way. The `org.springframework.context` package adds the `ApplicationContext` interface, which extends the `BeanFactory` interface, in addition to extending other interfaces to provide additional functionality in a more application framework-oriented style. Many people use the `ApplicationContext` in a completely declarative fashion, not even creating it programmatically, but instead relying on support classes such as `ContextLoader` to automatically instantiate an `ApplicationContext` as part of the normal startup process of a Jakarta EE web application.

To enhance `BeanFactory` functionality in a more framework-oriented style, the context package also provides the following functionality:

* Access to messages in i18n-style, through the `MessageSource` interface.
* Access to resources, such as URLs and files, through the `ResourceLoader` interface.
* Event publication, namely to beans that implement the `ApplicationListener` interface, through the use of the `ApplicationEventPublisher` interface.
* Loading of multiple (hierarchical) contexts, letting each be focused on one particular layer, such as the web layer of an application, through the `HierarchicalBeanFactory` interface.

## Internationalization using `MessageSource`

The `ApplicationContext` interface extends an interface called `MessageSource` and, therefore, provides internationalization ("i18n") functionality. Spring also provides the `HierarchicalMessageSource` interface, which can resolve messages hierarchically. Together, these interfaces provide the foundation upon which Spring effects message resolution. The methods defined on these interfaces include:

* `String getMessage(String code, Object[] args, String default, Locale loc)`: The basic method used to retrieve a message from the `MessageSource`. When no message is found for the specified locale, the default message is used. Any arguments passed in become replacement values, using the `MessageFormat` functionality provided by the standard library.
* `String getMessage(String code, Object[] args, Locale loc)`: Essentially the same as the previous method but with one difference: no default message can be specified. If the message cannot be found, a `NoSuchMessageException` is thrown.
* `String getMessage(MessageSourceResolvable resolvable, Locale locale)`: All properties used in the preceding methods are also wrapped in a class named `MessageSourceResolvable`, which you can use with this method.

When an `ApplicationContext` is loaded, it automatically searches for a `MessageSource` bean defined in the context. The bean must have the name `messageSource`.
If such a bean is found, all calls to the preceding methods are delegated to the message source. If no message source is found, the `ApplicationContext` attempts to find a parent containing a bean with the same name. If it does, it uses that bean as the `MessageSource`. If the `ApplicationContext` cannot find any source for messages, an empty `DelegatingMessageSource` is instantiated in order to be able to accept calls to the methods defined above.

Spring provides three `MessageSource` implementations: `ResourceBundleMessageSource`, `ReloadableResourceBundleMessageSource`, and `StaticMessageSource`. All of them implement `HierarchicalMessageSource` in order to do nested messaging. The `StaticMessageSource` is rarely used but provides programmatic ways to add messages to the source. The following example shows a `ResourceBundleMessageSource` definition:

```
<beans>
    <bean id="messageSource"
            class="org.springframework.context.support.ResourceBundleMessageSource">
        <property name="basenames">
            <list>
                <value>format</value>
                <value>exceptions</value>
                <value>windows</value>
            </list>
        </property>
    </bean>
</beans>
```

The example assumes that you have three resource bundles called `format`, `exceptions`, and `windows` defined in your classpath. Any request to resolve a message is handled in the JDK-standard way of resolving messages through `ResourceBundle` objects. For the purposes of the example, assume the contents of two of the above resource bundle files are as follows:

```
# in format.properties
message=Alligators rock!
```

```
# in exceptions.properties
argument.required=The {0} argument is required.
```

The next example shows a program to run the `MessageSource` functionality. Remember that all `ApplicationContext` implementations are also `MessageSource` implementations and so can be cast to the `MessageSource` interface.

```
public static void main(String[] args) {
    MessageSource resources = new ClassPathXmlApplicationContext("beans.xml");
    String message = resources.getMessage("message", null, "Default", Locale.ENGLISH);
    System.out.println(message);
}
```

```
fun main() {
    val resources = ClassPathXmlApplicationContext("beans.xml")
    val message = resources.getMessage("message", null, "Default", Locale.ENGLISH)
    println(message)
}
```

The resulting output from the above program is as follows:

> Alligators rock!

To summarize, the `MessageSource` is defined in a file called `beans.xml`, which exists at the root of your classpath. The `messageSource` bean definition refers to a number of resource bundles through its `basenames` property. The three files that are passed in the list to the `basenames` property exist as files at the root of your classpath and are called `format.properties`, `exceptions.properties`, and `windows.properties`, respectively.

The next example shows arguments passed to the message lookup. These arguments are converted into `String` objects and inserted into placeholders in the lookup message.
```
<beans>

    <!-- this MessageSource is being used in a web application -->
    <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource">
        <property name="basename" value="exceptions"/>
    </bean>

    <!-- lets inject the above MessageSource into this POJO -->
    <bean id="example" class="com.something.Example">
        <property name="messages" ref="messageSource"/>
    </bean>

</beans>
```

```
public class Example {

    private MessageSource messages;

    public void setMessages(MessageSource messages) {
        this.messages = messages;
    }

    public void execute() {
        String message = this.messages.getMessage("argument.required",
            new Object [] {"userDao"}, "Required", Locale.ENGLISH);
        System.out.println(message);
    }
}
```

```
class Example {

    lateinit var messages: MessageSource

    fun execute() {
        val message = messages.getMessage("argument.required",
                arrayOf("userDao"), "Required", Locale.ENGLISH)
        println(message)
    }
}
```

The resulting output from the invocation of the `execute()` method is as follows:

> The userDao argument is required.

With regard to internationalization ("i18n"), Spring's various `MessageSource` implementations follow the same locale resolution and fallback rules as the standard JDK `ResourceBundle`. In short, and continuing with the example `messageSource` defined previously, if you want to resolve messages against the British (`en-GB`) locale, you would create files called `format_en_GB.properties`, `exceptions_en_GB.properties`, and `windows_en_GB.properties`, respectively.

Typically, locale resolution is managed by the surrounding environment of the application. In the following example, the locale against which (British) messages are resolved is specified manually:

> # in exceptions_en_GB.properties
> argument.required=Ebagum lad, the ''{0}'' argument is required, I say, required.

```
public static void main(final String[] args) {
    MessageSource resources = new ClassPathXmlApplicationContext("beans.xml");
    String message = resources.getMessage("argument.required",
        new Object [] {"userDao"}, "Required", Locale.UK);
    System.out.println(message);
}
```

```
fun main() {
    val resources = ClassPathXmlApplicationContext("beans.xml")
    val message = resources.getMessage("argument.required",
            arrayOf("userDao"), "Required", Locale.UK)
    println(message)
}
```

The resulting output from the running of the above program is as follows:

> Ebagum lad, the 'userDao' argument is required, I say, required.

You can also use the `MessageSourceAware` interface to acquire a reference to any `MessageSource` that has been defined. Any bean that is defined in an `ApplicationContext` that implements the `MessageSourceAware` interface is injected with the application context's `MessageSource` when the bean is created and configured.

Because Spring's `MessageSource` is based on Java's `ResourceBundle`, it does not merge bundles with the same base name, but uses only the first bundle found. Subsequent message bundles with the same base name are ignored.

## Standard and Custom Events

Event handling in the `ApplicationContext` is provided through the `ApplicationEvent` class and the `ApplicationListener` interface. If a bean that implements the `ApplicationListener` interface is deployed into the context, every time an `ApplicationEvent` gets published to the `ApplicationContext`, that bean is notified. Essentially, this is the standard Observer design pattern.

As of Spring 4.2, the event infrastructure has been significantly improved and offers an annotation-based model as well as the ability to publish any arbitrary event (that is, an object that does not necessarily extend from `ApplicationEvent`).

The following table describes the standard events that Spring provides:

| Event | Explanation |
| --- | --- |
| `ContextRefreshedEvent` | Published when the `ApplicationContext` is initialized or refreshed (for example, by using the `refresh()` method on the `ConfigurableApplicationContext` interface). |
| `ContextStartedEvent` | Published when the `ApplicationContext` is started by using the `start()` method on the `ConfigurableApplicationContext` interface. |
| `ContextStoppedEvent` | Published when the `ApplicationContext` is stopped by using the `stop()` method on the `ConfigurableApplicationContext` interface. |
| `ContextClosedEvent` | Published when the `ApplicationContext` is being closed by using the `close()` method or via a JVM shutdown hook. |
| `RequestHandledEvent` | A web-specific event telling all beans that an HTTP request has been serviced. |
| `ServletRequestHandledEvent` | A subclass of `RequestHandledEvent` that adds Servlet-specific context information. |

You can also create and publish your own custom events.
The following example shows a simple class that extends Spring's `ApplicationEvent` base class:

```
public class BlockedListEvent extends ApplicationEvent {

    private final String address;
    private final String content;

    public BlockedListEvent(Object source, String address, String content) {
        super(source);
        this.address = address;
        this.content = content;
    }

    // accessor and other methods...
}
```

```
class BlockedListEvent(source: Any, val address: String, val content: String) : ApplicationEvent(source)
```

To publish a custom `ApplicationEvent`, call the `publishEvent()` method on an `ApplicationEventPublisher`. Typically, this is done by creating a class that implements `ApplicationEventPublisherAware` and registering it as a Spring bean. The following example shows such a class:

```
public class EmailService implements ApplicationEventPublisherAware {

    private List<String> blockedList;
    private ApplicationEventPublisher publisher;

    public void setBlockedList(List<String> blockedList) {
        this.blockedList = blockedList;
    }

    public void setApplicationEventPublisher(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    public void sendEmail(String address, String content) {
        if (blockedList.contains(address)) {
            publisher.publishEvent(new BlockedListEvent(this, address, content));
            return;
        }
        // send email...
    }
}
```

```
class EmailService : ApplicationEventPublisherAware {

    private lateinit var blockedList: List<String>
    private lateinit var publisher: ApplicationEventPublisher

    fun setBlockedList(blockedList: List<String>) {
        this.blockedList = blockedList
    }

    override fun setApplicationEventPublisher(publisher: ApplicationEventPublisher) {
        this.publisher = publisher
    }

    fun sendEmail(address: String, content: String) {
        if (blockedList.contains(address)) {
            publisher.publishEvent(BlockedListEvent(this, address, content))
            return
        }
        // send email...
    }
}
```

At configuration time, the Spring container detects that `EmailService` implements `ApplicationEventPublisherAware` and automatically calls `setApplicationEventPublisher()`. In reality, the parameter passed in is the Spring container itself. You are interacting with the application context through its `ApplicationEventPublisher` interface.

To receive the custom `ApplicationEvent`, you can create a class that implements `ApplicationListener` and register it as a Spring bean. The following example shows such a class:

```
public class BlockedListNotifier implements ApplicationListener<BlockedListEvent> {

    public void onApplicationEvent(BlockedListEvent event) {
        // notify appropriate parties via notificationAddress...
    }
}
```

```
class BlockedListNotifier : ApplicationListener<BlockedListEvent> {

    override fun onApplicationEvent(event: BlockedListEvent) {
        // notify appropriate parties via notificationAddress...
    }
}
```

Notice that `ApplicationListener` is generically parameterized with the type of your custom event (`BlockedListEvent` in the preceding example). This means that the `onApplicationEvent()` method can remain type-safe, avoiding any need for downcasting. You can register as many event listeners as you wish, but note that, by default, event listeners receive events synchronously. This means that the `publishEvent()` method blocks until all listeners have finished processing the event. One advantage of this synchronous and single-threaded approach is that, when a listener receives an event, it operates inside the transaction context of the publisher if a transaction context is available. If another strategy for event publication becomes necessary (for example, asynchronous event processing by default), see the javadoc for Spring's `ApplicationEventMulticaster` interface and the `SimpleApplicationEventMulticaster` implementation for configuration options that can be applied to a custom `applicationEventMulticaster` bean definition.
The following example shows the bean definitions used to register and configure each of the classes above:

```
<bean id="emailService" class="example.EmailService">
    <property name="blockedList">
        <list>
            <value>[email protected]</value>
            <value>[email protected]</value>
            <value>[email protected]</value>
        </list>
    </property>
</bean>

<bean id="blockedListNotifier" class="example.BlockedListNotifier">
    <property name="notificationAddress" value="[email protected]"/>
</bean>

<!-- optional: a custom ApplicationEventMulticaster definition -->
<bean id="applicationEventMulticaster"
        class="org.springframework.context.event.SimpleApplicationEventMulticaster">
    <property name="taskExecutor" ref="..."/>
    <property name="errorHandler" ref="..."/>
</bean>
```

Putting it all together, when the `sendEmail()` method of the `emailService` bean is called, if there are any email messages that should be blocked, a custom event of type `BlockedListEvent` is published. The `blockedListNotifier` bean is registered as an `ApplicationListener` and receives the `BlockedListEvent`, at which point it can notify appropriate parties.

Spring's eventing mechanism is designed for simple communication between Spring beans within the same application context. However, for more sophisticated enterprise integration needs, the separately maintained Spring Integration project provides complete support for building lightweight, pattern-oriented, event-driven architectures that build upon the well-known Spring programming model.

### Annotation-based Event Listeners

You can register an event listener on any method of a managed bean by using the `@EventListener` annotation. The `BlockedListNotifier` shown earlier can be rewritten without implementing `ApplicationListener`.
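A sketch of the annotation-based variant; the setter mirrors the `notificationAddress` property used in the bean definition above:

```
public class BlockedListNotifier {

    private String notificationAddress;

    public void setNotificationAddress(String notificationAddress) {
        this.notificationAddress = notificationAddress;
    }

    // no listener interface needed; the annotation registers this method as a listener
    @EventListener
    public void processBlockedListEvent(BlockedListEvent event) {
        // notify appropriate parties via notificationAddress...
    }
}
```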
The method signature once again declares the event type to which it listens, but, this time, with a flexible name and without implementing a specific listener interface. The event type can also be narrowed through generics as long as the actual event type resolves your generic parameter in its implementation hierarchy.

If your method should listen to several events or if you want to define it with no parameter at all, the event types can also be specified on the annotation itself. The following example shows how to do so:

```
@EventListener({ContextStartedEvent.class, ContextRefreshedEvent.class})
public void handleContextStart() {
    // ...
}
```

```
@EventListener(ContextStartedEvent::class, ContextRefreshedEvent::class)
fun handleContextStart() {
    // ...
}
```

It is also possible to add additional runtime filtering by using the `condition` attribute of the annotation, which defines a SpEL expression that should match to actually invoke the method for a particular event. The following example shows how our notifier can be rewritten to be invoked only if the `content` attribute of the event is equal to `my-event`:

```
@EventListener(condition = "#blEvent.content == 'my-event'")
public void processBlockedListEvent(BlockedListEvent blEvent) {
    // notify appropriate parties via notificationAddress...
}
```

```
@EventListener(condition = "#blEvent.content == 'my-event'")
fun processBlockedListEvent(blEvent: BlockedListEvent) {
    // notify appropriate parties via notificationAddress...
}
```

Each SpEL expression evaluates against a dedicated context. The following table lists the items made available to the context so that you can use them for conditional event processing:

| Name | Location | Description | Example |
| --- | --- | --- | --- |
| Event | root object | The actual `ApplicationEvent`. | `#root.event` or `event` |
| Arguments array | root object | The arguments (as an object array) used to invoke the method. | `#root.args` or `args`; `args[0]` to access the first argument, and so on |
| Argument name | evaluation context | The name of any of the method arguments. If the names are not available (for example, because there is no debug information in the compiled byte code), individual arguments are also available through the `#a<#arg>` syntax, where `<#arg>` stands for the argument index (starting from 0). | `#blEvent` or `#a0` (you can also use `#p0` or `#p<#arg>` parameter notation as an alias) |

Note that `#root.event` gives you access to the underlying event, even if your method signature actually refers to an arbitrary object that was published.

If you need to publish an event as the result of processing another event, you can change the method signature to return the event that should be published, as the following example shows:

```
@EventListener
public ListUpdateEvent handleBlockedListEvent(BlockedListEvent event) {
    // notify appropriate parties via notificationAddress and
    // then publish a ListUpdateEvent...
}
```

```
@EventListener
fun handleBlockedListEvent(event: BlockedListEvent): ListUpdateEvent {
    // notify appropriate parties via notificationAddress and
    // then publish a ListUpdateEvent...
}
```

This feature is not supported for asynchronous listeners. The `handleBlockedListEvent()` method publishes a new `ListUpdateEvent` for every `BlockedListEvent` that it handles. If you need to publish several events, you can return a `Collection` or an array of events instead.

### Asynchronous Listeners

If you want a particular listener to process events asynchronously, you can reuse the regular `@Async` support. The following example shows how to do so:

```
@EventListener
@Async
public void processBlockedListEvent(BlockedListEvent event) {
    // BlockedListEvent is processed in a separate thread
}
```

```
@EventListener
@Async
fun processBlockedListEvent(event: BlockedListEvent) {
    // BlockedListEvent is processed in a separate thread
}
```

Be aware of the following limitations when using asynchronous events:

* If an asynchronous event listener throws an `Exception`, it is not propagated to the caller. See `AsyncUncaughtExceptionHandler` for more details.
* Asynchronous event listener methods cannot publish a subsequent event by returning a value. If you need to publish another event as the result of the processing, inject an `ApplicationEventPublisher` to publish the event manually.

### Ordering Listeners

If you need one listener to be invoked before another one, you can add the `@Order` annotation to the method declaration.
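A sketch of an ordered listener; the order value of 42 is arbitrary (lower values are invoked first):

```
@EventListener
@Order(42)
public void processBlockedListEvent(BlockedListEvent event) {
    // notify appropriate parties via notificationAddress...
}
```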
The following event shows how to do so:

```
public class EntityCreatedEvent<T> extends ApplicationEvent implements ResolvableTypeProvider {

    public EntityCreatedEvent(T entity) {
        super(entity);
    }

    @Override
    public ResolvableType getResolvableType() {
        return ResolvableType.forClassWithGenerics(getClass(), ResolvableType.forInstance(getSource()));
    }
}
```

```
class EntityCreatedEvent<T>(entity: T) : ApplicationEvent(entity), ResolvableTypeProvider {

    override fun getResolvableType(): ResolvableType? {
        return ResolvableType.forClassWithGenerics(javaClass, ResolvableType.forInstance(getSource()))
    }
}
```

This works not only for `ApplicationEvent` but also for any arbitrary object that you send as an event.

Finally, as with classic `ApplicationListener` implementations, the actual multicasting happens via a context-wide `ApplicationEventMulticaster` at runtime. By default, this is a `SimpleApplicationEventMulticaster` with synchronous event publication in the caller thread. This can be replaced/customized through an "applicationEventMulticaster" bean definition, e.g. for processing all events asynchronously and/or for handling listener exceptions:

```
@Bean
ApplicationEventMulticaster applicationEventMulticaster() {
    SimpleApplicationEventMulticaster multicaster = new SimpleApplicationEventMulticaster();
    multicaster.setTaskExecutor(...);
    multicaster.setErrorHandler(...);
    return multicaster;
}
```

## Convenient Access to Low-level Resources

For optimal usage and understanding of application contexts, you should familiarize yourself with Spring’s `Resource` abstraction, as described in Resources. An application context is a `ResourceLoader`, which can be used to load `Resource` objects. A `Resource` is essentially a more feature-rich version of the JDK `java.net.URL` class. In fact, implementations of `Resource` wrap an instance of `java.net.URL`, where appropriate. A `Resource` can obtain low-level resources from almost any location in a transparent fashion, including from the classpath, a filesystem location, anywhere describable with a standard URL, and some other variations. If the resource location string is a simple path without any special prefixes, where those resources come from is specific and appropriate to the actual application context type.

You can configure a bean deployed into the application context to implement the special callback interface, `ResourceLoaderAware`, to be automatically called back at initialization time with the application context itself passed in as the `ResourceLoader`. You can also expose properties of type `Resource`, to be used to access static resources. They are injected into the bean like any other properties. You can specify those `Resource` properties as simple `String` paths and rely on automatic conversion from those text strings to actual `Resource` objects when the bean is deployed.

The location path or paths supplied to an `ApplicationContext` constructor are actually resource strings and, in simple form, are treated appropriately according to the specific context implementation. For example, `ClassPathXmlApplicationContext` treats a simple location path as a classpath location. You can also use location paths (resource strings) with special prefixes to force loading of definitions from the classpath or a URL, regardless of the actual context type.

## Application Startup Tracking

The `ApplicationContext` manages the lifecycle of Spring applications and provides a rich programming model around components. As a result, complex applications can have equally complex component graphs and startup phases.
Tracking the application startup steps with specific metrics can help understand where time is being spent during the startup phase, but it can also be used as a way to better understand the context lifecycle as a whole.

The `AbstractApplicationContext` (and its subclasses) is instrumented with an `ApplicationStartup`, which collects `StartupStep` data about various startup phases:

* application context lifecycle (base packages scanning, config classes management)
* beans lifecycle (instantiation, smart initialization, post processing)
* application events processing

Here is an example of instrumentation in the `AnnotationConfigApplicationContext`:

```
// create a startup step and start recording
StartupStep scanPackages = getApplicationStartup().start("spring.context.base-packages.scan");
// add tagging information to the current step
scanPackages.tag("packages", () -> Arrays.toString(basePackages));
// perform the actual phase we're instrumenting
this.scanner.scan(basePackages);
// end the current step
scanPackages.end();
```

```
// create a startup step and start recording
val scanPackages = getApplicationStartup().start("spring.context.base-packages.scan")
// add tagging information to the current step
scanPackages.tag("packages") { Arrays.toString(basePackages) }
// perform the actual phase we're instrumenting
this.scanner.scan(basePackages)
// end the current step
scanPackages.end()
```

The application context is already instrumented with multiple steps. Once recorded, these startup steps can be collected, displayed, and analyzed with specific tools. For a complete list of existing startup steps, you can check out the dedicated appendix section.

The default `ApplicationStartup` implementation is a no-op variant, for minimal overhead. This means no metrics will be collected during application startup by default. Spring Framework ships with an implementation for tracking startup steps with Java Flight Recorder: `FlightRecorderApplicationStartup`. To use this variant, you must supply an instance of it to the `ApplicationContext` as soon as it has been created. Developers can also use the `ApplicationStartup` infrastructure if they’re providing their own `AbstractApplicationContext` subclass, or if they wish to collect more precise data.

`ApplicationStartup` is meant to be used only during application startup and for the core container; this is by no means a replacement for Java profilers or metrics libraries like Micrometer.

To start collecting custom `StartupStep` data, components can either get the `ApplicationStartup` instance from the application context directly, make their component implement `ApplicationStartupAware`, or ask for the `ApplicationStartup` type on any injection point. Developers should not use the `spring.*` namespace when creating custom startup steps; this namespace is reserved for internal Spring usage and is subject to change.

## Convenient ApplicationContext Instantiation for Web Applications

You can create `ApplicationContext` instances declaratively by using, for example, a `ContextLoader`. Of course, you can also create `ApplicationContext` instances programmatically by using one of the `ApplicationContext` implementations. You can register an `ApplicationContext` by using the `ContextLoaderListener`, as the following example shows:

```
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/daoContext.xml /WEB-INF/applicationContext.xml</param-value>
</context-param>

<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
```

The listener inspects the `contextConfigLocation` parameter. If the parameter does not exist, the listener uses `/WEB-INF/applicationContext.xml` as a default. When the parameter does exist, the listener separates the `String` by using predefined delimiters (comma, semicolon, and whitespace) and uses the values as locations where application contexts are searched.
Ant-style path patterns are supported as well. Examples are `/WEB-INF/*Context.xml` (for all files with names that end with `Context.xml` and that reside in the `WEB-INF` directory) and `/WEB-INF/**/*Context.xml` (for all such files in any subdirectory of `WEB-INF`).

## Deploying a Spring `ApplicationContext` as a Jakarta EE RAR File

It is possible to deploy a Spring `ApplicationContext` as a RAR file, encapsulating the context and all of its required bean classes and library JARs in a Jakarta EE RAR deployment unit. This is the equivalent of bootstrapping a stand-alone `ApplicationContext` (only hosted in a Jakarta EE environment) that is able to access the Jakarta EE server’s facilities. RAR deployment is a more natural alternative to a scenario of deploying a headless WAR file — in effect, a WAR file without any HTTP entry points that is used only for bootstrapping a Spring `ApplicationContext` in a Jakarta EE environment.

RAR deployment is ideal for application contexts that do not need HTTP entry points but rather consist only of message endpoints and scheduled jobs. Beans in such a context can use application server resources such as the JTA transaction manager and JNDI-bound JDBC `DataSource` instances and JMS `ConnectionFactory` instances and can also register with the platform’s JMX server — all through Spring’s standard transaction management and JNDI and JMX support facilities. Application components can also interact with the application server’s JCA `WorkManager` through Spring’s `TaskExecutor` abstraction.

See the javadoc of the `SpringContextResourceAdapter` class for the configuration details involved in RAR deployment.

For a simple deployment of a Spring ApplicationContext as a Jakarta EE RAR file:

* Package all application classes into a RAR file (which is a standard JAR file with a different file extension).
* Add all required library JARs into the root of the RAR archive.
* Add a `META-INF/ra.xml` deployment descriptor (as shown in the javadoc for `SpringContextResourceAdapter`) and the corresponding Spring XML bean definition file(s) (typically `META-INF/applicationContext.xml`).
* Drop the resulting RAR file into your application server’s deployment directory.

Such RAR deployment units are usually self-contained. They do not expose components to the outside world, not even to other modules of the same application. Interaction with a RAR-based `ApplicationContext` usually occurs through JMS destinations that it shares with other modules. A RAR-based `ApplicationContext` may also, for example, schedule some jobs or react to new files in the file system (or the like).

# The `BeanFactory` API

The `BeanFactory` API provides the underlying basis for Spring’s IoC functionality. Its specific contracts are mostly used in integration with other parts of Spring and related third-party frameworks, and its `DefaultListableBeanFactory` implementation is a key delegate within the higher-level `GenericApplicationContext` container.

`BeanFactory` and related interfaces (such as `BeanFactoryAware`, `InitializingBean`, `DisposableBean`) are important integration points for other framework components. By not requiring any annotations or even reflection, they allow for very efficient interaction between the container and its components. Application-level beans may use the same callback interfaces but typically prefer declarative dependency injection instead, either through annotations or through programmatic configuration.

Note that the core `BeanFactory` API level and its `DefaultListableBeanFactory` implementation do not make assumptions about the configuration format or any component annotations to be used. All of these flavors come in through extensions (such as `XmlBeanDefinitionReader` and `AutowiredAnnotationBeanPostProcessor`) and operate on shared `BeanDefinition` objects as a core metadata representation.
This is the essence of what makes Spring’s container so flexible and extensible.

## `BeanFactory` or `ApplicationContext`?

This section explains the differences between the `BeanFactory` and `ApplicationContext` container levels and the implications on bootstrapping.

You should use an `ApplicationContext` unless you have a good reason for not doing so, with `GenericApplicationContext` and its subclass `AnnotationConfigApplicationContext` as the common implementations for custom bootstrapping. These are the primary entry points to Spring’s core container for all common purposes: loading of configuration files, triggering a classpath scan, programmatically registering bean definitions and annotated classes, and (as of 5.0) registering functional bean definitions.

Because an `ApplicationContext` includes all the functionality of a `BeanFactory`, it is generally recommended over a plain `BeanFactory`, except for scenarios where full control over bean processing is needed. Within an `ApplicationContext` (such as the `GenericApplicationContext` implementation), several kinds of beans are detected by convention (that is, by bean name or by bean type — in particular, post-processors), while a plain `DefaultListableBeanFactory` is agnostic about any special beans.

For many extended container features, such as annotation processing and AOP proxying, the `BeanPostProcessor` extension point is essential. If you use only a plain `DefaultListableBeanFactory`, such post-processors do not get detected and activated by default. This situation could be confusing, because nothing is actually wrong with your bean configuration. Rather, in such a scenario, the container needs to be fully bootstrapped through additional setup.

The following table lists features provided by the `BeanFactory` and `ApplicationContext` interfaces and implementations.

| Feature | `BeanFactory` | `ApplicationContext` |
| --- | --- | --- |
| Bean instantiation/wiring | Yes | Yes |
| Integrated lifecycle management | No | Yes |
| Automatic `BeanPostProcessor` registration | No | Yes |
| Automatic `BeanFactoryPostProcessor` registration | No | Yes |
| Convenient `MessageSource` access (for internationalization) | No | Yes |
| Built-in `ApplicationEvent` publication mechanism | No | Yes |

To explicitly register a bean post-processor with a `DefaultListableBeanFactory`, you need to programmatically call `addBeanPostProcessor`, as the following example shows:

```
DefaultListableBeanFactory factory = new DefaultListableBeanFactory();
// populate the factory with bean definitions

// now register any needed BeanPostProcessor instances
factory.addBeanPostProcessor(new AutowiredAnnotationBeanPostProcessor());
factory.addBeanPostProcessor(new MyBeanPostProcessor());
```

```
val factory = DefaultListableBeanFactory()
// populate the factory with bean definitions

// now register any needed BeanPostProcessor instances
factory.addBeanPostProcessor(AutowiredAnnotationBeanPostProcessor())
factory.addBeanPostProcessor(MyBeanPostProcessor())
```

To apply a `BeanFactoryPostProcessor` to a plain `DefaultListableBeanFactory`, you need to call its `postProcessBeanFactory` method, as the following example shows:

```
DefaultListableBeanFactory factory = new DefaultListableBeanFactory();
XmlBeanDefinitionReader reader = new XmlBeanDefinitionReader(factory);
reader.loadBeanDefinitions(new FileSystemResource("beans.xml"));

// bring in some property values from a Properties file
PropertySourcesPlaceholderConfigurer cfg = new PropertySourcesPlaceholderConfigurer();
cfg.setLocation(new FileSystemResource("jdbc.properties"));

// now actually do the replacement
cfg.postProcessBeanFactory(factory);
```

```
val factory = DefaultListableBeanFactory()
val reader = XmlBeanDefinitionReader(factory)
reader.loadBeanDefinitions(FileSystemResource("beans.xml"))

// bring in some property values from a Properties file
val cfg = PropertySourcesPlaceholderConfigurer()
cfg.setLocation(FileSystemResource("jdbc.properties"))

// now actually do the replacement
cfg.postProcessBeanFactory(factory)
```

In both cases, the explicit registration steps are inconvenient, which is why the various
`ApplicationContext` variants are preferred over a plain `DefaultListableBeanFactory` in Spring-backed applications, especially when relying on `BeanFactoryPostProcessor` and `BeanPostProcessor` instances for extended container functionality in a typical enterprise setup.

# Resources

This chapter covers how Spring handles resources and how you can work with resources in Spring. It includes the following topics:

* The `Resource` Interface
* Built-in `Resource` Implementations
* The `ResourceLoader` Interface
* The `ResourcePatternResolver` Interface
* The `ResourceLoaderAware` Interface
* Resources as Dependencies
* Application Contexts and Resource Paths

Java’s standard `java.net.URL` class and standard handlers for various URL prefixes, unfortunately, are not adequate for all access to low-level resources. For example, there is no standardized `URL` implementation that may be used to access a resource that needs to be obtained from the classpath or relative to a `ServletContext`. While it is possible to register new handlers for specialized `URL` prefixes (similar to existing handlers for prefixes such as `http:`), this is generally quite complicated, and the `URL` interface still lacks some desirable functionality, such as a method to check for the existence of the resource being pointed to.

## The `Resource` Interface

Spring’s `Resource` interface, located in the `org.springframework.core.io` package, is meant to be a more capable interface for abstracting access to low-level resources. The following listing provides an overview of the `Resource` interface. See the `Resource` javadoc for further details.

```
public interface Resource extends InputStreamSource {

    boolean exists();

    boolean isReadable();

    boolean isOpen();

    boolean isFile();

    URL getURL() throws IOException;

    URI getURI() throws IOException;

    File getFile() throws IOException;

    ReadableByteChannel readableChannel() throws IOException;

    long contentLength() throws IOException;

    long lastModified() throws IOException;

    Resource createRelative(String relativePath) throws IOException;

    String getFilename();

    String getDescription();
}
```

As the definition of the `Resource` interface shows, it extends the `InputStreamSource` interface. The following listing shows the definition of the `InputStreamSource` interface:

```
public interface InputStreamSource {

    InputStream getInputStream() throws IOException;
}
```

Some of the most important methods from the `Resource` interface are:

* `getInputStream()`: Locates and opens the resource, returning an `InputStream` for reading from the resource. It is expected that each invocation returns a fresh `InputStream`. It is the responsibility of the caller to close the stream.
* `exists()`: Returns a `boolean` indicating whether this resource actually exists in physical form.
* `isOpen()`: Returns a `boolean` indicating whether this resource represents a handle with an open stream. If `true`, the `InputStream` cannot be read multiple times and must be read once only and then closed to avoid resource leaks. Returns `false` for all usual resource implementations, with the exception of `InputStreamResource`.
* `getDescription()`: Returns a description for this resource, to be used for error output when working with the resource. This is often the fully qualified file name or the actual URL of the resource.

Other methods let you obtain an actual `URL` or `File` object representing the resource (if the underlying implementation is compatible and supports that functionality). Some implementations of the `Resource` interface also implement the extended `WritableResource` interface for a resource that supports writing to it.

Spring itself uses the `Resource` abstraction extensively, as an argument type in many method signatures when a resource is needed.
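Taken together, the methods above support a simple check-open-read-close flow against any resource location. The following is a minimal sketch of that flow in plain Java; the `templates/greeting.txt` classpath file is a hypothetical example, not something from the surrounding text:

```
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import org.springframework.core.io.ClassPathResource;
import org.springframework.core.io.Resource;

public class ResourceReadExample {

    public static void main(String[] args) throws IOException {
        // a hypothetical classpath file, used purely for illustration
        Resource resource = new ClassPathResource("templates/greeting.txt");

        if (resource.exists()) {
            // each getInputStream() call returns a fresh stream; the caller must close it
            try (InputStream in = resource.getInputStream()) {
                System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }
        }
        else {
            // getDescription() is intended for error output like this
            System.err.println("Missing resource: " + resource.getDescription());
        }
    }
}
```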
Other methods in some Spring APIs (such as the constructors to various `ApplicationContext` implementations) take a `String` which, in unadorned or simple form, is used to create a `Resource` appropriate to that context implementation or which, via special prefixes on the `String` path, lets the caller specify that a specific `Resource` implementation must be created and used.

While the `Resource` interface is used a lot with Spring and by Spring, it is actually very convenient to use as a general utility class by itself in your own code, for access to resources, even when your code does not know or care about any other parts of Spring. While this couples your code to Spring, it really only couples it to this small set of utility classes, which serves as a more capable replacement for `URL` and can be considered equivalent to any other library you would use for this purpose.

## Built-in `Resource` Implementations

Spring includes several built-in `Resource` implementations, each described below. For a complete list of `Resource` implementations available in Spring, consult the "All Known Implementing Classes" section of the `Resource` javadoc.

### `UrlResource`

`UrlResource` wraps a `java.net.URL` and can be used to access any object that is normally accessible with a URL, such as files, an HTTPS target, an FTP target, and others. All URLs have a standardized `String` representation, such that appropriate standardized prefixes are used to indicate one URL type from another. This includes `file:` for accessing filesystem paths, `https:` for accessing resources through the HTTPS protocol, `ftp:` for accessing resources through FTP, and others.

A `UrlResource` is created by Java code by explicitly using the `UrlResource` constructor but is often created implicitly when you call an API method that takes a `String` argument meant to represent a path. For the latter case, a JavaBeans `PropertyEditor` ultimately decides which type of `Resource` to create. If the path string contains a well-known (to the property editor, that is) prefix (such as `classpath:`), it creates an appropriate specialized `Resource` for that prefix. However, if it does not recognize the prefix, it assumes the string is a standard URL string and creates a `UrlResource`.

### `ClassPathResource`

This class represents a resource that should be obtained from the classpath. It uses either the thread context class loader, a given class loader, or a given class for loading resources.

This `Resource` implementation supports resolution as a `java.io.File` if the class path resource resides in the file system but not for classpath resources that reside in a jar and have not been expanded (by the servlet engine or whatever the environment is) to the filesystem. To address this, the various `Resource` implementations always support resolution as a `java.net.URL`.

A `ClassPathResource` is created by Java code by explicitly using the `ClassPathResource` constructor but is often created implicitly when you call an API method that takes a `String` argument meant to represent a path. For the latter case, a JavaBeans `PropertyEditor` recognizes the special prefix, `classpath:`, on the string path and creates a `ClassPathResource` in that case.

### `FileSystemResource`

This is a `Resource` implementation for `java.io.File` handles. It also supports `java.nio.file.Path` handles, applying Spring’s standard String-based path transformations but performing all operations via the `java.nio.file.Files` API. For pure `java.nio.file.Path` based support, use a `PathResource` instead.
`FileSystemResource` supports resolution as a `File` and as a `URL`.

### `PathResource`

This is a `Resource` implementation for `java.nio.file.Path` handles, performing all operations and transformations via the `Path` API. It supports resolution as a `File` and as a `URL` and also implements the extended `WritableResource` interface. `PathResource` is effectively a pure `java.nio.file.Path` based alternative to `FileSystemResource` with different `createRelative` behavior.

### `ServletContextResource`

This is a `Resource` implementation for `ServletContext` resources that interprets relative paths within the relevant web application’s root directory. It always supports stream access and URL access but allows `java.io.File` access only when the web application archive is expanded and the resource is physically on the filesystem. Whether or not it is expanded and on the filesystem or accessed directly from the JAR or somewhere else like a database (which is conceivable) is actually dependent on the Servlet container.

### `InputStreamResource`

An `InputStreamResource` is a `Resource` implementation for a given `InputStream`. It should be used only if no specific `Resource` implementation is applicable. In particular, prefer `ByteArrayResource` or any of the file-based `Resource` implementations where possible.

In contrast to other `Resource` implementations, this is a descriptor for an already-opened resource. Therefore, it returns `true` from `isOpen()`. Do not use it if you need to keep the resource descriptor somewhere or if you need to read a stream multiple times.

## The `ResourceLoader` Interface

The `ResourceLoader` interface is meant to be implemented by objects that can return (that is, load) `Resource` instances. The following listing shows the `ResourceLoader` interface definition:

```
public interface ResourceLoader {

    Resource getResource(String location);

    ClassLoader getClassLoader();
}
```

All application contexts implement the `ResourceLoader` interface. Therefore, all application contexts may be used to obtain `Resource` instances.

When you call `getResource()` on a specific application context, and the location path specified doesn’t have a specific prefix, you get back a `Resource` type that is appropriate to that particular application context. For example, assume the following snippet of code was run against a `ClassPathXmlApplicationContext` instance:

```
Resource template = ctx.getResource("some/resource/path/myTemplate.txt");
```

```
val template = ctx.getResource("some/resource/path/myTemplate.txt")
```

Against a `ClassPathXmlApplicationContext`, that code returns a `ClassPathResource`. If the same method were run against a `FileSystemXmlApplicationContext` instance, it would return a `FileSystemResource`. For a `WebApplicationContext`, it would return a `ServletContextResource`. It would similarly return appropriate objects for each context. As a result, you can load resources in a fashion appropriate to the particular application context.

On the other hand, you may also force `ClassPathResource` to be used, regardless of the application context type, by specifying the special `classpath:` prefix, as the following example shows:

```
Resource template = ctx.getResource("classpath:some/resource/path/myTemplate.txt");
```

```
val template = ctx.getResource("classpath:some/resource/path/myTemplate.txt")
```

Similarly, you can force a `UrlResource` to be used by specifying any of the standard `java.net.URL` prefixes.
The following examples use the `file` and `https` prefixes:

```
Resource template = ctx.getResource("file:///some/resource/path/myTemplate.txt");
```

```
Resource template = ctx.getResource("https://myhost.com/resource/path/myTemplate.txt");
```

The following table summarizes the strategy for converting `String` objects to `Resource` objects:

| Prefix | Example | Explanation |
| --- | --- | --- |
| `classpath:` | `classpath:com/myapp/config.xml` | Loaded from the classpath. |
| `file:` | `file:///data/config.xml` | Loaded as a `URL` from the filesystem. See also `FileSystemResource` Caveats. |
| `https:` | `https://myserver/logo.png` | Loaded as a `URL`. |
| (none) | `/data/config.xml` | Depends on the underlying `ApplicationContext`. |

## The `ResourcePatternResolver` Interface

The `ResourcePatternResolver` interface is an extension to the `ResourceLoader` interface which defines a strategy for resolving a location pattern (for example, an Ant-style path pattern) into `Resource` objects.

```
public interface ResourcePatternResolver extends ResourceLoader {

    String CLASSPATH_ALL_URL_PREFIX = "classpath*:";

    Resource[] getResources(String locationPattern) throws IOException;
}
```

As can be seen above, this interface also defines a special `classpath*:` resource prefix for all matching resources from the class path. Note that the resource location is expected to be a path without placeholders in this case — for example, `classpath*:/config/beans.xml`. JAR files or different directories in the class path can contain multiple files with the same path and the same name. See Wildcards in Application Context Constructor Resource Paths and its subsections for further details on wildcard support with the `classpath*:` resource prefix.

A passed-in `ResourceLoader` (for example, one supplied via `ResourceLoaderAware` semantics) can be checked whether it implements this extended interface too.

`PathMatchingResourcePatternResolver` is a standalone implementation that is usable outside an `ApplicationContext` and is also used by `ResourceArrayPropertyEditor` for populating `Resource[]` bean properties. `PathMatchingResourcePatternResolver` is able to resolve a specified resource location path into one or more matching `Resource` objects. The source path may be a simple path which has a one-to-one mapping to a target `Resource` or, alternatively, may contain the special `classpath*:` prefix and/or internal Ant-style regular expressions (matched using Spring’s `org.springframework.util.AntPathMatcher` utility). Both of the latter are effectively wildcards.

## The `ResourceLoaderAware` Interface

The `ResourceLoaderAware` interface is a special callback interface which identifies components that expect to be provided a `ResourceLoader` reference. The following listing shows the definition of the `ResourceLoaderAware` interface:

```
public interface ResourceLoaderAware {

    void setResourceLoader(ResourceLoader resourceLoader);
}
```

When a class implements `ResourceLoaderAware` and is deployed into an application context (as a Spring-managed bean), it is recognized as `ResourceLoaderAware` by the application context. The application context then invokes `setResourceLoader(ResourceLoader)`, supplying itself as the argument (remember, all application contexts in Spring implement the `ResourceLoader` interface).

Since an `ApplicationContext` is a `ResourceLoader`, the bean could also implement the `ApplicationContextAware` interface and use the supplied application context directly to load resources. However, in general, it is better to use the specialized `ResourceLoader` interface if that is all you need. The code would be coupled only to the resource loading interface (which can be considered a utility interface) and not to the whole Spring `ApplicationContext` interface.

In application components, you may also rely upon autowiring of the `ResourceLoader` as an alternative to implementing the `ResourceLoaderAware` interface, as the following sketch shows.
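A bean can declare a `ResourceLoader` dependency and let the container inject the application context itself. The following minimal sketch assumes annotation-driven configuration; the `TemplateRenderer` name and its `load` method are illustrative only, not from the original text:

```
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;
import org.springframework.stereotype.Component;

// hypothetical component, for illustration only
@Component
public class TemplateRenderer {

    private final ResourceLoader resourceLoader;

    // the application context injects itself as the ResourceLoader
    @Autowired
    public TemplateRenderer(ResourceLoader resourceLoader) {
        this.resourceLoader = resourceLoader;
    }

    public Resource load(String location) {
        // unprefixed locations resolve to a Resource type appropriate to the context
        return this.resourceLoader.getResource(location);
    }
}
```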
The traditional `constructor` and `byType` autowiring modes (as described in Autowiring Collaborators) are capable of providing a `ResourceLoader` for either a constructor argument or a setter method parameter, respectively. For more flexibility (including the ability to autowire fields and multiple parameter methods), consider using the annotation-based autowiring features. In that case, the `ResourceLoader` is autowired into a field, constructor argument, or method parameter that expects the `ResourceLoader` type as long as the field, constructor, or method in question carries the `@Autowired` annotation. For more information, see Using `@Autowired`.

To load one or more `Resource` objects for a resource path that contains wildcards or makes use of the special `classpath*:` resource prefix, consider having an instance of the `ResourcePatternResolver` autowired into your application components instead of the `ResourceLoader`.

## Resources as Dependencies

If the bean itself is going to determine and supply the resource path through some sort of dynamic process, it probably makes sense for the bean to use the `ResourceLoader` or `ResourcePatternResolver` interface to load resources. For example, consider the loading of a template of some sort, where the specific resource that is needed depends on the role of the user. If the resources are static, it makes sense to eliminate the use of the `ResourceLoader` interface (or `ResourcePatternResolver` interface) completely, have the bean expose the `Resource` properties it needs, and expect them to be injected into it.

What makes it trivial to then inject these properties is that all application contexts register and use a special JavaBeans `PropertyEditor`, which can convert `String` paths to `Resource` objects. For example, the following `MyBean` class has a `template` property of type `Resource`.

```
public class MyBean {

    private Resource template;

    public void setTemplate(Resource template) {
        this.template = template;
    }

    // ...
}
```

```
class MyBean(var template: Resource)
```

In an XML configuration file, the `template` property can be configured with a simple string for that resource, as the following example shows:

```
<bean id="myBean" class="example.MyBean">
    <property name="template" value="some/resource/path/myTemplate.txt"/>
</bean>
```

Note that the resource path has no prefix. Consequently, because the application context itself is going to be used as the `ResourceLoader`, the resource is loaded through a `ClassPathResource`, a `FileSystemResource`, or a `ServletContextResource`, depending on the exact type of the application context.

If you need to force a specific `Resource` type to be used, you can use a prefix. The following two examples show how to force a `ClassPathResource` and a `UrlResource` (the latter being used to access a file in the filesystem):

```
<property name="template" value="classpath:some/resource/path/myTemplate.txt"/>
```

```
<property name="template" value="file:///some/resource/path/myTemplate.txt"/>
```

If the `MyBean` class is refactored for use with annotation-driven configuration, the path to `myTemplate.txt` can be stored under a key named `template.path` — for example, in a properties file made available to the Spring `Environment` (see Environment Abstraction). The template path can then be referenced via the `@Value` annotation using a property placeholder (see Using `@Value`). Spring will retrieve the value of the template path as a string, and a special `PropertyEditor` will convert the string to a `Resource` object to be injected into the `MyBean` constructor. The following example demonstrates how to achieve this.
```
@Component
public class MyBean {

    private final Resource template;

    public MyBean(@Value("${template.path}") Resource template) {
        this.template = template;
    }

    // ...
}
```

```
@Component
class MyBean(@Value("\${template.path}") private val template: Resource)
```

If we want to support multiple templates discovered under the same path in multiple locations in the classpath — for example, in multiple jars in the classpath — we can use the special `classpath*:` prefix and wildcarding to define a `templates.path` key as `classpath*:/config/templates/*.txt`. If we redefine the `MyBean` class as follows, Spring will convert the template path pattern into an array of `Resource` objects that can be injected into the `MyBean` constructor.

```
@Component
public class MyBean {

    private final Resource[] templates;

    public MyBean(@Value("${templates.path}") Resource[] templates) {
        this.templates = templates;
    }

    // ...
}
```

```
@Component
class MyBean(@Value("\${templates.path}") private val templates: Array<Resource>)
```

## Application Contexts and Resource Paths

This section covers how to create application contexts with resources, including shortcuts that work with XML, how to use wildcards, and other details.

### Constructing Application Contexts

An application context constructor (for a specific application context type) generally takes a string or array of strings as the location paths of the resources, such as XML files that make up the definition of the context.

When such a location path does not have a prefix, the specific `Resource` type built from that path and used to load the bean definitions depends on and is appropriate to the specific application context. For example, consider the following example, which creates a `ClassPathXmlApplicationContext`:

```
ApplicationContext ctx = new ClassPathXmlApplicationContext("conf/appContext.xml");
```

```
val ctx = ClassPathXmlApplicationContext("conf/appContext.xml")
```

The bean definitions are loaded from the classpath, because a `ClassPathResource` is used. However, consider the following example, which creates a `FileSystemXmlApplicationContext`:

```
val ctx = FileSystemXmlApplicationContext("conf/appContext.xml")
```

Now the bean definitions are loaded from a filesystem location (in this case, relative to the current working directory).

Note that the use of the special `classpath` prefix or a standard URL prefix on the location path overrides the default type of `Resource` created to load the bean definitions. Consider the following example:

```
ApplicationContext ctx = new FileSystemXmlApplicationContext("classpath:conf/appContext.xml");
```

```
val ctx = FileSystemXmlApplicationContext("classpath:conf/appContext.xml")
```

Using `FileSystemXmlApplicationContext` with the `classpath:` prefix loads the bean definitions from the classpath. However, it is still a `FileSystemXmlApplicationContext`. If it is subsequently used as a `ResourceLoader`, any unprefixed paths are still treated as filesystem paths.

# Constructing `ClassPathXmlApplicationContext` Instances — Shortcuts

The `ClassPathXmlApplicationContext` exposes a number of constructors to enable convenient instantiation. The basic idea is that you can supply merely a string array that contains only the filenames of the XML files themselves (without the leading path information) and also supply a `Class`. The `ClassPathXmlApplicationContext` then derives the path information from the supplied class.
Consider the following directory layout:

> com/
>   example/
>     services.xml
>     repositories.xml
>     MessengerService.class

The following example shows how a `ClassPathXmlApplicationContext` instance composed of the beans defined in files named `services.xml` and `repositories.xml` (which are on the classpath) can be instantiated:

```
ApplicationContext ctx = new ClassPathXmlApplicationContext(
    new String[] {"services.xml", "repositories.xml"}, MessengerService.class);
```

```
val ctx = ClassPathXmlApplicationContext(arrayOf("services.xml", "repositories.xml"), MessengerService::class.java)
```

See the `ClassPathXmlApplicationContext` javadoc for details on the various constructors.

### Wildcards in Application Context Constructor Resource Paths

The resource paths in application context constructor values may be simple paths (as shown earlier), each of which has a one-to-one mapping to a target `Resource` or, alternately, may contain the special `classpath*:` prefix or internal Ant-style patterns (matched by using Spring’s `PathMatcher` utility). Both of the latter are effectively wildcards.

One use for this mechanism is when you need to do component-style application assembly. All components can publish context definition fragments to a well-known location path, and, when the final application context is created using the same path prefixed with `classpath*:`, all component fragments are automatically picked up.

Note that this wildcarding is specific to the use of resource paths in application context constructors (or when you use the `PathMatcher` utility class hierarchy directly) and is resolved at construction time. It has nothing to do with the `Resource` type itself. You cannot use the `classpath*:` prefix to construct an actual `Resource`, as a resource points to just one resource at a time.

# Ant-style Patterns

Path locations can contain Ant-style patterns, as the following example shows:

> /WEB-INF/*-context.xml
> com/mycompany/**/applicationContext.xml
> file:C:/some/path/*-context.xml
> classpath:com/mycompany/**/applicationContext.xml

When the path location contains an Ant-style pattern, the resolver follows a more complex procedure to try to resolve the wildcard. It produces a `Resource` for the path up to the last non-wildcard segment and obtains a URL from it. If this URL is not a `jar:` URL or container-specific variant (such as `zip:` in WebLogic, `wsjar` in WebSphere, and so on), a `java.io.File` is obtained from it and used to resolve the wildcard by traversing the filesystem. In the case of a jar URL, the resolver either gets a `java.net.JarURLConnection` from it or manually parses the jar URL and then traverses the contents of the jar file to resolve the wildcards.

# Implications on Portability

If the specified path is already a `file` URL (either implicitly because the base `ResourceLoader` is a filesystem one or explicitly), wildcarding is guaranteed to work in a completely portable fashion.

If the specified path is a `classpath` location, the resolver must obtain the last non-wildcard path segment URL by making a `ClassLoader.getResource()` call. Since this is just a node of the path (not the file at the end), it is actually undefined (in the `ClassLoader` javadoc) exactly what sort of a URL is returned in this case. In practice, it is always a `java.io.File` representing the directory (where the classpath resource resolves to a filesystem location) or a jar URL of some sort (where the classpath resource resolves to a jar location). Still, there is a portability concern on this operation.
If a jar URL is obtained for the last non-wildcard segment, the resolver must be able to get a `java.net.JarURLConnection` from it or manually parse the jar URL, to be able to walk the contents of the jar and resolve the wildcard. This does work in most environments but fails in others, and we strongly recommend that the wildcard resolution of resources coming from jars be thoroughly tested in your specific environment before you rely on it.

# The `classpath*:` Prefix

When constructing an XML-based application context, a location string may use the special `classpath*:` prefix, as the following example shows:

```
ApplicationContext ctx = new ClassPathXmlApplicationContext("classpath*:conf/appContext.xml");
```

```
val ctx = ClassPathXmlApplicationContext("classpath*:conf/appContext.xml")
```

This special prefix specifies that all classpath resources that match the given name must be obtained (internally, this essentially happens through a call to `ClassLoader.getResources(…)`) and then merged to form the final application context definition.

The wildcard classpath relies on the `getResources()` method of the underlying `ClassLoader`. As most application servers nowadays supply their own `ClassLoader` implementation, the behavior might differ, especially when dealing with jar files.

You can also combine the `classpath*:` prefix with a `PathMatcher` pattern in the rest of the location path (for example, `classpath*:META-INF/*-beans.xml`). In this case, the resolution strategy is fairly simple: A `ClassLoader.getResources()` call is used on the last non-wildcard path segment to get all the matching resources in the class loader hierarchy and then, off each resource, the same `PathMatcher` resolution strategy described earlier is used for the wildcard subpath.

# Other Notes Relating to Wildcards

Note that `classpath*:`, when combined with Ant-style patterns, only works reliably with at least one root directory before the pattern starts, unless the actual target files reside in the file system. This means that a pattern such as `classpath*:*.xml` might not retrieve files from the root of jar files but rather only from the root of expanded directories.

Spring’s ability to retrieve classpath entries originates from the JDK’s `ClassLoader.getResources()` method, which only returns file system locations for an empty string (indicating potential roots to search). Spring evaluates `URLClassLoader` runtime configuration and the `java.class.path` manifest in jar files as well, but this is not guaranteed to lead to portable behavior.

Ant-style patterns with `classpath:` resources are not guaranteed to find matching resources if the root package to search is available in multiple classpath locations. Consider the following example of a resource location:

> com/mycompany/package1/service-context.xml

Now consider an Ant-style path that someone might use to try to find that file:

> classpath:com/mycompany/**/service-context.xml

Such a resource may exist in only one location in the classpath, but when a path such as the preceding example is used to try to resolve it, the resolver works off the (first) URL returned by `getResource("com/mycompany")`. If this base package node exists in multiple `ClassLoader` locations, the desired resource may not exist in the first location found. Therefore, in such cases you should prefer using `classpath*:` with the same Ant-style pattern, which searches all classpath locations that contain the `com.mycompany` base package: `classpath*:com/mycompany/**/service-context.xml`.

# `FileSystemResource` Caveats

A `FileSystemResource` that is not attached to a `FileSystemApplicationContext` (that is, when a `FileSystemApplicationContext` is not the actual `ResourceLoader`) treats absolute and relative paths as you would expect.
Relative paths are relative to the current working directory, while absolute paths are relative to the root of the filesystem.

For backwards compatibility (historical) reasons, however, this changes when the `FileSystemApplicationContext` is the `ResourceLoader`. The `FileSystemApplicationContext` forces all attached `FileSystemResource` instances to treat all location paths as relative, whether they start with a leading slash or not. In practice, this means the following examples are equivalent:

```
ApplicationContext ctx = new FileSystemXmlApplicationContext("conf/context.xml");
```

```
val ctx = FileSystemXmlApplicationContext("conf/context.xml")
```

```
val ctx = FileSystemXmlApplicationContext("/conf/context.xml")
```

The following examples are also equivalent (even though it would make sense for them to be different, as one case is relative and the other absolute):

```
val ctx: FileSystemXmlApplicationContext = ...
ctx.getResource("some/resource/path/myTemplate.txt")
```

```
val ctx: FileSystemXmlApplicationContext = ...
ctx.getResource("/some/resource/path/myTemplate.txt")
```

In practice, if you need true absolute filesystem paths, you should avoid using absolute paths with `FileSystemResource` or `FileSystemXmlApplicationContext` and force the use of a `UrlResource` by using the `file:` URL prefix. The following examples show how to do so:

```
// force this FileSystemXmlApplicationContext to load its definition via a UrlResource
ApplicationContext ctx = new FileSystemXmlApplicationContext("file:///conf/context.xml");
```

```
// force this FileSystemXmlApplicationContext to load its definition via a UrlResource
val ctx = FileSystemXmlApplicationContext("file:///conf/context.xml")
```

# Validation, Data Binding, and Type Conversion

There are pros and cons for considering validation as business logic, and Spring offers a design for validation and data binding that does not exclude either one of them. Specifically, validation should not be tied to the web tier and should be easy to localize, and it should be possible to plug in any available validator. Considering these concerns, Spring provides a `Validator` contract that is both basic and eminently usable in every layer of an application.

Data binding is useful for letting user input be dynamically bound to the domain model of an application (or whatever objects you use to process user input). Spring provides the aptly named `DataBinder` to do exactly that. The `Validator` and the `DataBinder` make up the `validation` package, which is primarily used in but not limited to the web layer.

The `BeanWrapper` is a fundamental concept in the Spring Framework and is used in a lot of places. However, you probably do not need to use the `BeanWrapper` directly. Because this is reference documentation, however, we feel that some explanation might be in order. We explain the `BeanWrapper` in this chapter, since, if you are going to use it at all, you are most likely to do so when trying to bind data to objects.

Spring’s `DataBinder` and the lower-level `BeanWrapper` both use `PropertyEditorSupport` implementations to parse and format property values. The `PropertyEditor` and `PropertyEditorSupport` types are part of the JavaBeans specification and are also explained in this chapter. Spring’s `core.convert` package provides a general type conversion facility, as well as a higher-level `format` package for formatting UI field values. You can use these packages as simpler alternatives to `PropertyEditorSupport` implementations. They are also discussed in this chapter.

Spring supports Java Bean Validation through setup infrastructure and an adaptor to Spring’s own `Validator` contract.
Applications can enable Bean Validation once globally, as described in Java Bean Validation, and use it exclusively for all validation needs. In the web layer, applications can further register controller-local Spring `Validator` instances per `DataBinder`, as described in Configuring a `DataBinder`, which can be useful for plugging in custom validation logic.

## Validation by Using Spring's Validator Interface

Spring features a `Validator` interface that you can use to validate objects. The `Validator` interface works by using an `Errors` object so that, while validating, validators can report validation failures to the `Errors` object.

Consider the following example of a small data object:

```
public class Person {

    private String name;
    private int age;

    // the usual getters and setters...
}
```

The next example provides validation behavior for the `Person` class by implementing the following two methods of the `Validator` interface:

* `supports(Class)`: Can this `Validator` validate instances of the supplied `Class`?
* `validate(Object, org.springframework.validation.Errors)`: Validates the given object and, in case of validation errors, registers those with the given `Errors` object.

Implementing a `Validator` is fairly straightforward, especially when you know of the `ValidationUtils` helper class that the Spring Framework also provides. The following example implements `Validator` for `Person` instances:

```
public class PersonValidator implements Validator {

    /**
     * This Validator validates only Person instances
     */
    public boolean supports(Class clazz) {
        return Person.class.equals(clazz);
    }

    public void validate(Object obj, Errors e) {
        ValidationUtils.rejectIfEmpty(e, "name", "name.empty");
        Person p = (Person) obj;
        if (p.getAge() < 0) {
            e.rejectValue("age", "negativevalue");
        } else if (p.getAge() > 110) {
            e.rejectValue("age", "too.darn.old");
        }
    }
}
```

```
class PersonValidator : Validator {

    /**
     * This Validator validates only Person instances
     */
    override fun supports(clazz: Class<*>): Boolean {
        return Person::class.java == clazz
    }

    override fun validate(obj: Any, e: Errors) {
        ValidationUtils.rejectIfEmpty(e, "name", "name.empty")
        val p = obj as Person
        if (p.age < 0) {
            e.rejectValue("age", "negativevalue")
        } else if (p.age > 110) {
            e.rejectValue("age", "too.darn.old")
        }
    }
}
```

The `static` `rejectIfEmpty(..)` method on the `ValidationUtils` class is used to reject the `name` property if it is `null` or the empty string. Have a look at the `ValidationUtils` javadoc to see what functionality it provides besides the example shown previously.

While it is certainly possible to implement a single `Validator` class to validate each of the nested objects in a rich object, it may be better to encapsulate the validation logic for each nested class of object in its own `Validator` implementation. A simple example of a “rich” object would be a `Customer` that is composed of two `String` properties (a first and a second name) and a complex `Address` object. `Address` objects may be used independently of `Customer` objects, so a distinct `AddressValidator` has been implemented.
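The `AddressValidator` itself is not shown in this excerpt. A minimal sketch, assuming a hypothetical `Address` class with a mandatory `city` property, might look like this:

```
import org.springframework.validation.Errors;
import org.springframework.validation.ValidationUtils;
import org.springframework.validation.Validator;

public class AddressValidator implements Validator {

    /**
     * This Validator validates only Address instances.
     */
    public boolean supports(Class<?> clazz) {
        return Address.class.equals(clazz);
    }

    public void validate(Object obj, Errors e) {
        // reject an empty mandatory field; the error code is later resolved to a message
        ValidationUtils.rejectIfEmptyOrWhitespace(e, "city", "city.empty");
    }
}
```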
If you want your `CustomerValidator` to reuse the logic contained within the `AddressValidator` class without resorting to copy-and-paste, you can dependency-inject or instantiate an `AddressValidator` within your `CustomerValidator`, as the following example shows:

```
public class CustomerValidator implements Validator {

    private final Validator addressValidator;

    public CustomerValidator(Validator addressValidator) {
        if (addressValidator == null) {
            throw new IllegalArgumentException("The supplied [Validator] is " +
                "required and must not be null.");
        }
        if (!addressValidator.supports(Address.class)) {
            throw new IllegalArgumentException("The supplied [Validator] must " +
                "support the validation of [Address] instances.");
        }
        this.addressValidator = addressValidator;
    }

    /**
     * This Validator validates Customer instances, and any subclasses of Customer too
     */
    public boolean supports(Class clazz) {
        return Customer.class.isAssignableFrom(clazz);
    }

    public void validate(Object target, Errors errors) {
        ValidationUtils.rejectIfEmptyOrWhitespace(errors, "firstName", "field.required");
        ValidationUtils.rejectIfEmptyOrWhitespace(errors, "surname", "field.required");
        Customer customer = (Customer) target;
        try {
            errors.pushNestedPath("address");
            ValidationUtils.invokeValidator(this.addressValidator, customer.getAddress(), errors);
        } finally {
            errors.popNestedPath();
        }
    }
}
```

```
class CustomerValidator(private val addressValidator: Validator) : Validator {

    init {
        if (!addressValidator.supports(Address::class.java)) {
            throw IllegalArgumentException("The supplied [Validator] must support the validation of [Address] instances.")
        }
    }

    /*
     * This Validator validates Customer instances, and any subclasses of Customer too
     */
    override fun supports(clazz: Class<*>): Boolean {
        return Customer::class.java.isAssignableFrom(clazz)
    }

    override fun validate(target: Any, errors: Errors) {
        ValidationUtils.rejectIfEmptyOrWhitespace(errors, "firstName", "field.required")
        ValidationUtils.rejectIfEmptyOrWhitespace(errors, "surname", "field.required")
        val customer = target as Customer
        try {
            errors.pushNestedPath("address")
            ValidationUtils.invokeValidator(this.addressValidator, customer.address, errors)
        } finally {
            errors.popNestedPath()
        }
    }
}
```

Validation errors are reported to the `Errors` object passed to the validator. In the case of Spring Web MVC, you can use the `<spring:bind/>` tag to inspect the error messages, but you can also inspect the `Errors` object yourself. More information about the methods it offers can be found in the `Errors` javadoc.

## Resolving Codes to Error Messages

We covered data binding and validation. This section covers outputting messages that correspond to validation errors. In the example shown in the preceding section, we rejected the `name` and `age` fields. If we want to output the error messages by using a `MessageSource`, we can do so using the error code we provide when rejecting the field ('name' and 'age' in this case). When you call (either directly, or indirectly, by using, for example, the `ValidationUtils` class) `rejectValue` or one of the other `reject` methods from the `Errors` interface, the underlying implementation not only registers the code you passed in but also registers a number of additional error codes. The `MessageCodesResolver` determines which error codes the `Errors` interface registers.
By default, the `DefaultMessageCodesResolver` is used, which (for example) not only registers a message with the code you gave but also registers messages that include the field name you passed to the reject method. So, if you reject a field by using `rejectValue("age", "too.darn.old")`, apart from the `too.darn.old` code, Spring also registers `too.darn.old.age` and `too.darn.old.age.int` (the first includes the field name and the second includes the type of the field). This is done as a convenience to aid developers when targeting error messages.

More information on the `MessageCodesResolver` and the default strategy can be found in the javadoc of `MessageCodesResolver` and `DefaultMessageCodesResolver`.

# Spring Type Conversion

The `core.convert` package provides a general type conversion system. The system defines an SPI to implement type conversion logic and an API to perform type conversions at runtime. Within a Spring container, you can use this system as an alternative to `PropertyEditor` implementations to convert externalized bean property value strings to the required property types. You can also use the public API anywhere in your application where type conversion is needed.

## Converter SPI

The SPI to implement type conversion logic is simple and strongly typed, as the following interface definition shows:

```
public interface Converter<S, T> {

    T convert(S source);
}
```

To create your own converter, implement the `Converter` interface and parameterize `S` as the type you are converting from and `T` as the type you are converting to. You can also transparently apply such a converter if a collection or array of `S` needs to be converted to an array or collection of `T`, provided that a delegating array or collection converter has been registered as well (which `DefaultConversionService` does by default).

For each call to `convert(S)`, the source argument is guaranteed to not be null. Your `Converter` may throw any unchecked exception if conversion fails. Specifically, it should throw an `IllegalArgumentException` to report an invalid source value. Take care to ensure that your `Converter` implementation is thread-safe.

Several converter implementations are provided in the `core.convert.support` package as a convenience. These include converters from strings to numbers and other common types. The following listing shows the `StringToInteger` class, which is a typical `Converter` implementation:

```
final class StringToInteger implements Converter<String, Integer> {

    public Integer convert(String source) {
        return Integer.valueOf(source);
    }
}
```

## Using `ConverterFactory`

When you need to centralize the conversion logic for an entire class hierarchy (for example, when converting from `String` to `Enum` objects), you can implement `ConverterFactory`, as the following example shows:

```
public interface ConverterFactory<S, R> {

    <T extends R> Converter<S, T> getConverter(Class<T> targetType);
}
```

Parameterize `S` to be the type you are converting from and `R` to be the base type defining the range of classes you can convert to. Then implement `getConverter(Class<T>)`, where `T` is a subclass of `R`.
Consider the `StringToEnumConverterFactory` as an example:

```
final class StringToEnumConverterFactory implements ConverterFactory<String, Enum> {

    public <T extends Enum> Converter<String, T> getConverter(Class<T> targetType) {
        return new StringToEnumConverter(targetType);
    }

    private final class StringToEnumConverter<T extends Enum> implements Converter<String, T> {

        private Class<T> enumType;

        public StringToEnumConverter(Class<T> enumType) {
            this.enumType = enumType;
        }

        public T convert(String source) {
            return (T) Enum.valueOf(this.enumType, source.trim());
        }
    }
}
```

## Using `GenericConverter`

When you require a sophisticated `Converter` implementation, consider using the `GenericConverter` interface. With a more flexible but less strongly typed signature than `Converter`, a `GenericConverter` supports converting between multiple source and target types. In addition, a `GenericConverter` makes available source and target field context that you can use when you implement your conversion logic. Such context lets a type conversion be driven by a field annotation or by generic information declared on a field signature. The following listing shows the interface definition of `GenericConverter`:

```
public interface GenericConverter {

    public Set<ConvertiblePair> getConvertibleTypes();

    Object convert(Object source, TypeDescriptor sourceType, TypeDescriptor targetType);
}
```

To implement a `GenericConverter`, have `getConvertibleTypes()` return the supported source→target type pairs. Then implement `convert(Object, TypeDescriptor, TypeDescriptor)` to contain your conversion logic. The source `TypeDescriptor` provides access to the source field that holds the value being converted. The target `TypeDescriptor` provides access to the target field where the converted value is to be set.

A good example of a `GenericConverter` is a converter that converts between a Java array and a collection. Such an `ArrayToCollectionConverter` introspects the field that declares the target collection type to resolve the collection’s element type. This lets each element in the source array be converted to the collection element type before the collection is set on the target field.

Because `GenericConverter` is a more complex SPI interface, you should use it only when you need it. Favor `Converter` or `ConverterFactory` for basic type conversion needs.

### Using `ConditionalGenericConverter`

Sometimes, you want a `Converter` to run only if a specific condition holds true. For example, you might want to run a `Converter` only if a specific annotation is present on the target field, or you might want to run a `Converter` only if a specific method (such as a `static valueOf` method) is defined on the target class. `ConditionalGenericConverter` is the union of the `GenericConverter` and `ConditionalConverter` interfaces that lets you define such custom matching criteria:

```
public interface ConditionalConverter {

    boolean matches(TypeDescriptor sourceType, TypeDescriptor targetType);
}

public interface ConditionalGenericConverter extends GenericConverter, ConditionalConverter {
}
```

A good example of a `ConditionalGenericConverter` is an `IdToEntityConverter` that converts between a persistent entity identifier and an entity reference. Such an `IdToEntityConverter` might match only if the target entity type declares a static finder method (for example, `findAccount(Long)`). You might perform such a finder method check in the implementation of `matches(TypeDescriptor, TypeDescriptor)`.

## The `ConversionService` API

`ConversionService` defines a unified API for executing type conversion logic at runtime.
## The `ConversionService` API

`ConversionService` defines a unified API for executing type conversion logic at runtime. Converters are often run behind the following facade interface:

```
package org.springframework.core.convert;

public interface ConversionService {

	boolean canConvert(Class<?> sourceType, Class<?> targetType);

	<T> T convert(Object source, Class<T> targetType);

	boolean canConvert(TypeDescriptor sourceType, TypeDescriptor targetType);

	Object convert(Object source, TypeDescriptor sourceType, TypeDescriptor targetType);
}
```

Most `ConversionService` implementations also implement `ConverterRegistry`, which provides an SPI for registering converters. Internally, a `ConversionService` implementation delegates to its registered converters to carry out type conversion logic.

A robust `ConversionService` implementation is provided in the `core.convert.support` package. `GenericConversionService` is the general-purpose implementation suitable for use in most environments. `ConversionServiceFactory` provides a convenient factory for creating common `ConversionService` configurations.

## Configuring a `ConversionService`

A `ConversionService` is a stateless object designed to be instantiated at application startup and then shared between multiple threads. In a Spring application, you typically configure a `ConversionService` instance for each Spring container (or `ApplicationContext`). Spring picks up that `ConversionService` and uses it whenever a type conversion needs to be performed by the framework. You can also inject this `ConversionService` into any of your beans and invoke it directly.

If no `ConversionService` is registered with Spring, the original `PropertyEditor`-based system is used.

To register a default `ConversionService` with Spring, add the following bean definition with an `id` of `conversionService`:

```
<bean id="conversionService"
	class="org.springframework.context.support.ConversionServiceFactoryBean"/>
```

A default `ConversionService` can convert between strings, numbers, enums, collections, maps, and other common types. To supplement or override the default converters with your own custom converters, set the `converters` property. Property values can implement any of the `Converter`, `ConverterFactory`, or `GenericConverter` interfaces.

```
<bean id="conversionService" class="org.springframework.context.support.ConversionServiceFactoryBean">
	<property name="converters">
		<set>
			<bean class="example.MyCustomConverter"/>
		</set>
	</property>
</bean>
```

It is also common to use a `ConversionService` within a Spring MVC application. See Conversion and Formatting in the Spring MVC chapter.

In certain situations, you may wish to apply formatting during conversion. See The `FormatterRegistry` SPI for details on using `FormattingConversionService`.

## Using a `ConversionService` Programmatically

To work with a `ConversionService` instance programmatically, you can inject a reference to it like you would for any other bean. The following example shows how to do so:

```
@Service
public class MyService {

	private final ConversionService conversionService;

	public MyService(ConversionService conversionService) {
		this.conversionService = conversionService;
	}

	public void doIt() {
		this.conversionService.convert(...)
	}
}
```

```
@Service
class MyService(private val conversionService: ConversionService) {

	fun doIt() {
		conversionService.convert(...)
	}
}
```

For most use cases, you can use the `convert` method that specifies the `targetType`, but it does not work with more complex types, such as a collection of a parameterized element. For example, if you want to convert a `List` of `Integer` to a `List` of `String` programmatically, you need to provide a formal definition of the source and target types.
Fortunately, `TypeDescriptor` provides various options to make doing so straightforward, as the following example shows:

```
DefaultConversionService cs = new DefaultConversionService();

List<Integer> input = ...
cs.convert(input,
	TypeDescriptor.forObject(input), // List<Integer> type descriptor
	TypeDescriptor.collection(List.class, TypeDescriptor.valueOf(String.class)));
```

```
val cs = DefaultConversionService()

val input: List<Integer> = ...
cs.convert(input,
	TypeDescriptor.forObject(input), // List<Integer> type descriptor
	TypeDescriptor.collection(List::class.java, TypeDescriptor.valueOf(String::class.java)))
```

Note that `DefaultConversionService` automatically registers converters that are appropriate for most environments. This includes collection converters, scalar converters, and basic `Object`-to-`String` converters. You can register the same converters with any `ConverterRegistry` by using the static `addDefaultConverters` method on the `DefaultConversionService` class.

Converters for value types are reused for arrays and collections, so there is no need to create a specific converter to convert from a `Collection` of `S` to a `Collection` of `T`, assuming that standard collection handling is appropriate.

# Spring Field Formatting

As discussed in the previous section, `core.convert` is a general-purpose type conversion system. It provides a unified `ConversionService` API as well as a strongly typed `Converter` SPI for implementing conversion logic from one type to another. A Spring container uses this system to bind bean property values. In addition, both the Spring Expression Language (SpEL) and `DataBinder` use this system to bind field values. For example, when SpEL needs to coerce a `Short` to a `Long` to complete an `expression.setValue(Object bean, Object value)` attempt, the `core.convert` system performs the coercion.

Now consider the type conversion requirements of a typical client environment, such as a web or desktop application. In such environments, you typically convert from `String` to support the client postback process, as well as back to `String` to support the view rendering process. In addition, you often need to localize `String` values. The more general `core.convert` `Converter` SPI does not address such formatting requirements directly. To directly address them, Spring provides a convenient `Formatter` SPI that provides a simple and robust alternative to `PropertyEditor` implementations for client environments.

In general, you can use the `Converter` SPI when you need to implement general-purpose type conversion logic — for example, for converting between a `java.util.Date` and a `Long`. You can use the `Formatter` SPI when you work in a client environment (such as a web application) and need to parse and print localized field values. The `ConversionService` provides a unified type conversion API for both SPIs.

## The `Formatter` SPI

The `Formatter` SPI to implement field formatting logic is simple and strongly typed. The following listing shows the `Formatter` interface definition:

```
package org.springframework.format;

public interface Formatter<T> extends Printer<T>, Parser<T> {
}
```

`Formatter` extends from the `Printer` and `Parser` building-block interfaces. The following listing shows the definitions of those two interfaces:

```
public interface Printer<T> {

	String print(T fieldValue, Locale locale);
}
```

```
import java.text.ParseException;

public interface Parser<T> {

	T parse(String clientValue, Locale locale) throws ParseException;
}
```

To create your own `Formatter`, implement the `Formatter` interface shown earlier.
Parameterize `T` to be the type of object you wish to format — for example, `java.util.Date`. Implement the `print()` operation to print an instance of `T` for display in the client locale. Implement the `parse()` operation to parse an instance of `T` from the formatted representation returned from the client locale. Your `Formatter` should throw a `ParseException` or an `IllegalArgumentException` if a parse attempt fails. Take care to ensure that your `Formatter` implementation is thread-safe.

The `format` subpackages provide several `Formatter` implementations as a convenience. The `number` package provides `NumberStyleFormatter`, `CurrencyStyleFormatter`, and `PercentStyleFormatter` to format `Number` objects that use a `java.text.NumberFormat`. The `datetime` package provides a `DateFormatter` to format `java.util.Date` objects with a `java.text.DateFormat`. The following `DateFormatter` is an example `Formatter` implementation:

```
package org.springframework.format.datetime;

public final class DateFormatter implements Formatter<Date> {

	private String pattern;

	public DateFormatter(String pattern) {
		this.pattern = pattern;
	}

	public String print(Date date, Locale locale) {
		if (date == null) {
			return "";
		}
		return getDateFormat(locale).format(date);
	}

	public Date parse(String formatted, Locale locale) throws ParseException {
		if (formatted.length() == 0) {
			return null;
		}
		return getDateFormat(locale).parse(formatted);
	}

	protected DateFormat getDateFormat(Locale locale) {
		DateFormat dateFormat = new SimpleDateFormat(this.pattern, locale);
		dateFormat.setLenient(false);
		return dateFormat;
	}
}
```

```
class DateFormatter(private val pattern: String) : Formatter<Date> {

	override fun print(date: Date, locale: Locale)
			= getDateFormat(locale).format(date)

	@Throws(ParseException::class)
	override fun parse(formatted: String, locale: Locale)
			= getDateFormat(locale).parse(formatted)

	protected fun getDateFormat(locale: Locale): DateFormat {
		val dateFormat = SimpleDateFormat(this.pattern, locale)
		dateFormat.isLenient = false
		return dateFormat
	}
}
```

The Spring team welcomes community-driven `Formatter` contributions. See GitHub Issues to contribute.

## Annotation-driven Formatting

Field formatting can be configured by field type or annotation. To bind an annotation to a `Formatter`, implement `AnnotationFormatterFactory`. The following listing shows the definition of the `AnnotationFormatterFactory` interface:

```
package org.springframework.format;

public interface AnnotationFormatterFactory<A extends Annotation> {

	Set<Class<?>> getFieldTypes();

	Printer<?> getPrinter(A annotation, Class<?> fieldType);

	Parser<?> getParser(A annotation, Class<?> fieldType);
}
```

To create an implementation:

* Parameterize `A` to be the field `annotationType` with which you wish to associate formatting logic — for example, `org.springframework.format.annotation.DateTimeFormat`.
* Have `getFieldTypes()` return the types of fields on which the annotation can be used.
* Have `getPrinter()` return a `Printer` to print the value of an annotated field.
* Have `getParser()` return a `Parser` to parse a `clientValue` for an annotated field.
The following example implementation binds the `@NumberFormat` annotation to a formatter to let a number style or pattern be specified:

```
public final class NumberFormatAnnotationFormatterFactory
		implements AnnotationFormatterFactory<NumberFormat> {

	private static final Set<Class<?>> FIELD_TYPES = Set.of(Short.class, Integer.class,
			Long.class, Float.class, Double.class, BigDecimal.class, BigInteger.class);

	public Set<Class<?>> getFieldTypes() {
		return FIELD_TYPES;
	}

	public Printer<Number> getPrinter(NumberFormat annotation, Class<?> fieldType) {
		return configureFormatterFrom(annotation, fieldType);
	}

	public Parser<Number> getParser(NumberFormat annotation, Class<?> fieldType) {
		return configureFormatterFrom(annotation, fieldType);
	}

	private Formatter<Number> configureFormatterFrom(NumberFormat annotation, Class<?> fieldType) {
		if (!annotation.pattern().isEmpty()) {
			return new NumberStyleFormatter(annotation.pattern());
		}
		// else
		return switch(annotation.style()) {
			case Style.PERCENT -> new PercentStyleFormatter();
			case Style.CURRENCY -> new CurrencyStyleFormatter();
			default -> new NumberStyleFormatter();
		};
	}
}
```

```
class NumberFormatAnnotationFormatterFactory : AnnotationFormatterFactory<NumberFormat> {

	override fun getFieldTypes(): Set<Class<*>> {
		return setOf(Short::class.java, Int::class.java, Long::class.java, Float::class.java,
				Double::class.java, BigDecimal::class.java, BigInteger::class.java)
	}

	override fun getPrinter(annotation: NumberFormat, fieldType: Class<*>): Printer<Number> {
		return configureFormatterFrom(annotation, fieldType)
	}

	override fun getParser(annotation: NumberFormat, fieldType: Class<*>): Parser<Number> {
		return configureFormatterFrom(annotation, fieldType)
	}

	private fun configureFormatterFrom(annotation: NumberFormat, fieldType: Class<*>): Formatter<Number> {
		return if (annotation.pattern.isNotEmpty()) {
			NumberStyleFormatter(annotation.pattern)
		} else {
			val style = annotation.style
			when {
				style === NumberFormat.Style.PERCENT -> PercentStyleFormatter()
				style === NumberFormat.Style.CURRENCY -> CurrencyStyleFormatter()
				else -> NumberStyleFormatter()
			}
		}
	}
}
```

To trigger formatting, you can annotate fields with `@NumberFormat`, as the following example shows:

```
public class MyModel {

	@NumberFormat(style=Style.CURRENCY)
	private BigDecimal decimal;
}
```

```
class MyModel(
	@field:NumberFormat(style = Style.CURRENCY) private val decimal: BigDecimal
)
```

### Format Annotation API

A portable format annotation API exists in the `org.springframework.format.annotation` package. You can use `@NumberFormat` to format `Number` fields such as `Double` and `Long`, and `@DateTimeFormat` to format `java.util.Date`, `java.util.Calendar`, and `Long` (for millisecond timestamps) as well as JSR-310 `java.time` types.

The following example uses `@DateTimeFormat` to format a `java.util.Date` as an ISO Date (yyyy-MM-dd):

```
public class MyModel {

	@DateTimeFormat(iso=ISO.DATE)
	private Date date;
}
```

```
class MyModel(
	@DateTimeFormat(iso=ISO.DATE) private val date: Date
)
```

## The `FormatterRegistry` SPI

The `FormatterRegistry` is an SPI for registering formatters and converters. `FormattingConversionService` is an implementation of `FormatterRegistry` suitable for most environments. You can programmatically or declaratively configure this variant as a Spring bean, e.g. by using `FormattingConversionServiceFactoryBean`. Because this implementation also implements `ConversionService`, you can directly configure it for use with Spring's `DataBinder` and the Spring Expression Language (SpEL).
The following listing shows the `FormatterRegistry` SPI:

```
package org.springframework.format;

public interface FormatterRegistry extends ConverterRegistry {

	void addPrinter(Printer<?> printer);

	void addParser(Parser<?> parser);

	void addFormatter(Formatter<?> formatter);

	void addFormatterForFieldType(Class<?> fieldType, Formatter<?> formatter);

	void addFormatterForFieldType(Class<?> fieldType, Printer<?> printer, Parser<?> parser);

	void addFormatterForFieldAnnotation(AnnotationFormatterFactory<? extends Annotation> annotationFormatterFactory);
}
```

As shown in the preceding listing, you can register formatters by field type or by annotation. The `FormatterRegistry` SPI lets you configure formatting rules centrally, instead of duplicating such configuration across your controllers. For example, you might want to enforce that all date fields are formatted a certain way or that fields with a specific annotation are formatted in a certain way. With a shared `FormatterRegistry`, you define these rules once, and they are applied whenever formatting is needed.

## The `FormatterRegistrar` SPI

`FormatterRegistrar` is an SPI for registering formatters and converters through the `FormatterRegistry`. The following listing shows its interface definition:

```
package org.springframework.format;

public interface FormatterRegistrar {

	void registerFormatters(FormatterRegistry registry);
}
```

A `FormatterRegistrar` is useful when registering multiple related converters and formatters for a given formatting category, such as date formatting. It can also be useful where declarative registration is insufficient — for example, when a formatter needs to be indexed under a specific field type different from its own `<T>` or when registering a `Printer`/`Parser` pair.
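For illustration, a registrar might bundle a currency formatter for all `BigDecimal` fields; this is a minimal sketch, and the registrar class name is ours, not a Spring class:

```
import java.math.BigDecimal;

import org.springframework.format.FormatterRegistrar;
import org.springframework.format.FormatterRegistry;
import org.springframework.format.number.CurrencyStyleFormatter;

// Groups related registrations so they can be applied to any FormatterRegistry in one call.
public class MoneyFormatterRegistrar implements FormatterRegistrar {

	@Override
	public void registerFormatters(FormatterRegistry registry) {
		// Index a currency formatter under the BigDecimal field type.
		registry.addFormatterForFieldType(BigDecimal.class, new CurrencyStyleFormatter());
	}
}
```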
The next section provides more information on converter and formatter registration.

## Configuring Formatting in Spring MVC

See Conversion and Formatting in the Spring MVC chapter.

## Configuring a Global Date and Time Format

By default, date and time fields not annotated with `@DateTimeFormat` are converted from strings by using the `DateFormat.SHORT` style. If you prefer, you can change this by defining your own global format. To do that, ensure that Spring does not register default formatters. Instead, register formatters manually with the help of:

* `org.springframework.format.datetime.standard.DateTimeFormatterRegistrar`
* `org.springframework.format.datetime.DateFormatterRegistrar`

For example, the following Java configuration registers a global `yyyyMMdd` format:

```
@Configuration
public class AppConfig {

	@Bean
	public FormattingConversionService conversionService() {

		// Use the DefaultFormattingConversionService but do not register defaults
		DefaultFormattingConversionService conversionService =
				new DefaultFormattingConversionService(false);

		// Ensure @NumberFormat is still supported
		conversionService.addFormatterForFieldAnnotation(
				new NumberFormatAnnotationFormatterFactory());

		// Register JSR-310 date conversion with a specific global format
		DateTimeFormatterRegistrar dateTimeRegistrar = new DateTimeFormatterRegistrar();
		dateTimeRegistrar.setDateFormatter(DateTimeFormatter.ofPattern("yyyyMMdd"));
		dateTimeRegistrar.registerFormatters(conversionService);

		// Register date conversion with a specific global format
		DateFormatterRegistrar dateRegistrar = new DateFormatterRegistrar();
		dateRegistrar.setFormatter(new DateFormatter("yyyyMMdd"));
		dateRegistrar.registerFormatters(conversionService);

		return conversionService;
	}
}
```

```
@Configuration
class AppConfig {

	@Bean
	fun conversionService(): FormattingConversionService {
		// Use the DefaultFormattingConversionService but do not register defaults
		return DefaultFormattingConversionService(false).apply {

			// Ensure @NumberFormat is still supported
			addFormatterForFieldAnnotation(NumberFormatAnnotationFormatterFactory())

			// Register JSR-310 date conversion with a specific global format
			val dateTimeRegistrar = DateTimeFormatterRegistrar()
			dateTimeRegistrar.setDateFormatter(DateTimeFormatter.ofPattern("yyyyMMdd"))
			dateTimeRegistrar.registerFormatters(this)

			// Register date conversion with a specific global format
			val dateRegistrar = DateFormatterRegistrar()
			dateRegistrar.setFormatter(DateFormatter("yyyyMMdd"))
			dateRegistrar.registerFormatters(this)
		}
	}
}
```

If you prefer XML-based configuration, you can use a `FormattingConversionServiceFactoryBean`, as the following example shows:

```
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.springframework.org/schema/beans
		https://www.springframework.org/schema/beans/spring-beans.xsd">

	<bean id="conversionService"
			class="org.springframework.format.support.FormattingConversionServiceFactoryBean">
		<property name="registerDefaultFormatters" value="false" />
		<property name="formatters">
			<set>
				<bean class="org.springframework.format.number.NumberFormatAnnotationFormatterFactory" />
			</set>
		</property>
		<property name="formatterRegistrars">
			<set>
				<bean class="org.springframework.format.datetime.standard.DateTimeFormatterRegistrar">
					<property name="dateFormatter">
						<bean class="org.springframework.format.datetime.standard.DateTimeFormatterFactoryBean">
							<property name="pattern" value="yyyyMMdd"/>
						</bean>
					</property>
				</bean>
			</set>
		</property>
	</bean>
</beans>
```

Note there are extra considerations when configuring date and time formats in web applications. Please see WebMVC Conversion and Formatting or WebFlux Conversion and Formatting.

# Java Bean Validation

## Overview of Bean Validation

Bean Validation provides a common way of validation through constraint declaration and metadata for Java applications. To use it, you annotate domain model properties with declarative validation constraints, which are then enforced by the runtime. There are built-in constraints, and you can also define your own custom constraints.
Consider the following example, which shows a simple `PersonForm` model with two properties:

```
public class PersonForm {
	private String name;
	private int age;
}
```

```
class PersonForm(
	private val name: String,
	private val age: Int
)
```

Bean Validation lets you declare constraints, as the following example shows:

```
public class PersonForm {

	@NotNull
	@Size(max=64)
	private String name;

	@Min(0)
	private int age;
}
```

```
class PersonForm(
	@get:NotNull @get:Size(max=64)
	private val name: String,
	@get:Min(0)
	private val age: Int
)
```

A Bean Validation validator then validates instances of this class based on the declared constraints. See Bean Validation for general information about the API. See the Hibernate Validator documentation for specific constraints. To learn how to set up a bean validation provider as a Spring bean, keep reading.

## Configuring a Bean Validation Provider

Spring provides full support for the Bean Validation API, including the bootstrapping of a Bean Validation provider as a Spring bean. This lets you inject a `jakarta.validation.ValidatorFactory` or `jakarta.validation.Validator` wherever validation is needed in your application. You can use the `LocalValidatorFactoryBean` to configure a default Validator as a Spring bean, as the following example shows:

```
import org.springframework.validation.beanvalidation.LocalValidatorFactoryBean;

@Configuration
public class AppConfig {

	@Bean
	public LocalValidatorFactoryBean validator() {
		return new LocalValidatorFactoryBean();
	}
}
```

```
<bean id="validator" class="org.springframework.validation.beanvalidation.LocalValidatorFactoryBean"/>
```

The basic configuration in the preceding example triggers bean validation to initialize by using its default bootstrap mechanism. A Bean Validation provider, such as the Hibernate Validator, is expected to be present in the classpath and is automatically detected.

### Injecting a Validator

`LocalValidatorFactoryBean` implements both `jakarta.validation.ValidatorFactory` and `jakarta.validation.Validator`, as well as Spring's `org.springframework.validation.Validator`. You can inject a reference to either of these interfaces into beans that need to invoke validation logic. Inject a reference to `jakarta.validation.Validator` if you prefer to work with the Bean Validation API directly, or a reference to `org.springframework.validation.Validator` if your bean requires the Spring Validation API, as the following example shows:

```
import org.springframework.validation.Validator;

@Service
public class MyService {

	@Autowired
	private Validator validator;
}
```

```
import org.springframework.validation.Validator

@Service
class MyService(@Autowired private val validator: Validator)
```

### Configuring Custom Constraints

Each bean validation constraint consists of two parts:

* A `@Constraint` annotation that declares the constraint and its configurable properties.
* An implementation of the `jakarta.validation.ConstraintValidator` interface that implements the constraint's behavior.

To associate a declaration with an implementation, each `@Constraint` annotation references a corresponding `ConstraintValidator` implementation class. At runtime, a `ConstraintValidatorFactory` instantiates the referenced implementation when the constraint annotation is encountered in your domain model. By default, the `LocalValidatorFactoryBean` configures a `SpringConstraintValidatorFactory` that uses Spring to create `ConstraintValidator` instances. This lets your custom `ConstraintValidators` benefit from dependency injection like any other Spring bean.
The following example shows a custom `@Constraint` declaration followed by an associated `ConstraintValidator` implementation that uses Spring for dependency injection:

```
@Target({ElementType.METHOD, ElementType.FIELD})
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy=MyConstraintValidator.class)
public @interface MyConstraint {
}
```

```
@Target(AnnotationTarget.FUNCTION, AnnotationTarget.FIELD)
@Retention(AnnotationRetention.RUNTIME)
@Constraint(validatedBy = MyConstraintValidator::class)
annotation class MyConstraint
```

```
import jakarta.validation.ConstraintValidator;

public class MyConstraintValidator implements ConstraintValidator {

	@Autowired
	private Foo aDependency;

	// ...
}
```

```
import jakarta.validation.ConstraintValidator

class MyConstraintValidator(private val aDependency: Foo) : ConstraintValidator {

	// ...
}
```

As the preceding example shows, a `ConstraintValidator` implementation can have its dependencies `@Autowired` as any other Spring bean.

### Spring-driven Method Validation

You can integrate the method validation feature supported by Bean Validation 1.1 (and, as a custom extension, also by Hibernate Validator 4.3) into a Spring context through a `MethodValidationPostProcessor` bean definition:

```
import org.springframework.validation.beanvalidation.MethodValidationPostProcessor;

@Configuration
public class AppConfig {

	@Bean
	public MethodValidationPostProcessor validationPostProcessor() {
		return new MethodValidationPostProcessor();
	}
}
```

```
<bean class="org.springframework.validation.beanvalidation.MethodValidationPostProcessor"/>
```

To be eligible for Spring-driven method validation, all target classes need to be annotated with Spring's `@Validated` annotation, which can optionally also declare the validation groups to use. See the `MethodValidationPostProcessor` javadoc for setup details with the Hibernate Validator and Bean Validation 1.1 providers.

### Additional Configuration Options

The default `LocalValidatorFactoryBean` configuration suffices for most cases. There are a number of configuration options for various Bean Validation constructs, from message interpolation to traversal resolution. See the `LocalValidatorFactoryBean` javadoc for more information on these options.

## Configuring a `DataBinder`

You can configure a `DataBinder` instance with a `Validator`. Once configured, you can invoke the `Validator` by calling `binder.validate()`. Any validation `Errors` are automatically added to the binder's `BindingResult`. The following example shows how to use a `DataBinder` programmatically to invoke validation logic after binding to a target object:

```
Foo target = new Foo();
DataBinder binder = new DataBinder(target);
binder.setValidator(new FooValidator());

// bind to the target object
binder.bind(propertyValues);

// validate the target object
binder.validate();
```

```
val target = Foo()
val binder = DataBinder(target)
binder.validator = FooValidator()

// bind to the target object
binder.bind(propertyValues)

// validate the target object
binder.validate()
```

You can also configure a `DataBinder` with multiple `Validator` instances through `dataBinder.addValidators` and `dataBinder.replaceValidators`. This is useful when combining globally configured bean validation with a Spring `Validator` configured locally on a `DataBinder` instance. See Spring MVC Validation Configuration.
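For instance (a minimal sketch; `target`, `propertyValues`, `globalValidator`, and `FooValidator` stand in for your own objects):

```
DataBinder binder = new DataBinder(target);

// Combine the globally configured Bean Validation validator with a
// locally defined Spring Validator on the same binder.
binder.addValidators(globalValidator, new FooValidator());

binder.bind(propertyValues);
binder.validate();
```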
## Spring MVC 3 Validation

See Validation in the Spring MVC chapter.

# Spring Expression Language (SpEL)

The Spring Expression Language (“SpEL” for short) is a powerful expression language that supports querying and manipulating an object graph at runtime. The language syntax is similar to Unified EL but offers additional features, most notably method invocation and basic string templating functionality.

While there are several other Java expression languages available — OGNL, MVEL, and JBoss EL, to name a few — the Spring Expression Language was created to provide the Spring community with a single well-supported expression language that can be used across all the products in the Spring portfolio. Its language features are driven by the requirements of the projects in the Spring portfolio, including tooling requirements for code completion support within the Spring Tools for Eclipse. That said, SpEL is based on a technology-agnostic API that lets other expression language implementations be integrated, should the need arise.

While SpEL serves as the foundation for expression evaluation within the Spring portfolio, it is not directly tied to Spring and can be used independently. To be self-contained, many of the examples in this chapter use SpEL as if it were an independent expression language. This requires creating a few bootstrapping infrastructure classes, such as the parser. Most Spring users need not deal with this infrastructure and can, instead, author only expression strings for evaluation. An example of this typical use is the integration of SpEL into creating XML or annotation-based bean definitions, as shown in Expression support for defining bean definitions.

This chapter covers the features of the expression language, its API, and its language syntax. In several places, `Inventor` and `Society` classes are used as the target objects for expression evaluation. These class declarations and the data used to populate them are listed at the end of the chapter.

The expression language supports the following functionality:

* Literal expressions
* Boolean and relational operators
* Regular expressions
* Class expressions
* Accessing properties, arrays, lists, and maps
* Method invocation
* Assignment
* Calling constructors
* Bean references
* Array construction
* Inline lists
* Inline maps
* Ternary operator
* Variables
* User-defined functions
* Collection projection
* Collection selection
* Templated expressions

## Evaluation

This section introduces the simple use of SpEL interfaces and its expression language. The complete language reference can be found in Language Reference.

The following code introduces the SpEL API to evaluate the literal string expression, `Hello World`.

```
ExpressionParser parser = new SpelExpressionParser();
Expression exp = parser.parseExpression("'Hello World'"); (1)
String message = (String) exp.getValue();
```

```
val parser = SpelExpressionParser()
val exp = parser.parseExpression("'Hello World'") (1)
val message = exp.value as String
```

The SpEL classes and interfaces you are most likely to use are located in the `org.springframework.expression` package and its sub-packages, such as `spel.support`.

The `ExpressionParser` interface is responsible for parsing an expression string. In the preceding example, the expression string is a string literal denoted by the surrounding single quotation marks. The `Expression` interface is responsible for evaluating the previously defined expression string. Two exceptions can be thrown, `ParseException` and `EvaluationException`, when calling `parser.parseExpression` and `exp.getValue`, respectively.

SpEL supports a wide range of features, such as calling methods, accessing properties, and calling constructors.
In the following example of method invocation, we call the `concat` method on the string literal:

```
ExpressionParser parser = new SpelExpressionParser();
Expression exp = parser.parseExpression("'Hello World'.concat('!')"); (1)
String message = (String) exp.getValue();
```

```
val parser = SpelExpressionParser()
val exp = parser.parseExpression("'Hello World'.concat('!')") (1)
val message = exp.value as String
```

The following example of calling a JavaBean property calls the `String` property `Bytes`:

```
ExpressionParser parser = new SpelExpressionParser();

// invokes 'getBytes()'
Expression exp = parser.parseExpression("'Hello World'.bytes"); (1)
byte[] bytes = (byte[]) exp.getValue();
```

```
val parser = SpelExpressionParser()

// invokes 'getBytes()'
val exp = parser.parseExpression("'Hello World'.bytes") (1)
val bytes = exp.value as ByteArray
```

SpEL also supports nested properties by using the standard dot notation (such as `prop1.prop2.prop3`) and also the corresponding setting of property values. Public fields may also be accessed. The following example shows how to use dot notation to get the length of a literal:

```
ExpressionParser parser = new SpelExpressionParser();

// invokes 'getBytes().length'
Expression exp = parser.parseExpression("'Hello World'.bytes.length"); (1)
int length = (Integer) exp.getValue();
```

```
val parser = SpelExpressionParser()

// invokes 'getBytes().length'
val exp = parser.parseExpression("'Hello World'.bytes.length") (1)
val length = exp.value as Int
```

The String's constructor can be called instead of using a string literal, as the following example shows:

```
ExpressionParser parser = new SpelExpressionParser();
Expression exp = parser.parseExpression("new String('hello world').toUpperCase()"); (1)
String message = exp.getValue(String.class);
```

```
val parser = SpelExpressionParser()
val exp = parser.parseExpression("new String('hello world').toUpperCase()") (1)
val message = exp.getValue(String::class.java)
```

Note the use of the generic method: `public <T> T getValue(Class<T> desiredResultType)`. Using this method removes the need to cast the value of the expression to the desired result type. An `EvaluationException` is thrown if the value cannot be cast to the type `T` or converted by using the registered type converter.

The more common usage of SpEL is to provide an expression string that is evaluated against a specific object instance (called the root object). The following example shows how to retrieve the `name` property from an instance of the `Inventor` class or create a boolean condition:

```
// Create and set a calendar
GregorianCalendar c = new GregorianCalendar();
c.set(1856, 7, 9);

// The constructor arguments are name, birthday, and nationality.
Inventor tesla = new Inventor("<NAME>", c.getTime(), "Serbian");

ExpressionParser parser = new SpelExpressionParser();

Expression exp = parser.parseExpression("name"); // Parse name as an expression
String name = (String) exp.getValue(tesla);
// name == "<NAME>"

exp = parser.parseExpression("name == '<NAME>'");
boolean result = exp.getValue(tesla, Boolean.class);
// result == true
```

```
// Create and set a calendar
val c = GregorianCalendar()
c.set(1856, 7, 9)

// The constructor arguments are name, birthday, and nationality.
val tesla = Inventor("<NAME>", c.time, "Serbian")

val parser = SpelExpressionParser()

var exp = parser.parseExpression("name") // Parse name as an expression
val name = exp.getValue(tesla) as String
// name == "<NAME>"

exp = parser.parseExpression("name == '<NAME>'")
val result = exp.getValue(tesla, Boolean::class.java)
// result == true
```

### Understanding `EvaluationContext`

The `EvaluationContext` interface is used when evaluating an expression to resolve properties, methods, or fields and to help perform type conversion. Spring provides two implementations:

* `SimpleEvaluationContext`: Exposes a subset of essential SpEL language features and configuration options, for categories of expressions that do not require the full extent of the SpEL language syntax and should be meaningfully restricted. Examples include but are not limited to data binding expressions and property-based filters.
* `StandardEvaluationContext`: Exposes the full set of SpEL language features and configuration options. You can use it to specify a default root object and to configure every available evaluation-related strategy.

`SimpleEvaluationContext` is designed to support only a subset of the SpEL language syntax. It excludes Java type references, constructors, and bean references. It also requires you to explicitly choose the level of support for properties and methods in expressions. By default, the `create()` static factory method enables only read access to properties. You can also obtain a builder to configure the exact level of support needed, targeting one or some combination of the following:

* Custom `PropertyAccessor` only (no reflection)
* Data binding properties for read-only access
* Data binding properties for read and write

### Type Conversion

By default, SpEL uses the conversion service available in Spring core (`org.springframework.core.convert.ConversionService`). This conversion service comes with many built-in converters for common conversions but is also fully extensible so that you can add custom conversions between types. Additionally, it is generics-aware. This means that, when you work with generic types in expressions, SpEL attempts conversions to maintain type correctness for any objects it encounters.

What does this mean in practice? Suppose assignment, using `setValue()`, is being used to set a `List` property. The type of the property is actually `List<Boolean>`. SpEL recognizes that the elements of the list need to be converted to `Boolean` before being placed in it. The following example shows how to do so:

```
class Simple {
	public List<Boolean> booleanList = new ArrayList<>();
}

Simple simple = new Simple();
simple.booleanList.add(true);

EvaluationContext context = SimpleEvaluationContext.forReadOnlyDataBinding().build();

// "false" is passed in here as a String. SpEL and the conversion service
// will recognize that it needs to be a Boolean and convert it accordingly.
parser.parseExpression("booleanList[0]").setValue(context, simple, "false");

// b is false
Boolean b = simple.booleanList.get(0);
```

```
class Simple {
	var booleanList: MutableList<Boolean> = ArrayList()
}

val simple = Simple()
simple.booleanList.add(true)

val context = SimpleEvaluationContext.forReadOnlyDataBinding().build()

// "false" is passed in here as a String. SpEL and the conversion service
// will recognize that it needs to be a Boolean and convert it accordingly.
parser.parseExpression("booleanList[0]").setValue(context, simple, "false")

// b is false
val b = simple.booleanList[0]
```

## Parser Configuration

It is possible to configure the SpEL expression parser by using a parser configuration object (`org.springframework.expression.spel.SpelParserConfiguration`). The configuration object controls the behavior of some of the expression components. For example, if you index into an array or collection and the element at the specified index is `null`, SpEL can automatically create the element. This is useful when using expressions made up of a chain of property references.
If you index into an array or list and specify an index that is beyond the end of the current size of the array or list, SpEL can automatically grow the array or list to accommodate that index. In order to add an element at the specified index, SpEL tries to create the element by using the element type's default constructor before setting the specified value. If the element type does not have a default constructor, `null` is added to the array or list. If there is no built-in or custom converter that knows how to set the value, `null` remains in the array or list at the specified index. The following example demonstrates how to automatically grow the list:

```
class Demo {
	public List<String> list;
}

// Turn on:
// - auto null reference initialization
// - auto collection growing
SpelParserConfiguration config = new SpelParserConfiguration(true, true);

ExpressionParser parser = new SpelExpressionParser(config);

Expression expression = parser.parseExpression("list[3]");

Demo demo = new Demo();

Object o = expression.getValue(demo);

// demo.list is now a real collection of 4 entries; each entry is a new empty String
```

```
class Demo {
	var list: List<String>? = null
}

// Turn on:
// - auto null reference initialization
// - auto collection growing
val config = SpelParserConfiguration(true, true)

val parser = SpelExpressionParser(config)

val expression = parser.parseExpression("list[3]")

val demo = Demo()

val o = expression.getValue(demo)

// demo.list is now a real collection of 4 entries; each entry is a new empty String
```

## SpEL Compilation

Spring Framework 4.1 includes a basic expression compiler. Expressions are usually interpreted, which provides a lot of dynamic flexibility during evaluation but does not provide optimum performance. For occasional expression usage, this is fine, but, when used by other components such as Spring Integration, performance can be very important, and there is no real need for the dynamism. The SpEL compiler is intended to address this need. During evaluation, the compiler generates a Java class that embodies the expression behavior at runtime and uses that class to achieve much faster expression evaluation.

Due to the lack of typing around expressions, the compiler uses information gathered during the interpreted evaluations of an expression when performing compilation. For example, it does not know the type of a property reference purely from the expression, but during the first interpreted evaluation, it finds out what it is. Of course, basing compilation on such derived information can cause trouble later if the types of the various expression elements change over time. For this reason, compilation is best suited to expressions whose type information is not going to change on repeated evaluations.

Consider the following basic expression:

> someArray[0].someProperty.someOtherProperty < 0.1

Because the preceding expression involves array access, some property de-referencing, and numeric operations, the performance gain can be very noticeable. In an example micro-benchmark run of 50,000 iterations, it took 75ms to evaluate by using the interpreter and only 3ms by using the compiled version of the expression.

### Compiler Configuration

The compiler is not turned on by default, but you can turn it on in either of two different ways. You can turn it on by using the parser configuration process (discussed earlier) or by using a Spring property when SpEL usage is embedded inside another component. This section discusses both of these options.

The compiler can operate in one of three modes, which are captured in the `org.springframework.expression.spel.SpelCompilerMode` enum. The modes are as follows:

* `OFF` (default): The compiler is switched off.
* `IMMEDIATE`: In immediate mode, the expressions are compiled as soon as possible. This is typically after the first interpreted evaluation. If the compiled expression fails (typically due to a type changing, as described earlier), the caller of the expression evaluation receives an exception.
* `MIXED`: In mixed mode, the expressions silently switch between interpreted and compiled mode over time. After some number of interpreted runs, they switch to compiled form and, if something goes wrong with the compiled form (such as a type changing, as described earlier), the expression automatically switches back to interpreted form again. Sometime later, it may generate another compiled form and switch to it. Basically, the exception that the user gets in `IMMEDIATE` mode is instead handled internally.

`IMMEDIATE` mode exists because `MIXED` mode could cause issues for expressions that have side effects. If a compiled expression blows up after partially succeeding, it may have already done something that has affected the state of the system. If this has happened, the caller may not want it to silently re-run in interpreted mode, since part of the expression may be running twice.

After selecting a mode, use the `SpelParserConfiguration` to configure the parser. The following example shows how to do so:

```
SpelParserConfiguration config = new SpelParserConfiguration(SpelCompilerMode.IMMEDIATE,
		this.getClass().getClassLoader());

SpelExpressionParser parser = new SpelExpressionParser(config);

Expression expr = parser.parseExpression("payload");

MyMessage message = new MyMessage();

Object payload = expr.getValue(message);
```

```
val config = SpelParserConfiguration(SpelCompilerMode.IMMEDIATE,
		this.javaClass.classLoader)

val parser = SpelExpressionParser(config)

val expr = parser.parseExpression("payload")

val message = MyMessage()

val payload = expr.getValue(message)
```

When you specify the compiler mode, you can also specify a classloader (passing `null` is allowed). Compiled expressions are defined in a child classloader created under any that is supplied. It is important to ensure that, if a classloader is specified, it can see all the types involved in the expression evaluation process. If you do not specify a classloader, a default classloader is used (typically the context classloader for the thread that is running during expression evaluation).

The second way to configure the compiler is for use when SpEL is embedded inside some other component and it may not be possible to configure it through a configuration object. In these cases, it is possible to set the `spring.expression.compiler.mode` property via a JVM system property (or via the `SpringProperties` mechanism) to one of the `SpelCompilerMode` enum values (`off`, `immediate`, or `mixed`).

### Compiler Limitations

Since Spring Framework 4.1, the basic compilation framework has been in place. However, the framework does not yet support compiling every kind of expression. The initial focus has been on the common expressions that are likely to be used in performance-critical contexts. The following kinds of expressions cannot be compiled at the moment:

* Expressions involving assignment
* Expressions relying on the conversion service
* Expressions using custom resolvers or accessors
* Expressions using selection or projection

More types of expressions will be compilable in the future.

# Expressions in Bean Definitions

You can use SpEL expressions with XML-based or annotation-based configuration metadata for defining `BeanDefinition` instances. In both cases, the syntax to define the expression is of the form `#{ <expression string> }`.

## XML Configuration

A property or constructor argument value can be set by using expressions, as the following example shows:
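```
<bean id="numberGuess" class="org.spring.samples.NumberGuess">
	<property name="randomNumber" value="#{ T(java.lang.Math).random() * 100.0 }"/>

	<!-- other properties -->
</bean>
```

Here, `T(java.lang.Math)` uses the `T` operator described in the Language Reference to invoke a static method.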
All beans in the application context are available as predefined variables with their common bean name. This includes standard context beans such as `environment` (of type `org.springframework.core.env.Environment`) as well as `systemProperties` and `systemEnvironment` (of type `Map<String, Object>`) for access to the runtime environment. The following example shows access to the `systemProperties` bean as a SpEL variable:

```
<bean id="taxCalculator" class="org.spring.samples.TaxCalculator">
	<property name="defaultLocale" value="#{ systemProperties['user.region'] }"/>
</bean>
```

Note that you do not have to prefix the predefined variable with the `#` symbol here.

You can also refer to other bean properties by name, as the following example shows:

```
<bean id="shapeGuess" class="org.spring.samples.ShapeGuess">
	<property name="initialShapeSeed" value="#{ numberGuess.randomNumber }"/>
</bean>
```

## Annotation Configuration

To specify a default value, you can place the `@Value` annotation on fields, methods, and method or constructor parameters.

The following example sets the default value of a field:

```
public class FieldValueTestBean {

	@Value("#{ systemProperties['user.region'] }")
	private String defaultLocale;

	public void setDefaultLocale(String defaultLocale) {
		this.defaultLocale = defaultLocale;
	}
}
```

The following example shows the equivalent but on a property setter method:

```
public class PropertyValueTestBean {

	private String defaultLocale;

	@Value("#{ systemProperties['user.region'] }")
	public void setDefaultLocale(String defaultLocale) {
		this.defaultLocale = defaultLocale;
	}
}
```

Autowired methods and constructors can also use the `@Value` annotation, as the following examples show:

```
public class SimpleMovieLister {

	private MovieFinder movieFinder;
	private String defaultLocale;

	@Autowired
	public void configure(MovieFinder movieFinder,
			@Value("#{ systemProperties['user.region'] }") String defaultLocale) {
		this.movieFinder = movieFinder;
		this.defaultLocale = defaultLocale;
	}

	// ...
}
```

```
class SimpleMovieLister {

	private lateinit var movieFinder: MovieFinder
	private lateinit var defaultLocale: String

	@Autowired
	fun configure(movieFinder: MovieFinder,
			@Value("#{ systemProperties['user.region'] }") defaultLocale: String) {
		this.movieFinder = movieFinder
		this.defaultLocale = defaultLocale
	}

	// ...
}
```

```
public class MovieRecommender {

	private String defaultLocale;

	private CustomerPreferenceDao customerPreferenceDao;

	public MovieRecommender(CustomerPreferenceDao customerPreferenceDao,
			@Value("#{systemProperties['user.country']}") String defaultLocale) {
		this.customerPreferenceDao = customerPreferenceDao;
		this.defaultLocale = defaultLocale;
	}

	// ...
}
```

```
class MovieRecommender(private val customerPreferenceDao: CustomerPreferenceDao,
		@Value("#{systemProperties['user.country']}") private val defaultLocale: String) {
	// ...
}
```

# Language Reference

This section describes how the Spring Expression Language works. It covers the following topics:

* Literal Expressions
* Properties, Arrays, Lists, Maps, and Indexers
* Inline Lists
* Inline Maps
* Array Construction
* Methods
* Operators
* Types
* Constructors
* Variables
* Functions
* Bean References
* Ternary Operator (If-Then-Else)
* The Elvis Operator
* Safe Navigation Operator
* Collection Selection
* Collection Projection
* Expression Templating

## Literal Expressions

SpEL supports the following types of literal expressions:
* strings
* numeric values: integer (`int` or `long`), hexadecimal (`int` or `long`), real (`float` or `double`)
* boolean values: `true` or `false`
* null

Strings can be delimited by single quotation marks (`'`) or double quotation marks (`"`). To include a single quotation mark within a string literal enclosed in single quotation marks, use two adjacent single quotation mark characters. Similarly, to include a double quotation mark within a string literal enclosed in double quotation marks, use two adjacent double quotation mark characters.

Numbers support the use of the negative sign, exponential notation, and decimal points. By default, real numbers are parsed by using `Double.parseDouble()`.

The following listing shows simple usage of literals. Typically, they are not used in isolation like this but, rather, as part of a more complex expression — for example, using a literal on one side of a logical comparison operator or as an argument to a method.

```
ExpressionParser parser = new SpelExpressionParser();

// evaluates to "Hello World"
String helloWorld = (String) parser.parseExpression("'Hello World'").getValue();

// evaluates to "Tony's Pizza"
String pizzaParlor = (String) parser.parseExpression("'Tony''s Pizza'").getValue();

double avogadrosNumber = (Double) parser.parseExpression("6.0221415E+23").getValue();

// evaluates to 2147483647
int maxValue = (Integer) parser.parseExpression("0x7FFFFFFF").getValue();

boolean trueValue = (Boolean) parser.parseExpression("true").getValue();

Object nullValue = parser.parseExpression("null").getValue();
```

```
val parser = SpelExpressionParser()

// evaluates to "Hello World"
val helloWorld = parser.parseExpression("'Hello World'").value as String

// evaluates to "Tony's Pizza"
val pizzaParlor = parser.parseExpression("'Tony''s Pizza'").value as String

val avogadrosNumber = parser.parseExpression("6.0221415E+23").value as Double

// evaluates to 2147483647
val maxValue = parser.parseExpression("0x7FFFFFFF").value as Int

val trueValue = parser.parseExpression("true").value as Boolean

val nullValue = parser.parseExpression("null").value
```

## Properties, Arrays, Lists, Maps, and Indexers

Navigating with property references is easy. To do so, use a period to indicate a nested property value. The instances of the `Inventor` class, `pupin` and `tesla`, were populated with data listed in the Classes used in the examples section.
To navigate "down" the object graph and get Tesla's year of birth and Pupin's city of birth, we use the following expressions:

```
// evaluates to 1856
int year = (Integer) parser.parseExpression("birthdate.year + 1900").getValue(context);

String city = (String) parser.parseExpression("placeOfBirth.city").getValue(context);
```

```
// evaluates to 1856
val year = parser.parseExpression("birthdate.year + 1900").getValue(context) as Int

val city = parser.parseExpression("placeOfBirth.city").getValue(context) as String
```

The contents of arrays and lists are obtained by using square bracket notation, as the following example shows:

```
// evaluates to "Induction motor"
String invention = parser.parseExpression("inventions[3]").getValue(
		context, tesla, String.class);

// evaluates to "<NAME>"
String name = parser.parseExpression("members[0].name").getValue(
		context, ieee, String.class);

// List and Array navigation
// evaluates to "Wireless communication"
String invention = parser.parseExpression("members[0].inventions[6]").getValue(
		context, ieee, String.class);
```

```
// evaluates to "Induction motor"
val invention = parser.parseExpression("inventions[3]").getValue(
		context, tesla, String::class.java)

// evaluates to "<NAME>"
val name = parser.parseExpression("members[0].name").getValue(
		context, ieee, String::class.java)

// List and Array navigation
// evaluates to "Wireless communication"
val invention = parser.parseExpression("members[0].inventions[6]").getValue(
		context, ieee, String::class.java)
```

The contents of maps are obtained by specifying the literal key value within the brackets. In the following example, because keys for the `officers` map are strings, we can specify string literals:

```
Inventor pupin = parser.parseExpression("officers['president']").getValue(
		societyContext, Inventor.class);

// evaluates to "Idvor"
String city = parser.parseExpression("officers['president'].placeOfBirth.city").getValue(
		societyContext, String.class);

// setting values
parser.parseExpression("officers['advisors'][0].placeOfBirth.country").setValue(
		societyContext, "Croatia");
```

```
val pupin = parser.parseExpression("officers['president']").getValue(
		societyContext, Inventor::class.java)

// evaluates to "Idvor"
val city = parser.parseExpression("officers['president'].placeOfBirth.city").getValue(
		societyContext, String::class.java)

// setting values
parser.parseExpression("officers['advisors'][0].placeOfBirth.country").setValue(
		societyContext, "Croatia")
```

## Inline Lists

You can directly express lists in an expression by using `{}` notation:

```
// evaluates to a Java list containing the four numbers
List numbers = (List) parser.parseExpression("{1,2,3,4}").getValue(context);

List listOfLists = (List) parser.parseExpression("{{'a','b'},{'x','y'}}").getValue(context);
```

```
// evaluates to a Java list containing the four numbers
val numbers = parser.parseExpression("{1,2,3,4}").getValue(context) as List<*>

val listOfLists = parser.parseExpression("{{'a','b'},{'x','y'}}").getValue(context) as List<*>
```

`{}` by itself means an empty list. For performance reasons, if the list is itself entirely composed of fixed literals, a constant list is created to represent the expression (rather than building a new list on each evaluation).

## Inline Maps

You can also directly express maps in an expression by using `{key:value}` notation.
The following example shows how to do so:

```
// evaluates to a Java map containing the two entries
Map inventorInfo = (Map) parser.parseExpression("{name:'Nikola',dob:'10-July-1856'}").getValue(context);

Map mapOfMaps = (Map) parser.parseExpression("{name:{first:'Nikola',last:'Tesla'},dob:{day:10,month:'July',year:1856}}").getValue(context);
```

```
// evaluates to a Java map containing the two entries
val inventorInfo = parser.parseExpression("{name:'Nikola',dob:'10-July-1856'}").getValue(context) as Map<*, *>

val mapOfMaps = parser.parseExpression("{name:{first:'Nikola',last:'Tesla'},dob:{day:10,month:'July',year:1856}}").getValue(context) as Map<*, *>
```

`{:}` by itself means an empty map. For performance reasons, if the map is itself composed of fixed literals or other nested constant structures (lists or maps), a constant map is created to represent the expression (rather than building a new map on each evaluation). Quoting of the map keys is optional (unless the key contains a period (`.`)). The examples above do not use quoted keys.

## Array Construction

You can build arrays by using the familiar Java syntax, optionally supplying an initializer to have the array populated at construction time. The following example shows how to do so:

```
int[] numbers1 = (int[]) parser.parseExpression("new int[4]").getValue(context);

// Array with initializer
int[] numbers2 = (int[]) parser.parseExpression("new int[]{1,2,3}").getValue(context);

// Multi dimensional array
int[][] numbers3 = (int[][]) parser.parseExpression("new int[4][5]").getValue(context);
```

```
val numbers1 = parser.parseExpression("new int[4]").getValue(context) as IntArray

// Array with initializer
val numbers2 = parser.parseExpression("new int[]{1,2,3}").getValue(context) as IntArray

// Multi dimensional array
val numbers3 = parser.parseExpression("new int[4][5]").getValue(context) as Array<IntArray>
```

You cannot currently supply an initializer when you construct a multi-dimensional array.

## Methods

You can invoke methods by using typical Java programming syntax. You can also invoke methods on literals. Variable arguments are also supported. The following examples show how to invoke methods:

```
// string literal, evaluates to "bc"
String bc = parser.parseExpression("'abc'.substring(1, 3)").getValue(String.class);

// evaluates to true
boolean isMember = parser.parseExpression("isMember('<NAME>')").getValue(
		societyContext, Boolean.class);
```

```
// string literal, evaluates to "bc"
val bc = parser.parseExpression("'abc'.substring(1, 3)").getValue(String::class.java)

// evaluates to true
val isMember = parser.parseExpression("isMember('<NAME>')").getValue(
		societyContext, Boolean::class.java)
```

## Operators

The Spring Expression Language supports the following kinds of operators: relational, logical, mathematical, and the assignment operator.

## Relational Operators

The relational operators (equal, not equal, less than, less than or equal, greater than, and greater than or equal) are supported by using standard operator notation. These operators work on `Number` types as well as types implementing `Comparable`.
The following listing shows a few examples of operators:

```
// evaluates to true
boolean trueValue = parser.parseExpression("2 == 2").getValue(Boolean.class);

// evaluates to false
boolean falseValue = parser.parseExpression("2 < -5.0").getValue(Boolean.class);

// evaluates to true
boolean trueValue = parser.parseExpression("'black' < 'block'").getValue(Boolean.class);

// uses CustomValue::compareTo
boolean trueValue = parser.parseExpression("new CustomValue(1) < new CustomValue(2)").getValue(Boolean.class);
```

```
// evaluates to true
val trueValue = parser.parseExpression("2 == 2").getValue(Boolean::class.java)

// evaluates to false
val falseValue = parser.parseExpression("2 < -5.0").getValue(Boolean::class.java)

// evaluates to true
val trueValue = parser.parseExpression("'black' < 'block'").getValue(Boolean::class.java)

// uses CustomValue::compareTo
val trueValue = parser.parseExpression("new CustomValue(1) < new CustomValue(2)").getValue(Boolean::class.java)
```

In addition to the standard relational operators, SpEL supports the `instanceof` operator and the regular expression-based `matches` operator. The following listing shows examples of both:

```
// evaluates to false
boolean falseValue = parser.parseExpression(
		"'xyz' instanceof T(Integer)").getValue(Boolean.class);

// evaluates to true
boolean trueValue = parser.parseExpression(
		"'5.00' matches '^-?\\d+(\\.\\d{2})?$'").getValue(Boolean.class);

// evaluates to false
boolean falseValue = parser.parseExpression(
		"'5.0067' matches '^-?\\d+(\\.\\d{2})?$'").getValue(Boolean.class);
```

```
// evaluates to false
val falseValue = parser.parseExpression(
		"'xyz' instanceof T(Integer)").getValue(Boolean::class.java)

// evaluates to true
val trueValue = parser.parseExpression(
		"'5.00' matches '^-?\\d+(\\.\\d{2})?$'").getValue(Boolean::class.java)

// evaluates to false
val falseValue = parser.parseExpression(
		"'5.0067' matches '^-?\\d+(\\.\\d{2})?$'").getValue(Boolean::class.java)
```

Be careful with primitive types, as they are immediately boxed up to their wrapper types. For example, `1 instanceof T(int)` evaluates to `false`, while `1 instanceof T(Integer)` evaluates to `true`, as expected.

Each symbolic operator can also be specified as a purely alphabetic equivalent. This avoids problems where the symbols used have special meaning for the document type in which the expression is embedded (such as in an XML document). The textual equivalents are:

* `lt` (`<`)
* `gt` (`>`)
* `le` (`<=`)
* `ge` (`>=`)
* `eq` (`==`)
* `ne` (`!=`)
* `div` (`/`)
* `mod` (`%`)
* `not` (`!`)

All of the textual operators are case-insensitive.
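For example, the following snippet shows the alphabetic forms behaving exactly like their symbolic counterparts:

```
ExpressionParser parser = new SpelExpressionParser();

// evaluates to true, same as "2 > 1"
boolean trueValue = parser.parseExpression("2 gt 1").getValue(Boolean.class);

// evaluates to false, same as "'black' == 'block'"
boolean falseValue = parser.parseExpression("'black' eq 'block'").getValue(Boolean.class);
```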
## Logical Operators

SpEL supports the following logical operators:

* `and` (`&&`)
* `or` (`||`)
* `not` (`!`)

The following example shows how to use the logical operators:

```
// evaluates to false
boolean falseValue = parser.parseExpression("true and false").getValue(Boolean.class);

// evaluates to true
String expression = "isMember('<NAME>') and isMember('<NAME>')";
boolean trueValue = parser.parseExpression(expression).getValue(societyContext, Boolean.class);

// evaluates to true
boolean trueValue = parser.parseExpression("true or false").getValue(Boolean.class);

// evaluates to true
String expression = "isMember('<NAME>') or isMember('<NAME>')";
boolean trueValue = parser.parseExpression(expression).getValue(societyContext, Boolean.class);

// evaluates to false
boolean falseValue = parser.parseExpression("!true").getValue(Boolean.class);

// -- AND and NOT --
String expression = "isMember('<NAME>') and !isMember('<NAME>')";
boolean falseValue = parser.parseExpression(expression).getValue(societyContext, Boolean.class);
```

```
// evaluates to false
val falseValue = parser.parseExpression("true and false").getValue(Boolean::class.java)

// evaluates to true
val expression = "isMember('<NAME>') and isMember('<NAME>')"
val trueValue = parser.parseExpression(expression).getValue(societyContext, Boolean::class.java)

// evaluates to true
val trueValue = parser.parseExpression("true or false").getValue(Boolean::class.java)

// evaluates to true
val expression = "isMember('<NAME>') or isMember('<NAME>')"
val trueValue = parser.parseExpression(expression).getValue(societyContext, Boolean::class.java)

// evaluates to false
val falseValue = parser.parseExpression("!true").getValue(Boolean::class.java)

// -- AND and NOT --
val expression = "isMember('<NAME>') and !isMember('<NAME>')"
val falseValue = parser.parseExpression(expression).getValue(societyContext, Boolean::class.java)
```

## Mathematical Operators

You can use the addition operator (`+`) on both numbers and strings. You can use the subtraction (`-`), multiplication (`*`), and division (`/`) operators only on numbers. You can also use the modulus (`%`) and exponential power (`^`) operators on numbers. Standard operator precedence is enforced.
The following example shows the mathematical operators in use:

```
// Addition
int two = parser.parseExpression("1 + 1").getValue(Integer.class); // 2

String testString = parser.parseExpression(
        "'test' + ' ' + 'string'").getValue(String.class); // 'test string'

// Subtraction
int four = parser.parseExpression("1 - -3").getValue(Integer.class); // 4

double d = parser.parseExpression("1000.00 - 1e4").getValue(Double.class); // -9000

// Multiplication
int six = parser.parseExpression("-2 * -3").getValue(Integer.class); // 6

double twentyFour = parser.parseExpression("2.0 * 3e0 * 4").getValue(Double.class); // 24.0

// Division
int minusTwo = parser.parseExpression("6 / -3").getValue(Integer.class); // -2

double one = parser.parseExpression("8.0 / 4e0 / 2").getValue(Double.class); // 1.0

// Modulus
int three = parser.parseExpression("7 % 4").getValue(Integer.class); // 3

int one = parser.parseExpression("8 / 5 % 2").getValue(Integer.class); // 1

// Operator precedence
int minusTwentyOne = parser.parseExpression("1+2-3*8").getValue(Integer.class); // -21
```

```
// Addition
val two = parser.parseExpression("1 + 1").getValue(Int::class.java) // 2

val testString = parser.parseExpression(
        "'test' + ' ' + 'string'").getValue(String::class.java) // 'test string'

// Subtraction
val four = parser.parseExpression("1 - -3").getValue(Int::class.java) // 4

val d = parser.parseExpression("1000.00 - 1e4").getValue(Double::class.java) // -9000

// Multiplication
val six = parser.parseExpression("-2 * -3").getValue(Int::class.java) // 6

val twentyFour = parser.parseExpression("2.0 * 3e0 * 4").getValue(Double::class.java) // 24.0

// Division
val minusTwo = parser.parseExpression("6 / -3").getValue(Int::class.java) // -2

val one = parser.parseExpression("8.0 / 4e0 / 2").getValue(Double::class.java) // 1.0

// Modulus
val three = parser.parseExpression("7 % 4").getValue(Int::class.java) // 3

val one = parser.parseExpression("8 / 5 % 2").getValue(Int::class.java) // 1

// Operator precedence
val minusTwentyOne = parser.parseExpression("1+2-3*8").getValue(Int::class.java) // -21
```

## The Assignment Operator

To set a property, use the assignment operator (`=`). This is typically done within a call to `setValue` but can also be done inside a call to `getValue`. The following listing shows both ways to use the assignment operator:

```
Inventor inventor = new Inventor();
EvaluationContext context = SimpleEvaluationContext.forReadWriteDataBinding().build();

parser.parseExpression("name").setValue(context, inventor, "<NAME>");

// alternatively
String aleks = parser.parseExpression(
        "name = '<NAME>'").getValue(context, inventor, String.class);
```

```
val inventor = Inventor()
val context = SimpleEvaluationContext.forReadWriteDataBinding().build()

parser.parseExpression("name").setValue(context, inventor, "<NAME>")

// alternatively
val aleks = parser.parseExpression(
        "name = '<NAME>'").getValue(context, inventor, String::class.java)
```

## Types

You can use the special `T` operator to specify an instance of `java.lang.Class` (the type). Static methods are invoked by using this operator as well. SpEL uses a `TypeLocator` to find types, and the `StandardTypeLocator` (which can be replaced) is built with an understanding of the `java.lang` package. This means that `T()` references to types within the `java.lang` package do not need to be fully qualified, but all other type references must be.
The following example shows how to use the `T` operator:

```
Class dateClass = parser.parseExpression("T(java.util.Date)").getValue(Class.class);

Class stringClass = parser.parseExpression("T(String)").getValue(Class.class);

boolean trueValue = parser.parseExpression(
        "T(java.math.RoundingMode).CEILING < T(java.math.RoundingMode).FLOOR")
        .getValue(Boolean.class);
```

```
val dateClass = parser.parseExpression("T(java.util.Date)").getValue(Class::class.java)

val stringClass = parser.parseExpression("T(String)").getValue(Class::class.java)

val trueValue = parser.parseExpression(
        "T(java.math.RoundingMode).CEILING < T(java.math.RoundingMode).FLOOR")
        .getValue(Boolean::class.java)
```

## Constructors

You can invoke constructors by using the `new` operator. You should use the fully qualified class name for all types except those located in the `java.lang` package (`Integer`, `Float`, `String`, and so on). The following example shows how to use the `new` operator to invoke constructors:

```
Inventor einstein = p.parseExpression(
        "new org.spring.samples.spel.inventor.Inventor('<NAME>', 'German')")
        .getValue(Inventor.class);

// create new Inventor instance within the add() method of List
p.parseExpression(
        "Members.add(new org.spring.samples.spel.inventor.Inventor('<NAME>', 'German'))")
        .getValue(societyContext);
```

```
val einstein = p.parseExpression(
        "new org.spring.samples.spel.inventor.Inventor('<NAME>', 'German')")
        .getValue(Inventor::class.java)

// create new Inventor instance within the add() method of List
p.parseExpression(
        "Members.add(new org.spring.samples.spel.inventor.Inventor('<NAME>', 'German'))")
        .getValue(societyContext)
```

## Variables

You can reference variables in the expression by using the `#variableName` syntax. Variables are set by using the `setVariable` method on `EvaluationContext` implementations.

```
Inventor tesla = new Inventor("<NAME>", "Serbian");

EvaluationContext context = SimpleEvaluationContext.forReadWriteDataBinding().build();
context.setVariable("newName", "<NAME>");

parser.parseExpression("name = #newName").getValue(context, tesla);
System.out.println(tesla.getName()); // "<NAME>"
```

```
val tesla = Inventor("<NAME>", "Serbian")

val context = SimpleEvaluationContext.forReadWriteDataBinding().build()
context.setVariable("newName", "<NAME>")

parser.parseExpression("name = #newName").getValue(context, tesla)
println(tesla.name) // "<NAME>"
```

### The `#this` and `#root` Variables

The `#this` variable is always defined and refers to the current evaluation object (against which unqualified references are resolved). The `#root` variable is always defined and refers to the root context object. Although `#this` may vary as components of an expression are evaluated, `#root` always refers to the root.
The following examples show how to use the `#this` and `#root` variables:

```
// create a list of integers
List<Integer> primes = new ArrayList<>();
primes.addAll(Arrays.asList(2,3,5,7,11,13,17));

// create parser and set variable 'primes' as the list of integers
ExpressionParser parser = new SpelExpressionParser();
EvaluationContext context = SimpleEvaluationContext.forReadOnlyDataBinding().build();
context.setVariable("primes", primes);

// all prime numbers > 10 from the list (using selection ?[...])
// evaluates to [11, 13, 17]
List<Integer> primesGreaterThanTen = (List<Integer>) parser.parseExpression(
        "#primes.?[#this>10]").getValue(context);
```

```
// create a list of integers
val primes = ArrayList<Int>()
primes.addAll(listOf(2, 3, 5, 7, 11, 13, 17))

// create parser and set variable 'primes' as the list of integers
val parser = SpelExpressionParser()
val context = SimpleEvaluationContext.forReadOnlyDataBinding().build()
context.setVariable("primes", primes)

// all prime numbers > 10 from the list (using selection ?[...])
// evaluates to [11, 13, 17]
val primesGreaterThanTen = parser.parseExpression(
        "#primes.?[#this>10]").getValue(context) as List<Int>
```

## Functions

You can extend SpEL by registering user-defined functions that can be called within the expression string. The function is registered through the `EvaluationContext`. The following example shows how to register a user-defined function:

```
Method method = ...;

EvaluationContext context = SimpleEvaluationContext.forReadOnlyDataBinding().build();
context.setVariable("myFunction", method);
```

```
val method: Method = ...

val context = SimpleEvaluationContext.forReadOnlyDataBinding().build()
context.setVariable("myFunction", method)
```

For example, consider the following utility method that reverses a string:

```
public abstract class StringUtils {

    public static String reverseString(String input) {
        StringBuilder backwards = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            backwards.append(input.charAt(input.length() - 1 - i));
        }
        return backwards.toString();
    }
}
```

```
fun reverseString(input: String): String {
    val backwards = StringBuilder(input.length)
    for (i in 0 until input.length) {
        backwards.append(input[input.length - 1 - i])
    }
    return backwards.toString()
}
```

You can then register and use the preceding method, as the following example shows:

```
EvaluationContext context = SimpleEvaluationContext.forReadOnlyDataBinding().build();
context.setVariable("reverseString",
        StringUtils.class.getDeclaredMethod("reverseString", String.class));

String helloWorldReversed = parser.parseExpression(
        "#reverseString('hello')").getValue(context, String.class);
```

```
val context = SimpleEvaluationContext.forReadOnlyDataBinding().build()
context.setVariable("reverseString", ::reverseString.javaMethod)

val helloWorldReversed = parser.parseExpression(
        "#reverseString('hello')").getValue(context, String::class.java)
```

## Bean References

If the evaluation context has been configured with a bean resolver, you can look up beans from an expression by using the `@` symbol. The following example shows how to do so:

```
// This will end up calling resolve(context,"something") on MyBeanResolver during evaluation
Object bean = parser.parseExpression("@something").getValue(context);
```

```
// This will end up calling resolve(context,"something") on MyBeanResolver during evaluation
val bean = parser.parseExpression("@something").getValue(context)
```

To access a factory bean itself, you should instead prefix the bean name with an `&` symbol.
The following example shows how to do so:

```
// This will end up calling resolve(context,"&foo") on MyBeanResolver during evaluation
Object bean = parser.parseExpression("&foo").getValue(context);
```

```
// This will end up calling resolve(context,"&foo") on MyBeanResolver during evaluation
val bean = parser.parseExpression("&foo").getValue(context)
```

## Ternary Operator (If-Then-Else)

You can use the ternary operator for performing if-then-else conditional logic inside the expression. The following listing shows a minimal example:

```
String falseString = parser.parseExpression(
        "false ? 'trueExp' : 'falseExp'").getValue(String.class);
```

```
val falseString = parser.parseExpression(
        "false ? 'trueExp' : 'falseExp'").getValue(String::class.java)
```

In this case, the boolean `false` results in returning the string value `'falseExp'`. A more realistic example follows:

```
parser.parseExpression("name").setValue(societyContext, "IEEE");
societyContext.setVariable("queryName", "<NAME>");
```

```
parser.parseExpression("name").setValue(societyContext, "IEEE")
societyContext.setVariable("queryName", "<NAME>")
```

See the next section on the Elvis operator for an even shorter syntax for the ternary operator.

## The Elvis Operator

The Elvis operator is a shortening of the ternary operator syntax and is used in the Groovy language. With the ternary operator syntax, you usually have to repeat a variable twice, as the following example shows:

```
String name = "<NAME>";
String displayName = (name != null ? name : "Unknown");
```

Instead, you can use the Elvis operator (named for the resemblance to Elvis' hair style). The following example shows how to use the Elvis operator:

```
String name = parser.parseExpression("name?:'Unknown'").getValue(new Inventor(), String.class);
System.out.println(name); // 'Unknown'
```

```
val name = parser.parseExpression("name?:'Unknown'").getValue(Inventor(), String::class.java)
println(name) // 'Unknown'
```

The following listing shows a more complex example:

```
Inventor tesla = new Inventor("<NAME>", "Serbian");
String name = parser.parseExpression("name?:'<NAME>'").getValue(context, tesla, String.class);
System.out.println(name); // <NAME>

tesla.setName(null);
name = parser.parseExpression("name?:'<NAME>'").getValue(context, tesla, String.class);
System.out.println(name); // <NAME>
```

```
val tesla = Inventor("<NAME>", "Serbian")
var name = parser.parseExpression("name?:'<NAME>'").getValue(context, tesla, String::class.java)
println(name) // <NAME>

tesla.setName(null)
name = parser.parseExpression("name?:'<NAME>'").getValue(context, tesla, String::class.java)
println(name) // <NAME>
```

## Safe Navigation Operator

The safe navigation operator is used to avoid a `NullPointerException` and comes from the Groovy language. Typically, when you have a reference to an object, you might need to verify that it is not null before accessing methods or properties of the object. To avoid this, the safe navigation operator returns null instead of throwing an exception. The following example shows how to use the safe navigation operator:

```
Inventor tesla = new Inventor("<NAME>", "Serbian");
tesla.setPlaceOfBirth(new PlaceOfBirth("Smiljan"));

String city = parser.parseExpression("placeOfBirth?.city").getValue(context, tesla, String.class);
System.out.println(city); // Smiljan

tesla.setPlaceOfBirth(null);
city = parser.parseExpression("placeOfBirth?.city").getValue(context, tesla, String.class);
System.out.println(city); // null - does not throw NullPointerException!!!
```
```
val tesla = Inventor("<NAME>", "Serbian")
tesla.setPlaceOfBirth(PlaceOfBirth("Smiljan"))

var city = parser.parseExpression("placeOfBirth?.city").getValue(context, tesla, String::class.java)
println(city) // Smiljan

tesla.setPlaceOfBirth(null)
city = parser.parseExpression("placeOfBirth?.city").getValue(context, tesla, String::class.java)
println(city) // null - does not throw NullPointerException!!!
```

## Collection Selection

Selection is a powerful expression language feature that lets you transform a source collection into another collection by selecting from its entries. Selection uses a syntax of `.?[selectionExpression]`. It filters the collection and returns a new collection that contains a subset of the original elements. For example, selection lets us easily get a list of Serbian inventors, as the following example shows:

```
List<Inventor> list = (List<Inventor>) parser.parseExpression(
        "members.?[nationality == 'Serbian']").getValue(societyContext);
```

```
val list = parser.parseExpression(
        "members.?[nationality == 'Serbian']").getValue(societyContext) as List<Inventor>
```

Selection is supported for arrays and anything that implements `java.lang.Iterable` or `java.util.Map`. For a list or array, the selection criterion is evaluated against each individual element. Against a map, the selection criterion is evaluated against each map entry (objects of the Java type `Map.Entry`). Each map entry has its `key` and `value` accessible as properties for use in the selection.

The following expression returns a new map that consists of those elements of the original map where the entry's value is less than 27:

```
Map newMap = (Map) parser.parseExpression("map.?[value<27]").getValue();
```

```
val newMap = parser.parseExpression("map.?[value<27]").getValue()
```

In addition to returning all the selected elements, you can retrieve only the first or the last element. To obtain the first element matching the selection, the syntax is `.^[selectionExpression]`. To obtain the last matching element, the syntax is `.$[selectionExpression]`.

## Collection Projection

Projection lets a collection drive the evaluation of a sub-expression, and the result is a new collection. The syntax for projection is `.![projectionExpression]`. For example, suppose we have a list of inventors but want the list of cities where they were born. Effectively, we want to evaluate 'placeOfBirth.city' for every entry in the inventor list. The following example uses projection to do so:

```
// returns ['Smiljan', 'Idvor']
List placesOfBirth = (List) parser.parseExpression("members.![placeOfBirth.city]").getValue(societyContext);
```

```
// returns ['Smiljan', 'Idvor']
val placesOfBirth = parser.parseExpression("members.![placeOfBirth.city]").getValue(societyContext) as List<*>
```

Projection is supported for arrays and anything that implements `java.lang.Iterable` or `java.util.Map`. When using a map to drive projection, the projection expression is evaluated against each entry in the map (represented as a Java `Map.Entry`). The result of a projection across a map is a list that consists of the evaluation of the projection expression against each map entry.

## Expression Templating

Expression templates allow mixing literal text with one or more evaluation blocks. Each evaluation block is delimited with prefix and suffix characters that you can define.
A common choice is to use `#{ }` as the delimiters, as the following example shows:

```
String randomPhrase = parser.parseExpression(
        "random number is #{T(java.lang.Math).random()}",
        new TemplateParserContext()).getValue(String.class);
```

```
val randomPhrase = parser.parseExpression(
        "random number is #{T(java.lang.Math).random()}",
        TemplateParserContext()).getValue(String::class.java)
```

The string is evaluated by concatenating the literal text `'random number is '` with the result of evaluating the expression inside the `#{ }` delimiters (in this case, the result of calling the `random()` method). The second argument to the `parseExpression()` method is of the type `ParserContext`. The `ParserContext` interface is used to influence how the expression is parsed in order to support the expression templating functionality. The definition of `TemplateParserContext` follows:

```
public class TemplateParserContext implements ParserContext {

    public String getExpressionPrefix() {
        return "#{";
    }

    public String getExpressionSuffix() {
        return "}";
    }

    public boolean isTemplate() {
        return true;
    }
}
```

```
class TemplateParserContext : ParserContext {

    override fun getExpressionPrefix(): String {
        return "#{"
    }

    override fun getExpressionSuffix(): String {
        return "}"
    }

    override fun isTemplate(): Boolean {
        return true
    }
}
```

## Classes Used in the Examples

This section lists the classes used in the examples throughout this chapter.

Inventor.java:

```
import java.util.Date;
import java.util.GregorianCalendar;

public class Inventor {

    private String name;
    private String nationality;
    private String[] inventions;
    private Date birthdate;
    private PlaceOfBirth placeOfBirth;

    public Inventor(String name, String nationality) {
        GregorianCalendar c = new GregorianCalendar();
        this.name = name;
        this.nationality = nationality;
        this.birthdate = c.getTime();
    }

    public Inventor(String name, Date birthdate, String nationality) {
        this.name = name;
        this.nationality = nationality;
        this.birthdate = birthdate;
    }

    public Inventor() {
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getNationality() {
        return nationality;
    }

    public void setNationality(String nationality) {
        this.nationality = nationality;
    }

    public Date getBirthdate() {
        return birthdate;
    }

    public void setBirthdate(Date birthdate) {
        this.birthdate = birthdate;
    }

    public PlaceOfBirth getPlaceOfBirth() {
        return placeOfBirth;
    }

    public void setPlaceOfBirth(PlaceOfBirth placeOfBirth) {
        this.placeOfBirth = placeOfBirth;
    }

    public void setInventions(String[] inventions) {
        this.inventions = inventions;
    }

    public String[] getInventions() {
        return inventions;
    }
}
```

Inventor.kt:

```
class Inventor(
    var name: String,
    var nationality: String,
    var inventions: Array<String>? = null,
    var birthdate: Date = GregorianCalendar().time,
    var placeOfBirth: PlaceOfBirth? = null)
```

PlaceOfBirth.java:

```
public class PlaceOfBirth {

    private String city;
    private String country;

    public PlaceOfBirth(String city) {
        this.city = city;
    }

    public PlaceOfBirth(String city, String country) {
        this(city);
        this.country = country;
    }

    public String getCity() {
        return city;
    }

    public void setCity(String s) {
        this.city = s;
    }

    public String getCountry() {
        return country;
    }

    public void setCountry(String country) {
        this.country = country;
    }
}
```

PlaceOfBirth.kt:

```
class PlaceOfBirth(var city: String, var country: String? = null)
```
Society.java:

```
import java.util.*;

public class Society {

    public static String Advisors = "advisors";
    public static String President = "president";

    private String name;

    private List<Inventor> members = new ArrayList<>();
    private Map officers = new HashMap();

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public List getMembers() {
        return members;
    }

    public Map getOfficers() {
        return officers;
    }

    public boolean isMember(String name) {
        for (Inventor inventor : members) {
            if (inventor.getName().equals(name)) {
                return true;
            }
        }
        return false;
    }
}
```

Society.kt:

```
import java.util.*

class Society {

    val Advisors = "advisors"
    val President = "president"

    var name: String? = null

    val members = ArrayList<Inventor>()
    val officers = mapOf<Any, Any>()

    fun isMember(name: String): Boolean {
        for (inventor in members) {
            if (inventor.name == name) {
                return true
            }
        }
        return false
    }
}
```

Aspect-oriented Programming (AOP) complements Object-oriented Programming (OOP) by providing another way of thinking about program structure. The key unit of modularity in OOP is the class, whereas in AOP the unit of modularity is the aspect. Aspects enable the modularization of concerns (such as transaction management) that cut across multiple types and objects. (Such concerns are often termed "crosscutting" concerns in AOP literature.)

One of the key components of Spring is the AOP framework. While the Spring IoC container does not depend on AOP (meaning you do not need to use AOP if you don't want to), AOP complements Spring IoC to provide a very capable middleware solution.

AOP is used in the Spring Framework to:

* Provide declarative enterprise services. The most important such service is declarative transaction management.
* Let users implement custom aspects, complementing their use of OOP with AOP.

If you are interested only in generic declarative services or other pre-packaged declarative middleware services such as pooling, you do not need to work directly with Spring AOP, and can skip most of this chapter.

Let us begin by defining some central AOP concepts and terminology. These terms are not Spring-specific. Unfortunately, AOP terminology is not particularly intuitive. However, it would be even more confusing if Spring used its own terminology.

* Aspect: A modularization of a concern that cuts across multiple classes. Transaction management is a good example of a crosscutting concern in enterprise Java applications. In Spring AOP, aspects are implemented by using regular classes (the schema-based approach) or regular classes annotated with the `@Aspect` annotation (the @AspectJ style).
* Join point: A point during the execution of a program, such as the execution of a method or the handling of an exception. In Spring AOP, a join point always represents a method execution.
* Advice: Action taken by an aspect at a particular join point. Different types of advice include "around", "before", and "after" advice. (Advice types are discussed later.) Many AOP frameworks, including Spring, model an advice as an interceptor and maintain a chain of interceptors around the join point.
* Pointcut: A predicate that matches join points. Advice is associated with a pointcut expression and runs at any join point matched by the pointcut (for example, the execution of a method with a certain name). The concept of join points as matched by pointcut expressions is central to AOP, and Spring uses the AspectJ pointcut expression language by default.
* Introduction: Declaring additional methods or fields on behalf of a type. Spring AOP lets you introduce new interfaces (and a corresponding implementation) to any advised object.
For example, you could use an introduction to make a bean implement an `IsModified` interface, to simplify caching. (An introduction is known as an inter-type declaration in the AspectJ community.)
* Target object: An object being advised by one or more aspects. Also referred to as the "advised object". Since Spring AOP is implemented by using runtime proxies, this object is always a proxied object.
* AOP proxy: An object created by the AOP framework in order to implement the aspect contracts (advise method executions and so on). In the Spring Framework, an AOP proxy is a JDK dynamic proxy or a CGLIB proxy.
* Weaving: Linking aspects with other application types or objects to create an advised object. This can be done at compile time (using the AspectJ compiler, for example), load time, or at runtime. Spring AOP, like other pure Java AOP frameworks, performs weaving at runtime.

Spring AOP includes the following types of advice:

* Before advice: Advice that runs before a join point but that does not have the ability to prevent execution flow proceeding to the join point (unless it throws an exception).
* After returning advice: Advice to be run after a join point completes normally (for example, if a method returns without throwing an exception).
* After throwing advice: Advice to be run if a method exits by throwing an exception.
* After (finally) advice: Advice to be run regardless of the means by which a join point exits (normal or exceptional return).
* Around advice: Advice that surrounds a join point such as a method invocation. This is the most powerful kind of advice. Around advice can perform custom behavior before and after the method invocation. It is also responsible for choosing whether to proceed to the join point or to shortcut the advised method execution by returning its own return value or throwing an exception.

Around advice is the most general kind of advice. Since Spring AOP, like AspectJ, provides a full range of advice types, we recommend that you use the least powerful advice type that can implement the required behavior. For example, if you need only to update a cache with the return value of a method, you are better off implementing an after returning advice than an around advice, although an around advice can accomplish the same thing. Using the most specific advice type provides a simpler programming model with less potential for errors. For example, you do not need to invoke the `proceed()` method on the `JoinPoint` used for around advice, and, hence, you cannot fail to invoke it.

All advice parameters are statically typed so that you work with advice parameters of the appropriate type (e.g. the type of the return value from a method execution) rather than `Object` arrays.

The concept of join points matched by pointcuts is the key to AOP, which distinguishes it from older technologies offering only interception. Pointcuts enable advice to be targeted independently of the object-oriented hierarchy. For example, you can apply an around advice providing declarative transaction management to a set of methods that span multiple objects (such as all business operations in the service layer).

Spring AOP is implemented in pure Java. There is no need for a special compilation process. Spring AOP does not need to control the class loader hierarchy and is thus suitable for use in a servlet container or application server.

Spring AOP currently supports only method execution join points (advising the execution of methods on Spring beans).
Field interception is not implemented, although support for field interception could be added without breaking the core Spring AOP APIs. If you need to advise field access and update join points, consider a language such as AspectJ.

Spring AOP's approach to AOP differs from that of most other AOP frameworks. The aim is not to provide the most complete AOP implementation (although Spring AOP is quite capable). Rather, the aim is to provide a close integration between AOP implementation and Spring IoC, to help solve common problems in enterprise applications.

Thus, for example, the Spring Framework's AOP functionality is normally used in conjunction with the Spring IoC container. Aspects are configured by using normal bean definition syntax (although this allows powerful "auto-proxying" capabilities). This is a crucial difference from other AOP implementations. You cannot do some things easily or efficiently with Spring AOP, such as advise very fine-grained objects (typically, domain objects). AspectJ is the best choice in such cases. However, our experience is that Spring AOP provides an excellent solution to most problems in enterprise Java applications that are amenable to AOP.

Spring AOP never strives to compete with AspectJ to provide a comprehensive AOP solution. We believe that both proxy-based frameworks such as Spring AOP and full-blown frameworks such as AspectJ are valuable and that they are complementary, rather than in competition. Spring seamlessly integrates Spring AOP and IoC with AspectJ, to enable all uses of AOP within a consistent Spring-based application architecture. This integration does not affect the Spring AOP API or the AOP Alliance API. Spring AOP remains backward-compatible. See the following chapter for a discussion of the Spring AOP APIs.

Spring AOP defaults to using standard JDK dynamic proxies for AOP proxies. This enables any interface (or set of interfaces) to be proxied.

Spring AOP can also use CGLIB proxies. This is necessary to proxy classes rather than interfaces. By default, CGLIB is used if a business object does not implement an interface. As it is good practice to program to interfaces rather than classes, business classes normally implement one or more business interfaces. It is possible to force the use of CGLIB, in those (hopefully rare) cases where you need to advise a method that is not declared on an interface or where you need to pass a proxied object to a method as a concrete type.

It is important to grasp the fact that Spring AOP is proxy-based. See Understanding AOP Proxies for a thorough examination of exactly what this implementation detail actually means.

@AspectJ refers to a style of declaring aspects as regular Java classes annotated with annotations. The @AspectJ style was introduced by the AspectJ project as part of the AspectJ 5 release. Spring interprets the same annotations as AspectJ 5, using a library supplied by AspectJ for pointcut parsing and matching. The AOP runtime is still pure Spring AOP, though, and there is no dependency on the AspectJ compiler or weaver.

Using the AspectJ compiler and weaver enables use of the full AspectJ language and is discussed in Using AspectJ with Spring Applications.

To use @AspectJ aspects in a Spring configuration, you need to enable Spring support for configuring Spring AOP based on @AspectJ aspects and auto-proxying beans based on whether or not they are advised by those aspects.
By auto-proxying, we mean that, if Spring determines that a bean is advised by one or more aspects, it automatically generates a proxy for that bean to intercept method invocations and ensures that advice is run as needed.

The @AspectJ support can be enabled with XML- or Java-style configuration. In either case, you also need to ensure that AspectJ's `aspectjweaver.jar` library is on the classpath of your application (version 1.9 or later). This library is available in the `lib` directory of an AspectJ distribution or from the Maven Central repository.

## Enabling @AspectJ Support with Java Configuration

To enable @AspectJ support with Java `@Configuration`, add the `@EnableAspectJAutoProxy` annotation, as the following example shows:

```
@Configuration
@EnableAspectJAutoProxy
public class AppConfig {
}
```

```
@Configuration
@EnableAspectJAutoProxy
class AppConfig
```

## Enabling @AspectJ Support with XML Configuration

To enable @AspectJ support with XML-based configuration, use the `<aop:aspectj-autoproxy/>` element. This assumes that you use schema support as described in XML Schema-based configuration. See the AOP schema for how to import the tags in the `aop` namespace.

With @AspectJ support enabled, any bean defined in your application context with a class that is an @AspectJ aspect (has the `@Aspect` annotation) is automatically detected by Spring and used to configure Spring AOP. The next two examples show the minimal steps required for a not-very-useful aspect.

The first of the two examples shows a regular bean definition in the application context that points to a bean class that is annotated with `@Aspect`:

```
<bean id="myAspect" class="com.xyz.NotVeryUsefulAspect">
    <!-- configure properties of the aspect here -->
</bean>
```

The second of the two examples shows the `NotVeryUsefulAspect` class definition, which is annotated with `@Aspect`:

```
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class NotVeryUsefulAspect {
}
```

```
import org.aspectj.lang.annotation.Aspect

@Aspect
class NotVeryUsefulAspect
```

Aspects (classes annotated with `@Aspect`) can have methods and fields, the same as any other class. They can also contain pointcut, advice, and introduction (inter-type) declarations.

Autodetecting aspects through component scanning: You can register aspect classes as regular beans in your Spring XML configuration, via `@Bean` methods in `@Configuration` classes, or have Spring autodetect them through classpath scanning, the same as any other Spring-managed bean. However, note that the `@Aspect` annotation is not sufficient for autodetection in the classpath. For that purpose, you need to add a separate `@Component` annotation (or, alternatively, a custom stereotype annotation that qualifies, as per the rules of Spring's component scanner).

Advising aspects with other aspects? In Spring AOP, aspects themselves cannot be the targets of advice from other aspects. The `@Aspect` annotation on a class marks it as an aspect and, hence, excludes it from auto-proxying.

Pointcuts determine join points of interest and thus enable us to control when advice runs. Spring AOP only supports method execution join points for Spring beans, so you can think of a pointcut as matching the execution of methods on Spring beans. A pointcut declaration has two parts: a signature comprising a name and any parameters and a pointcut expression that determines exactly which method executions we are interested in.
In the @AspectJ annotation-style of AOP, a pointcut signature is provided by a regular method definition, and the pointcut expression is indicated by using the `@Pointcut` annotation (the method serving as the pointcut signature must have a `void` return type).

An example may help make this distinction between a pointcut signature and a pointcut expression clear. The following example defines a pointcut named `anyOldTransfer` that matches the execution of any method named `transfer`: `@Pointcut("execution(* transfer(..))") private void anyOldTransfer() {}`.

The pointcut expression that forms the value of the `@Pointcut` annotation is a regular AspectJ pointcut expression. For a full discussion of AspectJ's pointcut language, see the AspectJ Programming Guide (and, for extensions, the AspectJ 5 Developer's Notebook) or one of the books on AspectJ (such as Eclipse AspectJ, by Colyer et al., or AspectJ in Action, by <NAME>).

## Supported Pointcut Designators

Spring AOP supports the following AspectJ pointcut designators (PCD) for use in pointcut expressions:

* `execution`: For matching method execution join points. This is the primary pointcut designator to use when working with Spring AOP.
* `within`: Limits matching to join points within certain types (the execution of a method declared within a matching type when using Spring AOP).
* `this`: Limits matching to join points (the execution of methods when using Spring AOP) where the bean reference (Spring AOP proxy) is an instance of the given type.
* `target`: Limits matching to join points (the execution of methods when using Spring AOP) where the target object (application object being proxied) is an instance of the given type.
* `args`: Limits matching to join points (the execution of methods when using Spring AOP) where the arguments are instances of the given types.
* `@target`: Limits matching to join points (the execution of methods when using Spring AOP) where the class of the executing object has an annotation of the given type.
* `@args`: Limits matching to join points (the execution of methods when using Spring AOP) where the runtime type of the actual arguments passed have annotations of the given types.
* `@within`: Limits matching to join points within types that have the given annotation (the execution of methods declared in types with the given annotation when using Spring AOP).
* `@annotation`: Limits matching to join points where the subject of the join point (the method being run in Spring AOP) has the given annotation.

Because Spring AOP limits matching to only method execution join points, the preceding discussion of the pointcut designators gives a narrower definition than you can find in the AspectJ programming guide. In addition, AspectJ itself has type-based semantics and, at an execution join point, both `this` and `target` refer to the same object: the object executing the method. Spring AOP is a proxy-based system and differentiates between the proxy object itself (which is bound to `this`) and the target object behind the proxy (which is bound to `target`).

Spring AOP also supports an additional PCD named `bean`. This PCD lets you limit the matching of join points to a particular named Spring bean or to a set of named Spring beans (when using wildcards). The `bean` PCD has the following form: `bean(idOrNameOfBean)`.

The `idOrNameOfBean` token can be the name of any Spring bean. Limited wildcard support that uses the `*` character is provided, so, if you establish some naming conventions for your Spring beans, you can write a `bean` PCD expression to select them.
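As an illustration only (the aspect, the bean naming convention, and the logging body below are assumptions, not part of the reference material), a `bean` PCD that relies on a `*Repository` naming convention might look like this sketch:

```
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class RepositoryMonitor {

    // Hypothetical: matches the execution of any method on any Spring bean
    // whose bean name ends with "Repository" (e.g. a bean named "accountRepository")
    @Before("bean(*Repository)")
    public void logDataAccess(JoinPoint jp) {
        System.out.println("Data access: " + jp.getSignature());
    }
}
```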
As is the case with other pointcut designators, the `bean` PCD can be used with the `&&` (and), `||` (or), and `!` (negation) operators, too.

## Combining Pointcut Expressions

You can combine pointcut expressions by using `&&`, `||`, and `!`. You can also refer to pointcut expressions by name. The following example shows three pointcut expressions:

```
public class Pointcuts {

    @Pointcut("execution(public * *(..))")
    public void publicMethod() {} (1)

    @Pointcut("within(com.xyz.trading..*)")
    public void inTrading() {} (2)

    @Pointcut("publicMethod() && inTrading()")
    public void tradingOperation() {} (3)
}
```

1. `publicMethod` matches if a method execution join point represents the execution of any public method.
2. `inTrading` matches if a method execution is in the trading module.
3. `tradingOperation` matches if a method execution represents any public method in the trading module.

```
class Pointcuts {

    @Pointcut("execution(public * *(..))")
    fun publicMethod() {} (1)

    @Pointcut("within(com.xyz.trading..*)")
    fun inTrading() {} (2)

    @Pointcut("publicMethod() && inTrading()")
    fun tradingOperation() {} (3)
}
```

1. `publicMethod` matches if a method execution join point represents the execution of any public method.
2. `inTrading` matches if a method execution is in the trading module.
3. `tradingOperation` matches if a method execution represents any public method in the trading module.

It is a best practice to build more complex pointcut expressions out of smaller named pointcuts, as shown above. When referring to pointcuts by name, normal Java visibility rules apply (you can see `private` pointcuts in the same type, `protected` pointcuts in the hierarchy, `public` pointcuts anywhere, and so on). Visibility does not affect pointcut matching.

## Sharing Named Pointcut Definitions

When working with enterprise applications, developers often have the need to refer to modules of the application and particular sets of operations from within several aspects. We recommend defining a dedicated class that captures commonly used named pointcut expressions for this purpose. Such a class typically resembles the following `CommonPointcuts` example (though what you name the class is up to you):

```
import org.aspectj.lang.annotation.Pointcut;

public class CommonPointcuts {

    // ... named pointcut definitions, such as businessService(), go here ...
}
```

```
import org.aspectj.lang.annotation.Pointcut

class CommonPointcuts {

    // ... named pointcut definitions, such as businessService(), go here ...
}
```

You can refer to the pointcuts defined in such a class anywhere you need a pointcut expression by referencing the fully-qualified name of the class combined with the `@Pointcut` method's name. For example, to make the service layer transactional, you could write the following, which references the `com.xyz.CommonPointcuts.businessService()` named pointcut:

```
<aop:config>
    <aop:advisor
        pointcut="com.xyz.CommonPointcuts.businessService()"
        advice-ref="tx-advice"/>
</aop:config>
```

The `<aop:config>` and `<aop:advisor>` elements are discussed in Schema-based AOP Support. The transaction elements are discussed in Transaction Management.

Spring AOP users are likely to use the `execution` pointcut designator the most often. The format of an execution expression follows:

> execution(modifiers-pattern? ret-type-pattern declaring-type-pattern?name-pattern(param-pattern) throws-pattern?)

All parts except the returning type pattern (`ret-type-pattern` in the preceding snippet), the name pattern, and the parameters pattern are optional. The returning type pattern determines what the return type of the method must be in order for a join point to be matched. `*` is most frequently used as the returning type pattern. It matches any return type. A fully-qualified type name matches only when the method returns the given type. The name pattern matches the method name. You can use the `*` wildcard as all or part of a name pattern. If you specify a declaring type pattern, include a trailing `.` to join it to the name pattern component.
The parameters pattern is slightly more complex: `()` matches a method that takes no parameters, whereas `(..)` matches any number (zero or more) of parameters. The `(*)` pattern matches a method that takes one parameter of any type. `(*,String)` matches a method that takes two parameters. The first can be of any type, while the second must be a `String`. Consult the Language Semantics section of the AspectJ Programming Guide for more information.

The following examples show some common pointcut expressions:

* The execution of any public method:

> execution(public * *(..))

* The execution of any method with a name that begins with `set`:

> execution(* set*(..))

* The execution of any method defined by the `AccountService` interface:

> execution(* com.xyz.service.AccountService.*(..))

* The execution of any method defined in the `service` package:

> execution(* com.xyz.service.*.*(..))

* The execution of any method defined in the service package or one of its sub-packages:

> execution(* com.xyz.service..*.*(..))

* Any join point (method execution only in Spring AOP) where the proxy implements the `AccountService` interface:

> this(com.xyz.service.AccountService)

`this` is more commonly used in a binding form. See the section on Declaring Advice for how to make the proxy object available in the advice body.

* Any join point (method execution only in Spring AOP) where the target object implements the `AccountService` interface:

> target(com.xyz.service.AccountService)

`target` is more commonly used in a binding form. See the Declaring Advice section for how to make the target object available in the advice body.

* Any join point (method execution only in Spring AOP) that takes a single parameter and where the argument passed at runtime is `Serializable`:

> args(java.io.Serializable)

`args` is more commonly used in a binding form. See the Declaring Advice section for how to make the method arguments available in the advice body. Note that the pointcut given in this example is different from `execution(* *(java.io.Serializable))`. The args version matches if the argument passed at runtime is `Serializable`, and the execution version matches if the method signature declares a single parameter of type `Serializable`.

* Any join point (method execution only in Spring AOP) where the target object has a `@Transactional` annotation:

> @target(org.springframework.transaction.annotation.Transactional)

You can also use `@target` in a binding form. See the Declaring Advice section for how to make the annotation object available in the advice body.

* Any join point (method execution only in Spring AOP) which takes a single parameter, and where the runtime type of the argument passed has the `@Classified` annotation:

> @args(com.xyz.security.Classified)

You can also use `@args` in a binding form. See the Declaring Advice section for how to make the annotation object(s) available in the advice body.

* Any join point (method execution only in Spring AOP) on a Spring bean named `tradeService`:

> bean(tradeService)

* Any join point (method execution only in Spring AOP) on Spring beans having names that match the wildcard expression `*Service`:

> bean(*Service)

## Writing Good Pointcuts

During compilation, AspectJ processes pointcuts in order to optimize matching performance. Examining code and determining if each join point matches (statically or dynamically) a given pointcut is a costly process.
(A dynamic match means the match cannot be fully determined from static analysis and that a test is placed in the code to determine if there is an actual match when the code is running.)

On first encountering a pointcut declaration, AspectJ rewrites it into an optimal form for the matching process. What does this mean? Basically, pointcuts are rewritten in DNF (Disjunctive Normal Form) and the components of the pointcut are sorted such that those components that are cheaper to evaluate are checked first. This means you do not have to worry about understanding the performance of various pointcut designators and may supply them in any order in a pointcut declaration.

However, AspectJ can work only with what it is told. For optimal performance of matching, you should think about what you are trying to achieve and narrow the search space for matches as much as possible in the definition. The existing designators naturally fall into one of three groups: kinded, scoping, and contextual:

* Kinded designators select a particular kind of join point: `execution`, `get`, `set`, `call`, and `handler`.
* Scoping designators select a group of join points of interest (probably of many kinds): `within` and `withincode`.
* Contextual designators match (and optionally bind) based on context: `this`, `target`, and `@annotation`.

A well written pointcut should include at least the first two types (kinded and scoping). You can include the contextual designators to match based on join point context or bind that context for use in the advice. Supplying only a kinded designator or only a contextual designator works but could affect weaving performance (time and memory used), due to extra processing and analysis. Scoping designators are very fast to match, and using them means AspectJ can very quickly dismiss groups of join points that should not be further processed. A good pointcut should always include one if possible.

Advice is associated with a pointcut expression and runs before, after, or around method executions matched by the pointcut. The pointcut expression may be either an inline pointcut or a reference to a named pointcut.

## Before Advice

You can declare before advice in an aspect by using the `@Before` annotation. The following example uses an inline pointcut expression:

```
@Aspect
class BeforeExample {

    @Before("execution(* com.xyz.dao.*.*(..))")
    fun doAccessCheck() {
        // ...
    }
}
```

If we use a named pointcut, we can rewrite the preceding example as follows:

```
@Aspect
public class BeforeExample {

    @Before("com.xyz.CommonPointcuts.dataAccessOperation()")
    public void doAccessCheck() {
        // ...
    }
}
```

```
@Aspect
class BeforeExample {

    @Before("com.xyz.CommonPointcuts.dataAccessOperation()")
    fun doAccessCheck() {
        // ...
    }
}
```

## After Returning Advice

After returning advice runs when a matched method execution returns normally. You can declare it by using the `@AfterReturning` annotation:

```
@Aspect
class AfterReturningExample {

    @AfterReturning("execution(* com.xyz.dao.*.*(..))")
    fun doAccessCheck() {
        // ...
    }
}
```

You can have multiple advice declarations (and other members as well), all inside the same aspect. We show only a single advice declaration in these examples to focus the effect of each one.

Sometimes, you need access in the advice body to the actual value that was returned. You can use the form of `@AfterReturning` that binds the return value to get that access, as the following example shows:

```
@Aspect
public class AfterReturningExample {

    @AfterReturning(
        pointcut="execution(* com.xyz.dao.*.*(..))",
        returning="retVal")
    public void doAccessCheck(Object retVal) {
        // ...
    }
}
```

```
@Aspect
class AfterReturningExample {

    @AfterReturning(
        pointcut = "execution(* com.xyz.dao.*.*(..))",
        returning = "retVal")
    fun doAccessCheck(retVal: Any?) {
        // ...
    }
}
```

The name used in the `returning` attribute must correspond to the name of a parameter in the advice method. When a method execution returns, the return value is passed to the advice method as the corresponding argument value. A `returning` clause also restricts matching to only those method executions that return a value of the specified type (in this case, `Object`, which matches any return value).

Please note that it is not possible to return a totally different reference when using after returning advice.

## After Throwing Advice

After throwing advice runs when a matched method execution exits by throwing an exception. You can declare it by using the `@AfterThrowing` annotation, as the following example shows:

```
@Aspect
public class AfterThrowingExample {

    @AfterThrowing("execution(* com.xyz.dao.*.*(..))")
    public void doRecoveryActions() {
        // ...
    }
}
```

```
@Aspect
class AfterThrowingExample {

    @AfterThrowing("execution(* com.xyz.dao.*.*(..))")
    fun doRecoveryActions() {
        // ...
    }
}
```

Often, you want the advice to run only when exceptions of a given type are thrown, and you also often need access to the thrown exception in the advice body. You can use the `throwing` attribute to both restrict matching (if desired — use `Throwable` as the exception type otherwise) and bind the thrown exception to an advice parameter. The following example shows how to do so:

```
@Aspect
public class AfterThrowingExample {

    @AfterThrowing(
        pointcut="execution(* com.xyz.dao.*.*(..))",
        throwing="ex")
    public void doRecoveryActions(DataAccessException ex) {
        // ...
    }
}
```

```
@Aspect
class AfterThrowingExample {

    @AfterThrowing(
        pointcut = "execution(* com.xyz.dao.*.*(..))",
        throwing = "ex")
    fun doRecoveryActions(ex: DataAccessException) {
        // ...
    }
}
```

The name used in the `throwing` attribute must correspond to the name of a parameter in the advice method. When a method execution exits by throwing an exception, the exception is passed to the advice method as the corresponding argument value. A `throwing` clause also restricts matching to only those method executions that throw an exception of the specified type (`DataAccessException`, in this case).

## After (Finally) Advice

After (finally) advice runs when a matched method execution exits. It is declared by using the `@After` annotation. After advice must be prepared to handle both normal and exception return conditions. It is typically used for releasing resources and similar purposes. The following example shows how to use after finally advice:

```
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.After;

@Aspect
public class AfterFinallyExample {

    @After("execution(* com.xyz.dao.*.*(..))")
    public void doReleaseLock() {
        // ...
    }
}
```

```
import org.aspectj.lang.annotation.Aspect
import org.aspectj.lang.annotation.After

@Aspect
class AfterFinallyExample {

    @After("execution(* com.xyz.dao.*.*(..))")
    fun doReleaseLock() {
        // ...
    }
}
```

## Around Advice

Around advice is declared by annotating a method with the `@Around` annotation. The method should declare `Object` as its return type, and the first parameter of the method must be of type `ProceedingJoinPoint`. Within the body of the advice method, you must invoke `proceed()` on the `ProceedingJoinPoint` in order for the underlying method to run. Invoking `proceed()` without arguments will result in the caller's original arguments being supplied to the underlying method when it is invoked.
For advanced use cases, there is an overloaded variant of the `proceed()` method which accepts an array of arguments (`Object[]`). The values in the array will be used as the arguments to the underlying method when it is invoked.

The value returned by the around advice is the return value seen by the caller of the method. For example, a simple caching aspect could return a value from a cache if it has one or invoke `proceed()` (and return that value) if it does not. Note that `proceed` may be invoked once, many times, or not at all within the body of the around advice. All of these are legal.

If you declare the return type of your around advice method as `void`, `null` will always be returned to the caller, effectively ignoring the result of any invocation of `proceed()`. It is therefore recommended that an around advice method declare a return type of `Object`.

The following example shows how to use around advice:

```
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.ProceedingJoinPoint;

@Aspect
public class AroundExample {

    @Around("execution(* com.xyz..service.*.*(..))")
    public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
        // start stopwatch
        Object retVal = pjp.proceed();
        // stop stopwatch
        return retVal;
    }
}
```

```
import org.aspectj.lang.annotation.Aspect
import org.aspectj.lang.annotation.Around
import org.aspectj.lang.ProceedingJoinPoint

@Aspect
class AroundExample {

    @Around("execution(* com.xyz..service.*.*(..))")
    fun doBasicProfiling(pjp: ProceedingJoinPoint): Any? {
        // start stopwatch
        val retVal = pjp.proceed()
        // stop stopwatch
        return retVal
    }
}
```

## Advice Parameters

Spring offers fully typed advice, meaning that you declare the parameters you need in the advice signature (as we saw earlier for the returning and throwing examples) rather than work with `Object[]` arrays all the time. We see how to make argument and other contextual values available to the advice body later in this section. First, we take a look at how to write generic advice that can find out about the method the advice is currently advising.

### Access to the Current `JoinPoint`

Any advice method may declare, as its first parameter, a parameter of type `org.aspectj.lang.JoinPoint`. Note that around advice is required to declare a first parameter of type `ProceedingJoinPoint`, which is a subclass of `JoinPoint`.

The `JoinPoint` interface provides a number of useful methods:

* `getArgs()`: Returns the method arguments.
* `getThis()`: Returns the proxy object.
* `getTarget()`: Returns the target object.
* `getSignature()`: Returns a description of the method that is being advised.
* `toString()`: Prints a useful description of the method being advised.

See the javadoc for more detail.

### Passing Parameters to Advice

We have already seen how to bind the returned value or exception value (using after returning and after throwing advice). To make argument values available to the advice body, you can use the binding form of `args`. If you use a parameter name in place of a type name in an `args` expression, the value of the corresponding argument is passed as the parameter value when the advice is invoked.

An example should make this clearer. Suppose you want to advise the execution of DAO operations that take an `Account` object as the first parameter, and you need access to the account in the advice body. You could write the following:

```
@Before("execution(* com.xyz.dao.*.*(..)) && args(account,..)")
public void validateAccount(Account account) {
    // ...
}
```

```
@Before("execution(* com.xyz.dao.*.*(..)) && args(account,..)")
fun validateAccount(account: Account) {
    // ...
}
```

The `args(account,..)` part of the pointcut expression serves two purposes. First, it restricts matching to only those method executions where the method takes at least one parameter, and the argument passed to that parameter is an instance of `Account`. Second, it makes the actual `Account` object available to the advice through the `account` parameter.
Another way of writing this is to declare a pointcut that "provides" the `Account` object value when it matches a join point, and then refer to the named pointcut from the advice. This would look as follows:

```
@Pointcut("execution(* com.xyz.dao.*.*(..)) && args(account,..)")
private void accountDataAccessOperation(Account account) {}

@Before("accountDataAccessOperation(account)")
public void validateAccount(Account account) {
    // ...
}
```

```
@Pointcut("execution(* com.xyz.dao.*.*(..)) && args(account,..)")
private fun accountDataAccessOperation(account: Account) {
}

@Before("accountDataAccessOperation(account)")
fun validateAccount(account: Account) {
    // ...
}
```

See the AspectJ programming guide for more details.

The proxy object (`this`), target object (`target`), and annotations (`@within`, `@target`, `@annotation`, and `@args`) can all be bound in a similar fashion. The next set of examples shows how to match the execution of methods annotated with an `@Auditable` annotation and extract the audit code.

The following shows the definition of the `@Auditable` annotation:

```
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Auditable {
    AuditCode value();
}
```

```
@Retention(AnnotationRetention.RUNTIME)
@Target(AnnotationTarget.FUNCTION)
annotation class Auditable(val value: AuditCode)
```

The following shows the advice that matches the execution of `@Auditable` methods:

```
@Before("com.xyz.Pointcuts.publicMethod() && @annotation(auditable)") (1)
public void audit(Auditable auditable) {
    AuditCode code = auditable.value();
    // ...
}
```

```
@Before("com.xyz.Pointcuts.publicMethod() && @annotation(auditable)") (1)
fun audit(auditable: Auditable) {
    val code = auditable.value()
    // ...
}
```

### Advice Parameters and Generics

Spring AOP can handle generics used in class declarations and method parameters. Suppose you have a generic type like the following:

```
public interface Sample<T> {
    void sampleGenericMethod(T param);
    void sampleGenericCollectionMethod(Collection<T> param);
}
```

```
interface Sample<T> {
    fun sampleGenericMethod(param: T)
    fun sampleGenericCollectionMethod(param: Collection<T>)
}
```

You can restrict interception of method types to certain parameter types by tying the advice parameter to the parameter type for which you want to intercept the method:

```
@Before("execution(* ..Sample+.sampleGenericMethod(*)) && args(param)")
public void beforeSampleMethod(MyType param) {
    // Advice implementation
}
```

```
@Before("execution(* ..Sample+.sampleGenericMethod(*)) && args(param)")
fun beforeSampleMethod(param: MyType) {
    // Advice implementation
}
```

This approach does not work for generic collections. So you cannot define a pointcut as follows:

```
@Before("execution(* ..Sample+.sampleGenericCollectionMethod(*)) && args(param)")
public void beforeSampleMethod(Collection<MyType> param) {
    // Advice implementation
}
```

```
@Before("execution(* ..Sample+.sampleGenericCollectionMethod(*)) && args(param)")
fun beforeSampleMethod(param: Collection<MyType>) {
    // Advice implementation
}
```

To make this work, we would have to inspect every element of the collection, which is not reasonable, as we also cannot decide how to treat `null` values in general. To achieve something similar to this, you have to type the parameter to `Collection<?>` and manually check the type of the elements.
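To make that workaround concrete, here is a minimal sketch (the `MyType` element type and the skip-on-mismatch policy are illustrative assumptions, following the `Sample` examples above):

```
@Before("execution(* ..Sample+.sampleGenericCollectionMethod(*)) && args(param)")
public void beforeSampleMethod(Collection<?> param) {
    for (Object element : param) {
        // Manually check each element's type, skipping nulls and
        // elements of other types (an assumed policy for this sketch)
        if (element instanceof MyType) {
            MyType typed = (MyType) element;
            // ... advice implementation for typed
        }
    }
}
```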
### Determining Argument Names

Parameter binding in advice invocations relies on matching the names used in pointcut expressions to the parameter names declared in advice and pointcut method signatures.

This section uses the terms argument and parameter interchangeably, since AspectJ APIs refer to parameter names as argument names.

Spring AOP uses the following `ParameterNameDiscoverer` implementations to determine parameter names. Each discoverer will be given a chance to discover parameter names, and the first successful discoverer wins. If none of the registered discoverers is capable of determining parameter names, an exception will be thrown.

* `AspectJAnnotationParameterNameDiscoverer` : Uses parameter names that have been explicitly specified by the user via the `argNames` attribute in the corresponding advice or pointcut annotation. See Explicit Argument Names for details.
* `KotlinReflectionParameterNameDiscoverer` : Uses Kotlin reflection APIs to determine parameter names. This discoverer is only used if such APIs are present on the classpath.
* `StandardReflectionParameterNameDiscoverer` : Uses the standard `java.lang.reflect.Parameter` API to determine parameter names. Requires that code be compiled with the `-parameters` flag for `javac` . Recommended approach on Java 8+.
* `LocalVariableTableParameterNameDiscoverer` : Analyzes the local variable table available in the byte code of the advice class to determine parameter names from debug information. Requires that code be compiled with debug symbols ( `-g:vars` at a minimum). Deprecated as of Spring Framework 6.0 for removal in Spring Framework 6.1 in favor of compiling code with `-parameters` . Not supported in a GraalVM native image.
* `AspectJAdviceParameterNameDiscoverer` : Deduces parameter names from the pointcut expression, `returning` , and `throwing` clauses. See the javadoc for details on the algorithm used.

### Explicit Argument Names

@AspectJ advice and pointcut annotations have an optional `argNames` attribute that you can use to specify the argument names of the annotated method. The following example shows how to use the `argNames` attribute:

```
@Before(
	value = "com.xyz.Pointcuts.publicMethod() && target(bean) && @annotation(auditable)", (1)
	argNames = "bean,auditable") (2)
public void audit(Object bean, Auditable auditable) {
	AuditCode code = auditable.value();
	// ... use code and bean
}
```

```
@Before(
	value = "com.xyz.Pointcuts.publicMethod() && target(bean) && @annotation(auditable)", (1)
	argNames = "bean,auditable") (2)
fun audit(bean: Any, auditable: Auditable) {
	val code = auditable.value()
	// ... use code and bean
}
```

If the first parameter is of type `JoinPoint` , `ProceedingJoinPoint` , or `JoinPoint.StaticPart` , you can omit the name of the parameter from the value of the `argNames` attribute. For example, if you modify the preceding advice to receive the join point object, the `argNames` attribute does not need to include it:

```
@Before(
	value = "com.xyz.Pointcuts.publicMethod() && target(bean) && @annotation(auditable)", (1)
	argNames = "bean,auditable") (2)
public void audit(JoinPoint jp, Object bean, Auditable auditable) {
	AuditCode code = auditable.value();
	// ... use code, bean, and jp
}
```

```
@Before(
	value = "com.xyz.Pointcuts.publicMethod() && target(bean) && @annotation(auditable)", (1)
	argNames = "bean,auditable") (2)
fun audit(jp: JoinPoint, bean: Any, auditable: Auditable) {
	val code = auditable.value()
	// ... use code, bean, and jp
}
```

The special treatment given to the first parameter of type `JoinPoint` , `ProceedingJoinPoint` , or `JoinPoint.StaticPart` is particularly convenient for advice methods that do not collect any other join point context. In such situations, you may omit the `argNames` attribute. For example, the following advice does not need to declare the `argNames` attribute:

```
@Before("com.xyz.Pointcuts.publicMethod()") (1)
public void audit(JoinPoint jp) {
	// ... use jp
}
```

```
@Before("com.xyz.Pointcuts.publicMethod()") (1)
fun audit(jp: JoinPoint) {
	// ... use jp
}
```

### Proceeding with Arguments

We remarked earlier that we would describe how to write a `proceed` call with arguments that works consistently across Spring AOP and AspectJ. The solution is to ensure that the advice signature binds each of the method parameters in order. The following example shows how to do so:

```
@Around("execution(List<Account> find*(..)) && " +
		"com.xyz.CommonPointcuts.inDataAccessLayer() && " +
		"args(accountHolderNamePattern)") (1)
public Object preProcessQueryPattern(ProceedingJoinPoint pjp,
		String accountHolderNamePattern) throws Throwable {
	String newPattern = preProcess(accountHolderNamePattern);
	return pjp.proceed(new Object[] {newPattern});
}
```

```
@Around("execution(List<Account> find*(..)) && " +
		"com.xyz.CommonPointcuts.inDataAccessLayer() && " +
		"args(accountHolderNamePattern)") (1)
fun preProcessQueryPattern(pjp: ProceedingJoinPoint,
		accountHolderNamePattern: String): Any? {
	val newPattern = preProcess(accountHolderNamePattern)
	return pjp.proceed(arrayOf<Any>(newPattern))
}
```

In many cases, you do this binding anyway (as in the preceding example).

## Advice Ordering

What happens when multiple pieces of advice all want to run at the same join point? Spring AOP follows the same precedence rules as AspectJ to determine the order of advice execution. The highest precedence advice runs first "on the way in" (so, given two pieces of before advice, the one with highest precedence runs first). "On the way out" from a join point, the highest precedence advice runs last (so, given two pieces of after advice, the one with the highest precedence will run second).

When two pieces of advice defined in different aspects both need to run at the same join point, unless you specify otherwise, the order of execution is undefined. You can control the order of execution by specifying precedence. This is done in the normal Spring way by either implementing the `org.springframework.core.Ordered` interface in the aspect class or annotating it with the `@Order` annotation. Given two aspects, the aspect returning the lower value from `Ordered.getOrder()` (or the annotation value) has the higher precedence.

## Introductions

Introductions (known as inter-type declarations in AspectJ) enable an aspect to declare that advised objects implement a given interface, and to provide an implementation of that interface on behalf of those objects.

You can make an introduction by using the `@DeclareParents` annotation. This annotation is used to declare that matching types have a new parent (hence the name). For example, given an interface named `UsageTracked` and an implementation of that interface named `DefaultUsageTracked` , the following aspect declares that all implementors of service interfaces also implement the `UsageTracked` interface
(for example, for statistics via JMX):

```
@Aspect
public class UsageTracking {

	@DeclareParents(value="com.xyz.service.*+", defaultImpl=DefaultUsageTracked.class)
	public static UsageTracked mixin;

	@Before("execution(* com.xyz..service.*.*(..)) && this(usageTracked)")
	public void recordUsage(UsageTracked usageTracked) {
		usageTracked.incrementUseCount();
	}
}
```

```
@Aspect
class UsageTracking {

	companion object {
		@DeclareParents(value = "com.xyz.service.*+", defaultImpl = DefaultUsageTracked::class)
		lateinit var mixin: UsageTracked
	}

	@Before("execution(* com.xyz..service.*.*(..)) && this(usageTracked)")
	fun recordUsage(usageTracked: UsageTracked) {
		usageTracked.incrementUseCount()
	}
}
```

The interface to be implemented is determined by the type of the annotated field. The `value` attribute of the `@DeclareParents` annotation is an AspectJ type pattern. Any bean of a matching type implements the `UsageTracked` interface. Note that, in the before advice of the preceding example, service beans can be directly used as implementations of the `UsageTracked` interface. If you access a bean programmatically, you can retrieve it as a `UsageTracked` instance (for example, `context.getBean("myService", UsageTracked.class)` ).

## Aspect Instantiation Models

This is an advanced topic. If you are just starting out with AOP, you can safely skip it until later.

By default, there is a single instance of each aspect within the application context. AspectJ calls this the singleton instantiation model. It is possible to define aspects with alternate lifecycles. Spring supports AspectJ's `perthis` and `pertarget` instantiation models; `percflow` , `percflowbelow` , and `pertypewithin` are not currently supported.

You can declare a `perthis` aspect by specifying a `perthis` clause in the `@Aspect` annotation. Consider the following example:

```
@Aspect("perthis(execution(* com.xyz..service.*.*(..)))")
public class MyAspect {

	private int someState;

	@Before("execution(* com.xyz..service.*.*(..))")
	public void recordServiceUsage() {
		// ...
	}
}
```

```
@Aspect("perthis(execution(* com.xyz..service.*.*(..)))")
class MyAspect {

	private val someState: Int = 0

	@Before("execution(* com.xyz..service.*.*(..))")
	fun recordServiceUsage() {
		// ...
	}
}
```

In the preceding example, the effect of the `perthis` clause is that one aspect instance is created for each unique service object that performs a business service (each unique object bound to `this` at join points matched by the pointcut expression). The aspect instance is created the first time that a method is invoked on the service object. The aspect goes out of scope when the service object goes out of scope. Before the aspect instance is created, none of the advice within it runs. As soon as the aspect instance has been created, the advice declared within it runs at matched join points, but only when the service object is the one with which this aspect is associated. See the AspectJ Programming Guide for more information on `per` clauses.

The `pertarget` instantiation model works in exactly the same way as `perthis` , but it creates one aspect instance for each unique target object at matched join points.

## An AOP Example

Now that you have seen how all the constituent parts work, we can put them together to do something useful.

The execution of business services can sometimes fail due to concurrency issues (for example, a deadlock loser). If the operation is retried, it is likely to succeed on the next try. For business services where it is appropriate to retry in such conditions (idempotent operations that do not need to go back to the user for conflict resolution), we want to transparently retry the operation to avoid the client seeing a `PessimisticLockingFailureException` . The aspect looks as follows:

```
@Aspect
public class ConcurrentOperationExecutor implements Ordered {

	private static final int DEFAULT_MAX_RETRIES = 2;

	private int maxRetries = DEFAULT_MAX_RETRIES;
	private int order = 1;

	// setters for 'maxRetries' and 'order', plus getOrder(), omitted

	@Around("com.xyz.CommonPointcuts.businessService()")
	public Object doConcurrentOperation(ProceedingJoinPoint pjp) throws Throwable {
		int numAttempts = 0;
		PessimisticLockingFailureException lockFailureException;
		do {
			numAttempts++;
			try {
				return pjp.proceed();
			}
			catch (PessimisticLockingFailureException ex) {
				lockFailureException = ex;
			}
		} while (numAttempts <= this.maxRetries);
		throw lockFailureException;
	}
}
```

The main action happens in the `doConcurrentOperation` around advice. Notice that, for the moment, we apply the retry logic to each `businessService` .
We try to proceed, and if we fail with a `PessimisticLockingFailureException` , we try again, unless we have exhausted all of our retry attempts. In the corresponding Spring configuration, the aspect is declared as a regular bean (with its `maxRetries` and `order` properties set), and @AspectJ auto-proxying is enabled.

To refine the aspect so that it retries only idempotent operations, we might define an `Idempotent` marker annotation (a plain, runtime-retained annotation with no attributes). We can then use the annotation to annotate the implementation of service operations. The change to the aspect to retry only idempotent operations involves refining the pointcut expression so that only `@Idempotent` operations match, as follows:

```
@Around("execution(* com.xyz..service.*.*(..)) && " +
		"@annotation(com.xyz.service.Idempotent)")
public Object doConcurrentOperation(ProceedingJoinPoint pjp) throws Throwable {
	// ...
}
```

```
@Around("execution(* com.xyz..service.*.*(..)) && " +
		"@annotation(com.xyz.service.Idempotent)")
fun doConcurrentOperation(pjp: ProceedingJoinPoint): Any? {
	// ...
}
```

## Schema-based AOP Support

If you prefer an XML-based format, Spring also offers support for defining aspects using the `aop` namespace tags. The exact same pointcut expressions and advice kinds as when using the @AspectJ style are supported. Hence, in this section we focus on that syntax and refer the reader to the discussion in the previous section (@AspectJ support) for an understanding of writing pointcut expressions and the binding of advice parameters.

To use the aop namespace tags described in this section, you need to import the `spring-aop` schema, as described in XML Schema-based configuration. See the AOP schema for how to import the tags in the `aop` namespace.

Within your Spring configurations, all aspect and advisor elements must be placed within an `<aop:config>` element (you can have more than one `<aop:config>` element in an application context configuration). An `<aop:config>` element can contain pointcut, advisor, and aspect elements (note that these must be declared in that order).

## Declaring an Aspect

When you use the schema support, an aspect is a regular Java object defined as a bean in your Spring application context. The state and behavior are captured in the fields and methods of the object, and the pointcut and advice information are captured in the XML.

You can declare an aspect by using the `<aop:aspect>` element, and reference the backing bean by using the `ref` attribute, as the following example shows:

```
<aop:config>
	<aop:aspect id="myAspect" ref="aBean">
		...
	</aop:aspect>
</aop:config>

<bean id="aBean" class="...">
	...
</bean>
```

The bean that backs the aspect ( `aBean` in this case) can of course be configured and dependency injected just like any other Spring bean.

## Declaring a Pointcut

You can declare a named pointcut inside an `<aop:config>` element, letting the pointcut definition be shared across several aspects and advisors.

A pointcut that represents the execution of any business service in the service layer can be defined as follows:

```
<aop:config>
	<aop:pointcut id="businessService"
		expression="execution(* com.xyz.service.*.*(..))"/>
</aop:config>
```

Note that the pointcut expression itself uses the same AspectJ pointcut expression language as described in @AspectJ support. If you use the schema based declaration style, you can also refer to named pointcuts defined in `@Aspect` types within the pointcut expression.
Thus, another way of defining the above pointcut would be as follows:

```
<aop:config>
	<aop:pointcut id="businessService"
		expression="com.xyz.CommonPointcuts.businessService()"/>
</aop:config>
```

Declaring a pointcut inside an aspect is very similar to declaring a top-level pointcut: you nest the same `<aop:pointcut>` element inside the `<aop:aspect>` element instead of directly inside `<aop:config>` .

In much the same way as an @AspectJ aspect, pointcuts declared by using the schema based definition style can collect join point context. For example, the following pointcut collects the `this` object as the join point context and passes it to the advice:

```
<aop:pointcut id="businessService"
	expression="execution(* com.xyz.service.*.*(..)) &amp;&amp; this(service)"/>
```

The advice must be declared to receive the collected join point context by including parameters of the matching names, as follows:

```
public void monitor(Object service) {
	// ...
}
```

```
fun monitor(service: Any) {
	// ...
}
```

When combining pointcut sub-expressions, `&&` is awkward within an XML document, so you can use the `and` , `or` , and `not` keywords in place of `&&` , `||` , and `!` , respectively. For example, the previous pointcut can be better written as follows:

```
<aop:pointcut id="businessService"
	expression="execution(* com.xyz.service.*.*(..)) and this(service)"/>
```

Note that pointcuts defined in this way are referred to by their XML `id` and cannot be used as named pointcuts to form composite pointcuts. The named pointcut support in the schema-based definition style is thus more limited than that offered by the @AspectJ style.

## Declaring Advice

The schema-based AOP support uses the same five kinds of advice as the @AspectJ style, and they have exactly the same semantics.

Before advice runs before a matched method execution. It is declared inside an `<aop:aspect>` by using the `<aop:before>` element, as the following example shows:

```
<aop:before
	pointcut-ref="dataAccessOperation"
	method="doAccessCheck"/>
```

In the example above, `dataAccessOperation` is the `id` of a named pointcut defined at the top ( `<aop:config>` ) level (see Declaring a Pointcut).

As we noted in the discussion of the @AspectJ style, using named pointcuts can significantly improve the readability of your code. See Sharing Named Pointcut Definitions for details.

To define the pointcut inline instead, replace the `pointcut-ref` attribute with a `pointcut` attribute, as follows:

```
<aop:before
	pointcut="execution(* com.xyz.dao.*.*(..))"
	method="doAccessCheck"/>
```

The `method` attribute identifies a method ( `doAccessCheck` ) that provides the body of the advice. This method must be defined for the bean referenced by the aspect element that contains the advice. Before a data access operation is performed (a method execution join point matched by the pointcut expression), the `doAccessCheck` method on the aspect bean is invoked.

After returning advice runs when a matched method execution completes normally. It is declared inside an `<aop:aspect>` in the same way as before advice, by using the `<aop:after-returning>` element.

As in the @AspectJ style, you can get the return value within the advice body. To do so, use the `returning` attribute to specify the name of the parameter to which the return value should be passed (for example, `returning="retVal"` ).

The `doAccessCheck` method must declare a parameter named `retVal` . The type of this parameter constrains matching in the same way as described for `@AfterReturning` . For example, you can declare the method signature as follows:

```
public void doAccessCheck(Object retVal) {...
```
```
fun doAccessCheck(retVal: Any) {...
```

After throwing advice runs when a matched method execution exits by throwing an exception. It is declared inside an `<aop:aspect>` by using the `after-throwing` element, as the following example shows:

```
<aop:after-throwing
	pointcut="execution(* com.xyz.dao.*.*(..))"
	method="doRecoveryActions"/>
```

As in the @AspectJ style, you can get the thrown exception within the advice body. To do so, use the `throwing` attribute to specify the name of the parameter to which the exception should be passed, as the following example shows:

```
<aop:after-throwing
	pointcut="execution(* com.xyz.dao.*.*(..))"
	throwing="dataAccessEx"
	method="doRecoveryActions"/>
```

The `doRecoveryActions` method must declare a parameter named `dataAccessEx` . The type of this parameter constrains matching in the same way as described for `@AfterThrowing` . For example, the method signature may be declared as follows:

```
public void doRecoveryActions(DataAccessException dataAccessEx) {...
```

```
fun doRecoveryActions(dataAccessEx: DataAccessException) {...
```

After (finally) advice runs no matter how a matched method execution exits. You can declare it by using the `after` element, as the following example shows:

```
<aop:aspect id="afterFinallyExample" ref="aBean">
	<aop:after
		pointcut="execution(* com.xyz.dao.*.*(..))"
		method="doReleaseLock"/>
	...
</aop:aspect>
```

You can declare around advice by using the `aop:around` element. The advice method should declare `Object` as its return type, and the first parameter of the method must be of type `ProceedingJoinPoint` . Within the body of the advice method, you must invoke `proceed()` on the `ProceedingJoinPoint` in order for the underlying method to run. Invoking `proceed()` without arguments will result in the caller's original arguments being supplied to the underlying method when it is invoked. For advanced use cases, there is an overloaded variant of the `proceed()` method which accepts an array of arguments ( `Object[]` ). The values in the array will be used as the arguments to the underlying method when it is invoked. See Around Advice for notes on calling `proceed` with an `Object[]` .

The following example shows how to declare around advice in XML:

```
<aop:aspect id="aroundExample" ref="aBean">
	<aop:around
		pointcut="execution(* com.xyz.service.*.*(..))"
		method="doBasicProfiling"/>
	...
</aop:aspect>
```

The implementation of the `doBasicProfiling` advice can be exactly the same as in the @AspectJ example shown earlier (minus the annotation, of course).

### Advice Parameters

The schema-based declaration style supports fully typed advice in the same way as described for the @AspectJ support, by matching pointcut parameters by name against advice method parameters. See Advice Parameters for details. If you wish to explicitly specify argument names for the advice methods (not relying on the detection strategies previously described), you can do so by using the `arg-names` attribute of the advice element, which is treated in the same manner as the `argNames` attribute in an advice annotation (as described in Determining Argument Names). The following example shows how to specify an argument name in XML:

```
<aop:before
	pointcut="com.xyz.Pointcuts.publicMethod() and @annotation(auditable)" (1)
	method="audit"
	arg-names="auditable" />
```

The `arg-names` attribute accepts a comma-delimited list of parameter names.
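For completeness, the following is a minimal sketch of the backing bean that `method="audit"` above could point at. The `AuditAspectBean` class name is an assumption; the parameter name must match the `arg-names` value:

```
public class AuditAspectBean {

	// Referenced from the XML via method="audit"; the 'auditable'
	// parameter is bound by name through arg-names="auditable".
	public void audit(Auditable auditable) {
		AuditCode code = auditable.value();
		// ... use the audit code
	}
}
```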
The following slightly more involved example of the XSD-based approach shows some around advice used in conjunction with a number of strongly typed parameters:

```
package com.xyz.service;

public interface PersonService {

	Person getPerson(String personName, int age);
}

public class DefaultPersonService implements PersonService {

	public Person getPerson(String name, int age) {
		return new Person(name, age);
	}
}
```

```
package com.xyz.service

interface PersonService {

	fun getPerson(personName: String, age: Int): Person
}

class DefaultPersonService : PersonService {

	override fun getPerson(name: String, age: Int): Person {
		return Person(name, age)
	}
}
```

Next up is the aspect. Notice the fact that the `profile(..)` method accepts a number of strongly-typed parameters, the first of which happens to be the join point used to proceed with the method call. The presence of this parameter is an indication that the `profile(..)` is to be used as `around` advice, as the following example shows:

```
import org.aspectj.lang.ProceedingJoinPoint;
import org.springframework.util.StopWatch;

public class SimpleProfiler {

	public Object profile(ProceedingJoinPoint call, String name, int age) throws Throwable {
		StopWatch clock = new StopWatch("Profiling for '" + name + "' and '" + age + "'");
		try {
			clock.start(call.toShortString());
			return call.proceed();
		} finally {
			clock.stop();
			System.out.println(clock.prettyPrint());
		}
	}
}
```

```
import org.aspectj.lang.ProceedingJoinPoint
import org.springframework.util.StopWatch

class SimpleProfiler {

	fun profile(call: ProceedingJoinPoint, name: String, age: Int): Any? {
		val clock = StopWatch("Profiling for '$name' and '$age'")
		try {
			clock.start(call.toShortString())
			return call.proceed()
		} finally {
			clock.stop()
			println(clock.prettyPrint())
		}
	}
}
```

Finally, the following example XML configuration effects the execution of the preceding advice for a particular join point:

```
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:aop="http://www.springframework.org/schema/aop"
	xsi:schemaLocation="
		http://www.springframework.org/schema/beans https://www.springframework.org/schema/beans/spring-beans.xsd
		http://www.springframework.org/schema/aop https://www.springframework.org/schema/aop/spring-aop.xsd">

	<!-- this is the object that will be proxied by Spring's AOP infrastructure -->
	<bean id="personService" class="com.xyz.service.DefaultPersonService"/>

	<!-- this is the actual advice itself -->
	<bean id="profiler" class="com.xyz.SimpleProfiler"/>

	<aop:config>
		<aop:aspect ref="profiler">
			<aop:pointcut id="theExecutionOfSomePersonServiceMethod"
				expression="execution(* com.xyz.service.PersonService.getPerson(String,int))
				and args(name, age)"/>

			<aop:around pointcut-ref="theExecutionOfSomePersonServiceMethod"
				method="profile"/>
		</aop:aspect>
	</aop:config>

</beans>
```

Consider the following driver script:

```
public class Boot {

	public static void main(String[] args) {
		ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");
		PersonService person = ctx.getBean(PersonService.class);
		person.getPerson("Pengo", 12);
	}
}
```

```
fun main() {
	val ctx = ClassPathXmlApplicationContext("beans.xml")
	val person = ctx.getBean(PersonService::class.java)
	person.getPerson("Pengo", 12)
}
```

With such a `Boot` class, we would get output similar to the following on standard output:

```
StopWatch 'Profiling for 'Pengo' and '12'': running time (millis) = 0
-----------------------------------------
ms     %     Task name
-----------------------------------------
00000  ?  execution(getFoo)
```

### Advice Ordering

When multiple pieces of advice need to run at the same join point (executing method) the ordering rules are as described in Advice Ordering. The precedence between aspects is determined via the `order` attribute in the `<aop:aspect>` element, by adding the `@Order` annotation to the bean that backs the aspect, or by having the bean implement the `Ordered` interface.

## Introductions

Introductions (known as inter-type declarations in AspectJ) let an aspect declare that advised objects implement a given interface and provide an implementation of that interface on behalf of those objects.

You can make an introduction by using the `aop:declare-parents` element inside an `aop:aspect` . You can use the `aop:declare-parents` element to declare that matching types have a new parent (hence the name). For example, given an interface named `UsageTracked` and an implementation of that interface named `DefaultUsageTracked` , the following aspect declares that all implementors of service interfaces also implement the `UsageTracked` interface (for example, in order to expose statistics through JMX):

```
<aop:aspect id="usageTrackerAspect" ref="usageTracking">

	<aop:declare-parents
		types-matching="com.xyz.service.*+"
		implement-interface="com.xyz.service.tracking.UsageTracked"
		default-impl="com.xyz.service.tracking.DefaultUsageTracked"/>

	<aop:before
		pointcut="execution(* com.xyz..service.*.*(..)) and this(usageTracked)"
		method="recordUsage"/>

</aop:aspect>
```

The class that backs the `usageTracking` bean would then contain the following method:

```
public void recordUsage(UsageTracked usageTracked) {
	usageTracked.incrementUseCount();
}
```

```
fun recordUsage(usageTracked: UsageTracked) {
	usageTracked.incrementUseCount()
}
```

The interface to be implemented is determined by the `implement-interface` attribute. The value of the `types-matching` attribute is an AspectJ type pattern. Any bean of a matching type implements the `UsageTracked` interface. Note that, in the before advice of the preceding example, service beans can be directly used as implementations of the `UsageTracked` interface. To access a bean programmatically, you could retrieve it as a `UsageTracked` instance (for example, `context.getBean("myService", UsageTracked.class)` ).

## Aspect Instantiation Models

The only supported instantiation model for schema-defined aspects is the singleton model. Other instantiation models may be supported in future releases.

## Advisors

The concept of "advisors" comes from the AOP support defined in Spring and does not have a direct equivalent in AspectJ. An advisor is like a small self-contained aspect that has a single piece of advice. The advice itself is represented by a bean and must implement one of the advice interfaces described in Advice Types in Spring. Advisors can take advantage of AspectJ pointcut expressions.

Spring supports the advisor concept with the `<aop:advisor>` element. You most commonly see it used in conjunction with transactional advice, which also has its own namespace support in Spring. The following example shows an advisor:

```
<aop:advisor
	pointcut-ref="businessService"
	advice-ref="tx-advice"/>
```

As well as the `pointcut-ref` attribute used in the preceding example, you can also use the `pointcut` attribute to define a pointcut expression inline.

To define the precedence of an advisor so that the advice can participate in ordering, use the `order` attribute to define the `Ordered` value of the advisor.
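To illustrate the advice-bean side of an advisor, the following is a minimal sketch of a bean implementing one of Spring's advice interfaces, `MethodBeforeAdvice` ; the class name and log message are assumptions:

```
import java.lang.reflect.Method;

import org.springframework.aop.MethodBeforeAdvice;

public class SimpleBeforeAdvice implements MethodBeforeAdvice {

	// invoked before any method matched by the advisor's pointcut
	@Override
	public void before(Method method, Object[] args, Object target) {
		System.out.println("About to invoke: " + method.getName());
	}
}
```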
## An AOP Schema Example

This section shows how the concurrent locking failure retry example from An AOP Example looks when rewritten with the schema support.

The execution of business services can sometimes fail due to concurrency issues (for example, a deadlock loser). If the operation is retried, it is likely to succeed on the next try. For business services where it is appropriate to retry in such conditions (idempotent operations that do not need to go back to the user for conflict resolution), we want to transparently retry the operation to avoid the client seeing a `PessimisticLockingFailureException` . The aspect is the familiar `ConcurrentOperationExecutor` :

```
public class ConcurrentOperationExecutor implements Ordered {
	// same implementation as in the @AspectJ example, minus the annotations
}
```

The main action happens in the `doConcurrentOperation` around advice method. We try to proceed. If we fail with a `PessimisticLockingFailureException` , we try again, unless we have exhausted all of our retry attempts.

This class is identical to the one used in the @AspectJ example, but with the annotations removed.

The corresponding Spring configuration is as follows:

```
<aop:config>
	<aop:aspect id="concurrentOperationRetry" ref="concurrentOperationExecutor">

		<aop:pointcut id="idempotentOperation"
			expression="execution(* com.xyz.service.*.*(..))"/>

		<aop:around
			pointcut-ref="idempotentOperation"
			method="doConcurrentOperation"/>

	</aop:aspect>
</aop:config>
```

Notice that, for the time being, we assume that all business services are idempotent. If this is not the case, we can refine the aspect so that it retries only genuinely idempotent operations, by introducing an `Idempotent` annotation and using the annotation to annotate the implementation of service operations.

The change to the aspect to retry only idempotent operations involves refining the pointcut expression so that only `@Idempotent` operations match, as follows:

```
<aop:pointcut id="idempotentOperation"
	expression="execution(* com.xyz.service.*.*(..)) and
	@annotation(com.xyz.service.Idempotent)"/>
```

## Choosing which AOP Declaration Style to Use

Once you have decided that an aspect is the best approach for implementing a given requirement, how do you decide between using Spring AOP or AspectJ and between the Aspect language (code) style, the @AspectJ annotation style, or the Spring XML style? These decisions are influenced by a number of factors including application requirements, development tools, and team familiarity with AOP.

## Spring AOP or Full AspectJ?

Use the simplest thing that can work. Spring AOP is simpler than using full AspectJ, as there is no requirement to introduce the AspectJ compiler / weaver into your development and build processes. If you only need to advise the execution of operations on Spring beans, Spring AOP is the right choice. If you need to advise objects not managed by the Spring container (such as domain objects, typically), you need to use AspectJ. You also need to use AspectJ if you wish to advise join points other than simple method executions (for example, field get or set join points and so on).

When you use AspectJ, you have the choice of the AspectJ language syntax (also known as the "code style") or the @AspectJ annotation style. If aspects play a large role in your design, and you are able to use the AspectJ Development Tools (AJDT) plugin for Eclipse, the AspectJ language syntax is the preferred option. It is cleaner and simpler because the language was purposefully designed for writing aspects.
If you do not use Eclipse or have only a few aspects that do not play a major role in your application, you may want to consider using the @AspectJ style, sticking with regular Java compilation in your IDE, and adding an aspect weaving phase to your build script.

## @AspectJ or XML for Spring AOP?

If you have chosen to use Spring AOP, you have a choice of @AspectJ or XML style. There are various tradeoffs to consider.

The XML style may be most familiar to existing Spring users, and it is backed by genuine POJOs. When using AOP as a tool to configure enterprise services, XML can be a good choice (a good test is whether you consider the pointcut expression to be a part of your configuration that you might want to change independently). With the XML style, it is arguably clearer from your configuration which aspects are present in the system.

The XML style has two disadvantages. First, it does not fully encapsulate the implementation of the requirement it addresses in a single place. The DRY principle says that there should be a single, unambiguous, authoritative representation of any piece of knowledge within a system. When using the XML style, the knowledge of how a requirement is implemented is split across the declaration of the backing bean class and the XML in the configuration file. When you use the @AspectJ style, this information is encapsulated in a single module: the aspect. Second, the XML style is slightly more limited in what it can express than the @AspectJ style: Only the "singleton" aspect instantiation model is supported, and it is not possible to combine named pointcuts declared in XML. For example, in the @AspectJ style you can write something like the following:

```
@Pointcut("execution(* get*())")
public void propertyAccess() {}

@Pointcut("execution(com.xyz.Account+ *(..))")
public void operationReturningAnAccount() {}

@Pointcut("propertyAccess() && operationReturningAnAccount()")
public void accountPropertyAccess() {}
```

```
@Pointcut("execution(* get*())")
fun propertyAccess() {}

@Pointcut("execution(com.xyz.Account+ *(..))")
fun operationReturningAnAccount() {}

@Pointcut("propertyAccess() && operationReturningAnAccount()")
fun accountPropertyAccess() {}
```

In the XML style you can declare the first two pointcuts:

```
<aop:pointcut id="propertyAccess"
	expression="execution(* get*())"/>

<aop:pointcut id="operationReturningAnAccount"
	expression="execution(com.xyz.Account+ *(..))"/>
```

The downside of the XML approach is that you cannot define the `accountPropertyAccess` pointcut by combining these definitions.

The @AspectJ style supports additional instantiation models and richer pointcut composition. It has the advantage of keeping the aspect as a modular unit. It also has the advantage that the @AspectJ aspects can be understood (and thus consumed) both by Spring AOP and by AspectJ. So, if you later decide you need the capabilities of AspectJ to implement additional requirements, you can easily migrate to a classic AspectJ setup. In general, the Spring team prefers the @AspectJ style for custom aspects beyond simple configuration of enterprise services.

It is perfectly possible to mix @AspectJ style aspects by using the auto-proxying support, schema-defined `<aop:aspect>` aspects, `<aop:advisor>` declared advisors, and even proxies and interceptors in other styles in the same configuration. All of these are implemented by using the same underlying support mechanism and can co-exist without any difficulty.
## Proxying Mechanisms

Spring AOP uses either JDK dynamic proxies or CGLIB to create the proxy for a given target object. JDK dynamic proxies are built into the JDK, whereas CGLIB is a common open-source class definition library (repackaged into `spring-core` ).

If the target object to be proxied implements at least one interface, a JDK dynamic proxy is used. All of the interfaces implemented by the target type are proxied. If the target object does not implement any interfaces, a CGLIB proxy is created.

If you want to force the use of CGLIB proxying (for example, to proxy every method defined for the target object, not only those implemented by its interfaces), you can do so. However, you should consider the following issues:

* With CGLIB, `final` methods cannot be advised, as they cannot be overridden in runtime-generated subclasses.
* As of Spring 4.0, the constructor of your proxied object is NOT called twice anymore, since the CGLIB proxy instance is created through Objenesis. Only if your JVM does not allow for constructor bypassing might you see double invocations and corresponding debug log entries from Spring's AOP support.

To force the use of CGLIB proxies, set the value of the `proxy-target-class` attribute of the `<aop:config>` element to true, as follows:

```
<aop:config proxy-target-class="true">
	<!-- other beans defined here... -->
</aop:config>
```

To force CGLIB proxying when you use the @AspectJ auto-proxy support, set the `proxy-target-class` attribute of the `<aop:aspectj-autoproxy>` element to `true` , as follows:

```
<aop:aspectj-autoproxy proxy-target-class="true"/>
```

## Understanding AOP Proxies

Spring AOP is proxy-based. It is vitally important that you grasp the semantics of what that last statement actually means before you write your own aspects or use any of the Spring AOP-based aspects supplied with the Spring Framework.

Consider first the scenario where you have a plain-vanilla, un-proxied, nothing-special-about-it, straight object reference, as the following code snippet shows:

```
public class SimplePojo implements Pojo {

	public void foo() {
		// this next method invocation is a direct call on the 'this' reference
		this.bar();
	}

	public void bar() {
		// some logic...
	}
}
```

```
class SimplePojo : Pojo {

	override fun foo() {
		// this next method invocation is a direct call on the 'this' reference
		this.bar()
	}

	override fun bar() {
		// some logic...
	}
}
```

If you invoke a method on an object reference, the method is invoked directly on that object reference, as the following listing shows:

```
public class Main {

	public static void main(String[] args) {
		Pojo pojo = new SimplePojo();
		// this is a direct method call on the 'pojo' reference
		pojo.foo();
	}
}
```

```
fun main() {
	val pojo = SimplePojo()
	// this is a direct method call on the 'pojo' reference
	pojo.foo()
}
```

Things change slightly when the reference that client code has is a proxy. Consider the following code snippet:

```
fun main() {
	val factory = ProxyFactory(SimplePojo())
	factory.addInterface(Pojo::class.java)
	factory.addAdvice(RetryAdvice())

	val pojo = factory.proxy as Pojo
	// this is a method call on the proxy!
	pojo.foo()
}
```

The key thing to understand here is that the client code inside the `main(..)` method of the `Main` class has a reference to the proxy. This means that method calls on that object reference are calls on the proxy. As a result, the proxy can delegate to all of the interceptors (advice) that are relevant to that particular method call. However, once the call has finally reached the target object (the `SimplePojo` reference in this case), any method calls that it may make on itself, such as `this.bar()` or `this.foo()` , are going to be invoked against the `this` reference, and not the proxy. This has important implications.
It means that self-invocation is not going to result in the advice associated with a method invocation getting a chance to run. Okay, so what is to be done about this? The best approach (the term "best" is used loosely here) is to refactor your code such that the self-invocation does not happen. This does entail some work on your part, but it is the best, least-invasive approach. The next approach is absolutely horrendous, and we hesitate to point it out, precisely because it is so horrendous. You can (painful as it is to us) totally tie the logic within your class to Spring AOP, as the following example shows:

```
public void foo() {
	// this works, but... gah!
	((Pojo) AopContext.currentProxy()).bar();
}
```

```
fun foo() {
	// this works, but... gah!
	(AopContext.currentProxy() as Pojo).bar()
}
```

This totally couples your code to Spring AOP, and it makes the class itself aware of the fact that it is being used in an AOP context, which flies in the face of AOP. It also requires some additional configuration when the proxy is being created, as the following example shows:

```
fun main() {
	val factory = ProxyFactory(SimplePojo())
	factory.addInterface(Pojo::class.java)
	factory.addAdvice(RetryAdvice())
	factory.isExposeProxy = true

	val pojo = factory.proxy as Pojo
	// this is a method call on the proxy!
	pojo.foo()
}
```

Finally, it must be noted that AspectJ does not have this self-invocation issue because it is not a proxy-based AOP framework.

## Programmatic Creation of @AspectJ Proxies

In addition to declaring aspects in your configuration by using either `<aop:config>` or `<aop:aspectj-autoproxy>` , it is also possible to programmatically create proxies that advise target objects. For the full details of Spring's AOP API, see the next chapter. Here, we want to focus on the ability to automatically create proxies by using @AspectJ aspects.

You can use the `org.springframework.aop.aspectj.annotation.AspectJProxyFactory` class to create a proxy for a target object that is advised by one or more @AspectJ aspects. The basic usage for this class is very simple, as the following example shows:

```
// create a factory that can generate a proxy for the given target object
AspectJProxyFactory factory = new AspectJProxyFactory(targetObject);

// add an aspect, the class must be an @AspectJ aspect
// you can call this as many times as you need with different aspects
factory.addAspect(SecurityManager.class);

// now get the proxy object...
MyInterfaceType proxy = factory.getProxy();
```

```
// create a factory that can generate a proxy for the given target object
val factory = AspectJProxyFactory(targetObject)

// add an aspect, the class must be an @AspectJ aspect
// you can call this as many times as you need with different aspects
factory.addAspect(SecurityManager::class.java)

// now get the proxy object...
val proxy = factory.getProxy<MyInterfaceType>()
```

See the javadoc for more information.

## Using AspectJ with Spring Applications

Everything we have covered so far in this chapter is pure Spring AOP. In this section, we look at how you can use the AspectJ compiler or weaver instead of or in addition to Spring AOP if your needs go beyond the facilities offered by Spring AOP alone.

Spring ships with a small AspectJ aspect library, which is available stand-alone in your distribution as `spring-aspects.jar` . You need to add this to your classpath in order to use the aspects in it. Using AspectJ to Dependency Inject Domain Objects with Spring and Other Spring aspects for AspectJ discuss the content of this library and how you can use it. Configuring AspectJ Aspects by Using Spring IoC discusses how to dependency inject AspectJ aspects that are woven using the AspectJ compiler. Finally, Load-time Weaving with AspectJ in the Spring Framework provides an introduction to load-time weaving for Spring applications that use AspectJ.

## Using AspectJ to Dependency Inject Domain Objects with Spring

The Spring container instantiates and configures beans defined in your application context.
It is also possible to ask a bean factory to configure a pre-existing object, given the name of a bean definition that contains the configuration to be applied. `spring-aspects.jar` contains an annotation-driven aspect that exploits this capability to allow dependency injection of any object. The support is intended to be used for objects created outside of the control of any container. Domain objects often fall into this category because they are often created programmatically with the `new` operator or by an ORM tool as a result of a database query.

The `@Configurable` annotation marks a class as being eligible for Spring-driven configuration. In the simplest case, you can use it purely as a marker annotation, declaring `@Configurable` (with no attributes) on the class. When used as a marker annotation in this way, Spring configures new instances of the annotated type (an `Account` domain class, say) by using a bean definition (typically prototype-scoped) with the same name as the fully-qualified type name ( `com.xyz.domain.Account` ). Since the default name for a bean is the fully-qualified name of its type, a convenient way to declare the prototype definition is to omit the `id` attribute, as the following example shows:

```
<bean class="com.xyz.domain.Account" scope="prototype">
	<property name="fundsTransferService" ref="fundsTransferService"/>
</bean>
```

If you want to explicitly specify the name of the prototype bean definition to use, you can do so directly in the annotation (for example, `@Configurable("account")` ). Spring then looks for a bean definition named `account` and uses that as the definition to configure new `Account` instances.

You can also use autowiring to avoid having to specify a dedicated bean definition at all. To have Spring apply autowiring, use the `autowire` property of the `@Configurable` annotation. You can specify either `@Configurable(autowire=Autowire.BY_TYPE)` or `@Configurable(autowire=Autowire.BY_NAME)` for autowiring by type or by name, respectively. As an alternative, it is preferable to specify explicit, annotation-driven dependency injection for your `@Configurable` beans through `@Autowired` or `@Inject` at the field or method level (see Annotation-based Container Configuration for further details).

Finally, you can enable Spring dependency checking for the object references in the newly created and configured object by using the `dependencyCheck` attribute (for example, `@Configurable(autowire=Autowire.BY_NAME,dependencyCheck=true)` ). If this attribute is set to `true` , Spring validates after configuration that all properties (which are not primitives or collections) have been set.

Note that using the annotation on its own does nothing. It is the `AnnotationBeanConfigurerAspect` in `spring-aspects.jar` that acts on the presence of the annotation. In essence, the aspect says, "after returning from the initialization of a new object of a type annotated with `@Configurable` , configure the newly created object using Spring in accordance with the properties of the annotation". In this context, "initialization" refers to newly instantiated objects (for example, objects instantiated with the `new` operator) as well as to `Serializable` objects that are undergoing deserialization (for example, through readResolve()).

For this to work, the annotated types must be woven with the AspectJ weaver. You can either use a build-time Ant or Maven task to do this (see, for example, the AspectJ Development Environment Guide) or load-time weaving (see Load-time Weaving with AspectJ in the Spring Framework).
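Putting these pieces together, a domain class might look like the following minimal sketch. The field and service names mirror the bean definition above; the exact shape is an assumption rather than the canonical example:

```
package com.xyz.domain;

import org.springframework.beans.factory.annotation.Configurable;

@Configurable("account") // configured from the prototype bean definition named 'account'
public class Account {

	// set by the AnnotationBeanConfigurerAspect after 'new Account()' runs,
	// provided the class has been woven by the AspectJ weaver
	private FundsTransferService fundsTransferService;

	public void setFundsTransferService(FundsTransferService fundsTransferService) {
		this.fundsTransferService = fundsTransferService;
	}

	// ... domain logic that uses fundsTransferService
}
```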
The `AnnotationBeanConfigurerAspect` itself needs to be configured by Spring (in order to obtain a reference to the bean factory that is to be used to configure new objects). If you use Java-based configuration, you can add `@EnableSpringConfigured` to any `@Configuration` class. If you prefer XML based configuration, the Spring `context` namespace defines a convenient `context:spring-configured` element, which you can use as follows:

```
<context:spring-configured/>
```

Instances of `@Configurable` objects created before the aspect has been configured result in a message being issued to the debug log and no configuration of the object taking place. An example might be a bean in the Spring configuration that creates domain objects when it is initialized by Spring. In this case, you can use the `depends-on` bean attribute to manually specify that the bean depends on the configuration aspect. The following example shows how to use the `depends-on` attribute:

```
<bean id="myService" class="com.xyz.service.MyService"
		depends-on="org.springframework.beans.factory.aspectj.AnnotationBeanConfigurerAspect">
	<!-- ... -->
</bean>
```

Do not activate `@Configurable` processing through the bean configurer aspect unless you really mean to rely on its semantics at runtime. In particular, make sure that you do not use `@Configurable` on bean classes that are registered as regular Spring beans with the container: doing so results in double initialization, once through the container and once through the aspect.

### Unit Testing `@Configurable` Objects

One of the goals of the `@Configurable` support is to enable independent unit testing of domain objects without the difficulties associated with hard-coded lookups. If `@Configurable` types have not been woven by AspectJ, the annotation has no effect during unit testing. You can set mock or stub property references in the object under test and proceed as normal. If `@Configurable` types have been woven by AspectJ, you can still unit test outside of the container as normal, but you see a warning message each time that you construct a `@Configurable` object indicating that it has not been configured by Spring.

### Working with Multiple Application Contexts

The `AnnotationBeanConfigurerAspect` that is used to implement the `@Configurable` support is an AspectJ singleton aspect. The scope of a singleton aspect is the same as the scope of `static` members: There is one aspect instance per `ClassLoader` that defines the type. This means that, if you define multiple application contexts within the same `ClassLoader` hierarchy, you need to consider where to define the `AnnotationBeanConfigurerAspect` bean and where to place `spring-aspects.jar` on the classpath.

Consider a typical Spring web application configuration that has a shared parent application context that defines common business services, everything needed to support those services, and one child application context for each servlet (which contains definitions particular to that servlet). All of these contexts co-exist within the same `ClassLoader` hierarchy, and so the aspect can hold a reference to only one of them. In this case, we recommend defining the `AnnotationBeanConfigurerAspect` bean in the shared (parent) application context, since that context defines the services that you are likely to want to inject into domain objects. A consequence is that you cannot configure domain objects with references to beans defined in the child (servlet-specific) contexts by using the @Configurable mechanism (which is probably not something you want to do anyway).

When deploying multiple web applications within the same container, ensure that each web application loads the types in `spring-aspects.jar` by using its own `ClassLoader` (for example, by placing `spring-aspects.jar` in `WEB-INF/lib` ). If `spring-aspects.jar` is added only to the container-wide classpath (and hence loaded by the shared parent `ClassLoader` ), all web applications share the same aspect instance (which is probably not what you want).
## Other Spring aspects for AspectJ

In addition to the `@Configurable` aspect, `spring-aspects.jar` contains an AspectJ aspect that you can use to drive Spring's transaction management for types and methods annotated with the `@Transactional` annotation. This is primarily intended for users who want to use the Spring Framework's transaction support outside of the Spring container.

The aspect that interprets `@Transactional` annotations is the `AnnotationTransactionAspect` . When you use this aspect, you must annotate the implementation class (or methods within that class or both), not the interface (if any) that the class implements. AspectJ follows Java's rule that annotations on interfaces are not inherited.

A `@Transactional` annotation on a class specifies the default transaction semantics for the execution of any public operation in the class.

A `@Transactional` annotation on a method within the class overrides the default transaction semantics given by the class annotation (if present). Methods of any visibility may be annotated, including private methods. Annotating non-public methods directly is the only way to get transaction demarcation for the execution of such methods.

Since Spring Framework 4.2, `spring-aspects` provides a similar aspect that offers the same features for the standard `jakarta.transaction.Transactional` annotation. See the `JtaAnnotationTransactionAspect` for more details.

For AspectJ programmers who want to use the Spring configuration and transaction management support but do not want to (or cannot) use annotations, `spring-aspects.jar` also contains `abstract` aspects you can extend to provide your own pointcut definitions. See the sources for the `AbstractBeanConfigurerAspect` and `AbstractTransactionAspect` aspects for more information. As an example, the following excerpt shows how you could write an aspect to configure all instances of objects defined in the domain model by using prototype bean definitions that match the fully qualified class names:

```
public aspect DomainObjectConfiguration extends AbstractBeanConfigurerAspect {

	public DomainObjectConfiguration() {
		setBeanWiringInfoResolver(new ClassNameBeanWiringInfoResolver());
	}

	// the creation of a new bean (any object in the domain model)
	protected pointcut beanCreation(Object beanInstance) :
		initialization(new(..)) &&
		CommonPointcuts.inDomainModel() &&
		this(beanInstance);
}
```

## Configuring AspectJ Aspects by Using Spring IoC

When you use AspectJ aspects with Spring applications, it is natural to both want and expect to be able to configure such aspects with Spring. The AspectJ runtime itself is responsible for aspect creation, and the means of configuring the AspectJ-created aspects through Spring depends on the AspectJ instantiation model (the `per-xxx` clause) used by the aspect.

The majority of AspectJ aspects are singleton aspects. Configuration of these aspects is easy. You can create a bean definition that references the aspect type as normal and include the `factory-method="aspectOf"` bean attribute. This ensures that Spring obtains the aspect instance by asking AspectJ for it rather than trying to create an instance itself. The following example shows how to use the `factory-method="aspectOf"` attribute:

```
<bean id="profiler" class="com.xyz.profiler.Profiler"
		factory-method="aspectOf"> (1)

	<property name="profilingStrategy" ref="jamonProfilingStrategy"/>
</bean>
```

Non-singleton aspects are harder to configure. However, it is possible to do so by creating prototype bean definitions and using the `@Configurable` support from `spring-aspects.jar` to configure the aspect instances once they have been created by the AspectJ runtime.
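If you prefer Java-based configuration, the equivalent of `factory-method="aspectOf"` can be sketched as follows. Code-style AspectJ aspects expose a static `aspectOf()` method; the `Profiler` and `ProfilingStrategy` types are carried over from the XML example above and are assumptions here:

```
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ProfilerConfig {

	@Bean
	public Profiler profiler(ProfilingStrategy jamonProfilingStrategy) {
		// ask AspectJ for the woven singleton aspect instance instead of
		// letting Spring instantiate the aspect class itself
		Profiler profiler = Profiler.aspectOf();
		profiler.setProfilingStrategy(jamonProfilingStrategy);
		return profiler;
	}
}
```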
If you have some @AspectJ aspects that you want to weave with AspectJ (for example, using load-time weaving for domain model types) and other @AspectJ aspects that you want to use with Spring AOP, and these aspects are all configured in Spring, you need to tell the Spring AOP @AspectJ auto-proxying support which exact subset of the @AspectJ aspects defined in the configuration should be used for auto-proxying. You can do this by using one or more `<include/>` elements inside the `<aop:aspectj-autoproxy>` declaration. Each `<include/>` element specifies a name pattern, and only beans with names matched by at least one of the patterns are used for Spring AOP auto-proxy configuration. The following example shows how to use `<include/>` elements:

```
<aop:aspectj-autoproxy>
	<aop:include name="thisBean"/>
	<aop:include name="thatBean"/>
</aop:aspectj-autoproxy>
```

Do not be misled by the name of the `<aop:aspectj-autoproxy>` element: using it results in the creation of Spring AOP proxies. The @AspectJ style of aspect declaration is being used here, but the AspectJ runtime is not involved.

## Load-time Weaving with AspectJ in the Spring Framework

Load-time weaving (LTW) refers to the process of weaving AspectJ aspects into an application's class files as they are being loaded into the Java virtual machine (JVM). The focus of this section is on configuring and using LTW in the specific context of the Spring Framework. This section is not a general introduction to LTW. For full details on the specifics of LTW and configuring LTW with only AspectJ (with Spring not being involved at all), see the LTW section of the AspectJ Development Environment Guide.

The value that the Spring Framework brings to AspectJ LTW is in enabling much finer-grained control over the weaving process. 'Vanilla' AspectJ LTW is effected by using a Java (5+) agent, which is switched on by specifying a VM argument when starting up a JVM. It is, thus, a JVM-wide setting, which may be fine in some situations but is often a little too coarse. Spring-enabled LTW lets you switch on LTW on a per- `ClassLoader` basis, which is more fine-grained and which can make more sense in a 'single-JVM-multiple-application' environment (such as is found in a typical application server environment).

Further, in certain environments, this support enables load-time weaving without making any modifications to the application server's launch script that is needed to add `-javaagent:path/to/aspectjweaver.jar` or (as we describe later in this section) `-javaagent:path/to/spring-instrument.jar` . Developers configure the application context to enable load-time weaving instead of relying on administrators who typically are in charge of the deployment configuration, such as the launch script.

Now that the sales pitch is over, let us first walk through a quick example of AspectJ LTW that uses Spring, followed by detailed specifics about elements introduced in the example. For a complete example, see the Petclinic sample application.

### A First Example

Assume that you are an application developer who has been tasked with diagnosing the cause of some performance problems in a system. Rather than break out a profiling tool, we are going to switch on a simple profiling aspect that lets us quickly get some performance metrics. We can then apply a finer-grained profiling tool to that specific area immediately afterwards.

The example presented here uses XML configuration. You can also configure and use @AspectJ with Java configuration. Specifically, you can use the `@EnableLoadTimeWeaving` annotation as an alternative (see below for details).

The following example shows the profiling aspect, which is not fancy.
It is a time-based profiler that uses the @AspectJ-style of aspect declaration:

```
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.util.StopWatch;
import org.springframework.core.annotation.Order;

@Aspect
public class ProfilingAspect {

	@Around("methodsToBeProfiled()")
	public Object profile(ProceedingJoinPoint pjp) throws Throwable {
		StopWatch sw = new StopWatch(getClass().getSimpleName());
		try {
			sw.start(pjp.getSignature().getName());
			return pjp.proceed();
		} finally {
			sw.stop();
			System.out.println(sw.prettyPrint());
		}
	}

	@Pointcut("execution(public * com.xyz..*.*(..))")
	public void methodsToBeProfiled(){}
}
```

```
import org.aspectj.lang.ProceedingJoinPoint
import org.aspectj.lang.annotation.Aspect
import org.aspectj.lang.annotation.Around
import org.aspectj.lang.annotation.Pointcut
import org.springframework.util.StopWatch
import org.springframework.core.annotation.Order

@Aspect
class ProfilingAspect {

	@Around("methodsToBeProfiled()")
	fun profile(pjp: ProceedingJoinPoint): Any? {
		val sw = StopWatch(javaClass.simpleName)
		try {
			sw.start(pjp.getSignature().getName())
			return pjp.proceed()
		} finally {
			sw.stop()
			println(sw.prettyPrint())
		}
	}

	@Pointcut("execution(public * com.xyz..*.*(..))")
	fun methodsToBeProfiled() {
	}
}
```

We also need to create a `META-INF/aop.xml` file, to inform the AspectJ weaver that we want to weave our `ProfilingAspect` into our classes. This file convention, namely the presence of a file (or files) on the Java classpath called `META-INF/aop.xml` is standard AspectJ. The following example shows the `aop.xml` file:

```
<!DOCTYPE aspectj PUBLIC "-//AspectJ//DTD//EN" "https://www.eclipse.org/aspectj/dtd/aspectj.dtd">
<aspectj>

	<weaver>
		<!-- only weave classes in our application-specific packages -->
		<include within="com.xyz.*"/>
	</weaver>

	<aspects>
		<!-- weave in just this aspect -->
		<aspect name="com.xyz.ProfilingAspect"/>
	</aspects>

</aspectj>
```

Now we can move on to the Spring-specific portion of the configuration. We need to configure a `LoadTimeWeaver` (explained later). This load-time weaver is the essential component responsible for weaving the aspect configuration in one or more `META-INF/aop.xml` files into the classes in your application. The good thing is that it does not require a lot of configuration (there are some more options that you can specify, but these are detailed later), as can be seen in the following example:

```
<beans>

	<!-- a service object; we will be profiling its methods -->
	<bean id="entitlementCalculationService"
			class="com.xyz.StubEntitlementCalculationService"/>

	<!-- this switches on the load-time weaving -->
	<context:load-time-weaver/>

</beans>
```

Now that all the required artifacts (the aspect, the `META-INF/aop.xml` file, and the Spring configuration) are in place, we can create the following driver class with a `main(..)` method to demonstrate the LTW in action:

```
public class Main {

	public static void main(String[] args) {
		ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");

		EntitlementCalculationService service =
			ctx.getBean(EntitlementCalculationService.class);

		// the profiling aspect is 'woven' into this method execution
		service.calculateEntitlement();
	}
}
```

```
fun main() {
	val ctx = ClassPathXmlApplicationContext("beans.xml")

	val service = ctx.getBean(EntitlementCalculationService::class.java)

	// the profiling aspect is 'woven' into this method execution
	service.calculateEntitlement()
}
```

We have one last thing to do. The introduction to this section did say that one could switch on LTW selectively on a per- `ClassLoader` basis with Spring, and this is true. However, for this example, we use a Java agent (supplied with Spring) to switch on LTW.
We use the following command to run the `Main` class shown earlier:

> java -javaagent:C:/projects/xyz/lib/spring-instrument.jar com.xyz.Main

The `-javaagent` is a flag for specifying and enabling agents to instrument programs that run on the JVM. The Spring Framework ships with such an agent, the `InstrumentationSavingAgent`, which is packaged in the `spring-instrument.jar` that was supplied as the value of the `-javaagent` argument in the preceding example.

The output from the execution of the `Main` program looks something like the next example. (We have introduced a `Thread.sleep(..)` statement into the `calculateEntitlement()` implementation so that the profiler actually captures something other than 0 milliseconds; the `01234` milliseconds is not an overhead introduced by the AOP.) The following listing shows the output we got when we ran our profiler:

> Calculating entitlement
>
> StopWatch 'ProfilingAspect': running time (millis) = 1234
> ------ ----- ----------------------------
> ms     %     Task name
> ------ ----- ----------------------------
> 01234  100%  calculateEntitlement

Since this LTW is effected by using full-blown AspectJ, we are not limited only to advising Spring beans. The following slight variation on the `Main` program yields the same result:

```
public final class Main {

    public static void main(String[] args) {
        new ClassPathXmlApplicationContext("beans.xml");
        EntitlementCalculationService service = new StubEntitlementCalculationService();
        // the profiling aspect will be 'woven' into this method execution
        service.calculateEntitlement();
    }
}
```

```
fun main(args: Array<String>) {
    ClassPathXmlApplicationContext("beans.xml")
    val service = StubEntitlementCalculationService()
    // the profiling aspect will be 'woven' into this method execution
    service.calculateEntitlement()
}
```

Notice how, in the preceding program, we bootstrap the Spring container and then create a new instance of the `StubEntitlementCalculationService` totally outside the context of Spring. The profiling advice still gets woven in.

Admittedly, the example is simplistic. However, the basics of the LTW support in Spring have all been introduced in the earlier example, and the rest of this section explains the "why" behind each bit of configuration and usage in detail.

### Aspects

The aspects that you use in LTW have to be AspectJ aspects. You can write them in either the AspectJ language itself, or you can write your aspects in the @AspectJ-style. Your aspects are then both valid AspectJ and Spring AOP aspects. Furthermore, the compiled aspect classes need to be available on the classpath.

### 'META-INF/aop.xml'

The AspectJ LTW infrastructure is configured by using one or more `META-INF/aop.xml` files that are on the Java classpath (either directly or, more typically, in jar files). The structure and contents of this file are detailed in the LTW part of the AspectJ reference documentation. Because the `aop.xml` file is 100% AspectJ, we do not describe it further here.

### Required libraries (JARs)

At minimum, you need the following libraries to use the Spring Framework’s support for AspectJ LTW:

* `spring-aop.jar`
* `aspectjweaver.jar`

If you use the Spring-provided agent to enable instrumentation, you also need:

* `spring-instrument.jar`

### Spring Configuration

The key component in Spring’s LTW support is the `LoadTimeWeaver` interface (in the `org.springframework.instrument.classloading` package), and the numerous implementations of it that ship with the Spring distribution. A `LoadTimeWeaver` is responsible for adding one or more `java.lang.instrument.ClassFileTransformers` to a `ClassLoader` at runtime, which opens the door to all manner of interesting applications, one of which happens to be the LTW of aspects.
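If the `ClassFileTransformer` contract is unfamiliar, the following minimal sketch (a hypothetical, illustration-only transformer, not part of Spring) shows the shape of the standard `java.lang.instrument` callback that a `LoadTimeWeaver` registers with a `ClassLoader`:

```
import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

// Hypothetical transformer: the JVM (or a Spring LoadTimeWeaver) invokes
// transform(..) with the raw bytecode of every class as it is loaded.
// Returning null means "leave the class unchanged"; a weaver would instead
// return a modified byte array with the aspect code woven in.
public class TracingClassFileTransformer implements ClassFileTransformer {

    @Override
    public byte[] transform(ClassLoader loader, String className,
            Class<?> classBeingRedefined, ProtectionDomain protectionDomain,
            byte[] classfileBuffer) {

        if (className != null && className.startsWith("com/xyz/")) {
            // class names arrive in internal form, e.g. "com/xyz/Main"
            System.out.println("Loading class: " + className);
        }
        return null; // no transformation applied in this sketch
    }
}
```

AspectJ’s own weaving transformer does exactly this, except that it rewrites `classfileBuffer` according to the aspects declared in `META-INF/aop.xml`.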
If you are unfamiliar with the idea of runtime class file transformation, see the javadoc API documentation for the `java.lang.instrument` package before continuing.

Configuring a `LoadTimeWeaver` for a particular `ApplicationContext` can be as easy as adding one line. (Note that you almost certainly need to use an `ApplicationContext` as your Spring container: typically, a `BeanFactory` is not enough because the LTW support uses `BeanFactoryPostProcessors`.)

To enable the Spring Framework’s LTW support, you need to configure a `LoadTimeWeaver`, which typically is done by using the `@EnableLoadTimeWeaving` annotation. Alternatively, if you prefer XML-based configuration, use the `<context:load-time-weaver/>` element. Note that the element is defined in the `context` namespace. The following example shows how to use it:

```
<context:load-time-weaver/>
```

The preceding configuration automatically defines and registers a number of LTW-specific infrastructure beans, such as a `LoadTimeWeaver` and an `AspectJWeavingEnabler`, for you. The default `LoadTimeWeaver` is the `DefaultContextLoadTimeWeaver` class, which attempts to decorate an automatically detected `LoadTimeWeaver`. The exact type of `LoadTimeWeaver` that is "automatically detected" is dependent upon your runtime environment. The following table summarizes various `LoadTimeWeaver` implementations:

| Runtime Environment | `LoadTimeWeaver` implementation |
| --- | --- |
| Running in Apache Tomcat | `TomcatLoadTimeWeaver` |
| Running in GlassFish (limited to EAR deployments) | `GlassFishLoadTimeWeaver` |
| JVM started with Spring `InstrumentationSavingAgent` (`java -javaagent:path/to/spring-instrument.jar`) | `InstrumentationLoadTimeWeaver` |
| Fallback, expecting the underlying `ClassLoader` to follow common conventions (namely an `addTransformer` method and optionally a `getThrowawayClassLoader` method) | `ReflectiveLoadTimeWeaver` |

Note that the table lists only the `LoadTimeWeavers` that are autodetected when you use the `DefaultContextLoadTimeWeaver`. You can specify exactly which `LoadTimeWeaver` implementation to use.

To specify a specific `LoadTimeWeaver` with Java configuration, implement the `LoadTimeWeavingConfigurer` interface and override the `getLoadTimeWeaver()` method. The following example specifies a `ReflectiveLoadTimeWeaver`:

```
@Configuration
@EnableLoadTimeWeaving
public class AppConfig implements LoadTimeWeavingConfigurer {

    @Override
    public LoadTimeWeaver getLoadTimeWeaver() {
        return new ReflectiveLoadTimeWeaver();
    }
}
```

```
@Configuration
@EnableLoadTimeWeaving
class AppConfig : LoadTimeWeavingConfigurer {

    override fun getLoadTimeWeaver(): LoadTimeWeaver {
        return ReflectiveLoadTimeWeaver()
    }
}
```

If you use XML-based configuration, you can specify the fully qualified class name as the value of the `weaver-class` attribute on the `<context:load-time-weaver/>` element. Again, the following example specifies a `ReflectiveLoadTimeWeaver`:

```
<context:load-time-weaver
        weaver-class="org.springframework.instrument.classloading.ReflectiveLoadTimeWeaver"/>
```

The `LoadTimeWeaver` that is defined and registered by the configuration can be later retrieved from the Spring container by using the well known name, `loadTimeWeaver`. Remember that the `LoadTimeWeaver` exists only as a mechanism for Spring’s LTW infrastructure to add one or more `ClassFileTransformers`. The actual `ClassFileTransformer` that does the LTW is the `ClassPreProcessorAgentAdapter` class (from the `org.aspectj.weaver.loadtime` package). See the class-level javadoc of that class for further details, because the specifics of how the weaving is actually effected are beyond the scope of this document.

There is one final attribute of the configuration left to discuss: the `aspectjWeaving` attribute (or `aspectj-weaving` if you use XML). This attribute controls whether LTW is enabled or not. It accepts one of three possible values, with the default value being `autodetect` if the attribute is not present.
The following table summarizes the three possible values:

| Annotation Value | XML Value | Explanation |
| --- | --- | --- |
| `ENABLED` | `on` | AspectJ weaving is on, and aspects are woven at load time as appropriate. |
| `DISABLED` | `off` | LTW is off. No aspect is woven at load time. |
| `AUTODETECT` | `autodetect` | If the Spring LTW infrastructure can find at least one `META-INF/aop.xml` file, AspectJ weaving is on. Otherwise, it is off. This is the default value. |

### Environment-specific Configuration

This last section contains any additional settings and configuration that you need when you use Spring’s LTW support in environments such as application servers and web containers.

# Tomcat, JBoss, WildFly

Tomcat and JBoss/WildFly provide a general app `ClassLoader` that is capable of local instrumentation. Spring’s native LTW may leverage those `ClassLoader` implementations to provide AspectJ weaving. You can simply enable load-time weaving, as described earlier. Specifically, you do not need to modify the JVM launch script to add `-javaagent:path/to/spring-instrument.jar`.

Note that on JBoss, you may need to disable the app server scanning to prevent it from loading the classes before the application actually starts. A quick workaround is to add to your artifact a file named `WEB-INF/jboss-scanning.xml` with the following content:

```
<scanning xmlns="urn:jboss:scanning:1.0"/>
```

# Generic Java Applications

When class instrumentation is required in environments that are not supported by specific `LoadTimeWeaver` implementations, a JVM agent is the general solution. For such cases, Spring provides `InstrumentationLoadTimeWeaver`, which requires a Spring-specific (but very general) JVM agent, `spring-instrument.jar`, autodetected by common setups. To use it, you must start the virtual machine with the Spring agent by supplying the following JVM options:

> -javaagent:/path/to/spring-instrument.jar

Note that this requires modification of the JVM launch script, which may prevent you from using this in application server environments (depending on your server and your operational policies). That said, for one-app-per-JVM deployments such as standalone Spring Boot applications, you typically control the entire JVM setup in any case.

More information on AspectJ can be found on the AspectJ website. Eclipse AspectJ by <NAME> et al. (Addison-Wesley, 2005) provides a comprehensive introduction and reference for the AspectJ language. AspectJ in Action, Second Edition by <NAME> (Manning, 2009) comes highly recommended. The focus of the book is on AspectJ, but a lot of general AOP themes are explored (in some depth).

The previous chapter described Spring’s support for AOP with @AspectJ and schema-based aspect definitions. In this chapter, we discuss the lower-level Spring AOP APIs. For common applications, we recommend the use of Spring AOP with AspectJ pointcuts, as described in the previous chapter.

This section describes how Spring handles the crucial pointcut concept.

## Concepts

Spring’s pointcut model enables pointcut reuse independent of advice types. You can target different advice with the same pointcut.

The `org.springframework.aop.Pointcut` interface is the central interface, used to target advice to particular classes and methods. The complete interface follows:

```
public interface Pointcut {

    ClassFilter getClassFilter();

    MethodMatcher getMethodMatcher();
}
```

Splitting the `Pointcut` interface into two parts allows reuse of class and method matching parts and fine-grained composition operations (such as performing a “union” with another method matcher).

The `ClassFilter` interface is used to restrict the pointcut to a given set of target classes. If the `matches()` method always returns true, all target classes are matched. The following listing shows the `ClassFilter` interface definition:

```
public interface ClassFilter {

    boolean matches(Class clazz);
}
```

The `MethodMatcher` interface is normally more important.
The complete interface follows:

```
public interface MethodMatcher {

    boolean matches(Method m, Class<?> targetClass);

    boolean isRuntime();

    boolean matches(Method m, Class<?> targetClass, Object... args);
}
```

The `matches(Method, Class)` method is used to test whether this pointcut ever matches a given method on a target class. This evaluation can be performed when an AOP proxy is created to avoid the need for a test on every method invocation. If the two-argument `matches` method returns `true` for a given method, and the `isRuntime()` method for the `MethodMatcher` returns `true`, the three-argument `matches` method is invoked on every method invocation. This lets a pointcut look at the arguments passed to the method invocation immediately before the target advice starts.

Most `MethodMatcher` implementations are static, meaning that their `isRuntime()` method returns `false`. In this case, the three-argument `matches` method is never invoked.

If possible, try to make pointcuts static, allowing the AOP framework to cache the results of pointcut evaluation when an AOP proxy is created.

## Operations on Pointcuts

Spring supports operations (notably, union and intersection) on pointcuts. Union means the methods that either pointcut matches. Intersection means the methods that both pointcuts match. Union is usually more useful. You can compose pointcuts by using the static methods in the `org.springframework.aop.support.Pointcuts` class or by using the `ComposablePointcut` class in the same package. However, using AspectJ pointcut expressions is usually a simpler approach.

## AspectJ Expression Pointcuts

Since Spring 2.0, the most important type of pointcut used by Spring is `org.springframework.aop.aspectj.AspectJExpressionPointcut`. This is a pointcut that uses an AspectJ-supplied library to parse an AspectJ pointcut expression string. See the previous chapter for a discussion of supported AspectJ pointcut primitives.

## Convenience Pointcut Implementations

Spring provides several convenient pointcut implementations. You can use some of them directly; others are intended to be subclassed in application-specific pointcuts.

### Static Pointcuts

Static pointcuts are based on the method and the target class and cannot take into account the method’s arguments. Static pointcuts suffice (and are best) for most usages. Spring can evaluate a static pointcut only once, when a method is first invoked. After that, there is no need to evaluate the pointcut again with each method invocation.

The rest of this section describes some of the static pointcut implementations that are included with Spring.

# Regular Expression Pointcuts

One obvious way to specify static pointcuts is regular expressions. Several AOP frameworks besides Spring make this possible. `org.springframework.aop.support.JdkRegexpMethodPointcut` is a generic regular expression pointcut that uses the regular expression support in the JDK. With the `JdkRegexpMethodPointcut` class, you can provide a list of pattern strings. If any of these is a match, the pointcut evaluates to `true`. (As a consequence, the resulting pointcut is effectively the union of the specified patterns.)
The following example shows how to use `JdkRegexpMethodPointcut`:

```
<bean id="settersAndAbsquatulatePointcut"
        class="org.springframework.aop.support.JdkRegexpMethodPointcut">
    <property name="patterns">
        <list>
            <value>.*set.*</value>
            <value>.*absquatulate</value>
        </list>
    </property>
</bean>
```

Spring provides a convenience class named `RegexpMethodPointcutAdvisor`, which lets us also reference an `Advice` (remember that an `Advice` can be an interceptor, before advice, throws advice, and others). Behind the scenes, Spring uses a `JdkRegexpMethodPointcut`. Using `RegexpMethodPointcutAdvisor` simplifies wiring, as the one bean encapsulates both pointcut and advice, as the following example shows:

```
<bean id="settersAndAbsquatulateAdvisor"
        class="org.springframework.aop.support.RegexpMethodPointcutAdvisor">
    <property name="advice">
        <ref bean="beanNameOfAopAllianceInterceptor"/>
    </property>
    <property name="patterns">
        <list>
            <value>.*set.*</value>
            <value>.*absquatulate</value>
        </list>
    </property>
</bean>
```

You can use `RegexpMethodPointcutAdvisor` with any `Advice` type.

### Dynamic Pointcuts

Dynamic pointcuts are costlier to evaluate than static pointcuts. They take into account method arguments as well as static information. This means that they must be evaluated with every method invocation and that the result cannot be cached, as arguments will vary. The main example is the control flow pointcut.

# Control Flow Pointcuts

Spring control flow pointcuts are conceptually similar to AspectJ `cflow` pointcuts, although less powerful. (There is currently no way to specify that a pointcut runs below a join point matched by another pointcut.) A control flow pointcut matches the current call stack. For example, it might fire if the join point was invoked by a method in the `com.mycompany.web` package or by the `SomeCaller` class. Control flow pointcuts are specified by using the `org.springframework.aop.support.ControlFlowPointcut` class.

Control flow pointcuts are significantly more expensive to evaluate at runtime than even other dynamic pointcuts. In Java 1.4, the cost is about five times that of other dynamic pointcuts.

## Pointcut Superclasses

Spring provides useful pointcut superclasses to help you to implement your own pointcuts. Because static pointcuts are most useful, you should probably subclass `StaticMethodMatcherPointcut`. This requires implementing only one abstract method (although you can override other methods to customize behavior). The following example shows how to subclass `StaticMethodMatcherPointcut`:

```
class TestStaticPointcut extends StaticMethodMatcherPointcut {

    public boolean matches(Method m, Class targetClass) {
        // return true if custom criteria match
    }
}
```

```
class TestStaticPointcut : StaticMethodMatcherPointcut() {

    override fun matches(method: Method, targetClass: Class<*>): Boolean {
        // return true if custom criteria match
    }
}
```

There are also superclasses for dynamic pointcuts. You can use custom pointcuts with any advice type.

## Custom Pointcuts

Because pointcuts in Spring AOP are Java classes rather than language features (as in AspectJ), you can declare custom pointcuts, whether static or dynamic. Custom pointcuts in Spring can be arbitrarily complex. However, we recommend using the AspectJ pointcut expression language, if you can.

Later versions of Spring may offer support for “semantic pointcuts” as offered by JAC (for example, “all methods that change instance variables in the target object”).

Now we can examine how Spring AOP handles advice.

## Advice Lifecycles

Each advice is a Spring bean. An advice instance can be shared across all advised objects or be unique to each advised object.
This corresponds to per-class or per-instance advice.

Per-class advice is used most often. It is appropriate for generic advice, such as transaction advisors. These do not depend on the state of the proxied object or add new state. They merely act on the method and arguments.

Per-instance advice is appropriate for introductions, to support mixins. In this case, the advice adds state to the proxied object.

You can use a mix of shared and per-instance advice in the same AOP proxy.

## Advice Types in Spring

Spring provides several advice types and is extensible to support arbitrary advice types. This section describes the basic concepts and standard advice types.

The most fundamental advice type in Spring is interception around advice. Spring is compliant with the AOP Alliance interface for around advice that uses method interception. Classes that implement around advice should accordingly implement the following interface:

```
public interface MethodInterceptor extends Interceptor {

    Object invoke(MethodInvocation invocation) throws Throwable;
}
```

The `MethodInvocation` argument to the `invoke()` method exposes the method being invoked, the target join point, the AOP proxy, and the arguments to the method. The `invoke()` method should return the invocation’s result: the return value of the join point.

The following example shows a simple `MethodInterceptor` implementation:

```
public class DebugInterceptor implements MethodInterceptor {

    public Object invoke(MethodInvocation invocation) throws Throwable {
        System.out.println("Before: invocation=[" + invocation + "]");
        Object rval = invocation.proceed();
        System.out.println("Invocation returned");
        return rval;
    }
}
```

```
class DebugInterceptor : MethodInterceptor {

    override fun invoke(invocation: MethodInvocation): Any {
        println("Before: invocation=[$invocation]")
        val rval = invocation.proceed()
        println("Invocation returned")
        return rval
    }
}
```

Note the call to the `proceed()` method of `MethodInvocation`. This proceeds down the interceptor chain towards the join point. Most interceptors invoke this method and return its return value. However, a `MethodInterceptor`, like any around advice, can return a different value or throw an exception rather than invoke the proceed method. That said, you do not want to do this without good reason.

`MethodInterceptor` implementations offer interoperability with other AOP Alliance-compliant AOP implementations. The other advice types discussed in the remainder of this section implement common AOP concepts but in a Spring-specific way. While there is an advantage in using the most specific advice type, stick with `MethodInterceptor` around advice if you are likely to want to run the aspect in another AOP framework. Note that pointcuts are not currently interoperable between frameworks, and the AOP Alliance does not currently define pointcut interfaces.

A simpler advice type is a before advice. This does not need a `MethodInvocation` object, since it is called only before entering the method. The main advantage of a before advice is that there is no need to invoke the `proceed()` method and, therefore, no possibility of inadvertently failing to proceed down the interceptor chain.
The following listing shows the `MethodBeforeAdvice` interface:

```
public interface MethodBeforeAdvice extends BeforeAdvice {

    void before(Method m, Object[] args, Object target) throws Throwable;
}
```

(Spring’s API design would allow for field before advice, although the usual objections apply to field interception, and it is unlikely for Spring to ever implement it.)

Note that the return type is `void`. Before advice can insert custom behavior before the join point runs but cannot change the return value. If a before advice throws an exception, it stops further execution of the interceptor chain. The exception propagates back up the interceptor chain. If it is unchecked or on the signature of the invoked method, it is passed directly to the client. Otherwise, it is wrapped in an unchecked exception by the AOP proxy.

The following example shows a before advice in Spring, which counts all method invocations:

```
public class CountingBeforeAdvice implements MethodBeforeAdvice {

    private int count;

    public void before(Method m, Object[] args, Object target) throws Throwable {
        ++count;
    }

    public int getCount() {
        return count;
    }
}
```

```
class CountingBeforeAdvice : MethodBeforeAdvice {

    var count: Int = 0

    override fun before(m: Method, args: Array<Any>, target: Any?) {
        ++count
    }
}
```

### Throws Advice

Throws advice is invoked after the return of the join point if the join point threw an exception. Spring offers typed throws advice. Note that this means that the `org.springframework.aop.ThrowsAdvice` interface does not contain any methods. It is a tag interface identifying that the given object implements one or more typed throws advice methods. These should be in the following form:

```
afterThrowing([Method, args, target], subclassOfThrowable)
```

Only the last argument is required. The method signatures may have either one or four arguments, depending on whether the advice method is interested in the method and arguments. The next two listings show classes that are examples of throws advice.

The following advice is invoked if a `RemoteException` is thrown (including from subclasses):

```
public class RemoteThrowsAdvice implements ThrowsAdvice {

    public void afterThrowing(RemoteException ex) throws Throwable {
        // do something with the remote exception
    }
}
```

```
class RemoteThrowsAdvice : ThrowsAdvice {

    fun afterThrowing(ex: RemoteException) {
        // do something with the remote exception
    }
}
```

Unlike the preceding advice, the next example declares four arguments, so that it has access to the invoked method, method arguments, and target object. The following advice is invoked if a `ServletException` is thrown:

```
public class ServletThrowsAdviceWithArguments implements ThrowsAdvice {

    public void afterThrowing(Method m, Object[] args, Object target, ServletException ex) {
        // do something with all arguments
    }
}
```

```
class ServletThrowsAdviceWithArguments : ThrowsAdvice {

    fun afterThrowing(m: Method, args: Array<Any>, target: Any, ex: ServletException) {
        // do something with all arguments
    }
}
```

The final example illustrates how these two methods could be used in a single class that handles both `RemoteException` and `ServletException`. Any number of throws advice methods can be combined in a single class. The following listing shows the final example:

```
public static class CombinedThrowsAdvice implements ThrowsAdvice {

    public void afterThrowing(RemoteException ex) throws Throwable {
        // do something with the remote exception
    }

    public void afterThrowing(Method m, Object[] args, Object target, ServletException ex) {
        // do something with all arguments
    }
}
```

```
class CombinedThrowsAdvice : ThrowsAdvice {

    fun afterThrowing(ex: RemoteException) {
        // do something with the remote exception
    }

    fun afterThrowing(m: Method, args: Array<Any>, target: Any, ex: ServletException) {
        // do something with all arguments
    }
}
```

If a throws-advice method throws an exception itself, it overrides the original exception (that is, it changes the exception thrown to the user). The overriding exception is typically a `RuntimeException`, which is compatible with any method signature. However, if a throws-advice method throws a checked exception, it must match the declared exceptions of the target method and is, hence, to some degree coupled to specific target method signatures. Do not throw an undeclared checked exception that is incompatible with the target method’s signature!
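To make the mechanics concrete, here is a minimal sketch of wiring one of the throws advice classes above into a proxy. It uses the `ProxyFactory` API covered later in this chapter; `AccountService` and `RemoteAccountServiceImpl` are hypothetical stand-ins, not classes from this chapter:

```
// Hypothetical target type; any interface-based service would do.
AccountService target = new RemoteAccountServiceImpl();

ProxyFactory factory = new ProxyFactory(target);
factory.addAdvice(new RemoteThrowsAdvice()); // ThrowsAdvice is an Advice
AccountService service = (AccountService) factory.getProxy();

// A RemoteException thrown by any service method now triggers the
// afterThrowing(RemoteException) callback; the exception then continues
// to propagate to the caller.
```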
An after returning advice in Spring must implement the `org.springframework.aop.AfterReturningAdvice` interface, as the following listing shows:

```
public interface AfterReturningAdvice extends Advice {

    void afterReturning(Object returnValue, Method m, Object[] args, Object target)
            throws Throwable;
}
```

An after returning advice has access to the return value (which it cannot modify), the invoked method, the method’s arguments, and the target.

The following after returning advice counts all successful method invocations that have not thrown exceptions:

```
public class CountingAfterReturningAdvice implements AfterReturningAdvice {

    private int count;

    public void afterReturning(Object returnValue, Method m, Object[] args, Object target)
            throws Throwable {
        ++count;
    }

    public int getCount() {
        return count;
    }
}
```

```
class CountingAfterReturningAdvice : AfterReturningAdvice {

    var count: Int = 0
        private set

    override fun afterReturning(returnValue: Any?, m: Method, args: Array<Any>, target: Any?) {
        ++count
    }
}
```

This advice does not change the execution path. If it throws an exception, it is thrown up the interceptor chain instead of the return value. After returning advice can be used with any pointcut.

### Introduction Advice

Spring treats introduction advice as a special kind of interception advice. Introduction requires an `IntroductionAdvisor` and an `IntroductionInterceptor` that implement the following interface:

```
public interface IntroductionInterceptor extends MethodInterceptor {

    boolean implementsInterface(Class intf);
}
```

The `invoke()` method inherited from the AOP Alliance `MethodInterceptor` interface must implement the introduction. That is, if the invoked method is on an introduced interface, the introduction interceptor is responsible for handling the method call; it cannot invoke `proceed()`.

Introduction advice cannot be used with any pointcut, as it applies only at the class, rather than the method, level. You can only use introduction advice with the `IntroductionAdvisor`, which has the following methods:

```
public interface IntroductionAdvisor extends Advisor, IntroductionInfo {

    void validateInterfaces() throws IllegalArgumentException;
}

public interface IntroductionInfo {

    Class<?>[] getInterfaces();
}
```

There is no `MethodMatcher` and, hence, no `Pointcut` associated with introduction advice. Only class filtering is logical. The `getInterfaces()` method returns the interfaces introduced by this advisor. The `validateInterfaces()` method is used internally to see whether or not the introduced interfaces can be implemented by the configured `IntroductionInterceptor`.

Consider an example from the Spring test suite and suppose we want to introduce the following interface to one or more objects:

```
public interface Lockable {
    void lock();
    void unlock();
    boolean locked();
}
```

```
interface Lockable {
    fun lock()
    fun unlock()
    fun locked(): Boolean
}
```

This illustrates a mixin. We want to be able to cast advised objects to `Lockable`, whatever their type, and call lock and unlock methods. If we call the `lock()` method, we want all setter methods to throw a `LockedException`. Thus, we can add an aspect that provides the ability to make objects immutable without them having any knowledge of it: a good example of AOP.

First, we need an `IntroductionInterceptor` that does the heavy lifting. In this case, we extend the `org.springframework.aop.support.DelegatingIntroductionInterceptor` convenience class. We could implement the `IntroductionInterceptor` interface directly, but using `DelegatingIntroductionInterceptor` is best for most cases. The `DelegatingIntroductionInterceptor` is designed to delegate an introduction to an actual implementation of the introduced interfaces, concealing the use of interception to do so. You can set the delegate to any object using a constructor argument.
The default delegate (when the no-argument constructor is used) is `this`. Thus, in the next example, the delegate is the `LockMixin` subclass of `DelegatingIntroductionInterceptor`. Given a delegate (by default, itself), a `DelegatingIntroductionInterceptor` instance looks for all interfaces implemented by the delegate (other than `IntroductionInterceptor`) and supports introductions against any of them. Subclasses such as `LockMixin` can call the `suppressInterface(Class intf)` method to suppress interfaces that should not be exposed. However, no matter how many interfaces an `IntroductionInterceptor` is prepared to support, the `IntroductionAdvisor` used controls which interfaces are actually exposed. An introduced interface conceals any implementation of the same interface by the target.

Thus, `LockMixin` extends `DelegatingIntroductionInterceptor` and implements `Lockable` itself. The superclass automatically picks up that `Lockable` can be supported for introduction, so we do not need to specify that. We could introduce any number of interfaces in this way.

Note the use of the `locked` instance variable. This effectively adds additional state to that held in the target object.

The following example shows the example `LockMixin` class:

```
public class LockMixin extends DelegatingIntroductionInterceptor implements Lockable {

    private boolean locked;

    public void lock() {
        this.locked = true;
    }

    public void unlock() {
        this.locked = false;
    }

    public boolean locked() {
        return this.locked;
    }

    public Object invoke(MethodInvocation invocation) throws Throwable {
        if (locked() && invocation.getMethod().getName().indexOf("set") == 0) {
            throw new LockedException();
        }
        return super.invoke(invocation);
    }
}
```

```
class LockMixin : DelegatingIntroductionInterceptor(), Lockable {

    private var locked: Boolean = false

    override fun lock() {
        this.locked = true
    }

    override fun unlock() {
        this.locked = false
    }

    override fun locked(): Boolean {
        return this.locked
    }

    override fun invoke(invocation: MethodInvocation): Any? {
        if (locked() && invocation.method.name.indexOf("set") == 0) {
            throw LockedException()
        }
        return super.invoke(invocation)
    }
}
```

Often, you need not override the `invoke()` method. The `DelegatingIntroductionInterceptor` implementation (which calls the delegate if the method is introduced, otherwise proceeds towards the join point) usually suffices. In the present case, we need to add a check: no setter method can be invoked if in locked mode.

The required introduction only needs to hold a distinct `LockMixin` instance and specify the introduced interfaces (in this case, only `Lockable`). A more complex example might take a reference to the introduction interceptor (which would be defined as a prototype). In this case, there is no configuration relevant for a `LockMixin`, so we create it by using `new`. The following example shows our `LockMixinAdvisor` class:

```
public class LockMixinAdvisor extends DefaultIntroductionAdvisor {

    public LockMixinAdvisor() {
        super(new LockMixin(), Lockable.class);
    }
}
```

```
class LockMixinAdvisor : DefaultIntroductionAdvisor(LockMixin(), Lockable::class.java)
```

We can apply this advisor very simply, because it requires no configuration. (However, it is impossible to use an `IntroductionInterceptor` without an `IntroductionAdvisor`.) As usual with introductions, the advisor must be per-instance, as it is stateful. We need a different instance of `LockMixinAdvisor`, and hence `LockMixin`, for each advised object. The advisor comprises part of the advised object’s state.

We can apply this advisor programmatically by using the `Advised.addAdvisor()` method or (the recommended way) in XML configuration, as any other advisor.
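For instance, the following hedged sketch (the `MyBusinessObject` target class is a hypothetical stand-in) applies the advisor programmatically with the `ProxyFactory` API covered later in this chapter:

```
// Each advised object gets its own LockMixinAdvisor (and thus its own
// LockMixin), because the mixin carries per-object 'locked' state.
Object target = new MyBusinessObject(); // hypothetical target class

ProxyFactory factory = new ProxyFactory(target);
factory.addAdvisor(new LockMixinAdvisor());

Object proxy = factory.getProxy();
Lockable lockable = (Lockable) proxy; // cast to the introduced interface
lockable.lock();
// setter invocations on the proxy now throw LockedException until unlock()
```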
All proxy creation choices discussed below, including “auto proxy creators,” correctly handle introductions and stateful mixins.

In Spring, an `Advisor` is an aspect that contains only a single advice object associated with a pointcut expression.

Apart from the special case of introductions, any advisor can be used with any advice. `org.springframework.aop.support.DefaultPointcutAdvisor` is the most commonly used advisor class. It can be used with a `MethodInterceptor`, `BeforeAdvice`, or `ThrowsAdvice`.

It is possible to mix advisor and advice types in Spring in the same AOP proxy. For example, you could use an interception around advice, throws advice, and before advice in one proxy configuration. Spring automatically creates the necessary interceptor chain.

# Using the `ProxyFactoryBean` to Create AOP Proxies

If you use the Spring IoC container (an `ApplicationContext` or `BeanFactory`) for your business objects (and you should be!), you want to use one of Spring’s AOP `FactoryBean` implementations. (Remember that a factory bean introduces a layer of indirection, letting it create objects of a different type.) The Spring AOP support also uses factory beans under the covers.

The basic way to create an AOP proxy in Spring is to use the `org.springframework.aop.framework.ProxyFactoryBean`. This gives complete control over the pointcuts, any advice that applies, and their ordering. However, there are simpler options that are preferable if you do not need such control.

## Basics

The `ProxyFactoryBean`, like other Spring `FactoryBean` implementations, introduces a level of indirection. If you define a `ProxyFactoryBean` named `foo`, objects that reference `foo` do not see the `ProxyFactoryBean` instance itself but an object created by the implementation of the `getObject()` method in the `ProxyFactoryBean`. This method creates an AOP proxy that wraps a target object.

One of the most important benefits of using a `ProxyFactoryBean` or another IoC-aware class to create AOP proxies is that advice and pointcuts can also be managed by IoC. This is a powerful feature, enabling certain approaches that are hard to achieve with other AOP frameworks. For example, an advice may itself reference application objects (besides the target, which should be available in any AOP framework), benefiting from all the pluggability provided by Dependency Injection.

## JavaBean Properties

In common with most `FactoryBean` implementations provided with Spring, the `ProxyFactoryBean` class is itself a JavaBean. Its properties are used to:

* Specify the target that you want to proxy.
* Specify whether to use CGLIB (described later; see also JDK- and CGLIB-based proxies).

Some key properties are inherited from `org.springframework.aop.framework.ProxyConfig` (the superclass for all AOP proxy factories in Spring). These key properties include the following:

* `proxyTargetClass`: `true` if the target class is to be proxied, rather than the target class’s interfaces. If this property value is set to `true`, then CGLIB proxies are created (but see also JDK- and CGLIB-based proxies).
* `optimize`: Controls whether or not aggressive optimizations are applied to proxies created through CGLIB. You should not blithely use this setting unless you fully understand how the relevant AOP proxy handles optimization. This is currently used only for CGLIB proxies. It has no effect with JDK dynamic proxies.
* `frozen`: If a proxy configuration is `frozen`, changes to the configuration are no longer allowed. This is useful both as a slight optimization and for those cases when you do not want callers to be able to manipulate the proxy (through the `Advised` interface) after the proxy has been created. The default value of this property is `false`, so changes (such as adding additional advice) are allowed.
* `exposeProxy`: Determines whether or not the current proxy should be exposed in a `ThreadLocal` so that it can be accessed by the target. If a target needs to obtain the proxy and the `exposeProxy` property is set to `true`, the target can use the `AopContext.currentProxy()` method.

Other properties specific to `ProxyFactoryBean` include the following:

* `proxyInterfaces`: An array of `String` interface names. If this is not supplied, a CGLIB proxy for the target class is used (but see also JDK- and CGLIB-based proxies).
* `interceptorNames`: A `String` array of `Advisor`, interceptor, or other advice names to apply. Ordering is significant, on a first come, first served basis. That is to say that the first interceptor in the list is the first to be able to intercept the invocation. The names are bean names in the current factory, including bean names from ancestor factories. You cannot mention bean references here, since doing so results in the `ProxyFactoryBean` ignoring the singleton setting of the advice. You can append an interceptor name with an asterisk (`*`). Doing so results in all advisor beans with names that start with the part before the asterisk being applied. You can find an example of using this feature in Using “Global” Advisors.
* `singleton`: Whether or not the factory should return a single object, no matter how often the `getObject()` method is called. Several `FactoryBean` implementations offer such a method. The default value is `true`. If you want to use stateful advice (for example, for stateful mixins), use prototype advice along with a singleton value of `false`.

## JDK- and CGLIB-based proxies

This section serves as the definitive documentation on how the `ProxyFactoryBean` chooses to create either a JDK-based proxy or a CGLIB-based proxy for a particular target object (which is to be proxied).

If the class of a target object that is to be proxied (hereafter simply referred to as the target class) does not implement any interfaces, a CGLIB-based proxy is created. This is the easiest scenario, because JDK proxies are interface-based, and no interfaces means JDK proxying is not even possible. You can plug in the target bean and specify the list of interceptors by setting the `interceptorNames` property. Note that a CGLIB-based proxy is created even if the `proxyTargetClass` property of the `ProxyFactoryBean` has been set to `false`. (Doing so makes no sense and is best removed from the bean definition, because it is, at best, redundant, and, at worst, confusing.)

If the target class implements one (or more) interfaces, the type of proxy that is created depends on the configuration of the `ProxyFactoryBean`. If the `proxyTargetClass` property of the `ProxyFactoryBean` has been set to `true`, a CGLIB-based proxy is created. This makes sense and is in keeping with the principle of least surprise.
Even if the `proxyInterfaces` property of the `ProxyFactoryBean` has been set to one or more fully qualified interface names, the fact that the `proxyTargetClass` property is set to `true` causes CGLIB-based proxying to be in effect.

If the `proxyInterfaces` property of the `ProxyFactoryBean` has been set to one or more fully qualified interface names, a JDK-based proxy is created. The created proxy implements all of the interfaces that were specified in the `proxyInterfaces` property. If the target class happens to implement a whole lot more interfaces than those specified in the `proxyInterfaces` property, that is all well and good, but those additional interfaces are not implemented by the returned proxy.

If the `proxyInterfaces` property of the `ProxyFactoryBean` has not been set, but the target class does implement one (or more) interfaces, the `ProxyFactoryBean` auto-detects the fact that the target class does actually implement at least one interface, and a JDK-based proxy is created. The interfaces that are actually proxied are all of the interfaces that the target class implements. In effect, this is the same as supplying a list of each and every interface that the target class implements to the `proxyInterfaces` property. However, it is significantly less work and less prone to typographical errors.

## Proxying Interfaces

Consider a simple example of `ProxyFactoryBean` in action. This example involves:

* A target bean that is proxied. This is the `personTarget` bean definition in the example.
* An `Advisor` and an `Interceptor` used to provide advice.
* An AOP proxy bean definition to specify the target object (the `personTarget` bean), the interfaces to proxy, and the advice to apply.

The following listing shows the example:

```
<bean id="personTarget" class="com.mycompany.PersonImpl">
    <property name="name" value="Tony"/>
    <property name="age" value="51"/>
</bean>

<bean id="myAdvisor" class="com.mycompany.MyAdvisor">
    <property name="someProperty" value="Custom string property value"/>
</bean>

<bean id="debugInterceptor" class="org.springframework.aop.interceptor.DebugInterceptor">
</bean>

<bean id="person" class="org.springframework.aop.framework.ProxyFactoryBean">
    <property name="proxyInterfaces" value="com.mycompany.Person"/>
    <property name="target" ref="personTarget"/>
    <property name="interceptorNames">
        <list>
            <value>myAdvisor</value>
            <value>debugInterceptor</value>
        </list>
    </property>
</bean>
```

Note that the `interceptorNames` property takes a list of `String` values, which hold the bean names of the interceptors or advisors in the current factory. You can use advisors, interceptors, before, after returning, and throws advice objects. The ordering of advisors is significant.

You might be wondering why the list does not hold bean references. The reason for this is that, if the singleton property of the `ProxyFactoryBean` is set to `false`, it must be able to return independent proxy instances. If any of the advisors is itself a prototype, an independent instance would need to be returned, so it is necessary to be able to obtain an instance of the prototype from the factory. Holding a reference is not sufficient.

The `person` bean definition shown earlier can be used in place of a `Person` implementation, as follows:

```
Person person = (Person) factory.getBean("person");
```

```
val person = factory.getBean("person") as Person
```

Other beans in the same IoC context can express a strongly typed dependency on it, as with an ordinary Java object. The following example shows how to do so:

```
<bean id="personUser" class="com.mycompany.PersonUser">
    <property name="person"><ref bean="person"/></property>
</bean>
```

The `PersonUser` class in this example exposes a property of type `Person`. As far as it is concerned, the AOP proxy can be used transparently in place of a “real” person implementation. However, its class would be a dynamic proxy class.
It would be possible to cast it to the `Advised` interface (discussed later).

You can conceal the distinction between target and proxy by using an anonymous inner bean. Only the `ProxyFactoryBean` definition is different. The advice is included only for completeness. The following example shows how to use an anonymous inner bean:

```
<bean id="myAdvisor" class="com.mycompany.MyAdvisor">
    <property name="someProperty" value="Custom string property value"/>
</bean>

<bean id="debugInterceptor" class="org.springframework.aop.interceptor.DebugInterceptor"/>

<bean id="person" class="org.springframework.aop.framework.ProxyFactoryBean">
    <property name="proxyInterfaces" value="com.mycompany.Person"/>
    <!-- use an inner bean, not a local reference to the target -->
    <property name="target">
        <bean class="com.mycompany.PersonImpl">
            <property name="name" value="Tony"/>
            <property name="age" value="51"/>
        </bean>
    </property>
    <property name="interceptorNames">
        <list>
            <value>myAdvisor</value>
            <value>debugInterceptor</value>
        </list>
    </property>
</bean>
```

Using an anonymous inner bean has the advantage that there is only one object of type `Person`. This is useful if we want to prevent users of the application context from obtaining a reference to the un-advised object or need to avoid any ambiguity with Spring IoC autowiring. There is also, arguably, an advantage in that the `ProxyFactoryBean` definition is self-contained. However, there are times when being able to obtain the un-advised target from the factory might actually be an advantage (for example, in certain test scenarios).

## Proxying Classes

What if you need to proxy a class, rather than one or more interfaces?

Imagine that in our earlier example, there was no `Person` interface. We needed to advise a class called `Person` that did not implement any business interface. In this case, you can configure Spring to use CGLIB proxying rather than dynamic proxies. To do so, set the `proxyTargetClass` property on the `ProxyFactoryBean` shown earlier to `true`. While it is best to program to interfaces rather than classes, the ability to advise classes that do not implement interfaces can be useful when working with legacy code. (In general, Spring is not prescriptive. While it makes it easy to apply good practices, it avoids forcing a particular approach.)

If you want to, you can force the use of CGLIB in any case, even if you do have interfaces.

CGLIB proxying works by generating a subclass of the target class at runtime. Spring configures this generated subclass to delegate method calls to the original target. The subclass is used to implement the Decorator pattern, weaving in the advice.

CGLIB proxying should generally be transparent to users. However, there are some issues to consider:

* `final` classes cannot be proxied, because they cannot be extended.
* `final` methods cannot be advised, because they cannot be overridden.
* `private` methods cannot be advised, because they cannot be overridden.

There is no need to add CGLIB to your classpath. CGLIB is repackaged and included in the `spring-core` JAR, so CGLIB-based proxies work out of the box, as do JDK dynamic proxies.

There is little performance difference between CGLIB proxies and dynamic proxies. Performance should not be a decisive consideration in this case.

## Using “Global” Advisors

By appending an asterisk to an interceptor name, all advisors with bean names that match the part before the asterisk are added to the advisor chain. This can come in handy if you need to add a standard set of “global” advisors.
The following example defines two global advisors:

```
<bean id="proxy" class="org.springframework.aop.framework.ProxyFactoryBean">
    <property name="target" ref="service"/>
    <property name="interceptorNames">
        <list>
            <value>global*</value>
        </list>
    </property>
</bean>

<bean id="global_debug" class="org.springframework.aop.interceptor.DebugInterceptor"/>
<bean id="global_performance" class="org.springframework.aop.interceptor.PerformanceMonitorInterceptor"/>
```

# Concise Proxy Definitions

Especially when defining transactional proxies, you may end up with many similar proxy definitions. The use of parent and child bean definitions, along with inner bean definitions, can result in much cleaner and more concise proxy definitions.

First, we create a parent, template, bean definition for the proxy, as follows:

```
<bean id="txProxyTemplate" abstract="true"
        class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
    <property name="transactionManager" ref="transactionManager"/>
    <property name="transactionAttributes">
        <props>
            <prop key="*">PROPAGATION_REQUIRED</prop>
        </props>
    </property>
</bean>
```

This is never instantiated itself, so it can actually be incomplete. Then, each proxy that needs to be created is a child bean definition, which wraps the target of the proxy as an inner bean definition, since the target is never used on its own anyway. The following example shows such a child bean:

```
<bean id="myService" parent="txProxyTemplate">
    <property name="target">
        <bean class="org.springframework.samples.MyServiceImpl">
        </bean>
    </property>
</bean>
```

You can override properties from the parent template. In the following example, we override the transaction propagation settings:

```
<bean id="mySpecialService" parent="txProxyTemplate">
    <property name="target">
        <bean class="org.springframework.samples.MySpecialServiceImpl">
        </bean>
    </property>
    <property name="transactionAttributes">
        <props>
            <prop key="get*">PROPAGATION_REQUIRED,readOnly</prop>
            <prop key="find*">PROPAGATION_REQUIRED,readOnly</prop>
            <prop key="load*">PROPAGATION_REQUIRED,readOnly</prop>
            <prop key="store*">PROPAGATION_REQUIRED</prop>
        </props>
    </property>
</bean>
```

Note that in the parent bean example, we explicitly marked the parent bean definition as being abstract by setting the `abstract` attribute to `true`, as described previously, so that it may not actually ever be instantiated. Application contexts (but not simple bean factories), by default, pre-instantiate all singletons. Therefore, it is important (at least for singleton beans) that, if you have a (parent) bean definition that you intend to use only as a template, and this definition specifies a class, you must make sure to set the `abstract` attribute to `true`. Otherwise, the application context actually tries to pre-instantiate it.

# Creating AOP Proxies Programmatically with the `ProxyFactory`

It is easy to create AOP proxies programmatically with Spring. This lets you use Spring AOP without dependency on Spring IoC. The interfaces implemented by the target object are automatically proxied.
The following listing shows creation of a proxy for a target object, with one interceptor and one advisor:

```
ProxyFactory factory = new ProxyFactory(myBusinessInterfaceImpl);
factory.addAdvice(myMethodInterceptor);
factory.addAdvisor(myAdvisor);
MyBusinessInterface tb = (MyBusinessInterface) factory.getProxy();
```

```
val factory = ProxyFactory(myBusinessInterfaceImpl)
factory.addAdvice(myMethodInterceptor)
factory.addAdvisor(myAdvisor)
val tb = factory.proxy as MyBusinessInterface
```

The first step is to construct an object of type `org.springframework.aop.framework.ProxyFactory`. You can create this with a target object, as in the preceding example, or specify the interfaces to be proxied in an alternate constructor.

You can add advice (with interceptors as a specialized kind of advice), advisors, or both and manipulate them for the life of the `ProxyFactory`. If you add an `IntroductionInterceptionAroundAdvisor`, you can cause the proxy to implement additional interfaces.

There are also convenience methods on `ProxyFactory` (inherited from `AdvisedSupport`) that let you add other advice types, such as before and throws advice. `AdvisedSupport` is the superclass of both `ProxyFactory` and `ProxyFactoryBean`.

Integrating AOP proxy creation with the IoC framework is best practice in most applications. We recommend that you externalize configuration from Java code with AOP, as you should in general.

# Manipulating Advised Objects

However you create AOP proxies, you can manipulate them by using the `org.springframework.aop.framework.Advised` interface. Any AOP proxy can be cast to this interface, no matter which other interfaces it implements. This interface includes the following methods:

```
Advisor[] getAdvisors();

void addAdvice(Advice advice) throws AopConfigException;

void addAdvice(int pos, Advice advice) throws AopConfigException;

void addAdvisor(Advisor advisor) throws AopConfigException;

void addAdvisor(int pos, Advisor advisor) throws AopConfigException;

int indexOf(Advisor advisor);

boolean removeAdvisor(Advisor advisor) throws AopConfigException;

void removeAdvisor(int index) throws AopConfigException;

boolean replaceAdvisor(Advisor a, Advisor b) throws AopConfigException;

boolean isFrozen();
```

```
fun getAdvisors(): Array<Advisor>

@Throws(AopConfigException::class)
fun addAdvice(advice: Advice)

@Throws(AopConfigException::class)
fun addAdvice(pos: Int, advice: Advice)

@Throws(AopConfigException::class)
fun addAdvisor(advisor: Advisor)

@Throws(AopConfigException::class)
fun addAdvisor(pos: Int, advisor: Advisor)

fun indexOf(advisor: Advisor): Int

@Throws(AopConfigException::class)
fun removeAdvisor(advisor: Advisor): Boolean

@Throws(AopConfigException::class)
fun removeAdvisor(index: Int)

@Throws(AopConfigException::class)
fun replaceAdvisor(a: Advisor, b: Advisor): Boolean

fun isFrozen(): Boolean
```

The `getAdvisors()` method returns an `Advisor` for every advisor, interceptor, or other advice type that has been added to the factory. If you added an `Advisor`, the returned advisor at this index is the object that you added. If you added an interceptor or other advice type, Spring wrapped this in an advisor with a pointcut that always returns `true`. Thus, if you added a `MethodInterceptor`, the advisor returned for this index is a `DefaultPointcutAdvisor` that returns your `MethodInterceptor` and a pointcut that matches all classes and methods.

The `addAdvisor()` methods can be used to add any `Advisor`.
Usually, the advisor holding pointcut and advice is the generic `DefaultPointcutAdvisor`, which you can use with any advice or pointcut (but not for introductions).

By default, it is possible to add or remove advisors or interceptors even once a proxy has been created. The only restriction is that it is impossible to add or remove an introduction advisor, as existing proxies from the factory do not show the interface change. (You can obtain a new proxy from the factory to avoid this problem.)

The following example shows casting an AOP proxy to the `Advised` interface and examining and manipulating its advice:

```
Advised advised = (Advised) myObject;
Advisor[] advisors = advised.getAdvisors();
int oldAdvisorCount = advisors.length;
System.out.println(oldAdvisorCount + " advisors");

// Add selective advice using a pointcut
advised.addAdvisor(new DefaultPointcutAdvisor(mySpecialPointcut, myAdvice));
```

```
val advised = myObject as Advised
val advisors = advised.advisors
val oldAdvisorCount = advisors.size
println("$oldAdvisorCount advisors")

// Add selective advice using a pointcut
advised.addAdvisor(DefaultPointcutAdvisor(mySpecialPointcut, myAdvice))
```

It is questionable whether it is advisable (no pun intended) to modify advice on a business object in production, although there are, no doubt, legitimate usage cases. However, it can be very useful in development (for example, in tests). We have sometimes found it very useful to be able to add test code in the form of an interceptor or other advice, getting inside a method invocation that we want to test. (For example, the advice can get inside a transaction created for that method, perhaps to run SQL to check that a database was correctly updated, before marking the transaction for roll back.)

Depending on how you created the proxy, you can usually set a `frozen` flag. In that case, the `Advised` `isFrozen()` method returns `true`, and any attempts to modify advice through addition or removal result in an `AopConfigException`. The ability to freeze the state of an advised object is useful in some cases (for example, to prevent calling code removing a security interceptor).

# Using the “auto-proxy” Facility

So far, we have considered explicit creation of AOP proxies by using a `ProxyFactoryBean` or similar factory bean.

Spring also lets us use “auto-proxy” bean definitions, which can automatically proxy selected bean definitions. This is built on Spring’s “bean post processor” infrastructure, which enables modification of any bean definition as the container loads.

In this model, you set up some special bean definitions in your XML bean definition file to configure the auto-proxy infrastructure. This lets you declare the targets eligible for auto-proxying. You need not use `ProxyFactoryBean`.

There are two ways to do this:

* By using an auto-proxy creator that refers to specific beans in the current context.
* A special case of auto-proxy creation that deserves to be considered separately: auto-proxy creation driven by source-level metadata attributes.

## Auto-proxy Bean Definitions

This section covers the auto-proxy creators provided by the `org.springframework.aop.framework.autoproxy` package.

The `BeanNameAutoProxyCreator` class is a `BeanPostProcessor` that automatically creates AOP proxies for beans with names that match literal values or wildcards.
The following example shows how to create a `BeanNameAutoProxyCreator` bean:

```
<bean class="org.springframework.aop.framework.autoproxy.BeanNameAutoProxyCreator">
    <property name="beanNames" value="jdk*,onlyJdk"/>
    <property name="interceptorNames">
        <list>
            <value>myInterceptor</value>
        </list>
    </property>
</bean>
```

As with `ProxyFactoryBean`, there is an `interceptorNames` property rather than a list of interceptors, to allow correct behavior for prototype advisors. Named “interceptors” can be advisors or any advice type.

As with auto-proxying in general, the main point of using `BeanNameAutoProxyCreator` is to apply the same configuration consistently to multiple objects, with minimal volume of configuration. It is a popular choice for applying declarative transactions to multiple objects.

Bean definitions whose names match, such as `jdkMyBean` and `onlyJdk` in the preceding example, are plain old bean definitions with the target class. An AOP proxy is automatically created by the `BeanNameAutoProxyCreator`. The same advice is applied to all matching beans. Note that, if advisors are used (rather than the interceptor in the preceding example), the pointcuts may apply differently to different beans.

A more general and extremely powerful auto-proxy creator is `DefaultAdvisorAutoProxyCreator`. This automagically applies eligible advisors in the current context, without the need to include specific bean names in the auto-proxy advisor’s bean definition. It offers the same merit of consistent configuration and avoidance of duplication as `BeanNameAutoProxyCreator`.

Using this mechanism involves:

* Specifying a `DefaultAdvisorAutoProxyCreator` bean definition.
* Specifying any number of advisors in the same or related contexts. Note that these must be advisors, not interceptors or other advice. This is necessary, because there must be a pointcut to evaluate, to check the eligibility of each advice to candidate bean definitions.

The `DefaultAdvisorAutoProxyCreator` automatically evaluates the pointcut contained in each advisor, to see what (if any) advice it should apply to each business object (such as `businessObject1` and `businessObject2` in the example). This means that any number of advisors can be applied automatically to each business object. If no pointcut in any of the advisors matches any method in a business object, the object is not proxied. As bean definitions are added for new business objects, they are automatically proxied if necessary.

Auto-proxying in general has the advantage of making it impossible for callers or dependencies to obtain an un-advised object. Calling `getBean("businessObject1")` on this `ApplicationContext` returns an AOP proxy, not the target business object. (The “inner bean” idiom shown earlier also offers this benefit.)

The following example creates a `DefaultAdvisorAutoProxyCreator` bean and the other elements discussed in this section:

```
<bean class="org.springframework.aop.framework.autoproxy.DefaultAdvisorAutoProxyCreator"/>

<bean class="org.springframework.transaction.interceptor.TransactionAttributeSourceAdvisor">
    <property name="transactionInterceptor" ref="transactionInterceptor"/>
</bean>

<bean id="customAdvisor" class="com.mycompany.MyAdvisor"/>

<bean id="businessObject1" class="com.mycompany.BusinessObject1">
    <!-- Properties omitted -->
</bean>

<bean id="businessObject2" class="com.mycompany.BusinessObject2"/>
```

The `DefaultAdvisorAutoProxyCreator` is very useful if you want to apply the same advice consistently to many business objects. Once the infrastructure definitions are in place, you can add new business objects without including specific proxy configuration. You can also easily drop in additional aspects (for example, tracing or performance monitoring aspects) with minimal change to configuration.
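For illustration, the `customAdvisor` bean above must be a real `Advisor` with its own pointcut. The following is a hedged sketch of what such a class might look like; the class body, regex pattern, and advice choice are hypothetical, not taken from the original example:

```
import org.springframework.aop.interceptor.DebugInterceptor;
import org.springframework.aop.support.DefaultPointcutAdvisor;
import org.springframework.aop.support.JdkRegexpMethodPointcut;

// Hypothetical advisor: pairs a regex pointcut with a simple interceptor.
// DefaultAdvisorAutoProxyCreator evaluates the contained pointcut against
// every candidate bean to decide whether (and where) to apply the advice.
public class MyAdvisor extends DefaultPointcutAdvisor {

    public MyAdvisor() {
        JdkRegexpMethodPointcut pointcut = new JdkRegexpMethodPointcut();
        // match fully qualified method names containing "BusinessObject"
        pointcut.setPattern(".*BusinessObject.*");
        setPointcut(pointcut);
        setAdvice(new DebugInterceptor());
    }
}
```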
The `DefaultAdvisorAutoProxyCreator` offers support for filtering (by using a naming convention so that only certain advisors are evaluated, which allows the use of multiple, differently configured, advisor auto-proxy creators in the same factory) and ordering. Advisors can implement the `org.springframework.core.Ordered` interface to ensure correct ordering if this is an issue. The `TransactionAttributeSourceAdvisor` used in the preceding example has a configurable order value. The default setting is unordered.

# Using `TargetSource` Implementations

Spring offers the concept of a `TargetSource`, expressed in the `org.springframework.aop.TargetSource` interface. This interface is responsible for returning the “target object” that implements the join point. The `TargetSource` implementation is asked for a target instance each time the AOP proxy handles a method invocation.

Developers who use Spring AOP do not normally need to work directly with `TargetSource` implementations, but this provides a powerful means of supporting pooling, hot swappable, and other sophisticated targets. For example, a pooling `TargetSource` can return a different target instance for each invocation, by using a pool to manage instances.

If you do not specify a `TargetSource`, a default implementation is used to wrap a local object. The same target is returned for each invocation (as you would expect).

The rest of this section describes the standard target sources provided with Spring and how you can use them.

When using a custom target source, your target will usually need to be a prototype rather than a singleton bean definition. This allows Spring to create a new target instance when required.

## Hot-swappable Target Sources

The `org.springframework.aop.target.HotSwappableTargetSource` exists to let the target of an AOP proxy be switched while letting callers keep their references to it.

Changing the target source’s target takes effect immediately. The `HotSwappableTargetSource` is thread-safe.

You can change the target by using the `swap()` method on `HotSwappableTargetSource`, as the following example shows:

```
HotSwappableTargetSource swapper = (HotSwappableTargetSource) beanFactory.getBean("swapper");
Object oldTarget = swapper.swap(newTarget);
```

```
val swapper = beanFactory.getBean("swapper") as HotSwappableTargetSource
val oldTarget = swapper.swap(newTarget)
```

The following example shows the required XML definitions:

```
<bean id="initialTarget" class="mycompany.OldTarget"/>

<bean id="swapper" class="org.springframework.aop.target.HotSwappableTargetSource">
	<constructor-arg ref="initialTarget"/>
</bean>

<bean id="swappable" class="org.springframework.aop.framework.ProxyFactoryBean">
	<property name="targetSource" ref="swapper"/>
</bean>
```

The preceding `swap()` call changes the target of the swappable bean. Clients that hold a reference to that bean are unaware of the change but immediately start hitting the new target.

Although this example does not add any advice (it is not necessary to add advice to use a `TargetSource`), any `TargetSource` can be used in conjunction with arbitrary advice.

## Pooling Target Sources

Using a pooling target source provides a similar programming model to stateless session EJBs, in which a pool of identical instances is maintained, with method invocations going to free objects in the pool. A crucial difference between Spring pooling and SLSB pooling is that Spring pooling can be applied to any POJO. As with Spring in general, this service can be applied in a non-invasive way.
Spring provides support for Commons Pool 2.2, which provides a fairly efficient pooling implementation. You need the `commons-pool` Jar on your application’s classpath to use this feature. You can also subclass `org.springframework.aop.target.AbstractPoolingTargetSource` to support any other pooling API. Commons Pool 1.5+ is also supported but is deprecated as of Spring Framework 4.2.

The following listing shows an example configuration:

```
<bean id="businessObjectTarget" class="com.mycompany.MyBusinessObject" scope="prototype">
	... properties omitted
</bean>

<bean id="poolTargetSource" class="org.springframework.aop.target.CommonsPool2TargetSource">
	<property name="targetBeanName" value="businessObjectTarget"/>
	<property name="maxSize" value="25"/>
</bean>

<bean id="businessObject" class="org.springframework.aop.framework.ProxyFactoryBean">
	<property name="targetSource" ref="poolTargetSource"/>
	<property name="interceptorNames" value="myInterceptor"/>
</bean>
```

Note that the target object (`businessObjectTarget` in the preceding example) must be a prototype. This lets the `PoolingTargetSource` implementation create new instances of the target to grow the pool as necessary. See the javadoc of `AbstractPoolingTargetSource` and the concrete subclass you wish to use for information about its properties. `maxSize` is the most basic and is always guaranteed to be present.

In this case, `myInterceptor` is the name of an interceptor that would need to be defined in the same IoC context. However, you need not specify interceptors to use pooling. If you want only pooling and no other advice, do not set the `interceptorNames` property at all.

You can configure Spring to be able to cast any pooled object to the `org.springframework.aop.target.PoolingConfig` interface, which exposes information about the configuration and current size of the pool through an introduction. You need to define an advisor similar to the following:

```
<bean id="poolConfigAdvisor" class="org.springframework.beans.factory.config.MethodInvokingFactoryBean">
	<property name="targetObject" ref="poolTargetSource"/>
	<property name="targetMethod" value="getPoolingConfigMixin"/>
</bean>
```

This advisor is obtained by calling a convenience method on the `AbstractPoolingTargetSource` class, hence the use of `MethodInvokingFactoryBean`. This advisor’s name (`poolConfigAdvisor`, here) must be in the list of interceptor names in the `ProxyFactoryBean` that exposes the pooled object.

The cast is defined as follows:

```
PoolingConfig conf = (PoolingConfig) beanFactory.getBean("businessObject");
System.out.println("Max pool size is " + conf.getMaxSize());
```

```
val conf = beanFactory.getBean("businessObject") as PoolingConfig
println("Max pool size is " + conf.maxSize)
```

Pooling stateless service objects is not usually necessary. We do not believe it should be the default choice, as most stateless objects are naturally thread safe, and instance pooling is problematic if resources are cached.

Simpler pooling is available by using auto-proxying. You can set the `TargetSource` implementations used by any auto-proxy creator.

## Prototype Target Sources

Setting up a “prototype” target source is similar to setting up a pooling `TargetSource`. In this case, a new instance of the target is created on every method invocation. Although the cost of creating a new object is not high in a modern JVM, the cost of wiring up the new object (satisfying its IoC dependencies) may be more expensive. Thus, you should not use this approach without very good reason.
To do this, you could modify the `poolTargetSource` definition shown earlier as follows (we also changed the name, for clarity). Note that `targetBeanName` is a `String` property, so it takes a `value` (the bean name), not a `ref`:

```
<bean id="prototypeTargetSource" class="org.springframework.aop.target.PrototypeTargetSource">
	<property name="targetBeanName" value="businessObjectTarget"/>
</bean>
```

The only property is the name of the target bean. Inheritance is used in the `TargetSource` implementations to ensure consistent naming. As with the pooling target source, the target bean must be a prototype bean definition.

## `ThreadLocal` Target Sources

`ThreadLocal` target sources are useful if you need an object to be created for each incoming request (per thread, that is). The concept of a `ThreadLocal` provides a JDK-wide facility to transparently store a resource alongside a thread. Setting up a `ThreadLocalTargetSource` is pretty much the same as was explained for the other types of target source, as the following example shows:

```
<bean id="threadlocalTargetSource" class="org.springframework.aop.target.ThreadLocalTargetSource">
	<property name="targetBeanName" value="businessObjectTarget"/>
</bean>
```

`ThreadLocal` instances come with serious issues (potentially resulting in memory leaks) when they are used incorrectly in multi-threaded and multi-classloader environments. You should always consider wrapping a `ThreadLocal` in some other class and never directly use the `ThreadLocal` itself (except in the wrapper class). Also, you should always remember to correctly set and unset (where the latter simply involves a call to `ThreadLocal.set(null)`) the resource local to the thread. Unsetting should be done in any case, since not unsetting it might result in problematic behavior. Spring’s `ThreadLocal` support does this for you and should always be considered in favor of using `ThreadLocal` instances without other proper handling code.

## Defining New Advice Types

Spring AOP is designed to be extensible. While the interception implementation strategy is presently used internally, it is possible to support arbitrary advice types in addition to the interception around advice, before, throws advice, and after returning advice.

The `org.springframework.aop.framework.adapter` package is an SPI package that lets support for new custom advice types be added without changing the core framework. The only constraint on a custom `Advice` type is that it must implement the `org.aopalliance.aop.Advice` marker interface. See the `org.springframework.aop.framework.adapter` javadoc for further information.

Although Java does not let you express null-safety with its type system, the Spring Framework provides the following annotations in the `org.springframework.lang` package to let you declare nullability of APIs and fields:

* `@Nullable`: Annotation to indicate that a specific parameter, return value, or field can be `null`.
* `@NonNull`: Annotation to indicate that a specific parameter, return value, or field cannot be `null` (not needed on parameters, return values, and fields where `@NonNullApi` and `@NonNullFields` apply, respectively).
* `@NonNullApi`: Annotation at the package level that declares non-null as the default semantics for parameters and return values.
* `@NonNullFields`: Annotation at the package level that declares non-null as the default semantics for fields.

The Spring Framework itself leverages these annotations, but they can also be used in any Spring-based Java project to declare null-safe APIs and optionally null-safe fields. Nullability declarations for generic type arguments, varargs, and array elements are not supported yet.
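As a brief illustration (a sketch with a hypothetical `UserService` class, not an API from this chapter), a nullable return value and a non-null parameter are declared as follows, which IDEs and Kotlin can then check at the call site:

```
import org.springframework.lang.NonNull;
import org.springframework.lang.Nullable;

public class UserService {

	// Tooling warns callers that dereference the result without a null check.
	@Nullable
	public String findNickname(@NonNull String userId) {
		return userId.isEmpty() ? null : "nick-" + userId; // lookup logic omitted
	}
}
```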
Nullability declarations are expected to be fine-tuned between Spring Framework releases, including minor ones. Nullability of types used inside method bodies is outside the scope of this feature. Other common libraries such as Reactor and Spring Data provide null-safe APIs that use a similar nullability arrangement, delivering a consistent overall experience for Spring application developers.

## Use cases

In addition to providing an explicit declaration for Spring Framework API nullability, these annotations can be used by an IDE (such as IDEA or Eclipse) to provide useful warnings related to null-safety in order to avoid `NullPointerException` at runtime.

They are also used to make Spring APIs null-safe in Kotlin projects, since Kotlin natively supports null-safety. More details are available in the Kotlin support documentation.

## JSR-305 meta-annotations

Spring annotations are meta-annotated with JSR-305 annotations (a dormant but widespread JSR). JSR-305 meta-annotations let tooling vendors like IDEA or Kotlin provide null-safety support in a generic way, without having to hard-code support for Spring annotations.

It is neither necessary nor recommended to add a JSR-305 dependency to the project classpath to take advantage of Spring’s null-safe APIs. Only projects such as Spring-based libraries that use null-safety annotations in their codebase should add `com.google.code.findbugs:jsr305:3.0.2` with the `compileOnly` Gradle configuration or Maven `provided` scope to avoid compiler warnings.

Java NIO provides `ByteBuffer`, but many libraries build their own byte buffer API on top, especially for network operations where reusing buffers and/or using direct buffers is beneficial for performance. For example, Netty has the `ByteBuf` hierarchy, Undertow uses XNIO, Jetty uses pooled byte buffers with a callback to be released, and so on. The `spring-core` module provides a set of abstractions to work with various byte buffer APIs as follows:

* `DataBufferFactory` abstracts the creation of a data buffer.
* `DataBuffer` represents a byte buffer, which may be pooled.
* `DataBufferUtils` offers utility methods for data buffers.
* Codecs decode or encode data buffer streams into higher level objects.

## `DataBufferFactory`

`DataBufferFactory` is used to create data buffers in one of two ways:

* Allocate a new data buffer, optionally specifying capacity upfront, if known, which is more efficient even though implementations of `DataBuffer` can grow and shrink on demand.
* Wrap an existing `byte[]` or `java.nio.ByteBuffer`, which decorates the given data with a `DataBuffer` implementation and that does not involve allocation.

Note that WebFlux applications do not create a `DataBufferFactory` directly but instead access it through the `ServerHttpResponse` or the `ClientHttpRequest` on the client side. The type of factory depends on the underlying client or server, e.g. `NettyDataBufferFactory` for Reactor Netty, `DefaultDataBufferFactory` for others.

## `DataBuffer`

The `DataBuffer` interface offers similar operations as `java.nio.ByteBuffer` but also brings a few additional benefits, some of which are inspired by the Netty `ByteBuf`. Below is a partial list of benefits:

* Read and write with independent positions, i.e. not requiring a call to `flip()` to alternate between read and write.
* Capacity expanded on demand as with `java.lang.StringBuilder`.
* Pooled buffers and reference counting via `PooledDataBuffer`.
* View a buffer as `java.nio.ByteBuffer`, `InputStream`, or `OutputStream`.
* Determine the index, or the last index, for a given byte.

## `PooledDataBuffer`

As explained in the Javadoc for `ByteBuffer`, byte buffers can be direct or non-direct. Direct buffers may reside outside the Java heap, which eliminates the need for copying for native I/O operations. That makes direct buffers particularly useful for receiving and sending data over a socket, but they’re also more expensive to create and release, which leads to the idea of pooling buffers.

`PooledDataBuffer` is an extension of `DataBuffer` that helps with reference counting, which is essential for byte buffer pooling. How does it work? When a `PooledDataBuffer` is allocated, the reference count is at 1. Calls to `retain()` increment the count, while calls to `release()` decrement it. As long as the count is above 0, the buffer is guaranteed not to be released. When the count is decreased to 0, the pooled buffer can be released, which in practice could mean the reserved memory for the buffer is returned to the memory pool.

Note that instead of operating on `PooledDataBuffer` directly, in most cases it’s better to use the convenience methods in `DataBufferUtils` that apply release or retain to a `DataBuffer` only if it is an instance of `PooledDataBuffer`.

## `DataBufferUtils`

`DataBufferUtils` offers a number of utility methods to operate on data buffers:

* Join a stream of data buffers into a single buffer, possibly with zero copy, e.g. via composite buffers, if that’s supported by the underlying byte buffer API.
* Turn `InputStream` or NIO `Channel` into `Flux<DataBuffer>`, and vice versa a `Publisher<DataBuffer>` into `OutputStream` or NIO `Channel`.
* Methods to release or retain a `DataBuffer` if the buffer is an instance of `PooledDataBuffer`.
* Skip or take from a stream of bytes until a specific byte count.

## Codecs

The `org.springframework.core.codec` package provides the following strategy interfaces:

* `Encoder` to encode `Publisher<T>` into a stream of data buffers.
* `Decoder` to decode `Publisher<DataBuffer>` into a stream of higher level objects.

The `spring-core` module provides `byte[]`, `ByteBuffer`, `DataBuffer`, `Resource`, and `String` encoder and decoder implementations. The `spring-web` module adds Jackson JSON, Jackson Smile, JAXB2, Protocol Buffers and other encoders and decoders. See Codecs in the WebFlux section.

## Using `DataBuffer`

When working with data buffers, special care must be taken to ensure buffers are released, since they may be pooled. We’ll use codecs to illustrate how that works, but the concepts apply more generally. Let’s see what codecs must do internally to manage data buffers.

A `Decoder` is the last to read input data buffers, before creating higher level objects, and therefore it must release them as follows:

* If a `Decoder` simply reads each input buffer and is ready to release it immediately, it can do so via `DataBufferUtils.release(dataBuffer)`.
* If a `Decoder` is using `Flux` or `Mono` operators such as `flatMap`, `reduce`, and others that prefetch and cache data items internally, or is using operators such as `filter`, `skip`, and others that leave out items, then `doOnDiscard(DataBuffer.class, DataBufferUtils::release)` must be added to the composition chain to ensure such buffers are released prior to being discarded, possibly also as a result of an error or cancellation signal.
* If a `Decoder` holds on to one or more data buffers in any other way, it must ensure they are released when fully read, or in case of an error or cancellation signal that takes place before the cached data buffers have been read and released.

Note that `DataBufferUtils#join` offers a safe and efficient way to aggregate a data buffer stream into a single data buffer. Likewise `skipUntilByteCount` and `takeUntilByteCount` are additional safe methods for decoders to use.

An `Encoder` allocates data buffers that others must read (and release). So an `Encoder` doesn’t have much to do. However, an `Encoder` must take care to release a data buffer if a serialization error occurs while populating the buffer with data. For example:

```
DataBuffer buffer = factory.allocateBuffer();
boolean release = true;
try {
	// serialize and populate buffer..
	release = false;
}
finally {
	if (release) {
		DataBufferUtils.release(buffer);
	}
}
return buffer;
```

```
val buffer = factory.allocateBuffer()
var release = true
try {
	// serialize and populate buffer..
	release = false
} finally {
	if (release) {
		DataBufferUtils.release(buffer)
	}
}
return buffer
```

The consumer of an `Encoder` is responsible for releasing the data buffers it receives. In a WebFlux application, the output of the `Encoder` is used to write to the HTTP server response, or to the client HTTP request, in which case releasing the data buffers is the responsibility of the code writing to the server response, or to the client request.

Note that when running on Netty, there are debugging options for troubleshooting buffer leaks.

Since Spring Framework 5.0, Spring comes with its own Commons Logging bridge implemented in the `spring-jcl` module. The implementation checks for the presence of the Log4j 2.x API and the SLF4J 1.7 API in the classpath and uses the first one of those found as the logging implementation, falling back to the Java platform’s core logging facilities (also known as JUL or `java.util.logging`) if neither Log4j 2.x nor SLF4J is available. Put Log4j 2.x or Logback (or another SLF4J provider) in your classpath, without any extra bridges, and let the framework auto-adapt to your choice. For further information, see the Spring Boot Logging Reference Documentation.

A `Log` implementation may be retrieved via `org.apache.commons.logging.LogFactory`, as in the following example:

```
public class MyBean {
	private final Log log = LogFactory.getLog(getClass());
	// ...
}
```

```
class MyBean {
	private val log = LogFactory.getLog(javaClass)
	// ...
}
```

This chapter covers Spring’s Ahead of Time (AOT) optimizations. For AOT support specific to integration tests, see Ahead of Time Support for Tests.

## Introduction to Ahead of Time Optimizations

Spring’s support for AOT optimizations is meant to inspect an `ApplicationContext` at build time and apply decisions and discovery logic that usually happens at runtime. Doing so allows building an application startup arrangement that is more straightforward and focused on a fixed set of features based mainly on the classpath and the `Environment`.

Applying such optimizations early implies the following restrictions:

* The classpath is fixed and fully defined at build time.
* The beans defined in your application cannot change at runtime, meaning:
  * `@Profile`, in particular profile-specific configuration, needs to be chosen at build time.
  * `Environment` properties that impact the presence of a bean (`@Conditional`) are only considered at build time.
* Bean definitions with instance suppliers (lambdas or method references) cannot be transformed ahead-of-time (see the related spring-framework#29555 issue).
* Make sure that the bean type is as precise as possible. See also the Best Practices section.

When these restrictions are in place, it becomes possible to perform ahead-of-time processing at build time and generate additional assets. A Spring AOT processed application typically generates:

* Java source code
* Bytecode (usually for dynamic proxies)
* `RuntimeHints` for the use of reflection, resource loading, serialization, and JDK proxies.

At the moment, AOT is focused on allowing Spring applications to be deployed as native images using GraalVM. We intend to support more JVM-based use cases in future generations.

## AOT engine overview

The entry point of the AOT engine for processing an `ApplicationContext` arrangement is `ApplicationContextAotGenerator`. It takes care of the following steps, based on a `GenericApplicationContext` that represents the application to optimize and a `GenerationContext`:

* Refresh an `ApplicationContext` for AOT processing. Contrary to a traditional refresh, this version only creates bean definitions, not bean instances.
* Invoke the available `BeanFactoryInitializationAotProcessor` implementations and apply their contributions against the `GenerationContext`. For instance, a core implementation iterates over all candidate bean definitions and generates the necessary code to restore the state of the `BeanFactory`.

Once this process completes, the `GenerationContext` will have been updated with the generated code, resources, and classes that are necessary for the application to run. The `RuntimeHints` instance can also be used to generate the relevant GraalVM native image configuration files.

`ApplicationContextAotGenerator#processAheadOfTime` returns the class name of the entry point that allows the context to be started with AOT optimizations.

Those steps are covered in greater detail in the sections below.

## Refresh for AOT Processing

Refresh for AOT processing is supported on all `GenericApplicationContext` implementations. An application context is created with any number of entry points, usually in the form of `@Configuration`-annotated classes. Let’s look at a basic example:

```
@Configuration(proxyBeanMethods=false)
@ComponentScan
@Import({DataSourceConfiguration.class, ContainerConfiguration.class})
public class MyApplication {
}
```

Starting this application with the regular runtime involves a number of steps, including classpath scanning, configuration class parsing, bean instantiation, and lifecycle callback handling. Refresh for AOT processing only applies a subset of what happens with a regular `refresh`. AOT processing can be triggered as follows:

```
RuntimeHints hints = new RuntimeHints();
AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext();
context.register(MyApplication.class);
context.refreshForAotProcessing(hints);
// ...
context.close();
```

In this mode, `BeanFactoryPostProcessor` implementations are invoked as usual. This includes configuration class parsing, import selectors, classpath scanning, etc. Such steps make sure that the `BeanRegistry` contains the relevant bean definitions for the application. If bean definitions are guarded by conditions (such as `@Profile`), these are discarded at this stage.

Because this mode does not actually create bean instances, `BeanPostProcessor` implementations are not invoked, except for specific variants that are relevant for AOT processing.
These are:

* `MergedBeanDefinitionPostProcessor` implementations post-process bean definitions to extract additional settings, such as `init` and `destroy` methods.
* `SmartInstantiationAwareBeanPostProcessor` implementations determine a more precise bean type, if necessary. This makes sure to create any proxy that will be required at runtime.

Once this part completes, the `BeanFactory` contains the bean definitions that are necessary for the application to run. It does not trigger bean instantiation but allows the AOT engine to inspect the beans that will be created at runtime.

## Bean Factory Initialization AOT Contributions

Components that want to participate in this step can implement the `BeanFactoryInitializationAotProcessor` interface. Each implementation can return an AOT contribution, based on the state of the bean factory.

An AOT contribution is a component that contributes generated code that reproduces a particular behavior. It can also contribute `RuntimeHints` to indicate the need for reflection, resource loading, serialization, or JDK proxies.

A `BeanFactoryInitializationAotProcessor` implementation can be registered in `META-INF/spring/aot.factories` with a key equal to the fully qualified name of the interface.

`BeanFactoryInitializationAotProcessor` can also be implemented directly by a bean. In this mode, the bean provides an AOT contribution equivalent to the feature it provides with a regular runtime. Consequently, such a bean is automatically excluded from the AOT-optimized context.

### Bean Registration AOT Contributions

A core `BeanFactoryInitializationAotProcessor` implementation is responsible for collecting the necessary contributions for each candidate `BeanDefinition`. It does so using a dedicated `BeanRegistrationAotProcessor`. This interface is used as follows:

* Implemented by a `BeanPostProcessor` bean, to replace its runtime behavior. For instance, `AutowiredAnnotationBeanPostProcessor` implements this interface to generate code that injects members annotated with `@Autowired`.
* Implemented by a type registered in `META-INF/spring/aot.factories` with a key equal to the fully qualified name of the interface. Typically used when the bean definition needs to be tuned for specific features of the core framework.

If no `BeanRegistrationAotProcessor` handles a particular registered bean, a default implementation processes it. This is the default behavior, since tuning the generated code for a bean definition should be restricted to corner cases.

Taking our previous example, let’s assume that `DataSourceConfiguration` is as follows:

```
@Configuration(proxyBeanMethods = false)
public class DataSourceConfiguration {

	@Bean
	public SimpleDataSource dataSource() {
		return new SimpleDataSource();
	}
}
```

Since there isn’t any particular condition on this class, `dataSourceConfiguration` and `dataSource` are identified as candidates. The AOT engine will convert the configuration class above to code similar to the following:

```
/**
 * Bean definitions for {@link DataSourceConfiguration}
 */
public class DataSourceConfiguration__BeanDefinitions {

	/**
	 * Get the bean definition for 'dataSourceConfiguration'
	 */
	public static BeanDefinition getDataSourceConfigurationBeanDefinition() {
		Class<?> beanType = DataSourceConfiguration.class;
		RootBeanDefinition beanDefinition = new RootBeanDefinition(beanType);
		beanDefinition.setInstanceSupplier(DataSourceConfiguration::new);
		return beanDefinition;
	}

	/**
	 * Get the bean instance supplier for 'dataSource'.
	 */
	private static BeanInstanceSupplier<SimpleDataSource> getDataSourceInstanceSupplier() {
		return BeanInstanceSupplier.<SimpleDataSource>forFactoryMethod(DataSourceConfiguration.class, "dataSource")
				.withGenerator((registeredBean) -> registeredBean.getBeanFactory().getBean(DataSourceConfiguration.class).dataSource());
	}

	/**
	 * Get the bean definition for 'dataSource'
	 */
	public static BeanDefinition getDataSourceBeanDefinition() {
		Class<?> beanType = SimpleDataSource.class;
		RootBeanDefinition beanDefinition = new RootBeanDefinition(beanType);
		beanDefinition.setInstanceSupplier(getDataSourceInstanceSupplier());
		return beanDefinition;
	}
}
```

The exact code generated may differ depending on the exact nature of your bean definitions.

The generated code above creates bean definitions equivalent to the `@Configuration` class, but in a direct way and without the use of reflection if at all possible. There is a bean definition for `dataSourceConfiguration` and one for `dataSource`. When a `dataSource` instance is required, a `BeanInstanceSupplier` is called. This supplier invokes the `dataSource()` method on the `dataSourceConfiguration` bean.

## Best Practices

The AOT engine is designed to handle as many use cases as possible, with no code change in applications. However, keep in mind that some optimizations are made at build time based on a static definition of the beans. This section lists the best practices that make sure your application is ready for AOT.

### Expose The Most Precise Bean Type

While your application may interact with an interface that a bean implements, it is still very important to declare the most precise type. The AOT engine performs additional checks on the bean type, such as detecting the presence of `@Autowired` members, or lifecycle callback methods.

For `@Configuration` classes, make sure that the return type of the factory `@Bean` method is as precise as possible. Consider the following example:

```
@Bean
public MyInterface myInterface() {
	return new MyImplementation();
}
```

In the example above, the declared type for the `myInterface` bean is `MyInterface`. None of the usual post-processing will take `MyImplementation` into account. For instance, if there is an annotated handler method on `MyImplementation` that the context should register, it won’t be detected upfront. The example above should be rewritten as follows:

```
@Bean
public MyImplementation myInterface() {
	return new MyImplementation();
}
```

If you are registering bean definitions programmatically, consider using `RootBeanDefinition`, as it allows you to specify a `ResolvableType` that handles generics.

### `FactoryBean`

`FactoryBean` should be used with care, as it introduces an intermediate layer in terms of bean type resolution that may not be conceptually necessary. As a rule of thumb, if the `FactoryBean` instance does not hold long-term state and is not needed at a later point in time at runtime, it should be replaced by a regular factory method, possibly with a `FactoryBean` adapter layer on top (for declarative configuration purposes).

If your `FactoryBean` implementation does not resolve the object type (i.e. `T`), extra care is necessary.
Consider the following example:

```
public class ClientFactoryBean<T extends AbstractClient> implements FactoryBean<T> {
	// ...
}
```

A concrete client declaration should provide a resolved generic for the client, as shown in the following example:

```
@Bean
public ClientFactoryBean<MyClient> myClient() {
	return new ClientFactoryBean<>(...);
}
```

If the `FactoryBean` bean definition is registered programmatically, make sure to follow these steps:

* Use `RootBeanDefinition`.
* Set the `beanClass` to the `FactoryBean` class so that AOT knows that it is an intermediate layer.
* Set the `ResolvableType` to a resolved generic, which makes sure the most precise type is exposed.

The following example showcases a basic definition:

```
RootBeanDefinition beanDefinition = new RootBeanDefinition(ClientFactoryBean.class);
beanDefinition.setTargetType(ResolvableType.forClassWithGenerics(ClientFactoryBean.class, MyClient.class));
// ...
registry.registerBeanDefinition("myClient", beanDefinition);
```

### JPA

The JPA persistence unit has to be known upfront for certain optimizations to apply. Consider the following basic example:

```
@Bean
LocalContainerEntityManagerFactoryBean customDBEntityManagerFactory(DataSource dataSource) {
	LocalContainerEntityManagerFactoryBean factoryBean = new LocalContainerEntityManagerFactoryBean();
	factoryBean.setDataSource(dataSource);
	factoryBean.setPackagesToScan("com.example.app");
	return factoryBean;
}
```

To make sure the scanning occurs ahead of time, a `PersistenceManagedTypes` bean must be declared and used by the factory bean definition, as shown by the following example:

```
@Bean
PersistenceManagedTypes persistenceManagedTypes(ResourceLoader resourceLoader) {
	return new PersistenceManagedTypesScanner(resourceLoader)
			.scan("com.example.app");
}

@Bean
LocalContainerEntityManagerFactoryBean customDBEntityManagerFactory(DataSource dataSource, PersistenceManagedTypes managedTypes) {
	LocalContainerEntityManagerFactoryBean factoryBean = new LocalContainerEntityManagerFactoryBean();
	factoryBean.setDataSource(dataSource);
	factoryBean.setManagedTypes(managedTypes);
	return factoryBean;
}
```

## Runtime Hints

Running an application as a native image requires additional information compared to a regular JVM runtime. For instance, GraalVM needs to know ahead of time if a component uses reflection. Similarly, classpath resources are not shipped in a native image unless specified explicitly. Consequently, if the application needs to load a resource, it must be referenced from the corresponding GraalVM native image configuration file.

The `RuntimeHints` API collects the need for reflection, resource loading, serialization, and JDK proxies at runtime. The following example makes sure that `config/app.properties` can be loaded from the classpath at runtime within a native image:

```
runtimeHints.resources().registerPattern("config/app.properties");
```

A number of contracts are handled automatically during AOT processing. For instance, the return type of a `@Controller` method is inspected, and relevant reflection hints are added if Spring detects that the type should be serialized (typically to JSON).

For cases that the core container cannot infer, you can register such hints programmatically. A number of convenient annotations are also provided for common use cases.

### `@ImportRuntimeHints`

`RuntimeHintsRegistrar` implementations allow you to get a callback to the `RuntimeHints` instance managed by the AOT engine.
Implementations of this interface can be registered using `@ImportRuntimeHints` on any Spring bean or `@Bean` factory method. `RuntimeHintsRegistrar` implementations are detected and invoked at build time.

```
import java.util.Locale;

import org.springframework.aot.hint.RuntimeHints;
import org.springframework.aot.hint.RuntimeHintsRegistrar;
import org.springframework.context.annotation.ImportRuntimeHints;
import org.springframework.core.io.ClassPathResource;
import org.springframework.stereotype.Component;

@Component
@ImportRuntimeHints(SpellCheckService.SpellCheckServiceRuntimeHints.class)
public class SpellCheckService {

	public void loadDictionary(Locale locale) {
		ClassPathResource resource = new ClassPathResource("dicts/" + locale.getLanguage() + ".txt");
		//...
	}

	static class SpellCheckServiceRuntimeHints implements RuntimeHintsRegistrar {

		@Override
		public void registerHints(RuntimeHints hints, ClassLoader classLoader) {
			hints.resources().registerPattern("dicts/*");
		}
	}
}
```

If at all possible, `@ImportRuntimeHints` should be used as close as possible to the component that requires the hints. This way, if the component is not contributed to the `BeanFactory`, the hints won’t be contributed either.

It is also possible to register a `RuntimeHintsRegistrar` implementation statically by adding an entry in `META-INF/spring/aot.factories` with a key equal to the fully qualified name of the `RuntimeHintsRegistrar` interface.

### `@Reflective`

`@Reflective` provides an idiomatic way to flag the need for reflection on an annotated element. For instance, `@EventListener` is meta-annotated with `@Reflective`, since the underlying implementation invokes the annotated method using reflection.

By default, only Spring beans are considered, and an invocation hint is registered for the annotated element. This can be tuned by specifying a custom `ReflectiveProcessor` implementation via the `@Reflective` annotation.

Library authors can reuse this annotation for their own purposes. If components other than Spring beans need to be processed, a `BeanFactoryInitializationAotProcessor` can detect the relevant types and use `ReflectiveRuntimeHintsRegistrar` to process them.

### `@RegisterReflectionForBinding`

`@RegisterReflectionForBinding` is a specialization of `@Reflective` that registers the need for serializing arbitrary types. A typical use case is the use of DTOs that the container cannot infer, such as using a web client within a method body.

`@RegisterReflectionForBinding` can be applied to any Spring bean at the class level, but it can also be applied directly to a method, field, or constructor to better indicate where the hints are actually required. The following example registers `Account` for serialization:

```
@Component
public class OrderService {

	@RegisterReflectionForBinding(Account.class)
	public void process(Order order) {
		// ...
	}
}
```

### Testing Runtime Hints

Spring Core also ships `RuntimeHintsPredicates`, a utility for checking that existing hints match a particular use case. This can be used in your own tests to validate that a `RuntimeHintsRegistrar` contains the expected results. We can write a test for our `SpellCheckService` and ensure that we will be able to load a dictionary at runtime:

```
@Test
void shouldRegisterResourceHints() {
	RuntimeHints hints = new RuntimeHints();
	new SpellCheckServiceRuntimeHints().registerHints(hints, getClass().getClassLoader());
	assertThat(RuntimeHintsPredicates.resource().forResource("dicts/en.txt"))
			.accepts(hints);
}
```

With `RuntimeHintsPredicates`, we can check for reflection, resource, serialization, or proxy generation hints. This approach works well for unit tests but implies that the runtime behavior of a component is well known.

You can learn more about the global runtime behavior of an application by running its test suite (or the app itself) with the GraalVM tracing agent.
This agent will record all relevant calls requiring GraalVM hints at runtime and write them out as JSON configuration files.

For more targeted discovery and testing, Spring Framework ships a dedicated module with core AOT testing utilities, `"org.springframework:spring-core-test"`. This module contains the RuntimeHints Agent, a Java agent that records all method invocations that are related to runtime hints and helps you to assert that a given `RuntimeHints` instance covers all recorded invocations. Let’s consider a piece of infrastructure for which we’d like to test the hints we’re contributing during the AOT processing phase:

```
import java.lang.reflect.Method;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import org.springframework.util.ClassUtils;

public class SampleReflection {

	private final Log logger = LogFactory.getLog(SampleReflection.class);

	public void performReflection() {
		try {
			Class<?> springVersion = ClassUtils.forName("org.springframework.core.SpringVersion", null);
			Method getVersion = ClassUtils.getMethod(springVersion, "getVersion");
			String version = (String) getVersion.invoke(null);
			logger.info("Spring version:" + version);
		}
		catch (Exception exc) {
			logger.error("reflection failed", exc);
		}
	}
}
```

We can then write a unit test (no native compilation required) that checks our contributed hints:

```
import java.util.List;

import org.junit.jupiter.api.Test;

import org.springframework.aot.hint.ExecutableMode;
import org.springframework.aot.hint.RuntimeHints;
import org.springframework.aot.test.agent.EnabledIfRuntimeHintsAgent;
import org.springframework.aot.test.agent.RuntimeHintsInvocations;
import org.springframework.aot.test.agent.RuntimeHintsRecorder;
import org.springframework.core.SpringVersion;

import static org.assertj.core.api.Assertions.assertThat;

// @EnabledIfRuntimeHintsAgent signals that the annotated test class or test
// method is only enabled if the RuntimeHintsAgent is loaded on the current JVM.
// It also tags tests with the "RuntimeHints" JUnit tag.
@EnabledIfRuntimeHintsAgent
class SampleReflectionRuntimeHintsTests {

	@Test
	void shouldRegisterReflectionHints() {
		RuntimeHints runtimeHints = new RuntimeHints();
		// Call a RuntimeHintsRegistrar that contributes hints like:
		runtimeHints.reflection().registerType(SpringVersion.class, typeHint ->
				typeHint.withMethod("getVersion", List.of(), ExecutableMode.INVOKE));

		// Invoke the relevant piece of code we want to test within a recording lambda
		RuntimeHintsInvocations invocations = RuntimeHintsRecorder.record(() -> {
			SampleReflection sample = new SampleReflection();
			sample.performReflection();
		});

		// assert that the recorded invocations are covered by the contributed hints
		assertThat(invocations).match(runtimeHints);
	}
}
```

If you forgot to contribute a hint, the test will fail and provide some details about the invocation:

```
org.springframework.docs.core.aot.hints.testing.SampleReflection performReflection
INFO: Spring version:6.0.0-SNAPSHOT

Missing <"ReflectionHints"> for invocation <java.lang.Class#forName>
with arguments ["org.springframework.core.SpringVersion",
    false,
    jdk.internal.loader.ClassLoaders$AppClassLoader@251a69d7].
Stacktrace:
<"org.springframework.util.ClassUtils#forName, Line 284
io.spring.runtimehintstesting.SampleReflection#performReflection, Line 19
io.spring.runtimehintstesting.SampleReflectionRuntimeHintsTests#lambda$shouldRegisterReflectionHints$0, Line 25
">
```

There are various ways to configure this Java agent in your build, so please refer to the documentation of your build tool and test execution plugin. The agent itself can be configured to instrument specific packages (by default, only `org.springframework` is instrumented). You’ll find more details in the Spring Framework `buildSrc` README file.

This part of the appendix lists XML schemas related to the core container.

## The `util` Schema

As the name implies, the `util` tags deal with common, utility configuration issues, such as configuring collections, referencing constants, and so forth. To use the tags in the `util` schema, you need to have the following preamble at the top of your Spring XML configuration file (the text in the snippet references the correct schema so that the tags in the `util` namespace are available to you):

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
		xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
		xmlns:util="http://www.springframework.org/schema/util"
		xsi:schemaLocation="
			http://www.springframework.org/schema/beans https://www.springframework.org/schema/beans/spring-beans.xsd
			http://www.springframework.org/schema/util https://www.springframework.org/schema/util/spring-util.xsd">

	<!-- bean definitions here -->

</beans>
```

### Using `<util:constant/>`

Without the `util` namespace, you would use a `FieldRetrievingFactoryBean` bean definition (described in the next section) to set the value of the `isolation` property on a bean to the value of the `java.sql.Connection.TRANSACTION_SERIALIZABLE` constant. This is all well and good, but it is verbose and (unnecessarily) exposes Spring’s internal plumbing to the end user. The following XML Schema-based version is more concise, clearly expresses the developer’s intent (“inject this constant value”), and it reads better:

```
<bean id="..." class="...">
	<property name="isolation">
		<util:constant static-field="java.sql.Connection.TRANSACTION_SERIALIZABLE"/>
	</property>
</bean>
```

# Setting a Bean Property or Constructor Argument from a Field Value

`FieldRetrievingFactoryBean` is a `FactoryBean` that retrieves a `static` or non-static field value. It is typically used for retrieving `public` `static` `final` constants, which may then be used to set a property value or constructor argument for another bean. The following example shows how a `static` field is exposed, by using the `staticField` property:

```
<bean id="myField" class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean">
	<property name="staticField" value="java.sql.Connection.TRANSACTION_SERIALIZABLE"/>
</bean>
```

There is also a convenience usage form where the `static` field is specified as the bean name, as the following example shows:

```
<bean id="java.sql.Connection.TRANSACTION_SERIALIZABLE"
		class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean"/>
```

This does mean that there is no longer any choice in what the bean `id` is (so any other bean that refers to it also has to use this longer name), but this form is very concise to define and very convenient to use as an inner bean, since the `id` does not have to be specified for the bean reference.

You can also access a non-static (instance) field of another bean, as described in the API documentation for the `FieldRetrievingFactoryBean` class.
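The following sketch shows the non-static case mentioned above; the `settings` bean and its public `defaultLocale` field are hypothetical placeholders:

```
<!-- retrieves the instance field 'defaultLocale' from the (hypothetical) 'settings' bean -->
<bean id="defaultLocale" class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean">
	<property name="targetObject" ref="settings"/>
	<property name="targetField" value="defaultLocale"/>
</bean>
```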
Injecting enumeration values into beans as either property or constructor arguments is easy to do in Spring. You do not actually have to do anything or know anything about the Spring internals (or even about classes such as the `FieldRetrievingFactoryBean`). The following example enumeration shows how easy injecting an enum value is:

```
package jakarta.persistence;

public enum PersistenceContextType {
	TRANSACTION,
	EXTENDED
}
```

```
package jakarta.persistence

enum class PersistenceContextType {
	TRANSACTION,
	EXTENDED
}
```

Now consider the following setter of type `PersistenceContextType` and the corresponding bean definition:

```
package example;

public class Client {

	private PersistenceContextType persistenceContextType;

	public void setPersistenceContextType(PersistenceContextType type) {
		this.persistenceContextType = type;
	}
}
```

```
package example

class Client {
	lateinit var persistenceContextType: PersistenceContextType
}
```

```
<bean class="example.Client">
	<property name="persistenceContextType" value="TRANSACTION"/>
</bean>
```

### Using `<util:property-path/>`

A `PropertyPathFactoryBean` bean definition can be used to create a bean (of type `int`) called `testBean.age` that has a value equal to the `age` property of the `testBean` bean. Now consider the following example, which adds a `<util:property-path/>` element (assuming a `testBean` bean whose `age` property is `10`):

```
<!-- results in 10, which is the value of property 'age' of bean 'testBean' -->
<util:property-path id="name" path="testBean.age"/>
```

The value of the `path` attribute of the `<property-path/>` element follows the form of `beanName.beanProperty`. In this case, it picks up the `age` property of the bean named `testBean`. The value of that `age` property is `10`.

# Using `PropertyPathFactoryBean` to Set a Bean Property or Constructor Argument

`PropertyPathFactoryBean` is a `FactoryBean` that evaluates a property path on a given target object. The target object can be specified directly or by a bean name. You can then use this value in another bean definition as a property value or constructor argument. The following example shows a path being used against another bean, by name:

```
<!-- results in 11, which is the value of property 'spouse.age' of bean 'person' -->
<bean id="theAge" class="org.springframework.beans.factory.config.PropertyPathFactoryBean">
	<property name="targetBeanName" value="person"/>
	<property name="propertyPath" value="spouse.age"/>
</bean>
```

In the following example, a path is evaluated against an inner bean:

```
<!-- results in 12, which is the value of property 'age' of the inner bean -->
<bean id="theAge" class="org.springframework.beans.factory.config.PropertyPathFactoryBean">
	<property name="targetObject">
		<bean class="org.springframework.beans.TestBean">
			<property name="age" value="12"/>
		</bean>
	</property>
	<property name="propertyPath" value="age"/>
</bean>
```

There is also a shortcut form, where the bean name is the property path (for example, a `PropertyPathFactoryBean` bean whose `id` is `person.age`). This form does mean that there is no choice in the name of the bean. Any reference to it also has to use the same `id`, which is the path. If used as an inner bean, there is no need to refer to it at all, as the following example shows:

```
<bean id="..." class="...">
	<property name="age">
		<bean id="person.age" class="org.springframework.beans.factory.config.PropertyPathFactoryBean"/>
	</property>
</bean>
```

You can specifically set the result type in the actual definition. This is not necessary for most use cases, but it can sometimes be useful. See the javadoc for more info on this feature.
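To illustrate the result-type note above, the following sketch (reusing the `person` bean from the earlier example) sets the `resultType` property explicitly:

```
<!-- evaluates 'spouse.age' and exposes the result as a java.lang.Integer -->
<bean id="theAgeAsInteger" class="org.springframework.beans.factory.config.PropertyPathFactoryBean">
	<property name="targetBeanName" value="person"/>
	<property name="propertyPath" value="spouse.age"/>
	<property name="resultType" value="java.lang.Integer"/>
</bean>
```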
### Using `<util:properties/>`

Consider the following example:

```
<!-- creates a java.util.Properties instance with values loaded from the supplied location -->
<bean id="jdbcConfiguration" class="org.springframework.beans.factory.config.PropertiesFactoryBean">
	<property name="location" value="classpath:com/foo/jdbc-production.properties"/>
</bean>
```

The preceding configuration uses a Spring `FactoryBean` implementation (the `PropertiesFactoryBean`) to instantiate a `java.util.Properties` instance with values loaded from the supplied `Resource` location. The following example uses a `util:properties` element to make a more concise representation:

```
<!-- creates a java.util.Properties instance with values loaded from the supplied location -->
<util:properties id="jdbcConfiguration" location="classpath:com/foo/jdbc-production.properties"/>
```

### Using `<util:list/>`

Consider the following example:

```
<!-- creates a java.util.List instance with values loaded from the supplied 'sourceList' -->
<bean id="emails" class="org.springframework.beans.factory.config.ListFactoryBean">
	<property name="sourceList">
		<list>
			<value>[email protected]</value>
			<value>[email protected]</value>
			<value>[email protected]</value>
			<value>[email protected]</value>
		</list>
	</property>
</bean>
```

The preceding configuration uses a Spring `FactoryBean` implementation (the `ListFactoryBean`) to create a `java.util.List` instance and initialize it with values taken from the supplied `sourceList`. The following example uses a `<util:list/>` element to make a more concise representation:

```
<!-- creates a java.util.List instance with the supplied values -->
<util:list id="emails">
	<value>[email protected]</value>
	<value>[email protected]</value>
	<value>[email protected]</value>
	<value>[email protected]</value>
</util:list>
```

You can also explicitly control the exact type of `List` that is instantiated and populated, by using the `list-class` attribute on the `<util:list/>` element:

```
<util:list id="emails" list-class="java.util.LinkedList">
	<value>[email protected]</value>
	<value>[email protected]</value>
	<value>[email protected]</value>
	<value>[email protected]</value>
</util:list>
```

If no `list-class` attribute is supplied, the container chooses a `List` implementation.

### Using `<util:map/>`

Consider the following example:

```
<!-- creates a java.util.Map instance with values loaded from the supplied 'sourceMap' -->
<bean id="emails" class="org.springframework.beans.factory.config.MapFactoryBean">
	<property name="sourceMap">
		<map>
			<entry key="pechorin" value="[email protected]"/>
			<entry key="raskolnikov" value="[email protected]"/>
			<entry key="stavrogin" value="[email protected]"/>
			<entry key="porfiry" value="[email protected]"/>
		</map>
	</property>
</bean>
```

The preceding configuration uses a Spring `FactoryBean` implementation (the `MapFactoryBean`) to create a `java.util.Map` instance initialized with key-value pairs taken from the supplied `'sourceMap'`. The following example uses a `<util:map/>` element to make a more concise representation:

```
<!-- creates a java.util.Map instance with the supplied key-value pairs -->
<util:map id="emails">
	<entry key="pechorin" value="[email protected]"/>
	<entry key="raskolnikov" value="[email protected]"/>
	<entry key="stavrogin" value="[email protected]"/>
	<entry key="porfiry" value="[email protected]"/>
</util:map>
```

You can also explicitly control the exact type of `Map` that is instantiated and populated, by using the `map-class` attribute:

```
<util:map id="emails" map-class="java.util.TreeMap">
	<entry key="pechorin" value="[email protected]"/>
	<entry key="raskolnikov" value="[email protected]"/>
	<entry key="stavrogin" value="[email protected]"/>
	<entry key="porfiry" value="[email protected]"/>
</util:map>
```

If no `map-class` attribute is supplied, the container chooses a `Map` implementation.
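For completeness, a collection defined this way is an ordinary bean and can be injected like any other. The following sketch assumes a hypothetical `com.example.MailingList` class with a `java.util.Map`-typed `emails` property:

```
<bean id="mailingList" class="com.example.MailingList">
	<property name="emails" ref="emails"/>
</bean>
```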
### Using `<util:set/>`

Consider the following example:

```
<!-- creates a java.util.Set instance with values loaded from the supplied 'sourceSet' -->
<bean id="emails" class="org.springframework.beans.factory.config.SetFactoryBean">
	<property name="sourceSet">
		<set>
			<value>[email protected]</value>
			<value>[email protected]</value>
			<value>[email protected]</value>
			<value>[email protected]</value>
		</set>
	</property>
</bean>
```

The preceding configuration uses a Spring `FactoryBean` implementation (the `SetFactoryBean`) to create a `java.util.Set` instance initialized with values taken from the supplied `sourceSet`. The following example uses a `<util:set/>` element to make a more concise representation:

```
<!-- creates a java.util.Set instance with the supplied values -->
<util:set id="emails">
	<value>[email protected]</value>
	<value>[email protected]</value>
	<value>[email protected]</value>
	<value>[email protected]</value>
</util:set>
```

You can also explicitly control the exact type of `Set` that is instantiated and populated by using the `set-class` attribute on the `<util:set/>` element. For example, if we really need a `java.util.TreeSet` to be instantiated, we could use the following configuration:

```
<util:set id="emails" set-class="java.util.TreeSet">
	<value>[email protected]</value>
	<value>[email protected]</value>
	<value>[email protected]</value>
	<value>[email protected]</value>
</util:set>
```

If no `set-class` attribute is supplied, the container chooses a `Set` implementation.

## The `aop` Schema

The `aop` tags deal with configuring all things AOP in Spring, including Spring’s own proxy-based AOP framework and Spring’s integration with the AspectJ AOP framework. These tags are comprehensively covered in the chapter entitled Aspect Oriented Programming with Spring.

In the interest of completeness, to use the tags in the `aop` schema, you need to have a preamble at the top of your Spring XML configuration file that references the correct schema so that the tags in the `aop` namespace are available to you (that is, an `xmlns:aop="http://www.springframework.org/schema/aop"` declaration together with the corresponding `xsi:schemaLocation` entry).

## The `context` Schema

The `context` tags deal with `ApplicationContext` configuration that relates to plumbing — that is, not usually beans that are important to an end-user but rather beans that do a lot of the “grunt” work in Spring, such as `BeanFactoryPostProcessor` implementations. As with the other schemas, the required preamble declares `xmlns:context="http://www.springframework.org/schema/context"` and the corresponding `xsi:schemaLocation` entry so that the elements in the `context` namespace are available to you.

### `<property-placeholder/>`

This element activates the replacement of `${…​}` placeholders, which are resolved against a specified properties file (as a Spring resource location). This element is a convenience mechanism that sets up a `PropertySourcesPlaceholderConfigurer` for you. If you need more control over the specific setup, you can explicitly define it as a bean yourself.

### `<annotation-config/>`

This element activates the Spring infrastructure to detect annotations in bean classes:

* Spring’s `@Configuration` model
* `@Autowired` / `@Inject`, `@Value`, and `@Lookup`
* JSR-250’s `@Resource`, `@PostConstruct`, and `@PreDestroy` (if available)
* JAX-WS’s `@WebServiceRef` and EJB 3’s `@EJB` (if available)
* JPA’s `@PersistenceContext` and `@PersistenceUnit` (if available)
* Spring’s `@EventListener`

Alternatively, you can choose to explicitly activate the individual `BeanPostProcessors` for those annotations. This element does not activate processing of Spring’s `@Transactional` annotation; you can use the `<tx:annotation-driven/>` element for that purpose.

### `<component-scan/>`

This element is detailed in the section on annotation-based container configuration.
### `<load-time-weaver/>`

This element is detailed in the section on load-time weaving with AspectJ in the Spring Framework.

### `<spring-configured/>`

This element is detailed in the section on using AspectJ to dependency inject domain objects with Spring.

### `<mbean-export/>`

This element is detailed in the section on configuring annotation-based MBean export.

## The Beans Schema

Last but not least, we have the elements in the `beans` schema. These elements have been in Spring since the very dawn of the framework. Examples of the various elements in the `beans` schema are not shown here because they are quite comprehensively covered in dependencies and configuration in detail (and, indeed, in that entire chapter).

Note that you can add zero or more key-value pairs to `<bean/>` XML definitions. What, if anything, is done with this extra metadata is totally up to your own custom logic (and so is typically only of use if you write your own custom elements, as described in the appendix entitled XML Schema Authoring). The following example shows the `<meta/>` element in the context of a surrounding `<bean/>` (note that, without any logic to interpret it, the metadata is effectively useless as it stands):

```
<bean id="foo" class="x.y.Foo">
	<meta key="cacheName" value="foo"/> (1)
	<property name="name" value="Rick"/>
</bean>
```

1 | This is the example `meta` element.

In the case of the preceding example, you could assume that there is some logic that consumes the bean definition and sets up some caching infrastructure that uses the supplied metadata.

Since version 2.0, Spring has featured a mechanism for adding schema-based extensions to the basic Spring XML format for defining and configuring beans. This section covers how to write your own custom XML bean definition parsers and integrate such parsers into the Spring IoC container.

To facilitate authoring configuration files that use a schema-aware XML editor, Spring’s extensible XML configuration mechanism is based on XML Schema. If you are not familiar with Spring’s current XML configuration extensions that come with the standard Spring distribution, you should first read the previous section on XML Schemas.

To create new XML configuration extensions:

1. Author an XML schema to describe your custom element(s).
2. Code a custom `NamespaceHandler` implementation.
3. Code one or more `BeanDefinitionParser` implementations (this is where the real work is done).
4. Register your new artifacts with Spring.

For a unified example, we create an XML extension (a custom XML element) that lets us configure objects of the type `SimpleDateFormat` (from the `java.text` package). When we are done, we will be able to define bean definitions of type `SimpleDateFormat` by using a custom `<myns:dateformat/>` element. (Much more detailed examples follow later in this appendix. The intent of this first simple example is to walk you through the basic steps of making a custom extension.)

## Authoring the Schema

Creating an XML configuration extension for use with Spring’s IoC container starts with authoring an XML Schema to describe the extension.
For our example, we use the following schema to configure `SimpleDateFormat` objects:

```
<!-- myns.xsd (inside package org/springframework/samples/xml) -->

<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns="http://www.mycompany.example/schema/myns"
		xmlns:xsd="http://www.w3.org/2001/XMLSchema"
		xmlns:beans="http://www.springframework.org/schema/beans"
		targetNamespace="http://www.mycompany.example/schema/myns"
		elementFormDefault="qualified"
		attributeFormDefault="unqualified">

	<xsd:import namespace="http://www.springframework.org/schema/beans"/>

	<xsd:element name="dateformat">
		<xsd:complexType>
			<xsd:complexContent>
				<xsd:extension base="beans:identifiedType"> (1)
					<xsd:attribute name="lenient" type="xsd:boolean"/>
					<xsd:attribute name="pattern" type="xsd:string" use="required"/>
				</xsd:extension>
			</xsd:complexContent>
		</xsd:complexType>
	</xsd:element>

</xsd:schema>
```

(1) The indicated line contains an extension base for all identifiable tags (meaning they have an `id` attribute that we can use as the bean identifier in the container).

The preceding schema lets us configure `SimpleDateFormat` objects directly in an XML application context file by using the `<myns:dateformat/>` element. Note that, after we have created the infrastructure classes, a declaration such as `<myns:dateformat id="dateFormat" lenient="true" pattern="yyyy-MM-dd HH:mm"/>` is essentially the same as the following XML snippet:

```
<bean id="dateFormat" class="java.text.SimpleDateFormat">
	<constructor-arg value="yyyy-MM-dd HH:mm"/>
	<property name="lenient" value="true"/>
</bean>
```

The second of the two preceding snippets creates a bean in the container (identified by the name `dateFormat`, of type `SimpleDateFormat`) with a couple of properties set.

The schema-based approach to creating configuration formats allows for tight integration with an IDE that has a schema-aware XML editor. By using a properly authored schema, you can use autocompletion to let a user choose between several configuration options defined in the enumeration.

## Coding a `NamespaceHandler`

In addition to the schema, we need a `NamespaceHandler` to parse all elements of this specific namespace that Spring encounters while parsing configuration files. For this example, the `NamespaceHandler` should take care of the parsing of the `myns:dateformat` element.

The `NamespaceHandler` interface features three methods:

* `init()`: Allows for initialization of the `NamespaceHandler` and is called by Spring before the handler is used.
* `BeanDefinition parse(Element, ParserContext)`: Called when Spring encounters a top-level element (not nested inside a bean definition or a different namespace). This method can itself register bean definitions, return a bean definition, or both.
* `BeanDefinitionHolder decorate(Node, BeanDefinitionHolder, ParserContext)`: Called when Spring encounters an attribute or nested element of a different namespace. The decoration of one or more bean definitions is used (for example) with the scopes that Spring supports. We start by highlighting a simple example, without using decoration, after which we show decoration in a somewhat more advanced example.

Although you can code your own `NamespaceHandler` for the entire namespace (and hence provide code that parses each and every element in the namespace), it is often the case that each top-level XML element in a Spring XML configuration file results in a single bean definition (as in our case, where a single `<myns:dateformat/>` element results in a single `SimpleDateFormat` bean definition).
Spring features a number of convenience classes that support this scenario. In the following example, we use the `NamespaceHandlerSupport` class:

```
public class MyNamespaceHandler extends NamespaceHandlerSupport {

	public void init() {
		registerBeanDefinitionParser("dateformat", new SimpleDateFormatBeanDefinitionParser());
	}
}
```

```
class MyNamespaceHandler : NamespaceHandlerSupport() {

	override fun init() {
		registerBeanDefinitionParser("dateformat", SimpleDateFormatBeanDefinitionParser())
	}
}
```

You may notice that there is not actually a whole lot of parsing logic in this class. Indeed, the `NamespaceHandlerSupport` class has a built-in notion of delegation. It supports the registration of any number of `BeanDefinitionParser` instances, to which it delegates when it needs to parse an element in its namespace. This clean separation of concerns lets a `NamespaceHandler` handle the orchestration of the parsing of all of the custom elements in its namespace while delegating to `BeanDefinitionParsers` to do the grunt work of the XML parsing. This means that each `BeanDefinitionParser` contains only the logic for parsing a single custom element, as we can see in the next step.

## Coding a `BeanDefinitionParser`

A `BeanDefinitionParser` is used if the `NamespaceHandler` encounters an XML element of the type that has been mapped to the specific bean definition parser (`dateformat` in this case). In other words, the `BeanDefinitionParser` is responsible for parsing one distinct top-level XML element defined in the schema. In the parser, we have access to the XML element (and thus to its subelements, too) so that we can parse our custom XML content, as you can see in the following example:

```
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.xml.AbstractSingleBeanDefinitionParser;
import org.springframework.util.StringUtils;
import org.w3c.dom.Element;

import java.text.SimpleDateFormat;

public class SimpleDateFormatBeanDefinitionParser extends AbstractSingleBeanDefinitionParser { (1)

	protected Class getBeanClass(Element element) {
		return SimpleDateFormat.class; (2)
	}

	protected void doParse(Element element, BeanDefinitionBuilder bean) {
		// this will never be null since the schema explicitly requires that a value be supplied
		String pattern = element.getAttribute("pattern");
		bean.addConstructorArgValue(pattern);

		// this however is an optional property
		String lenient = element.getAttribute("lenient");
		if (StringUtils.hasText(lenient)) {
			bean.addPropertyValue("lenient", Boolean.valueOf(lenient));
		}
	}
}
```

```
import org.springframework.beans.factory.support.BeanDefinitionBuilder
import org.springframework.beans.factory.xml.AbstractSingleBeanDefinitionParser
import org.springframework.util.StringUtils
import org.w3c.dom.Element
import java.text.SimpleDateFormat

class SimpleDateFormatBeanDefinitionParser : AbstractSingleBeanDefinitionParser() { (1)

	override fun getBeanClass(element: Element): Class<*>? { (2)
		return SimpleDateFormat::class.java
	}

	override fun doParse(element: Element, bean: BeanDefinitionBuilder) {
		// this will never be null since the schema explicitly requires that a value be supplied
		val pattern = element.getAttribute("pattern")
		bean.addConstructorArgValue(pattern)

		// this however is an optional property
		val lenient = element.getAttribute("lenient")
		if (StringUtils.hasText(lenient)) {
			bean.addPropertyValue("lenient", java.lang.Boolean.valueOf(lenient))
		}
	}
}
```

(1) We use the Spring-provided `AbstractSingleBeanDefinitionParser` to handle a lot of the basic grunt work of creating a single `BeanDefinition`.
(2) We supply the `AbstractSingleBeanDefinitionParser` superclass with the type that our single `BeanDefinition` represents.

In this simple case, this is all that we need to do.
The creation of our single `BeanDefinition` is handled by the `AbstractSingleBeanDefinitionParser` superclass, as is the extraction and setting of the bean definition's unique identifier.

## Registering the Handler and the Schema

The coding is finished. All that remains to be done is to make the Spring XML parsing infrastructure aware of our custom element. We do so by registering our custom `NamespaceHandler` and custom XSD file in two special-purpose properties files. These properties files are both placed in a `META-INF` directory in your application and can, for example, be distributed alongside your binary classes in a JAR file. The Spring XML parsing infrastructure automatically picks up your new extension by consuming these special properties files, the formats of which are detailed in the next two sections.

### Writing 'META-INF/spring.handlers'

The properties file called `spring.handlers` contains a mapping of XML Schema URIs to namespace handler classes. For our example, we need to write the following:

> http\://www.mycompany.example/schema/myns=org.springframework.samples.xml.MyNamespaceHandler

(The `:` character is a valid delimiter in the Java properties format, so the `:` character in the URI needs to be escaped with a backslash.)

The first part (the key) of the key-value pair is the URI associated with your custom namespace extension and needs to match exactly the value of the `targetNamespace` attribute, as specified in your custom XSD schema.

### Writing 'META-INF/spring.schemas'

The properties file called `spring.schemas` contains a mapping of XML Schema locations (referred to, along with the schema declaration, in XML files that use the schema as part of the `xsi:schemaLocation` attribute) to classpath resources. This file is needed to prevent Spring from absolutely having to use a default `EntityResolver` that requires Internet access to retrieve the schema file. If you specify the mapping in this properties file, Spring searches for the schema (in this case, `myns.xsd` in the `org.springframework.samples.xml` package) on the classpath. The following snippet shows the line we need to add for our custom schema:

> http\://www.mycompany.example/schema/myns/myns.xsd=org/springframework/samples/xml/myns.xsd

(Remember that the `:` character must be escaped.)

You are encouraged to deploy your XSD file (or files) right alongside the `NamespaceHandler` and `BeanDefinitionParser` classes on the classpath.

## Using a Custom Extension in Your Spring XML Configuration

Using a custom extension that you yourself have implemented is no different from using one of the "custom" extensions that Spring provides. The following example uses the custom `<dateformat/>` element developed in the previous steps in a Spring XML configuration file:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:myns="http://www.mycompany.example/schema/myns"
	xsi:schemaLocation="
		http://www.springframework.org/schema/beans https://www.springframework.org/schema/beans/spring-beans.xsd
		http://www.mycompany.example/schema/myns http://www.mycompany.example/schema/myns/myns.xsd">

	<!-- as a top-level bean -->
	<myns:dateformat id="defaultDateFormat" pattern="yyyy-MM-dd HH:mm" lenient="true"/> (1)

	<bean id="jobDetailTemplate" abstract="true">
		<property name="dateFormat">
			<!-- as an inner bean -->
			<myns:dateformat pattern="HH:mm MM-dd-yyyy"/>
		</property>
	</bean>

</beans>
```

(1) Our custom bean.
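To show that the custom element produces an ordinary bean, the following minimal sketch looks the bean up and uses it; it assumes the configuration above has been saved as a hypothetical `beans.xml` at the root of the classpath:

```
import java.text.SimpleDateFormat;
import java.util.Date;

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class DateFormatDemo {

	public static void main(String[] args) {
		// load the XML configuration that declares <myns:dateformat/>
		ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");
		try {
			// the custom element registered a plain SimpleDateFormat bean
			SimpleDateFormat dateFormat = context.getBean("defaultDateFormat", SimpleDateFormat.class);
			System.out.println(dateFormat.format(new Date()));
		}
		finally {
			context.close();
		}
	}
}
```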
## More Detailed Examples

This section presents some more detailed examples of custom XML extensions.

### Nesting Custom Elements within Custom Elements

The example presented in this section shows how to write the various artifacts required to satisfy a target of the following configuration:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:foo="http://www.foo.example/schema/component"
	xsi:schemaLocation="
		http://www.springframework.org/schema/beans https://www.springframework.org/schema/beans/spring-beans.xsd
		http://www.foo.example/schema/component http://www.foo.example/schema/component/component.xsd">

	<foo:component id="bionic-family" name="Bionic-1">
		<foo:component name="Mother-1">
			<foo:component name="Karate-1"/>
			<foo:component name="Sport-1"/>
		</foo:component>
		<foo:component name="Rock-1"/>
	</foo:component>

</beans>
```

The preceding configuration nests custom extensions within each other. The class that is actually configured by the `<foo:component/>` element is the `Component` class (shown in the next example). Notice how the `Component` class does not expose a setter method for the `components` property. This makes it hard (or rather impossible) to configure a bean definition for the `Component` class by using setter injection. The following listing shows the `Component` class:

```
import java.util.ArrayList;
import java.util.List;

public class Component {

	private String name;
	private List<Component> components = new ArrayList<Component>();

	// there is no setter method for the 'components' property
	public void addComponent(Component component) {
		this.components.add(component);
	}

	public List<Component> getComponents() {
		return components;
	}

	public String getName() {
		return name;
	}

	public void setName(String name) {
		this.name = name;
	}
}
```

```
class Component {

	var name: String? = null
	private val components = ArrayList<Component>()

	// there is no setter method for the 'components' property
	fun addComponent(component: Component) {
		this.components.add(component)
	}

	fun getComponents(): List<Component> {
		return components
	}
}
```

The typical solution to this issue is to create a custom `FactoryBean` that exposes a setter property for the `components` property. The following listing shows such a custom `FactoryBean`:

```
import java.util.List;

import org.springframework.beans.factory.FactoryBean;

public class ComponentFactoryBean implements FactoryBean<Component> {

	private Component parent;
	private List<Component> children;

	public void setParent(Component parent) {
		this.parent = parent;
	}

	public void setChildren(List<Component> children) {
		this.children = children;
	}

	public Component getObject() throws Exception {
		if (this.children != null && this.children.size() > 0) {
			for (Component child : children) {
				this.parent.addComponent(child);
			}
		}
		return this.parent;
	}

	public Class<Component> getObjectType() {
		return Component.class;
	}

	public boolean isSingleton() {
		return true;
	}
}
```

```
import org.springframework.beans.factory.FactoryBean

class ComponentFactoryBean : FactoryBean<Component> {

	private var parent: Component? = null
	private var children: List<Component>? = null

	fun setParent(parent: Component) {
		this.parent = parent
	}

	fun setChildren(children: List<Component>) {
		this.children = children
	}

	override fun getObject(): Component? {
		if (this.children != null && this.children!!.isNotEmpty()) {
			for (child in children!!) {
				this.parent!!.addComponent(child)
			}
		}
		return this.parent
	}

	override fun getObjectType(): Class<Component>? {
		return Component::class.java
	}

	override fun isSingleton(): Boolean {
		return true
	}
}
```
This works nicely, but it exposes a lot of Spring plumbing to the end user. What we are going to do is write a custom extension that hides away all of this Spring plumbing. If we stick to the steps described previously, we start off by creating the XSD schema to define the structure of our custom tag, as the following listing shows:

```
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns="http://www.foo.example/schema/component"
		xmlns:xsd="http://www.w3.org/2001/XMLSchema"
		targetNamespace="http://www.foo.example/schema/component"
		elementFormDefault="qualified"
		attributeFormDefault="unqualified">

	<xsd:element name="component">
		<xsd:complexType>
			<xsd:choice minOccurs="0" maxOccurs="unbounded">
				<xsd:element ref="component"/>
			</xsd:choice>
			<xsd:attribute name="id" type="xsd:ID"/>
			<xsd:attribute name="name" use="required" type="xsd:string"/>
		</xsd:complexType>
	</xsd:element>

</xsd:schema>
```

Again following the process described earlier, we then create a custom `NamespaceHandler`:

```
public class ComponentNamespaceHandler extends NamespaceHandlerSupport {

	public void init() {
		registerBeanDefinitionParser("component", new ComponentBeanDefinitionParser());
	}
}
```

```
class ComponentNamespaceHandler : NamespaceHandlerSupport() {

	override fun init() {
		registerBeanDefinitionParser("component", ComponentBeanDefinitionParser())
	}
}
```

Next up is the custom `BeanDefinitionParser`. Remember that we are creating a `BeanDefinition` that describes a `ComponentFactoryBean`. The following listing shows our custom `BeanDefinitionParser` implementation:

```
import java.util.List;

import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.support.AbstractBeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.ManagedList;
import org.springframework.beans.factory.xml.AbstractBeanDefinitionParser;
import org.springframework.beans.factory.xml.ParserContext;
import org.springframework.util.xml.DomUtils;
import org.w3c.dom.Element;

public class ComponentBeanDefinitionParser extends AbstractBeanDefinitionParser {

	protected AbstractBeanDefinition parseInternal(Element element, ParserContext parserContext) {
		return parseComponentElement(element);
	}

	private static AbstractBeanDefinition parseComponentElement(Element element) {
		BeanDefinitionBuilder factory = BeanDefinitionBuilder.rootBeanDefinition(ComponentFactoryBean.class);
		factory.addPropertyValue("parent", parseComponent(element));

		List<Element> childElements = DomUtils.getChildElementsByTagName(element, "component");
		if (childElements != null && childElements.size() > 0) {
			parseChildComponents(childElements, factory);
		}

		return factory.getBeanDefinition();
	}

	private static BeanDefinition parseComponent(Element element) {
		BeanDefinitionBuilder component = BeanDefinitionBuilder.rootBeanDefinition(Component.class);
		component.addPropertyValue("name", element.getAttribute("name"));
		return component.getBeanDefinition();
	}

	private static void parseChildComponents(List<Element> childElements, BeanDefinitionBuilder factory) {
		ManagedList<BeanDefinition> children = new ManagedList<>(childElements.size());
		for (Element element : childElements) {
			children.add(parseComponentElement(element));
		}
		factory.addPropertyValue("children", children);
	}
}
```

```
import org.springframework.beans.factory.config.BeanDefinition
import org.springframework.beans.factory.support.AbstractBeanDefinition
import org.springframework.beans.factory.support.BeanDefinitionBuilder
import org.springframework.beans.factory.support.ManagedList
import org.springframework.beans.factory.xml.AbstractBeanDefinitionParser
import org.springframework.beans.factory.xml.ParserContext
import org.springframework.util.xml.DomUtils
import org.w3c.dom.Element

class ComponentBeanDefinitionParser : AbstractBeanDefinitionParser() {

	override fun parseInternal(element: Element, parserContext: ParserContext): AbstractBeanDefinition? {
		return parseComponentElement(element)
	}

	private fun parseComponentElement(element: Element): AbstractBeanDefinition {
		val factory = BeanDefinitionBuilder.rootBeanDefinition(ComponentFactoryBean::class.java)
		factory.addPropertyValue("parent", parseComponent(element))

		val childElements = DomUtils.getChildElementsByTagName(element, "component")
		if (childElements != null && childElements.size > 0) {
			parseChildComponents(childElements, factory)
		}

		return factory.getBeanDefinition()
	}

	private fun parseComponent(element: Element): BeanDefinition {
		val component = BeanDefinitionBuilder.rootBeanDefinition(Component::class.java)
		component.addPropertyValue("name", element.getAttribute("name"))
		return component.beanDefinition
	}

	private fun parseChildComponents(childElements: List<Element>, factory: BeanDefinitionBuilder) {
		val children = ManagedList<BeanDefinition>(childElements.size)
		for (element in childElements) {
			children.add(parseComponentElement(element))
		}
		factory.addPropertyValue("children", children)
	}
}
```

Finally, the various artifacts need to be registered with the Spring XML infrastructure, by modifying the `META-INF/spring.handlers` and `META-INF/spring.schemas` files, as follows:

> # in 'META-INF/spring.handlers'
> http\://www.foo.example/schema/component=com.foo.ComponentNamespaceHandler

> # in 'META-INF/spring.schemas'
> http\://www.foo.example/schema/component/component.xsd=com/foo/component.xsd

### Custom Attributes on "Normal" Elements

Writing your own custom parser and the associated artifacts is not hard. However, it is sometimes not the right thing to do. Consider a scenario where you need to add metadata to already existing bean definitions. In this case, you certainly do not want to have to write your own entire custom extension. Rather, you merely want to add an additional attribute to the existing bean definition element.

By way of another example, suppose that you define a bean definition for a service object that (unknown to it) accesses a clustered JCache, and you want to ensure that the named JCache instance is eagerly started within the surrounding cluster. The following listing shows such a definition:

```
<bean id="checkingAccountService" class="com.foo.DefaultCheckingAccountService"
		jcache:cache-name="checking.account">
	<!-- other dependencies here... -->
</bean>
```

We can then create another `BeanDefinition` when the `'jcache:cache-name'` attribute is parsed. This `BeanDefinition` then initializes the named JCache for us. We can also modify the existing `BeanDefinition` for the `'checkingAccountService'` so that it has a dependency on this new JCache-initializing `BeanDefinition`. The following listing shows our `JCacheInitializer`:

```
public class JCacheInitializer {

	private final String name;

	public JCacheInitializer(String name) {
		this.name = name;
	}

	public void initialize() {
		// lots of JCache API calls to initialize the named cache...
	}
}
```

```
class JCacheInitializer(private val name: String) {

	fun initialize() {
		// lots of JCache API calls to initialize the named cache...
	}
}
```

Now we can move onto the custom extension.
First, we need to author the XSD schema that describes the custom attribute, as follows:

```
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns="http://www.foo.example/schema/jcache"
		xmlns:xsd="http://www.w3.org/2001/XMLSchema"
		targetNamespace="http://www.foo.example/schema/jcache"
		elementFormDefault="qualified">

	<xsd:attribute name="cache-name" type="xsd:string"/>

</xsd:schema>
```

Next, we need to create the associated `NamespaceHandler`, as follows:

```
public class JCacheNamespaceHandler extends NamespaceHandlerSupport {

	public void init() {
		super.registerBeanDefinitionDecoratorForAttribute("cache-name",
				new JCacheInitializingBeanDefinitionDecorator());
	}
}
```

```
class JCacheNamespaceHandler : NamespaceHandlerSupport() {

	override fun init() {
		super.registerBeanDefinitionDecoratorForAttribute("cache-name",
				JCacheInitializingBeanDefinitionDecorator())
	}
}
```

Next, we need to create the parser. Note that, in this case, because we are going to parse an XML attribute, we write a `BeanDefinitionDecorator` rather than a `BeanDefinitionParser`. The following listing shows our `BeanDefinitionDecorator` implementation:

```
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.springframework.beans.factory.config.BeanDefinitionHolder;
import org.springframework.beans.factory.support.AbstractBeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.xml.BeanDefinitionDecorator;
import org.springframework.beans.factory.xml.ParserContext;
import org.w3c.dom.Attr;
import org.w3c.dom.Node;

public class JCacheInitializingBeanDefinitionDecorator implements BeanDefinitionDecorator {

	private static final String[] EMPTY_STRING_ARRAY = new String[0];

	public BeanDefinitionHolder decorate(Node source, BeanDefinitionHolder holder,
			ParserContext ctx) {
		String initializerBeanName = registerJCacheInitializer(source, ctx);
		createDependencyOnJCacheInitializer(holder, initializerBeanName);
		return holder;
	}

	private void createDependencyOnJCacheInitializer(BeanDefinitionHolder holder,
			String initializerBeanName) {
		AbstractBeanDefinition definition = ((AbstractBeanDefinition) holder.getBeanDefinition());
		String[] dependsOn = definition.getDependsOn();
		if (dependsOn == null) {
			dependsOn = new String[]{initializerBeanName};
		} else {
			List<String> dependencies = new ArrayList<>(Arrays.asList(dependsOn));
			dependencies.add(initializerBeanName);
			dependsOn = dependencies.toArray(EMPTY_STRING_ARRAY);
		}
		definition.setDependsOn(dependsOn);
	}

	private String registerJCacheInitializer(Node source, ParserContext ctx) {
		String cacheName = ((Attr) source).getValue();
		String beanName = cacheName + "-initializer";
		if (!ctx.getRegistry().containsBeanDefinition(beanName)) {
			BeanDefinitionBuilder initializer = BeanDefinitionBuilder.rootBeanDefinition(JCacheInitializer.class);
			initializer.addConstructorArgValue(cacheName);
			ctx.getRegistry().registerBeanDefinition(beanName, initializer.getBeanDefinition());
		}
		return beanName;
	}
}
```

```
import org.springframework.beans.factory.config.BeanDefinitionHolder
import org.springframework.beans.factory.support.AbstractBeanDefinition
import org.springframework.beans.factory.support.BeanDefinitionBuilder
import org.springframework.beans.factory.xml.BeanDefinitionDecorator
import org.springframework.beans.factory.xml.ParserContext
import org.w3c.dom.Attr
import org.w3c.dom.Node

class JCacheInitializingBeanDefinitionDecorator : BeanDefinitionDecorator {

	override fun decorate(source: Node, holder: BeanDefinitionHolder,
			ctx: ParserContext): BeanDefinitionHolder {
		val initializerBeanName = registerJCacheInitializer(source, ctx)
		createDependencyOnJCacheInitializer(holder, initializerBeanName)
		return holder
	}
	private fun createDependencyOnJCacheInitializer(holder: BeanDefinitionHolder,
			initializerBeanName: String) {
		val definition = holder.beanDefinition as AbstractBeanDefinition
		var dependsOn = definition.dependsOn
		dependsOn = if (dependsOn == null) {
			arrayOf(initializerBeanName)
		} else {
			val dependencies = ArrayList(listOf(*dependsOn))
			dependencies.add(initializerBeanName)
			dependencies.toTypedArray()
		}
		definition.setDependsOn(*dependsOn)
	}

	private fun registerJCacheInitializer(source: Node, ctx: ParserContext): String {
		val cacheName = (source as Attr).value
		val beanName = "$cacheName-initializer"
		if (!ctx.registry.containsBeanDefinition(beanName)) {
			val initializer = BeanDefinitionBuilder.rootBeanDefinition(JCacheInitializer::class.java)
			initializer.addConstructorArgValue(cacheName)
			ctx.registry.registerBeanDefinition(beanName, initializer.beanDefinition)
		}
		return beanName
	}
}
```

Finally, we need to register the various artifacts with the Spring XML infrastructure by modifying the `META-INF/spring.handlers` and `META-INF/spring.schemas` files, as follows:

> # in 'META-INF/spring.handlers'
> http\://www.foo.example/schema/jcache=com.foo.JCacheNamespaceHandler

> # in 'META-INF/spring.schemas'
> http\://www.foo.example/schema/jcache/jcache.xsd=com/foo/jcache.xsd

## Application Startup Steps

This part of the appendix lists the existing `StartupSteps` that the core container is instrumented with. The name and detailed information about each startup step are not part of the public contract and are subject to change; this is considered an implementation detail of the core container and will follow its behavior changes.

# Testing

This chapter covers Spring's support for integration testing and best practices for unit testing. The Spring team advocates test-driven development (TDD). The Spring team has found that the correct use of inversion of control (IoC) certainly does make both unit and integration testing easier (in that the presence of setter methods and appropriate constructors on classes makes them easier to wire together in a test without having to set up service locator registries and similar structures).

Testing is an integral part of enterprise software development. This chapter focuses on the value added by the IoC principle to unit testing and on the benefits of the Spring Framework's support for integration testing. (A thorough treatment of testing in the enterprise is beyond the scope of this reference manual.)

## Unit Testing

Dependency injection should make your code less dependent on the container than it would be with traditional J2EE / Java EE development. The POJOs that make up your application should be testable in JUnit or TestNG tests, with objects instantiated by using the `new` operator, without Spring or any other container. You can use mock objects (in conjunction with other valuable testing techniques) to test your code in isolation. If you follow the architecture recommendations for Spring, the resulting clean layering and componentization of your codebase facilitate easier unit testing. For example, you can test service layer objects by stubbing or mocking DAO or repository interfaces, without needing to access persistent data while running unit tests.

True unit tests typically run extremely quickly, as there is no runtime infrastructure to set up. Emphasizing true unit tests as part of your development methodology can boost your productivity. You may not need this section of the testing chapter to help you write effective unit tests for your IoC-based applications.
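The following minimal sketch illustrates that container-free style of unit test, with Mockito standing in for the repository; the `OrderService`, `DefaultOrderService`, and `OrderRepository` types are hypothetical stand-ins for your own service and DAO types:

```
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class OrderServiceUnitTests {

	@Test
	void orderCountIsReadThroughTheRepository() {
		// no Spring container involved: plain 'new' plus a mocked dependency
		OrderRepository repository = mock(OrderRepository.class);
		when(repository.countOrdersFor("rick")).thenReturn(42);

		OrderService service = new DefaultOrderService(repository);

		assertEquals(42, service.orderCountFor("rick"));
	}
}
```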
For certain unit testing scenarios, however, the Spring Framework provides mock objects and testing support classes, which are described in this chapter.

## Mock Objects

Spring includes a number of packages dedicated to mocking:

### Environment

The `org.springframework.mock.env` package contains mock implementations of the `Environment` and `PropertySource` abstractions (see Bean Definition Profiles and `PropertySource` Abstraction). `MockEnvironment` and `MockPropertySource` are useful for developing out-of-container tests for code that depends on environment-specific properties.

### JNDI

The `org.springframework.mock.jndi` package contains a partial implementation of the JNDI SPI, which you can use to set up a simple JNDI environment for test suites or stand-alone applications. If, for example, JDBC `DataSource` instances get bound to the same JNDI names in test code as they do in a Jakarta EE container, you can reuse both application code and configuration in testing scenarios without modification. Note that the mock JNDI support in the `org.springframework.mock.jndi` package was officially deprecated as of Spring Framework 5.2 in favor of complete solutions from third parties such as Simple-JNDI.

### Servlet API

The `org.springframework.mock.web` package contains a comprehensive set of Servlet API mock objects that are useful for testing web contexts, controllers, and filters. These mock objects are targeted at usage with Spring's Web MVC framework and are generally more convenient to use than dynamic mock objects (such as EasyMock) or alternative Servlet API mock objects (such as MockObjects). Since Spring Framework 6.0, the mock objects in `org.springframework.mock.web` are based on the Servlet 6.0 API.

The Spring MVC Test framework builds on the mock Servlet API objects to provide an integration testing framework for Spring MVC. See MockMvc.

### Spring Web Reactive

The `org.springframework.mock.http.server.reactive` package contains mock implementations of `ServerHttpRequest` and `ServerHttpResponse` for use in WebFlux applications. The `org.springframework.mock.web.server` package contains a mock `ServerWebExchange` that depends on those mock request and response objects.

Both `MockServerHttpRequest` and `MockServerHttpResponse` extend from the same abstract base classes as server-specific implementations and share behavior with them. For example, a mock request is immutable once created, but you can use the `mutate()` method from `ServerHttpRequest` to create a modified instance.

In order for the mock response to properly implement the write contract and return a write completion handle (that is, `Mono<Void>`), it by default uses a `Flux` with `cache().then()`, which buffers the data and makes it available for assertions in tests. Applications can set a custom write function (for example, to test an infinite stream).

The WebTestClient builds on the mock request and response to provide support for testing WebFlux applications without an HTTP server. The client can also be used for end-to-end tests with a running server.

## Unit Testing Support Classes

Spring includes a number of classes that can help with unit testing. They fall into two categories:

### General Testing Utilities

The `org.springframework.test.util` package contains several general-purpose utilities for use in unit and integration testing.

`AopTestUtils` is a collection of AOP-related utility methods. You can use these methods to obtain a reference to the underlying target object hidden behind one or more Spring proxies.
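A minimal sketch of that unwrapping, assuming a hypothetical `PaymentService` bean type:

```
import org.springframework.test.util.AopTestUtils;

final class ProxyUnwrapping {

	private ProxyUnwrapping() {
	}

	// Follows nested Spring proxies until the actual target instance is reached;
	// returns the argument itself if it is not a proxy. 'PaymentService' is a
	// hypothetical type standing in for one of your own beans.
	static PaymentService actualTarget(PaymentService possiblyProxied) {
		return AopTestUtils.getUltimateTargetObject(possiblyProxied);
	}
}
```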
For example, if you have configured a bean as a dynamic mock by using a library such as EasyMock or Mockito, and the mock is wrapped in a Spring proxy, you may need direct access to the underlying mock to configure expectations on it and perform verifications. For Spring's core AOP utilities, see `AopUtils` and `AopProxyUtils`.

`ReflectionTestUtils` is a collection of reflection-based utility methods. You can use these methods in testing scenarios where you need to change the value of a constant, set a non-`public` field, invoke a non-`public` setter method, or invoke a non-`public` configuration or lifecycle callback method when testing application code for use cases such as the following:

* ORM frameworks (such as JPA and Hibernate) that condone `private` or `protected` field access as opposed to `public` setter methods for properties in a domain entity.
* Spring's support for annotations (such as `@Autowired`, `@Inject`, and `@Resource`) that provide dependency injection for `private` or `protected` fields, setter methods, and configuration methods.
* Use of annotations such as `@PostConstruct` and `@PreDestroy` for lifecycle callback methods.

`TestSocketUtils` is a simple utility for finding available TCP ports on `localhost` for use in integration testing scenarios.

### Spring MVC Testing Utilities

The `org.springframework.test.web` package contains `ModelAndViewAssert`, which you can use in combination with JUnit, TestNG, or any other testing framework for unit tests that deal with Spring MVC `ModelAndView` objects.

Unit testing Spring MVC Controllers: To unit test your Spring MVC `Controller` classes as POJOs, use `ModelAndViewAssert` combined with `MockHttpServletRequest`, `MockHttpSession`, and so on from Spring's Servlet API mocks. For thorough integration testing of your Spring MVC and REST `Controller` classes in conjunction with your `WebApplicationContext` configuration for Spring MVC, use the Spring MVC Test Framework instead.

## Integration Testing

It is important to be able to perform some integration testing without requiring deployment to your application server or connecting to other enterprise infrastructure. Doing so lets you test things such as:

* The correct wiring of your Spring IoC container contexts.
* Data access using JDBC or an ORM tool. This can include such things as the correctness of SQL statements, Hibernate queries, JPA entity mappings, and so forth.

The Spring Framework provides first-class support for integration testing in the `spring-test` module. The name of the actual JAR file might include the release version and might also be in the long form, depending on where you get it from (see the section on Dependency Management for an explanation). This library includes the `org.springframework.test` package, which contains valuable classes for integration testing with a Spring container. This testing does not rely on an application server or other deployment environment. Such tests are slower to run than unit tests but much faster than the equivalent Selenium tests or remote tests that rely on deployment to an application server.

Unit and integration testing support is provided in the form of the annotation-driven Spring TestContext Framework. The TestContext framework is agnostic of the actual testing framework in use, which allows instrumentation of tests in various environments, including JUnit, TestNG, and others.
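As a first taste of what such a test looks like, here is a minimal sketch of a JUnit Jupiter test driven by the TestContext framework; the `AppConfig` and `TitleRepository` types are hypothetical:

```
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.junit.jupiter.SpringJUnitConfig;

// @SpringJUnitConfig registers the SpringExtension and loads the given
// configuration class into a cached test ApplicationContext
@SpringJUnitConfig(AppConfig.class)
class TitleRepositoryIntegrationTests {

	@Autowired
	TitleRepository titleRepository; // injected from the test ApplicationContext

	@Test
	void wiringIsCorrect() {
		// verifies the correct wiring of the Spring IoC container context
		assertThat(titleRepository).isNotNull();
	}
}
```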
The following section provides an overview of the high-level goals of Spring's integration support, and the rest of this chapter then focuses on dedicated topics.

## Goals of Integration Testing

Spring's integration testing support has the following primary goals:

* To manage Spring IoC container caching between tests.
* To provide Dependency Injection of test fixture instances.
* To provide transaction management appropriate to integration testing.
* To supply Spring-specific base classes that assist developers in writing integration tests.

The next few sections describe each goal and provide links to implementation and configuration details.

### Context Management and Caching

The Spring TestContext Framework provides consistent loading of Spring `ApplicationContext` instances and `WebApplicationContext` instances as well as caching of those contexts. Support for the caching of loaded contexts is important, because startup time can become an issue — not because of the overhead of Spring itself, but because the objects instantiated by the Spring container take time to instantiate. For example, a project with 50 to 100 Hibernate mapping files might take 10 to 20 seconds to load the mapping files, and incurring that cost before running every test in every test fixture leads to slower overall test runs that reduce developer productivity.

Test classes typically declare either an array of resource locations for XML or Groovy configuration metadata — often in the classpath — or an array of component classes that is used to configure the application. These locations or classes are the same as or similar to those specified in `web.xml` or other configuration files for production deployments.

By default, once loaded, the configured `ApplicationContext` is reused for each test. Thus, the setup cost is incurred only once per test suite, and subsequent test execution is much faster. In this context, the term "test suite" means all tests run in the same JVM — for example, all tests run from an Ant, Maven, or Gradle build for a given project or module. In the unlikely case that a test corrupts the application context and requires reloading (for example, by modifying a bean definition or the state of an application object), the TestContext framework can be configured to reload the configuration and rebuild the application context before executing the next test. See Context Management and Context Caching with the TestContext framework.

### Dependency Injection of Test Fixtures

When the TestContext framework loads your application context, it can optionally configure instances of your test classes by using Dependency Injection. This provides a convenient mechanism for setting up test fixtures by using preconfigured beans from your application context. A strong benefit here is that you can reuse application contexts across various testing scenarios (for example, for configuring Spring-managed object graphs, transactional proxies, `DataSource` instances, and others), thus avoiding the need to duplicate complex test fixture setup for individual test cases.

As an example, consider a scenario where we have a class (`HibernateTitleRepository`) that implements data access logic for a `Title` domain entity. We want to write integration tests that test the following areas:

* The Spring configuration: Basically, is everything related to the configuration of the `HibernateTitleRepository` bean correct and present?
* The Hibernate mapping file configuration: Is everything mapped correctly and are the correct lazy-loading settings in place?
* The logic of the `HibernateTitleRepository`: Does the configured instance of this class perform as anticipated?

See dependency injection of test fixtures with the TestContext framework.

### Transaction Management

One common issue in tests that access a real database is their effect on the state of the persistence store. Even when you use a development database, changes to the state may affect future tests. Also, many operations — such as inserting or modifying persistent data — cannot be performed (or verified) outside of a transaction.

The TestContext framework addresses this issue. By default, the framework creates and rolls back a transaction for each test. You can write code that can assume the existence of a transaction. If you call transactionally proxied objects in your tests, they behave correctly, according to their configured transactional semantics. In addition, if a test method deletes the contents of selected tables while running within the transaction managed for the test, the transaction rolls back by default, and the database returns to its state prior to execution of the test. Transactional support is provided to a test by using a `PlatformTransactionManager` bean defined in the test's application context.

If you want a transaction to commit (unusual, but occasionally useful when you want a particular test to populate or modify the database), you can tell the TestContext framework to cause the transaction to commit instead of roll back by using the `@Commit` annotation.

See transaction management with the TestContext framework.

### Support Classes for Integration Testing

The Spring TestContext Framework provides several `abstract` support classes that simplify the writing of integration tests. These base test classes provide well-defined hooks into the testing framework as well as convenient instance variables and methods, which let you access:

* The `ApplicationContext`, for performing explicit bean lookups or testing the state of the context as a whole.
* A `JdbcTemplate`, for executing SQL statements to query the database. You can use such queries to confirm database state both before and after execution of database-related application code, and Spring ensures that such queries run in the scope of the same transaction as the application code. When used in conjunction with an ORM tool, be sure to avoid false positives.

In addition, you may want to create your own custom, application-wide superclass with instance variables and methods specific to your project.

See support classes for the TestContext framework.

## JdbcTestUtils

The `org.springframework.test.jdbc` package contains `JdbcTestUtils`, which is a collection of JDBC-related utility functions intended to simplify standard database testing scenarios. Specifically, `JdbcTestUtils` provides the following static utility methods:

* `countRowsInTable(..)`: Counts the number of rows in the given table.
* `countRowsInTableWhere(..)`: Counts the number of rows in the given table by using the provided `WHERE` clause.
* `deleteFromTables(..)`: Deletes all rows from the specified tables.
* `deleteFromTableWhere(..)`: Deletes rows from the given table by using the provided `WHERE` clause.
* `dropTables(..)`: Drops the specified tables.

## Embedded Databases

The `spring-jdbc` module provides support for configuring and launching an embedded database, which you can use in integration tests that interact with a database. For details, see Embedded Database Support and Testing Data Access Logic with an Embedded Database.
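A brief sketch tying the two preceding sections together: it launches an embedded H2 database and uses `JdbcTestUtils` to count rows. The `schema.sql` and `data.sql` scripts and the `person` table are hypothetical; substitute your own DDL and seed data:

```
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabase;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;
import org.springframework.test.jdbc.JdbcTestUtils;

class EmbeddedDatabaseTests {

	@Test
	void countRowsInSeededTable() {
		// build an in-memory H2 database from (hypothetical) DDL and seed scripts
		EmbeddedDatabase db = new EmbeddedDatabaseBuilder()
				.setType(EmbeddedDatabaseType.H2)
				.addScript("schema.sql")
				.addScript("data.sql")
				.build();
		try {
			JdbcTemplate jdbcTemplate = new JdbcTemplate(db);
			assertEquals(2, JdbcTestUtils.countRowsInTable(jdbcTemplate, "person"));
		}
		finally {
			db.shutdown();
		}
	}
}
```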
## Spring TestContext Framework

The Spring TestContext Framework (located in the `org.springframework.test.context` package) provides generic, annotation-driven unit and integration testing support that is agnostic of the testing framework in use. The TestContext framework also places a great deal of importance on convention over configuration, with reasonable defaults that you can override through annotation-based configuration.

In addition to generic testing infrastructure, the TestContext framework provides explicit support for JUnit 4, JUnit Jupiter (AKA JUnit 5), and TestNG. For JUnit 4 and TestNG, Spring provides `abstract` support classes. Furthermore, Spring provides a custom JUnit `Runner` and custom JUnit `Rules` for JUnit 4 and a custom `Extension` for JUnit Jupiter that let you write so-called POJO test classes. POJO test classes are not required to extend a particular class hierarchy, such as the `abstract` support classes.

The following section provides an overview of the internals of the TestContext framework. If you are interested only in using the framework and are not interested in extending it with your own custom listeners or custom loaders, feel free to go directly to the configuration (context management, dependency injection, transaction management), support classes, and annotation support sections.

The core of the framework consists of the `TestContextManager` class and the `TestContext`, `TestExecutionListener`, and `SmartContextLoader` interfaces. A `TestContextManager` is created for each test class (for example, for the execution of all test methods within a single test class in JUnit Jupiter). The `TestContextManager`, in turn, manages a `TestContext` that holds the context of the current test. The `TestContextManager` also updates the state of the `TestContext` as the test progresses and delegates to `TestExecutionListener` implementations, which instrument the actual test execution by providing dependency injection, managing transactions, and so on. A `SmartContextLoader` is responsible for loading an `ApplicationContext` for a given test class. See the javadoc and the Spring test suite for further information and examples of various implementations.

### `TestContext`

`TestContext` encapsulates the context in which a test is run (agnostic of the actual testing framework in use) and provides context management and caching support for the test instance for which it is responsible. The `TestContext` also delegates to a `SmartContextLoader` to load an `ApplicationContext` if requested.

### `TestContextManager`

`TestContextManager` is the main entry point into the Spring TestContext Framework and is responsible for managing a single `TestContext` and signaling events to each registered `TestExecutionListener` at well-defined test execution points:

* Prior to any "before class" or "before all" methods of a particular testing framework.
* Test instance post-processing.
* Prior to any "before" or "before each" methods of a particular testing framework.
* Immediately before execution of the test method but after test setup.
* Immediately after execution of the test method but before test tear down.
* After any "after" or "after each" methods of a particular testing framework.
* After any "after class" or "after all" methods of a particular testing framework.

### `TestExecutionListener`

`TestExecutionListener` defines the API for reacting to test-execution events published by the `TestContextManager` with which the listener is registered. See Configuration.

## Context Loaders

`ContextLoader` is a strategy interface for loading an `ApplicationContext` for an integration test managed by the Spring TestContext Framework.
You should implement `SmartContextLoader` instead of this interface to provide support for component classes, active bean definition profiles, test property sources, context hierarchies, and `WebApplicationContext` support.

`SmartContextLoader` is an extension of the `ContextLoader` interface that supersedes the original minimal `ContextLoader` SPI. Specifically, a `SmartContextLoader` can choose to process resource locations, component classes, or context initializers. Furthermore, a `SmartContextLoader` can set active bean definition profiles and test property sources in the context that it loads.

Spring provides the following implementations:

* `DelegatingSmartContextLoader`: One of two default loaders. It delegates internally to other loaders, depending either on the configuration declared for the test class or on the presence of default locations or default configuration classes. Groovy support is enabled only if Groovy is on the classpath.
* `WebDelegatingSmartContextLoader`: One of two default loaders. It delegates internally to other loaders, depending either on the configuration declared for the test class or on the presence of default locations or default configuration classes. A web `ContextLoader` is used only if `@WebAppConfiguration` is present on the test class. Groovy support is enabled only if Groovy is on the classpath.
* `AnnotationConfigContextLoader`: Loads a standard `ApplicationContext` from component classes.
* `AnnotationConfigWebContextLoader`: Loads a `WebApplicationContext` from component classes.
* `GenericGroovyXmlContextLoader`: Loads a standard `ApplicationContext` from resource locations that are either Groovy scripts or XML configuration files.
* `GenericGroovyXmlWebContextLoader`: Loads a `WebApplicationContext` from resource locations that are either Groovy scripts or XML configuration files.
* `GenericXmlContextLoader`: Loads a standard `ApplicationContext` from XML resource locations.
* `GenericXmlWebContextLoader`: Loads a `WebApplicationContext` from XML resource locations.

## Bootstrapping the TestContext Framework

The default configuration for the internals of the Spring TestContext Framework is sufficient for all common use cases. However, there are times when a development team or third-party framework would like to change the default `ContextLoader`, implement a custom `TestContext` or `ContextCache`, augment the default sets of `TestExecutionListener` implementations, and so on. For such low-level control over how the TestContext framework operates, Spring provides a bootstrapping strategy.

`TestContextBootstrapper` defines the SPI for bootstrapping the TestContext framework. A `TestContextBootstrapper` is used by the `TestContextManager` to load the `TestExecutionListener` implementations for the current test and to build the `TestContext` that it manages. You can configure a custom bootstrapping strategy for a test class (or test class hierarchy) by using `@BootstrapWith`, either directly or as a meta-annotation. If a bootstrapper is not explicitly configured by using `@BootstrapWith`, either the `DefaultTestContextBootstrapper` or the `WebTestContextBootstrapper` is used, depending on the presence of `@WebAppConfiguration`.

Since the `TestContextBootstrapper` SPI is likely to change in the future (to accommodate new requirements), we strongly encourage implementers not to implement this interface directly but rather to extend `AbstractTestContextBootstrapper` or one of its concrete subclasses instead.

# TestExecutionListener Configuration

Spring provides the following `TestExecutionListener` implementations that are registered by default, exactly in the following order:

* `ServletTestExecutionListener`: Configures Servlet API mocks for a `WebApplicationContext`.
* `DirtiesContextBeforeModesTestExecutionListener`: Handles the `@DirtiesContext` annotation for "before" modes.
* `ApplicationEventsTestExecutionListener`: Provides support for `ApplicationEvents`.
* `DependencyInjectionTestExecutionListener`: Provides dependency injection for the test instance.
* `MicrometerObservationRegistryTestExecutionListener`: Provides support for Micrometer's `ObservationRegistry`.
* `DirtiesContextTestExecutionListener`: Handles the `@DirtiesContext` annotation for "after" modes.
* `TransactionalTestExecutionListener`: Provides transactional test execution with default rollback semantics.
* `SqlScriptsTestExecutionListener`: Runs SQL scripts configured by using the `@Sql` annotation.
* `EventPublishingTestExecutionListener`: Publishes test execution events to the test's `ApplicationContext` (see Test Execution Events).

## Registering `TestExecutionListener` Implementations

You can register `TestExecutionListener` implementations explicitly for a test class, its subclasses, and its nested classes by using the `@TestExecutionListeners` annotation. See annotation support and the javadoc for `@TestExecutionListeners` for details and examples.

Switching to default `TestExecutionListener` implementations: If you extend a class that is annotated with `@TestExecutionListeners` and you need to switch to using the default set of listeners, you can annotate your class with `@TestExecutionListeners(listeners = {}, inheritListeners = false, mergeMode = MERGE_WITH_DEFAULTS)`.

## Automatic Discovery of Default `TestExecutionListener` Implementations

Registering `TestExecutionListener` implementations by using `@TestExecutionListeners` is suitable for custom listeners that are used in limited testing scenarios. However, it can become cumbersome if a custom listener needs to be used across an entire test suite. This issue is addressed through support for automatic discovery of default `TestExecutionListener` implementations through the `SpringFactoriesLoader` mechanism.

Specifically, the `spring-test` module declares all core default `TestExecutionListener` implementations under the `org.springframework.test.context.TestExecutionListener` key in its `META-INF/spring.factories` properties file. Third-party frameworks and developers can contribute their own `TestExecutionListener` implementations to the list of default listeners in the same manner through their own `META-INF/spring.factories` properties file.

## Ordering `TestExecutionListener` Implementations

When the TestContext framework discovers default `TestExecutionListener` implementations through the aforementioned `SpringFactoriesLoader` mechanism, the instantiated listeners are sorted by using Spring's `AnnotationAwareOrderComparator`, which honors Spring's `Ordered` interface and `@Order` annotation for ordering. `AbstractTestExecutionListener` and all default `TestExecutionListener` implementations provided by Spring implement `Ordered` with appropriate values. Third-party frameworks and developers should therefore make sure that their default `TestExecutionListener` implementations are registered in the proper order by implementing `Ordered` or declaring `@Order`. See the javadoc for the `getOrder()` methods of the core default `TestExecutionListener` implementations for details on what values are assigned to each core listener.

## Merging `TestExecutionListener` Implementations

If a custom `TestExecutionListener` is registered via `@TestExecutionListeners`, the default listeners are not registered. In most common testing scenarios, this effectively forces the developer to manually declare all default listeners in addition to any custom listeners. The following listing demonstrates this style of configuration:

```
@ContextConfiguration
@TestExecutionListeners({
	MyCustomTestExecutionListener.class,
	ServletTestExecutionListener.class,
	DirtiesContextBeforeModesTestExecutionListener.class,
	DependencyInjectionTestExecutionListener.class,
	DirtiesContextTestExecutionListener.class,
	TransactionalTestExecutionListener.class,
	SqlScriptsTestExecutionListener.class
})
class MyTest {
	// class body...
}
```

```
@ContextConfiguration
@TestExecutionListeners(
	MyCustomTestExecutionListener::class,
	ServletTestExecutionListener::class,
	DirtiesContextBeforeModesTestExecutionListener::class,
	DependencyInjectionTestExecutionListener::class,
	DirtiesContextTestExecutionListener::class,
	TransactionalTestExecutionListener::class,
	SqlScriptsTestExecutionListener::class
)
class MyTest {
	// class body...
}
```

The challenge with this approach is that it requires that the developer know exactly which listeners are registered by default. Moreover, the set of default listeners can change from release to release — for example, `SqlScriptsTestExecutionListener` was introduced in Spring Framework 4.1, and `DirtiesContextBeforeModesTestExecutionListener` was introduced in Spring Framework 4.2.
Furthermore, third-party frameworks like Spring Boot and Spring Security register their own default `TestExecutionListener` implementations by using the aforementioned automatic discovery mechanism.

To avoid having to be aware of and re-declare all default listeners, you can set the `mergeMode` attribute of `@TestExecutionListeners` to `MergeMode.MERGE_WITH_DEFAULTS`. `MERGE_WITH_DEFAULTS` indicates that locally declared listeners should be merged with the default listeners. The merging algorithm ensures that duplicates are removed from the list and that the resulting set of merged listeners is sorted according to the semantics of `AnnotationAwareOrderComparator`, as described in Ordering `TestExecutionListener` Implementations. If a listener implements `Ordered` or is annotated with `@Order`, it can influence the position in which it is merged with the defaults. Otherwise, locally declared listeners are appended to the list of default listeners when merged.

For example, if the `MyCustomTestExecutionListener` class in the previous example configures its `order` value (for example, `500`) to be less than the order of the `ServletTestExecutionListener` (which happens to be `1000`), the `MyCustomTestExecutionListener` can then be automatically merged with the list of defaults in front of the `ServletTestExecutionListener`, and the previous example could be replaced with the following:

```
@ContextConfiguration
@TestExecutionListeners(
	listeners = MyCustomTestExecutionListener.class,
	mergeMode = MERGE_WITH_DEFAULTS
)
class MyTest {
	// class body...
}
```

```
@ContextConfiguration
@TestExecutionListeners(
	listeners = [MyCustomTestExecutionListener::class],
	mergeMode = MERGE_WITH_DEFAULTS
)
class MyTest {
	// class body...
}
```

## Application Events

Since Spring Framework 5.3.3, the TestContext framework provides support for recording application events published in the `ApplicationContext` so that assertions can be performed against those events within tests. All events published during the execution of a single test are made available via the `ApplicationEvents` API, which allows you to process the events as a `java.util.Stream`.

To use `ApplicationEvents` in your tests, do the following:

* Ensure that your test class is annotated or meta-annotated with `@RecordApplicationEvents`.
* Ensure that the `ApplicationEventsTestExecutionListener` is registered. Note, however, that it is registered by default and only needs to be manually registered if you have custom configuration via `@TestExecutionListeners` that does not include the default listeners.
* Annotate a field of type `ApplicationEvents` with `@Autowired` and use that instance of `ApplicationEvents` in your test and lifecycle methods (such as `@BeforeEach` and `@AfterEach` methods in JUnit Jupiter).

When using the `SpringExtension` for JUnit Jupiter, you may declare a method parameter of type `ApplicationEvents` in a test or lifecycle method as an alternative to an `@Autowired` field in the test class.

The following test class uses the `SpringExtension` for JUnit Jupiter and AssertJ to assert the types of application events published while invoking a method in a Spring-managed component:

```
@SpringJUnitConfig(/* ... */)
@RecordApplicationEvents (1)
class OrderServiceTests {

	@Autowired
	OrderService orderService;

	@Autowired
	ApplicationEvents events; (2)

	@Test
	void submitOrder() {
		// Invoke method in OrderService that publishes an event
		orderService.submitOrder(new Order(/* ... */));
		// Verify that an OrderSubmitted event was published
		long numEvents = events.stream(OrderSubmitted.class).count(); (3)
		assertThat(numEvents).isEqualTo(1);
	}
}
```

```
@SpringJUnitConfig(/* ... */)
@RecordApplicationEvents (1)
class OrderServiceTests {

	@Autowired
	lateinit var orderService: OrderService

	@Autowired
	lateinit var events: ApplicationEvents (2)

	@Test
	fun submitOrder() {
		// Invoke method in OrderService that publishes an event
		orderService.submitOrder(Order(/* ... */))
		// Verify that an OrderSubmitted event was published
		val numEvents = events.stream(OrderSubmitted::class.java).count() (3)
		assertThat(numEvents).isEqualTo(1)
	}
}
```

(1) Annotate the test class with `@RecordApplicationEvents`.
(2) Inject the `ApplicationEvents` instance for the current test.
(3) Use the `ApplicationEvents` API to process the events published during the test.

See the `ApplicationEvents` javadoc for further details regarding the `ApplicationEvents` API.

## Test Execution Events

The `EventPublishingTestExecutionListener`, introduced in Spring Framework 5.2, offers an alternative approach to implementing a custom `TestExecutionListener`. Components in the test's `ApplicationContext` can listen to the following events published by the `EventPublishingTestExecutionListener`, each of which corresponds to a method in the `TestExecutionListener` API:

* `BeforeTestClassEvent`
* `PrepareTestInstanceEvent`
* `BeforeTestMethodEvent`
* `BeforeTestExecutionEvent`
* `AfterTestExecutionEvent`
* `AfterTestMethodEvent`
* `AfterTestClassEvent`

These events may be consumed for various reasons, such as resetting mock beans or tracing test execution. One advantage of consuming test execution events rather than implementing a custom `TestExecutionListener` is that test execution events may be consumed by any Spring bean registered in the test `ApplicationContext`, and such beans may benefit directly from dependency injection and other features of the `ApplicationContext`. In contrast, a `TestExecutionListener` is not a bean in the `ApplicationContext`.

In order to listen to test execution events, a Spring bean may choose to implement the `org.springframework.context.ApplicationListener` interface. Alternatively, listener methods can be annotated with `@EventListener` and configured to listen to one of the particular event types listed above (see Annotation-based Event Listeners). Due to the popularity of this approach, Spring provides the following dedicated `@EventListener` annotations to simplify registration of test execution event listeners. These annotations reside in the `org.springframework.test.context.event.annotation` package:

* `@BeforeTestClass`
* `@PrepareTestInstance`
* `@BeforeTestMethod`
* `@BeforeTestExecution`
* `@AfterTestExecution`
* `@AfterTestMethod`
* `@AfterTestClass`

## Exception Handling

By default, if a test execution event listener throws an exception while consuming an event, that exception propagates to the underlying testing framework in use (such as JUnit or TestNG). For example, if the consumption of a `BeforeTestMethodEvent` results in an exception, the corresponding test method fails as a result of the exception. In contrast, if an asynchronous test execution event listener throws an exception, the exception does not propagate to the underlying testing framework. For further details on asynchronous exception handling, consult the class-level javadoc for `@EventListener`.

## Asynchronous Listeners

If you want a particular test execution event listener to process events asynchronously, you can use Spring's regular `@Async` support. For further details, consult the class-level javadoc for `@EventListener`.

## Context Management

Each `TestContext` provides context management and caching support for the test instance for which it is responsible. Test instances do not automatically receive access to the configured `ApplicationContext`. However, if a test class implements the `ApplicationContextAware` interface, a reference to the `ApplicationContext` is supplied to the test instance. Note that the `abstract` support classes implement `ApplicationContextAware` and, therefore, provide access to the `ApplicationContext` automatically.

Test classes that use the TestContext framework do not need to extend any particular class or implement a specific interface to configure their application context. Instead, configuration is achieved by declaring the `@ContextConfiguration` annotation at the class level.
## Exception Handling

By default, if a test execution event listener throws an exception while consuming an event, that exception will propagate to the underlying testing framework in use (such as JUnit or TestNG). For example, if the consumption of a `BeforeTestMethodEvent` results in an exception, the corresponding test method will fail as a result of the exception. In contrast, if an asynchronous test execution event listener throws an exception, the exception will not propagate to the underlying testing framework. For further details on asynchronous exception handling, consult the class-level javadoc for `@EventListener`.

## Asynchronous Listeners

If you want a particular test execution event listener to process events asynchronously, you can use Spring's regular `@Async` support. For further details, consult the class-level javadoc for `@EventListener`.

Each `TestContext` provides context management and caching support for the test instance for which it is responsible. Test instances do not automatically receive access to the configured `ApplicationContext`. However, if a test class implements the `ApplicationContextAware` interface, a reference to the `ApplicationContext` is supplied to the test instance. Note that `AbstractJUnit4SpringContextTests` and `AbstractTestNGSpringContextTests` implement `ApplicationContextAware` and, therefore, provide access to the `ApplicationContext` automatically.

Test classes that use the TestContext framework do not need to extend any particular class or implement a specific interface to configure their application context. Instead, configuration is achieved by declaring the `@ContextConfiguration` annotation at the class level. If your test class does not explicitly declare application context resource locations or component classes, the configured `ContextLoader` determines how to load a context from a default location or default configuration classes. In addition to context resource locations and component classes, an application context can also be configured through application context initializers.

The following sections explain how to use Spring's `@ContextConfiguration` annotation to configure a test `ApplicationContext` by using XML configuration files, Groovy scripts, component classes (typically `@Configuration` classes), or context initializers. Alternatively, you can implement and configure your own custom `SmartContextLoader` for advanced use cases.

To load an `ApplicationContext` for your tests by using XML configuration files, annotate your test class with `@ContextConfiguration` and configure the `locations` attribute with an array that contains the resource locations of XML configuration metadata. A plain or relative path (for example, `context.xml`) is treated as a classpath resource that is relative to the package in which the test class is defined. A path starting with a slash is treated as an absolute classpath location (for example, `/org/example/config.xml`). A path that represents a resource URL (i.e., a path prefixed with `classpath:`, `file:`, `http:`, etc.) is used as is.

`@ContextConfiguration` supports an alias for the `locations` attribute through the standard Java `value` attribute. Thus, if you do not need to declare additional attributes in `@ContextConfiguration`, you can omit the declaration of the `locations` attribute name and declare the resource locations by using the shorthand format demonstrated in the following example:

```
@ExtendWith(SpringExtension.class)
@ContextConfiguration({"/app-config.xml", "/test-config.xml"}) (1)
class MyTest {
    // class body...
}
```

```
@ExtendWith(SpringExtension::class)
@ContextConfiguration("/app-config.xml", "/test-config.xml") (1)
class MyTest {
    // class body...
}
```

If you omit both the `locations` and `value` attributes from the `@ContextConfiguration` annotation, the TestContext framework tries to detect a default XML resource location based on the name of the test class. If your class is named `com.example.MyTest`, the XML context loader loads your application context from `"classpath:com/example/MyTest-context.xml"`, as the following example shows:

```
@ExtendWith(SpringExtension.class)
// ApplicationContext will be loaded from
// "classpath:com/example/MyTest-context.xml"
@ContextConfiguration (1)
class MyTest {
    // class body...
}
```

```
@ExtendWith(SpringExtension::class)
// ApplicationContext will be loaded from
// "classpath:com/example/MyTest-context.xml"
@ContextConfiguration (1)
class MyTest {
    // class body...
}
```

To load an `ApplicationContext` for your tests by using Groovy scripts that use the Groovy Bean Definition DSL, you can annotate your test class with `@ContextConfiguration` and configure the `locations` or `value` attribute with an array that contains the resource locations of Groovy scripts. Resource lookup semantics for Groovy scripts are the same as those described for XML configuration files.

Enabling Groovy script support: Support for using Groovy scripts to load an `ApplicationContext` in the Spring TestContext Framework is enabled automatically if Groovy is on the classpath.

The following example shows how to specify Groovy configuration files:

```
@ExtendWith(SpringExtension.class)
// ApplicationContext will be loaded from "/AppConfig.groovy" and
// "/TestConfig.groovy" in the root of the classpath
@ContextConfiguration({"/AppConfig.groovy", "/TestConfig.groovy"}) (1)
class MyTest {
    // class body...
}
```

```
@ExtendWith(SpringExtension::class)
// ApplicationContext will be loaded from "/AppConfig.groovy" and
// "/TestConfig.groovy" in the root of the classpath
@ContextConfiguration("/AppConfig.groovy", "/TestConfig.groovy") (1)
class MyTest {
    // class body...
}
```
If you omit both the `locations` and `value` attributes from the `@ContextConfiguration` annotation, the TestContext framework tries to detect a default Groovy script based on the name of the test class. If your class is named `com.example.MyTest`, the Groovy context loader loads your application context from `"classpath:com/example/MyTestContext.groovy"`, as the following example shows:

```
@ExtendWith(SpringExtension.class)
// ApplicationContext will be loaded from
// "classpath:com/example/MyTestContext.groovy"
@ContextConfiguration (1)
class MyTest {
    // class body...
}
```

```
@ExtendWith(SpringExtension::class)
// ApplicationContext will be loaded from
// "classpath:com/example/MyTestContext.groovy"
@ContextConfiguration (1)
class MyTest {
    // class body...
}
```

To load an `ApplicationContext` for your tests by using component classes (see Java-based container configuration), you can annotate your test class with `@ContextConfiguration` and configure the `classes` attribute with an array that contains references to component classes. The following example shows how to do so:

```
@ExtendWith(SpringExtension.class)
// ApplicationContext will be loaded from AppConfig and TestConfig
@ContextConfiguration(classes = {AppConfig.class, TestConfig.class}) (1)
class MyTest {
    // class body...
}
```

```
@ExtendWith(SpringExtension::class)
// ApplicationContext will be loaded from AppConfig and TestConfig
@ContextConfiguration(classes = [AppConfig::class, TestConfig::class]) (1)
class MyTest {
    // class body...
}
```

If you omit the `classes` attribute from the `@ContextConfiguration` annotation, the TestContext framework tries to detect the presence of default configuration classes. Specifically, it detects all `static` nested classes of the test class that meet the requirements for configuration class implementations, as specified in the `@Configuration` javadoc. Note that the name of the configuration class is arbitrary. In addition, a test class can contain more than one `static` nested configuration class if desired. In the following example, the `OrderServiceTest` class declares a `static` nested configuration class named `Config` that is automatically used to load the `ApplicationContext` for the test class:

```
@SpringJUnitConfig (1)
// ApplicationContext will be loaded from the static nested Config class
class OrderServiceTest {

    @Configuration
    static class Config {

        // this bean will be injected into the OrderServiceTest class
        @Bean
        OrderService orderService() {
            OrderService orderService = new OrderServiceImpl();
            // set properties, etc.
            return orderService;
        }
    }

    @Autowired
    OrderService orderService;

    @Test
    void testOrderService() {
        // test the orderService
    }
}
```

```
@SpringJUnitConfig (1)
// ApplicationContext will be loaded from the nested Config class
class OrderServiceTest {

    @Configuration
    class Config {

        // this bean will be injected into the OrderServiceTest class
        @Bean
        fun orderService(): OrderService {
            // set properties, etc.
            return OrderServiceImpl()
        }
    }

    @Autowired
    lateinit var orderService: OrderService

    @Test
    fun testOrderService() {
        // test the orderService
    }
}
```

It may sometimes be desirable to mix XML configuration files, Groovy scripts, and component classes (typically `@Configuration` classes) to configure an `ApplicationContext` for your tests. For example, if you use XML configuration in production, you may decide that you want to use `@Configuration` classes to configure specific Spring-managed components for your tests, or vice versa.

Furthermore, some third-party frameworks (such as Spring Boot) provide first-class support for loading an `ApplicationContext` from different types of resources simultaneously (for example, XML configuration files, Groovy scripts, and `@Configuration` classes). The Spring Framework, historically, has not supported this for standard deployments. Consequently, most of the `SmartContextLoader` implementations that the Spring Framework delivers in the `spring-test` module support only one resource type for each test context. However, this does not mean that you cannot use both. One exception to the general rule is that the Groovy context loaders support both XML configuration files and Groovy scripts simultaneously.
Furthermore, third-party frameworks may choose to support the declaration of both `locations` and `classes` through `@ContextConfiguration`, and, with the standard testing support in the TestContext framework, you have the following options.

If you want to use resource locations (for example, XML or Groovy) and `@Configuration` classes to configure your tests, you must pick one as the entry point, and that one must include or import the other. For example, in XML or Groovy scripts, you can include `@Configuration` classes by using component scanning or defining them as normal Spring beans, whereas, in a `@Configuration` class, you can use `@ImportResource` to import XML configuration files or Groovy scripts. Note that this behavior is semantically equivalent to how you configure your application in production: In production configuration, you define either a set of XML or Groovy resource locations or a set of `@Configuration` classes from which your production `ApplicationContext` is loaded, but you still have the freedom to include or import the other type of configuration.

To configure an `ApplicationContext` for your tests by using context initializers, annotate your test class with `@ContextConfiguration` and configure the `initializers` attribute with an array that contains references to classes that implement `ApplicationContextInitializer`. The declared context initializers are then used to initialize the `ConfigurableApplicationContext` that is loaded for your tests. Note that the concrete `ConfigurableApplicationContext` type supported by each declared initializer must be compatible with the type of `ApplicationContext` created by the `SmartContextLoader` in use (typically a `GenericApplicationContext`). Furthermore, the order in which the initializers are invoked depends on whether they implement Spring's `Ordered` interface or are annotated with Spring's `@Order` annotation or the standard `@Priority` annotation. The following example shows how to use initializers:

```
@ExtendWith(SpringExtension.class)
// ApplicationContext will be loaded from TestConfig
// and initialized by TestAppCtxInitializer
@ContextConfiguration(
    classes = TestConfig.class,
    initializers = TestAppCtxInitializer.class) (1)
class MyTest {
    // class body...
}
```

```
@ExtendWith(SpringExtension::class)
// ApplicationContext will be loaded from TestConfig
// and initialized by TestAppCtxInitializer
@ContextConfiguration(
    classes = [TestConfig::class],
    initializers = [TestAppCtxInitializer::class]) (1)
class MyTest {
    // class body...
}
```

You can also omit the declaration of XML configuration files, Groovy scripts, or component classes in `@ContextConfiguration` entirely and instead declare only `ApplicationContextInitializer` classes, which are then responsible for registering beans in the context — for example, by programmatically loading bean definitions from XML files or configuration classes. The following example shows how to do so:

```
@ExtendWith(SpringExtension.class)
// ApplicationContext will be initialized by EntireAppInitializer
// which presumably registers beans in the context
@ContextConfiguration(initializers = EntireAppInitializer.class) (1)
class MyTest {
    // class body...
}
```

```
@ExtendWith(SpringExtension::class)
// ApplicationContext will be initialized by EntireAppInitializer
// which presumably registers beans in the context
@ContextConfiguration(initializers = [EntireAppInitializer::class]) (1)
class MyTest {
    // class body...
}
```
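The examples above reference initializer classes without showing one. Below is a minimal sketch of what such an initializer might look like; the `TestAppCtxInitializer` name and the property it registers are assumptions for illustration, not part of the framework:

```
import java.util.Map;

import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.support.GenericApplicationContext;
import org.springframework.core.env.MapPropertySource;

// Hypothetical initializer: contributes a property source before the context is refreshed
public class TestAppCtxInitializer
        implements ApplicationContextInitializer<GenericApplicationContext> {

    @Override
    public void initialize(GenericApplicationContext applicationContext) {
        Map<String, Object> props = Map.of("feature.enabled", "true");
        applicationContext.getEnvironment().getPropertySources()
                .addFirst(new MapPropertySource("testProperties", props));
    }
}
```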
`@ContextConfiguration` supports boolean `inheritLocations` and `inheritInitializers` attributes that denote whether resource locations or component classes and context initializers declared by superclasses should be inherited. The default value for both flags is `true`. This means that a test class inherits the resource locations or component classes as well as the context initializers declared by any superclasses. Specifically, the resource locations or component classes for a test class are appended to the list of resource locations or annotated classes declared by superclasses. Similarly, the initializers for a given test class are added to the set of initializers defined by test superclasses. Thus, subclasses have the option of extending the resource locations, component classes, or context initializers.

If the `inheritLocations` or `inheritInitializers` attribute in `@ContextConfiguration` is set to `false`, the resource locations or component classes and the context initializers, respectively, for the test class shadow and effectively replace the configuration defined by superclasses.

In the next example, which uses XML resource locations, the `ApplicationContext` for `ExtendedTest` is loaded from `base-config.xml` and `extended-config.xml`, in that order. Beans defined in `extended-config.xml` can, therefore, override (that is, replace) those defined in `base-config.xml`. The following example shows how one class can extend another and use both its own configuration file and the superclass's configuration file:

```
@ExtendWith(SpringExtension.class)
// ApplicationContext will be loaded from "/base-config.xml"
// in the root of the classpath
@ContextConfiguration("/base-config.xml") (1)
class BaseTest {
    // class body...
}

// ApplicationContext will be loaded from "/base-config.xml" and
// "/extended-config.xml" in the root of the classpath
@ContextConfiguration("/extended-config.xml") (2)
class ExtendedTest extends BaseTest {
    // class body...
}
```

```
@ExtendWith(SpringExtension::class)
// ApplicationContext will be loaded from "/base-config.xml"
// in the root of the classpath
@ContextConfiguration("/base-config.xml") (1)
open class BaseTest {
    // class body...
}

// ApplicationContext will be loaded from "/base-config.xml" and
// "/extended-config.xml" in the root of the classpath
@ContextConfiguration("/extended-config.xml") (2)
class ExtendedTest : BaseTest() {
    // class body...
}
```

Similarly, in the next example, which uses component classes, the `ApplicationContext` for `ExtendedTest` is loaded from the `BaseConfig` and `ExtendedConfig` classes, in that order. Beans defined in `ExtendedConfig` can, therefore, override (that is, replace) those defined in `BaseConfig`. The following example shows how one class can extend another and use both its own configuration class and the superclass's configuration class:

```
// ApplicationContext will be loaded from BaseConfig
@SpringJUnitConfig(BaseConfig.class) (1)
class BaseTest {
    // class body...
}

// ApplicationContext will be loaded from BaseConfig and ExtendedConfig
@SpringJUnitConfig(ExtendedConfig.class) (2)
class ExtendedTest extends BaseTest {
    // class body...
}
```

```
// ApplicationContext will be loaded from BaseConfig
@SpringJUnitConfig(BaseConfig::class) (1)
open class BaseTest {
    // class body...
}

// ApplicationContext will be loaded from BaseConfig and ExtendedConfig
@SpringJUnitConfig(ExtendedConfig::class) (2)
class ExtendedTest : BaseTest() {
    // class body...
}
```

In the next example, which uses context initializers, the `ApplicationContext` for `ExtendedTest` is initialized by using `BaseInitializer` and `ExtendedInitializer`. Note, however, that the order in which the initializers are invoked depends on whether they implement Spring's `Ordered` interface or are annotated with Spring's `@Order` annotation or the standard `@Priority` annotation. The following example shows how one class can extend another and use both its own initializer and the superclass's initializer:

```
// ApplicationContext will be initialized by BaseInitializer
@SpringJUnitConfig(initializers = BaseInitializer.class) (1)
class BaseTest {
    // class body...
}

// ApplicationContext will be initialized by BaseInitializer
// and ExtendedInitializer
@SpringJUnitConfig(initializers = ExtendedInitializer.class) (2)
class ExtendedTest extends BaseTest {
    // class body...
}
```
```
// ApplicationContext will be initialized by BaseInitializer
@SpringJUnitConfig(initializers = [BaseInitializer::class]) (1)
open class BaseTest {
    // class body...
}

// ApplicationContext will be initialized by BaseInitializer
// and ExtendedInitializer
@SpringJUnitConfig(initializers = [ExtendedInitializer::class]) (2)
class ExtendedTest : BaseTest() {
    // class body...
}
```

The Spring Framework has first-class support for the notion of environments and profiles (AKA "bean definition profiles"), and integration tests can be configured to activate particular bean definition profiles for various testing scenarios. This is achieved by annotating a test class with the `@ActiveProfiles` annotation and supplying a list of profiles that should be activated when loading the `ApplicationContext` for the test.

Note: You can use `@ActiveProfiles` with any implementation of the `SmartContextLoader` SPI, but `@ActiveProfiles` is not supported with implementations of the older `ContextLoader` SPI.

Consider two examples with XML configuration and `@Configuration` classes:

```
<!-- app-config.xml -->
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jdbc="http://www.springframework.org/schema/jdbc"
    xmlns:jee="http://www.springframework.org/schema/jee"
    xsi:schemaLocation="...">

    <bean id="transferService"
            class="com.bank.service.internal.DefaultTransferService">
        <constructor-arg ref="accountRepository"/>
        <constructor-arg ref="feePolicy"/>
    </bean>

    <bean id="accountRepository"
            class="com.bank.repository.internal.JdbcAccountRepository">
        <constructor-arg ref="dataSource"/>
    </bean>

    <bean id="feePolicy" class="com.bank.service.internal.ZeroFeePolicy"/>

    <beans profile="dev">
        <jdbc:embedded-database id="dataSource">
            <jdbc:script location="classpath:com/bank/config/sql/schema.sql"/>
            <jdbc:script location="classpath:com/bank/config/sql/test-data.sql"/>
        </jdbc:embedded-database>
    </beans>

    <beans profile="production">
        <jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/datasource"/>
    </beans>

    <beans profile="default">
        <jdbc:embedded-database id="dataSource">
            <jdbc:script location="classpath:com/bank/config/sql/schema.sql"/>
        </jdbc:embedded-database>
    </beans>

</beans>
```

```
@ExtendWith(SpringExtension.class)
// ApplicationContext will be loaded from "classpath:/app-config.xml"
@ContextConfiguration("/app-config.xml")
@ActiveProfiles("dev")
class TransferServiceTest {

    @Autowired
    TransferService transferService;

    @Test
    void testTransferService() {
        // test the transferService
    }
}
```

```
@ExtendWith(SpringExtension::class)
// ApplicationContext will be loaded from "classpath:/app-config.xml"
@ContextConfiguration("/app-config.xml")
@ActiveProfiles("dev")
class TransferServiceTest {

    @Autowired
    lateinit var transferService: TransferService

    @Test
    fun testTransferService() {
        // test the transferService
    }
}
```

When `TransferServiceTest` is run, its `ApplicationContext` is loaded from the `app-config.xml` configuration file in the root of the classpath. If you inspect `app-config.xml`, you can see that the `accountRepository` bean has a dependency on a `dataSource` bean. However, `dataSource` is not defined as a top-level bean. Instead, `dataSource` is defined three times: in the `production` profile, in the `dev` profile, and in the `default` profile.

By annotating `TransferServiceTest` with `@ActiveProfiles("dev")`, we instruct the Spring TestContext Framework to load the `ApplicationContext` with the active profiles set to `{"dev"}`. As a result, an embedded database is created and populated with test data, and the `accountRepository` bean is wired with a reference to the development `DataSource`. That is likely what we want in an integration test.

It is sometimes useful to assign beans to a `default` profile. Beans within the default profile are included only when no other profile is specifically activated. You can use this to define "fallback" beans to be used in the application's default state. For example, you may explicitly provide a data source for `dev` and `production` profiles, but define an in-memory data source as a default when neither of these is active.
The following code listings demonstrate how to implement the same configuration and integration test with `@Configuration` classes instead of XML. The `dataSource`-providing classes (`StandaloneDataConfig`, `JndiDataConfig`, and `DefaultDataConfig`, described in the list below) each declare a `dataSource` bean and are annotated with `@Profile("dev")`, `@Profile("production")`, and `@Profile("default")`, respectively, while `TransferServiceConfig` looks like this:

```
@Configuration
public class TransferServiceConfig {

    @Autowired DataSource dataSource;

    @Bean
    public TransferService transferService() {
        return new DefaultTransferService(accountRepository(), feePolicy());
    }

    @Bean
    public AccountRepository accountRepository() {
        return new JdbcAccountRepository(dataSource);
    }

    @Bean
    public FeePolicy feePolicy() {
        return new ZeroFeePolicy();
    }
}
```

```
@Configuration
class TransferServiceConfig {

    @Autowired
    lateinit var dataSource: DataSource

    @Bean
    fun transferService(): TransferService {
        return DefaultTransferService(accountRepository(), feePolicy())
    }

    @Bean
    fun accountRepository(): AccountRepository {
        return JdbcAccountRepository(dataSource)
    }

    @Bean
    fun feePolicy(): FeePolicy {
        return ZeroFeePolicy()
    }
}
```

```
@SpringJUnitConfig({
    TransferServiceConfig.class,
    StandaloneDataConfig.class,
    JndiDataConfig.class,
    DefaultDataConfig.class})
@ActiveProfiles("dev")
class TransferServiceTest {

    @Autowired
    TransferService transferService;

    @Test
    void testTransferService() {
        // test the transferService
    }
}
```

```
@SpringJUnitConfig(
    TransferServiceConfig::class,
    StandaloneDataConfig::class,
    JndiDataConfig::class,
    DefaultDataConfig::class)
@ActiveProfiles("dev")
class TransferServiceTest {

    @Autowired
    lateinit var transferService: TransferService

    @Test
    fun testTransferService() {
        // test the transferService
    }
}
```

In this variation, we have split the XML configuration into four independent `@Configuration` classes:

* `TransferServiceConfig`: Acquires a `dataSource` through dependency injection by using `@Autowired`.
* `StandaloneDataConfig`: Defines a `dataSource` for an embedded database suitable for developer tests.
* `JndiDataConfig`: Defines a `dataSource` that is retrieved from JNDI in a production environment.
* `DefaultDataConfig`: Defines a `dataSource` for a default embedded database, in case no profile is active.

As with the XML-based configuration example, we still annotate `TransferServiceTest` with `@ActiveProfiles("dev")`, but this time we specify all four configuration classes by using the `@SpringJUnitConfig` annotation. The body of the test class itself remains completely unchanged.

It is often the case that a single set of profiles is used across multiple test classes within a given project. Thus, to avoid duplicate declarations of the `@ActiveProfiles` annotation, you can declare `@ActiveProfiles` once on a base class, and subclasses automatically inherit the `@ActiveProfiles` configuration from the base class. In the following example, the declaration of `@ActiveProfiles` (as well as other annotations) has been moved to an abstract superclass, `AbstractIntegrationTest`:

```
@SpringJUnitConfig({
    TransferServiceConfig.class,
    StandaloneDataConfig.class,
    JndiDataConfig.class,
    DefaultDataConfig.class})
@ActiveProfiles("dev")
abstract class AbstractIntegrationTest {
}
```

```
@SpringJUnitConfig(
    TransferServiceConfig::class,
    StandaloneDataConfig::class,
    JndiDataConfig::class,
    DefaultDataConfig::class)
@ActiveProfiles("dev")
abstract class AbstractIntegrationTest {
}
```

```
// "dev" profile inherited from superclass
class TransferServiceTest extends AbstractIntegrationTest {
    // test body
}
```

```
// "dev" profile inherited from superclass
class TransferServiceTest : AbstractIntegrationTest() {
    // test body
}
```

`@ActiveProfiles` also supports an `inheritProfiles` attribute that can be used to disable the inheritance of active profiles, as the following example shows:

```
// "dev" profile overridden with "production"
@ActiveProfiles(profiles = "production", inheritProfiles = false)
class ProductionTransferServiceTest extends AbstractIntegrationTest {
    // test body
}
```

```
// "dev" profile overridden with "production"
@ActiveProfiles("production", inheritProfiles = false)
class ProductionTransferServiceTest : AbstractIntegrationTest() {
    // test body
}
```

Furthermore, it is sometimes necessary to resolve active profiles for tests programmatically instead of declaratively — for example, based on:

* The current operating system.
* Whether tests are being run on a continuous integration build server.
* The presence of certain environment variables.
* The presence of custom class-level annotations.
* Other concerns.
To resolve active bean definition profiles programmatically, you can implement a custom `ActiveProfilesResolver` and register it by using the `resolver` attribute of `@ActiveProfiles`. For further information, see the corresponding javadoc. The following example demonstrates how to implement and register a custom `OperatingSystemActiveProfilesResolver`:

```
// "dev" profile overridden programmatically via a custom resolver
@ActiveProfiles(
    resolver = OperatingSystemActiveProfilesResolver.class,
    inheritProfiles = false)
class TransferServiceTest extends AbstractIntegrationTest {
    // test body
}
```

```
// "dev" profile overridden programmatically via a custom resolver
@ActiveProfiles(
    resolver = OperatingSystemActiveProfilesResolver::class,
    inheritProfiles = false)
class TransferServiceTest : AbstractIntegrationTest() {
    // test body
}
```

```
public class OperatingSystemActiveProfilesResolver implements ActiveProfilesResolver {

    @Override
    public String[] resolve(Class<?> testClass) {
        String profile = ...;
        // determine the value of profile based on the operating system
        return new String[] {profile};
    }
}
```

```
class OperatingSystemActiveProfilesResolver : ActiveProfilesResolver {

    override fun resolve(testClass: Class<*>): Array<String> {
        val profile: String = ...
        // determine the value of profile based on the operating system
        return arrayOf(profile)
    }
}
```

The Spring Framework has first-class support for the notion of an environment with a hierarchy of property sources, and you can configure integration tests with test-specific property sources. In contrast to the `@PropertySource` annotation used on `@Configuration` classes, you can declare the `@TestPropertySource` annotation on a test class to declare resource locations for test properties files or inlined properties. These test property sources are added to the set of `PropertySources` in the `Environment` for the `ApplicationContext` loaded for the annotated integration test.

## Declaring Test Property Sources

You can configure test properties files by using the `locations` or `value` attribute of `@TestPropertySource`. Both traditional and XML-based properties file formats are supported — for example, `"classpath:/com/example/test.properties"` or `"file:///path/to/file.xml"`. A plain or relative path is treated as a classpath resource that is relative to the package in which the test class is defined. A path starting with a slash is treated as an absolute classpath resource (for example, `"/org/example/test.xml"`). A path that references a URL (for example, a path prefixed with `classpath:`, `file:`, or `http:`) is loaded by using the specified resource protocol. Resource location wildcards (such as `**/*.properties`) are not permitted: Each location must evaluate to exactly one `.properties` or `.xml` resource.

The following example uses a test properties file:

```
@ContextConfiguration
@TestPropertySource("/test.properties") (1)
class MyIntegrationTests {
    // class body...
}
```

You can configure inlined properties in the form of key-value pairs by using the `properties` attribute of `@TestPropertySource`, as shown in the next example. All key-value pairs are added to the enclosing `Environment` as a single test `PropertySource` with the highest precedence. The supported syntax for key-value pairs is the same as the syntax defined for entries in a Java properties file:

* `key=value`
* `key:value`
* `key value`

The following example sets two inlined properties:

```
@ContextConfiguration
@TestPropertySource(properties = {"timezone = GMT", "port: 4242"}) (1)
class MyIntegrationTests {
    // class body...
}
```
## Default Properties File Detection

If `@TestPropertySource` is declared as an empty annotation (that is, without explicit values for the `locations` or `properties` attributes), an attempt is made to detect a default properties file relative to the class that declared the annotation. For example, if the annotated test class is `com.example.MyTest`, the corresponding default properties file is `classpath:com/example/MyTest.properties`. If the default cannot be detected, an `IllegalStateException` is thrown.

## Precedence

Test properties have higher precedence than those defined in the operating system's environment, Java system properties, or property sources added by the application declaratively by using `@PropertySource` or programmatically. Thus, test properties can be used to selectively override properties loaded from system and application property sources. Furthermore, inlined properties have higher precedence than properties loaded from resource locations. Note, however, that properties registered via `@DynamicPropertySource` have higher precedence than those loaded via `@TestPropertySource`.

In the next example, the `timezone` and `port` properties and any properties defined in `"/test.properties"` override any properties of the same name that are defined in system and application property sources. Furthermore, if the `"/test.properties"` file defines entries for the `timezone` and `port` properties, those are overridden by the inlined properties declared by using the `properties` attribute. The following example shows how to specify properties both in a file and inline:

```
@ContextConfiguration
@TestPropertySource(
    locations = "/test.properties",
    properties = {"timezone = GMT", "port: 4242"}
)
class MyIntegrationTests {
    // class body...
}
```

```
@ContextConfiguration
@TestPropertySource("/test.properties",
    properties = ["timezone = GMT", "port: 4242"]
)
class MyIntegrationTests {
    // class body...
}
```

## Inheriting and Overriding Test Property Sources

`@TestPropertySource` supports boolean `inheritLocations` and `inheritProperties` attributes that denote whether resource locations for properties files and inlined properties declared by superclasses should be inherited. The default value for both flags is `true`. This means that a test class inherits the locations and inlined properties declared by any superclasses. Specifically, the locations and inlined properties for a test class are appended to the locations and inlined properties declared by superclasses. Thus, subclasses have the option of extending the locations and inlined properties. Note that properties that appear later shadow (that is, override) properties of the same name that appear earlier. In addition, the aforementioned precedence rules apply for inherited test property sources as well.

If the `inheritLocations` or `inheritProperties` attribute in `@TestPropertySource` is set to `false`, the locations or inlined properties, respectively, for the test class shadow and effectively replace the configuration defined by superclasses.

In the next example, the `ApplicationContext` for `BaseTest` is loaded by using only the `base.properties` file as a test property source. In contrast, the `ApplicationContext` for `ExtendedTest` is loaded by using the `base.properties` and `extended.properties` files as test property source locations. The following example shows how to define properties in both a subclass and its superclass by using `properties` files:

```
@TestPropertySource("base.properties")
@ContextConfiguration
class BaseTest {
    // ...
}

@TestPropertySource("extended.properties")
@ContextConfiguration
class ExtendedTest extends BaseTest {
    // ...
}
```
```
@TestPropertySource("base.properties")
@ContextConfiguration
open class BaseTest {
    // ...
}

@TestPropertySource("extended.properties")
@ContextConfiguration
class ExtendedTest : BaseTest() {
    // ...
}
```

In the next example, the `ApplicationContext` for `BaseTest` is loaded by using only the inlined `key1` property. In contrast, the `ApplicationContext` for `ExtendedTest` is loaded by using the inlined `key1` and `key2` properties. The following example shows how to define properties in both a subclass and its superclass by using inline properties:

```
@TestPropertySource(properties = "key1 = value1")
@ContextConfiguration
class BaseTest {
    // ...
}

@TestPropertySource(properties = "key2 = value2")
@ContextConfiguration
class ExtendedTest extends BaseTest {
    // ...
}
```

```
@TestPropertySource(properties = ["key1 = value1"])
@ContextConfiguration
open class BaseTest {
    // ...
}

@TestPropertySource(properties = ["key2 = value2"])
@ContextConfiguration
class ExtendedTest : BaseTest() {
    // ...
}
```

As of Spring Framework 5.2.5, the TestContext framework provides support for dynamic properties via the `@DynamicPropertySource` annotation. This annotation can be used in integration tests that need to add properties with dynamic values to the set of `PropertySources` in the `Environment` for the `ApplicationContext` loaded for the integration test.

In contrast to the `@TestPropertySource` annotation that is applied at the class level, `@DynamicPropertySource` must be applied to a `static` method that accepts a single `DynamicPropertyRegistry` argument, which is used to add name-value pairs to the `Environment`. Values are dynamic and provided via a `Supplier`, which is only invoked when the property is resolved. Typically, method references are used to supply values, as can be seen in the following example, which uses the Testcontainers project to manage a Redis container outside of the Spring `ApplicationContext`. The IP address and port of the managed Redis container are made available to components within the test's `ApplicationContext` via the `redis.host` and `redis.port` properties. These properties can be accessed via Spring's `Environment` abstraction or injected directly into Spring-managed components – for example, via `@Value("${redis.host}")` and `@Value("${redis.port}")`.

```
@SpringJUnitConfig(/* ... */)
@Testcontainers
class ExampleIntegrationTests {

    @Container
    static GenericContainer redis =
        new GenericContainer("redis:5.0.3-alpine").withExposedPorts(6379);

    @DynamicPropertySource
    static void redisProperties(DynamicPropertyRegistry registry) {
        registry.add("redis.host", redis::getHost);
        registry.add("redis.port", redis::getFirstMappedPort);
    }

    // tests which require redis...
}
```

```
@SpringJUnitConfig(/* ... */)
@Testcontainers
class ExampleIntegrationTests {

    companion object {

        @Container
        @JvmStatic
        val redis: GenericContainer<*> =
            GenericContainer("redis:5.0.3-alpine").withExposedPorts(6379)

        @DynamicPropertySource
        @JvmStatic
        fun redisProperties(registry: DynamicPropertyRegistry) {
            registry.add("redis.host", redis::getHost)
            registry.add("redis.port", redis::getFirstMappedPort)
        }
    }

    // tests which require redis...
}
```
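As a brief illustration of the injection route mentioned above, a Spring-managed component could consume the dynamically registered properties as in the following sketch; the `RedisConnectionInfo` class is hypothetical:

```
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Hypothetical component that receives the dynamically registered values
@Component
class RedisConnectionInfo {

    // Resolved lazily from the DynamicPropertyRegistry-backed property source
    @Value("${redis.host}")
    String host;

    @Value("${redis.port}")
    int port;
}
```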
Dynamic properties have higher precedence than those loaded from `@TestPropertySource`, the operating system's environment, Java system properties, or property sources added by the application declaratively by using `@PropertySource` or programmatically. Thus, dynamic properties can be used to selectively override properties loaded via `@TestPropertySource`, system property sources, and application property sources.

# Loading a `WebApplicationContext`

To instruct the TestContext framework to load a `WebApplicationContext` instead of a standard `ApplicationContext`, you can annotate the respective test class with `@WebAppConfiguration`.

The presence of `@WebAppConfiguration` on your test class instructs the TestContext framework (TCF) that a `WebApplicationContext` (WAC) should be loaded for your integration tests. In the background, the TCF makes sure that a `MockServletContext` is created and supplied to your test's WAC. By default, the base resource path for your `MockServletContext` is set to `src/main/webapp`. This is interpreted as a path relative to the root of your JVM (normally the path to your project). If you are familiar with the directory structure of a web application in a Maven project, you know that `src/main/webapp` is the default location for the root of your WAR. If you need to override this default, you can provide an alternate path to the `@WebAppConfiguration` annotation (for example, `@WebAppConfiguration("src/test/webapp")`). If you wish to reference a base resource path from the classpath instead of the file system, you can use Spring's `classpath:` prefix.

Note that Spring's testing support for `WebApplicationContext` implementations is on par with its support for standard `ApplicationContext` implementations. When testing with a `WebApplicationContext`, you are free to declare XML configuration files, Groovy scripts, or `@Configuration` classes by using `@ContextConfiguration`. You are also free to use any other test annotations, such as `@ActiveProfiles`, `@TestExecutionListeners`, `@Sql`, `@Rollback`, and others.

The remaining examples in this section show some of the various configuration options for loading a `WebApplicationContext`. The following example shows the TestContext framework's support for convention over configuration:

```
@ExtendWith(SpringExtension.class)
// defaults to "file:src/main/webapp"
@WebAppConfiguration
// detects "WacTests-context.xml" in the same package
// or static nested @Configuration classes
@ContextConfiguration
class WacTests {
    //...
}
```

Conventions: If you annotate a test class with `@WebAppConfiguration` without specifying a resource base path, the resource path effectively defaults to `file:src/main/webapp`. Similarly, if you declare `@ContextConfiguration` without specifying resource `locations`, component `classes`, or context `initializers`, Spring tries to detect the presence of your configuration by using conventions (that is, `WacTests-context.xml` in the same package as the `WacTests` class or static nested `@Configuration` classes).

The following example shows how to explicitly declare a resource base path with `@WebAppConfiguration` and an XML resource location with `@ContextConfiguration`:

```
@ExtendWith(SpringExtension.class)
// file system resource
@WebAppConfiguration("webapp")
// classpath resource
@ContextConfiguration("/spring/test-servlet-config.xml")
class WacTests {
    //...
}
```

Default resource semantics: The important thing to note here is the different semantics for paths with these two annotations. By default, `@WebAppConfiguration` resource paths are file system based, whereas `@ContextConfiguration` resource locations are classpath based.

The following example shows that we can override the default resource semantics for both annotations by specifying a Spring resource prefix:

```
@ExtendWith(SpringExtension.class)
// classpath resource
@WebAppConfiguration("classpath:test-web-resources")
// file system resource
@ContextConfiguration("file:src/main/webapp/WEB-INF/servlet-config.xml")
class WacTests {
    //...
}
```

Explicit resource semantics: Contrast the comments in this example with the previous example.

To provide comprehensive web testing support, the TestContext framework has a `ServletTestExecutionListener` that is enabled by default. When testing against a `WebApplicationContext`, this `TestExecutionListener` sets up default thread-local state by using Spring Web's `RequestContextHolder` before each test method and creates a `MockHttpServletRequest`, a `MockHttpServletResponse`, and a `ServletWebRequest` based on the base resource path configured with `@WebAppConfiguration`. `ServletTestExecutionListener` also ensures that the `MockHttpServletResponse` and `ServletWebRequest` can be injected into the test instance, and, once the test is complete, it cleans up thread-local state.

Once you have a `WebApplicationContext` loaded for your test, you might find that you need to interact with the web mocks — for example, to set up your test fixture or to perform assertions after invoking your web component. The following example shows which mocks can be autowired into your test instance.
Note that the `WebApplicationContext` and `MockServletContext` are both cached across the test suite, whereas the other mocks are managed per test method by the `ServletTestExecutionListener`.

Injecting mocks:

```
@SpringJUnitWebConfig
class WacTests {

    @Autowired
    WebApplicationContext wac; // cached

    @Autowired
    MockServletContext servletContext; // cached

    @Autowired
    MockHttpSession session;

    @Autowired
    MockHttpServletRequest request;

    @Autowired
    MockHttpServletResponse response;

    @Autowired
    ServletWebRequest webRequest;

    //...
}
```

```
@SpringJUnitWebConfig
class WacTests {

    @Autowired
    lateinit var wac: WebApplicationContext // cached

    @Autowired
    lateinit var servletContext: MockServletContext // cached

    @Autowired
    lateinit var session: MockHttpSession

    @Autowired
    lateinit var request: MockHttpServletRequest

    @Autowired
    lateinit var response: MockHttpServletResponse

    @Autowired
    lateinit var webRequest: ServletWebRequest

    //...
}
```

Once the TestContext framework loads an `ApplicationContext` (or `WebApplicationContext`) for a test, that context is cached and reused for all subsequent tests that declare the same unique context configuration within the same test suite. To understand how caching works, it is important to understand what is meant by "unique" and "test suite."

An `ApplicationContext` can be uniquely identified by the combination of configuration parameters that is used to load it. Consequently, the unique combination of configuration parameters is used to generate a key under which the context is cached. The TestContext framework uses the following configuration parameters to build the context cache key:

* `locations` (from `@ContextConfiguration`)
* `classes` (from `@ContextConfiguration`)
* `contextInitializerClasses` (from `@ContextConfiguration`)
* `contextCustomizers` (from `ContextCustomizerFactory`) – this includes `@DynamicPropertySource` methods as well as various features from Spring Boot's testing support such as `@MockBean` and `@SpyBean`.
* `contextLoader` (from `@ContextConfiguration`)
* `parent` (from `@ContextHierarchy`)
* `activeProfiles` (from `@ActiveProfiles`)
* `propertySourceLocations` (from `@TestPropertySource`)
* `propertySourceProperties` (from `@TestPropertySource`)
* `resourceBasePath` (from `@WebAppConfiguration`)

For example, if `TestClassA` specifies `{"app-config.xml", "test-config.xml"}` for the `locations` (or `value`) attribute of `@ContextConfiguration`, the TestContext framework loads the corresponding `ApplicationContext` and stores it in a `static` context cache under a key that is based solely on those locations. So, if `TestClassB` also defines `{"app-config.xml", "test-config.xml"}` for its locations (either explicitly or implicitly through inheritance) but does not define `@WebAppConfiguration`, a different `ContextLoader`, different active profiles, different context initializers, different test property sources, or a different parent context, then the same `ApplicationContext` is shared by both test classes. This means that the setup cost for loading an application context is incurred only once (per test suite), and subsequent test execution is much faster.
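To make the sharing concrete, here is a minimal sketch of the scenario just described (the file names come from the prose above; everything else is assumed for illustration):

```
@ExtendWith(SpringExtension.class)
@ContextConfiguration({"app-config.xml", "test-config.xml"})
class TestClassA {
    // tests...
}

// Same locations and no other differing configuration parameters:
// TestClassB reuses the ApplicationContext cached for TestClassA.
@ExtendWith(SpringExtension.class)
@ContextConfiguration({"app-config.xml", "test-config.xml"})
class TestClassB {
    // tests...
}
```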
The size of the context cache is bounded with a default maximum size of 32. Whenever the maximum size is reached, a least recently used (LRU) eviction policy is used to evict and close stale contexts. You can configure the maximum size from the command line or a build script by setting a JVM system property named `spring.test.context.cache.maxSize`. As an alternative, you can set the same property via the `SpringProperties` mechanism.

Since having a large number of application contexts loaded within a given test suite can cause the suite to take an unnecessarily long time to run, it is often beneficial to know exactly how many contexts have been loaded and cached. To view the statistics for the underlying context cache, you can set the log level for the `org.springframework.test.context.cache` logging category to `DEBUG`.

In the unlikely case that a test corrupts the application context and requires reloading (for example, by modifying a bean definition or the state of an application object), you can annotate your test class or test method with `@DirtiesContext` (see the discussion of `@DirtiesContext` in Spring Testing Annotations). This instructs Spring to remove the context from the cache and rebuild the application context before running the next test that requires the same application context. Note that support for the `@DirtiesContext` annotation is provided by the `DirtiesContextBeforeModesTestExecutionListener` and the `DirtiesContextTestExecutionListener`, which are enabled by default.

When writing integration tests that rely on a loaded Spring `ApplicationContext`, it is often sufficient to test against a single context. However, there are times when it is beneficial or even necessary to test against a hierarchy of `ApplicationContext` instances. For example, if you are developing a Spring MVC web application, you typically have a root `WebApplicationContext` loaded by Spring's `ContextLoaderListener` and a child `WebApplicationContext` loaded by Spring's `DispatcherServlet`. This results in a parent-child context hierarchy where shared components and infrastructure configuration are declared in the root context and consumed in the child context by web-specific components. Another use case can be found in Spring Batch applications, where you often have a parent context that provides configuration for shared batch infrastructure and a child context for the configuration of a specific batch job.

You can write integration tests that use context hierarchies by declaring context configuration with the `@ContextHierarchy` annotation, either on an individual test class or within a test class hierarchy. If a context hierarchy is declared on multiple classes within a test class hierarchy, you can also merge or override the context configuration for a specific, named level in the context hierarchy. When merging configuration for a given level in the hierarchy, the configuration resource type (that is, XML configuration files or component classes) must be consistent. Otherwise, it is perfectly acceptable to have different levels in a context hierarchy configured using different resource types.

The remaining JUnit Jupiter based examples in this section show common configuration scenarios for integration tests that require the use of context hierarchies.

Single test class with context hierarchy: `ControllerIntegrationTests` represents a typical integration testing scenario for a Spring MVC web application by declaring a context hierarchy that consists of two levels, one for the root `WebApplicationContext` (loaded by using the `TestAppConfig` `@Configuration` class) and one for the dispatcher servlet `WebApplicationContext` (loaded by using the `WebConfig` `@Configuration` class). The `WebApplicationContext` that is autowired into the test instance is the one for the child context (that is, the lowest context in the hierarchy).
The following listing shows this configuration scenario:

```
@ExtendWith(SpringExtension.class)
@WebAppConfiguration
@ContextHierarchy({
    @ContextConfiguration(classes = TestAppConfig.class),
    @ContextConfiguration(classes = WebConfig.class)
})
class ControllerIntegrationTests {

    @Autowired
    WebApplicationContext wac;

    // ...
}
```

```
@ExtendWith(SpringExtension::class)
@WebAppConfiguration
@ContextHierarchy(
    ContextConfiguration(classes = [TestAppConfig::class]),
    ContextConfiguration(classes = [WebConfig::class]))
class ControllerIntegrationTests {

    @Autowired
    lateinit var wac: WebApplicationContext

    // ...
}
```

Class hierarchy with implicit parent context: The test classes in this example define a context hierarchy within a test class hierarchy. `AbstractWebTests` declares the configuration for a root `WebApplicationContext` in a Spring-powered web application. Note, however, that `AbstractWebTests` does not declare `@ContextHierarchy`. Consequently, subclasses of `AbstractWebTests` can optionally participate in a context hierarchy or follow the standard semantics for `@ContextConfiguration`. `SoapWebServiceTests` and `RestWebServiceTests` both extend `AbstractWebTests` and define a context hierarchy by using `@ContextHierarchy`. The result is that three application contexts are loaded (one for each declaration of `@ContextConfiguration`), and the application context loaded based on the configuration in `AbstractWebTests` is set as the parent context for each of the contexts loaded for the concrete subclasses. The following listing shows this configuration scenario:

```
@ExtendWith(SpringExtension.class)
@WebAppConfiguration
@ContextConfiguration("file:src/main/webapp/WEB-INF/applicationContext.xml")
public abstract class AbstractWebTests {}

@ContextHierarchy(@ContextConfiguration("/spring/soap-ws-config.xml"))
public class SoapWebServiceTests extends AbstractWebTests {}

@ContextHierarchy(@ContextConfiguration("/spring/rest-ws-config.xml"))
public class RestWebServiceTests extends AbstractWebTests {}
```

```
@ExtendWith(SpringExtension::class)
@WebAppConfiguration
@ContextConfiguration("file:src/main/webapp/WEB-INF/applicationContext.xml")
abstract class AbstractWebTests

@ContextHierarchy(ContextConfiguration("/spring/soap-ws-config.xml"))
class SoapWebServiceTests : AbstractWebTests()

@ContextHierarchy(ContextConfiguration("/spring/rest-ws-config.xml"))
class RestWebServiceTests : AbstractWebTests()
```

Class hierarchy with merged context hierarchy configuration: The classes in this example show the use of named hierarchy levels in order to merge the configuration for specific levels in a context hierarchy. `BaseTests` defines two levels in the hierarchy, `parent` and `child`. `ExtendedTests` extends `BaseTests` and instructs the Spring TestContext Framework to merge the context configuration for the `child` hierarchy level, by ensuring that the names declared in the `name` attribute in `@ContextConfiguration` are both `child`. The result is that three application contexts are loaded: one for `/app-config.xml`, one for `/user-config.xml`, and one for `{"/user-config.xml", "/order-config.xml"}`. As with the previous example, the application context loaded from `/app-config.xml` is set as the parent context for the contexts loaded from `/user-config.xml` and `{"/user-config.xml", "/order-config.xml"}`.
The following listing shows this configuration scenario:

```
@ExtendWith(SpringExtension.class)
@ContextHierarchy({
    @ContextConfiguration(name = "parent", locations = "/app-config.xml"),
    @ContextConfiguration(name = "child", locations = "/user-config.xml")
})
class BaseTests {}

@ContextHierarchy(
    @ContextConfiguration(name = "child", locations = "/order-config.xml")
)
class ExtendedTests extends BaseTests {}
```

```
@ExtendWith(SpringExtension::class)
@ContextHierarchy(
    ContextConfiguration(name = "parent", locations = ["/app-config.xml"]),
    ContextConfiguration(name = "child", locations = ["/user-config.xml"]))
open class BaseTests {}

@ContextHierarchy(
    ContextConfiguration(name = "child", locations = ["/order-config.xml"])
)
class ExtendedTests : BaseTests() {}
```

Class hierarchy with overridden context hierarchy configuration: In contrast to the previous example, this example demonstrates how to override the configuration for a given named level in a context hierarchy by setting the `inheritLocations` flag in `@ContextConfiguration` to `false`. Consequently, the application context for `ExtendedTests` is loaded only from `/test-user-config.xml` and has its parent set to the context loaded from `/app-config.xml`. The following listing shows this configuration scenario:

```
@ContextHierarchy(
    @ContextConfiguration(
        name = "child",
        locations = "/test-user-config.xml",
        inheritLocations = false
))
class ExtendedTests extends BaseTests {}
```

```
@ContextHierarchy(
    ContextConfiguration(
        name = "child",
        locations = ["/test-user-config.xml"],
        inheritLocations = false
))
class ExtendedTests : BaseTests() {}
```

Dirtying a context within a context hierarchy: If you use `@DirtiesContext` in a test whose context is configured as part of a context hierarchy, you can use the `hierarchyMode` flag to control how the context cache is cleared. For further details, see the discussion of `@DirtiesContext` in Spring Testing Annotations and the `@DirtiesContext` javadoc.

When you use the `DependencyInjectionTestExecutionListener` (which is configured by default), the dependencies of your test instances are injected from beans in the application context that you configured with `@ContextConfiguration` or related annotations. You may use setter injection, field injection, or both, depending on which annotations you choose and whether you place them on setter methods or fields. If you are using JUnit Jupiter you may also optionally use constructor injection (see Dependency Injection with `SpringExtension`). For consistency with Spring's annotation-based injection support, you may also use Spring's `@Autowired` annotation or the `@Inject` annotation from JSR-330 for field and setter injection.

Note: For testing frameworks other than JUnit Jupiter, the TestContext framework does not participate in instantiation of the test class. Thus, the use of `@Autowired` or `@Inject` for constructors has no effect for test classes.

Note: Although field injection is discouraged in production code, field injection is actually quite natural in test code. The rationale for the difference is that you will never instantiate your test class directly. Consequently, there is no need to be able to invoke a `public` constructor or setter method on your test class.

Because `@Autowired` is used to perform autowiring by type, if you have multiple bean definitions of the same type, you cannot rely on this approach for those particular beans. In that case, you can use `@Autowired` in conjunction with `@Qualifier`. You can also choose to use `@Inject` in conjunction with `@Named`. Alternatively, if your test class has access to its `ApplicationContext`, you can perform an explicit lookup by using (for example) a call to `applicationContext.getBean("titleRepository", TitleRepository.class)`.

If you do not want dependency injection applied to your test instances, do not annotate fields or setter methods with `@Autowired` or `@Inject`. Alternatively, you can disable dependency injection altogether by explicitly configuring your class with `@TestExecutionListeners` and omitting `DependencyInjectionTestExecutionListener.class` from the list of listeners, as the sketch below shows.
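A minimal sketch of that opt-out, assuming the remaining default listeners are still wanted (the listener list shown here is illustrative rather than exhaustive):

```
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener;
import org.springframework.test.context.support.DirtiesContextTestExecutionListener;
import org.springframework.test.context.transaction.TransactionalTestExecutionListener;

@ContextConfiguration("/app-config.xml")
// DependencyInjectionTestExecutionListener is deliberately omitted,
// so no fields or setters in this class will be autowired
@TestExecutionListeners({
    DirtiesContextBeforeModesTestExecutionListener.class,
    DirtiesContextTestExecutionListener.class,
    TransactionalTestExecutionListener.class
})
class NoDependencyInjectionTests {
    // test methods...
}
```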
Consider the scenario of testing a `HibernateTitleRepository` class, as outlined in the Goals section. The next two code listings demonstrate the use of `@Autowired` on fields and setter methods. The application context configuration is presented after all sample code listings.

The first code listing shows a JUnit Jupiter based implementation of the test class that uses `@Autowired` for field injection:

```
@ExtendWith(SpringExtension.class)
// specifies the Spring configuration to load for this test fixture
@ContextConfiguration("repository-config.xml")
class HibernateTitleRepositoryTests {

    // this instance will be dependency injected by type
    @Autowired
    HibernateTitleRepository titleRepository;

    @Test
    void findById() {
        Title title = titleRepository.findById(new Long(10));
        assertNotNull(title);
    }
}
```

Alternatively, you can configure the class to use `@Autowired` for setter injection, as follows:

```
@ExtendWith(SpringExtension.class)
// specifies the Spring configuration to load for this test fixture
@ContextConfiguration("repository-config.xml")
class HibernateTitleRepositoryTests {

    // this instance will be dependency injected by type
    HibernateTitleRepository titleRepository;

    @Autowired
    void setTitleRepository(HibernateTitleRepository titleRepository) {
        this.titleRepository = titleRepository;
    }

    @Test
    void findById() {
        Title title = titleRepository.findById(new Long(10));
        assertNotNull(title);
    }
}
```

```
@ExtendWith(SpringExtension::class)
// specifies the Spring configuration to load for this test fixture
@ContextConfiguration("repository-config.xml")
class HibernateTitleRepositoryTests {

    // this instance will be dependency injected by type
    lateinit var titleRepository: HibernateTitleRepository

    @Autowired
    fun setTitleRepository(titleRepository: HibernateTitleRepository) {
        this.titleRepository = titleRepository
    }

    @Test
    fun findById() {
        val title = titleRepository.findById(10)
        assertNotNull(title)
    }
}
```

The preceding code listings use the same XML context file referenced by the `@ContextConfiguration` annotation (that is, `repository-config.xml`). The following shows this configuration:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- this bean will be injected into the HibernateTitleRepositoryTests class -->
    <bean id="titleRepository" class="com.foo.repository.hibernate.HibernateTitleRepository">
        <property name="sessionFactory" ref="sessionFactory"/>
    </bean>

    <bean id="sessionFactory" class="org.springframework.orm.hibernate5.LocalSessionFactoryBean">
        <!-- configuration elided for brevity -->
    </bean>

</beans>
```

Spring has supported request- and session-scoped beans since the early years, and you can test your request-scoped and session-scoped beans by following these steps:

* Ensure that a `WebApplicationContext` is loaded for your test by annotating your test class with `@WebAppConfiguration`.
* Inject the mock request or session into your test instance and prepare your test fixture as appropriate.
* Invoke your web component that you retrieved from the configured `WebApplicationContext` (with dependency injection).
* Perform assertions against the mocks.

The next code snippet shows the XML configuration for a login use case. Note that the `userService` bean has a dependency on a request-scoped `loginAction` bean. Also, the `LoginAction` is instantiated by using SpEL expressions that retrieve the username and password from the current HTTP request. In our test, we want to configure these request parameters through the mock request managed by the TestContext framework. The following listing shows the configuration for this use case:

```
<beans>

    <bean id="userService" class="com.example.SimpleUserService"
            c:loginAction-ref="loginAction"/>

    <bean id="loginAction" class="com.example.LoginAction"
            c:username="#{request.getParameter('user')}"
            c:password="#{request.getParameter('pswd')}"
            scope="request">
        <aop:scoped-proxy/>
    </bean>

</beans>
```

In `RequestScopedBeanTests`, we inject both the `UserService` (that is, the subject under test) and the `MockHttpServletRequest` into our test instance. Within our `requestScope()` test method, we set up our test fixture by setting request parameters in the provided `MockHttpServletRequest`. When the `loginUser()` method is invoked on our `userService`, we are assured that the user service has access to the request-scoped `loginAction` for the current `MockHttpServletRequest` (that is, the one in which we just set parameters). We can then perform assertions against the results based on the known inputs for the username and password.
The following listing shows how to do so:

Request-scoped bean test:

```
@SpringJUnitWebConfig
class RequestScopedBeanTests {

    @Autowired
    UserService userService;

    @Autowired
    MockHttpServletRequest request;

    @Test
    void requestScope() {
        request.setParameter("user", "enigma");
        request.setParameter("pswd", "$pr!ng");

        LoginResults results = userService.loginUser();

        // assert results
    }
}
```

```
@SpringJUnitWebConfig
class RequestScopedBeanTests {

    @Autowired
    lateinit var userService: UserService

    @Autowired
    lateinit var request: MockHttpServletRequest

    @Test
    fun requestScope() {
        request.setParameter("user", "enigma")
        request.setParameter("pswd", "\$pr!ng")

        val results = userService.loginUser()

        // assert results
    }
}
```

The following code snippet is similar to the one we saw earlier for a request-scoped bean. However, this time, the `userService` bean has a dependency on a session-scoped `userPreferences` bean. Note that the `UserPreferences` bean is instantiated by using a SpEL expression that retrieves the theme from the current HTTP session. In our test, we need to configure a theme in the mock session managed by the TestContext framework. The following example shows how to do so:

```
<beans>

    <bean id="userService" class="com.example.SimpleUserService"
            c:userPreferences-ref="userPreferences"/>

    <bean id="userPreferences" class="com.example.UserPreferences"
            c:theme="#{session.getAttribute('theme')}"
            scope="session">
        <aop:scoped-proxy/>
    </bean>

</beans>
```

In `SessionScopedBeanTests`, we inject the `UserService` and the `MockHttpSession` into our test instance. Within our `sessionScope()` test method, we set up our test fixture by setting the expected `theme` attribute in the provided `MockHttpSession`. When the `processUserPreferences()` method is invoked on our `userService`, we are assured that the user service has access to the session-scoped `userPreferences` for the current `MockHttpSession`, and we can perform assertions against the results based on the configured theme. The following example shows how to do so:

Session-scoped bean test:

```
@SpringJUnitWebConfig
class SessionScopedBeanTests {

    @Autowired
    UserService userService;

    @Autowired
    MockHttpSession session;

    @Test
    void sessionScope() throws Exception {
        session.setAttribute("theme", "blue");

        Results results = userService.processUserPreferences();

        // assert results
    }
}
```

```
@SpringJUnitWebConfig
class SessionScopedBeanTests {

    @Autowired
    lateinit var userService: UserService

    @Autowired
    lateinit var session: MockHttpSession

    @Test
    fun sessionScope() {
        session.setAttribute("theme", "blue")

        val results = userService.processUserPreferences()

        // assert results
    }
}
```

In the TestContext framework, transactions are managed by the `TransactionalTestExecutionListener`, which is configured by default, even if you do not explicitly declare `@TestExecutionListeners` on your test class. To enable support for transactions, however, you must configure a `PlatformTransactionManager` bean in the `ApplicationContext` that is loaded with `@ContextConfiguration` semantics (further details are provided later). In addition, you must declare Spring's `@Transactional` annotation either at the class or the method level for your tests.

## Test-managed Transactions

Test-managed transactions are transactions that are managed declaratively by using the `TransactionalTestExecutionListener` or programmatically by using `TestTransaction` (described later). You should not confuse such transactions with Spring-managed transactions (those managed directly by Spring within the `ApplicationContext` loaded for tests) or application-managed transactions (those managed programmatically within application code that is invoked by tests). Spring-managed and application-managed transactions typically participate in test-managed transactions.
However, you should use caution if Spring-managed or application-managed transactions are configured with any propagation type other than `REQUIRED` or `SUPPORTS` (see the discussion on transaction propagation for details).

## Enabling and Disabling Transactions

Annotating a test method with `@Transactional` causes the test to be run within a transaction that is, by default, automatically rolled back after completion of the test. If a test class is annotated with `@Transactional`, each test method within that class hierarchy runs within a transaction. Test methods that are not annotated with `@Transactional` (at the class or method level) are not run within a transaction. Note that `@Transactional` is not supported on test lifecycle methods — for example, methods annotated with JUnit Jupiter's `@BeforeAll`, `@BeforeEach`, etc. Furthermore, tests that are annotated with `@Transactional` but have the `propagation` attribute set to `NOT_SUPPORTED` or `NEVER` are not run within a transaction.

The following table summarizes which `@Transactional` attributes are supported for test-managed transactions:

| Attribute | Supported for test-managed transactions |
| --- | --- |
| `value` and `transactionManager` | yes |
| `propagation` | only `Propagation.NOT_SUPPORTED` and `Propagation.NEVER` are supported |
| `isolation` | no |
| `timeout` | no |
| `readOnly` | no |
| `rollbackFor` and `rollbackForClassName` | no: use `TestTransaction.flagForCommit()` instead |
| `noRollbackFor` and `noRollbackForClassName` | no: use `TestTransaction.flagForRollback()` instead |

Note: `AbstractTransactionalJUnit4SpringContextTests` and `AbstractTransactionalTestNGSpringContextTests` are preconfigured for transactional support at the class level.

The following example demonstrates a common scenario for writing an integration test for a Hibernate-based `UserRepository`:

```
@SpringJUnitConfig(TestConfig.class)
@Transactional
class HibernateUserRepositoryTests {

    @Autowired
    HibernateUserRepository repository;

    @Autowired
    SessionFactory sessionFactory;

    JdbcTemplate jdbcTemplate;

    @Autowired
    void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    @Test
    void createUser() {
        // track initial state in test database:
        final int count = countRowsInTable("user");

        User user = new User(...);
        repository.save(user);

        // Manual flush is required to avoid false positive in test
        sessionFactory.getCurrentSession().flush();
        assertNumUsers(count + 1);
    }

    // countRowsInTable() and assertNumUsers() omitted for brevity
}
```

```
@SpringJUnitConfig(TestConfig::class)
@Transactional
class HibernateUserRepositoryTests {

    @Autowired
    lateinit var repository: HibernateUserRepository

    @Autowired
    lateinit var sessionFactory: SessionFactory

    lateinit var jdbcTemplate: JdbcTemplate

    @Autowired
    fun setDataSource(dataSource: DataSource) {
        this.jdbcTemplate = JdbcTemplate(dataSource)
    }

    @Test
    fun createUser() {
        // track initial state in test database:
        val count = countRowsInTable("user")

        val user = User()
        repository.save(user)

        // Manual flush is required to avoid false positive in test
        sessionFactory.getCurrentSession().flush()
        assertNumUsers(count + 1)
    }

    // countRowsInTable() and assertNumUsers() omitted for brevity
}
```

As explained in Transaction Rollback and Commit Behavior, there is no need to clean up the database after the `createUser()` method runs, since any changes made to the database are automatically rolled back by the `TransactionalTestExecutionListener`.

## Transaction Rollback and Commit Behavior

By default, test transactions will be automatically rolled back after completion of the test; however, transactional commit and rollback behavior can be configured declaratively via the `@Commit` and `@Rollback` annotations. See the corresponding entries in the annotation support section for further details.
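A minimal sketch of both annotations in one test class (the configuration class and test bodies are assumed for illustration):

```
@SpringJUnitConfig(TestConfig.class)
@Transactional
class CommitRollbackBehaviorTests {

    @Test
    @Commit // changes made by this test are committed, not rolled back
    void insertReferenceData() {
        // populate lookup tables that later tests rely on
    }

    @Test
    @Rollback // explicit, although rollback is already the default
    void insertTemporaryData() {
        // changes are discarded when the test completes
    }
}
```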
```
@ContextConfiguration(classes = TestConfig.class)
public class ProgrammaticTransactionManagementTests extends
        AbstractTransactionalJUnit4SpringContextTests {

    @Test
    public void transactionalTest() {
        // assert initial state in test database:
        assertNumUsers(2);

        deleteFromTables("user");

        // changes to the database will be committed!
        TestTransaction.flagForCommit();
        TestTransaction.end();
        assertFalse(TestTransaction.isActive());
        assertNumUsers(0);

        TestTransaction.start();
        // perform other actions against the database that will
        // be automatically rolled back after the test completes...
    }
}
```

```
@ContextConfiguration(classes = [TestConfig::class])
class ProgrammaticTransactionManagementTests : AbstractTransactionalJUnit4SpringContextTests() {

    @Test
    fun transactionalTest() {
        // assert initial state in test database:
        assertNumUsers(2)

        deleteFromTables("user")

        // changes to the database will be committed!
        TestTransaction.flagForCommit()
        TestTransaction.end()
        assertFalse(TestTransaction.isActive())
        assertNumUsers(0)

        TestTransaction.start()
        // perform other actions against the database that will
        // be automatically rolled back after the test completes...
    }
}
```

## Running Code Outside of a Transaction

Occasionally, you may need to run certain code before or after a transactional test method but outside the transactional context (for example, to verify the initial database state prior to running your test or to verify expected transactional commit behavior after your test runs, if the test was configured to commit the transaction). The TestContext framework supports the `@BeforeTransaction` and `@AfterTransaction` annotations for exactly such scenarios. You can annotate any `void` method in a test class or any `void` default method in a test interface with one of these annotations, and the `TransactionalTestExecutionListener` ensures that your before transaction method or after transaction method runs at the appropriate time.

Any before methods (such as methods annotated with JUnit Jupiter's `@BeforeEach`) and any after methods (such as methods annotated with JUnit Jupiter's `@AfterEach`) are run within a transaction.

## Configuring a Transaction Manager

`TransactionalTestExecutionListener` expects a `PlatformTransactionManager` bean to be defined in the Spring `ApplicationContext` for the test. If there are multiple instances of `PlatformTransactionManager` within the test's `ApplicationContext`, you can declare a qualifier by using `@Transactional("myTxMgr")` or `@Transactional(transactionManager = "myTxMgr")`, or `TransactionManagementConfigurer` can be implemented by an `@Configuration` class. Consult the javadoc for `TestContextTransactionUtils.retrieveTransactionManager()` for details on the algorithm used to look up a transaction manager in the test's `ApplicationContext`.

## Demonstration of All Transaction-related Annotations

The following JUnit Jupiter based example displays a fictitious integration testing scenario that highlights all transaction-related annotations. The example is not intended to demonstrate best practices but rather to demonstrate how these annotations can be used. See the annotation support section for further information and configuration examples. Transaction management for `@Sql` contains an additional example that uses `@Sql` for declarative SQL script execution with default transaction rollback semantics.

When writing integration tests against a relational database, it is often beneficial to run SQL scripts to modify the database schema or insert test data into tables. The `spring-jdbc` module provides support for initializing an embedded or existing database by executing SQL scripts when the Spring `ApplicationContext` is loaded.
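As a quick illustration of that mechanism, the following is a minimal sketch of script-based initialization with an embedded database (the script names are assumptions for this example):

```
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabase;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

// Build an in-memory H2 database and run the given classpath scripts on startup.
EmbeddedDatabase db = new EmbeddedDatabaseBuilder()
        .setType(EmbeddedDatabaseType.H2)
        .addScript("test-schema.sql")  // hypothetical script on the classpath
        .addScript("test-data.sql")    // hypothetical script on the classpath
        .build();

// The EmbeddedDatabase can be used wherever a javax.sql.DataSource is expected.
// Shut it down when it is no longer needed:
db.shutdown();
```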
See Embedded database support and Testing data access logic with an embedded database for details. Although it is very useful to initialize a database for testing once when the `ApplicationContext` is loaded, sometimes it is essential to be able to modify the database during integration tests. The following sections explain how to run SQL scripts programmatically and declaratively during integration tests.

## Executing SQL scripts programmatically

Spring provides the following options for executing SQL scripts programmatically within integration test methods.

* `org.springframework.jdbc.datasource.init.ScriptUtils`
* `org.springframework.jdbc.datasource.init.ResourceDatabasePopulator`
* `org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests`
* `org.springframework.test.context.testng.AbstractTransactionalTestNGSpringContextTests`

`ScriptUtils` provides a collection of static utility methods for working with SQL scripts and is mainly intended for internal use within the framework. However, if you require full control over how SQL scripts are parsed and run, `ScriptUtils` may suit your needs better than some of the other alternatives described later. See the javadoc for individual methods in `ScriptUtils` for further details.

`ResourceDatabasePopulator` provides an object-based API for programmatically populating, initializing, or cleaning up a database by using SQL scripts defined in external resources. It provides options for configuring the character encoding, statement separator, comment delimiters, and error handling flags used when parsing and running the scripts. Each of the configuration options has a reasonable default value. See the javadoc for details on default values. To run the scripts configured in a `ResourceDatabasePopulator`, you can invoke either the `populate(Connection)` method to run the populator against a `java.sql.Connection` or the `execute(DataSource)` method to run the populator against a `javax.sql.DataSource`. The following example specifies SQL scripts for a test schema and test data, sets the statement separator to `@@`, and runs the scripts against a `DataSource`:

```
@Test
void databaseTest() {
    ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
    populator.addScripts(
            new ClassPathResource("test-schema.sql"),
            new ClassPathResource("test-data.sql"));
    populator.setSeparator("@@");
    populator.execute(this.dataSource);
    // run code that uses the test schema and data
}
```

```
@Test
fun databaseTest() {
    val populator = ResourceDatabasePopulator()
    populator.addScripts(
            ClassPathResource("test-schema.sql"),
            ClassPathResource("test-data.sql"))
    populator.setSeparator("@@")
    populator.execute(dataSource)
    // run code that uses the test schema and data
}
```

`ResourceDatabasePopulator` internally delegates to `ScriptUtils` for parsing and running SQL scripts. Similarly, the `executeSqlScript(..)` methods in `AbstractTransactionalJUnit4SpringContextTests` and `AbstractTransactionalTestNGSpringContextTests` internally use a `ResourceDatabasePopulator` to run SQL scripts. See the javadoc for the various `executeSqlScript(..)` methods for further details.

## Executing SQL scripts declaratively with @Sql

In addition to the aforementioned mechanisms for running SQL scripts programmatically, you can declaratively configure SQL scripts in the Spring TestContext Framework. Specifically, you can declare the `@Sql` annotation on a test class or test method to configure individual SQL statements or the resource paths to SQL scripts that should be run against a given database before or after an integration test method. Support for `@Sql` is provided by the `SqlScriptsTestExecutionListener`, which is enabled by default.
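For instance, the following sketch runs an inline statement before a single test method (the table and column names are assumptions for this example):

```
@Test
@Sql(statements = "INSERT INTO user(name) VALUES('Jane')")  // hypothetical table and column
void statementsTest() {
    // run code that relies on the inserted row
}
```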
Method-level `@Sql` declarations override class-level declarations by default. As of Spring Framework 5.2, this behavior may be configured per test class or per test method via `@SqlMergeMode`.

### Path Resource Semantics

Each path is interpreted as a Spring `Resource`. A plain path (for example, `"schema.sql"`) is treated as a classpath resource that is relative to the package in which the test class is defined. A path starting with a slash is treated as an absolute classpath resource (for example, `"/org/example/schema.sql"`). A path that references a URL (for example, a path prefixed with `classpath:`, `file:`, `http:`) is loaded by using the specified resource protocol.

The following example shows how to use `@Sql` at the class level and at the method level within a JUnit Jupiter based integration test class:

```
@SpringJUnitConfig
@Sql("/test-schema.sql")
class DatabaseTests {

    @Test
    void emptySchemaTest() {
        // run code that uses the test schema without any test data
    }

    @Test
    @Sql({"/test-schema.sql", "/test-user-data.sql"})
    void userTest() {
        // run code that uses the test schema and test data
    }
}
```

```
@SpringJUnitConfig
@Sql("/test-schema.sql")
class DatabaseTests {

    @Test
    fun emptySchemaTest() {
        // run code that uses the test schema without any test data
    }

    @Test
    @Sql("/test-schema.sql", "/test-user-data.sql")
    fun userTest() {
        // run code that uses the test schema and test data
    }
}
```

### Default Script Detection

If no SQL scripts or statements are specified, an attempt is made to detect a `default` script, depending on where `@Sql` is declared. If a default cannot be detected, an `IllegalStateException` is thrown.

* Class-level declaration: If the annotated test class is `com.example.MyTest`, the corresponding default script is `classpath:com/example/MyTest.sql`.
* Method-level declaration: If the annotated test method is named `testMethod()` and is defined in the class `com.example.MyTest`, the corresponding default script is `classpath:com/example/MyTest.testMethod.sql`.

### Declaring Multiple `@Sql` Sets

If you need to configure multiple sets of SQL scripts for a given test class or test method but with different syntax configuration, different error handling rules, or different execution phases per set, you can declare multiple instances of `@Sql`. With Java 8, you can use `@Sql` as a repeatable annotation; otherwise, you can use the `@SqlGroup` annotation as an explicit container for declaring multiple instances of `@Sql`. A sketch of both forms appears at the end of this section; in that sketch, the `test-schema.sql` script uses a different syntax for single-line comments, so a custom comment prefix is declared via `@SqlConfig`. With Java 8 and above, the use of `@SqlGroup` is optional, but you may need to use `@SqlGroup` for compatibility with other JVM languages such as Kotlin.

```
// Repeatable annotations with non-SOURCE retention are not yet supported by Kotlin
```

### Script Execution Phases

By default, SQL scripts are run before the corresponding test method. However, if you need to run a particular set of scripts after the test method (for example, to clean up database state), you can use the `executionPhase` attribute in `@Sql`, as the following example shows:

```
@Test
@Sql(
    scripts = "create-test-data.sql",
    config = @SqlConfig(transactionMode = ISOLATED)
)
@Sql(
    scripts = "delete-test-data.sql",
    config = @SqlConfig(transactionMode = ISOLATED),
    executionPhase = AFTER_TEST_METHOD
)
void userTest() {
    // run code that needs the test data to be committed
    // to the database outside of the test's transaction
}
```

```
@Test
@SqlGroup(
    Sql("create-test-data.sql",
        config = SqlConfig(transactionMode = ISOLATED)),
    Sql("delete-test-data.sql",
        config = SqlConfig(transactionMode = ISOLATED),
        executionPhase = AFTER_TEST_METHOD))
fun userTest() {
    // run code that needs the test data to be committed
    // to the database outside of the test's transaction
}
```

Note that `ISOLATED` and `AFTER_TEST_METHOD` are statically imported from `SqlConfig.TransactionMode` and `Sql.ExecutionPhase`, respectively.
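As referenced above in "Declaring Multiple `@Sql` Sets", the following sketch shows the repeatable `@Sql` form and its `@SqlGroup` equivalent (the script names and the backtick comment prefix are assumptions for illustration):

```
@Test
@Sql(scripts = "/test-schema.sql", config = @SqlConfig(commentPrefix = "`"))
@Sql("/test-user-data.sql")
void userTest() {
    // run code that uses the test schema and test data
}
```

```
@Test
@SqlGroup({
    @Sql(scripts = "/test-schema.sql", config = @SqlConfig(commentPrefix = "`")),
    @Sql("/test-user-data.sql")
})
void userTest() {
    // run code that uses the test schema and test data
}
```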
### Script Configuration with `@SqlConfig`

You can configure script parsing and error handling by using the `@SqlConfig` annotation. When declared as a class-level annotation on an integration test class, `@SqlConfig` serves as global configuration for all SQL scripts within the test class hierarchy. When declared directly by using the `config` attribute of the `@Sql` annotation, `@SqlConfig` serves as local configuration for the SQL scripts declared within the enclosing `@Sql` annotation. Every attribute in `@SqlConfig` has an implicit default value, which is documented in the javadoc of the corresponding attribute. Due to the rules defined for annotation attributes in the Java Language Specification, it is, unfortunately, not possible to assign a value of `null` to an annotation attribute. Thus, in order to support overrides of inherited global configuration, `@SqlConfig` attributes have an explicit default value of either `""` (for Strings), `{}` (for arrays), or `DEFAULT` (for enumerations). This approach lets local declarations of `@SqlConfig` selectively override individual attributes from global declarations of `@SqlConfig` by providing a value other than `""`, `{}`, or `DEFAULT`. Global `@SqlConfig` attributes are inherited whenever local `@SqlConfig` attributes do not supply an explicit value other than `""`, `{}`, or `DEFAULT`. Explicit local configuration, therefore, overrides global configuration.

The configuration options provided by `@Sql` and `@SqlConfig` are equivalent to those supported by `ScriptUtils` and `ResourceDatabasePopulator` but are a superset of those provided by the `<jdbc:initialize-database/>` XML namespace element. See the javadoc of individual attributes in `@Sql` and `@SqlConfig` for details.

Transaction management for `@Sql`

By default, the `SqlScriptsTestExecutionListener` infers the desired transaction semantics for scripts configured by using `@Sql`. Specifically, SQL scripts are run without a transaction, within an existing Spring-managed transaction (for example, a transaction managed by the `TransactionalTestExecutionListener` for a test annotated with `@Transactional`), or within an isolated transaction, depending on the configured value of the `transactionMode` attribute in `@SqlConfig` and the presence of a `PlatformTransactionManager` in the test's `ApplicationContext`. As a bare minimum, however, a `javax.sql.DataSource` must be present in the test's `ApplicationContext`.

If the algorithms used by `SqlScriptsTestExecutionListener` to detect a `DataSource` and `PlatformTransactionManager` and infer the transaction semantics do not suit your needs, you can specify explicit names by setting the `dataSource` and `transactionManager` attributes of `@SqlConfig`. Furthermore, you can control the transaction propagation behavior by setting the `transactionMode` attribute of `@SqlConfig` (for example, whether scripts should be run in an isolated transaction). Although a thorough discussion of all supported options for transaction management with `@Sql` is beyond the scope of this reference manual, the javadoc for `@SqlConfig` and `SqlScriptsTestExecutionListener` provide detailed information, and the following example shows a typical testing scenario that uses JUnit Jupiter and transactional tests with `@Sql`:

```
@SpringJUnitConfig(TestDatabaseConfig.class)
@Transactional
class TransactionalSqlScriptsTests {

    final JdbcTemplate jdbcTemplate;

    @Autowired
    TransactionalSqlScriptsTests(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    @Test
    @Sql("/test-data.sql")
    void usersTest() {
        // verify state in test database:
        assertNumUsers(2);
        // run code that uses the test data...
    }

    void assertNumUsers(int expected) {
        assertEquals(expected, countRowsInTable("user"),
                "Number of rows in the [user] table.");
    }
}
```

```
@SpringJUnitConfig(TestDatabaseConfig::class)
@Transactional
class TransactionalSqlScriptsTests @Autowired constructor(dataSource: DataSource) {

    val jdbcTemplate: JdbcTemplate = JdbcTemplate(dataSource)

    @Test
    @Sql("/test-data.sql")
    fun usersTest() {
        // verify state in test database:
        assertNumUsers(2)
        // run code that uses the test data...
    }

    fun assertNumUsers(expected: Int) {
        assertEquals(expected, countRowsInTable("user"),
                "Number of rows in the [user] table.")
    }
}
```

Note that there is no need to clean up the database after the `usersTest()` method is run, since any changes made to the database (either within the test method or within the `/test-data.sql` script) are automatically rolled back by the `TransactionalTestExecutionListener` (see transaction management for details).

### Merging and Overriding Configuration with `@SqlMergeMode`

As of Spring Framework 5.2, it is possible to merge method-level `@Sql` declarations with class-level declarations. For example, this allows you to provide the configuration for a database schema or some common test data once per test class and then provide additional, use-case-specific test data per test method. To enable `@Sql` merging, annotate either your test class or test method with `@SqlMergeMode(MERGE)`. To disable merging for a specific test method (or specific test subclass), you can switch back to the default mode via `@SqlMergeMode(OVERRIDE)`. Consult the `@SqlMergeMode` annotation documentation section for examples and further details.

Spring Framework 5.0 introduced basic support for executing tests in parallel within a single JVM when using the Spring TestContext Framework. In general, this means that most test classes or test methods can be run in parallel without any changes to test code or configuration. For details on how to set up parallel test execution, see the documentation for your testing framework, build tool, or IDE.

Keep in mind that the introduction of concurrency into your test suite can result in unexpected side effects, strange runtime behavior, and tests that fail intermittently or seemingly randomly. The Spring Team therefore provides the following general guidelines for when not to run tests in parallel.

Do not run tests in parallel if the tests:

* Use Spring Framework's `@DirtiesContext` support.
* Use Spring Boot's `@MockBean` or `@SpyBean` support.
* Use JUnit 4's `@FixMethodOrder` support or any testing framework feature that is designed to ensure that test methods run in a particular order. Note, however, that this does not apply if entire test classes are run in parallel.
* Change the state of shared services or systems such as a database, message broker, filesystem, and others. This applies to both embedded and external systems.

Parallel test execution in the Spring TestContext Framework is only possible if the underlying `TestContext` implementation provides a copy constructor, as explained in the javadoc for `TestContext`.

This section describes the various classes that support the Spring TestContext Framework. The Spring TestContext Framework offers full integration with JUnit 4 through a custom runner (supported on JUnit 4.12 or higher).
By annotating test classes with `@RunWith(SpringJUnit4ClassRunner.class)` or the shorter `@RunWith(SpringRunner.class)` variant, developers can implement standard JUnit 4-based unit and integration tests and simultaneously reap the benefits of the TestContext framework, such as support for loading application contexts, dependency injection of test instances, transactional test method execution, and so on. If you want to use the Spring TestContext Framework with an alternative runner (such as JUnit 4's `Parameterized` runner) or third-party runners (such as the `MockitoJUnitRunner`), you can, optionally, use Spring's support for JUnit rules instead. The following code listing shows the minimal requirements for configuring a test class to run with the custom Spring `Runner`:

```
@RunWith(SpringRunner.class)
@TestExecutionListeners({})
public class SimpleTest {
}
```

```
@RunWith(SpringRunner::class)
@TestExecutionListeners
class SimpleTest {
}
```

In the preceding listing, `@TestExecutionListeners` is configured with an empty list, to disable the default listeners, which otherwise would require an `ApplicationContext` to be configured through `@ContextConfiguration`.

The `org.springframework.test.context.junit4.rules` package provides the following JUnit 4 rules (supported on JUnit 4.12 or higher):

* `SpringClassRule`
* `SpringMethodRule`

`SpringClassRule` is a JUnit `TestRule` that supports class-level features of the Spring TestContext Framework, whereas `SpringMethodRule` is a JUnit `MethodRule` that supports instance-level and method-level features of the Spring TestContext Framework. In contrast to the `SpringRunner`, Spring's rule-based JUnit support has the advantage of being independent of any `org.junit.runner.Runner` implementation and can, therefore, be combined with existing alternative runners (such as JUnit 4's `Parameterized`) or third-party runners (such as the `MockitoJUnitRunner`). To support the full functionality of the TestContext framework, you must combine a `SpringClassRule` with a `SpringMethodRule`. The following example shows the proper way to declare these rules in an integration test:

```
@ClassRule
public static final SpringClassRule springClassRule = new SpringClassRule();

@Rule
public final SpringMethodRule springMethodRule = new SpringMethodRule();
```

```
@Rule
val springMethodRule = SpringMethodRule()

companion object {

    @ClassRule
    val springClassRule = SpringClassRule()
}
```

## JUnit 4 Support Classes

The `org.springframework.test.context.junit4` package provides the following support classes for JUnit 4-based test cases (supported on JUnit 4.12 or higher):

* `AbstractJUnit4SpringContextTests`
* `AbstractTransactionalJUnit4SpringContextTests`

## SpringExtension for JUnit Jupiter

The Spring TestContext Framework offers full integration with the JUnit Jupiter testing framework, introduced in JUnit 5. By annotating test classes with `@ExtendWith(SpringExtension.class)`, you can implement standard JUnit Jupiter-based unit and integration tests and simultaneously reap the benefits of the TestContext framework, such as support for loading application contexts, dependency injection of test instances, transactional test method execution, and so on.

Furthermore, thanks to the rich extension API in JUnit Jupiter, Spring provides the following features above and beyond the feature set that Spring supports for JUnit 4 and TestNG:

* Dependency injection for test constructors, test methods, and test lifecycle callback methods. See Dependency Injection with `SpringExtension` for further details.
* Powerful support for conditional test execution based on SpEL expressions, environment variables, system properties, and so on.
  See the documentation for `@EnabledIf` and `@DisabledIf` in Spring JUnit Jupiter Testing Annotations for further details and examples.
* Custom composed annotations that combine annotations from Spring and JUnit Jupiter. See the examples in Meta-Annotation Support for Testing for further details.

To use the `SpringExtension`, annotate the test class with `@ExtendWith(SpringExtension.class)` and configure the test `ApplicationContext` with `@ContextConfiguration`. Since you can also use annotations in JUnit 5 as meta-annotations, Spring provides the `@SpringJUnitConfig` and `@SpringJUnitWebConfig` composed annotations to simplify the configuration of the test `ApplicationContext` and JUnit Jupiter. For example, `@SpringJUnitConfig(TestConfig.class)` combines `@ExtendWith(SpringExtension.class)` with `@ContextConfiguration(classes = TestConfig.class)`, reducing the amount of configuration needed. Similarly, the following example uses `@SpringJUnitWebConfig` to create a `WebApplicationContext` for use with JUnit Jupiter:

```
// Instructs Spring to register the SpringExtension with JUnit
// Jupiter and load a WebApplicationContext from TestWebConfig.class
@SpringJUnitWebConfig(TestWebConfig.class)
class SimpleWebTests {
}
```

```
// Instructs Spring to register the SpringExtension with JUnit
// Jupiter and load a WebApplicationContext from TestWebConfig::class
@SpringJUnitWebConfig(TestWebConfig::class)
class SimpleWebTests {
}
```

See the documentation for `@SpringJUnitConfig` and `@SpringJUnitWebConfig` in Spring JUnit Jupiter Testing Annotations for further details.

### Dependency Injection with `SpringExtension`

`SpringExtension` implements the `ParameterResolver` extension API from JUnit Jupiter, which lets Spring provide dependency injection for test constructors, test methods, and test lifecycle callback methods. Specifically, `SpringExtension` can inject dependencies from the test's `ApplicationContext` into test constructors and methods that are annotated with `@BeforeAll`, `@AfterAll`, `@BeforeEach`, `@AfterEach`, `@Test`, `@RepeatedTest`, `@ParameterizedTest`, and others.

# Constructor Injection

If a specific parameter in a constructor for a JUnit Jupiter test class is of type `ApplicationContext` (or a sub-type thereof) or is annotated or meta-annotated with `@Autowired`, `@Qualifier`, or `@Value`, Spring injects the value for that specific parameter with the corresponding bean or value from the test's `ApplicationContext`. Spring can also be configured to autowire all arguments for a test class constructor if the constructor is considered to be autowirable. A constructor is considered to be autowirable if one of the following conditions is met (in order of precedence).

* The constructor is annotated with `@Autowired`.
* `@TestConstructor` is present or meta-present on the test class with the `autowireMode` attribute set to `ALL`.
* The default test constructor autowire mode has been changed to `ALL`.

See `@TestConstructor` for details on the use of `@TestConstructor` and how to change the global test constructor autowire mode.

If the constructor for a test class is considered to be autowirable, Spring assumes the responsibility for resolving arguments for all parameters in the constructor. Consequently, no other `ParameterResolver` registered with JUnit Jupiter can resolve parameters of such a constructor.

In the following example, Spring injects the `OrderService` bean from the `ApplicationContext` loaded from `TestConfig.class` into the `OrderServiceIntegrationTests` constructor.
```
@SpringJUnitConfig(TestConfig.class)
class OrderServiceIntegrationTests {

    private final OrderService orderService;

    @Autowired
    OrderServiceIntegrationTests(OrderService orderService) {
        this.orderService = orderService;
    }

    // tests that use the injected OrderService
}
```

```
@SpringJUnitConfig(TestConfig::class)
class OrderServiceIntegrationTests @Autowired constructor(private val orderService: OrderService) {
    // tests that use the injected OrderService
}
```

Note that this feature lets test dependencies be `final` and therefore immutable. If the `spring.test.constructor.autowire.mode` property is set to `all` (see `@TestConstructor`), we can omit the declaration of `@Autowired` on the constructor in the previous example, resulting in the following.

```
@SpringJUnitConfig(TestConfig.class)
class OrderServiceIntegrationTests {

    private final OrderService orderService;

    OrderServiceIntegrationTests(OrderService orderService) {
        this.orderService = orderService;
    }

    // tests that use the injected OrderService
}
```

```
@SpringJUnitConfig(TestConfig::class)
class OrderServiceIntegrationTests(val orderService: OrderService) {
    // tests that use the injected OrderService
}
```

If a parameter in a JUnit Jupiter test method or test lifecycle callback method is of type `ApplicationContext` (or a sub-type thereof) or is annotated or meta-annotated with `@Autowired`, `@Qualifier`, or `@Value`, Spring injects the value for that specific parameter with the corresponding bean from the test's `ApplicationContext`. In the following example, Spring injects the `OrderService` from the `ApplicationContext` loaded from `TestConfig.class` into the `deleteOrder()` test method:

```
@Test
void deleteOrder(@Autowired OrderService orderService) {
    // use orderService from the test's ApplicationContext
}
```

```
@Test
fun deleteOrder(@Autowired orderService: OrderService) {
    // use orderService from the test's ApplicationContext
}
```

Due to the robustness of the `ParameterResolver` support in JUnit Jupiter, you can also have multiple dependencies injected into a single method, not only from Spring but also from JUnit Jupiter itself or other third-party extensions. The following example shows how to have both Spring and JUnit Jupiter inject dependencies into the `placeOrderRepeatedly()` test method simultaneously.

```
@RepeatedTest(10)
void placeOrderRepeatedly(RepetitionInfo repetitionInfo,
        @Autowired OrderService orderService) {
    // use orderService from the test's ApplicationContext
    // and repetitionInfo from JUnit Jupiter
}
```

```
@RepeatedTest(10)
fun placeOrderRepeatedly(repetitionInfo: RepetitionInfo,
        @Autowired orderService: OrderService) {
    // use orderService from the test's ApplicationContext
    // and repetitionInfo from JUnit Jupiter
}
```

Note that the use of `@RepeatedTest` from JUnit Jupiter lets the test method gain access to the `RepetitionInfo`.

`@Nested` test class configuration

The Spring TestContext Framework has supported the use of test-related annotations on `@Nested` test classes in JUnit Jupiter since Spring Framework 5.0; however, until Spring Framework 5.3 class-level test configuration annotations were not inherited from enclosing classes like they are from superclasses. Spring Framework 5.3 introduces first-class support for inheriting test class configuration from enclosing classes, and such configuration will be inherited by default. To change from the default `INHERIT` mode to `OVERRIDE` mode, you may annotate an individual `@Nested` test class with `@NestedTestConfiguration(EnclosingConfiguration.OVERRIDE)`. An explicit declaration will apply to the annotated test class as well as any of its subclasses and nested classes. Thus, you may annotate a top-level test class with `@NestedTestConfiguration(EnclosingConfiguration.OVERRIDE)`, and that will apply to all of its nested test classes recursively. In order to allow development teams to change the default to `OVERRIDE` (for example, for compatibility with Spring Framework 5.0 through 5.2), the default mode can be changed globally via a JVM system property or a `spring.properties` file in the root of the classpath.
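As a sketch of the global override, and assuming the property name documented for `@NestedTestConfiguration`, a `spring.properties` file could contain the following:

```
# spring.properties in the root of the classpath
# (property name per the @NestedTestConfiguration javadoc; verify before relying on it)
spring.test.enclosing.configuration=override
```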
See the "Changing the default enclosing configuration inheritance mode" note for details.

Although the following "Hello World" example is very simplistic, it shows how to declare common configuration on a top-level class that is inherited by its `@Nested` test classes. In this particular example, only the `TestConfig` configuration class is inherited. Each nested test class provides its own set of active profiles, resulting in a distinct `ApplicationContext` for each nested test class (see Context Caching for details). Consult the list of supported annotations to see which annotations can be inherited in `@Nested` test classes.

```
@SpringJUnitConfig(TestConfig.class)
class GreetingServiceTests {

    @Nested
    @ActiveProfiles("lang_en")
    class EnglishGreetings {

        @Test
        void hello(@Autowired GreetingService service) {
            assertThat(service.greetWorld()).isEqualTo("Hello World");
        }
    }

    @Nested
    @ActiveProfiles("lang_de")
    class GermanGreetings {

        @Test
        void hello(@Autowired GreetingService service) {
            assertThat(service.greetWorld()).isEqualTo("Hallo Welt");
        }
    }
}
```

```
@SpringJUnitConfig(TestConfig::class)
class GreetingServiceTests {

    @Nested
    @ActiveProfiles("lang_en")
    inner class EnglishGreetings {

        @Test
        fun hello(@Autowired service: GreetingService) {
            assertThat(service.greetWorld()).isEqualTo("Hello World")
        }
    }

    @Nested
    @ActiveProfiles("lang_de")
    inner class GermanGreetings {

        @Test
        fun hello(@Autowired service: GreetingService) {
            assertThat(service.greetWorld()).isEqualTo("Hallo Welt")
        }
    }
}
```

## TestNG Support Classes

The `org.springframework.test.context.testng` package provides the following support classes for TestNG based test cases:

* `AbstractTestNGSpringContextTests`
* `AbstractTransactionalTestNGSpringContextTests`

This chapter covers Spring's Ahead of Time (AOT) support for integration tests using the Spring TestContext Framework. The testing support extends Spring's core AOT support with the following features.

* Build-time detection of all integration tests in the current project that use the TestContext framework to load an `ApplicationContext`. Provides explicit support for test classes based on JUnit Jupiter and JUnit 4 as well as implicit support for TestNG and other testing frameworks that use Spring's core testing annotations, as long as the tests are run using a JUnit Platform `TestEngine` that is registered for the current project.
* Build-time AOT processing: each unique test `ApplicationContext` in the current project will be refreshed for AOT processing.
* Runtime AOT support: when executing in AOT runtime mode, a Spring integration test will use an AOT-optimized `ApplicationContext` that participates transparently with the context cache.

To provide test-specific runtime hints for use within a GraalVM native image, you have the following options.

* Implement a custom `TestRuntimeHintsRegistrar` and register it globally via `META-INF/spring/aot.factories`.
* Implement a custom `RuntimeHintsRegistrar` and register it globally via `META-INF/spring/aot.factories` or locally on a test class via `@ImportRuntimeHints`.
* Annotate a test class with `@Reflective` or `@RegisterReflectionForBinding`.

See Runtime Hints for details on Spring's core runtime hints and annotation support.

If you implement a custom `ContextLoader`, it must implement `AotContextLoader` in order to provide AOT build-time processing and AOT runtime execution support. Note, however, that all context loader implementations provided by the Spring Framework and Spring Boot already implement `AotContextLoader`.

If you implement a custom `TestExecutionListener`, it must implement `AotTestExecutionListener` in order to participate in AOT processing. See the `SqlScriptsTestExecutionListener` in the `spring-test` module for an example.

`WebTestClient` is an HTTP client designed for testing server applications.
It wraps Spring's WebClient and uses it to perform requests but exposes a testing facade for verifying responses. `WebTestClient` can be used to perform end-to-end HTTP tests. It can also be used to test Spring MVC and Spring WebFlux applications without a running server via mock server request and response objects.

Kotlin users: See this section related to use of the `WebTestClient`.

## Setup

To set up a `WebTestClient` you need to choose a server setup to bind to. This can be one of several mock server setup choices or a connection to a live server.

### Bind to Controller

This setup allows you to test specific controller(s) via mock request and response objects, without a running server.

For WebFlux applications, use the following, which loads infrastructure equivalent to the WebFlux Java config, registers the given controller(s), and creates a WebHandler chain to handle requests:

```
WebTestClient client =
        WebTestClient.bindToController(new TestController()).build();
```

```
val client = WebTestClient.bindToController(TestController()).build()
```

For Spring MVC, use the following, which delegates to the StandaloneMockMvcBuilder to load infrastructure equivalent to the WebMvc Java config, registers the given controller(s), and creates an instance of MockMvc to handle requests:

```
WebTestClient client =
        MockMvcWebTestClient.bindToController(new TestController()).build();
```

```
val client = MockMvcWebTestClient.bindToController(TestController()).build()
```

### Bind to `ApplicationContext`

This setup allows you to load Spring configuration with Spring MVC or Spring WebFlux infrastructure and controller declarations and use it to handle requests via mock request and response objects, without a running server.

For WebFlux, use the following, where the Spring `ApplicationContext` is passed to WebHttpHandlerBuilder to create the WebHandler chain to handle requests:

```
@SpringJUnitConfig(WebConfig.class) (1)
class MyTests {

    WebTestClient client;

    @BeforeEach
    void setUp(ApplicationContext context) { (2)
        client = WebTestClient.bindToApplicationContext(context).build(); (3)
    }
}
```

```
@SpringJUnitConfig(WebConfig::class) (1)
class MyTests {

    lateinit var client: WebTestClient

    @BeforeEach
    fun setUp(context: ApplicationContext) { (2)
        client = WebTestClient.bindToApplicationContext(context).build() (3)
    }
}
```

For Spring MVC, use the following, where the Spring `ApplicationContext` is passed to MockMvcBuilders.webAppContextSetup to create a MockMvc instance to handle requests:

```
@Autowired
WebApplicationContext wac; (2)

@BeforeEach
void setUp() {
    client = MockMvcWebTestClient.bindToApplicationContext(this.wac).build(); (3)
}
```

```
@Autowired
lateinit var wac: WebApplicationContext (2)

@BeforeEach
fun setUp() {
    client = MockMvcWebTestClient.bindToApplicationContext(wac).build() (3)
}
```

### Bind to Router Function

This setup allows you to test functional endpoints via mock request and response objects, without a running server.

For WebFlux, use the following, which delegates to `RouterFunctions.toWebHandler` to create a server setup to handle requests (a concrete sketch of the elided `route` appears at the end of this section):

```
RouterFunction<?> route = ...
client = WebTestClient.bindToRouterFunction(route).build();
```

```
val route: RouterFunction<*> = ...
val client = WebTestClient.bindToRouterFunction(route).build()
```

For Spring MVC there are currently no options to test WebMvc functional endpoints.
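As referenced above, the following is a minimal sketch of a `RouterFunction` that could be bound to (the handler logic and the `Person` type are illustrative, not from this reference):

```
// A tiny functional endpoint, assembled with the RouterFunctions builder.
RouterFunction<ServerResponse> route = RouterFunctions.route()
        .GET("/persons/{id}", request -> ServerResponse.ok().bodyValue(new Person("Jane")))
        .build();

WebTestClient client = WebTestClient.bindToRouterFunction(route).build();
```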
### Bind to Server

This setup connects to a running server to perform full, end-to-end HTTP tests:

```
client = WebTestClient.bindToServer().baseUrl("http://localhost:8080").build();
```

```
client = WebTestClient.bindToServer().baseUrl("http://localhost:8080").build()
```

### Client Config

In addition to the server setup options described earlier, you can also configure client options, including base URL, default headers, client filters, and others. These options are readily available following `bindToServer()`. For all other configuration options, you need to use `configureClient()` to transition from server to client configuration, as follows:

```
client = WebTestClient.bindToController(new TestController())
        .configureClient()
        .baseUrl("/test")
        .build();
```

```
client = WebTestClient.bindToController(TestController())
        .configureClient()
        .baseUrl("/test")
        .build()
```

## Writing Tests

`WebTestClient` provides an API identical to WebClient up to the point of performing a request by using `exchange()`. See the WebClient documentation for examples on how to prepare a request with any content, including form data, multipart data, and more.

After the call to `exchange()`, `WebTestClient` diverges from the `WebClient` and instead continues with a workflow to verify responses.

To assert the response status and headers, use the following:

```
client.get().uri("/persons/1")
    .accept(MediaType.APPLICATION_JSON)
    .exchange()
    .expectStatus().isOk()
    .expectHeader().contentType(MediaType.APPLICATION_JSON);
```

```
client.get().uri("/persons/1")
    .accept(MediaType.APPLICATION_JSON)
    .exchange()
    .expectStatus().isOk()
    .expectHeader().contentType(MediaType.APPLICATION_JSON)
```

If you would like for all expectations to be asserted even if one of them fails, you can use `expectAll(..)` instead of multiple chained `expect*(..)` calls. This feature is similar to the soft assertions support in AssertJ and the `assertAll()` support in JUnit Jupiter.

```
client.get().uri("/persons/1")
    .accept(MediaType.APPLICATION_JSON)
    .exchange()
    .expectAll(
        spec -> spec.expectStatus().isOk(),
        spec -> spec.expectHeader().contentType(MediaType.APPLICATION_JSON)
    );
```

You can then choose to decode the response body through one of the following:

* `expectBody(Class<T>)`: Decode to single object.
* `expectBodyList(Class<T>)`: Decode and collect objects to `List<T>`.
* `expectBody()`: Decode to `byte[]` for JSON Content or an empty body.

And perform assertions on the resulting higher level Object(s):

```
client.get().uri("/persons")
    .exchange()
    .expectStatus().isOk()
    .expectBodyList(Person.class).hasSize(3).contains(person);
```

```
import org.springframework.test.web.reactive.server.expectBodyList

client.get().uri("/persons")
    .exchange()
    .expectStatus().isOk()
    .expectBodyList<Person>().hasSize(3).contains(person)
```

If the built-in assertions are insufficient, you can consume the object instead and perform any other assertions:

```
client.get().uri("/persons/1")
    .exchange()
    .expectStatus().isOk()
    .expectBody(Person.class)
    .consumeWith(result -> {
        // custom assertions (e.g. AssertJ)...
    });
```

```
client.get().uri("/persons/1")
    .exchange()
    .expectStatus().isOk()
    .expectBody<Person>()
    .consumeWith {
        // custom assertions (e.g. AssertJ)...
    }
```
Or you can exit the workflow and obtain an `EntityExchangeResult`:

```
EntityExchangeResult<Person> result = client.get().uri("/persons/1")
    .exchange()
    .expectStatus().isOk()
    .expectBody(Person.class)
    .returnResult();
```

```
val result = client.get().uri("/persons/1")
    .exchange()
    .expectStatus().isOk
    .expectBody<Person>()
    .returnResult()
```

When you need to decode to a target type with generics, look for the overloaded methods that accept `ParameterizedTypeReference` instead of `Class<T>`.

### No Content

If the response is not expected to have content, you can assert that as follows:

```
client.post().uri("/persons")
    .body(personMono, Person.class)
    .exchange()
    .expectStatus().isCreated()
    .expectBody().isEmpty();
```

```
client.post().uri("/persons")
    .bodyValue(person)
    .exchange()
    .expectStatus().isCreated()
    .expectBody().isEmpty()
```

If you want to ignore the response content, the following releases the content without any assertions:

```
client.get().uri("/persons/123")
    .exchange()
    .expectStatus().isNotFound()
    .expectBody(Void.class);
```

```
client.get().uri("/persons/123")
    .exchange()
    .expectStatus().isNotFound
    .expectBody<Unit>()
```

### JSON Content

You can use `expectBody()` without a target type to perform assertions on the raw content rather than through higher level Object(s).

To verify the full JSON content with JSONAssert, pass the expected JSON to `expectBody().json(..)`.

To verify JSON content with JSONPath:

```
client.get().uri("/persons")
    .exchange()
    .expectStatus().isOk()
    .expectBody()
    .jsonPath("$[0].name").isEqualTo("Jane")
    .jsonPath("$[1].name").isEqualTo("Jason");
```

```
client.get().uri("/persons")
    .exchange()
    .expectStatus().isOk()
    .expectBody()
    .jsonPath("$[0].name").isEqualTo("Jane")
    .jsonPath("$[1].name").isEqualTo("Jason")
```

### Streaming Responses

To test potentially infinite streams such as `"text/event-stream"` or `"application/x-ndjson"`, start by verifying the response status and headers, and then obtain a `FluxExchangeResult`:

```
FluxExchangeResult<MyEvent> result = client.get().uri("/events")
    .accept(TEXT_EVENT_STREAM)
    .exchange()
    .expectStatus().isOk()
    .returnResult(MyEvent.class);
```

```
import org.springframework.test.web.reactive.server.returnResult

val result = client.get().uri("/events")
    .accept(TEXT_EVENT_STREAM)
    .exchange()
    .expectStatus().isOk()
    .returnResult<MyEvent>()
```

Now you're ready to consume the response stream with `StepVerifier` from `reactor-test`:

```
Flux<Event> eventFlux = result.getResponseBody();

StepVerifier.create(eventFlux)
    .expectNext(person)
    .expectNextCount(4)
    .consumeNextWith(p -> ...)
    .thenCancel()
    .verify();
```

```
val eventFlux = result.getResponseBody()

StepVerifier.create(eventFlux)
    .expectNext(person)
    .expectNextCount(4)
    .consumeNextWith { p -> ... }
    .thenCancel()
    .verify()
```

### MockMvc Assertions

`WebTestClient` is an HTTP client and as such it can only verify what is in the client response, including status, headers, and body. When testing a Spring MVC application with a MockMvc server setup, you have the extra choice to perform further assertions on the server response. To do that, first obtain an `ExchangeResult` after asserting the body, and then switch to MockMvc server response assertions via `MockMvcWebTestClient.resultActionsFor(..)`.

The Spring MVC Test framework, also known as MockMvc, provides support for testing Spring MVC applications. It performs full Spring MVC request handling but via mock request and response objects instead of a running server. MockMvc can be used on its own to perform requests and verify responses.
It can also be used through the WebTestClient where MockMvc is plugged in as the server to handle requests with. The advantage of `WebTestClient` is the option to work with higher level objects instead of raw data as well as the ability to switch to full, end-to-end HTTP tests against a live server and use the same test API.

You can write plain unit tests for Spring MVC by instantiating a controller, injecting it with dependencies, and calling its methods. However, such tests do not verify request mappings, data binding, message conversion, type conversion, or validation, and nor do they involve any of the supporting `@InitBinder`, `@ModelAttribute`, or `@ExceptionHandler` methods. The Spring MVC Test framework, also known as `MockMvc`, aims to provide more complete testing for Spring MVC controllers without a running server. It does that by invoking the `DispatcherServlet` and passing "mock" implementations of the Servlet API from the `spring-test` module, which replicates the full Spring MVC request handling without a running server.

MockMvc is a server side test framework that lets you verify most of the functionality of a Spring MVC application using lightweight and targeted tests. You can use it on its own to perform requests and to verify responses, or you can also use it through the WebTestClient API with MockMvc plugged in as the server to handle requests with.

When using MockMvc directly to perform requests, you'll need static imports for:

* `MockMvcBuilders.*`
* `MockMvcRequestBuilders.*`
* `MockMvcResultMatchers.*`
* `MockMvcResultHandlers.*`

An easy way to remember that is to search for `MockMvc*`. If using Eclipse, be sure to also add the above as "favorite static members" in the Eclipse preferences.

When using MockMvc through the WebTestClient you do not need static imports. The `WebTestClient` provides a fluent API without static imports.

MockMvc can be set up in one of two ways. One is to point directly to the controllers you want to test and programmatically configure Spring MVC infrastructure. The second is to point to Spring configuration with Spring MVC and controller infrastructure in it.

To set up MockMvc for testing a specific controller, use the following:

```
MockMvc mockMvc;

@BeforeEach
void setup() {
    this.mockMvc = MockMvcBuilders.standaloneSetup(new AccountController()).build();
}
```

```
lateinit var mockMvc: MockMvc

@BeforeEach
fun setup() {
    mockMvc = MockMvcBuilders.standaloneSetup(AccountController()).build()
}
```

To set up MockMvc through Spring configuration, load the Spring MVC configuration via the TestContext framework, inject the `WebApplicationContext` into the test, and build a `MockMvc` instance with `MockMvcBuilders.webAppContextSetup(wac).build()`.

Which setup option should you use?

The `webAppContextSetup` loads your actual Spring MVC configuration, resulting in a more complete integration test. Since the TestContext framework caches the loaded Spring configuration, it helps keep tests running fast, even as you introduce more tests in your test suite. Furthermore, you can inject mock services into controllers through Spring configuration to remain focused on testing the web layer. The following example declares a mock service with Mockito:

```
<bean id="accountService" class="org.mockito.Mockito" factory-method="mock">
    <constructor-arg value="org.example.AccountService"/>
</bean>
```

You can then inject the mock service into the test to set up and verify your expectations, as the following example shows:

```
@Autowired
AccountService accountService;
```

```
@Autowired
lateinit var accountService: AccountService

lateinit var mockMvc: MockMvc
```

The `standaloneSetup`, on the other hand, is a little closer to a unit test. It tests one controller at a time.
You can manually inject the controller with mock dependencies, and it does not involve loading Spring configuration. Such tests are written in a more focused style and make it easier to see which controller is being tested, whether any specific Spring MVC configuration is required for it to work, and so on. The `standaloneSetup` is also a very convenient way to write ad-hoc tests to verify specific behavior or to debug an issue.

As with most "integration versus unit testing" debates, there is no right or wrong answer. However, using the `standaloneSetup` does imply the need for additional `webAppContextSetup` tests in order to verify your Spring MVC configuration. Alternatively, you can write all your tests with `webAppContextSetup`, in order to always test against your actual Spring MVC configuration.

No matter which MockMvc builder you use, all `MockMvcBuilder` implementations provide some common and very useful features. For example, you can declare an `Accept` header for all requests and expect a status of 200 as well as a `Content-Type` header in all responses, as follows:

```
// static import of MockMvcBuilders.standaloneSetup

MockMvc mockMvc = standaloneSetup(new MusicController())
    .defaultRequest(get("/").accept(MediaType.APPLICATION_JSON))
    .alwaysExpect(status().isOk())
    .alwaysExpect(content().contentType("application/json;charset=UTF-8"))
    .build();
```

In addition, third-party frameworks (and applications) can pre-package setup instructions, such as those in a `MockMvcConfigurer`. The Spring Framework has one such built-in implementation that helps to save and re-use the HTTP session across requests. You can use it as follows:

```
// static import of SharedHttpSessionConfigurer.sharedHttpSession

MockMvc mockMvc = MockMvcBuilders.standaloneSetup(new TestController())
    .apply(sharedHttpSession())
    .build();

// Use mockMvc to perform requests...
```

See `ConfigurableMockMvcBuilder` for a list of all MockMvc builder features, or use the IDE to explore the available options.

This section shows how to use MockMvc on its own to perform requests and verify responses. If using MockMvc through the `WebTestClient`, please see the corresponding section on Writing Tests instead.

You can perform requests that use any HTTP method, as the following example shows:

```
// static import of MockMvcRequestBuilders.*

mockMvc.perform(post("/hotels/{id}", 42).accept(MediaType.APPLICATION_JSON));
```

```
mockMvc.post("/hotels/{id}", 42) {
    accept = MediaType.APPLICATION_JSON
}
```

You can also perform file upload requests that internally use `MockMultipartHttpServletRequest` so that there is no actual parsing of a multipart request.
Rather, you have to set it up to be similar to the following example:

```
mockMvc.perform(multipart("/doc").file("a1", "ABC".getBytes("UTF-8")));
```

```
import org.springframework.test.web.servlet.multipart

mockMvc.multipart("/doc") {
    file("a1", "ABC".toByteArray(charset("UTF8")))
}
```

You can specify query parameters in URI template style, as the following example shows:

```
mockMvc.perform(get("/hotels?thing={thing}", "somewhere"));
```

```
mockMvc.get("/hotels?thing={thing}", "somewhere")
```

You can also add Servlet request parameters that represent either query or form parameters, as the following example shows:

```
mockMvc.perform(get("/hotels").param("thing", "somewhere"));
```

```
mockMvc.get("/hotels") {
    param("thing", "somewhere")
}
```

If application code relies on Servlet request parameters and does not check the query string explicitly (as is most often the case), it does not matter which option you use. Keep in mind, however, that query parameters provided with the URI template are decoded while request parameters provided through the `param(…)` method are expected to already be decoded.

In most cases, it is preferable to leave the context path and the Servlet path out of the request URI. If you must test with the full request URI, be sure to set the `contextPath` and `servletPath` accordingly so that request mappings work, as the following example shows:

```
mockMvc.perform(get("/app/main/hotels/{id}").contextPath("/app").servletPath("/main"))
```

```
mockMvc.get("/app/main/hotels/{id}") {
    contextPath = "/app"
    servletPath = "/main"
}
```

In the preceding example, it would be cumbersome to set the `contextPath` and `servletPath` with every performed request. Instead, you can set up default request properties, as the following example shows:

```
@BeforeEach
void setup() {
    mockMvc = standaloneSetup(new AccountController())
        .defaultRequest(get("/")
            .contextPath("/app").servletPath("/main")
            .accept(MediaType.APPLICATION_JSON))
        .build();
}
```

The preceding properties affect every request performed through the `MockMvc` instance. If the same property is also specified on a given request, it overrides the default value. That is why the HTTP method and URI in the default request do not matter, since they must be specified on every request.
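Beyond parameters and paths, the request builders also let you set headers, cookies, and session attributes on a performed request. The following is a minimal sketch (the `Cart` type and the values are illustrative, not from this reference):

```
// static import of MockMvcRequestBuilders.get
mockMvc.perform(get("/hotels")
        .header("X-Requested-With", "XMLHttpRequest") // arbitrary example header
        .cookie(new Cookie("SESSION", "abc123"))      // javax.servlet.http.Cookie
        .sessionAttr("cart", new Cart()));            // Cart is a hypothetical type
```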
These assertions let you inspect Spring MVC specific aspects, such as which controller method processed the request, whether an exception was raised and handled, what the content of the model is, what view was selected, what flash attributes were added, and so on. They also let you inspect Servlet specific aspects, such as request and session attributes. The following test asserts that binding or validation failed: ``` mockMvc.perform(post("/persons")) .andExpect(status().isOk()) .andExpect(model().attributeHasErrors("person")); ``` mockMvc.post("/persons").andExpect { status { isOk() } model { attributeHasErrors("person") } } ``` Many times, when writing tests, it is useful to dump the results of the performed request. You can do so as follows, where `print()` is a static import from ``` MockMvcResultHandlers ``` ``` mockMvc.perform(post("/persons")) .andDo(print()) .andExpect(status().isOk()) .andExpect(model().attributeHasErrors("person")); ``` mockMvc.post("/persons").andDo { print() }.andExpect { status { isOk() } model { attributeHasErrors("person") } } ``` As long as request processing does not cause an unhandled exception, the `print()` method prints all the available result data to `System.out` . There is also a `log()` method and two additional variants of the `print()` method, one that accepts an `OutputStream` and one that accepts a `Writer` . For example, invoking `print(System.err)` prints the result data to `System.err` , while invoking `print(myWriter)` prints the result data to a custom writer. If you want to have the result data logged instead of printed, you can invoke the `log()` method, which logs the result data as a single `DEBUG` message under the ``` org.springframework.test.web.servlet.result ``` logging category. In some cases, you may want to get direct access to the result and verify something that cannot be verified otherwise. This can be achieved by appending `.andReturn()` after all other expectations, as the following example shows: ``` MvcResult mvcResult = mockMvc.perform(post("/persons")).andExpect(status().isOk()).andReturn(); // ... ``` ``` var mvcResult = mockMvc.post("/persons").andExpect { status { isOk() } }.andReturn() // ... ``` If all tests repeat the same expectations, you can set up common expectations once when building the `MockMvc` instance, as the following example shows: ``` standaloneSetup(new SimpleController()) .alwaysExpect(status().isOk()) .alwaysExpect(content().contentType("application/json;charset=UTF-8")) .build() ``` Note that common expectations are always applied and cannot be overridden without creating a separate `MockMvc` instance. 
When a JSON response content contains hypermedia links created with Spring HATEOAS, you can verify the resulting links by using JsonPath expressions, as the following example shows:

```
mockMvc.perform(get("/people").accept(MediaType.APPLICATION_JSON))
    .andExpect(jsonPath("$.links[?(@.rel == 'self')].href").value("http://localhost:8080/people"));
```

```
mockMvc.get("/people") {
    accept(MediaType.APPLICATION_JSON)
}.andExpect {
    jsonPath("$.links[?(@.rel == 'self')].href") {
        value("http://localhost:8080/people")
    }
}
```

When XML response content contains hypermedia links created with Spring HATEOAS, you can verify the resulting links by using XPath expressions:

```
Map<String, String> ns = Collections.singletonMap("ns", "http://www.w3.org/2005/Atom");
mockMvc.perform(get("/handle").accept(MediaType.APPLICATION_XML))
    .andExpect(xpath("/person/ns:link[@rel='self']/@href", ns).string("http://localhost:8080/people"));
```

```
val ns = mapOf("ns" to "http://www.w3.org/2005/Atom")
mockMvc.get("/handle") {
    accept(MediaType.APPLICATION_XML)
}.andExpect {
    xpath("/person/ns:link[@rel='self']/@href", ns) {
        string("http://localhost:8080/people")
    }
}
```

This section shows how to use MockMvc on its own to test asynchronous request handling. If using MockMvc through the WebTestClient, there is nothing special to do to make asynchronous requests work, as the `WebTestClient` automatically does what is described in this section.

Servlet asynchronous requests, supported in Spring MVC, work by exiting the Servlet container thread and allowing the application to compute the response asynchronously, after which an async dispatch is made to complete processing on a Servlet container thread.

In Spring MVC Test, async requests can be tested by asserting the produced async value first, then manually performing the async dispatch, and finally verifying the response. Below is an example test for controller methods that return `DeferredResult`, `Callable`, or a reactive type such as Reactor `Mono`:

```
@Test
void test() throws Exception {
    MvcResult mvcResult = this.mockMvc.perform(get("/path"))
            .andExpect(status().isOk()) (1)
            .andExpect(request().asyncStarted()) (2)
            .andExpect(request().asyncResult("body")) (3)
            .andReturn();

    this.mockMvc.perform(asyncDispatch(mvcResult)) (4)
            .andExpect(status().isOk()) (5)
            .andExpect(content().string("body"));
}
```

```
@Test
fun test() {
    var mvcResult = mockMvc.get("/path").andExpect {
        status { isOk() } (1)
        request { asyncStarted() } (2)
        // TODO Remove unused generic parameter
        request { asyncResult<Nothing>("body") } (3)
    }.andReturn()

    mockMvc.perform(asyncDispatch(mvcResult)) (4)
        .andExpect {
            status { isOk() } (5)
            content().string("body")
        }
}
```

The best way to test streaming responses such as Server-Sent Events is through the WebTestClient, which can be used as a test client to connect to a `MockMvc` instance to perform tests on Spring MVC controllers without a running server.
For example:

```
WebTestClient client = MockMvcWebTestClient.bindToController(new SseController()).build();

FluxExchangeResult<Person> exchangeResult = client.get()
        .uri("/persons")
        .exchange()
        .expectStatus().isOk()
        .expectHeader().contentType("text/event-stream")
        .returnResult(Person.class);

// Use StepVerifier from Project Reactor to test the streaming response

StepVerifier.create(exchangeResult.getResponseBody())
        .expectNext(new Person("N0"), new Person("N1"), new Person("N2"))
        .expectNextCount(4)
        .consumeNextWith(person -> assertThat(person.getName()).endsWith("7"))
        .thenCancel()
        .verify();
```

`WebTestClient` can also connect to a live server and perform full end-to-end integration tests. This is also supported in Spring Boot, where you can test a running server.

When setting up a `MockMvc` instance, you can register one or more Servlet `Filter` instances, as the following example shows:

```
mockMvc = standaloneSetup(new PersonController()).addFilters(new CharacterEncodingFilter()).build();
```

Registered filters are invoked through the `MockFilterChain` from `spring-test`, and the last filter delegates to the `DispatcherServlet`.

MockMvc is built on Servlet API mock implementations from the `spring-test` module and does not rely on a running container. Therefore, there are some differences when compared to full end-to-end integration tests with an actual client and a live server running.

The easiest way to think about this is by starting with a blank `MockHttpServletRequest`. Whatever you add to it is what the request becomes. Things that may catch you by surprise are that there is no context path by default; no `jsessionid` cookie; no forwarding, error, or async dispatches; and, therefore, no actual JSP rendering. Instead, "forwarded" and "redirected" URLs are saved in the `MockHttpServletResponse` and can be asserted with expectations.

This means that, if you use JSPs, you can verify the JSP page to which the request was forwarded, but no HTML is rendered. In other words, the JSP is not invoked. Note, however, that all other rendering technologies that do not rely on forwarding, such as Thymeleaf and FreeMarker, render HTML to the response body as expected. The same is true for rendering JSON, XML, and other formats through `@ResponseBody` methods.

Alternatively, you may consider the full end-to-end integration testing support from Spring Boot with `@SpringBootTest`. See the Spring Boot Reference Guide.

There are pros and cons for each approach. The options provided in Spring MVC Test are different stops on the scale from classic unit testing to full integration testing. To be certain, none of the options in Spring MVC Test fall under the category of classic unit testing, but they are a little closer to it. For example, you can isolate the web layer by injecting mocked services into controllers, in which case you are testing the web layer only through the `DispatcherServlet` but with actual Spring configuration, as you might test the data access layer in isolation from the layers above it. Also, you can use the stand-alone setup, focusing on one controller at a time and manually providing the configuration required to make it work.

Another important distinction when using Spring MVC Test is that, conceptually, such tests focus on the server side, so you can check what handler was used, whether an exception was handled with a HandlerExceptionResolver, what the content of the model is, what binding errors there were, and other details.
That means that it is easier to write expectations, since the server is not an opaque box, as it is when testing it through an actual HTTP client. This is generally an advantage of classic unit testing: it is easier to write, reason about, and debug but does not replace the need for full integration tests. At the same time, it is important not to lose sight of the fact that the response is the most important thing to check. In short, there is room here for multiple styles and strategies of testing, even within the same project.

## HtmlUnit Integration

Spring provides integration between MockMvc and HtmlUnit. This simplifies performing end-to-end testing when using HTML-based views. This integration lets you:

* Easily test HTML pages by using tools such as HtmlUnit, WebDriver, and Geb without the need to deploy to a Servlet container.
* Test JavaScript within pages.
* Optionally, test using mock services to speed up testing.
* Share logic between in-container end-to-end tests and out-of-container integration tests.

MockMvc works with templating technologies that do not rely on a Servlet Container (for example, Thymeleaf, FreeMarker, and others), but it does not work with JSPs, since they rely on the Servlet container.

The most obvious question that comes to mind is “Why do I need this?” The answer is best found by exploring a very basic sample application. Assume you have a Spring MVC web application that supports CRUD operations on a `Message` object. The application also supports paging through all messages. How would you go about testing it?

With Spring MVC Test, we can easily test whether we are able to create a `Message`, as follows:

```
MockHttpServletRequestBuilder createMessage = post("/messages/")
		.param("summary", "Spring Rocks")
		.param("text", "In case you didn't know, Spring Rocks!");

mockMvc.perform(createMessage)
		.andExpect(status().is3xxRedirection())
		.andExpect(redirectedUrl("/messages/123"));
```

```
@Test
fun test() {
	mockMvc.post("/messages/") {
		param("summary", "Spring Rocks")
		param("text", "In case you didn't know, Spring Rocks!")
	}.andExpect {
		status().is3xxRedirection()
		redirectedUrl("/messages/123")
	}
}
```

What if we want to test the form view that lets us create the message? For example, assume our form looks like the following snippet:

```
<form id="messageForm" action="/messages/" method="post">
	<div class="pull-right"><a href="/messages/">Messages</a></div>

	<label for="summary">Summary</label>
	<input type="text" class="required" id="summary" name="summary" value="" />

	<label for="text">Message</label>
	<textarea id="text" name="text"></textarea>

	<div class="form-actions">
		<input type="submit" value="Create" />
	</div>
</form>
```

How do we ensure that our form produces the correct request to create a new message? A naive attempt might resemble the following:

```
mockMvc.perform(get("/messages/form"))
		.andExpect(xpath("//input[@name='summary']").exists())
		.andExpect(xpath("//textarea[@name='text']").exists());
```

```
mockMvc.get("/messages/form").andExpect {
	xpath("//input[@name='summary']") { exists() }
	xpath("//textarea[@name='text']") { exists() }
}
```

This test has some obvious drawbacks. If we update our controller to use the parameter `message` instead of `text`, our form test continues to pass, even though the HTML form is out of sync with the controller.
To resolve this, we can combine our two tests, as follows:

```
String summaryParamName = "summary";
String textParamName = "text";
mockMvc.perform(get("/messages/form"))
		.andExpect(xpath("//input[@name='" + summaryParamName + "']").exists())
		.andExpect(xpath("//textarea[@name='" + textParamName + "']").exists());

MockHttpServletRequestBuilder createMessage = post("/messages/")
		.param(summaryParamName, "Spring Rocks")
		.param(textParamName, "In case you didn't know, Spring Rocks!");

mockMvc.perform(createMessage)
		.andExpect(status().is3xxRedirection())
		.andExpect(redirectedUrl("/messages/123"));
```

```
val summaryParamName = "summary"
val textParamName = "text"
mockMvc.get("/messages/form").andExpect {
	xpath("//input[@name='$summaryParamName']") { exists() }
	xpath("//textarea[@name='$textParamName']") { exists() }
}

mockMvc.post("/messages/") {
	param(summaryParamName, "Spring Rocks")
	param(textParamName, "In case you didn't know, Spring Rocks!")
}.andExpect {
	status().is3xxRedirection()
	redirectedUrl("/messages/123")
}
```

This would reduce the risk of our test incorrectly passing, but there are still some problems:

* What if we have multiple forms on our page? Admittedly, we could update our XPath expressions, but they get more complicated as we take more factors into account: Are the fields the correct type? Are the fields enabled? And so on.
* Another issue is that we are doing double the work we would expect. We must first verify the view, and then we submit the view with the same parameters we just verified. Ideally, this could be done all at once.
* Finally, we still cannot account for some things. For example, what if the form has JavaScript validation that we wish to test as well?

The overall problem is that testing a web page does not involve a single interaction. Instead, it is a combination of how the user interacts with a web page and how that web page interacts with other resources. For example, the result of a form view is used as the input to a user for creating a message. In addition, our form view can potentially use additional resources that impact the behavior of the page, such as JavaScript validation.

## Integration Testing to the Rescue?

To resolve the issues mentioned earlier, we could perform end-to-end integration testing, but this has some drawbacks. Consider testing the view that lets us page through the messages. We might need the following tests:

* Does our page display a notification to the user to indicate that no results are available when the messages are empty?
* Does our page properly display a single message?
* Does our page properly support paging?

To set up these tests, we need to ensure our database contains the proper messages. This leads to a number of additional challenges:

* Ensuring the proper messages are in the database can be tedious. (Consider foreign key constraints.)
* Testing can become slow, since each test would need to ensure that the database is in the correct state.
* Since our database needs to be in a specific state, we cannot run tests in parallel.
* Performing assertions on such items as auto-generated IDs, timestamps, and others can be difficult.

These challenges do not mean that we should abandon end-to-end integration testing altogether. Instead, we can reduce the number of end-to-end integration tests by refactoring our detailed tests to use mock services that run much faster, more reliably, and without side effects. We can then implement a small number of true end-to-end integration tests that validate simple workflows to ensure that everything works together properly.
## Enter HtmlUnit Integration

So how can we achieve a balance between testing the interactions of our pages and still retain good performance within our test suite? The answer is: “By integrating MockMvc with HtmlUnit.”

## HtmlUnit Integration Options

You have a number of options when you want to integrate MockMvc with HtmlUnit:

* MockMvc and HtmlUnit: Use this option if you want to use the raw HtmlUnit libraries.
* MockMvc and WebDriver: Use this option to ease development and reuse code between integration and end-to-end testing.
* MockMvc and Geb: Use this option if you want to use Groovy for testing, ease development, and reuse code between integration and end-to-end testing.

This section describes how to integrate MockMvc and HtmlUnit. Use this option if you want to use the raw HtmlUnit libraries.

## MockMvc and HtmlUnit Setup

First, make sure that you have included a test dependency on `net.sourceforge.htmlunit:htmlunit`. In order to use HtmlUnit with Apache HttpComponents 4.5+, you need to use HtmlUnit 2.18 or higher.

We can easily create an HtmlUnit `WebClient` that integrates with MockMvc by using the `MockMvcWebClientBuilder` (a minimal setup sketch appears at the end of this walkthrough). With such a `WebClient` in place, we can request the create message form:

```
HtmlPage createMsgFormPage = webClient.getPage("http://localhost/messages/form");
```

```
val createMsgFormPage = webClient.getPage("http://localhost/messages/form")
```

Note that the default context path is the empty string (`""`).

Once we have a reference to the `HtmlPage`, we can then fill out the form and submit it to create a message, as the following example shows:

```
HtmlForm form = createMsgFormPage.getHtmlElementById("messageForm");
HtmlTextInput summaryInput = createMsgFormPage.getHtmlElementById("summary");
summaryInput.setValueAttribute("Spring Rocks");
HtmlTextArea textInput = createMsgFormPage.getHtmlElementById("text");
textInput.setText("In case you didn't know, Spring Rocks!");
HtmlSubmitInput submit = form.getOneHtmlElementByAttribute("input", "type", "submit");
HtmlPage newMessagePage = submit.click();
```

```
val form = createMsgFormPage.getHtmlElementById("messageForm")
val summaryInput = createMsgFormPage.getHtmlElementById("summary")
summaryInput.setValueAttribute("Spring Rocks")
val textInput = createMsgFormPage.getHtmlElementById("text")
textInput.setText("In case you didn't know, Spring Rocks!")
val submit = form.getOneHtmlElementByAttribute("input", "type", "submit")
val newMessagePage = submit.click()
```

Finally, we can verify that a new message was created successfully:

```
assertThat(newMessagePage.getUrl().toString()).endsWith("/messages/123");
String id = newMessagePage.getHtmlElementById("id").getTextContent();
assertThat(id).isEqualTo("123");
String summary = newMessagePage.getHtmlElementById("summary").getTextContent();
assertThat(summary).isEqualTo("Spring Rocks");
String text = newMessagePage.getHtmlElementById("text").getTextContent();
assertThat(text).isEqualTo("In case you didn't know, Spring Rocks!");
```

```
assertThat(newMessagePage.getUrl().toString()).endsWith("/messages/123")
val id = newMessagePage.getHtmlElementById("id").getTextContent()
assertThat(id).isEqualTo("123")
val summary = newMessagePage.getHtmlElementById("summary").getTextContent()
assertThat(summary).isEqualTo("Spring Rocks")
val text = newMessagePage.getHtmlElementById("text").getTextContent()
assertThat(text).isEqualTo("In case you didn't know, Spring Rocks!")
```

The preceding code improves on our MockMvc test in a number of ways. First, we no longer have to explicitly verify our form and then create a request that looks like the form. Instead, we request the form, fill it out, and submit it, thereby significantly reducing the overhead.
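For reference, here is a minimal sketch of the `WebClient` setup that this walkthrough assumes. It uses `MockMvcWebClientBuilder` to build the client from the `WebApplicationContext` loaded by the Spring TestContext Framework; the `WebConfig` class name is illustrative:

```
import com.gargoylesoftware.htmlunit.WebClient;
import org.junit.jupiter.api.BeforeEach;
import org.springframework.test.context.junit.jupiter.web.SpringJUnitWebConfig;
import org.springframework.test.web.servlet.htmlunit.MockMvcWebClientBuilder;
import org.springframework.web.context.WebApplicationContext;

@SpringJUnitWebConfig(WebConfig.class) // WebConfig is illustrative
class MockMvcHtmlUnitSetupTests {

	WebClient webClient;

	@BeforeEach
	void setup(WebApplicationContext context) {
		// Requests to http://localhost are routed to MockMvc; no Servlet container runs.
		this.webClient = MockMvcWebClientBuilder
				.webAppContextSetup(context)
				.build();
	}
}
```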
Another important factor is that HtmlUnit uses the Mozilla Rhino engine to evaluate JavaScript. This means that we can also test the behavior of JavaScript within our pages. See the HtmlUnit documentation for additional information about using HtmlUnit.

## Advanced MockMvcWebClientBuilder

In the examples so far, we have used `MockMvcWebClientBuilder` in the simplest way possible, by building a `WebClient` based on the `WebApplicationContext` loaded for us by the Spring TestContext Framework. We can also specify additional configuration options, such as the context path and which hosts should be routed to `MockMvc` instead of a real HTTP connection.

## MockMvc and WebDriver

In the previous sections, we have seen how to use MockMvc in conjunction with the raw HtmlUnit APIs. In this section, we use additional abstractions within the Selenium WebDriver to make things even easier.

## Why WebDriver and MockMvc?

We can already use HtmlUnit and MockMvc, so why would we want to use WebDriver? The Selenium WebDriver provides a very elegant API that lets us easily organize our code. To better show how it works, we explore an example in this section.

Despite being a part of Selenium, WebDriver does not require a Selenium Server to run your tests.

Suppose we need to ensure that a message is created properly. The tests involve finding the HTML form input elements, filling them out, and making various assertions. This approach results in numerous separate tests because we want to test error conditions as well. For example, we want to ensure that we get an error if we fill out only part of the form. If we fill out the entire form, the newly created message should be displayed afterwards.

If one of the fields were named “summary”, we might have something that resembles the following repeated in multiple places within our tests:

```
HtmlTextInput summaryInput = currentPage.getHtmlElementById("summary");
summaryInput.setValueAttribute(summary);
```

```
val summaryInput = currentPage.getHtmlElementById("summary")
summaryInput.setValueAttribute(summary)
```

So what happens if we change the `id` to `smmry`? Doing so would force us to update all of our tests to incorporate this change. This violates the DRY principle, so we should ideally extract this code into its own method, as follows:

```
public HtmlPage createMessage(HtmlPage currentPage, String summary, String text) {
	setSummary(currentPage, summary);
	// ...
}

public void setSummary(HtmlPage currentPage, String summary) {
	HtmlTextInput summaryInput = currentPage.getHtmlElementById("summary");
	summaryInput.setValueAttribute(summary);
}
```

```
fun createMessage(currentPage: HtmlPage, summary: String, text: String): HtmlPage {
	setSummary(currentPage, summary)
	// ...
}

fun setSummary(currentPage: HtmlPage, summary: String) {
	val summaryInput = currentPage.getHtmlElementById("summary")
	summaryInput.setValueAttribute(summary)
}
```

Doing so ensures that we do not have to update all of our tests if we change the UI. We might even take this a step further and place this logic within an `Object` that represents the `HtmlPage` we are currently on, as the following example shows:

```
public class CreateMessagePage {

	final HtmlPage currentPage;
	final HtmlTextInput summaryInput;
	final HtmlSubmitInput submit;

	public CreateMessagePage(HtmlPage currentPage) {
		this.currentPage = currentPage;
		this.summaryInput = currentPage.getHtmlElementById("summary");
		this.submit = currentPage.getHtmlElementById("submit");
	}

	public <T> T createMessage(String summary, String text) throws Exception {
		setSummary(summary);
		HtmlPage result = submit.click();
		boolean error = CreateMessagePage.at(result);
		return (T) (error ? new CreateMessagePage(result) : new ViewMessagePage(result));
	}

	public void setSummary(String summary) throws Exception {
		summaryInput.setValueAttribute(summary);
	}

	public static boolean at(HtmlPage page) {
		return "Create Message".equals(page.getTitleText());
	}
}
```

```
class CreateMessagePage(private val currentPage: HtmlPage) {

	val summaryInput: HtmlTextInput = currentPage.getHtmlElementById("summary")
	val submit: HtmlSubmitInput = currentPage.getHtmlElementById("submit")

	fun <T> createMessage(summary: String, text: String): T {
		setSummary(summary)
		val result = submit.click()
		val error = at(result)
		return (if (error) CreateMessagePage(result) else ViewMessagePage(result)) as T
	}

	fun setSummary(summary: String) {
		summaryInput.setValueAttribute(summary)
	}

	fun at(page: HtmlPage): Boolean {
		return "Create Message" == page.getTitleText()
	}
}
```

This pattern is known as the Page Object Pattern. While we can certainly do this with HtmlUnit, WebDriver provides some tools that we explore in the following sections to make this pattern much easier to implement.

## MockMvc and WebDriver Setup

To use Selenium WebDriver with the Spring MVC Test framework, make sure that your project includes a test dependency on `org.seleniumhq.selenium:selenium-htmlunit-driver`.

We can easily create a Selenium WebDriver that integrates with MockMvc by using the `MockMvcHtmlUnitDriverBuilder` (a minimal setup sketch appears after this section). With the driver in place, we can navigate to the create message page:

```
CreateMessagePage page = CreateMessagePage.to(driver);
```

```
val page = CreateMessagePage.to(driver)
```

We can then fill out the form and submit it to create a message, as follows:

```
ViewMessagePage viewMessagePage = page.createMessage(ViewMessagePage.class, expectedSummary, expectedText);
```

```
val viewMessagePage = page.createMessage(ViewMessagePage::class.java, expectedSummary, expectedText)
```

This improves on the design of our HtmlUnit test by leveraging the Page Object Pattern. As we mentioned in Why WebDriver and MockMvc?, we can use the Page Object Pattern with HtmlUnit, but it is much easier with WebDriver.
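For reference, a minimal sketch of the driver setup that the preceding usage assumes, built from the `WebApplicationContext` loaded by the Spring TestContext Framework (the `WebConfig` class name is illustrative):

```
import org.junit.jupiter.api.BeforeEach;
import org.openqa.selenium.WebDriver;
import org.springframework.test.context.junit.jupiter.web.SpringJUnitWebConfig;
import org.springframework.test.web.servlet.htmlunit.webdriver.MockMvcHtmlUnitDriverBuilder;
import org.springframework.web.context.WebApplicationContext;

@SpringJUnitWebConfig(WebConfig.class) // WebConfig is illustrative
class MockMvcWebDriverSetupTests {

	WebDriver driver;

	@BeforeEach
	void setup(WebApplicationContext context) {
		// The resulting HtmlUnit-backed driver routes requests to MockMvc
		// instead of a live server.
		this.driver = MockMvcHtmlUnitDriverBuilder
				.webAppContextSetup(context)
				.build();
	}
}
```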
Consider the following `CreateMessagePage` implementation:

```
public class CreateMessagePage extends AbstractPage { (1)

	(2)
	private WebElement summary;
	private WebElement text;

	@FindBy(css = "input[type=submit]") (3)
	private WebElement submit;

	public CreateMessagePage(WebDriver driver) {
		super(driver);
	}

	public <T> T createMessage(Class<T> resultPage, String summary, String details) {
		this.summary.sendKeys(summary);
		this.text.sendKeys(details);
		this.submit.click();
		return PageFactory.initElements(driver, resultPage);
	}

	public static CreateMessagePage to(WebDriver driver) {
		driver.get("http://localhost:9990/mail/messages/form");
		return PageFactory.initElements(driver, CreateMessagePage.class);
	}
}
```

```
class CreateMessagePage(private val driver: WebDriver) : AbstractPage(driver) { (1)

	(2)
	private lateinit var summary: WebElement
	private lateinit var text: WebElement

	@FindBy(css = "input[type=submit]") (3)
	private lateinit var submit: WebElement

	fun <T> createMessage(resultPage: Class<T>, summary: String, details: String): T {
		this.summary.sendKeys(summary)
		text.sendKeys(details)
		submit.click()
		return PageFactory.initElements(driver, resultPage)
	}

	companion object {
		fun to(driver: WebDriver): CreateMessagePage {
			driver.get("http://localhost:9990/mail/messages/form")
			return PageFactory.initElements(driver, CreateMessagePage::class.java)
		}
	}
}
```

Finally, we can verify that a new message was created successfully:

```
assertThat(viewMessagePage.getMessage()).isEqualTo(expectedMessage);
assertThat(viewMessagePage.getSuccess()).isEqualTo("Successfully created a new message");
```

```
assertThat(viewMessagePage.message).isEqualTo(expectedMessage)
assertThat(viewMessagePage.success).isEqualTo("Successfully created a new message")
```

We can see that our `ViewMessagePage` lets us interact with our custom domain model. For example, it exposes a method that returns a `Message` object:

```
public Message getMessage() throws ParseException {
	Message message = new Message();
	message.setId(getId());
	message.setCreated(getCreated());
	message.setSummary(getSummary());
	message.setText(getText());
	return message;
}
```

```
fun getMessage() = Message(getId(), getCreated(), getSummary(), getText())
```

We can then use the rich domain objects in our assertions. Lastly, we must not forget to close the `WebDriver` instance when the test is complete, as follows:

```
@AfterEach
void destroy() {
	if (driver != null) {
		driver.close();
	}
}
```

```
@AfterEach
fun destroy() {
	if (driver != null) {
		driver.close()
	}
}
```

For additional information on using WebDriver, see the Selenium WebDriver documentation.

## Advanced MockMvcHtmlUnitDriverBuilder

In the examples so far, we have used `MockMvcHtmlUnitDriverBuilder` in the simplest way possible, by building a `WebDriver` based on the `WebApplicationContext` loaded for us by the Spring TestContext Framework. We can also specify additional configuration options, in the same way as with `MockMvcWebClientBuilder`.

## MockMvc and Geb

In the previous section, we saw how to use MockMvc with WebDriver. In this section, we use Geb to make our tests even Groovy-er.

## Why Geb and MockMvc?

Geb is backed by WebDriver, so it offers many of the same benefits that we get from WebDriver. However, Geb makes things even easier by taking care of some of the boilerplate code for us.

## MockMvc and Geb Setup

We can easily initialize a Geb `Browser` with a Selenium WebDriver that uses MockMvc, as follows:

```
def setup() {
	browser.driver = MockMvcHtmlUnitDriverBuilder
		.webAppContextSetup(context)
		.build()
}
```

## MockMvc and Geb Usage

Now we can use Geb as we normally would but without the need to deploy our application to a Servlet container.
For example, we can request the view to create a message with the following:

`to CreateMessagePage`

We can then fill out the form and submit it to create a message, as follows:

```
when:
form.summary = expectedSummary
form.text = expectedMessage
submit.click(ViewMessagePage)
```

Any method calls, property accesses, or references that cannot be resolved are forwarded to the current page object. This removes a lot of the boilerplate code we needed when using WebDriver directly.

As with direct WebDriver usage, this improves on the design of our HtmlUnit test by using the Page Object Pattern. As mentioned previously, we can use the Page Object Pattern with HtmlUnit and WebDriver, but it is even easier with Geb. Consider our new Groovy-based `CreateMessagePage` implementation:

```
class CreateMessagePage extends Page {
	static url = 'messages/form'
	static at = { assert title == 'Messages : Create'; true }
	static content = {
		submit { $('input[type=submit]') }
		form { $('form') }
		errors(required: false) { $('label.error, .alert-error')?.text() }
	}
}
```

Our `CreateMessagePage` extends `Page`. We do not go over the details of `Page`, but, in summary, it contains common functionality for all of our pages. We define a URL in which this page can be found. This lets us navigate to the page, as follows:

`to CreateMessagePage`

We also have an `at` closure that determines if we are at the specified page. It should return `true` if we are on the correct page. This is why we can assert that we are on the correct page, as follows:

```
then:
at CreateMessagePage
errors.contains('This field is required.')
```

We use an assertion in the closure so that we can determine where things went wrong if we were at the wrong page.

Next, we create a `content` closure that specifies all the areas of interest within the page. We can use a jQuery-ish Navigator API to select the content in which we are interested. Finally, we can verify that a new message was created successfully, as follows:

```
then:
at ViewMessagePage
success == 'Successfully created a new message'
id
date
summary == expectedSummary
message == expectedMessage
```

For further details on how to get the most out of Geb, see The Book of Geb user’s manual.

## Testing Client Applications

You can use client-side tests to test code that internally uses the `RestTemplate`. The idea is to declare expected requests and to provide “stub” responses so that you can focus on testing the code in isolation (that is, without running a server). The following example shows how to do so:

```
MockRestServiceServer mockServer = MockRestServiceServer.bindTo(restTemplate).build();
mockServer.expect(requestTo("/greeting")).andRespond(withSuccess());
```

```
val mockServer = MockRestServiceServer.bindTo(restTemplate).build()
mockServer.expect(requestTo("/greeting")).andRespond(withSuccess())
```

`MockRestServiceServer` (the central class for client-side REST tests) configures the `RestTemplate` with a custom `ClientHttpRequestFactory` that asserts actual requests against expectations and returns “stub” responses. In this case, we expect a request to `/greeting` and want to return a 200 response with `text/plain` content. We can define additional expected requests and stub responses as needed. When we define expected requests and stub responses, the `RestTemplate` can be used in client-side code as usual. At the end of testing, `mockServer.verify()` can be used to verify that all expectations have been satisfied.

By default, requests are expected in the order in which expectations were declared.
You can set the `ignoreExpectOrder` option when building the server, in which case all expectations are checked (in order) to find a match for a given request. That means requests are allowed to come in any order. The following example uses `ignoreExpectOrder`:

```
server = MockRestServiceServer.bindTo(restTemplate).ignoreExpectOrder(true).build();
```

```
server = MockRestServiceServer.bindTo(restTemplate).ignoreExpectOrder(true).build()
```

Even with unordered requests, by default, each request is allowed to run once only. The `expect` method provides an overloaded variant that accepts an `ExpectedCount` argument that specifies a count range (for example, `once`, `manyTimes`, `max`, `min`, `between`, and so on). The following example uses `times`:

```
MockRestServiceServer mockServer = MockRestServiceServer.bindTo(restTemplate).build();
mockServer.expect(times(2), requestTo("/something")).andRespond(withSuccess());
mockServer.expect(times(3), requestTo("/somewhere")).andRespond(withSuccess());
```

```
val mockServer = MockRestServiceServer.bindTo(restTemplate).build()
mockServer.expect(times(2), requestTo("/something")).andRespond(withSuccess())
mockServer.expect(times(3), requestTo("/somewhere")).andRespond(withSuccess())
```

Note that, when `ignoreExpectOrder` is not set (the default) and, therefore, requests are expected in order of declaration, that order applies only to the first of any expected request. For example, if "/something" is expected two times followed by "/somewhere" three times, then there should be a request to "/something" before there is a request to "/somewhere", but, aside from that, subsequent "/something" and "/somewhere" requests can come at any time.

As an alternative to all of the above, the client-side test support also provides a `MockMvcClientHttpRequestFactory` implementation that you can configure into a `RestTemplate` to bind it to a `MockMvc` instance. That allows processing requests using actual server-side logic but without running a server. The following example shows how to do so:

```
MockMvc mockMvc = MockMvcBuilders.webAppContextSetup(this.wac).build();
this.restTemplate = new RestTemplate(new MockMvcClientHttpRequestFactory(mockMvc));
```

```
val mockMvc = MockMvcBuilders.webAppContextSetup(this.wac).build()
restTemplate = RestTemplate(MockMvcClientHttpRequestFactory(mockMvc))
```

In some cases, it may be necessary to perform an actual call to a remote service instead of mocking the response. The following example shows how to do that through `ExecutingResponseCreator`:

```
// Create ExecutingResponseCreator with the original request factory
ExecutingResponseCreator withActualResponse = new ExecutingResponseCreator(restTemplate.getRequestFactory());

MockRestServiceServer mockServer = MockRestServiceServer.bindTo(restTemplate).build();
mockServer.expect(requestTo("/profile")).andRespond(withSuccess());
mockServer.expect(requestTo("/quoteOfTheDay")).andRespond(withActualResponse);
```

```
// Create ExecutingResponseCreator with the original request factory
val withActualResponse = ExecutingResponseCreator(restTemplate.getRequestFactory())

val mockServer = MockRestServiceServer.bindTo(restTemplate).build()
mockServer.expect(requestTo("/profile")).andRespond(withSuccess())
mockServer.expect(requestTo("/quoteOfTheDay")).andRespond(withActualResponse)
```

In this example, we create the `ExecutingResponseCreator` using the `ClientHttpRequestFactory` from the `RestTemplate` before `MockRestServiceServer` replaces it with a different one that mocks responses.
Then we define expectations with two kinds of responses:

* a stub `200` response for the `/profile` endpoint (no actual request will be executed)
* a response obtained through a call to the `/quoteOfTheDay` endpoint

In the second case, the request is executed through the `ClientHttpRequestFactory` that was captured earlier. This generates a response that could, for example, come from an actual remote server, depending on how the `RestTemplate` was originally configured.

## Static Imports

As with server-side tests, the fluent API for client-side tests requires a few static imports. Those are easy to find by searching for `MockRest*`. Eclipse users should add `MockRestRequestMatchers.*` and `MockRestResponseCreators.*` as “favorite static members” in the Eclipse preferences under Java → Editor → Content Assist → Favorites. That allows using content assist after typing the first character of the static method name. Other IDEs (such as IntelliJ IDEA) may not require any additional configuration. Check for support for code completion on static members.

## Further Examples of Client-side REST Tests

Spring MVC Test’s own tests include example tests of client-side REST tests.

# Annotations

This section covers annotations that you can use when you test Spring applications:

* Standard Annotation Support
* Spring Testing Annotations
* Spring JUnit 4 Testing Annotations
* Spring JUnit Jupiter Testing Annotations
* Meta-Annotation Support for Testing

## Standard Annotation Support

The following annotations are supported with standard semantics for all configurations of the Spring TestContext Framework. Note that these annotations are not specific to tests and can be used anywhere in the Spring Framework.

* `@Autowired`
* `@Qualifier`
* `@Value`
* `@Resource` (jakarta.annotation) if JSR-250 is present
* `@ManagedBean` (jakarta.annotation) if JSR-250 is present
* `@Inject` (jakarta.inject) if JSR-330 is present
* `@Named` (jakarta.inject) if JSR-330 is present
* `@PersistenceContext` (jakarta.persistence) if JPA is present
* `@PersistenceUnit` (jakarta.persistence) if JPA is present
* `@Transactional` (org.springframework.transaction.annotation) with limited attribute support

## Spring Testing Annotations

The Spring Framework provides the following set of Spring-specific annotations that you can use in your unit and integration tests in conjunction with the TestContext framework. See the corresponding javadoc for further information, including default attribute values, attribute aliases, and other details.

Spring’s testing annotations include the following:

# @BootstrapWith

`@BootstrapWith` is a class-level annotation that you can use to configure how the Spring TestContext Framework is bootstrapped. Specifically, you can use `@BootstrapWith` to specify a custom `TestContextBootstrapper`. See the section on bootstrapping the TestContext framework for further details.

# @ContextConfiguration

`@ContextConfiguration` defines class-level metadata that is used to determine how to load and configure an `ApplicationContext` for integration tests. Specifically, `@ContextConfiguration` declares the application context resource `locations` or the component `classes` used to load the context.

Resource locations are typically XML configuration files or Groovy scripts located in the classpath, while component classes are typically `@Configuration` classes. However, resource locations can also refer to files and scripts in the file system, and component classes can be `@Component` classes, `@Service` classes, and so on. See Component Classes for further details.
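As a sketch of typical usage (the file and class names are illustrative), `@ContextConfiguration` can reference an XML resource location or a component class:

```
// Loading an ApplicationContext from an XML resource location
@ContextConfiguration("/test-config.xml")
class XmlApplicationContextTests {
	// class body...
}
```

```
// Loading an ApplicationContext from a component class
@ContextConfiguration(classes = TestConfig.class)
class ConfigClassApplicationContextTests {
	// class body...
}
```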
As an alternative or in addition to declaring resource locations or component classes, you can use `@ContextConfiguration` to declare `ApplicationContextInitializer` classes.

You can optionally use `@ContextConfiguration` to declare the `ContextLoader` strategy as well. Note, however, that you typically do not need to explicitly configure the loader, since the default loader supports `initializers` and either resource `locations` or component `classes`. The following example uses both a location and a loader:

```
@ContextConfiguration(locations = "/test-context.xml", loader = CustomContextLoader.class) (1)
class CustomLoaderXmlApplicationContextTests {
	// class body...
}
```

```
@ContextConfiguration("/test-context.xml", loader = CustomContextLoader::class) (1)
class CustomLoaderXmlApplicationContextTests {
	// class body...
}
```

`@ContextConfiguration` provides support for inheriting resource locations or configuration classes as well as context initializers that are declared by superclasses or enclosing classes. See Context Management, `@Nested` test class configuration, and the javadocs for further details.

# @WebAppConfiguration

`@WebAppConfiguration` is a class-level annotation that you can use to declare that the `ApplicationContext` loaded for an integration test should be a `WebApplicationContext`. The mere presence of `@WebAppConfiguration` on a test class ensures that a `WebApplicationContext` is loaded for the test, using the default value of `"file:src/main/webapp"` for the path to the root of the web application (that is, the resource base path). The resource base path is used behind the scenes to create a `MockServletContext`, which serves as the `ServletContext` for the test’s `WebApplicationContext`.

To override the default, you can specify a different base resource path by using the implicit `value` attribute. Both `classpath:` and `file:` resource prefixes are supported. If no resource prefix is supplied, the path is assumed to be a file system resource. For example, `@WebAppConfiguration("classpath:test-web-resources")` specifies a classpath resource.

Note that `@WebAppConfiguration` must be used in conjunction with `@ContextConfiguration`, either within a single test class or within a test class hierarchy. See the `@WebAppConfiguration` javadoc for further details.

# @ContextHierarchy

`@ContextHierarchy` is a class-level annotation that is used to define a hierarchy of `ApplicationContext` instances for integration tests. `@ContextHierarchy` should be declared with a list of one or more `@ContextConfiguration` instances, each of which defines a level in the context hierarchy. The following examples demonstrate the use of `@ContextHierarchy` within a single test class (`@ContextHierarchy` can also be used within a test class hierarchy):

```
@ContextHierarchy({
	@ContextConfiguration("/parent-config.xml"),
	@ContextConfiguration("/child-config.xml")
})
class ContextHierarchyTests {
	// class body...
}
```

```
@ContextHierarchy(
	ContextConfiguration("/parent-config.xml"),
	ContextConfiguration("/child-config.xml"))
class ContextHierarchyTests {
	// class body...
}
```

```
@WebAppConfiguration
@ContextHierarchy({
	@ContextConfiguration(classes = AppConfig.class),
	@ContextConfiguration(classes = WebConfig.class)
})
class WebIntegrationTests {
	// class body...
}
```

```
@WebAppConfiguration
@ContextHierarchy(
	ContextConfiguration(classes = [AppConfig::class]),
	ContextConfiguration(classes = [WebConfig::class]))
class WebIntegrationTests {
	// class body...
}
```

If you need to merge or override the configuration for a given level of the context hierarchy within a test class hierarchy, you must explicitly name that level by supplying the same value to the `name` attribute in `@ContextConfiguration` at each corresponding level in the class hierarchy. See Context Hierarchies and the `@ContextHierarchy` javadoc for further examples.

# @ActiveProfiles

`@ActiveProfiles` is a class-level annotation that is used to declare which bean definition profiles should be active when loading an `ApplicationContext` for an integration test. For example, `@ActiveProfiles("dev")` indicates that the `dev` profile should be active. The following example indicates that both the `dev` and the `integration` profiles should be active:

```
@ContextConfiguration
@ActiveProfiles({"dev", "integration"}) (1)
class DeveloperIntegrationTests {
	// class body...
}
```

```
@ContextConfiguration
@ActiveProfiles(["dev", "integration"]) (1)
class DeveloperIntegrationTests {
	// class body...
}
```

`@ActiveProfiles` provides support for inheriting active bean definition profiles declared by superclasses and enclosing classes by default. You can also resolve active bean definition profiles programmatically by implementing a custom `ActiveProfilesResolver` and registering it by using the `resolver` attribute of `@ActiveProfiles`.

See Context Configuration with Environment Profiles, `@Nested` test class configuration, and the `@ActiveProfiles` javadoc for examples and further details.

# @TestPropertySource

`@TestPropertySource` is a class-level annotation that you can use to configure the locations of properties files and inlined properties to be added to the set of `PropertySources` in the `Environment` for an `ApplicationContext` loaded for an integration test.

For example, `@TestPropertySource("/test.properties")` declares a properties file from the classpath. The following example demonstrates how to declare inlined properties:

```
@ContextConfiguration
@TestPropertySource(properties = { "timezone = GMT", "port: 4242" }) (1)
class MyIntegrationTests {
	// class body...
}
```

See Context Configuration with Test Property Sources for examples and further details.

# @DynamicPropertySource

`@DynamicPropertySource` is a method-level annotation that you can use to register dynamic properties to be added to the set of `PropertySources` in the `Environment` for an `ApplicationContext` loaded for an integration test. Dynamic properties are useful when you do not know the value of the properties upfront – for example, if the properties are managed by an external resource, such as a container managed by the Testcontainers project. The following example demonstrates how to register a dynamic property:

```
static MyExternalServer server = // ...

@DynamicPropertySource (1)
static void dynamicProperties(DynamicPropertyRegistry registry) { (2)
	registry.add("server.port", server::getPort); (3)
}
```

```
companion object {

	@JvmStatic
	val server: MyExternalServer = // ...

	@DynamicPropertySource (1)
	@JvmStatic
	fun dynamicProperties(registry: DynamicPropertyRegistry) { (2)
		registry.add("server.port", server::getPort) (3)
	}
}
```

See Context Configuration with Dynamic Property Sources for further details.
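To make this concrete, here is a minimal sketch using a Testcontainers-managed Redis container (the property names and the `TestConfig` class are illustrative). The suppliers are resolved lazily, so the randomly mapped port is read only after the container has started:

```
import org.junit.jupiter.api.BeforeAll;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.springframework.test.context.junit.jupiter.SpringJUnitConfig;
import org.testcontainers.containers.GenericContainer;

@SpringJUnitConfig(TestConfig.class) // TestConfig is illustrative
class RedisBackedIntegrationTests {

	static GenericContainer<?> redis =
			new GenericContainer<>("redis:6-alpine").withExposedPorts(6379);

	@BeforeAll
	static void startContainer() {
		redis.start();
	}

	@DynamicPropertySource
	static void redisProperties(DynamicPropertyRegistry registry) {
		// The host and mapped port are only known once the container is running.
		registry.add("redis.host", redis::getHost);
		registry.add("redis.port", redis::getFirstMappedPort);
	}
}
```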
# @DirtiesContext

`@DirtiesContext` indicates that the underlying Spring `ApplicationContext` has been dirtied during the execution of a test (that is, the test modified or corrupted it in some manner — for example, by changing the state of a singleton bean) and should be closed. When an application context is marked as dirty, it is removed from the testing framework’s cache and closed. As a consequence, the underlying Spring container is rebuilt for any subsequent test that requires a context with the same configuration metadata.

You can use `@DirtiesContext` as both a class-level and a method-level annotation within the same class or class hierarchy. In such scenarios, the `ApplicationContext` is marked as dirty before or after any such annotated method as well as before or after the current test class, depending on the configured `methodMode` and `classMode`. When `@DirtiesContext` is declared at both the class level and the method level, the configured modes from both annotations will be honored. For example, if the class mode is set to `BEFORE_EACH_TEST_METHOD` and the method mode is set to `AFTER_METHOD`, the context will be marked as dirty both before and after the given test method.

The context is dirtied in the following configuration scenarios:

* Before the current test class, when declared on a class with class mode set to `BEFORE_CLASS`.
* After the current test class, when declared on a class with class mode set to `AFTER_CLASS` (i.e., the default class mode).
* Before each test method in the current test class, when declared on a class with class mode set to `BEFORE_EACH_TEST_METHOD`.
* After each test method in the current test class, when declared on a class with class mode set to `AFTER_EACH_TEST_METHOD`.
* Before the current test, when declared on a method with the method mode set to `BEFORE_METHOD`.
* After the current test, when declared on a method with the method mode set to `AFTER_METHOD` (i.e., the default method mode).

If you use `@DirtiesContext` in a test whose context is configured as part of a context hierarchy with `@ContextHierarchy`, you can use the `hierarchyMode` flag to control how the context cache is cleared. By default, an exhaustive algorithm is used to clear the context cache, including not only the current level but also all other context hierarchies that share an ancestor context common to the current test. All `ApplicationContext` instances that reside in a sub-hierarchy of the common ancestor context are removed from the context cache and closed. If the exhaustive algorithm is overkill for a particular use case, you can specify the simpler current level algorithm, as the following example shows.

```
@ContextHierarchy({
	@ContextConfiguration("/parent-config.xml"),
	@ContextConfiguration("/child-config.xml")
})
class BaseTests {
	// class body...
}

class ExtendedTests extends BaseTests {

	@Test
	@DirtiesContext(hierarchyMode = CURRENT_LEVEL) (1)
	void test() {
		// some logic that results in the child context being dirtied
	}
}
```

```
@ContextHierarchy(
	ContextConfiguration("/parent-config.xml"),
	ContextConfiguration("/child-config.xml"))
open class BaseTests {
	// class body...
}

class ExtendedTests : BaseTests() {

	@Test
	@DirtiesContext(hierarchyMode = CURRENT_LEVEL) (1)
	fun test() {
		// some logic that results in the child context being dirtied
	}
}
```

For further details regarding the `EXHAUSTIVE` and `CURRENT_LEVEL` algorithms, see the `DirtiesContext.HierarchyMode` javadoc.

# @TestExecutionListeners

`@TestExecutionListeners` is used to register `TestExecutionListener` implementations for a particular test class, its subclasses, and its nested classes.
If you wish to register a listener globally, you should register it via the automatic discovery mechanism described in Configuration. The following example shows how to register two `TestExecutionListener` implementations:

```
@ContextConfiguration
@TestExecutionListeners({CustomTestExecutionListener.class, AnotherTestExecutionListener.class}) (1)
class CustomTestExecutionListenerTests {
	// class body...
}
```

```
@ContextConfiguration
@TestExecutionListeners(CustomTestExecutionListener::class, AnotherTestExecutionListener::class) (1)
class CustomTestExecutionListenerTests {
	// class body...
}
```

`@TestExecutionListeners` provides support for inheriting listeners from superclasses or enclosing classes. See `@Nested` test class configuration and the `@TestExecutionListeners` javadoc for an example and further details. If you discover that you need to switch back to using the default `TestExecutionListener` implementations, see the note in Registering `TestExecutionListener` Implementations.

# @RecordApplicationEvents

`@RecordApplicationEvents` is a class-level annotation that is used to instruct the Spring TestContext Framework to record all application events that are published in the `ApplicationContext` during the execution of a single test. The recorded events can be accessed via the `ApplicationEvents` API within tests. See Application Events and the `@RecordApplicationEvents` javadoc for an example and further details.

# @Commit

`@Commit` indicates that the transaction for a transactional test method should be committed after the test method has completed. You can use `@Commit` as a direct replacement for `@Rollback(false)` to more explicitly convey the intent of the code. Analogous to `@Rollback`, `@Commit` can also be declared as a class-level or method-level annotation — for example, by annotating a test method directly with `@Commit`.

# @Rollback

`@Rollback` indicates whether the transaction for a transactional test method should be rolled back after the test method has completed. If `true`, the transaction is rolled back. Otherwise, the transaction is committed (see also `@Commit`). Rollback for integration tests in the Spring TestContext Framework defaults to `true` even if `@Rollback` is not explicitly declared.

When declared as a class-level annotation, `@Rollback` defines the default rollback semantics for all test methods within the test class hierarchy. When declared as a method-level annotation, `@Rollback` defines rollback semantics for the specific test method, potentially overriding class-level `@Rollback` or `@Commit` semantics. For example, declaring `@Rollback(false)` on a test method causes the method’s result to not be rolled back (that is, the result is committed to the database).

# @BeforeTransaction

`@BeforeTransaction` indicates that the annotated `void` method should be run before a transaction is started, for test methods that have been configured to run within a transaction by using Spring’s `@Transactional` annotation. `@BeforeTransaction` methods are not required to be `public` and may be declared on Java 8-based interface default methods. For an example, see the sketch after the `@AfterTransaction` description below.

# @AfterTransaction

`@AfterTransaction` indicates that the annotated `void` method should be run after a transaction is ended, for test methods that have been configured to run within a transaction by using Spring’s `@Transactional` annotation. `@AfterTransaction` methods are not required to be `public` and may be declared on Java 8-based interface default methods.
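As a minimal sketch (the method names are illustrative), the two annotations are typically used together in a transactional test class:

```
@BeforeTransaction
void verifyInitialDatabaseState() {
	// logic to verify the initial state before a transaction is started
}

@AfterTransaction
void verifyFinalDatabaseState() {
	// logic to verify the final state after the transaction has rolled back
}
```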
# @Sql

`@Sql` is used to annotate a test class or test method to configure SQL scripts to be run against a given database during integration tests. The following example shows how to use it:

```
@Test
@Sql({"/test-schema.sql", "/test-user-data.sql"}) (1)
void userTest() {
	// run code that relies on the test schema and test data
}
```

```
@Test
@Sql("/test-schema.sql", "/test-user-data.sql") (1)
fun userTest() {
	// run code that relies on the test schema and test data
}
```

See Executing SQL scripts declaratively with @Sql for further details.

# @SqlConfig

`@SqlConfig` defines metadata that is used to determine how to parse and run SQL scripts configured with the `@Sql` annotation. The following example shows how to use it:

```
@Test
@Sql(
	scripts = "/test-user-data.sql",
	config = @SqlConfig(commentPrefix = "`", separator = "@@") (1)
)
void userTest() {
	// run code that relies on the test data
}
```

```
@Test
@Sql("/test-user-data.sql", config = SqlConfig(commentPrefix = "`", separator = "@@")) (1)
fun userTest() {
	// run code that relies on the test data
}
```

# @SqlMergeMode

`@SqlMergeMode` is used to annotate a test class or test method to configure whether method-level `@Sql` declarations are merged with class-level `@Sql` declarations. If `@SqlMergeMode` is not declared on a test class or test method, the `OVERRIDE` merge mode will be used by default. With the `OVERRIDE` mode, method-level `@Sql` declarations will effectively override class-level `@Sql` declarations. Note that a method-level `@SqlMergeMode` declaration overrides a class-level declaration.

The following example shows how to use `@SqlMergeMode` at the class level.

```
@SpringJUnitConfig(TestConfig.class)
@Sql("/test-schema.sql")
@SqlMergeMode(MERGE) (1)
class UserTests {
	// method-level @Sql declarations are merged with the class-level declaration
}
```

```
@SpringJUnitConfig(TestConfig::class)
@Sql("/test-schema.sql")
@SqlMergeMode(MERGE) (1)
class UserTests {
	// method-level @Sql declarations are merged with the class-level declaration
}
```

The following example shows how to use `@SqlMergeMode` at the method level.

```
@SpringJUnitConfig(TestConfig.class)
@Sql("/test-schema.sql")
class UserTests {

	@Test
	@Sql("/user-test-data-001.sql")
	@SqlMergeMode(MERGE) (1)
	void standardUserProfile() {
		// run code that relies on the merged test data
	}
}
```

```
@SpringJUnitConfig(TestConfig::class)
@Sql("/test-schema.sql")
class UserTests {

	@Test
	@Sql("/user-test-data-001.sql")
	@SqlMergeMode(MERGE) (1)
	fun standardUserProfile() {
		// run code that relies on the merged test data
	}
}
```

# @SqlGroup

`@SqlGroup` is a container annotation that aggregates several `@Sql` annotations. You can use `@SqlGroup` natively to declare several nested `@Sql` annotations, or you can use it in conjunction with Java 8’s support for repeatable annotations, where `@Sql` can be declared several times on the same class or method, implicitly generating this container annotation.

## Spring JUnit 4 Testing Annotations

The following annotations are supported only when used in conjunction with the SpringRunner, Spring’s JUnit 4 rules, or Spring’s JUnit 4 support classes:

# @IfProfileValue

`@IfProfileValue` indicates that the annotated test is enabled for a specific testing environment. If the configured `ProfileValueSource` returns a matching `value` for the provided `name`, the test is enabled. Otherwise, the test is disabled and, effectively, ignored. You can apply `@IfProfileValue` at the class level, the method level, or both. Class-level usage of `@IfProfileValue` takes precedence over method-level usage for any methods within that class or its subclasses. Specifically, a test is enabled if it is enabled both at the class level and at the method level. The absence of `@IfProfileValue` means the test is implicitly enabled.
This is analogous to the semantics of JUnit 4’s `@Ignore` annotation, except that the presence of `@Ignore` always disables a test. For example, a test annotated with `@IfProfileValue(name="java.vendor", value="Oracle Corporation")` runs only when the `java.vendor` system property matches.

Alternatively, you can configure `@IfProfileValue` with a list of `values` (with `OR` semantics) to achieve TestNG-like support for test groups in a JUnit 4 environment. Consider the following example:

```
@IfProfileValue(name="test-groups", values={"unit-tests", "integration-tests"}) (1)
@Test
public void testProcessWhichRunsForUnitOrIntegrationTestGroups() {
	// some logic that should run only for unit and integration test groups
}
```

```
@IfProfileValue(name="test-groups", values=["unit-tests", "integration-tests"]) (1)
@Test
fun testProcessWhichRunsForUnitOrIntegrationTestGroups() {
	// some logic that should run only for unit and integration test groups
}
```

# @ProfileValueSourceConfiguration

`@ProfileValueSourceConfiguration` is a class-level annotation that specifies what type of `ProfileValueSource` to use when retrieving profile values configured through the `@IfProfileValue` annotation. If `@ProfileValueSourceConfiguration` is not declared for a test, `SystemProfileValueSource` is used by default. The following example shows how to use it:

```
@ProfileValueSourceConfiguration(CustomProfileValueSource.class) (1)
public class CustomProfileValueSourceTests {
	// class body...
}
```

```
@ProfileValueSourceConfiguration(CustomProfileValueSource::class) (1)
class CustomProfileValueSourceTests {
	// class body...
}
```

# @Timed

`@Timed` indicates that the annotated test method must finish execution in a specified time period (in milliseconds). If the test execution time exceeds the specified time period, the test fails. The time period includes running the test method itself, any repetitions of the test (see `@Repeat`), as well as any setting up or tearing down of the test fixture. For example, `@Timed(millis = 1000)` fails a test that takes longer than one second.

Spring’s `@Timed` annotation has different semantics than JUnit 4’s `@Test(timeout=…)` support. Specifically, due to the manner in which JUnit 4 handles test execution timeouts (that is, by executing the test method in a separate `Thread`), `@Test(timeout=…)` preemptively fails the test if the test takes too long. Spring’s `@Timed`, on the other hand, does not preemptively fail the test but rather waits for the test to complete before failing.

# @Repeat

`@Repeat` indicates that the annotated test method must be run repeatedly. The number of times that the test method is to be run is specified in the annotation. The scope of execution to be repeated includes execution of the test method itself as well as any setting up or tearing down of the test fixture. When used with the `SpringMethodRule`, the scope additionally includes preparation of the test instance by `TestExecutionListener` implementations. The following example shows how to use the `@Repeat` annotation:

```
@Repeat(10) (1)
@Test
public void testProcessRepeatedly() {
	// ...
}
```

```
@Repeat(10) (1)
@Test
fun testProcessRepeatedly() {
	// ...
}
```

## Spring JUnit Jupiter Testing Annotations

The following annotations are supported when used in conjunction with the `SpringExtension` and JUnit Jupiter (that is, the programming model in JUnit 5):

# @SpringJUnitConfig

`@SpringJUnitConfig` is a composed annotation that combines `@ExtendWith(SpringExtension.class)` from JUnit Jupiter with `@ContextConfiguration` from the Spring TestContext Framework. It can be used at the class level as a drop-in replacement for `@ContextConfiguration`. The only difference with respect to configuration options between `@ContextConfiguration` and `@SpringJUnitConfig` is that component classes may be declared with the `value` attribute in `@SpringJUnitConfig`.
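For example, a minimal sketch (the `TestConfig` class is illustrative) of specifying a configuration class via the implicit `value` attribute:

```
// The SpringExtension is registered automatically by @SpringJUnitConfig
@SpringJUnitConfig(TestConfig.class)
class ConfigurationClassJUnitJupiterSpringTests {
	// class body...
}
```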
The following example shows how to use the `@SpringJUnitConfig` annotation to specify the location of a configuration file:

```
@SpringJUnitConfig(locations = "/test-config.xml") (1)
class XmlJUnitJupiterSpringTests {
	// class body...
}
```

```
@SpringJUnitConfig(locations = ["/test-config.xml"]) (1)
class XmlJUnitJupiterSpringTests {
	// class body...
}
```

See Context Management as well as the javadoc for `@SpringJUnitConfig` and `@ContextConfiguration` for further details.

# @SpringJUnitWebConfig

`@SpringJUnitWebConfig` is a composed annotation that combines `@ExtendWith(SpringExtension.class)` from JUnit Jupiter with `@ContextConfiguration` and `@WebAppConfiguration` from the Spring TestContext Framework. You can use it at the class level as a drop-in replacement for `@ContextConfiguration` and `@WebAppConfiguration`. With regard to configuration options, the only difference is that you can declare component classes by using the `value` attribute in `@SpringJUnitWebConfig`. In addition, you can override the `value` attribute from `@WebAppConfiguration` only by using the `resourcePath` attribute in `@SpringJUnitWebConfig`.

The following example shows how to use the `@SpringJUnitWebConfig` annotation to specify a configuration class:

```
@SpringJUnitWebConfig(TestConfig.class) (1)
class ConfigurationClassJUnitJupiterSpringWebTests {
	// class body...
}
```

```
@SpringJUnitWebConfig(TestConfig::class) (1)
class ConfigurationClassJUnitJupiterSpringWebTests {
	// class body...
}
```

The following example shows how to use the `@SpringJUnitWebConfig` annotation to specify the location of a configuration file:

```
@SpringJUnitWebConfig(locations = "/test-config.xml") (1)
class XmlJUnitJupiterSpringWebTests {
	// class body...
}
```

```
@SpringJUnitWebConfig(locations = ["/test-config.xml"]) (1)
class XmlJUnitJupiterSpringWebTests {
	// class body...
}
```

See Context Management as well as the javadoc for `@SpringJUnitWebConfig`, `@ContextConfiguration`, and `@WebAppConfiguration` for further details.

# @TestConstructor

`@TestConstructor` is a type-level annotation that is used to configure how the parameters of a test class constructor are autowired from components in the test’s `ApplicationContext`. If `@TestConstructor` is not present or meta-present on a test class, the default test constructor autowire mode will be used. See the tip below for details on how to change the default mode. Note, however, that a local declaration of `@Autowired` on a constructor takes precedence over both `@TestConstructor` and the default mode. As of Spring Framework 5.2, `@TestConstructor` is only supported in conjunction with the `SpringExtension` for use with JUnit Jupiter.

# @NestedTestConfiguration

`@NestedTestConfiguration` is a type-level annotation that is used to configure how Spring test configuration annotations are processed within enclosing class hierarchies for inner test classes. If `@NestedTestConfiguration` is not present or meta-present on a test class, in its supertype hierarchy, or in its enclosing class hierarchy, the default enclosing configuration inheritance mode will be used. See the tip below for details on how to change the default mode. The Spring TestContext Framework honors `@NestedTestConfiguration` semantics for most of its test configuration annotations. See `@Nested` test class configuration for an example and further details.

# @EnabledIf

`@EnabledIf` is used to signal that the annotated JUnit Jupiter test class or test method is enabled and should be run if the supplied `expression` evaluates to `true`. The expression can be any of the following:

* A Spring Expression Language (SpEL) expression. For example: `@EnabledIf("#{systemProperties['os.name'].toLowerCase().contains('mac')}")`
* A placeholder for a property available in the Spring `Environment`. For example: `@EnabledIf("${smoke.tests.enabled}")`
* A text literal. For example: `@EnabledIf("true")`

Note, however, that a text literal that is not the result of dynamic resolution of a property placeholder is of zero practical value, since `@EnabledIf("false")` is equivalent to `@Disabled` and `@EnabledIf("true")` is logically meaningless.

You can use `@EnabledIf` as a meta-annotation to create custom composed annotations.
For example, you can create a custom `@EnabledOnMac` annotation as follows:

```
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@EnabledIf(
	expression = "#{systemProperties['os.name'].toLowerCase().contains('mac')}",
	reason = "Enabled on Mac OS"
)
public @interface EnabledOnMac {}
```

```
@Target(AnnotationTarget.TYPE, AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
@EnabledIf(
	expression = "#{systemProperties['os.name'].toLowerCase().contains('mac')}",
	reason = "Enabled on Mac OS"
)
annotation class EnabledOnMac {}
```

# @DisabledIf

`@DisabledIf` is used to signal that the annotated JUnit Jupiter test class or test method is disabled and should not be run if the supplied `expression` evaluates to `true`. The expression can be any of the following:

* A Spring Expression Language (SpEL) expression. For example: `@DisabledIf("#{systemProperties['os.name'].toLowerCase().contains('mac')}")`
* A placeholder for a property available in the Spring `Environment`. For example: `@DisabledIf("${smoke.tests.disabled}")`
* A text literal. For example: `@DisabledIf("true")`

Note, however, that a text literal that is not the result of dynamic resolution of a property placeholder is of zero practical value, since `@DisabledIf("true")` is equivalent to `@Disabled` and `@DisabledIf("false")` is logically meaningless.

You can use `@DisabledIf` as a meta-annotation to create custom composed annotations. For example, you can create a custom `@DisabledOnMac` annotation as follows:

```
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@DisabledIf(
	expression = "#{systemProperties['os.name'].toLowerCase().contains('mac')}",
	reason = "Disabled on Mac OS"
)
public @interface DisabledOnMac {}
```

```
@Target(AnnotationTarget.TYPE, AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
@DisabledIf(
	expression = "#{systemProperties['os.name'].toLowerCase().contains('mac')}",
	reason = "Disabled on Mac OS"
)
annotation class DisabledOnMac {}
```

## Meta-Annotation Support for Testing

You can use most test-related annotations as meta-annotations to create custom composed annotations and reduce configuration duplication across a test suite. You can use each of the following as a meta-annotation in conjunction with the TestContext framework.
* `@BootstrapWith`
* `@ContextConfiguration`
* `@ContextHierarchy`
* `@ActiveProfiles`
* `@TestPropertySource`
* `@DirtiesContext`
* `@WebAppConfiguration`
* `@TestExecutionListeners`
* `@Transactional`
* `@BeforeTransaction`
* `@AfterTransaction`
* `@Commit`
* `@Rollback`
* `@Sql`
* `@SqlConfig`
* `@SqlMergeMode`
* `@SqlGroup`
* `@Repeat` (only supported on JUnit 4)
* `@Timed` (only supported on JUnit 4)
* `@IfProfileValue` (only supported on JUnit 4)
* `@ProfileValueSourceConfiguration` (only supported on JUnit 4)
* `@SpringJUnitConfig` (only supported on JUnit Jupiter)
* `@SpringJUnitWebConfig` (only supported on JUnit Jupiter)
* `@TestConstructor` (only supported on JUnit Jupiter)
* `@NestedTestConfiguration` (only supported on JUnit Jupiter)
* `@EnabledIf` (only supported on JUnit Jupiter)
* `@DisabledIf` (only supported on JUnit Jupiter)

Consider the following example:

```
@RunWith(SpringRunner.class)
@ContextConfiguration({"/app-config.xml", "/test-data-access-config.xml"})
@ActiveProfiles("dev")
@Transactional
public class OrderRepositoryTests { }

@RunWith(SpringRunner.class)
@ContextConfiguration({"/app-config.xml", "/test-data-access-config.xml"})
@ActiveProfiles("dev")
@Transactional
public class UserRepositoryTests { }
```

```
@RunWith(SpringRunner::class)
@ContextConfiguration("/app-config.xml", "/test-data-access-config.xml")
@ActiveProfiles("dev")
@Transactional
class OrderRepositoryTests { }
```

If we discover that we are repeating this configuration across our test suite, we can reduce the duplication by introducing a custom composed annotation:

```
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@ContextConfiguration({"/app-config.xml", "/test-data-access-config.xml"})
@ActiveProfiles("dev")
@Transactional
public @interface TransactionalDevTestConfig { }
```

Then we can use our custom `@TransactionalDevTestConfig` annotation to simplify the configuration of individual JUnit 4 based test classes, as follows:

```
@RunWith(SpringRunner.class)
@TransactionalDevTestConfig
public class UserRepositoryTests { }
```

```
@RunWith(SpringRunner::class)
@TransactionalDevTestConfig
class OrderRepositoryTests

@RunWith(SpringRunner::class)
@TransactionalDevTestConfig
class UserRepositoryTests
```

If we write tests that use JUnit Jupiter, we can reduce code duplication even further, since annotations in JUnit 5 can also be used as meta-annotations. Consider the following example:

```
@ExtendWith(SpringExtension.class)
@ContextConfiguration({"/app-config.xml", "/test-data-access-config.xml"})
@ActiveProfiles("dev")
@Transactional
class OrderRepositoryTests { }

@ExtendWith(SpringExtension.class)
@ContextConfiguration({"/app-config.xml", "/test-data-access-config.xml"})
@ActiveProfiles("dev")
@Transactional
class UserRepositoryTests { }
```

```
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@ExtendWith(SpringExtension.class)
@ContextConfiguration({"/app-config.xml", "/test-data-access-config.xml"})
@ActiveProfiles("dev")
@Transactional
public @interface TransactionalDevTestConfig { }
```

Since JUnit Jupiter supports the use of `@Test`, `@RepeatedTest`, `@ParameterizedTest`, and others as meta-annotations, you can also create custom composed annotations at the test method level.
For example, if we wish to create a composed annotation that combines the `@Test` and `@Tag` annotations from JUnit Jupiter with the `@Transactional` annotation from Spring, we could create an annotation such as the following:

```
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Transactional
@Tag("integration-test") // org.junit.jupiter.api.Tag
@Test // org.junit.jupiter.api.Test
public @interface TransactionalIntegrationTest { }
```

```
@Target(AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
@Transactional
@Tag("integration-test") // org.junit.jupiter.api.Tag
@Test // org.junit.jupiter.api.Test
annotation class TransactionalIntegrationTest
```

Then we can use that annotation on individual test methods:

```
@TransactionalIntegrationTest
void saveOrder() { }

@TransactionalIntegrationTest
void deleteOrder() { }
```

```
@TransactionalIntegrationTest
fun saveOrder() { }

@TransactionalIntegrationTest
fun deleteOrder() { }
```

See the following resources for more information about testing:

* JUnit: "A programmer-friendly testing framework for Java and the JVM". Used by the Spring Framework in its test suite and supported in the Spring TestContext Framework.
* TestNG: A testing framework inspired by JUnit with added support for test groups, data-driven testing, distributed testing, and other features. Supported in the Spring TestContext Framework.
* AssertJ: "Fluent assertions for Java", including support for Java 8 lambdas, streams, and numerous other features.
* Mock Objects: Article in Wikipedia.
* MockObjects.com: Web site dedicated to mock objects, a technique for improving the design of code within test-driven development.
* Mockito: Java mock library based on the Test Spy pattern. Used by the Spring Framework in its test suite.
* EasyMock: Java library "that provides Mock Objects for interfaces (and objects through the class extension) by generating them on the fly using Java’s proxy mechanism."
* JMock: Library that supports test-driven development of Java code with mock objects.
* DbUnit: JUnit extension (also usable with Ant and Maven) that is targeted at database-driven projects and, among other things, puts your database into a known state between test runs.
* Testcontainers: Java library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container.
* The Grinder: Java load testing framework.
* SpringMockK: Support for Spring Boot integration tests written in Kotlin using MockK instead of Mockito.

This part of the reference documentation is concerned with data access and the interaction between the data access layer and the business or service layer. Spring’s comprehensive transaction management support is covered in some detail, followed by thorough coverage of the various data access frameworks and technologies with which the Spring Framework integrates.

Comprehensive transaction support is among the most compelling reasons to use the Spring Framework. The Spring Framework provides a consistent abstraction for transaction management that delivers the following benefits:

* A consistent programming model across different transaction APIs, such as the Java Transaction API (JTA), JDBC, Hibernate, and the Java Persistence API (JPA).
* Support for declarative transaction management.
* A simpler API for programmatic transaction management than complex transaction APIs, such as JTA.
* Excellent integration with Spring’s data access abstractions.
The following sections describe the Spring Framework’s transaction features and technologies:

* Advantages of the Spring Framework’s transaction support model describes why you would use the Spring Framework’s transaction abstraction instead of EJB Container-Managed Transactions (CMT) or choosing to drive transactions through a proprietary API.
* Understanding the Spring Framework transaction abstraction outlines the core classes and describes how to configure and obtain `DataSource` instances from a variety of sources.
* Synchronizing resources with transactions describes how the application code ensures that resources are created, reused, and cleaned up properly.
* Declarative transaction management describes support for declarative transaction management.
* Programmatic transaction management covers support for programmatic (that is, explicitly coded) transaction management.
* Transaction-bound events describes how you can use application events within a transaction.

The chapter also includes discussions of best practices, application server integration, and solutions to common problems.

Traditionally, EE application developers have had two choices for transaction management: global or local transactions, both of which have profound limitations. Global and local transaction management is reviewed in the next two sections, followed by a discussion of how the Spring Framework’s transaction management support addresses the limitations of the global and local transaction models.

## Global Transactions

Global transactions let you work with multiple transactional resources, typically relational databases and message queues. The application server manages global transactions through the JTA, which is a cumbersome API (partly due to its exception model). Furthermore, a JTA `UserTransaction` normally needs to be sourced from JNDI, meaning that you also need to use JNDI in order to use JTA. The use of global transactions limits any potential reuse of application code, as JTA is normally only available in an application server environment.

Previously, the preferred way to use global transactions was through EJB CMT (Container Managed Transaction). CMT is a form of declarative transaction management (as distinguished from programmatic transaction management). EJB CMT removes the need for transaction-related JNDI lookups, although the use of EJB itself necessitates the use of JNDI. It removes most but not all of the need to write Java code to control transactions. The significant downside is that CMT is tied to JTA and an application server environment. Also, it is only available if one chooses to implement business logic in EJBs (or at least behind a transactional EJB facade). The negatives of EJB in general are so great that this is not an attractive proposition, especially in the face of compelling alternatives for declarative transaction management.

## Local Transactions

Local transactions are resource-specific, such as a transaction associated with a JDBC connection. Local transactions may be easier to use but have a significant disadvantage: they cannot work across multiple transactional resources. For example, code that manages transactions by using a JDBC connection cannot run within a global JTA transaction. Because the application server is not involved in transaction management, it cannot help ensure correctness across multiple resources. (It is worth noting that most applications use a single transaction resource.)
Another downside is that local transactions are invasive to the programming model.

## Spring Framework’s Consistent Programming Model

Spring resolves the disadvantages of global and local transactions. It lets application developers use a consistent programming model in any environment. You write your code once, and it can benefit from different transaction management strategies in different environments. The Spring Framework provides both declarative and programmatic transaction management. Most users prefer declarative transaction management, which we recommend in most cases.

With programmatic transaction management, developers work with the Spring Framework transaction abstraction, which can run over any underlying transaction infrastructure. With the preferred declarative model, developers typically write little or no code related to transaction management and, hence, do not depend on the Spring Framework transaction API or any other transaction API.

The key to the Spring transaction abstraction is the notion of a transaction strategy. A transaction strategy is defined by a `TransactionManager`, specifically the `org.springframework.transaction.PlatformTransactionManager` interface for imperative transaction management and the `org.springframework.transaction.ReactiveTransactionManager` interface for reactive transaction management. The following listing shows the definition of the `PlatformTransactionManager` API:

```
public interface PlatformTransactionManager extends TransactionManager {

    TransactionStatus getTransaction(TransactionDefinition definition) throws TransactionException;

    void commit(TransactionStatus status) throws TransactionException;

    void rollback(TransactionStatus status) throws TransactionException;
}
```

This is primarily a service provider interface (SPI), although you can use it programmatically from your application code. Because `PlatformTransactionManager` is an interface, it can be easily mocked or stubbed as necessary. It is not tied to a lookup strategy, such as JNDI. `PlatformTransactionManager` implementations are defined like any other object (or bean) in the Spring Framework IoC container. This benefit alone makes Spring Framework transactions a worthwhile abstraction, even when you work with JTA. You can test transactional code much more easily than if it used JTA directly.

Again, in keeping with Spring’s philosophy, the `TransactionException` that can be thrown by any of the `PlatformTransactionManager` interface’s methods is unchecked (that is, it extends the `java.lang.RuntimeException` class). Transaction infrastructure failures are almost invariably fatal. In rare cases where application code can actually recover from a transaction failure, the application developer can still choose to catch and handle `TransactionException`. The salient point is that developers are not forced to do so.

The `getTransaction(..)` method returns a `TransactionStatus` object, depending on a `TransactionDefinition` parameter. The returned `TransactionStatus` might represent a new transaction or can represent an existing transaction, if a matching transaction exists in the current call stack. The implication in this latter case is that, as with Jakarta EE transaction contexts, a `TransactionStatus` is associated with a thread of execution.

As of Spring Framework 5.2, Spring also provides a transaction management abstraction for reactive applications that make use of reactive types or Kotlin Coroutines.
The following listing shows the transaction strategy defined by `ReactiveTransactionManager`:

```
public interface ReactiveTransactionManager extends TransactionManager {

    Mono<ReactiveTransaction> getReactiveTransaction(TransactionDefinition definition) throws TransactionException;

    Mono<Void> commit(ReactiveTransaction status) throws TransactionException;

    Mono<Void> rollback(ReactiveTransaction status) throws TransactionException;
}
```

The reactive transaction manager is primarily a service provider interface (SPI), although you can use it programmatically from your application code. Because `ReactiveTransactionManager` is an interface, it can be easily mocked or stubbed as necessary.

The `TransactionDefinition` interface specifies:

* Propagation: Typically, all code within a transaction scope runs in that transaction. However, you can specify the behavior if a transactional method is run when a transaction context already exists. For example, code can continue running in the existing transaction (the common case), or the existing transaction can be suspended and a new transaction created. Spring offers all of the transaction propagation options familiar from EJB CMT. To read about the semantics of transaction propagation in Spring, see Transaction Propagation.
* Isolation: The degree to which this transaction is isolated from the work of other transactions. For example, can this transaction see uncommitted writes from other transactions?
* Timeout: How long this transaction runs before timing out and being automatically rolled back by the underlying transaction infrastructure.
* Read-only status: You can use a read-only transaction when your code reads but does not modify data. Read-only transactions can be a useful optimization in some cases, such as when you use Hibernate.

These settings reflect standard transactional concepts. If necessary, refer to resources that discuss transaction isolation levels and other core transaction concepts. Understanding these concepts is essential to using the Spring Framework or any transaction management solution.

The `TransactionStatus` interface provides a simple way for transactional code to control transaction execution and query transaction status. The concepts should be familiar, as they are common to all transaction APIs. The following listing shows the `TransactionStatus` interface:

```
public interface TransactionStatus extends TransactionExecution, SavepointManager, Flushable {

    @Override
    boolean isNewTransaction();

    boolean hasSavepoint();

    @Override
    void setRollbackOnly();

    @Override
    boolean isRollbackOnly();

    void flush();

    @Override
    boolean isCompleted();
}
```

Regardless of whether you opt for declarative or programmatic transaction management in Spring, defining the correct `TransactionManager` implementation is absolutely essential. You typically define this implementation through dependency injection.

`TransactionManager` implementations normally require knowledge of the environment in which they work: JDBC, JTA, Hibernate, and so on. The following examples show how you can define a local `PlatformTransactionManager` implementation (in this case, with plain JDBC).

You can define a JDBC `DataSource` by creating a bean similar to the following:

```
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${jdbc.driverClassName}" />
    <property name="url" value="${jdbc.url}" />
    <property name="username" value="${jdbc.username}" />
    <property name="password" value="${jdbc.password}" />
</bean>
```

The related `PlatformTransactionManager` bean definition then has a reference to the `DataSource` definition.
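A minimal sketch of such a definition, assuming the standard `DataSourceTransactionManager` from `spring-jdbc` and the `dataSource` bean shown above:

```
<bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <!-- references the dataSource bean defined in the previous example -->
    <property name="dataSource" ref="dataSource"/>
</bean>
```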
If you use JTA in a Jakarta EE container, then you use a container `DataSource`, obtained through JNDI, in conjunction with Spring’s `JtaTransactionManager`. The following example shows what the JTA and JNDI lookup version would look like:

```
<jee:jndi-lookup id="dataSource" jndi-name="jdbc/jpetstore"/>

<bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>
```

The `JtaTransactionManager` does not need to know about the `DataSource` (or any other specific resources) because it uses the container’s global transaction management infrastructure.

The preceding definition of the `dataSource` bean uses the `<jndi-lookup/>` tag from the `jee` namespace.

If you use JTA, your transaction manager definition should look the same, regardless of what data access technology you use, be it JDBC, Hibernate JPA, or any other supported technology. This is due to the fact that JTA transactions are global transactions, which can enlist any transactional resource.

In all Spring transaction setups, application code does not need to change. You can change how transactions are managed merely by changing configuration, even if that change means moving from local to global transactions or vice versa.

## Hibernate Transaction Setup

You can also easily use Hibernate local transactions, as shown in the following examples. In this case, you need to define a Hibernate `LocalSessionFactoryBean`, which your application code can use to obtain Hibernate `Session` instances.

The `DataSource` bean definition is similar to the local JDBC example shown previously and, thus, is not shown in the following example.

The `txManager` bean in this case is of the `HibernateTransactionManager` type. In the same way as the `DataSourceTransactionManager` needs a reference to the `DataSource`, the `HibernateTransactionManager` needs a reference to the `SessionFactory`. The following example declares a `txManager` bean that references the `sessionFactory` bean:

```
<bean id="txManager" class="org.springframework.orm.hibernate5.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>
```

If you use Hibernate and Jakarta EE container-managed JTA transactions, you should use the same `JtaTransactionManager` as in the previous JTA example for JDBC. Also, it is recommended to make Hibernate aware of JTA through its transaction coordinator and possibly also its connection release mode configuration. Alternatively, you may pass the `JtaTransactionManager` into your `LocalSessionFactoryBean` for enforcing the same defaults.

How to create different transaction managers and how they are linked to related resources that need to be synchronized to transactions (for example, to a JDBC `DataSource`, to a Hibernate `SessionFactory`, and so forth) should now be clear. This section describes how the application code (directly or indirectly, by using a persistence API such as JDBC, Hibernate, or JPA) ensures that these resources are created, reused, and cleaned up properly. The section also discusses how transaction synchronization is (optionally) triggered through the relevant `TransactionManager`.

## High-level Synchronization Approach

The preferred approach is to use Spring’s highest-level template-based persistence integration APIs or to use native ORM APIs with transaction-aware factory beans or proxies for managing the native resource factories. These transaction-aware solutions internally handle resource creation and reuse, cleanup, optional transaction synchronization of the resources, and exception mapping. Thus, user data access code does not have to address these tasks but can focus purely on non-boilerplate persistence logic.
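For example, the following minimal sketch (a hypothetical `AccountRepository`, assuming a `JdbcTemplate` built from a Spring-managed `DataSource`) shows what such code looks like; all connection handling happens inside the template:

```
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical repository. The JdbcTemplate obtains its Connection through the
// Spring-managed DataSource, so both statements participate in the same
// transaction without any explicit resource handling in this code.
public class AccountRepository {

    private final JdbcTemplate jdbcTemplate;

    public AccountRepository(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Transactional
    public void debit(long accountId, long amount) {
        jdbcTemplate.update("UPDATE account SET balance = balance - ? WHERE id = ?", amount, accountId);
        jdbcTemplate.update("INSERT INTO account_audit(account_id, delta) VALUES (?, ?)", accountId, -amount);
    }
}
```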
Generally, you use the native ORM API or take a template approach for JDBC access by using the `JdbcTemplate`. These solutions are detailed in subsequent sections of this reference documentation.

## Low-level Synchronization Approach

Classes such as `DataSourceUtils` (for JDBC), `EntityManagerFactoryUtils` (for JPA), `SessionFactoryUtils` (for Hibernate), and so on exist at a lower level. When you want the application code to deal directly with the resource types of the native persistence APIs, you use these classes to ensure that proper Spring Framework-managed instances are obtained, transactions are (optionally) synchronized, and exceptions that occur in the process are properly mapped to a consistent API.

For example, in the case of JDBC, instead of the traditional JDBC approach of calling the `getConnection()` method on the `DataSource`, you can instead use Spring’s `org.springframework.jdbc.datasource.DataSourceUtils` class, as follows:

```
Connection conn = DataSourceUtils.getConnection(dataSource);
```

If an existing transaction already has a connection synchronized (linked) to it, that instance is returned. Otherwise, the method call triggers the creation of a new connection, which is (optionally) synchronized to any existing transaction and made available for subsequent reuse in that same transaction. As mentioned earlier, any `SQLException` is wrapped in a Spring Framework `CannotGetJdbcConnectionException`, one of the Spring Framework’s hierarchy of unchecked `DataAccessException` types. This approach gives you more information than can be obtained easily from the `SQLException` and ensures portability across databases and even across different persistence technologies.

This approach also works without Spring transaction management (transaction synchronization is optional), so you can use it whether or not you use Spring for transaction management.

Of course, once you have used Spring’s JDBC support, JPA support, or Hibernate support, you generally prefer not to use `DataSourceUtils` or the other helper classes, because you are much happier working through the Spring abstraction than directly with the relevant APIs. For example, if you use the Spring `JdbcTemplate` or `jdbc.object` package to simplify your use of JDBC, correct connection retrieval occurs behind the scenes and you need not write any special code.

At the very lowest level exists the `TransactionAwareDataSourceProxy` class. This is a proxy for a target `DataSource`, which wraps the target `DataSource` to add awareness of Spring-managed transactions. In this respect, it is similar to a transactional JNDI `DataSource`, as provided by a Jakarta EE server.

You should almost never need or want to use this class, except when existing code must be called and passed a standard JDBC `DataSource` interface implementation. In that case, it is possible that this code is usable but is participating in Spring-managed transactions. You can write your new code by using the higher-level abstractions mentioned earlier.

Most Spring Framework users choose declarative transaction management. This option has the least impact on application code and, hence, is most consistent with the ideals of a non-invasive lightweight container.

The Spring Framework’s declarative transaction management is made possible with Spring aspect-oriented programming (AOP).
However, as the transactional aspects code comes with the Spring Framework distribution and may be used in a boilerplate fashion, AOP concepts do not generally have to be understood to make effective use of this code.

The Spring Framework’s declarative transaction management is similar to EJB CMT, in that you can specify transaction behavior (or lack of it) down to the individual method level. You can make a `setRollbackOnly()` call within a transaction context, if necessary. The differences between the two types of transaction management are:

* Unlike EJB CMT, which is tied to JTA, the Spring Framework’s declarative transaction management works in any environment. It can work with JTA transactions or local transactions by using JDBC, JPA, or Hibernate by adjusting the configuration files.
* You can apply the Spring Framework declarative transaction management to any class, not merely special classes such as EJBs.
* The Spring Framework offers declarative rollback rules, a feature with no EJB equivalent. Both programmatic and declarative support for rollback rules is provided.
* The Spring Framework lets you customize transactional behavior by using AOP. For example, you can insert custom behavior in the case of transaction rollback. You can also add arbitrary advice, along with transactional advice. With EJB CMT, you cannot influence the container’s transaction management, except with `setRollbackOnly()`.
* The Spring Framework does not support propagation of transaction contexts across remote calls, as high-end application servers do. If you need this feature, we recommend that you use EJB. However, consider carefully before using such a feature, because, normally, one does not want transactions to span remote calls.

The concept of rollback rules is important. They let you specify which exceptions (and throwables) should cause automatic rollback. You can specify this declaratively, in configuration, not in Java code. So, although you can still call `setRollbackOnly()` on the `TransactionStatus` object to roll the current transaction back, most often you can specify a rule that `MyApplicationException` must always result in rollback. The significant advantage to this option is that business objects do not depend on the transaction infrastructure. For example, they typically do not need to import Spring transaction APIs or other Spring APIs.

Although EJB container default behavior automatically rolls back the transaction on a system exception (usually a runtime exception), EJB CMT does not roll back the transaction automatically on an application exception (that is, a checked exception other than `java.rmi.RemoteException`). While the Spring default behavior for declarative transaction management follows EJB convention (roll back is automatic only on unchecked exceptions), it is often useful to customize this behavior.

It is not sufficient merely to tell you to annotate your classes with the `@Transactional` annotation, add `@EnableTransactionManagement` to your configuration, and expect you to understand how it all works. To provide a deeper understanding, this section explains the inner workings of the Spring Framework’s declarative transaction infrastructure in the context of transaction-related issues.

The most important concepts to grasp with regard to the Spring Framework’s declarative transaction support are that this support is enabled via AOP proxies and that the transactional advice is driven by metadata (currently XML- or annotation-based).
The combination of AOP with transactional metadata yields an AOP proxy that uses a `TransactionInterceptor` in conjunction with an appropriate `TransactionManager` implementation to drive transactions around method invocations. Spring AOP is covered in the AOP section.

Spring Framework’s `TransactionInterceptor` provides transaction management for imperative and reactive programming models. The interceptor detects the desired flavor of transaction management by inspecting the method return type. Methods returning a reactive type such as `Publisher` or Kotlin `Flow` (or a subtype of those) qualify for reactive transaction management. All other return types, including `void`, use the code path for imperative transaction management.

Transaction management flavors impact which transaction manager is required. Imperative transactions require a `PlatformTransactionManager`, while reactive transactions use `ReactiveTransactionManager` implementations.

The following image shows a conceptual view of calling a method on a transactional proxy.

Consider the following interface and its attendant implementation. This example uses `Foo` and `Bar` classes as placeholders so that you can concentrate on the transaction usage without focusing on a particular domain model. For the purposes of this example, the fact that the `DefaultFooService` class throws `UnsupportedOperationException` instances in the body of each implemented method is good. That behavior lets you see transactions being created and then rolled back in response to the `UnsupportedOperationException` instance. The following listing shows the `FooService` interface:

```
// the service interface that we want to make transactional
package x.y.service;

public interface FooService {

    Foo getFoo(String fooName);

    Foo getFoo(String fooName, String barName);

    void insertFoo(Foo foo);

    void updateFoo(Foo foo);
}
```

```
// the service interface that we want to make transactional
package x.y.service

interface FooService {

    fun getFoo(fooName: String): Foo

    fun getFoo(fooName: String, barName: String): Foo

    fun insertFoo(foo: Foo)

    fun updateFoo(foo: Foo)
}
```

Assume that the first two methods of the `FooService` interface, `getFoo(String)` and `getFoo(String, String)`, must run in the context of a transaction with read-only semantics and that the other methods, `insertFoo(Foo)` and `updateFoo(Foo)`, must run in the context of a transaction with read-write semantics. The following configuration is explained in detail in the next few paragraphs:

```
<!-- this is the service object that we want to make transactional -->
<bean id="fooService" class="x.y.service.DefaultFooService"/>

<!-- the transactional advice (what 'happens'; see the <aop:advisor/> bean below) -->
<tx:advice id="txAdvice" transaction-manager="txManager">
    <!-- the transactional semantics... -->
    <tx:attributes>
        <!-- all methods starting with 'get' are read-only -->
        <tx:method name="get*" read-only="true"/>
        <!-- other methods use the default transaction settings (see below) -->
        <tx:method name="*"/>
    </tx:attributes>
</tx:advice>

<!-- ensure that the above transactional advice runs for any execution
    of an operation defined by the FooService interface -->
<aop:config>
    <aop:pointcut id="fooServiceOperation" expression="execution(* x.y.service.FooService.*(..))"/>
    <aop:advisor advice-ref="txAdvice" pointcut-ref="fooServiceOperation"/>
</aop:config>

<!-- don't forget the DataSource -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
    <property name="url" value="jdbc:oracle:thin:@rj-t42:1521:elvis"/>
    <property name="username" value="scott"/>
    <property name="password" value="tiger"/>
</bean>

<!-- similarly, don't forget the TransactionManager -->
<bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>
```

Examine the preceding configuration.
It assumes that you want to make a service object, the `fooService` bean, transactional. The transaction semantics to apply are encapsulated in the `<tx:advice/>` definition. The `<tx:advice/>` definition reads as "all methods starting with `get` are to run in the context of a read-only transaction, and all other methods are to run with the default transaction semantics". The `transaction-manager` attribute of the `<tx:advice/>` tag is set to the name of the `TransactionManager` bean that is going to drive the transactions (in this case, the `txManager` bean).

The `<aop:config/>` definition ensures that the transactional advice defined by the `txAdvice` bean runs at the appropriate points in the program. First, you define a pointcut that matches the execution of any operation defined in the `FooService` interface (`fooServiceOperation`). Then you associate the pointcut with the `txAdvice` by using an advisor. The result indicates that, at the execution of a `fooServiceOperation`, the advice defined by `txAdvice` is run.

The expression defined within the `<aop:pointcut/>` element is an AspectJ pointcut expression. See the AOP section for more details on pointcut expressions in Spring.

A common requirement is to make an entire service layer transactional. The best way to do this is to change the pointcut expression to match any operation in your service layer. The following example shows how to do so:

```
<aop:config>
    <aop:pointcut id="fooServiceMethods" expression="execution(* x.y.service.*.*(..))"/>
    <aop:advisor advice-ref="txAdvice" pointcut-ref="fooServiceMethods"/>
</aop:config>
```

In the preceding example, it is assumed that all your service interfaces are defined in the `x.y.service` package.

Now that we have analyzed the configuration, you may be asking yourself, "What does all this configuration actually do?"

The configuration shown earlier is used to create a transactional proxy around the object that is created from the `fooService` bean definition. The proxy is configured with the transactional advice so that, when an appropriate method is invoked on the proxy, a transaction is started, suspended, marked as read-only, and so on, depending on the transaction configuration associated with that method. Consider the following program that test drives the configuration shown earlier:

```
public final class Boot {

    public static void main(final String[] args) throws Exception {
        ApplicationContext ctx = new ClassPathXmlApplicationContext("context.xml");
        FooService fooService = ctx.getBean(FooService.class);
        fooService.insertFoo(new Foo());
    }
}
```

```
fun main() {
    val ctx = ClassPathXmlApplicationContext("context.xml")
    val fooService = ctx.getBean<FooService>("fooService")
    fooService.insertFoo(Foo())
}
```

The output from running the preceding program should resemble the following (the Log4J output and the stack trace from the `UnsupportedOperationException` thrown by the `insertFoo(..)` method of the `DefaultFooService` class have been truncated for clarity):

```
<!-- the Spring container is starting up... -->
[AspectJInvocationContextExposingAdvisorAutoProxyCreator] - Creating implicit proxy for bean 'fooService' with 0 common interceptors and 1 specific interceptors

<!-- the DefaultFooService is actually proxied -->
[JdkDynamicAopProxy] - Creating JDK dynamic proxy for [x.y.service.DefaultFooService]

<!-- ... the insertFoo(..) method is now being invoked on the proxy -->
[TransactionInterceptor] - Getting transaction for x.y.service.FooService.insertFoo

<!-- the transactional advice kicks in here...
-->
[DataSourceTransactionManager] - Creating new transaction with name [x.y.service.FooService.insertFoo]
[DataSourceTransactionManager] - Acquired Connection [org.apache.commons.dbcp.PoolableConnection@a53de4] for JDBC transaction

<!-- the insertFoo(..) method from DefaultFooService throws an exception... -->
[RuleBasedTransactionAttribute] - Applying rules to determine whether transaction should rollback on java.lang.UnsupportedOperationException
[TransactionInterceptor] - Invoking rollback for transaction on x.y.service.FooService.insertFoo due to throwable [java.lang.UnsupportedOperationException]

<!-- and the transaction is rolled back (by default, RuntimeException instances cause rollback) -->
[DataSourceTransactionManager] - Rolling back JDBC transaction on Connection [org.apache.commons.dbcp.PoolableConnection@a53de4]
[DataSourceTransactionManager] - Releasing JDBC Connection after transaction
[DataSourceUtils] - Returning JDBC Connection to DataSource

Exception in thread "main" java.lang.UnsupportedOperationException
at x.y.service.DefaultFooService.insertFoo(DefaultFooService.java:14)
<!-- AOP infrastructure stack trace elements removed for clarity -->
at $Proxy0.insertFoo(Unknown Source)
at Boot.main(Boot.java:11)
```

To use reactive transaction management, the code has to use reactive types.

Spring Framework uses the `ReactiveAdapterRegistry` to determine whether a method return type is reactive.

The following listing shows a modified version of the previously used `FooService`, but this time the code uses reactive types:

```
// the reactive service interface that we want to make transactional
package x.y.service;

public interface FooService {

    Flux<Foo> getFoo(String fooName);

    Publisher<Foo> getFoo(String fooName, String barName);

    Mono<Void> insertFoo(Foo foo);

    Mono<Void> updateFoo(Foo foo);
}
```

```
// the reactive service interface that we want to make transactional
package x.y.service

interface FooService {

    fun getFoo(fooName: String): Flow<Foo>

    fun getFoo(fooName: String, barName: String): Publisher<Foo>

    fun insertFoo(foo: Foo): Mono<Void>

    fun updateFoo(foo: Foo): Mono<Void>
}
```

An implementation, such as `DefaultFooService`, then overrides these methods with reactive signatures, as in the following fragment:

```
override fun getFoo(fooName: String, barName: String): Publisher<Foo> {
    // ...
}
```

Imperative and reactive transaction management share the same semantics for transaction boundary and transaction attribute definitions. The main difference between imperative and reactive transactions is the deferred nature of the latter. `TransactionInterceptor` decorates the returned reactive type with a transactional operator to begin and clean up the transaction. Therefore, calling a transactional reactive method defers the actual transaction management to a subscription type that activates processing of the reactive type.

Another aspect of reactive transaction management relates to data escaping, which is a natural consequence of the programming model.

Method return values of imperative transactions are returned from transactional methods upon successful termination of a method, so that partially computed results do not escape the method closure. Reactive transaction methods return a reactive wrapper type, which represents a computation sequence along with a promise to begin and complete the computation.

A `Publisher` can emit data while a transaction is ongoing but not necessarily completed. Therefore, methods that depend upon successful completion of an entire transaction need to ensure completion and buffer results in the calling code.

The previous section outlined the basics of how to specify transactional settings for classes, typically service layer classes, declaratively in your application. This section describes how you can control the rollback of transactions in a simple, declarative fashion in XML configuration. For details on controlling rollback semantics declaratively with the `@Transactional` annotation, see `@Transactional` Settings.
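For comparison, the annotation-based counterpart can declare the same kind of rule directly on the method. A minimal sketch, with `NoProductInStockException` standing in for an application-specific checked exception (as in the XML examples that follow):

```
import org.springframework.transaction.annotation.Transactional;

public class ProductService {

    // Checked exceptions do not trigger a rollback by default;
    // rollbackFor declares that this one must.
    @Transactional(rollbackFor = NoProductInStockException.class)
    public void reserveProduct(String sku) throws NoProductInStockException {
        // ... business logic that may throw NoProductInStockException ...
    }
}
```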
The recommended way to indicate to the Spring Framework’s transaction infrastructure that a transaction’s work is to be rolled back is to throw an `Exception` from code that is currently executing in the context of a transaction. The Spring Framework’s transaction infrastructure code catches any unhandled `Exception` as it bubbles up the call stack and makes a determination whether to mark the transaction for rollback.

In its default configuration, the Spring Framework’s transaction infrastructure code marks a transaction for rollback only in the case of runtime, unchecked exceptions. That is, when the thrown exception is an instance or subclass of `RuntimeException`. (`Error` instances also, by default, result in a rollback.)

As of Spring Framework 5.2, the default configuration also provides support for Vavr’s `Try` method to trigger transaction rollbacks when it returns a 'Failure'. This allows you to handle functional-style errors using `Try` and have the transaction automatically rolled back in case of a failure. For more information on Vavr’s `Try`, refer to the [official Vavr documentation](https://www.vavr.io/vavr-docs/#_try). Here’s an example of how to use Vavr’s `Try` with a transactional method:

```
@Transactional
public Try<String> myTransactionalMethod() {
    // If myDataAccessOperation throws an exception, it will be caught by the
    // Try instance created with Try.of() and wrapped inside the Failure class
    // which can be checked using the isFailure() method on the Try instance.
    return Try.of(delegate::myDataAccessOperation);
}
```

Checked exceptions that are thrown from a transactional method do not result in a rollback in the default configuration. You can configure exactly which `Exception` types mark a transaction for rollback, including checked exceptions, by specifying rollback rules.

The following XML snippet demonstrates how to configure rollback for a checked, application-specific `Exception` type by supplying an exception pattern via the `rollback-for` attribute:

```
<tx:advice id="txAdvice" transaction-manager="txManager">
    <tx:attributes>
        <tx:method name="get*" read-only="true" rollback-for="NoProductInStockException"/>
        <tx:method name="*"/>
    </tx:attributes>
</tx:advice>
```

If you do not want a transaction rolled back when an exception is thrown, you can also specify 'no rollback' rules. The following example tells the Spring Framework’s transaction infrastructure to commit the attendant transaction even in the face of an unhandled `InstrumentNotFoundException`:

```
<tx:advice id="txAdvice">
    <tx:attributes>
        <tx:method name="updateStock" no-rollback-for="InstrumentNotFoundException"/>
        <tx:method name="*"/>
    </tx:attributes>
</tx:advice>
```

When the Spring Framework’s transaction infrastructure catches an exception and consults the configured rollback rules to determine whether to mark the transaction for rollback, the strongest matching rule wins. So, in the case of the following configuration, any exception other than an `InstrumentNotFoundException` results in a rollback of the attendant transaction:

```
<tx:advice id="txAdvice">
    <tx:attributes>
        <tx:method name="*" rollback-for="Throwable" no-rollback-for="InstrumentNotFoundException"/>
    </tx:attributes>
</tx:advice>
```

You can also indicate a required rollback programmatically. Although simple, this process is quite invasive and tightly couples your code to the Spring Framework’s transaction infrastructure. The following example shows how to programmatically indicate a required rollback:

```
public void resolvePosition() {
    try {
        // some business logic...
    } catch (NoProductInStockException ex) {
        // trigger rollback programmatically
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
    }
}
```

```
fun resolvePosition() {
    try {
        // some business logic...
    } catch (ex: NoProductInStockException) {
        // trigger rollback programmatically
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly()
    }
}
```

You are strongly encouraged to use the declarative approach to rollback, if at all possible. Programmatic rollback is available should you absolutely need it, but its usage flies in the face of achieving a clean POJO-based architecture.

Consider the scenario where you have a number of service layer objects, and you want to apply a totally different transactional configuration to each of them. You can do so by defining distinct `<aop:advisor/>` elements with differing `pointcut` and `advice-ref` attribute values.

As a point of comparison, first assume that all of your service layer classes are defined in a root `x.y.service` package. To make all beans that are instances of classes defined in that package (or in subpackages) and that have names ending in `Service` have the default transactional configuration, you could write the following:

```
<aop:config>

    <aop:pointcut id="serviceOperation"
            expression="execution(* x.y.service..*Service.*(..))"/>

    <aop:advisor pointcut-ref="serviceOperation" advice-ref="txAdvice"/>

</aop:config>

<!-- these two beans will be transactional... -->
<bean id="fooService" class="x.y.service.DefaultFooService"/>
<bean id="barService" class="x.y.service.extras.SimpleBarService"/>

<!-- ... and these two beans won't -->
<bean id="anotherService" class="org.xyz.SomeService"/> <!-- (not in the right package) -->
<bean id="barManager" class="x.y.service.SimpleBarManager"/> <!-- (doesn't end in 'Service') -->

<tx:advice id="txAdvice">
    <tx:attributes>
        <tx:method name="get*" read-only="true"/>
        <tx:method name="*"/>
    </tx:attributes>
</tx:advice>
```

The following example shows how to configure two distinct beans with totally different transactional settings:

```
<aop:config>

    <aop:pointcut id="defaultServiceOperation"
            expression="execution(* x.y.service.*Service.*(..))"/>

    <aop:pointcut id="noTxServiceOperation"
            expression="execution(* x.y.service.ddl.DefaultDdlManager.*(..))"/>

    <aop:advisor pointcut-ref="defaultServiceOperation" advice-ref="defaultTxAdvice"/>

    <aop:advisor pointcut-ref="noTxServiceOperation" advice-ref="noTxAdvice"/>

</aop:config>

<!-- this bean will be transactional (see the 'defaultServiceOperation' pointcut) -->
<bean id="fooService" class="x.y.service.DefaultFooService"/>

<!-- this bean will also be transactional, but with totally different transactional settings -->
<bean id="anotherFooService" class="x.y.service.ddl.DefaultDdlManager"/>

<tx:advice id="defaultTxAdvice">
    <tx:attributes>
        <tx:method name="get*" read-only="true"/>
        <tx:method name="*"/>
    </tx:attributes>
</tx:advice>

<tx:advice id="noTxAdvice">
    <tx:attributes>
        <tx:method name="*" propagation="NEVER"/>
    </tx:attributes>
</tx:advice>
```

This section summarizes the various transactional settings that you can specify by using the `<tx:advice/>` tag. The default `<tx:advice/>` settings are:

* The propagation setting is `REQUIRED`.
* The isolation level is `DEFAULT`.
* Any `RuntimeException` triggers rollback, and any checked `Exception` does not.

You can change these default settings. The following table summarizes the various attributes of the `<tx:method/>` tags that are nested within `<tx:advice/>` and `<tx:attributes/>` tags:
| Attribute | Required? | Default | Description |
| --- | --- | --- | --- |
| `name` | Yes | | Method name(s) with which the transaction attributes are to be associated. The wildcard (*) character can be used to associate the same transaction attribute settings with a number of methods (for example, `get*`, `handle*`, `on*Event`, and so forth). |
| `propagation` | No | `REQUIRED` | Transaction propagation behavior. |
| `isolation` | No | `DEFAULT` | Transaction isolation level. Only applicable to propagation settings of `REQUIRED` or `REQUIRES_NEW`. |
| `timeout` | No | `-1` | Transaction timeout (seconds). Only applicable to propagation settings of `REQUIRED` or `REQUIRES_NEW`. |
| `read-only` | No | `false` | Read-write versus read-only transaction. Only applicable to `REQUIRED` or `REQUIRES_NEW`. |
| `rollback-for` | No | | Comma-delimited list of `Exception` instances that trigger rollback. |
| `no-rollback-for` | No | | Comma-delimited list of `Exception` instances that do not trigger rollback. |

# Using `@Transactional`

In addition to the XML-based declarative approach to transaction configuration, you can use an annotation-based approach. Declaring transaction semantics directly in the Java source code puts the declarations much closer to the affected code. There is not much danger of undue coupling, because code that is meant to be used transactionally is almost always deployed that way anyway.

The standard `jakarta.transaction.Transactional` annotation is also supported as a drop-in replacement to Spring’s own annotation.

The ease-of-use afforded by the use of the `@Transactional` annotation is best illustrated with an example, which is explained in the text that follows. Consider a class definition annotated with `@Transactional` at the class level. Used at the class level, the annotation indicates a default for all methods of the declaring class (as well as its subclasses). Alternatively, each method can be annotated individually. See Method visibility and `@Transactional` in proxy mode for further details on which methods Spring considers transactional. Note that a class-level annotation does not apply to ancestor classes up the class hierarchy; in such a scenario, inherited methods need to be locally redeclared in order to participate in a subclass-level annotation.

When a POJO class such as the one above is defined as a bean in a Spring context, you can make the bean instance transactional through an `@EnableTransactionManagement` annotation in a `@Configuration` class. See the `@EnableTransactionManagement` javadoc for full details.

In XML configuration, the `<tx:annotation-driven/>` tag provides similar convenience:

```
<!-- enable the configuration of transactional behavior based on annotations -->
<!-- a TransactionManager is still required -->
<tx:annotation-driven transaction-manager="txManager"/> (1)

<bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <!-- (this dependency is defined somewhere else) -->
    <property name="dataSource" ref="dataSource"/>
</bean>
```

(1) The line that makes the bean instance transactional.

Reactive transactional methods use reactive return types, in contrast to imperative programming arrangements, as the following listing shows:

```
@Override
public Mono<Foo> getFoo(String fooName, String barName) {
    // ...
}
```

```
override fun getFoo(fooName: String, barName: String): Mono<Foo> {
    // ...
}
```

Note that there are special considerations for the returned `Publisher` with regards to Reactive Streams cancellation signals. See the Cancel Signals section under "Using the TransactionalOperator" for more details.

Method visibility and `@Transactional` in proxy mode

The `@Transactional` annotation is typically used on methods with `public` visibility. As of 6.0, `protected` and package-visible methods can also be made transactional for class-based proxies by default. If you prefer consistent treatment of method visibility across the different kinds of proxies (which was the default up until 5.3), consider specifying `publicMethodsOnly`. The Spring TestContext Framework supports non-private `@Transactional` test methods by default.

You can apply the `@Transactional` annotation to an interface definition, a method on an interface, a class definition, or a method on a class. However, the mere presence of the `@Transactional` annotation is not enough to activate the transactional behavior. The `@Transactional` annotation is merely metadata that can be consumed by corresponding runtime infrastructure which uses that metadata to configure the appropriate beans with transactional behavior. In the preceding example, the `<tx:annotation-driven/>` element switches on actual transaction management at runtime.

The Spring team recommends that you annotate methods of concrete classes with the `@Transactional` annotation, rather than relying on annotated methods in interfaces.

Consider using AspectJ mode (see the `mode` attribute of `@EnableTransactionManagement` and `<tx:annotation-driven/>`) if you expect self-invocations to be wrapped with transactions as well.
In this case, there is no proxy in the first place. Instead, the target class is woven (that is, its byte code is modified) to support `@Transactional` runtime behavior on any kind of method.

`@EnableTransactionManagement` and `<tx:annotation-driven/>` look for `@Transactional` only on beans in the same application context in which they are defined. This means that, if you put annotation-driven configuration in a `WebApplicationContext` for a `DispatcherServlet`, it checks for `@Transactional` beans only in your controllers and not in your services. See MVC for more information.

The most derived location takes precedence when evaluating the transactional settings for a method. In the case of the following example, the `DefaultFooService` class is annotated at the class level with the settings for a read-only transaction, but the `@Transactional` annotation on the `updateFoo(Foo)` method in the same class takes precedence over the transactional settings defined at the class level.

```
@Transactional(readOnly = true)
public class DefaultFooService implements FooService {

    // these settings have precedence for this method
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    public void updateFoo(Foo foo) {
        // ...
    }
}
```

```
@Transactional(readOnly = true)
class DefaultFooService : FooService {

    // these settings have precedence for this method
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    override fun updateFoo(foo: Foo) {
        // ...
    }
}
```

## `@Transactional` Settings

The `@Transactional` annotation is metadata that specifies that an interface, class, or method must have transactional semantics (for example, "start a brand new read-only transaction when this method is invoked, suspending any existing transaction"). The default `@Transactional` settings are as follows:

* The propagation setting is `PROPAGATION_REQUIRED`.
* The isolation level is `ISOLATION_DEFAULT`.
* Any `RuntimeException` or `Error` triggers rollback, and any checked `Exception` does not.

You can change these default settings. The following table summarizes the various properties of the `@Transactional` annotation:

| Property | Type | Description |
| --- | --- | --- |
| `value` | `String` | Optional qualifier that specifies the transaction manager to be used. |
| `transactionManager` | `String` | Alias for `value`. |
| `label` | Array of `String` labels | Labels to describe a transaction; labels may be evaluated by transaction managers to associate implementation-specific behavior with the actual transaction. |
| `propagation` | `enum`: `Propagation` | Optional propagation setting. |
| `isolation` | `enum`: `Isolation` | Optional isolation level. Only applicable to propagation values of `REQUIRED` or `REQUIRES_NEW`. |
| `timeout` | `int` (in seconds) | Optional transaction timeout. Only applicable to propagation values of `REQUIRED` or `REQUIRES_NEW`. |
| `timeoutString` | `String` (in seconds) | Alternative for specifying the `timeout` as a `String` value (for example, as a placeholder). |
| `readOnly` | `boolean` | Read-write versus read-only transaction. Only applicable to values of `REQUIRED` or `REQUIRES_NEW`. |
| `rollbackFor` | Array of `Class` objects, which must be derived from `Throwable` | Optional array of exception types that must cause rollback. |
| `rollbackForClassName` | Array of exception name patterns | Optional array of exception name patterns that must cause rollback. |
| `noRollbackFor` | Array of `Class` objects, which must be derived from `Throwable` | Optional array of exception types that must not cause rollback. |
| `noRollbackForClassName` | Array of exception name patterns | Optional array of exception name patterns that must not cause rollback. |

See Rollback rules for further details on rollback rule semantics, patterns, and warnings regarding possible unintentional matches for pattern-based rollback rules.

Currently, you cannot have explicit control over the name of a transaction, where 'name' means the transaction name that appears in a transaction monitor and in logging output. For declarative transactions, the transaction name is always the fully-qualified class name + `.` + the method name of the transactionally advised class. For example, if the `handlePayment(..)` method of the `BusinessService` class started a transaction, the name of the transaction would be: `com.example.BusinessService.handlePayment`.

## Multiple Transaction Managers with `@Transactional`

Most Spring applications need only a single transaction manager, but there may be situations where you want multiple independent transaction managers in a single application. You can use the `value` or `transactionManager` attribute of the `@Transactional` annotation to optionally specify the identity of the `TransactionManager` to be used. This can either be the bean name or the qualifier value of the transaction manager bean.
For example, using the qualifier notation, you can combine the following Java code with the following transaction manager bean declarations in the application context:

```
public class TransactionalService {

    @Transactional("order")
    public void setSomething(String name) { ... }

    @Transactional("account")
    public void doSomething() { ... }

    @Transactional("reactive-account")
    public Mono<Void> doSomethingReactive() { ... }
}
```

```
class TransactionalService {

    @Transactional("order")
    fun setSomething(name: String) {
        // ...
    }

    @Transactional("account")
    fun doSomething() {
        // ...
    }

    @Transactional("reactive-account")
    fun doSomethingReactive(): Mono<Void> {
        // ...
    }
}
```

The following listing shows the bean declarations:

```
<bean id="transactionManager1" class="org.springframework.jdbc.support.JdbcTransactionManager">
    ...
    <qualifier value="order"/>
</bean>

<bean id="transactionManager2" class="org.springframework.jdbc.support.JdbcTransactionManager">
    ...
    <qualifier value="account"/>
</bean>

<bean id="transactionManager3" class="org.springframework.data.r2dbc.connection.R2dbcTransactionManager">
    ...
    <qualifier value="reactive-account"/>
</bean>
```

In this case, the individual methods on `TransactionalService` run under separate transaction managers, differentiated by the `order`, `account`, and `reactive-account` qualifiers. The default `<tx:annotation-driven>` target bean name, `transactionManager`, is still used if no specifically qualified `TransactionManager` bean is found.

## Custom Composed Annotations

If you find you repeatedly use the same attributes with `@Transactional` on many different methods, Spring’s meta-annotation support lets you define custom composed annotations for your specific use cases. For example, consider the following annotation definitions:

```
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Transactional(transactionManager = "order", label = "causal-consistency")
public @interface OrderTx { }

@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Transactional(transactionManager = "account", label = "retryable")
public @interface AccountTx { }
```

```
@Target(AnnotationTarget.FUNCTION, AnnotationTarget.TYPE)
@Retention(AnnotationRetention.RUNTIME)
@Transactional(transactionManager = "order", label = ["causal-consistency"])
annotation class OrderTx

@Target(AnnotationTarget.FUNCTION, AnnotationTarget.TYPE)
@Retention(AnnotationRetention.RUNTIME)
@Transactional(transactionManager = "account", label = ["retryable"])
annotation class AccountTx
```

The preceding annotations let us write the example from the previous section as follows:

```
public class TransactionalService {

    @OrderTx
    public void setSomething(String name) {
        // ...
    }

    @AccountTx
    public void doSomething() {
        // ...
    }
}
```

```
class TransactionalService {

    @OrderTx
    fun setSomething(name: String) {
        // ...
    }

    @AccountTx
    fun doSomething() {
        // ...
    }
}
```

In the preceding example, we used the composed annotations to define the transaction manager qualifier and transactional labels, but we could also have included propagation behavior, rollback rules, timeouts, and other features.

This section describes some semantics of transaction propagation in Spring. Note that this section is not a proper introduction to transaction propagation. Rather, it details some of the semantics regarding transaction propagation in Spring.

In Spring-managed transactions, be aware of the difference between physical and logical transactions, and how the propagation setting applies to this difference.
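Before diving into the individual settings, here is a minimal sketch (hypothetical service and method names) of how propagation is declared in practice:

```
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class AuditedOrderService {

    // Joins the caller's transaction if one is active; otherwise starts a new one.
    @Transactional(propagation = Propagation.REQUIRED)
    public void placeOrder(String orderId) {
        // ...
    }

    // Suspends any caller transaction and runs in its own physical transaction,
    // so the audit entry can commit even if the outer transaction later rolls back.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void writeAuditEntry(String message) {
        // ...
    }
}
```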
`PROPAGATION_REQUIRED`

`PROPAGATION_REQUIRED` enforces a physical transaction, either locally for the current scope if no transaction exists yet or participating in an existing 'outer' transaction defined for a larger scope. This is a fine default in common call stack arrangements within the same thread (for example, a service facade that delegates to several repository methods where all the underlying resources have to participate in the service-level transaction).

By default, a participating transaction joins the characteristics of the outer scope, silently ignoring the local isolation level, timeout value, or read-only flag (if any). Consider switching the `validateExistingTransaction` flag to `true` on your transaction manager if you need isolation level declarations to get rejected when participating in an existing transaction with a different isolation level.

When the propagation setting is `PROPAGATION_REQUIRED`, a logical transaction scope is created for each method upon which the setting is applied. Each such logical transaction scope can determine rollback-only status individually, with an outer transaction scope being logically independent from the inner transaction scope. In the case of standard `PROPAGATION_REQUIRED` behavior, all these scopes are mapped to the same physical transaction. So a rollback-only marker set in the inner transaction scope does affect the outer transaction’s chance to actually commit.

However, in the case where an inner transaction scope sets the rollback-only marker, the outer transaction has not decided on the rollback itself, so the rollback (silently triggered by the inner transaction scope) is unexpected. A corresponding `UnexpectedRollbackException` is thrown at that point. This is expected behavior so that the caller of a transaction can never be misled to assume that a commit was performed when it really was not. So, if an inner transaction (of which the outer caller is not aware) silently marks a transaction as rollback-only, the outer caller still calls commit. The outer caller needs to receive an `UnexpectedRollbackException` to indicate clearly that a rollback was performed instead.

`PROPAGATION_REQUIRES_NEW`

`PROPAGATION_REQUIRES_NEW`, in contrast to `PROPAGATION_REQUIRED`, always uses an independent physical transaction for each affected transaction scope, never participating in an existing transaction for an outer scope. In such an arrangement, the underlying resource transactions are different and, hence, can commit or roll back independently, with an outer transaction not affected by an inner transaction’s rollback status and with an inner transaction’s locks released immediately after its completion. Such an independent inner transaction can also declare its own isolation level, timeout, and read-only settings and not inherit an outer transaction’s characteristics.

The resources attached to the outer transaction will remain bound there while the inner transaction acquires its own resources, such as a new database connection. This may lead to exhaustion of the connection pool and potentially to a deadlock if several threads have an active outer transaction and wait to acquire a new connection for their inner transaction, with the pool not being able to hand out any such inner connection anymore.

`PROPAGATION_NESTED`

`PROPAGATION_NESTED` uses a single physical transaction with multiple savepoints that it can roll back to. Such partial rollbacks let an inner transaction scope trigger a rollback for its scope, with the outer transaction being able to continue the physical transaction despite some operations having been rolled back. This setting is typically mapped onto JDBC savepoints, so it works only with JDBC resource transactions.
See Spring’s `DataSourceTransactionManager`.

Suppose you want to run both transactional operations and some basic profiling advice. How do you effect this in the context of `<tx:annotation-driven/>`?

When you invoke the `updateFoo(Foo)` method, you want to see the following actions:

* The configured profiling aspect starts.
* The transactional advice runs.
* The method on the advised object runs.
* The transaction commits.
* The profiling aspect reports the exact duration of the whole transactional method invocation.

This chapter is not concerned with explaining AOP in any great detail (except as it applies to transactions). See AOP for detailed coverage of the AOP configuration and AOP in general.

The following code shows the simple profiling aspect discussed earlier:

```
package x.y;

import org.aspectj.lang.ProceedingJoinPoint;
import org.springframework.util.StopWatch;
import org.springframework.core.Ordered;

public class SimpleProfiler implements Ordered {

    private int order;

    // allows us to control the ordering of advice
    public int getOrder() {
        return this.order;
    }

    public void setOrder(int order) {
        this.order = order;
    }

    // this method is the around advice
    public Object profile(ProceedingJoinPoint call) throws Throwable {
        Object returnValue;
        StopWatch clock = new StopWatch(getClass().getName());
        try {
            clock.start(call.toShortString());
            returnValue = call.proceed();
        } finally {
            clock.stop();
            System.out.println(clock.prettyPrint());
        }
        return returnValue;
    }
}
```

```
package x.y

import org.aspectj.lang.ProceedingJoinPoint
import org.springframework.util.StopWatch
import org.springframework.core.Ordered

class SimpleProfiler : Ordered {

    private var order: Int = 0

    // allows us to control the ordering of advice
    override fun getOrder(): Int {
        return this.order
    }

    fun setOrder(order: Int) {
        this.order = order
    }

    // this method is the around advice
    fun profile(call: ProceedingJoinPoint): Any {
        var returnValue: Any
        val clock = StopWatch(javaClass.name)
        try {
            clock.start(call.toShortString())
            returnValue = call.proceed()
        } finally {
            clock.stop()
            println(clock.prettyPrint())
        }
        return returnValue
    }
}
```

The ordering of advice is controlled through the `Ordered` interface. For full details on advice ordering, see Advice ordering.

The following configuration creates a `fooService` bean that has profiling and transactional aspects applied to it in the desired order:

```
<!-- this is the aspect -->
<bean id="profiler" class="x.y.SimpleProfiler">
    <!-- run before the transactional advice (hence the lower order number) -->
    <property name="order" value="1"/>
</bean>

<tx:annotation-driven transaction-manager="txManager" order="200"/>

<aop:config>
    <!-- this advice runs around the transactional advice -->
    <aop:aspect id="profilingAspect" ref="profiler">
        <aop:pointcut id="serviceMethodWithReturnValue"
                expression="execution(!void x.y..*Service.*(..))"/>
        <aop:around method="profile" pointcut-ref="serviceMethodWithReturnValue"/>
    </aop:aspect>
</aop:config>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
    <property name="url" value="jdbc:oracle:thin:@rj-t42:1521:elvis"/>
    <property name="username" value="scott"/>
    <property name="password" value="tiger"/>
</bean>
```

You can configure any number of additional aspects in similar fashion.
The following example creates the same setup as the previous two examples but uses the purely XML declarative approach:

```
<!-- the profiling advice -->
<bean id="profiler" class="x.y.SimpleProfiler">
	<!-- run before the transactional advice (hence the lower order number) -->
	<property name="order" value="1"/>
</bean>

<aop:config>
	<aop:pointcut id="entryPointMethod" expression="execution(* x.y..*Service.*(..))"/>
	<!-- runs after the profiling advice (cf. the order attribute) -->
	<aop:advisor advice-ref="txAdvice" pointcut-ref="entryPointMethod" order="2"/>
	<!-- order value is higher than the profiling aspect -->
	<aop:aspect id="profilingAspect" ref="profiler">
		<aop:pointcut id="serviceMethodWithReturnValue"
				expression="execution(!void x.y..*Service.*(..))"/>
		<aop:around method="profile" pointcut-ref="serviceMethodWithReturnValue"/>
	</aop:aspect>
</aop:config>

<tx:advice id="txAdvice" transaction-manager="txManager">
	<tx:attributes>
		<tx:method name="get*" read-only="true"/>
		<tx:method name="*"/>
	</tx:attributes>
</tx:advice>

<!-- other <bean/> definitions such as a DataSource and a TransactionManager here -->
```

The result of the preceding configuration is a `fooService` bean that has profiling and transactional aspects applied to it in that order. If you want the profiling advice to run after the transactional advice on the way in and before the transactional advice on the way out, you can swap the value of the profiling aspect bean's `order` property so that it is higher than the transactional advice's order value. You can configure additional aspects in similar fashion.

# Using `@Transactional` with AspectJ

You can also use the Spring Framework's `@Transactional` support outside of a Spring container by means of an AspectJ aspect. To do so, first annotate your classes (and optionally your classes' methods) with the `@Transactional` annotation, and then link (weave) your application with the `org.springframework.transaction.aspectj.AnnotationTransactionAspect` defined in the `spring-aspects.jar` file. You must also configure the aspect with a transaction manager. You can use the Spring Framework's IoC container to take care of dependency-injecting the aspect. The simplest way to configure the transaction management aspect is to use the `<tx:annotation-driven/>` element and specify the `mode` attribute with a value of `aspectj`, as described in Using `@Transactional`. Because we focus here on applications that run outside of a Spring container, we show you how to do it programmatically.

Prior to continuing, you may want to read Using `@Transactional` and AOP respectively.

The following example shows how to create a transaction manager and configure the `AnnotationTransactionAspect` to use it:

```
// construct an appropriate transaction manager
DataSourceTransactionManager txManager = new DataSourceTransactionManager(getDataSource());

// configure the AnnotationTransactionAspect to use it; this must be done before executing any transactional methods
AnnotationTransactionAspect.aspectOf().setTransactionManager(txManager);
```

```
// construct an appropriate transaction manager
val txManager = DataSourceTransactionManager(getDataSource())

// configure the AnnotationTransactionAspect to use it; this must be done before executing any transactional methods
AnnotationTransactionAspect.aspectOf().transactionManager = txManager
```

When you use this aspect, you must annotate the implementation class (or the methods within that class or both), not the interface (if any) that the class implements. AspectJ follows Java's rule that annotations on interfaces are not inherited.
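As a quick illustration of that rule, here is a hedged sketch (the `FooService` and `Foo` types are hypothetical, echoing the `x.y` naming used in the earlier configuration) that places the annotation on the implementation class:

```
import org.springframework.transaction.annotation.Transactional;

interface FooService {
	void updateFoo(Foo foo);
}

// Annotate the concrete class: the AspectJ aspect will not pick the
// annotation up from the interface, because annotations on interfaces
// are not inherited.
@Transactional
public class DefaultFooService implements FooService {

	@Override
	public void updateFoo(Foo foo) {
		// ...
	}
}
```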
The `@Transactional` annotation on a class specifies the default transaction semantics for the execution of any public method in the class. The `@Transactional` annotation on a method within the class overrides the default transaction semantics given by the class annotation (if present). You can annotate any method, regardless of visibility.

To weave your applications with the `AnnotationTransactionAspect`, you must either build your application with AspectJ (see the AspectJ Development Guide) or use load-time weaving. See Load-time weaving with AspectJ in the Spring Framework for a discussion of load-time weaving with AspectJ.

The Spring Framework provides two means of programmatic transaction management, by using:

* The `TransactionTemplate` or `TransactionalOperator`.
* A `TransactionManager` implementation directly.

The Spring team generally recommends the `TransactionTemplate` for programmatic transaction management in imperative flows and the `TransactionalOperator` for reactive code. The second approach is similar to using the JTA `UserTransaction` API, although exception handling is less cumbersome.

`TransactionTemplate`

The `TransactionTemplate` adopts the same approach as other Spring templates, such as the `JdbcTemplate`. It uses a callback approach (to free application code from having to do the boilerplate acquisition and release of transactional resources) and results in code that is intention driven, in that your code focuses solely on what you want to do.

Application code that must run in a transactional context and that explicitly uses the `TransactionTemplate` resembles the next example. You, as an application developer, can write a `TransactionCallback` implementation (typically expressed as an anonymous inner class) that contains the code that you need to run in the context of a transaction. You can then pass an instance of your custom `TransactionCallback` to the `execute(..)` method exposed on the `TransactionTemplate`.
The following example shows how to do so:

```
public class SimpleService implements Service {

	// single TransactionTemplate shared amongst all methods in this instance
	private final TransactionTemplate transactionTemplate;

	// use constructor-injection to supply the PlatformTransactionManager
	public SimpleService(PlatformTransactionManager transactionManager) {
		this.transactionTemplate = new TransactionTemplate(transactionManager);
	}

	public Object someServiceMethod() {
		return transactionTemplate.execute(new TransactionCallback() {
			// the code in this method runs in a transactional context
			public Object doInTransaction(TransactionStatus status) {
				updateOperation1();
				return resultOfUpdateOperation2();
			}
		});
	}
}
```

```
// use constructor-injection to supply the PlatformTransactionManager
class SimpleService(transactionManager: PlatformTransactionManager) : Service {

	// single TransactionTemplate shared amongst all methods in this instance
	private val transactionTemplate = TransactionTemplate(transactionManager)

	fun someServiceMethod() = transactionTemplate.execute<Any?> {
		updateOperation1()
		resultOfUpdateOperation2()
	}
}
```

If there is no return value, you can use the convenient `TransactionCallbackWithoutResult` class with an anonymous class, as follows:

```
transactionTemplate.execute(new TransactionCallbackWithoutResult() {
	protected void doInTransactionWithoutResult(TransactionStatus status) {
		updateOperation1();
		updateOperation2();
	}
});
```

```
transactionTemplate.execute(object : TransactionCallbackWithoutResult() {
	override fun doInTransactionWithoutResult(status: TransactionStatus) {
		updateOperation1()
		updateOperation2()
	}
})
```

Code within the callback can roll the transaction back by calling the `setRollbackOnly()` method on the supplied `TransactionStatus` object, as follows:

```
transactionTemplate.execute(new TransactionCallbackWithoutResult() {
	protected void doInTransactionWithoutResult(TransactionStatus status) {
		try {
			updateOperation1();
			updateOperation2();
		} catch (SomeBusinessException ex) {
			status.setRollbackOnly();
		}
	}
});
```

```
transactionTemplate.execute(object : TransactionCallbackWithoutResult() {
	override fun doInTransactionWithoutResult(status: TransactionStatus) {
		try {
			updateOperation1()
			updateOperation2()
		} catch (ex: SomeBusinessException) {
			status.setRollbackOnly()
		}
	}
})
```

You can specify transaction settings (such as the propagation mode, the isolation level, the timeout, and so forth) on the `TransactionTemplate` either programmatically or in configuration. By default, `TransactionTemplate` instances have the default transactional settings. The following example shows the programmatic customization of the transactional settings for a specific `TransactionTemplate`:

```
public class SimpleService implements Service {

	private final TransactionTemplate transactionTemplate;

	public SimpleService(PlatformTransactionManager transactionManager) {
		this.transactionTemplate = new TransactionTemplate(transactionManager);

		// the transaction settings can be set here explicitly if so desired
		this.transactionTemplate.setIsolationLevel(TransactionDefinition.ISOLATION_READ_UNCOMMITTED);
		this.transactionTemplate.setTimeout(30); // 30 seconds
		// and so forth...
	}
}
```

```
class SimpleService(transactionManager: PlatformTransactionManager) : Service {

	private val transactionTemplate = TransactionTemplate(transactionManager).apply {
		// the transaction settings can be set here explicitly if so desired
		isolationLevel = TransactionDefinition.ISOLATION_READ_UNCOMMITTED
		timeout = 30 // 30 seconds
		// and so forth...
	}
}
```

The following example defines a `TransactionTemplate` with some custom transactional settings by using Spring XML configuration:

```
<bean id="sharedTransactionTemplate"
		class="org.springframework.transaction.support.TransactionTemplate">
	<property name="isolationLevelName" value="ISOLATION_READ_UNCOMMITTED"/>
	<property name="timeout" value="30"/>
</bean>
```

You can then inject the `sharedTransactionTemplate` into as many services as are required.
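If you prefer Java configuration, a roughly equivalent bean definition is sketched below (a sketch assuming a `PlatformTransactionManager` bean is available for injection; the configuration class name is hypothetical, and the method name simply mirrors the XML `id` above):

```
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

@Configuration
public class TransactionTemplateConfiguration {

	@Bean
	public TransactionTemplate sharedTransactionTemplate(PlatformTransactionManager transactionManager) {
		TransactionTemplate template = new TransactionTemplate(transactionManager);
		// same settings as the XML definition above
		template.setIsolationLevelName("ISOLATION_READ_UNCOMMITTED");
		template.setTimeout(30);
		return template;
	}
}
```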
Finally, instances of the `TransactionTemplate` class are thread-safe, in that instances do not maintain any conversational state. `TransactionTemplate` instances do, however, maintain configuration state. So, while a number of classes may share a single instance of a `TransactionTemplate`, if a class needs to use a `TransactionTemplate` with different settings (for example, a different isolation level), you need to create two distinct `TransactionTemplate` instances.

`TransactionalOperator`

The `TransactionalOperator` follows an operator design that is similar to other reactive operators. It uses a callback approach (to free application code from having to do the boilerplate acquisition and release of transactional resources) and results in code that is intention driven, in that your code focuses solely on what you want to do. Application code that must run in a transactional context and that explicitly uses the `TransactionalOperator` resembles the next example:

```
public class SimpleService implements Service {

	// single TransactionalOperator shared amongst all methods in this instance
	private final TransactionalOperator transactionalOperator;

	// use constructor-injection to supply the ReactiveTransactionManager
	public SimpleService(ReactiveTransactionManager transactionManager) {
		this.transactionalOperator = TransactionalOperator.create(transactionManager);
	}

	public Mono<Object> someServiceMethod() {
		// the code in this method runs in a transactional context
		Mono<Object> update = updateOperation1();
		return update.then(resultOfUpdateOperation2).as(transactionalOperator::transactional);
	}
}
```

```
// use constructor-injection to supply the ReactiveTransactionManager
class SimpleService(transactionManager: ReactiveTransactionManager) : Service {

	// single TransactionalOperator shared amongst all methods in this instance
	private val transactionalOperator = TransactionalOperator.create(transactionManager)

	suspend fun someServiceMethod() = transactionalOperator.executeAndAwait<Any?> {
		updateOperation1()
		resultOfUpdateOperation2()
	}
}
```

The `TransactionalOperator` can be used in two ways:

* Operator-style, using Project Reactor types: `mono.as(transactionalOperator::transactional)`
* Callback-style, for every other case: `transactionalOperator.execute(TransactionCallback<T>)`

```
transactionalOperator.execute(new TransactionCallback<>() {

	public Mono<Object> doInTransaction(ReactiveTransaction status) {
		return updateOperation1().then(updateOperation2)
				.doOnError(SomeBusinessException.class, e -> status.setRollbackOnly());
	}
});
```

```
transactionalOperator.execute(object : TransactionCallback<Any> {

	override fun doInTransaction(status: ReactiveTransaction): Mono<Any> {
		return updateOperation1().then(updateOperation2)
				.doOnError(SomeBusinessException::class.java) { status.setRollbackOnly() }
	}
})
```

### Cancel Signals

In Reactive Streams, a `Subscriber` can cancel its `Subscription` and stop its `Publisher`. Operators in Project Reactor, as well as in other libraries, such as `next()`, `take(long)`, `timeout(Duration)`, and others, can issue cancellations. There is no way to know the reason for the cancellation, whether it is due to an error or simply a lack of interest to consume further. Since version 5.3, cancel signals lead to a rollback. As a result, it is important to consider the operators used downstream from a transaction `Publisher`. In particular, in the case of a `Flux` or other multi-value `Publisher`, the full output must be consumed to allow the transaction to complete.

You can specify transaction settings (such as the propagation mode, the isolation level, the timeout, and so forth) for the `TransactionalOperator`.
By default, `TransactionalOperator` instances have the default transactional settings. The following example shows customization of the transactional settings for a specific `TransactionalOperator`:

```
public class SimpleService implements Service {

	private final TransactionalOperator transactionalOperator;

	public SimpleService(ReactiveTransactionManager transactionManager) {
		DefaultTransactionDefinition definition = new DefaultTransactionDefinition();

		// the transaction settings can be set here explicitly if so desired
		definition.setIsolationLevel(TransactionDefinition.ISOLATION_READ_UNCOMMITTED);
		definition.setTimeout(30); // 30 seconds
		// and so forth...

		this.transactionalOperator = TransactionalOperator.create(transactionManager, definition);
	}
}
```

```
class SimpleService(transactionManager: ReactiveTransactionManager) : Service {

	private val definition = DefaultTransactionDefinition().apply {
		// the transaction settings can be set here explicitly if so desired
		isolationLevel = TransactionDefinition.ISOLATION_READ_UNCOMMITTED
		timeout = 30 // 30 seconds
		// and so forth...
	}

	private val transactionalOperator = TransactionalOperator.create(transactionManager, definition)
}
```

`TransactionManager`

The following sections explain programmatic usage of imperative and reactive transaction managers.

For imperative transactions, you can use a `PlatformTransactionManager` directly, as the following example shows:

```
DefaultTransactionDefinition def = new DefaultTransactionDefinition();
// explicitly setting the transaction name is something that can be done only programmatically
def.setName("SomeTxName");
def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);

TransactionStatus status = txManager.getTransaction(def);
try {
	// put your business logic here
} catch (MyException ex) {
	txManager.rollback(status);
	throw ex;
}
txManager.commit(status);
```

```
val def = DefaultTransactionDefinition()
// explicitly setting the transaction name is something that can be done only programmatically
def.setName("SomeTxName")
def.propagationBehavior = TransactionDefinition.PROPAGATION_REQUIRED

val status = txManager.getTransaction(def)
try {
	// put your business logic here
} catch (ex: MyException) {
	txManager.rollback(status)
	throw ex
}
txManager.commit(status)
```

When working with reactive transactions, you can use a `ReactiveTransactionManager` directly, as the following example shows:

```
DefaultTransactionDefinition def = new DefaultTransactionDefinition();
// explicitly setting the transaction name is something that can be done only programmatically
def.setName("SomeTxName");
def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);

Mono<ReactiveTransaction> reactiveTx = txManager.getReactiveTransaction(def);
reactiveTx.flatMap(status -> {

	Mono<Object> tx = ...; // put your business logic here

	return tx.then(txManager.commit(status))
			.onErrorResume(ex -> txManager.rollback(status).then(Mono.error(ex)));
});
```

```
val def = DefaultTransactionDefinition()
// explicitly setting the transaction name is something that can be done only programmatically
def.setName("SomeTxName")
def.propagationBehavior = TransactionDefinition.PROPAGATION_REQUIRED

val reactiveTx = txManager.getReactiveTransaction(def)
reactiveTx.flatMap { status ->

	val tx = ... // put your business logic here

	tx.then(txManager.commit(status))
			.onErrorResume { ex -> txManager.rollback(status).then(Mono.error(ex)) }
}
```

Programmatic transaction management is usually a good idea only if you have a small number of transactional operations. For example, if you have a web application that requires transactions only for certain update operations, you may not want to set up transactional proxies by using Spring or any other technology. In this case, using the `TransactionTemplate` may be a good approach. Being able to set the transaction name explicitly is also something that can be done only by using the programmatic approach to transaction management.

On the other hand, if your application has numerous transactional operations, declarative transaction management is usually worthwhile. It keeps transaction management out of business logic and is not difficult to configure. When using the Spring Framework, rather than EJB CMT, the configuration cost of declarative transaction management is greatly reduced.

As of Spring 4.2, the listener of an event can be bound to a phase of the transaction. The typical example is to handle the event when the transaction has completed successfully. Doing so lets events be used with more flexibility when the outcome of the current transaction actually matters to the listener.

You can register a regular event listener by using the `@EventListener` annotation. If you need to bind it to the transaction, use `@TransactionalEventListener`. When you do so, the listener is bound to the commit phase of the transaction by default. The next example shows this concept.
Assume that a component publishes an order-created event and that we want to define a listener that should only handle that event once the transaction in which it has been published has committed successfully. The following example sets up such an event listener:

```
@Component
public class MyComponent {

	@TransactionalEventListener
	public void handleOrderCreatedEvent(CreationEvent<Order> creationEvent) {
		// ...
	}
}
```

```
@Component
class MyComponent {

	@TransactionalEventListener
	fun handleOrderCreatedEvent(creationEvent: CreationEvent<Order>) {
		// ...
	}
}
```

The `@TransactionalEventListener` annotation exposes a `phase` attribute that lets you customize the phase of the transaction to which the listener should be bound. The valid phases are `BEFORE_COMMIT`, `AFTER_COMMIT` (the default), and `AFTER_ROLLBACK`, as well as `AFTER_COMPLETION`, which aggregates the transaction completion (be it a commit or a rollback). If no transaction is running, the listener is not invoked at all, since we cannot honor the required semantics. You can, however, override that behavior by setting the `fallbackExecution` attribute of the annotation to `true`.

Spring's transaction abstraction is generally application server-agnostic. Additionally, Spring's `JtaTransactionManager` class (which can optionally perform a JNDI lookup for the JTA `UserTransaction` and `TransactionManager` objects) autodetects the location for the latter object, which varies by application server. Having access to the JTA `TransactionManager` allows for enhanced transaction semantics — in particular, supporting transaction suspension. See the `JtaTransactionManager` javadoc for details.

Spring's `JtaTransactionManager` is the standard choice to run on Jakarta EE application servers and is known to work on all common servers. Advanced functionality, such as transaction suspension, works on many servers as well (including GlassFish, JBoss and Geronimo) without any special configuration required.

This section describes solutions to some common problems.

## Using the Wrong Transaction Manager for a Specific `DataSource`

Use the correct `PlatformTransactionManager` implementation based on your choice of transactional technologies and requirements. Used properly, the Spring Framework merely provides a straightforward and portable abstraction. If you use global transactions, you must use the `org.springframework.transaction.jta.JtaTransactionManager` class (or an application server-specific subclass of it) for all your transactional operations. Otherwise, the transaction infrastructure tries to perform local transactions on such resources as container `DataSource` instances. Such local transactions do not make sense, and a good application server treats them as errors.

For more information about the Spring Framework's transaction support, see:

* Distributed transactions in Spring, with and without XA is a JavaWorld presentation in which Spring's David Syer guides you through seven patterns for distributed transactions in Spring applications, three of them with XA and four without.
* Java Transaction Design Strategies is a book available from InfoQ that provides a well-paced introduction to transactions in Java. It also includes side-by-side examples of how to configure and use transactions with both the Spring Framework and EJB3.

The Data Access Object (DAO) support in Spring is aimed at making it easy to work with data access technologies (such as JDBC, Hibernate, or JPA) in a consistent way. This lets you switch between the aforementioned persistence technologies fairly easily, and it also lets you code without worrying about catching exceptions that are specific to each technology.
## Consistent Exception Hierarchy

Spring provides a convenient translation from technology-specific exceptions, such as `SQLException`, to its own exception class hierarchy, which has `DataAccessException` as the root exception. These exceptions wrap the original exception so that there is never any risk that you might lose any information about what might have gone wrong.

In addition to JDBC exceptions, Spring can also wrap JPA- and Hibernate-specific exceptions, converting them to a set of focused runtime exceptions. This lets you handle most non-recoverable persistence exceptions in only the appropriate layers, without having annoying boilerplate catch-and-throw blocks and exception declarations in your DAOs. (You can still trap and handle exceptions anywhere you need to, though.) As mentioned above, JDBC exceptions (including database-specific dialects) are also converted to the same hierarchy, meaning that you can perform some operations with JDBC within a consistent programming model.

The preceding discussion holds true for the various template classes in Spring's support for various ORM frameworks. If you use the interceptor-based classes, the application must care about handling `HibernateException` and `PersistenceException` instances itself, preferably by delegating to the `convertHibernateAccessException(..)` or `convertJpaAccessException(..)` methods, respectively, of `SessionFactoryUtils`. These methods convert the exceptions to exceptions that are compatible with the exceptions in the `org.springframework.dao` exception hierarchy. As `PersistenceException` instances are unchecked, they can get thrown, too (sacrificing generic DAO abstraction in terms of exceptions, though).

The following image shows the exception hierarchy that Spring provides. (Note that the class hierarchy detailed in the image shows only a subset of the entire `DataAccessException` hierarchy.)

## Annotations Used to Configure DAO or Repository Classes

The best way to guarantee that your Data Access Objects (DAOs) or repositories provide exception translation is to use the `@Repository` annotation, as the examples later in this section show. This annotation also lets the component scanning support find and configure your DAOs and repositories without having to provide XML configuration entries for them.

Any DAO or repository implementation needs access to a persistence resource, depending on the persistence technology used. For example, a JDBC-based repository needs access to a JDBC `DataSource`, and a JPA-based repository needs access to an `EntityManager`. The easiest way to accomplish this is to have this resource dependency injected by using one of the `@Autowired`, `@Inject`, `@Resource` or `@PersistenceContext` annotations. The following example works for a JPA repository:

```
@Repository
public class JpaMovieFinder implements MovieFinder {

	@PersistenceContext
	private EntityManager entityManager;

	// ...
}
```

```
@Repository
class JpaMovieFinder : MovieFinder {

	@PersistenceContext
	private lateinit var entityManager: EntityManager

	// ...
}
```

If you use the classic Hibernate APIs, you can inject `SessionFactory`, as the following example shows:

```
@Repository
public class HibernateMovieFinder implements MovieFinder {

	private final SessionFactory sessionFactory;

	public HibernateMovieFinder(SessionFactory sessionFactory) {
		this.sessionFactory = sessionFactory;
	}

	// ...
}
```

```
@Repository
class HibernateMovieFinder(private val sessionFactory: SessionFactory) : MovieFinder {
	// ...
}
```

The last example we show here is for typical JDBC support.
You could have the `DataSource` injected into an initialization method or a constructor, where you would create a `JdbcTemplate` and other data access support classes (such as `SimpleJdbcCall` and others) by using this `DataSource`. The following example autowires a `DataSource`:

```
@Repository
public class JdbcMovieFinder implements MovieFinder {

	private JdbcTemplate jdbcTemplate;

	@Autowired
	public void init(DataSource dataSource) {
		this.jdbcTemplate = new JdbcTemplate(dataSource);
	}

	// ...
}
```

```
@Repository
class JdbcMovieFinder(dataSource: DataSource) : MovieFinder {

	private val jdbcTemplate = JdbcTemplate(dataSource)

	// ...
}
```

See the specific coverage of each persistence technology for details on how to configure the application context to take advantage of these annotations.

The value provided by the Spring Framework JDBC abstraction is perhaps best shown by the sequence of actions outlined in the following table. The table shows which actions Spring takes care of and which actions are your responsibility.

| Action | Spring | You |
| --- | --- | --- |
| Define connection parameters. | | X |
| Open the connection. | X | |
| Specify the SQL statement. | | X |
| Declare parameters and provide parameter values. | | X |
| Prepare and run the statement. | X | |
| Set up the loop to iterate through the results (if any). | X | |
| Do the work for each iteration. | | X |
| Process any exception. | X | |
| Handle transactions. | X | |
| Close the connection, the statement, and the resultset. | X | |

The Spring Framework takes care of all the low-level details that can make JDBC such a tedious API.

You can choose among several approaches to form the basis for your JDBC database access. In addition to three flavors of `JdbcTemplate`, a new `SimpleJdbcInsert` and `SimpleJdbcCall` approach optimizes database metadata, and the RDBMS Object style takes a more object-oriented approach similar to that of JDO Query design. Once you start using one of these approaches, you can still mix and match to include a feature from a different approach. All approaches require a JDBC 2.0-compliant driver, and some advanced features require a JDBC 3.0 driver.

* `JdbcTemplate` is the classic and most popular Spring JDBC approach. This "lowest-level" approach and all others use a `JdbcTemplate` under the covers.
* `NamedParameterJdbcTemplate` wraps a `JdbcTemplate` to provide named parameters instead of the traditional JDBC `?` placeholders. This approach provides better documentation and ease of use when you have multiple parameters for an SQL statement.
* `SimpleJdbcInsert` and `SimpleJdbcCall` optimize database metadata to limit the amount of necessary configuration. This approach simplifies coding so that you need to provide only the name of the table or procedure and provide a map of parameters matching the column names. This works only if the database provides adequate metadata. If the database does not provide this metadata, you have to provide explicit configuration of the parameters.
* RDBMS objects — including `MappingSqlQuery`, `SqlUpdate`, and `StoredProcedure` — require you to create reusable and thread-safe objects during initialization of your data-access layer. This approach is modeled after JDO Query, wherein you define your query string, declare parameters, and compile the query. Once you do that, `execute(…)`, `update(…)`, and `findObject(…)` methods can be called multiple times with various parameter values.

The Spring JDBC abstraction framework consists of four packages:

* `core`: The `org.springframework.jdbc.core` package contains the `JdbcTemplate` class and its various callback interfaces, plus a variety of related classes. A subpackage named `org.springframework.jdbc.core.simple` contains the `SimpleJdbcInsert` and `SimpleJdbcCall` classes. Another subpackage named `org.springframework.jdbc.core.namedparam` contains the `NamedParameterJdbcTemplate` class and the related support classes. See Using the JDBC Core Classes to Control Basic JDBC Processing and Error Handling, JDBC Batch Operations, and Simplifying JDBC Operations with the `SimpleJdbc` Classes.
* `datasource`: The `org.springframework.jdbc.datasource` package contains a utility class for easy `DataSource` access and various simple `DataSource` implementations that you can use for testing and running unmodified JDBC code outside of a Jakarta EE container. A subpackage named `org.springframework.jdbc.datasource.embedded` provides support for creating embedded databases by using Java database engines, such as HSQL, H2, and Derby. See Controlling Database Connections and Embedded Database Support.
* `object`: The `org.springframework.jdbc.object` package contains classes that represent RDBMS queries, updates, and stored procedures as thread-safe, reusable objects. See Modeling JDBC Operations as Java Objects. This approach is modeled after JDO, although objects returned by queries are naturally disconnected from the database. This higher level of JDBC abstraction depends on the lower-level abstraction in the `org.springframework.jdbc.core` package.
* `support`: The `org.springframework.jdbc.support` package provides `SQLException` translation functionality and some utility classes. Exceptions thrown during JDBC processing are translated to exceptions defined in the `org.springframework.dao` package. This means that code using the Spring JDBC abstraction layer does not need to implement JDBC or RDBMS-specific error handling. All translated exceptions are unchecked, which gives you the option of catching the exceptions from which you can recover while letting other exceptions be propagated to the caller. See Using `SQLExceptionTranslator`.

`JdbcTemplate` is the central class in the JDBC core package. It handles the creation and release of resources, which helps you avoid common errors, such as forgetting to close the connection. The `JdbcTemplate` class:

* Runs SQL queries
* Updates statements and stored procedure calls
* Performs iteration over `ResultSet` instances and extraction of returned parameter values
* Catches JDBC exceptions and translates them to the generic, more informative exception hierarchy defined in the `org.springframework.dao` package (see Consistent Exception Hierarchy)

When you use the `JdbcTemplate` for your code, you need only to implement callback interfaces, giving them a clearly defined contract. Given a `Connection` provided by the `JdbcTemplate` class, the `PreparedStatementCreator` callback interface creates a prepared statement, providing SQL and any necessary parameters. The same is true for the `CallableStatementCreator` interface, which creates callable statements. The `RowCallbackHandler` interface extracts values from each row of a `ResultSet`.

You can use `JdbcTemplate` within a DAO implementation through direct instantiation with a `DataSource` reference, or you can configure it in a Spring IoC container and give it to DAOs as a bean reference. All SQL issued by this class is logged at the `DEBUG` level under the category corresponding to the fully qualified class name of the template instance (typically `JdbcTemplate`, but it may be different if you use a custom subclass of the `JdbcTemplate` class).

The following sections provide some examples of `JdbcTemplate` usage. These examples are not an exhaustive list of all of the functionality exposed by the `JdbcTemplate`. See the attendant javadoc for that.

### Querying (`SELECT`)

The following query gets the number of rows in a relation:

```
int rowCount = this.jdbcTemplate.queryForObject("select count(*) from t_actor", Integer.class);
```

```
val rowCount = jdbcTemplate.queryForObject<Int>("select count(*) from t_actor")!!
```

The following query uses a bind variable:

```
int countOfActorsNamedJoe = this.jdbcTemplate.queryForObject(
		"select count(*) from t_actor where first_name = ?", Integer.class, "Joe");
```

```
val countOfActorsNamedJoe = jdbcTemplate.queryForObject<Int>(
		"select count(*) from t_actor where first_name = ?", arrayOf("Joe"))!!
```
The following query looks for a `String`:

```
String lastName = this.jdbcTemplate.queryForObject(
		"select last_name from t_actor where id = ?", String.class, 1212L);
```

```
val lastName = this.jdbcTemplate.queryForObject<String>(
		"select last_name from t_actor where id = ?", arrayOf(1212L))!!
```

The following query finds and populates a single domain object:

```
Actor actor = jdbcTemplate.queryForObject(
		"select first_name, last_name from t_actor where id = ?",
		(resultSet, rowNum) -> {
			Actor newActor = new Actor();
			newActor.setFirstName(resultSet.getString("first_name"));
			newActor.setLastName(resultSet.getString("last_name"));
			return newActor;
		}, 1212L);
```

```
val actor = jdbcTemplate.queryForObject(
		"select first_name, last_name from t_actor where id = ?",
		arrayOf(1212L)) { rs, _ ->
	Actor(rs.getString("first_name"), rs.getString("last_name"))
}
```

The following query finds and populates a list of domain objects:

```
List<Actor> actors = this.jdbcTemplate.query(
		"select first_name, last_name from t_actor",
		(resultSet, rowNum) -> {
			Actor actor = new Actor();
			actor.setFirstName(resultSet.getString("first_name"));
			actor.setLastName(resultSet.getString("last_name"));
			return actor;
		});
```

```
val actors = jdbcTemplate.query("select first_name, last_name from t_actor") { rs, _ ->
	Actor(rs.getString("first_name"), rs.getString("last_name"))
}
```

If the last two snippets of code actually existed in the same application, it would make sense to remove the duplication present in the two `RowMapper` lambda expressions and extract them out into a single field that could then be referenced by DAO methods as needed. For example, it may be better to write the preceding code snippet as follows:

```
private final RowMapper<Actor> actorRowMapper = (resultSet, rowNum) -> {
	Actor actor = new Actor();
	actor.setFirstName(resultSet.getString("first_name"));
	actor.setLastName(resultSet.getString("last_name"));
	return actor;
};

public List<Actor> findAllActors() {
	return this.jdbcTemplate.query("select first_name, last_name from t_actor", actorRowMapper);
}
```

```
val actorMapper = RowMapper<Actor> { rs: ResultSet, rowNum: Int ->
	Actor(rs.getString("first_name"), rs.getString("last_name"))
}

fun findAllActors(): List<Actor> {
	return jdbcTemplate.query("select first_name, last_name from t_actor", actorMapper)
}
```

### Updating (`INSERT`, `UPDATE`, and `DELETE`) with `JdbcTemplate`

You can use the `update(..)` method to perform insert, update, and delete operations. Parameter values are usually provided as variable arguments or, alternatively, as an object array.

The following example inserts a new entry:

```
this.jdbcTemplate.update(
		"insert into t_actor (first_name, last_name) values (?, ?)",
		"Leonor", "Watling");
```

```
jdbcTemplate.update(
		"insert into t_actor (first_name, last_name) values (?, ?)",
		"Leonor", "Watling")
```

The following example updates an existing entry:

```
this.jdbcTemplate.update(
		"update t_actor set last_name = ? where id = ?",
		"Banjo", 5276L);
```

```
jdbcTemplate.update(
		"update t_actor set last_name = ? where id = ?",
		"Banjo", 5276L)
```

The following example deletes an entry:

```
this.jdbcTemplate.update(
		"delete from t_actor where id = ?",
		Long.valueOf(actorId));
```

```
jdbcTemplate.update("delete from t_actor where id = ?", actorId.toLong())
```

### Other `JdbcTemplate` Operations

You can use the `execute(..)` method to run any arbitrary SQL. Consequently, the method is often used for DDL statements.
It is heavily overloaded with variants that take callback interfaces, binding variable arrays, and so on. The following example creates a table:

```
this.jdbcTemplate.execute("create table mytable (id integer, name varchar(100))");
```

```
jdbcTemplate.execute("create table mytable (id integer, name varchar(100))")
```

The following example invokes a stored procedure:

```
this.jdbcTemplate.update(
		"call SUPPORT.REFRESH_ACTORS_SUMMARY(?)",
		Long.valueOf(unionId));
```

```
jdbcTemplate.update(
		"call SUPPORT.REFRESH_ACTORS_SUMMARY(?)",
		unionId.toLong())
```

More sophisticated stored procedure support is covered later.

`JdbcTemplate` Best Practices

Instances of the `JdbcTemplate` class are thread-safe, once configured. This is important because it means that you can configure a single instance of a `JdbcTemplate` and then safely inject this shared reference into multiple DAOs (or repositories). The `JdbcTemplate` is stateful, in that it maintains a reference to a `DataSource`, but this state is not conversational state.

A common practice when using the `JdbcTemplate` class (and the associated `NamedParameterJdbcTemplate` class) is to configure a `DataSource` in your Spring configuration file and then dependency-inject that shared `DataSource` bean into your DAO classes. The `JdbcTemplate` is created in the setter for the `DataSource` (or, as in the following Kotlin example, in the constructor). This leads to DAOs that resemble the following:

```
class JdbcCorporateEventDao(dataSource: DataSource) : CorporateEventDao {

	private val jdbcTemplate = JdbcTemplate(dataSource)

	// JDBC-backed implementations of the methods on the CorporateEventDao follow...
}
```

The corresponding configuration might look like the following:

```
<bean id="corporateEventDao" class="com.example.JdbcCorporateEventDao">
	<property name="dataSource" ref="dataSource"/>
</bean>
```

An alternative to explicit configuration is to use component scanning and annotation support for dependency injection. In this case, you can annotate the class with `@Repository` (which makes it a candidate for component scanning):

```
@Repository (1)
class JdbcCorporateEventDao(dataSource: DataSource) : CorporateEventDao { (2)

	private val jdbcTemplate = JdbcTemplate(dataSource)

	// JDBC-backed implementations of the methods on the CorporateEventDao follow...
}
```

The corresponding XML configuration then only needs to enable component scanning:

```
<!-- Scans within the base package of the application for @Component classes to configure as beans -->
<context:component-scan base-package="org.springframework.docs.test" />
```

If you use Spring's `JdbcDaoSupport` class and your various JDBC-backed DAO classes extend from it, your sub-class inherits a `setDataSource(..)` method from the `JdbcDaoSupport` class. You can choose whether to inherit from this class. The `JdbcDaoSupport` class is provided as a convenience only.

Regardless of which of the above template initialization styles you choose to use (or not), it is seldom necessary to create a new instance of a `JdbcTemplate` class each time you want to run SQL. Once configured, a `JdbcTemplate` instance is thread-safe. If your application accesses multiple databases, you may want multiple `JdbcTemplate` instances, which requires multiple `DataSources` and, subsequently, multiple differently configured `JdbcTemplate` instances.

`NamedParameterJdbcTemplate`

The `NamedParameterJdbcTemplate` class adds support for programming JDBC statements by using named parameters, as opposed to programming JDBC statements using only classic placeholder (`'?'`) arguments. The `NamedParameterJdbcTemplate` class wraps a `JdbcTemplate` and delegates to the wrapped `JdbcTemplate` to do much of its work. This section describes only those areas of the `NamedParameterJdbcTemplate` class that differ from the `JdbcTemplate` itself — namely, programming JDBC statements by using named parameters.
The following example shows how to use `NamedParameterJdbcTemplate`:

```
// some JDBC-backed DAO class...
private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

public int countOfActorsByFirstName(String firstName) {

	String sql = "select count(*) from t_actor where first_name = :first_name";

	SqlParameterSource namedParameters = new MapSqlParameterSource("first_name", firstName);

	return this.namedParameterJdbcTemplate.queryForObject(sql, namedParameters, Integer.class);
}
```

Notice the use of the named parameter notation in the value assigned to the `sql` variable and the corresponding value that is plugged into the `namedParameters` variable (of type `MapSqlParameterSource`).

Alternatively, you can pass along named parameters and their corresponding values to a `NamedParameterJdbcTemplate` instance by using the `Map`-based style. The remaining methods exposed by the `NamedParameterJdbcOperations` and implemented by the `NamedParameterJdbcTemplate` class follow a similar pattern and are not covered here.

The following example shows the use of the `Map`-based style:

```
public int countOfActorsByFirstName(String firstName) {

	String sql = "select count(*) from t_actor where first_name = :first_name";

	Map<String, String> namedParameters = Collections.singletonMap("first_name", firstName);

	return this.namedParameterJdbcTemplate.queryForObject(sql, namedParameters, Integer.class);
}
```

One nice feature related to the `NamedParameterJdbcTemplate` (and existing in the same Java package) is the `SqlParameterSource` interface. You have already seen an example of an implementation of this interface in one of the previous code snippets (the `MapSqlParameterSource` class). An `SqlParameterSource` is a source of named parameter values to a `NamedParameterJdbcTemplate`. The `MapSqlParameterSource` class is a simple implementation that is an adapter around a `java.util.Map`, where the keys are the parameter names and the values are the parameter values.

Another `SqlParameterSource` implementation is the `BeanPropertySqlParameterSource` class. This class wraps an arbitrary JavaBean (that is, an instance of a class that adheres to the JavaBean conventions) and uses the properties of the wrapped JavaBean as the source of named parameter values.

The following example shows a typical JavaBean:

```
public class Actor {

	private Long id;
	private String firstName;
	private String lastName;

	public String getFirstName() {
		return this.firstName;
	}

	public String getLastName() {
		return this.lastName;
	}

	public Long getId() {
		return this.id;
	}

	// setters omitted...
}
```

```
data class Actor(val id: Long, val firstName: String, val lastName: String)
```

The following example uses a `BeanPropertySqlParameterSource` to return the count of the members of the class shown in the preceding example:

```
public int countOfActors(Actor exampleActor) {

	// notice how the named parameters match the properties of the above 'Actor' class
	String sql = "select count(*) from t_actor where first_name = :firstName and last_name = :lastName";

	SqlParameterSource namedParameters = new BeanPropertySqlParameterSource(exampleActor);

	return this.namedParameterJdbcTemplate.queryForObject(sql, namedParameters, Integer.class);
}
```

```
fun countOfActors(exampleActor: Actor): Int {
	// notice how the named parameters match the properties of the above 'Actor' class
	val sql = "select count(*) from t_actor where first_name = :firstName and last_name = :lastName"
	val namedParameters = BeanPropertySqlParameterSource(exampleActor)
	return namedParameterJdbcTemplate.queryForObject(sql, namedParameters, Int::class.java)!!
}
```

Remember that the `NamedParameterJdbcTemplate` class wraps a classic `JdbcTemplate`.
If you need access to the wrapped `JdbcTemplate` instance to access functionality that is present only in the `JdbcTemplate` class, you can use the `getJdbcOperations()` method to access the wrapped `JdbcTemplate` through the `JdbcOperations` interface. See also `JdbcTemplate` Best Practices for guidelines on using the `NamedParameterJdbcTemplate` class in the context of an application.

`SQLExceptionTranslator` is an interface to be implemented by classes that can translate between `SQLException` instances and Spring's own `org.springframework.dao.DataAccessException`, which is agnostic in regard to data access strategy. Implementations can be generic (for example, using SQLState codes for JDBC) or proprietary (for example, using Oracle error codes) for greater precision. This exception translation mechanism is used behind the common `JdbcTemplate` entry points, which do not propagate `SQLException` but rather `DataAccessException`. As of 6.0, the default exception translator is `SQLExceptionSubclassTranslator`.

`SQLErrorCodeSQLExceptionTranslator` is the implementation of `SQLExceptionTranslator` that is used by default when a file named `sql-error-codes.xml` is present in the root of the classpath. This implementation uses specific vendor codes. It is more precise than `SQLState` or `SQLException` subclass translation. The error code translations are based on codes held in a JavaBean type class called `SQLErrorCodes`. This class is created and populated by an `SQLErrorCodesFactory`, which (as the name suggests) is a factory for creating `SQLErrorCodes` based on the contents of a configuration file named `sql-error-codes.xml`. This file is populated with vendor codes and based on the `DatabaseProductName` taken from `DatabaseMetaData`. The codes for the actual database you are using are used.

The `SQLErrorCodeSQLExceptionTranslator` applies matching rules in the following sequence:

* Any custom translation implemented by a subclass. Normally, the provided concrete `SQLErrorCodeSQLExceptionTranslator` is used, so this rule does not apply. It applies only if you have actually provided a subclass implementation.
* Any custom implementation of the `SQLExceptionTranslator` interface that is provided as the `customSqlExceptionTranslator` property of the `SQLErrorCodes` class.
* The list of instances of the `CustomSQLErrorCodesTranslation` class (provided for the `customTranslations` property of the `SQLErrorCodes` class) are searched for a match.
* Error code matching is applied.
* Use the fallback translator. `SQLExceptionSubclassTranslator` is the default fallback translator. If this translation is not available, the next fallback translator is the `SQLStateSQLExceptionTranslator`.

You can extend `SQLErrorCodeSQLExceptionTranslator`, as the following example shows:

```
public class CustomSQLErrorCodesTranslator extends SQLErrorCodeSQLExceptionTranslator {

	protected DataAccessException customTranslate(String task, String sql, SQLException sqlEx) {
		if (sqlEx.getErrorCode() == -12345) {
			return new DeadlockLoserDataAccessException(task, sqlEx);
		}
		return null;
	}
}
```

```
class CustomSQLErrorCodesTranslator : SQLErrorCodeSQLExceptionTranslator() {

	override fun customTranslate(task: String, sql: String?, sqlEx: SQLException): DataAccessException? {
		if (sqlEx.errorCode == -12345) {
			return DeadlockLoserDataAccessException(task, sqlEx)
		}
		return null
	}
}
```

In the preceding example, the specific error code (`-12345`) is translated while other errors are left to be translated by the default translator implementation. To use this custom translator, you must pass it to the `JdbcTemplate` through the `setExceptionTranslator` method, and you must use this `JdbcTemplate` for all of the data access processing where this translator is needed.
The following example shows how you can use this custom translator:

```
private JdbcTemplate jdbcTemplate;

public void setDataSource(DataSource dataSource) {

	// create a JdbcTemplate and set data source
	this.jdbcTemplate = new JdbcTemplate();
	this.jdbcTemplate.setDataSource(dataSource);

	// create a custom translator and set the DataSource for the default translation lookup
	CustomSQLErrorCodesTranslator tr = new CustomSQLErrorCodesTranslator();
	tr.setDataSource(dataSource);
	this.jdbcTemplate.setExceptionTranslator(tr);
}

public void updateShippingCharge(long orderId, long pct) {
	// use the prepared JdbcTemplate for this update
	this.jdbcTemplate.update("update orders" +
			" set shipping_charge = shipping_charge * ? / 100" +
			" where id = ?", pct, orderId);
}
```

```
// create a JdbcTemplate and set data source
private val jdbcTemplate = JdbcTemplate(dataSource).apply {
	// create a custom translator and set the DataSource for the default translation lookup
	exceptionTranslator = CustomSQLErrorCodesTranslator().apply {
		this.dataSource = dataSource
	}
}

fun updateShippingCharge(orderId: Long, pct: Long) {
	// use the prepared JdbcTemplate for this update
	this.jdbcTemplate!!.update("update orders" +
			" set shipping_charge = shipping_charge * ? / 100" +
			" where id = ?", pct, orderId)
}
```

The custom translator is passed a data source in order to look up the error codes in `sql-error-codes.xml`.

## Running Statements

Running an SQL statement requires very little code. You need a `DataSource` and a `JdbcTemplate`, including the convenience methods that are provided with the `JdbcTemplate`. The following example shows what you need to include for a minimal but fully functional class that creates a new table:

```
public class ExecuteAStatement {

	private JdbcTemplate jdbcTemplate;

	public void setDataSource(DataSource dataSource) {
		this.jdbcTemplate = new JdbcTemplate(dataSource);
	}

	public void doExecute() {
		this.jdbcTemplate.execute("create table mytable (id integer, name varchar(100))");
	}
}
```

```
class ExecuteAStatement(dataSource: DataSource) {

	private val jdbcTemplate = JdbcTemplate(dataSource)

	fun doExecute() {
		jdbcTemplate.execute("create table mytable (id integer, name varchar(100))")
	}
}
```

## Running Queries

Some query methods return a single value. To retrieve a count or a specific value from one row, use `queryForObject(..)`. The latter converts the returned JDBC `Type` to the Java class that is passed in as an argument. If the type conversion is invalid, an `InvalidDataAccessApiUsageException` is thrown. The following example contains two query methods, one for an `int` and one that queries for a `String`:

```
public class RunAQuery {

	private JdbcTemplate jdbcTemplate;

	public void setDataSource(DataSource dataSource) {
		this.jdbcTemplate = new JdbcTemplate(dataSource);
	}

	public int getCount() {
		return this.jdbcTemplate.queryForObject("select count(*) from mytable", Integer.class);
	}

	public String getName() {
		return this.jdbcTemplate.queryForObject("select name from mytable", String.class);
	}
}
```

```
class RunAQuery(dataSource: DataSource) {

	private val jdbcTemplate = JdbcTemplate(dataSource)

	val count: Int
		get() = jdbcTemplate.queryForObject("select count(*) from mytable")!!

	val name: String?
		get() = jdbcTemplate.queryForObject("select name from mytable")
}
```

In addition to the single result query methods, several methods return a list with an entry for each row that the query returned. The most generic method is `queryForList(..)`, which returns a `List` where each element is a `Map` containing one entry for each column, using the column name as the key.
If you add a method to the preceding example to retrieve a list of all the rows, it might be as follows:

```
public List<Map<String, Object>> getList() {
	return this.jdbcTemplate.queryForList("select * from mytable");
}
```

```
fun getList(): List<Map<String, Any>> {
	return jdbcTemplate.queryForList("select * from mytable")
}
```

The returned list would resemble the following:

> [{name=Bob, id=1}, {name=Mary, id=2}]

## Updating the Database

The following example updates a column for a certain primary key:

```
public class ExecuteAnUpdate {

	private JdbcTemplate jdbcTemplate;

	public void setDataSource(DataSource dataSource) {
		this.jdbcTemplate = new JdbcTemplate(dataSource);
	}

	public void setName(int id, String name) {
		this.jdbcTemplate.update("update mytable set name = ? where id = ?", name, id);
	}
}
```

```
class ExecuteAnUpdate(dataSource: DataSource) {

	private val jdbcTemplate = JdbcTemplate(dataSource)

	fun setName(id: Int, name: String) {
		jdbcTemplate.update("update mytable set name = ? where id = ?", name, id)
	}
}
```

In the preceding example, an SQL statement has placeholders for row parameters. You can pass the parameter values in as varargs or, alternatively, as an array of objects. Thus, you should explicitly wrap primitives in the primitive wrapper classes, or you should use auto-boxing.

An `update()` convenience method supports the retrieval of primary keys generated by the database. This support is part of the JDBC 3.0 standard. See Chapter 13.6 of the specification for details. The method takes a `PreparedStatementCreator` as its first argument, and this is the way the required insert statement is specified. The other argument is a `KeyHolder`, which contains the generated key on successful return from the update. There is no standard single way to create an appropriate `PreparedStatement` (which explains why the method signature is the way it is). The following example works on Oracle but may not work on other platforms:

```
final String INSERT_SQL = "insert into my_test (name) values(?)";
final String name = "Rob";

KeyHolder keyHolder = new GeneratedKeyHolder();
jdbcTemplate.update(connection -> {
	PreparedStatement ps = connection.prepareStatement(INSERT_SQL, new String[] { "id" });
	ps.setString(1, name);
	return ps;
}, keyHolder);

// keyHolder.getKey() now contains the generated key
```

```
val INSERT_SQL = "insert into my_test (name) values(?)"
val name = "Rob"

val keyHolder = GeneratedKeyHolder()
jdbcTemplate.update({
	it.prepareStatement(INSERT_SQL, arrayOf("id")).apply { setString(1, name) }
}, keyHolder)

// keyHolder.getKey() now contains the generated key
```

`DataSource`

Spring obtains a connection to the database through a `DataSource`. A `DataSource` is part of the JDBC specification and is a generalized connection factory. It lets a container or a framework hide connection pooling and transaction management issues from the application code. As a developer, you need not know details about how to connect to the database. That is the responsibility of the administrator who sets up the datasource. You most likely fill both roles as you develop and test code, but you do not necessarily have to know how the production data source is configured.

When you use Spring's JDBC layer, you can obtain a data source from JNDI, or you can configure your own with a connection pool implementation provided by a third party. Traditional choices are Apache Commons DBCP and C3P0 with bean-style `DataSource` classes; for a modern JDBC connection pool, consider HikariCP with its builder-style API instead.

The following section uses Spring's `DriverManagerDataSource` implementation. Several other `DataSource` variants are covered later.

To configure a `DriverManagerDataSource`:

* Obtain a connection with `DriverManagerDataSource` as you typically obtain a JDBC connection.
* Specify the fully qualified class name of the JDBC driver so that the `DriverManager` can load the driver class.
* Provide a URL that varies between JDBC drivers. (See the documentation for your driver for the correct value.)
* Provide a username and a password to connect to the database.
The following example shows how to configure a `DriverManagerDataSource` in Java:

```
DriverManagerDataSource dataSource = new DriverManagerDataSource();
dataSource.setDriverClassName("org.hsqldb.jdbcDriver");
dataSource.setUrl("jdbc:hsqldb:hsql://localhost:");
dataSource.setUsername("sa");
dataSource.setPassword("");
```

```
val dataSource = DriverManagerDataSource().apply {
	setDriverClassName("org.hsqldb.jdbcDriver")
	url = "jdbc:hsqldb:hsql://localhost:"
	username = "sa"
	password = ""
}
```

The next two examples show the basic connectivity and configuration for DBCP and C3P0. To learn about more options that help control the pooling features, see the product documentation for the respective connection pooling implementations.

The following example shows DBCP configuration:

```
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
	<property name="driverClassName" value="${jdbc.driverClassName}"/>
	<property name="url" value="${jdbc.url}"/>
	<property name="username" value="${jdbc.username}"/>
	<property name="password" value="${jdbc.password}"/>
</bean>
```

The following example shows C3P0 configuration:

```
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
	<property name="driverClass" value="${jdbc.driverClassName}"/>
	<property name="jdbcUrl" value="${jdbc.url}"/>
	<property name="user" value="${jdbc.username}"/>
	<property name="password" value="${jdbc.password}"/>
</bean>
```

`DataSourceUtils`

The `DataSourceUtils` class is a convenient and powerful helper class that provides `static` methods to obtain connections from JNDI and close connections if necessary. It supports a thread-bound JDBC `Connection` (for example, with `DataSourceTransactionManager`). Note that `JdbcTemplate` implies `DataSourceUtils` connection access, using it behind every JDBC operation, implicitly participating in an ongoing transaction.

## Implementing `SmartDataSource`

The `SmartDataSource` interface should be implemented by classes that can provide a connection to a relational database. It extends the `DataSource` interface to let classes that use it query whether the connection should be closed after a given operation. This usage is efficient when you know that you need to reuse a connection.

## Extending `AbstractDataSource`

`AbstractDataSource` is an `abstract` base class for Spring's `DataSource` implementations. It implements code that is common to all `DataSource` implementations. You should extend the `AbstractDataSource` class if you write your own `DataSource` implementation.

The `SingleConnectionDataSource` is primarily a test class. It typically enables easy testing of code outside an application server, in conjunction with a simple JNDI environment. In contrast to `DriverManagerDataSource`, it reuses the same connection all the time, avoiding excessive creation of physical connections.

The `DriverManagerDataSource` class is an implementation of the standard `DataSource` interface that configures a plain JDBC driver through bean properties and returns a new `Connection` every time. This implementation is useful for test and stand-alone environments outside of a Jakarta EE container, either as a `DataSource` bean in a Spring IoC container or in conjunction with a simple JNDI environment. Pool-assuming `Connection.close()` calls close the connection, so any `DataSource`-aware persistence code should work. However, using JavaBean-style connection pools (such as `commons-dbcp`) is so easy, even in a test environment, that it is almost always preferable to use such a connection pool over `DriverManagerDataSource`.

`TransactionAwareDataSourceProxy` is a proxy for a target `DataSource`. The proxy wraps that target `DataSource` to add awareness of Spring-managed transactions. In this respect, it is similar to a transactional JNDI `DataSource`, as provided by a Jakarta EE server. It is rarely desirable to use this class, except when already existing code must be called and passed a standard JDBC `DataSource` interface implementation.

`DataSourceTransactionManager`

The `DataSourceTransactionManager` class is a `PlatformTransactionManager` implementation for a single JDBC `DataSource`.
It binds a JDBC `Connection` from the specified `DataSource` to the currently executing thread, potentially allowing for one thread-bound `Connection` per `DataSource`. Application code is required to retrieve the JDBC `Connection` through `DataSourceUtils.getConnection(DataSource)` instead of Java EE's standard `DataSource.getConnection()`. It throws unchecked exceptions instead of checked `SQLExceptions`. All framework classes (such as `JdbcTemplate`) use this strategy implicitly. If not used with a transaction manager, the lookup strategy behaves exactly like `DataSource.getConnection()` and can therefore be used in any case.

The `DataSourceTransactionManager` class supports savepoints (`PROPAGATION_NESTED`), custom isolation levels, and timeouts that get applied as appropriate JDBC statement query timeouts. To support the latter, application code must either use `JdbcTemplate` or call the `DataSourceUtils.applyTransactionTimeout(..)` method for each created statement.

You can use `DataSourceTransactionManager` instead of `JtaTransactionManager` in the single-resource case, as it does not require the container to support a JTA transaction coordinator. Switching between these transaction managers is just a matter of configuration, provided you stick to the required connection lookup pattern. Note that JTA does not support savepoints or custom isolation levels and has a different timeout mechanism but otherwise exposes similar behavior in terms of JDBC resources and JDBC commit/rollback management.

As of 5.3, Spring provides an extended `JdbcTransactionManager` variant that adds exception translation on commit/rollback (aligned with `JdbcTemplate`). In terms of exception behavior, the two variants are otherwise equivalent and can serve as immediate companions/replacements for each other: `JdbcTransactionManager` differs only in translating commit/rollback failures into Spring's `DataAccessException` hierarchy, as `JdbcTemplate` does for statement execution.

Most JDBC drivers provide improved performance if you batch multiple calls to the same prepared statement. By grouping updates into batches, you limit the number of round trips to the database.

## Basic Batch Operations with `JdbcTemplate`

You accomplish `JdbcTemplate` batch processing by implementing two methods of a special interface, `BatchPreparedStatementSetter`, and passing that implementation in as the second parameter in your `batchUpdate` method call. You can use the `getBatchSize` method to provide the size of the current batch. You can use the `setValues` method to set the values for the parameters of the prepared statement. This method is called the number of times that you specified in the `getBatchSize` call.

The following example updates the `t_actor` table based on entries in a list, and the entire list is used as the batch:

```
public int[] batchUpdate(final List<Actor> actors) {
	return this.jdbcTemplate.batchUpdate(
			"update t_actor set first_name = ?, last_name = ? where id = ?",
			new BatchPreparedStatementSetter() {

				public void setValues(PreparedStatement ps, int i) throws SQLException {
					Actor actor = actors.get(i);
					ps.setString(1, actor.getFirstName());
					ps.setString(2, actor.getLastName());
					ps.setLong(3, actor.getId().longValue());
				}

				public int getBatchSize() {
					return actors.size();
				}
			});
}
```

```
fun batchUpdate(actors: List<Actor>): IntArray {
	return jdbcTemplate.batchUpdate(
			"update t_actor set first_name = ?, last_name = ? where id = ?",
			object : BatchPreparedStatementSetter {

				override fun setValues(ps: PreparedStatement, i: Int) {
					ps.setString(1, actors[i].firstName)
					ps.setString(2, actors[i].lastName)
					ps.setLong(3, actors[i].id)
				}

				override fun getBatchSize() = actors.size
			})
}
```

If you process a stream of updates or read from a file, you might have a preferred batch size, but the last batch might not have that number of entries.
In this case, you can use the `InterruptibleBatchPreparedStatementSetter` interface, which lets you interrupt a batch once the input source is exhausted. The `isBatchExhausted` method lets you signal the end of the batch.

## Batch Operations with a List of Objects

Both the `JdbcTemplate` and the `NamedParameterJdbcTemplate` provide an alternate way of providing the batch update. Instead of implementing a special batch interface, you provide all parameter values in the call as a list. The framework loops over these values and uses an internal prepared statement setter. The API varies, depending on whether you use named parameters. For the named parameters, you provide an array of `SqlParameterSource`, one entry for each member of the batch. You can use the `SqlParameterSourceUtils.createBatch` convenience method to create this array, passing in an array of bean-style objects (with getter methods corresponding to parameters), `String`-keyed `Map` instances (containing the corresponding parameters as values), or a mix of both.

The following example shows a batch update using named parameters:

```
private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

public int[] batchUpdate(List<Actor> actors) {
	return this.namedParameterJdbcTemplate.batchUpdate(
			"update t_actor set first_name = :firstName, last_name = :lastName where id = :id",
			SqlParameterSourceUtils.createBatch(actors));
}
```

```
fun batchUpdate(actors: List<Actor>): IntArray {
	return this.namedParameterJdbcTemplate.batchUpdate(
			"update t_actor set first_name = :firstName, last_name = :lastName where id = :id",
			SqlParameterSourceUtils.createBatch(actors))
}
```

For an SQL statement that uses the classic `?` placeholders, you pass in a list containing an object array with the update values. This object array must have one entry for each placeholder in the SQL statement, and they must be in the same order as they are defined in the SQL statement.

The following example is the same as the preceding example, except that it uses classic JDBC `?` placeholders:

```
public int[] batchUpdate(final List<Actor> actors) {
	List<Object[]> batch = new ArrayList<>();
	for (Actor actor : actors) {
		Object[] values = new Object[] {
				actor.getFirstName(), actor.getLastName(), actor.getId()};
		batch.add(values);
	}
	return this.jdbcTemplate.batchUpdate(
			"update t_actor set first_name = ?, last_name = ? where id = ?",
			batch);
}
```

```
fun batchUpdate(actors: List<Actor>): IntArray {
	val batch = mutableListOf<Array<Any>>()
	for (actor in actors) {
		batch.add(arrayOf(actor.firstName, actor.lastName, actor.id))
	}
	return jdbcTemplate.batchUpdate(
			"update t_actor set first_name = ?, last_name = ? where id = ?",
			batch)
}
```

All of the batch update methods that we described earlier return an `int` array containing the number of affected rows for each batch entry. This count is reported by the JDBC driver. If the count is not available, the JDBC driver returns a value of `-2`.

## Batch Operations with Multiple Batches

The preceding example of a batch update deals with batches that are so large that you want to break them up into several smaller batches. You can do this with the methods mentioned earlier by making multiple calls to the `batchUpdate` method, but there is now a more convenient method. This method takes, in addition to the SQL statement, a `Collection` of objects that contain the parameters, the number of updates to make for each batch, and a `ParameterizedPreparedStatementSetter` to set the values for the parameters of the prepared statement.
The framework loops over the provided values and breaks the update calls into batches of the size specified. The following example shows a batch update that uses a batch size of 100:

```
public int[][] batchUpdate(final Collection<Actor> actors) {
    int[][] updateCounts = jdbcTemplate.batchUpdate(
            "update t_actor set first_name = ?, last_name = ? where id = ?",
            actors,
            100,
            (PreparedStatement ps, Actor actor) -> {
                ps.setString(1, actor.getFirstName());
                ps.setString(2, actor.getLastName());
                ps.setLong(3, actor.getId().longValue());
            });
    return updateCounts;
}
```

```
fun batchUpdate(actors: List<Actor>): Array<IntArray> {
    return jdbcTemplate.batchUpdate(
            "update t_actor set first_name = ?, last_name = ? where id = ?",
            actors, 100) { ps, argument ->
        ps.setString(1, argument.firstName)
        ps.setString(2, argument.lastName)
        ps.setLong(3, argument.id)
    }
}
```

The batch update method for this call returns an array of `int` arrays that contains an array entry for each batch with an array of the number of affected rows for each update. The top-level array's length indicates the number of batches run, and the second-level array's length indicates the number of updates in that batch. The number of updates in each batch should be the batch size provided for all batches (except possibly the last one, which might be smaller), depending on the total number of update objects provided. The update count for each update statement is the one reported by the JDBC driver. If the count is not available, the JDBC driver returns a value of `-2`.

# Simplifying JDBC Operations with the `SimpleJdbc` Classes

The `SimpleJdbcInsert` and `SimpleJdbcCall` classes provide simplified configuration by taking advantage of database metadata that can be retrieved through the JDBC driver. This means that you have less to configure up front, although you can override or turn off the metadata processing if you prefer to provide all the details in your code.

## Inserting Data by Using `SimpleJdbcInsert`

We start by looking at the `SimpleJdbcInsert` class with the minimal amount of configuration options. You should instantiate the `SimpleJdbcInsert` in the data access layer's initialization method. For this example, the initializing method is the `setDataSource` method. You do not need to subclass the `SimpleJdbcInsert` class. Instead, you can create a new instance and set the table name by using the `withTableName` method. Configuration methods for this class follow the `fluid` style that returns the instance of the `SimpleJdbcInsert`, which lets you chain all configuration methods. The following example uses only one configuration method (we show examples of multiple methods later):

```
public void setDataSource(DataSource dataSource) {
    this.insertActor = new SimpleJdbcInsert(dataSource).withTableName("t_actor");
}

public void add(Actor actor) {
    Map<String, Object> parameters = new HashMap<>(3);
    parameters.put("id", actor.getId());
    parameters.put("first_name", actor.getFirstName());
    parameters.put("last_name", actor.getLastName());
    insertActor.execute(parameters);
}
```

```
private val insertActor = SimpleJdbcInsert(dataSource).withTableName("t_actor")

fun add(actor: Actor) {
    val parameters = mutableMapOf<String, Any>()
    parameters["id"] = actor.id
    parameters["first_name"] = actor.firstName
    parameters["last_name"] = actor.lastName
    insertActor.execute(parameters)
}
```

The `execute` method used here takes a plain `java.util.Map` as its only parameter.
The important thing to note here is that the keys used for the `Map` must match the column names of the table, as defined in the database. This is because we read the metadata to construct the actual insert statement.

## Retrieving Auto-generated Keys by Using `SimpleJdbcInsert`

The next example uses the same insert as the preceding example, but, instead of passing in the `id`, it retrieves the auto-generated key and sets it on the new `Actor` object. When it creates the `SimpleJdbcInsert`, in addition to specifying the table name, it specifies the name of the generated key column with the `usingGeneratedKeyColumns` method. The following listing shows how it works:

```
private val insertActor = SimpleJdbcInsert(dataSource)
        .withTableName("t_actor").usingGeneratedKeyColumns("id")
```

The main difference when you run the insert by using this second approach is that you do not add the `id` to the `Map`, and you call the `executeAndReturnKey` method. This returns a `java.lang.Number` object with which you can create an instance of the numerical type that is used in your domain class. You cannot rely on all databases to return a specific Java class here. `java.lang.Number` is the base class that you can rely on. If you have multiple auto-generated columns or the generated values are non-numeric, you can use a `KeyHolder` that is returned from the `executeAndReturnKeyHolder` method.

## Specifying Columns for a `SimpleJdbcInsert`

You can limit the columns for an insert by specifying a list of column names with the `usingColumns` method (for example, `usingColumns("first_name", "last_name")`). The execution of the insert is the same as if you had relied on the metadata to determine which columns to use.

## Using `SqlParameterSource` to Provide Parameter Values

Using a `Map` to provide parameter values works fine, but it is not the most convenient class to use. Spring provides a couple of implementations of the `SqlParameterSource` interface that you can use instead. The first one is `BeanPropertySqlParameterSource`, which is a very convenient class if you have a JavaBean-compliant class that contains your values. It uses the corresponding getter method to extract the parameter values. The following example shows how to use it:

```
public void add(Actor actor) {
    SqlParameterSource parameters = new BeanPropertySqlParameterSource(actor);
    Number newId = insertActor.executeAndReturnKey(parameters);
    actor.setId(newId.longValue());
}
```

```
fun add(actor: Actor): Actor {
    val parameters = BeanPropertySqlParameterSource(actor)
    val newId = insertActor.executeAndReturnKey(parameters)
    return actor.copy(id = newId.toLong())
}
```

Another option is the `MapSqlParameterSource`, which resembles a `Map` but provides a more convenient `addValue` method that can be chained. The following example shows how to use it:

```
public void add(Actor actor) {
    SqlParameterSource parameters = new MapSqlParameterSource()
            .addValue("first_name", actor.getFirstName())
            .addValue("last_name", actor.getLastName());
    Number newId = insertActor.executeAndReturnKey(parameters);
    actor.setId(newId.longValue());
}
```

```
fun add(actor: Actor): Actor {
    val parameters = MapSqlParameterSource()
            .addValue("first_name", actor.firstName)
            .addValue("last_name", actor.lastName)
    val newId = insertActor.executeAndReturnKey(parameters)
    return actor.copy(id = newId.toLong())
}
```

As you can see, the configuration is the same. Only the executing code has to change to use these alternative input classes.
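Putting these configuration options together, the following is a minimal sketch (assuming the `t_actor` table and a JavaBean-style `Actor` class from the earlier examples) of an insert that limits the columns, retrieves the generated key, and sources its values from the bean:

```
import javax.sql.DataSource;
import org.springframework.jdbc.core.namedparam.BeanPropertySqlParameterSource;
import org.springframework.jdbc.core.simple.SimpleJdbcInsert;

public class ActorDao {

    private SimpleJdbcInsert insertActor;

    public void setDataSource(DataSource dataSource) {
        // Insert only the two name columns; the "id" key is generated by the database.
        this.insertActor = new SimpleJdbcInsert(dataSource)
                .withTableName("t_actor")
                .usingColumns("first_name", "last_name")
                .usingGeneratedKeyColumns("id");
    }

    public void add(Actor actor) {
        // Parameter values come from the bean's getters (getFirstName(), getLastName()).
        Number newId = insertActor.executeAndReturnKey(new BeanPropertySqlParameterSource(actor));
        actor.setId(newId.longValue());
    }
}
```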
## Calling a Stored Procedure with `SimpleJdbcCall`

The `SimpleJdbcCall` class uses metadata in the database to look up names of `in` and `out` parameters so that you do not have to explicitly declare them. You can declare parameters if you prefer to do that or if you have parameters (such as `ARRAY` or `STRUCT`) that do not have an automatic mapping to a Java class. The first example shows a simple procedure that returns only scalar values in `VARCHAR` and `DATE` format from a MySQL database. The example procedure reads a specified actor entry and returns `first_name`, `last_name`, and `birth_date` columns in the form of `out` parameters. The following listing shows the first example:

```
CREATE PROCEDURE read_actor (
    IN in_id INTEGER,
    OUT out_first_name VARCHAR(100),
    OUT out_last_name VARCHAR(100),
    OUT out_birth_date DATE)
BEGIN
    SELECT first_name, last_name, birth_date
    INTO out_first_name, out_last_name, out_birth_date
    FROM t_actor where id = in_id;
END;
```

The `in_id` parameter contains the `id` of the actor that you are looking up. The `out` parameters return the data read from the table.

You can declare `SimpleJdbcCall` in a manner similar to declaring `SimpleJdbcInsert`. You should instantiate and configure the class in the initialization method of your data-access layer. Compared to the `StoredProcedure` class, you need not create a subclass and you need not declare parameters that can be looked up in the database metadata. The following example of a `SimpleJdbcCall` configuration uses the preceding stored procedure (the only configuration option, in addition to the `DataSource`, is the name of the stored procedure):

```
public void setDataSource(DataSource dataSource) {
    this.procReadActor = new SimpleJdbcCall(dataSource)
            .withProcedureName("read_actor");
}

public Actor readActor(Long id) {
    SqlParameterSource in = new MapSqlParameterSource()
            .addValue("in_id", id);
    Map out = procReadActor.execute(in);
    Actor actor = new Actor();
    actor.setId(id);
    actor.setFirstName((String) out.get("out_first_name"));
    actor.setLastName((String) out.get("out_last_name"));
    actor.setBirthDate((Date) out.get("out_birth_date"));
    return actor;
}
```

```
private val procReadActor = SimpleJdbcCall(dataSource)
        .withProcedureName("read_actor")

fun readActor(id: Long): Actor {
    val source = MapSqlParameterSource().addValue("in_id", id)
    val output = procReadActor.execute(source)
    return Actor(
            id,
            output["out_first_name"] as String,
            output["out_last_name"] as String,
            output["out_birth_date"] as Date)
}
```

The code you write for the execution of the call involves creating an `SqlParameterSource` containing the IN parameter. You must match the name provided for the input value with that of the parameter name declared in the stored procedure. The case does not have to match, because you use metadata to determine how database objects should be referred to in a stored procedure. What is specified in the source for the stored procedure is not necessarily the way it is stored in the database. Some databases transform names to all upper case, while others use lower case or use the case as specified.

The `execute` method takes the IN parameters and returns a `Map` that contains any `out` parameters keyed by the name, as specified in the stored procedure. In this case, they are `out_first_name`, `out_last_name`, and `out_birth_date`. The last part of the `execute` method creates an `Actor` instance to use to return the data retrieved.
Again, it is important to use the names of the `out` parameters as they are declared in the stored procedure. Also, the case in the names of the `out` parameters stored in the results map matches that of the `out` parameter names in the database, which could vary between databases. To make your code more portable, you should do a case-insensitive lookup or instruct Spring to use a `LinkedCaseInsensitiveMap`. To do the latter, you can create your own `JdbcTemplate` and set the `setResultsMapCaseInsensitive` property to `true`. Then you can pass this customized `JdbcTemplate` instance into the constructor of your `SimpleJdbcCall`. The following example shows this configuration:

```
public void setDataSource(DataSource dataSource) {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    jdbcTemplate.setResultsMapCaseInsensitive(true);
    this.procReadActor = new SimpleJdbcCall(jdbcTemplate)
            .withProcedureName("read_actor");
}
```

```
private var procReadActor = SimpleJdbcCall(JdbcTemplate(dataSource).apply {
    isResultsMapCaseInsensitive = true
}).withProcedureName("read_actor")
```

By taking this action, you avoid conflicts in the case used for the names of your returned `out` parameters.

## Explicitly Declaring Parameters to Use for a `SimpleJdbcCall`

Earlier in this chapter, we described how parameters are deduced from metadata, but you can declare them explicitly if you wish. You can do so by creating and configuring `SimpleJdbcCall` with the `declareParameters` method, which takes a variable number of `SqlParameter` objects as input. See the next section for details on how to define an `SqlParameter`.

Explicit declarations are necessary if the database you use is not a Spring-supported database. Currently, Spring supports metadata lookup of stored procedure calls for the following databases: Apache Derby, DB2, MySQL, Microsoft SQL Server, Oracle, and Sybase. We also support metadata lookup of stored functions for MySQL, Microsoft SQL Server, and Oracle.

You can opt to explicitly declare one, some, or all of the parameters. The parameter metadata is still used where you do not explicitly declare parameters. To bypass all processing of metadata lookups for potential parameters and use only the declared parameters, you can call the `withoutProcedureColumnMetaDataAccess` method as part of the declaration. Suppose that you have two or more different call signatures declared for a database function. In this case, you call `useInParameterNames` to specify the list of IN parameter names to include for a given signature.
The following example shows a fully declared procedure call and uses the information from the preceding example:

```
public void setDataSource(DataSource dataSource) {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    jdbcTemplate.setResultsMapCaseInsensitive(true);
    this.procReadActor = new SimpleJdbcCall(jdbcTemplate)
            .withProcedureName("read_actor")
            .withoutProcedureColumnMetaDataAccess()
            .useInParameterNames("in_id")
            .declareParameters(
                    new SqlParameter("in_id", Types.NUMERIC),
                    new SqlOutParameter("out_first_name", Types.VARCHAR),
                    new SqlOutParameter("out_last_name", Types.VARCHAR),
                    new SqlOutParameter("out_birth_date", Types.DATE)
            );
}
```

```
private val procReadActor = SimpleJdbcCall(JdbcTemplate(dataSource).apply {
    isResultsMapCaseInsensitive = true
}).withProcedureName("read_actor")
        .withoutProcedureColumnMetaDataAccess()
        .useInParameterNames("in_id")
        .declareParameters(
                SqlParameter("in_id", Types.NUMERIC),
                SqlOutParameter("out_first_name", Types.VARCHAR),
                SqlOutParameter("out_last_name", Types.VARCHAR),
                SqlOutParameter("out_birth_date", Types.DATE)
        )
```

The execution and end results of the two examples are the same. The second example specifies all details explicitly rather than relying on metadata.

## How to Define `SqlParameters`

To define a parameter for the `SimpleJdbc` classes and also for the RDBMS operations classes (covered in Modeling JDBC Operations as Java Objects), you can use `SqlParameter` or one of its subclasses. To do so, you typically specify the parameter name and SQL type in the constructor. The SQL type is specified by using the `java.sql.Types` constants. Earlier in this chapter, we saw declarations similar to `new SqlParameter("in_id", Types.NUMERIC)` and `new SqlOutParameter("out_first_name", Types.VARCHAR)`. The first declaration (with `SqlParameter`) declares an IN parameter. You can use IN parameters for both stored procedure calls and for queries by using the `SqlQuery` and its subclasses (covered in Understanding `SqlQuery`). The second declaration (with `SqlOutParameter`) declares an `out` parameter to be used in a stored procedure call. There is also an `SqlInOutParameter` for `InOut` parameters (parameters that provide an IN value to the procedure and that also return a value). Only parameters declared as `SqlParameter` and `SqlInOutParameter` are used to provide input values.

For IN parameters, in addition to the name and the SQL type, you can specify a scale for numeric data or a type name for custom database types. For `out` parameters, you can provide a `RowMapper` to handle mapping of rows returned from a `REF` cursor. Another option is to specify an `SqlReturnType` that provides an opportunity to define customized handling of the return values.

## Calling a Stored Function by Using `SimpleJdbcCall`

You can call a stored function in almost the same way as you call a stored procedure, except that you provide a function name rather than a procedure name. You use the `withFunctionName` method as part of the configuration to indicate that you want to make a call to a function, and the corresponding string for a function call is generated. A specialized call (`executeFunction`) is used to run the function, and it returns the function return value as an object of a specified type, which means you do not have to retrieve the return value from the results map. A similar convenience method (named `executeObject`) is also available for stored procedures that have only one `out` parameter.
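As a sketch of that `executeObject` convenience, under the assumption that `procReadActorName` is a `SimpleJdbcCall` configured for a hypothetical `read_actor_name` procedure with a single `out` parameter:

```
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.SqlParameterSource;

// procReadActorName is assumed to be configured with .withProcedureName("read_actor_name")
SqlParameterSource in = new MapSqlParameterSource().addValue("in_id", 1L);
// executeObject returns the single out parameter directly, without going through the results Map.
String name = procReadActorName.executeObject(String.class, in);
```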
The following example (for MySQL) is based on a stored function named `get_actor_name` that returns an actor's full name:

```
CREATE FUNCTION get_actor_name (in_id INTEGER)
RETURNS VARCHAR(200) READS SQL DATA
BEGIN
    DECLARE out_name VARCHAR(200);
    SELECT concat(first_name, ' ', last_name)
        INTO out_name
        FROM t_actor where id = in_id;
    RETURN out_name;
END;
```

To call this function, we again create a `SimpleJdbcCall` in the initialization method, as the following example shows:

```
private SimpleJdbcCall funcGetActorName;

public void setDataSource(DataSource dataSource) {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    jdbcTemplate.setResultsMapCaseInsensitive(true);
    this.funcGetActorName = new SimpleJdbcCall(jdbcTemplate)
            .withFunctionName("get_actor_name");
}

public String getActorName(Long id) {
    SqlParameterSource in = new MapSqlParameterSource()
            .addValue("in_id", id);
    String name = funcGetActorName.executeFunction(String.class, in);
    return name;
}
```

```
private val jdbcTemplate = JdbcTemplate(dataSource).apply {
    isResultsMapCaseInsensitive = true
}
private val funcGetActorName = SimpleJdbcCall(jdbcTemplate)
        .withFunctionName("get_actor_name")

fun getActorName(id: Long): String {
    val source = MapSqlParameterSource().addValue("in_id", id)
    return funcGetActorName.executeFunction(String::class.java, source)
}
```

The `executeFunction` method used here returns a `String` that contains the return value from the function call.

## Returning a `ResultSet` or REF Cursor from a `SimpleJdbcCall`

Calling a stored procedure or function that returns a result set is a bit tricky. Some databases return result sets during the JDBC results processing, while others require an explicitly registered `out` parameter of a specific type. Both approaches need additional processing to loop over the result set and process the returned rows. With the `SimpleJdbcCall`, you can use the `returningResultSet` method and declare a `RowMapper` implementation to be used for a specific parameter. If the result set is returned during the results processing, there are no names defined, so the returned results must match the order in which you declare the `RowMapper` implementations. The name specified is still used to store the processed list of results in the results map that is returned from the `execute` statement.

The next example (for MySQL) uses a stored procedure that takes no IN parameters and returns all rows from the `t_actor` table:

```
CREATE PROCEDURE read_all_actors()
BEGIN
    SELECT a.id, a.first_name, a.last_name, a.birth_date FROM t_actor a;
END;
```

To call this procedure, you can declare the `RowMapper`. Because the class to which you want to map follows the JavaBean rules, you can use a `BeanPropertyRowMapper` that is created by passing in the required class to map to in the `newInstance` method.
The following example shows how to do so:

```
private SimpleJdbcCall procReadAllActors;

public void setDataSource(DataSource dataSource) {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    jdbcTemplate.setResultsMapCaseInsensitive(true);
    this.procReadAllActors = new SimpleJdbcCall(jdbcTemplate)
            .withProcedureName("read_all_actors")
            .returningResultSet("actors",
                    BeanPropertyRowMapper.newInstance(Actor.class));
}

public List getActorsList() {
    Map m = procReadAllActors.execute(new HashMap<String, Object>(0));
    return (List) m.get("actors");
}
```

```
private val procReadAllActors = SimpleJdbcCall(JdbcTemplate(dataSource).apply {
    isResultsMapCaseInsensitive = true
}).withProcedureName("read_all_actors")
        .returningResultSet("actors", BeanPropertyRowMapper.newInstance(Actor::class.java))

fun getActorsList(): List<Actor> {
    val m = procReadAllActors.execute(mapOf<String, Any>())
    return m["actors"] as List<Actor>
}
```

The `execute` call passes in an empty `Map`, because this call does not take any parameters. The list of actors is then retrieved from the results map and returned to the caller.

# Modeling JDBC Operations as Java Objects

The `org.springframework.jdbc.object` package contains classes that let you access the database in a more object-oriented manner. As an example, you can run queries and get the results back as a list that contains business objects with the relational column data mapped to the properties of the business object. You can also run stored procedures and run update, delete, and insert statements.

## Understanding `SqlQuery`

`SqlQuery` is a reusable, thread-safe class that encapsulates an SQL query. Subclasses must implement the `newRowMapper(..)` method to provide a `RowMapper` instance that can create one object per row obtained from iterating over the `ResultSet` that is created during the execution of the query. The `SqlQuery` class is rarely used directly, because the `MappingSqlQuery` subclass provides a much more convenient implementation for mapping rows to Java classes. Other implementations that extend `SqlQuery` are `MappingSqlQueryWithParameters` and `UpdatableSqlQuery`.

## Using `MappingSqlQuery`

`MappingSqlQuery` is a reusable query in which concrete subclasses must implement the abstract `mapRow(..)` method to convert each row of the supplied `ResultSet` into an object of the type specified. The following example shows a custom query that maps the data from the `t_actor` relation to an instance of the `Actor` class:

```
public class ActorMappingQuery extends MappingSqlQuery<Actor> {

    public ActorMappingQuery(DataSource ds) {
        super(ds, "select id, first_name, last_name from t_actor where id = ?");
        declareParameter(new SqlParameter("id", Types.INTEGER));
        compile();
    }

    @Override
    protected Actor mapRow(ResultSet rs, int rowNumber) throws SQLException {
        Actor actor = new Actor();
        actor.setId(rs.getLong("id"));
        actor.setFirstName(rs.getString("first_name"));
        actor.setLastName(rs.getString("last_name"));
        return actor;
    }
}
```

```
class ActorMappingQuery(ds: DataSource) : MappingSqlQuery<Actor>(ds,
        "select id, first_name, last_name from t_actor where id = ?") {

    init {
        declareParameter(SqlParameter("id", Types.INTEGER))
        compile()
    }

    override fun mapRow(rs: ResultSet, rowNumber: Int) = Actor(
            rs.getLong("id"),
            rs.getString("first_name"),
            rs.getString("last_name")
    )
}
```

The class extends `MappingSqlQuery` parameterized with the `Actor` type. The constructor for this custom query takes a `DataSource` as the only parameter.
In this constructor, you can call the constructor on the superclass with the `DataSource` and the SQL that should be run to retrieve the rows for this query. This SQL is used to create a `PreparedStatement`, so it may contain placeholders for any parameters to be passed in during execution. You must declare each parameter by using the `declareParameter` method, passing in an `SqlParameter`. The `SqlParameter` takes a name and the JDBC type as defined in `java.sql.Types`. After you define all parameters, you can call the `compile()` method so that the statement can be prepared and later run. This class is thread-safe after it is compiled, so, as long as these instances are created when the DAO is initialized, they can be kept as instance variables and be reused. The following example shows how to define such a class:

```
private ActorMappingQuery actorMappingQuery;

@Autowired
public void setDataSource(DataSource dataSource) {
    this.actorMappingQuery = new ActorMappingQuery(dataSource);
}

public Actor getActor(Long id) {
    return actorMappingQuery.findObject(id);
}
```

```
private val actorMappingQuery = ActorMappingQuery(dataSource)

fun getActor(id: Long) = actorMappingQuery.findObject(id)
```

The method in the preceding example retrieves the actor with the `id` that is passed in as the only parameter. Since we want only one object to be returned, we call the `findObject` convenience method with the `id` as the parameter. If we had instead a query that returned a list of objects and took additional parameters, we would use one of the `execute` methods that takes an array of parameter values passed in as varargs. The following example shows such a method:

```
public List<Actor> searchForActors(int age, String namePattern) {
    return actorSearchMappingQuery.execute(age, namePattern);
}
```

```
fun searchForActors(age: Int, namePattern: String) =
        actorSearchMappingQuery.execute(age, namePattern)
```

## Using `SqlUpdate`

The `SqlUpdate` class encapsulates an SQL update. As with a query, an update object is reusable, and, as with all `RdbmsOperation` classes, an update can have parameters and is defined in SQL. This class provides a number of `update(..)` methods analogous to the `execute(..)` methods of query objects. The `SqlUpdate` class is concrete. It can be subclassed (for example, to add a custom update method). However, you do not have to subclass the `SqlUpdate` class, since it can easily be parameterized by setting SQL and declaring parameters. The following example creates a custom update method named `execute`:

```
import java.sql.Types;
import javax.sql.DataSource;
import org.springframework.jdbc.core.SqlParameter;
import org.springframework.jdbc.object.SqlUpdate;

public class UpdateCreditRating extends SqlUpdate {

    public UpdateCreditRating(DataSource ds) {
        setDataSource(ds);
        setSql("update customer set credit_rating = ? where id = ?");
        declareParameter(new SqlParameter("creditRating", Types.NUMERIC));
        declareParameter(new SqlParameter("id", Types.NUMERIC));
        compile();
    }

    /**
     * @param id for the Customer to be updated
     * @param rating the new value for credit rating
     * @return number of rows updated
     */
    public int execute(int id, int rating) {
        return update(rating, id);
    }
}
```

```
import java.sql.Types
import javax.sql.DataSource
import org.springframework.jdbc.core.SqlParameter
import org.springframework.jdbc.`object`.SqlUpdate

class UpdateCreditRating(ds: DataSource) : SqlUpdate() {

    init {
        setDataSource(ds)
        sql = "update customer set credit_rating = ? where id = ?"
        declareParameter(SqlParameter("creditRating", Types.NUMERIC))
        declareParameter(SqlParameter("id", Types.NUMERIC))
        compile()
    }

    /**
     * @param id for the Customer to be updated
     * @param rating the new value for credit rating
     * @return number of rows updated
     */
    fun execute(id: Int, rating: Int): Int {
        return update(rating, id)
    }
}
```

## Using `StoredProcedure`

The `StoredProcedure` class is an `abstract` superclass for object abstractions of RDBMS stored procedures. The inherited `sql` property is the name of the stored procedure in the RDBMS.

To define a parameter for the `StoredProcedure` class, you can use an `SqlParameter` or one of its subclasses. You must specify the parameter name and SQL type in the constructor, as in `new SqlParameter("in_id", Types.NUMERIC)` or `new SqlOutParameter("out_first_name", Types.VARCHAR)`. The SQL type is specified using the `java.sql.Types` constants. A declaration with `SqlParameter` declares an IN parameter. You can use IN parameters both for stored procedure calls and for queries using the `SqlQuery` and its subclasses (covered in Understanding `SqlQuery`). A declaration with `SqlOutParameter` declares an `out` parameter to be used in the stored procedure call. There is also an `SqlInOutParameter` for `InOut` parameters (parameters that provide an `in` value to the procedure and that also return a value).

For `in` parameters, in addition to the name and the SQL type, you can specify a scale for numeric data or a type name for custom database types. For `out` parameters, you can provide a `RowMapper` to handle mapping of rows returned from a `REF` cursor. Another option is to specify an `SqlReturnType` that lets you define customized handling of the return values.

The next example of a simple DAO uses a `StoredProcedure` to call a function (`sysdate()`), which comes with any Oracle database. To use the stored procedure functionality, you have to create a class that extends `StoredProcedure`. In this example, the `StoredProcedure` class is an inner class. However, if you need to reuse the `StoredProcedure`, you can declare it as a top-level class. This example has no input parameters, but an output parameter is declared as a date type by using the `SqlOutParameter` class. The `execute()` method runs the procedure and extracts the returned date from the results `Map`. The results `Map` has an entry for each declared output parameter (in this case, only one) by using the parameter name as the key. The following listing shows our custom StoredProcedure class:

```
import java.sql.Types;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.SqlOutParameter;
import org.springframework.jdbc.object.StoredProcedure;

public class StoredProcedureDao {

    private GetSysdateProcedure getSysdate;

    @Autowired
    public void init(DataSource dataSource) {
        this.getSysdate = new GetSysdateProcedure(dataSource);
    }

    public Date getSysdate() {
        return getSysdate.execute();
    }

    private class GetSysdateProcedure extends StoredProcedure {

        private static final String SQL = "sysdate";

        public GetSysdateProcedure(DataSource dataSource) {
            setDataSource(dataSource);
            setFunction(true);
            setSql(SQL);
            declareParameter(new SqlOutParameter("date", Types.DATE));
            compile();
        }

        public Date execute() {
            // the 'sysdate' sproc has no input parameters, so an empty Map is supplied...
            Map<String, Object> results = execute(new HashMap<String, Object>());
            Date sysdate = (Date) results.get("date");
            return sysdate;
        }
    }
}
```

```
import java.sql.Types
import java.util.Date
import javax.sql.DataSource
import org.springframework.jdbc.core.SqlOutParameter
import org.springframework.jdbc.`object`.StoredProcedure

class StoredProcedureDao(dataSource: DataSource) {

    private val SQL = "sysdate"

    private val getSysdate = GetSysdateProcedure(dataSource)

    val sysdate: Date
        get() = getSysdate.execute()

    private inner class GetSysdateProcedure(dataSource: DataSource) : StoredProcedure() {

        init {
            setDataSource(dataSource)
            isFunction = true
            sql = SQL
            declareParameter(SqlOutParameter("date", Types.DATE))
            compile()
        }

        fun execute(): Date {
            // the 'sysdate' sproc has no input parameters, so an empty Map is supplied...
            val results = execute(mutableMapOf<String, Any>())
            return results["date"] as Date
        }
    }
}
```

The following example of a `StoredProcedure` has two output parameters (in this case, Oracle REF cursors):

```
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import oracle.jdbc.OracleTypes;
import org.springframework.jdbc.core.SqlOutParameter;
import org.springframework.jdbc.object.StoredProcedure;

public class TitlesAndGenresStoredProcedure extends StoredProcedure {

    private static final String SPROC_NAME = "AllTitlesAndGenres";

    public TitlesAndGenresStoredProcedure(DataSource dataSource) {
        super(dataSource, SPROC_NAME);
        declareParameter(new SqlOutParameter("titles", OracleTypes.CURSOR, new TitleMapper()));
        declareParameter(new SqlOutParameter("genres", OracleTypes.CURSOR, new GenreMapper()));
        compile();
    }

    public Map<String, Object> execute() {
        // again, this sproc has no input parameters, so an empty Map is supplied
        return super.execute(new HashMap<String, Object>());
    }
}
```

```
import java.util.HashMap
import javax.sql.DataSource
import oracle.jdbc.OracleTypes
import org.springframework.jdbc.core.SqlOutParameter
import org.springframework.jdbc.`object`.StoredProcedure

class TitlesAndGenresStoredProcedure(dataSource: DataSource) : StoredProcedure(dataSource, SPROC_NAME) {

    companion object {
        private const val SPROC_NAME = "AllTitlesAndGenres"
    }

    init {
        declareParameter(SqlOutParameter("titles", OracleTypes.CURSOR, TitleMapper()))
        declareParameter(SqlOutParameter("genres", OracleTypes.CURSOR, GenreMapper()))
        compile()
    }

    fun execute(): Map<String, Any> {
        // again, this sproc has no input parameters, so an empty Map is supplied
        return super.execute(HashMap<String, Any>())
    }
}
```

Notice how the overloaded variants of the `declareParameter(..)` method that have been used in the `TitlesAndGenresStoredProcedure` constructor are passed `RowMapper` implementation instances. This is a very convenient and powerful way to reuse existing functionality. The next two examples provide code for the two `RowMapper` implementations.
The `TitleMapper` class maps a `ResultSet` to a `Title` domain object for each row in the supplied `ResultSet`, as follows:

```
import java.sql.ResultSet;
import java.sql.SQLException;
import com.foo.domain.Title;
import org.springframework.jdbc.core.RowMapper;

public final class TitleMapper implements RowMapper<Title> {

    public Title mapRow(ResultSet rs, int rowNum) throws SQLException {
        Title title = new Title();
        title.setId(rs.getLong("id"));
        title.setName(rs.getString("name"));
        return title;
    }
}
```

```
import java.sql.ResultSet
import com.foo.domain.Title
import org.springframework.jdbc.core.RowMapper

class TitleMapper : RowMapper<Title> {

    override fun mapRow(rs: ResultSet, rowNum: Int) =
            Title(rs.getLong("id"), rs.getString("name"))
}
```

The `GenreMapper` class maps a `ResultSet` to a `Genre` domain object for each row in the supplied `ResultSet`, as follows:

```
import java.sql.ResultSet;
import java.sql.SQLException;
import com.foo.domain.Genre;
import org.springframework.jdbc.core.RowMapper;

public final class GenreMapper implements RowMapper<Genre> {

    public Genre mapRow(ResultSet rs, int rowNum) throws SQLException {
        return new Genre(rs.getString("name"));
    }
}
```

```
import java.sql.ResultSet
import com.foo.domain.Genre
import org.springframework.jdbc.core.RowMapper

class GenreMapper : RowMapper<Genre> {

    override fun mapRow(rs: ResultSet, rowNum: Int): Genre {
        return Genre(rs.getString("name"))
    }
}
```

To pass parameters to a stored procedure that has one or more input parameters in its definition in the RDBMS, you can code a strongly typed `execute(..)` method that would delegate to the untyped `execute(Map)` method in the superclass, as the following example shows:

```
import java.sql.Types;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import oracle.jdbc.OracleTypes;
import org.springframework.jdbc.core.SqlOutParameter;
import org.springframework.jdbc.core.SqlParameter;
import org.springframework.jdbc.object.StoredProcedure;

public class TitlesAfterDateStoredProcedure extends StoredProcedure {

    private static final String SPROC_NAME = "TitlesAfterDate";
    private static final String CUTOFF_DATE_PARAM = "cutoffDate";

    public TitlesAfterDateStoredProcedure(DataSource dataSource) {
        super(dataSource, SPROC_NAME);
        declareParameter(new SqlParameter(CUTOFF_DATE_PARAM, Types.DATE));
        declareParameter(new SqlOutParameter("titles", OracleTypes.CURSOR, new TitleMapper()));
        compile();
    }

    public Map<String, Object> execute(Date cutoffDate) {
        Map<String, Object> inputs = new HashMap<String, Object>();
        inputs.put(CUTOFF_DATE_PARAM, cutoffDate);
        return super.execute(inputs);
    }
}
```

```
import java.sql.Types
import java.util.Date
import javax.sql.DataSource
import oracle.jdbc.OracleTypes
import org.springframework.jdbc.core.SqlOutParameter
import org.springframework.jdbc.core.SqlParameter
import org.springframework.jdbc.`object`.StoredProcedure

class TitlesAfterDateStoredProcedure(dataSource: DataSource) : StoredProcedure(dataSource, SPROC_NAME) {

    companion object {
        private const val SPROC_NAME = "TitlesAfterDate"
        private const val CUTOFF_DATE_PARAM = "cutoffDate"
    }

    init {
        declareParameter(SqlParameter(CUTOFF_DATE_PARAM, Types.DATE))
        declareParameter(SqlOutParameter("titles", OracleTypes.CURSOR, TitleMapper()))
        compile()
    }

    fun execute(cutoffDate: Date) = super.execute(
            mapOf<String, Any>(CUTOFF_DATE_PARAM to cutoffDate))
}
```

# Common Problems with Parameter and Data Value Handling

Common problems with parameters and data values exist in the different approaches provided by Spring Framework's JDBC support. This section covers how to address them.

## Providing SQL Type Information for Parameters

Usually, Spring determines the SQL type of the parameters based on the type of parameter passed in. It is possible to explicitly provide the SQL type to be used when setting parameter values. This is sometimes necessary to correctly set `NULL` values. You can provide SQL type information in several ways:

* Many update and query methods of the `JdbcTemplate` take an additional parameter in the form of an `int` array. This array is used to indicate the SQL type of the corresponding parameter by using constant values from the `java.sql.Types` class. Provide one entry for each parameter.
* You can use the `SqlParameterValue` class to wrap the parameter value that needs this additional information. To do so, create a new instance for each value and pass in the SQL type and the parameter value in the constructor. You can also provide an optional scale parameter for numeric values.
* For methods that work with named parameters, you can use the `SqlParameterSource` classes, `BeanPropertySqlParameterSource` and `MapSqlParameterSource`. They both have methods for registering the SQL type for any of the named parameter values.

## Handling BLOB and CLOB objects

You can store images, other binary data, and large chunks of text in the database. These large objects are called BLOBs (Binary Large OBject) for binary data and CLOBs (Character Large OBject) for character data. In Spring, you can handle these large objects by using the `JdbcTemplate` directly and also when using the higher abstractions provided by RDBMS Objects and the `SimpleJdbc` classes. All of these approaches use an implementation of the `LobHandler` interface for the actual management of the LOB (Large OBject) data. `LobHandler` provides access to a `LobCreator` class, through the `getLobCreator` method, that is used for creating new LOB objects to be inserted.

`LobCreator` and `LobHandler` provide the following support for LOB input and output:

* BLOB
  * `byte[]`: `getBlobAsBytes` and `setBlobAsBytes`
  * `InputStream`: `getBlobAsBinaryStream` and `setBlobAsBinaryStream`
* CLOB
  * `String`: `getClobAsString` and `setClobAsString`
  * `InputStream`: `getClobAsAsciiStream` and `setClobAsAsciiStream`
  * `Reader`: `getClobAsCharacterStream` and `setClobAsCharacterStream`

The next example shows how to create and insert a BLOB. Later we show how to read it back from the database. This example uses a `JdbcTemplate` and an implementation of the `AbstractLobCreatingPreparedStatementCallback`. It implements one method, `setValues`. This method provides a `LobCreator` that we use to set the values for the LOB columns in your SQL insert statement. For this example, we assume that there is a variable, `lobHandler`, that is already set to an instance of a `DefaultLobHandler`. You typically set this value through dependency injection.
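A minimal sketch of that wiring in Java configuration might look like the following (the configuration class name `LobConfig` is an assumption for illustration; `DefaultLobHandler` lives in `org.springframework.jdbc.support.lob`):

```
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.support.lob.DefaultLobHandler;
import org.springframework.jdbc.support.lob.LobHandler;

@Configuration
public class LobConfig {

    // Exposes a LobHandler bean that DAOs can have injected into their lobHandler field.
    @Bean
    public LobHandler lobHandler() {
        return new DefaultLobHandler();
    }
}
```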
The following example shows how to create and insert a BLOB:

```
final File blobIn = new File("spring2004.jpg");
final InputStream blobIs = new FileInputStream(blobIn);
final File clobIn = new File("large.txt");
final InputStream clobIs = new FileInputStream(clobIn);
final InputStreamReader clobReader = new InputStreamReader(clobIs);

jdbcTemplate.execute(
    "INSERT INTO lob_table (id, a_clob, a_blob) VALUES (?, ?, ?)",
    new AbstractLobCreatingPreparedStatementCallback(lobHandler) {  (1)
        protected void setValues(PreparedStatement ps, LobCreator lobCreator) throws SQLException {
            ps.setLong(1, 1L);
            lobCreator.setClobAsCharacterStream(ps, 2, clobReader, (int)clobIn.length());  (2)
            lobCreator.setBlobAsBinaryStream(ps, 3, blobIs, (int)blobIn.length());  (3)
        }
    }
);

blobIs.close();
clobReader.close();
```

```
val blobIn = File("spring2004.jpg")
val blobIs = FileInputStream(blobIn)
val clobIn = File("large.txt")
val clobIs = FileInputStream(clobIn)
val clobReader = InputStreamReader(clobIs)

jdbcTemplate.execute(
        "INSERT INTO lob_table (id, a_clob, a_blob) VALUES (?, ?, ?)",
        object: AbstractLobCreatingPreparedStatementCallback(lobHandler) {  (1)
            override fun setValues(ps: PreparedStatement, lobCreator: LobCreator) {
                ps.setLong(1, 1L)
                lobCreator.setClobAsCharacterStream(ps, 2, clobReader, clobIn.length().toInt())  (2)
                lobCreator.setBlobAsBinaryStream(ps, 3, blobIs, blobIn.length().toInt())  (3)
            }
        }
)

blobIs.close()
clobReader.close()
```

Now it is time to read the LOB data from the database. Again, you use a `JdbcTemplate` with the same instance variable `lobHandler` and a reference to a `DefaultLobHandler`. The following example shows how to do so:

```
List<Map<String, Object>> l = jdbcTemplate.query("select id, a_clob, a_blob from lob_table",
    new RowMapper<Map<String, Object>>() {
        public Map<String, Object> mapRow(ResultSet rs, int i) throws SQLException {
            Map<String, Object> results = new HashMap<String, Object>();
            String clobText = lobHandler.getClobAsString(rs, "a_clob");  (1)
            results.put("CLOB", clobText);
            byte[] blobBytes = lobHandler.getBlobAsBytes(rs, "a_blob");  (2)
            results.put("BLOB", blobBytes);
            return results;
        }
    });
```

```
val l = jdbcTemplate.query("select id, a_clob, a_blob from lob_table") { rs, _ ->
    val clobText = lobHandler.getClobAsString(rs, "a_clob")  (1)
    val blobBytes = lobHandler.getBlobAsBytes(rs, "a_blob")  (2)
    mapOf("CLOB" to clobText, "BLOB" to blobBytes)
}
```

## Passing in Lists of Values for IN Clause

The SQL standard allows for selecting rows based on an expression that includes a variable list of values. A typical example would be `select * from T_ACTOR where id in (1, 2, 3)`. This variable list is not directly supported for prepared statements by the JDBC standard. You cannot declare a variable number of placeholders. You need a number of variations with the desired number of placeholders prepared, or you need to generate the SQL string dynamically once you know how many placeholders are required. The named parameter support provided in the `NamedParameterJdbcTemplate` takes the latter approach. You can pass in the values as a `java.util.List` (or any `Iterable`) of simple values. This list is used to insert the required placeholders into the actual SQL statement and pass in the values during statement execution.

Be careful when passing in many values. The JDBC standard does not guarantee that you can use more than 100 values for an `in` expression list.

In addition to the primitive values in the value list, you can create a `java.util.List` of object arrays.
This list can support multiple expressions being defined for the `in` clause, such as `select * from T_ACTOR where (id, last_name) in ((1, 'Johnson'), (2, 'Harrop'))`. This, of course, requires that your database supports this syntax.

## Handling Complex Types for Stored Procedure Calls

When you call stored procedures, you can sometimes use complex types specific to the database. To accommodate these types, Spring provides a `SqlReturnType` for handling them when they are returned from the stored procedure call and `SqlTypeValue` when they are passed in as a parameter to the stored procedure.

The `SqlReturnType` interface has a single method (named `getTypeValue`) that must be implemented. This interface is used as part of the declaration of an `SqlOutParameter`. The following example shows returning the value of an Oracle `STRUCT` object of the user declared type `ITEM_TYPE`:

```
public class TestItemStoredProcedure extends StoredProcedure {

    public TestItemStoredProcedure(DataSource dataSource) {
        // ...
        declareParameter(new SqlOutParameter("item", OracleTypes.STRUCT, "ITEM_TYPE",
            (CallableStatement cs, int colIndx, int sqlType, String typeName) -> {
                STRUCT struct = (STRUCT) cs.getObject(colIndx);
                Object[] attr = struct.getAttributes();
                TestItem item = new TestItem();
                item.setId(((Number) attr[0]).longValue());
                item.setDescription((String) attr[1]);
                item.setExpirationDate((java.util.Date) attr[2]);
                return item;
            }));
        // ...
    }
}
```

```
class TestItemStoredProcedure(dataSource: DataSource) : StoredProcedure() {

    init {
        // ...
        declareParameter(SqlOutParameter("item", OracleTypes.STRUCT, "ITEM_TYPE") { cs, colIndx, sqlType, typeName ->
            val struct = cs.getObject(colIndx) as STRUCT
            val attr = struct.getAttributes()
            TestItem(attr[0] as Long, attr[1] as String, attr[2] as Date)
        })
        // ...
    }
}
```

You can use `SqlTypeValue` to pass the value of a Java object (such as `TestItem`) to a stored procedure. The `SqlTypeValue` interface has a single method (named `createTypeValue`) that you must implement. The active connection is passed in, and you can use it to create database-specific objects, such as `StructDescriptor` instances or `ArrayDescriptor` instances. The following example creates a `StructDescriptor` instance:

```
final TestItem testItem = new TestItem(123L, "A test item",
        new SimpleDateFormat("yyyy-M-d").parse("2010-12-31"));

SqlTypeValue value = new AbstractSqlTypeValue() {
    protected Object createTypeValue(Connection conn, int sqlType, String typeName) throws SQLException {
        StructDescriptor itemDescriptor = new StructDescriptor(typeName, conn);
        Struct item = new STRUCT(itemDescriptor, conn,
                new Object[] {
                    testItem.getId(),
                    testItem.getDescription(),
                    new java.sql.Date(testItem.getExpirationDate().getTime())
                });
        return item;
    }
};
```

```
val (id, description, expirationDate) = TestItem(123L, "A test item",
        SimpleDateFormat("yyyy-M-d").parse("2010-12-31"))

val value = object : AbstractSqlTypeValue() {
    override fun createTypeValue(conn: Connection, sqlType: Int, typeName: String?): Any {
        val itemDescriptor = StructDescriptor(typeName, conn)
        return STRUCT(itemDescriptor, conn,
                arrayOf(id, description, java.sql.Date(expirationDate.time)))
    }
}
```

You can now add this `SqlTypeValue` to the `Map` that contains the input parameters for the `execute` call of the stored procedure. Another use for the `SqlTypeValue` is passing in an array of values to an Oracle stored procedure.
Oracle has its own internal `ARRAY` class that must be used in this case, and you can use the `SqlTypeValue` to create an instance of the Oracle `ARRAY` and populate it with values from a Java array, as the following example shows:

```
final Long[] ids = new Long[] {1L, 2L};

SqlTypeValue value = new AbstractSqlTypeValue() {
    protected Object createTypeValue(Connection conn, int sqlType, String typeName) throws SQLException {
        ArrayDescriptor arrayDescriptor = new ArrayDescriptor(typeName, conn);
        ARRAY idArray = new ARRAY(arrayDescriptor, conn, ids);
        return idArray;
    }
};
```

```
val ids = arrayOf(1L, 2L)

val value = object : AbstractSqlTypeValue() {
    override fun createTypeValue(conn: Connection, sqlType: Int, typeName: String?): Any {
        val arrayDescriptor = ArrayDescriptor(typeName, conn)
        return ARRAY(arrayDescriptor, conn, ids)
    }
}
```

# Embedded Database Support

## Why Use an Embedded Database?

An embedded database can be useful during the development phase of a project because of its lightweight nature. Benefits include ease of configuration, quick startup time, testability, and the ability to rapidly evolve your SQL during development.

## Creating an Embedded Database by Using Spring XML

If you want to expose an embedded database instance as a bean in a Spring `ApplicationContext`, you can use the `embedded-database` tag in the `spring-jdbc` namespace:

```
<jdbc:embedded-database id="dataSource" generate-name="true">
    <jdbc:script location="classpath:schema.sql"/>
    <jdbc:script location="classpath:test-data.sql"/>
</jdbc:embedded-database>
```

The preceding configuration creates an embedded HSQL database that is populated with SQL from the `schema.sql` and `test-data.sql` resources in the root of the classpath. In addition, as a best practice, the embedded database is assigned a uniquely generated name. The embedded database is made available to the Spring container as a bean of type `javax.sql.DataSource` that can then be injected into data access objects as needed.

## Creating an Embedded Database Programmatically

The `EmbeddedDatabaseBuilder` class provides a fluent API for constructing an embedded database programmatically. You can use this when you need to create an embedded database in a stand-alone environment or in a stand-alone integration test, as in the following example:

```
EmbeddedDatabase db = new EmbeddedDatabaseBuilder()
        .generateUniqueName(true)
        .setType(H2)
        .setScriptEncoding("UTF-8")
        .ignoreFailedDrops(true)
        .addScript("schema.sql")
        .addScripts("user_data.sql", "country_data.sql")
        .build();
```

```
val db = EmbeddedDatabaseBuilder()
        .generateUniqueName(true)
        .setType(H2)
        .setScriptEncoding("UTF-8")
        .ignoreFailedDrops(true)
        .addScript("schema.sql")
        .addScripts("user_data.sql", "country_data.sql")
        .build()
```

See the javadoc for `EmbeddedDatabaseBuilder` for further details on all supported options.

You can also use the `EmbeddedDatabaseBuilder` to create an embedded database by using Java configuration, as the following example shows:

```
@Bean
public DataSource dataSource() {
    return new EmbeddedDatabaseBuilder()
            .generateUniqueName(true)
            .setType(H2)
            .setScriptEncoding("UTF-8")
            .ignoreFailedDrops(true)
            .addScript("schema.sql")
            .addScripts("user_data.sql", "country_data.sql")
            .build();
}
```

```
@Bean
fun dataSource(): DataSource {
    return EmbeddedDatabaseBuilder()
            .generateUniqueName(true)
            .setType(H2)
            .setScriptEncoding("UTF-8")
            .ignoreFailedDrops(true)
            .addScript("schema.sql")
            .addScripts("user_data.sql", "country_data.sql")
            .build()
}
```

## Selecting the Embedded Database Type

This section covers how to select one of the three embedded databases that Spring supports.
It includes the following topics:

### Using HSQL

Spring supports HSQL 1.8.0 and above. HSQL is the default embedded database if no type is explicitly specified. To specify HSQL explicitly, set the `type` attribute of the `embedded-database` tag to `HSQL`. If you use the builder API, call the `setType(EmbeddedDatabaseType.HSQL)` method.

### Using H2

Spring supports the H2 database. To enable H2, set the `type` attribute of the `embedded-database` tag to `H2`. If you use the builder API, call the `setType(EmbeddedDatabaseType.H2)` method.

### Using Derby

Spring supports Apache Derby 10.5 and above. To enable Derby, set the `type` attribute of the `embedded-database` tag to `DERBY`. If you use the builder API, call the `setType(EmbeddedDatabaseType.DERBY)` method.

## Testing Data Access Logic with an Embedded Database

Embedded databases provide a lightweight way to test data access code. The next example is a data access integration test template that uses an embedded database. Using such a template can be useful for one-offs when the embedded database does not need to be reused across test classes. However, if you wish to create an embedded database that is shared within a test suite, consider using the Spring TestContext Framework and configuring the embedded database as a bean in the Spring `ApplicationContext`, as described in Creating an Embedded Database by Using Spring XML and Creating an Embedded Database Programmatically. The following listing shows the test template:

```
private EmbeddedDatabase db;

@BeforeEach
public void setUp() {
    // creates an HSQL in-memory database populated from default scripts
    // classpath:schema.sql and classpath:data.sql
    db = new EmbeddedDatabaseBuilder()
            .generateUniqueName(true)
            .addDefaultScripts()
            .build();
}

@Test
public void testDataAccess() {
    JdbcTemplate template = new JdbcTemplate(db);
    template.query( /* ... */ );
}

@AfterEach
public void tearDown() {
    db.shutdown();
}
```

```
private lateinit var db: EmbeddedDatabase

@BeforeEach
fun setUp() {
    // creates an HSQL in-memory database populated from default scripts
    // classpath:schema.sql and classpath:data.sql
    db = EmbeddedDatabaseBuilder()
            .generateUniqueName(true)
            .addDefaultScripts()
            .build()
}

@Test
fun testDataAccess() {
    val template = JdbcTemplate(db)
    template.query( /* ... */)
}

@AfterEach
fun tearDown() {
    db.shutdown()
}
```

## Generating Unique Names for Embedded Databases

Development teams often encounter errors with embedded databases if their test suite inadvertently attempts to recreate additional instances of the same database. This can happen quite easily if an XML configuration file or `@Configuration` class is responsible for creating an embedded database and the corresponding configuration is then reused across multiple testing scenarios within the same test suite (that is, within the same JVM process). A typical example is integration tests against embedded databases whose `ApplicationContext` configuration differs only with regard to which bean definition profiles are active.

The root cause of such errors is the fact that Spring's `EmbeddedDatabaseFactory` (used internally by both the XML namespace element and the `EmbeddedDatabaseBuilder` for Java configuration) sets the name of the embedded database to `testdb` if not otherwise specified. For the case of `<jdbc:embedded-database>`, the embedded database is typically assigned a name equal to the bean's `id` (often, something like `dataSource`). Thus, subsequent attempts to create an embedded database do not result in a new database. Instead, the same JDBC connection URL is reused, and attempts to create a new embedded database actually point to an existing embedded database created from the same configuration.

To address this common issue, Spring Framework 4.2 provides support for generating unique names for embedded databases.
To enable the use of generated names, use one of the following options:

* Set `EmbeddedDatabaseFactory.setGenerateUniqueDatabaseName()` to `true`.
* Call `EmbeddedDatabaseBuilder.generateUniqueName()`.
* Use `<jdbc:embedded-database generate-name="true" … >` in XML configuration.

## Extending the Embedded Database Support

You can extend Spring JDBC embedded database support in two ways:

* Implement `EmbeddedDatabaseConfigurer` to support a new embedded database type.
* Implement `DataSourceFactory` to support a new `DataSource` implementation, such as a connection pool to manage embedded database connections.

We encourage you to contribute extensions to the Spring community at GitHub Issues.

# Initializing a `DataSource`

The `org.springframework.jdbc.datasource.init` package provides support for initializing an existing `DataSource`. The embedded database support provides one option for creating and initializing a `DataSource` for an application. However, you may sometimes need to initialize an instance that runs on a server somewhere.

## Initializing a Database by Using Spring XML

If you want to initialize a database and you can provide a reference to a `DataSource` bean, you can use the `initialize-database` tag in the `spring-jdbc` namespace:

```
<jdbc:initialize-database data-source="dataSource">
    <jdbc:script location="classpath:com/foo/sql/db-schema.sql"/>
    <jdbc:script location="classpath:com/foo/sql/db-test-data.sql"/>
</jdbc:initialize-database>
```

The preceding example runs the two specified scripts against the database. The first script creates a schema, and the second populates tables with a test data set. The script locations can also be patterns with wildcards in the usual Ant style used for resources in Spring (for example, `classpath*:/com/foo/**/sql/*-data.sql`). If you use a pattern, the scripts are run in the lexical order of their URL or filename.

The default behavior of the database initializer is to unconditionally run the provided scripts. This may not always be what you want (for instance, if you run the scripts against a database that already has test data in it). The likelihood of accidentally deleting data is reduced by following the common pattern (shown earlier) of creating the tables first and then inserting the data. The first step fails if the tables already exist.

However, to gain more control over the creation and deletion of existing data, the XML namespace provides a few additional options. The first is a flag to switch the initialization on and off. You can set this according to the environment (such as pulling a boolean value from system properties or from an environment bean). The following example gets a value from a system property:

```
<jdbc:initialize-database data-source="dataSource"
        enabled="#{systemProperties.INITIALIZE_DATABASE}"> (1)
    <jdbc:script location="..."/>
</jdbc:initialize-database>
```

1 | Get the value for `enabled` from a system property called `INITIALIZE_DATABASE`.

The second option to control what happens with existing data is to be more tolerant of failures. To this end, you can control the ability of the initializer to ignore certain errors in the SQL it runs from the scripts, as the following example shows:

```
<jdbc:initialize-database data-source="dataSource" ignore-failures="DROPS">
    <jdbc:script location="..."/>
</jdbc:initialize-database>
```

In the preceding example, we are saying that we expect that, sometimes, the scripts are run against an empty database, and there are some `DROP` statements in the scripts that would, therefore, fail.
So failed SQL `DROP` statements will be ignored, but other failures will cause an exception. This is useful if your SQL dialect doesn't support `DROP … IF EXISTS` (or similar) but you want to unconditionally remove all test data before re-creating it. In that case, the first script is usually a set of `DROP` statements, followed by a set of `CREATE` statements. The `ignore-failures` option can be set to `NONE` (the default), `DROPS` (ignore failed drops), or `ALL` (ignore all failures).

Each statement should be separated by `;` or a new line if the `;` character is not present at all in the script. You can control that globally or script by script, as the following example shows:

```
<jdbc:initialize-database data-source="dataSource" separator="@@"> (1)
    <jdbc:script location="classpath:com/myapp/sql/db-schema.sql" separator=";"/> (2)
    <jdbc:script location="classpath:com/myapp/sql/db-test-data-1.sql"/>
    <jdbc:script location="classpath:com/myapp/sql/db-test-data-2.sql"/>
</jdbc:initialize-database>
```

1 | Set the separator for all scripts to `@@`.
2 | Set the separator for `db-schema.sql` to `;`.

In this example, the two `test-data` scripts use `@@` as statement separator and only the `db-schema.sql` uses `;`. This configuration specifies that the default separator is `@@` and overrides that default for the `db-schema` script.

If you need more control than you get from the XML namespace, you can use the `DataSourceInitializer` directly and define it as a component in your application.

### Initialization of Other Components that Depend on the Database

A large class of applications (those that do not use the database until after the Spring context has started) can use the database initializer with no further complications. If your application is not one of those, you might need to read the rest of this section.

The database initializer depends on a `DataSource` instance and runs the scripts provided in its initialization callback (analogous to an `init-method` in an XML bean definition, a `@PostConstruct` method in a component, or the `afterPropertiesSet()` method in a component that implements `InitializingBean`). If other beans depend on the same data source and use the data source in an initialization callback, there might be a problem because the data has not yet been initialized. A common example of this is a cache that initializes eagerly and loads data from the database on application startup.

To get around this issue, you have two options: change your cache initialization strategy to a later phase or ensure that the database initializer is initialized first.

Changing your cache initialization strategy might be easy if the application is in your control, and not otherwise. Some suggestions for how to implement this include (see the sketch after this list for the `SmartLifecycle` option):

* Make the cache initialize lazily on first usage, which improves application startup time.
* Have your cache or a separate component that initializes the cache implement `Lifecycle` or `SmartLifecycle`. When the application context starts, you can automatically start a `SmartLifecycle` by setting its `autoStartup` flag, and you can manually start a `Lifecycle` by calling `ConfigurableApplicationContext.start()` on the enclosing context.
* Use a Spring `ApplicationEvent` or similar custom observer mechanism to trigger the cache initialization. `ContextRefreshedEvent` is always published by the context when it is ready for use (after all beans have been initialized), so that is often a useful hook (this is how the `SmartLifecycle` works by default).
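As an illustration of the `SmartLifecycle` suggestion above, the following is a minimal sketch; the `CacheWarmer` class, its `loadFromDatabase()` method, and the `t_cacheable` table are hypothetical stand-ins for your own cache component:

```
import org.springframework.context.SmartLifecycle;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class CacheWarmer implements SmartLifecycle {

    private final JdbcTemplate jdbcTemplate;
    private volatile boolean running;

    public CacheWarmer(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public void start() {
        // Lifecycle start() runs during context refresh, after singleton beans
        // (including the database initializer) have been initialized.
        loadFromDatabase();
        this.running = true;
    }

    @Override
    public void stop() {
        this.running = false;
    }

    @Override
    public boolean isRunning() {
        return this.running;
    }

    @Override
    public boolean isAutoStartup() {
        return true; // start automatically when the context starts
    }

    private void loadFromDatabase() {
        // hypothetical cache-loading query
        jdbcTemplate.query("select id, name from t_cacheable",
                (rs, rowNum) -> rs.getString("name"));
    }
}
```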
Ensuring that the database initializer is initialized first can also be easy. Some suggestions on how to implement this include:

* Rely on the default behavior of the Spring `BeanFactory`, which is that beans are initialized in registration order. You can easily arrange that by adopting the common practice of a set of `<import/>` elements in XML configuration that order your application modules and ensuring that the database and database initialization are listed first.
* Separate the `DataSource` and the business components that use it and control their startup order by putting them in separate `ApplicationContext` instances (for example, the parent context contains the `DataSource`, and the child context contains the business components). This structure is common in Spring web applications but can be more generally applied.

R2DBC ("Reactive Relational Database Connectivity") is a community-driven specification effort to standardize access to SQL databases using reactive patterns.

## Package Hierarchy

* `core`: The `org.springframework.r2dbc.core` package contains the `DatabaseClient` class plus a variety of related classes. See Using the R2DBC Core Classes to Control Basic R2DBC Processing and Error Handling.
* `connection`: The `org.springframework.r2dbc.connection` package contains a utility class for easy `ConnectionFactory` access and various simple `ConnectionFactory` implementations that you can use for testing and running unmodified R2DBC. See Controlling Database Connections.

## Using the R2DBC Core Classes to Control Basic R2DBC Processing and Error Handling

`DatabaseClient` is the central class in the R2DBC core package. It:

* Runs SQL queries, update statements, and stored procedure calls
* Performs iteration over `Result` instances
* Catches R2DBC exceptions and translates them to the generic exception hierarchy defined in the `org.springframework.dao` package. (See Consistent Exception Hierarchy.)

The client has a functional, fluent API using reactive types for declarative composition.

When you use the `DatabaseClient` for your code, you need only to implement `java.util.function` interfaces, giving them a clearly defined contract. Given a `Connection` provided by the `DatabaseClient` class, a `Function` callback creates a `Publisher`. The same is true for mapping functions that extract a `Row` result.

You can use `DatabaseClient` within a DAO implementation through direct instantiation with a `ConnectionFactory` reference, or you can configure it in a Spring IoC container and give it to DAOs as a bean reference.

The simplest way to create a `DatabaseClient` object is through a static factory method, as follows:

```
DatabaseClient client = DatabaseClient.create(connectionFactory);
```

The preceding method creates a `DatabaseClient` with default settings. You can also obtain a `Builder` instance from `DatabaseClient.builder()`. You can customize the client by calling the following methods:

* `….bindMarkers(…)`: Supply a specific `BindMarkersFactory` to configure named parameter to database bind marker translation.
* `….executeFunction(…)`: Set the `ExecuteFunction` that controls how `Statement` objects get run.
* `….namedParameters(false)`: Disable named parameter expansion. Enabled by default.

Dialects are resolved by `BindMarkersFactoryResolver` from a `ConnectionFactory`, typically by inspecting `ConnectionFactoryMetadata`.

Currently supported databases are:

* H2
* MariaDB
* Microsoft SQL Server
* MySQL
* Postgres

All SQL issued by this class is logged at the `DEBUG` level under the category corresponding to the fully qualified class name of the client instance (typically `DefaultDatabaseClient`). Additionally, each execution registers a checkpoint in the reactive sequence to aid debugging.

The following sections provide some examples of `DatabaseClient` usage.
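As a quick first illustration of the builder methods listed above, here is a hedged sketch of a customized client; the H2 in-memory URL is just an example:

```
ConnectionFactory connectionFactory =
		ConnectionFactories.get("r2dbc:h2:mem:///test");

// Sketch: customize the client via the builder instead of create(...)
DatabaseClient client = DatabaseClient.builder()
		.connectionFactory(connectionFactory)
		.namedParameters(true) // named parameter expansion (the default)
		.build();
```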
These examples are not an exhaustive list of all of the functionality exposed by the `DatabaseClient`. See the attendant javadoc for that.

# Executing Statements

`DatabaseClient` provides the basic functionality of running a statement. A minimal but fully functional usage runs a `CREATE TABLE` statement through `client.sql(…)` and uses `then()` to obtain a completion `Publisher` that completes as soon as the query (or queries, if the SQL contains multiple statements) completes.

`DatabaseClient` is designed for convenient, fluent usage. It exposes intermediate, continuation, and terminal methods at each stage of the execution specification.

`sql(…)` accepts either the SQL query string or a query `Supplier<String>` to defer the actual query creation until execution.

# Querying (`SELECT`)

SQL queries can return values through `Row` objects or the number of affected rows. `DatabaseClient` can return the number of updated rows or the rows themselves, depending on the issued query.

The following query gets the `id` and `name` columns from a table:

```
Mono<Map<String, Object>> first = client.sql("SELECT id, name FROM person")
		.fetch().first();
```

```
val first = client.sql("SELECT id, name FROM person")
		.fetch().awaitSingle()
```

You might have noticed the use of `fetch()` in the example above. `fetch()` is a continuation operator that lets you specify how much data you want to consume. Calling `first()` returns the first row from the result and discards remaining rows. You can consume data with the following operators:

* `first()` returns the first row of the entire result. Its Kotlin Coroutine variant is named `awaitSingle()` for non-nullable return values and `awaitSingleOrNull()` if the value is optional.
* `one()` returns exactly one result and fails if the result contains more rows. Using Kotlin Coroutines, `awaitOne()` for exactly one value or `awaitOneOrNull()` if the value may be `null`.
* `all()` returns all rows of the result. When using Kotlin Coroutines, use `flow()`.
* `rowsUpdated()` returns the number of affected rows (`INSERT`/`UPDATE`/`DELETE` count). Its Kotlin Coroutine variant is named `awaitRowsUpdated()`.

Without specifying further mapping details, queries return tabular results as `Map` whose keys are case-insensitive column names that map to their column value.

You can take control over result mapping by supplying a `Function<Row, T>` that gets called for each `Row` so it can return arbitrary values (singular values, collections and maps, and objects).

The following example extracts the `name` column and emits its value:

```
Flux<String> names = client.sql("SELECT name FROM person")
		.map(row -> row.get("name", String.class))
		.all();
```

```
val names = client.sql("SELECT name FROM person")
		.map { row: Row -> row.get("name", String::class.java) }
		.flow()
```

# Updating (`INSERT`, `UPDATE`, and `DELETE`) with `DatabaseClient`

The only difference with modifying statements is that these statements typically do not return tabular data, so you use `rowsUpdated()` to consume results.
The following example shows an `UPDATE` statement that returns the number of updated rows:

```
Mono<Integer> affectedRows = client.sql("UPDATE person SET first_name = :fn")
		.bind("fn", "Joe")
		.fetch().rowsUpdated();
```

```
val affectedRows = client.sql("UPDATE person SET first_name = :fn")
		.bind("fn", "Joe")
		.fetch().awaitRowsUpdated()
```

# Binding Values to Queries

A typical application requires parameterized SQL statements to select or update rows according to some input. These are typically `SELECT` statements constrained by a `WHERE` clause or `INSERT` and `UPDATE` statements that accept input parameters. Parameterized statements bear the risk of SQL injection if parameters are not escaped properly. `DatabaseClient` leverages R2DBC’s `bind` API to eliminate the risk of SQL injection for query parameters. You can provide a parameterized SQL statement with the `sql(…)` operator and bind parameters to the actual `Statement`. Your R2DBC driver then runs the statement by using prepared statements and parameter substitution.

Parameter binding supports two binding strategies:

* By Index, using zero-based parameter indexes.
* By Name, using the placeholder name.

The following example shows parameter binding for a query:

```
db.sql("INSERT INTO person (id, name, age) VALUES(:id, :name, :age)")
		.bind("id", "joe")
		.bind("name", "Joe")
		.bind("age", 34);
```

The query-preprocessor unrolls named `Collection` parameters into a series of bind markers to remove the need of dynamic query creation based on the number of arguments. Nested object arrays are expanded to allow usage of (for example) select lists.

Consider the following query:

```
SELECT id, name, state FROM table WHERE (name, age) IN (('John', 35), ('Ann', 50))
```

The preceding query can be parameterized and run as follows:

```
List<Object[]> tuples = new ArrayList<>();
tuples.add(new Object[] {"John", 35});
tuples.add(new Object[] {"Ann", 50});

client.sql("SELECT id, name, state FROM table WHERE (name, age) IN (:tuples)")
		.bind("tuples", tuples);
```

Usage of select lists is vendor-dependent.

The following example shows a simpler variant using `IN` predicates:

```
client.sql("SELECT id, name, state FROM table WHERE age IN (:ages)")
		.bind("ages", Arrays.asList(35, 50));
```

```
client.sql("SELECT id, name, state FROM table WHERE age IN (:ages)")
		.bind("ages", arrayOf(35, 50))
```

R2DBC itself does not support `Collection`-like values. Nevertheless, expanding a given `List` into a series of bind markers, as shown above, works for named parameters in Spring’s R2DBC support.

# Statement Filters

Sometimes you need to fine-tune options on the actual `Statement` before it gets run.
To do so, register a `Statement` filter (`StatementFilterFunction`) with the `DatabaseClient` to intercept and modify statements in their execution, as the following example shows:

```
client.sql("INSERT INTO table (name, state) VALUES(:name, :state)")
		.filter((s, next) -> next.execute(s.returnGeneratedValues("id")))
		.bind("name", …)
		.bind("state", …);
```

```
client.sql("INSERT INTO table (name, state) VALUES(:name, :state)")
		.filter { s: Statement, next: ExecuteFunction -> next.execute(s.returnGeneratedValues("id")) }
		.bind("name", …)
		.bind("state", …)
```

`DatabaseClient` also exposes a simplified `filter(…)` overload that accepts a `Function<Statement, Statement>`:

```
client.sql("INSERT INTO table (name, state) VALUES(:name, :state)")
		.filter(statement -> statement.returnGeneratedValues("id"));

client.sql("SELECT id, name, state FROM table")
		.filter(statement -> statement.fetchSize(25));
```

```
client.sql("INSERT INTO table (name, state) VALUES(:name, :state)")
		.filter { statement -> statement.returnGeneratedValues("id") }

client.sql("SELECT id, name, state FROM table")
		.filter { statement -> statement.fetchSize(25) }
```

`StatementFilterFunction` implementations allow filtering of the `Statement` and filtering of `Result` objects.

# `DatabaseClient` Best Practices

Instances of the `DatabaseClient` class are thread-safe, once configured. This is important because it means that you can configure a single instance of a `DatabaseClient` and then safely inject this shared reference into multiple DAOs (or repositories). The `DatabaseClient` is stateful, in that it maintains a reference to a `ConnectionFactory`, but this state is not conversational state.

A common practice when using the `DatabaseClient` class is to configure a `ConnectionFactory` in your Spring configuration file and then dependency-inject that shared `ConnectionFactory` bean into your DAO classes. The `DatabaseClient` is created in the setter for the `ConnectionFactory`. This leads to DAOs that resemble the following:

```
public class R2dbcCorporateEventDao implements CorporateEventDao {

	private DatabaseClient databaseClient;

	public void setConnectionFactory(ConnectionFactory connectionFactory) {
		this.databaseClient = DatabaseClient.create(connectionFactory);
	}
}
```

```
class R2dbcCorporateEventDao(connectionFactory: ConnectionFactory) : CorporateEventDao {

	private val databaseClient = DatabaseClient.create(connectionFactory)
}
```

A variant of this configuration uses component scanning and annotation-based injection:

```
@Component (1)
public class R2dbcCorporateEventDao implements CorporateEventDao {

	private DatabaseClient databaseClient;

	@Autowired (2)
	public void setConnectionFactory(ConnectionFactory connectionFactory) {
		this.databaseClient = DatabaseClient.create(connectionFactory); (3)
	}
}
```

```
@Component (1)
class R2dbcCorporateEventDao(connectionFactory: ConnectionFactory) : CorporateEventDao { (2)

	private val databaseClient = DatabaseClient.create(connectionFactory) (3)
}
```

1 | Annotate the class with `@Component`.
2 | Inject the `ConnectionFactory`.
3 | Create a new `DatabaseClient` with the `ConnectionFactory`.

Regardless of which of the above template initialization styles you choose to use (or not), it is seldom necessary to create a new instance of a `DatabaseClient` class each time you want to run SQL. Once configured, a `DatabaseClient` instance is thread-safe. If your application accesses multiple databases, you may want multiple `DatabaseClient` instances, which requires multiple `ConnectionFactory` instances and, subsequently, multiple differently configured `DatabaseClient` instances.

# Retrieving Auto-Generated Keys

`INSERT` statements may generate keys when inserting rows into a table that defines an auto-increment or identity column. To get full control over the column name to generate, register a `StatementFilterFunction` that requests the generated key for the desired column, as the following example shows:
```
Mono<Integer> generatedId = client.sql("INSERT INTO table (name, state) VALUES(:name, :state)")
		.filter(statement -> statement.returnGeneratedValues("id"))
		.map(row -> row.get("id", Integer.class))
		.first();
```

```
val generatedId = client.sql("INSERT INTO table (name, state) VALUES(:name, :state)")
		.filter { statement -> statement.returnGeneratedValues("id") }
		.map { row -> row.get("id", Integer::class.java) }
		.awaitOne()
```

## Controlling Database Connections

This section covers Spring’s support for obtaining and managing R2DBC connections through a `ConnectionFactory`.

Spring obtains an R2DBC connection to the database through a `ConnectionFactory`. A `ConnectionFactory` is part of the R2DBC specification and is a common entry-point for drivers. It lets a container or a framework hide connection pooling and transaction management issues from the application code. As a developer, you need not know details about how to connect to the database. That is the responsibility of the administrator who sets up the `ConnectionFactory`. You most likely fill both roles as you develop and test code, but you do not necessarily have to know how the production data source is configured.

When you use Spring’s R2DBC layer, you can configure your own `ConnectionFactory` with a connection pool implementation provided by a third party. A popular implementation is R2DBC Pool (`r2dbc-pool`). Implementations in the Spring distribution are meant only for testing purposes and do not provide pooling.

To configure a `ConnectionFactory`:

* Obtain a connection with `ConnectionFactory` as you typically obtain an R2DBC `ConnectionFactory`.
* Provide an R2DBC URL. (See the documentation for your driver for the correct value.)

The following example shows how to configure a `ConnectionFactory`:

```
ConnectionFactory factory = ConnectionFactories.get("r2dbc:h2:mem:///test?options=DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE");
```

```
val factory = ConnectionFactories.get("r2dbc:h2:mem:///test?options=DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE")
```

The `ConnectionFactoryUtils` class is a convenient and powerful helper class that provides `static` methods to obtain connections from a `ConnectionFactory` and close connections (if necessary). It supports subscriber `Context`-bound connections with, for example, `R2dbcTransactionManager`.

The `SingleConnectionFactory` class is an implementation of `DelegatingConnectionFactory` that is primarily a test class and may be used for specific requirements such as pipelining, if your R2DBC driver permits such use. In contrast to a pooled `ConnectionFactory`, it reuses the same connection all the time, avoiding excessive creation of physical connections.

The `TransactionAwareConnectionFactoryProxy` class is a proxy for a target `ConnectionFactory`. The proxy wraps that target `ConnectionFactory` to add awareness of Spring-managed transactions. Using this class is required if you use an R2DBC client that is not otherwise integrated with Spring’s R2DBC support. In this case, you can still use this client and, at the same time, have this client participating in Spring-managed transactions. It is generally preferable, though, to integrate an R2DBC client with proper access to Spring’s R2DBC support.

The `R2dbcTransactionManager` class is a `ReactiveTransactionManager` implementation for a single R2DBC `ConnectionFactory`. It binds an R2DBC `Connection` from the specified `ConnectionFactory` to the subscriber `Context`, potentially allowing for one subscriber `Connection` for each `ConnectionFactory`. Application code is required to retrieve the R2DBC `Connection` through `ConnectionFactoryUtils.getConnection(ConnectionFactory)`, instead of R2DBC’s standard `ConnectionFactory.create()`. All framework classes (such as `DatabaseClient`) use this strategy implicitly.
If not used with a transaction manager, the lookup strategy behaves like a plain `ConnectionFactory.create()` call.

# Object Relational Mapping (ORM) Data Access

This section covers data access when you use Object Relational Mapping (ORM).

Section Summary

* Introduction to ORM with Spring
* General ORM Integration Considerations
* Hibernate
* JPA

The Spring Framework supports integration with the Java Persistence API (JPA) and supports native Hibernate for resource management, data access object (DAO) implementations, and transaction strategies. For example, for Hibernate, there is first-class support with several convenient IoC features that address many typical Hibernate integration issues. You can configure all of the supported features for OR (object relational) mapping tools through Dependency Injection. They can participate in Spring’s resource and transaction management, and they comply with Spring’s generic transaction and DAO exception hierarchies. The recommended integration style is to code DAOs against plain Hibernate or JPA APIs.

Spring adds significant enhancements to the ORM layer of your choice when you create data access applications. You can leverage as much of the integration support as you wish, and you should compare this integration effort with the cost and risk of building a similar infrastructure in-house. You can use much of the ORM support as you would a library, regardless of technology, because everything is designed as a set of reusable JavaBeans. ORM in a Spring IoC container facilitates configuration and deployment. Thus, most examples in this section show configuration inside a Spring container.

The benefits of using the Spring Framework to create your ORM DAOs include:

* Easier testing. Spring’s IoC approach makes it easy to swap the implementations and configuration locations of Hibernate `SessionFactory` instances, JDBC `DataSource` instances, transaction managers, and mapped object implementations (if needed). This in turn makes it much easier to test each piece of persistence-related code in isolation.
* Common data access exceptions. Spring can wrap exceptions from your ORM tool, converting them from proprietary (potentially checked) exceptions to a common runtime `DataAccessException` hierarchy. This feature lets you handle most persistence exceptions, which are non-recoverable, only in the appropriate layers, without annoying boilerplate catches, throws, and exception declarations. You can still trap and handle exceptions as necessary. Remember that JDBC exceptions (including DB-specific dialects) are also converted to the same hierarchy, meaning that you can perform some operations with JDBC within a consistent programming model.
* General resource management. Spring application contexts can handle the location and configuration of Hibernate `SessionFactory` instances, JPA `EntityManagerFactory` instances, JDBC `DataSource` instances, and other related resources. This makes these values easy to manage and change. Spring offers efficient, easy, and safe handling of persistence resources. For example, related code that uses Hibernate generally needs to use the same Hibernate `Session` to ensure efficiency and proper transaction handling. Spring makes it easy to create and bind a `Session` to the current thread transparently, by exposing a current `Session` through the Hibernate `SessionFactory`. Thus, Spring solves many chronic problems of typical Hibernate usage, for any local or JTA transaction environment.
* Integrated transaction management.
You can wrap your ORM code with a declarative, aspect-oriented programming (AOP) style method interceptor, either through the `@Transactional` annotation or by explicitly configuring the transaction AOP advice in an XML configuration file. In both cases, transaction semantics and exception handling (rollback and so on) are handled for you. As discussed in Resource and Transaction Management, you can also swap various transaction managers, without affecting your ORM-related code. For example, you can swap between local transactions and JTA, with the same full services (such as declarative transactions) available in both scenarios. Additionally, JDBC-related code can fully integrate transactionally with the code you use to do ORM. This is useful for data access that is not suitable for ORM (such as batch processing and BLOB streaming) but that still needs to share common transactions with ORM operations.

For more comprehensive ORM support, including support for alternative database technologies such as MongoDB, you might want to check out the Spring Data suite of projects. If you are a JPA user, the Getting Started Accessing Data with JPA guide from spring.io provides a great introduction.

This section highlights considerations that apply to all ORM technologies. The Hibernate section provides more details and also shows these features and configurations in a concrete context.

The major goal of Spring’s ORM integration is clear application layering (with any data access and transaction technology) and loose coupling of application objects — no more business service dependencies on the data access or transaction strategy, no more hard-coded resource lookups, no more hard-to-replace singletons, no more custom service registries. The goal is to have one simple and consistent approach to wiring up application objects, keeping them as reusable and free from container dependencies as possible. All the individual data access features are usable on their own but integrate nicely with Spring’s application context concept, providing XML-based configuration and cross-referencing of plain JavaBean instances that need not be Spring-aware. In a typical Spring application, many important objects are JavaBeans: data access templates, data access objects, transaction managers, business services that use the data access objects and transaction managers, web view resolvers, web controllers that use the business services, and so on.

## Resource and Transaction Management

Typical business applications are cluttered with repetitive resource management code. Many projects try to invent their own solutions, sometimes sacrificing proper handling of failures for programming convenience. Spring advocates simple solutions for proper resource handling, namely IoC through templating in the case of JDBC and applying AOP interceptors for the ORM technologies.

The infrastructure provides proper resource handling and appropriate conversion of specific API exceptions to an unchecked infrastructure exception hierarchy. Spring introduces a DAO exception hierarchy, applicable to any data access strategy. For direct JDBC, the `JdbcTemplate` class mentioned in a previous section provides connection handling and proper conversion of `SQLException` to the `DataAccessException` hierarchy, including translation of database-specific SQL error codes to meaningful exception classes. For ORM technologies, see the next section for how to get the same exception translation benefits.
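As a small illustration of the JDBC side of this translation, here is a hedged sketch; the DAO class, table, and SQL are hypothetical:

```
import org.springframework.dao.DuplicateKeyException;
import org.springframework.jdbc.core.JdbcTemplate;

// JdbcTemplate translates any SQLException into Spring's unchecked
// DataAccessException hierarchy, so callers can catch a meaningful
// subclass instead of inspecting vendor-specific error codes.
public class ProductJdbcDao {

	private final JdbcTemplate jdbcTemplate;

	public ProductJdbcDao(JdbcTemplate jdbcTemplate) {
		this.jdbcTemplate = jdbcTemplate;
	}

	public void insertProduct(String sku) {
		try {
			this.jdbcTemplate.update("INSERT INTO product (sku) VALUES (?)", sku);
		}
		catch (DuplicateKeyException ex) {
			// database-specific "unique key violated" error codes map here
		}
	}
}
```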
When it comes to transaction management, the `JdbcTemplate` class hooks into the Spring transaction support and supports both JTA and JDBC transactions, through respective Spring transaction managers. For the supported ORM technologies, Spring offers Hibernate and JPA support through the Hibernate and JPA transaction managers as well as JTA support. For details on transaction support, see the Transaction Management chapter.

## Exception Translation

When you use Hibernate or JPA in a DAO, you must decide how to handle the persistence technology’s native exception classes. The DAO throws a subclass of a `HibernateException` or `PersistenceException`, depending on the technology. These exceptions are all runtime exceptions and do not have to be declared or caught. You may also have to deal with `IllegalArgumentException` and `IllegalStateException`. This means that callers can only treat exceptions as being generally fatal, unless they want to depend on the persistence technology’s own exception structure. Catching specific causes (such as an optimistic locking failure) is not possible without tying the caller to the implementation strategy. This trade-off might be acceptable to applications that are strongly ORM-based or do not need any special exception treatment (or both). However, Spring lets exception translation be applied transparently through the `@Repository` annotation: annotate your DAO with `@Repository` and register a `PersistenceExceptionTranslationPostProcessor`, as the following XML example shows:

```
<!-- Exception translation bean post processor -->
<bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>
```

The postprocessor automatically looks for all exception translators (implementations of the `PersistenceExceptionTranslator` interface) and advises all beans marked with the `@Repository` annotation so that the discovered translators can intercept and apply the appropriate translation on the thrown exceptions.

In summary, you can implement DAOs based on the plain persistence technology’s API and annotations while still benefiting from Spring-managed transactions, dependency injection, and transparent exception conversion (if desired) to Spring’s custom exception hierarchies.

We start with coverage of Hibernate 5 in a Spring environment, using it to demonstrate the approach that Spring takes towards integrating OR mappers. This section covers many issues in detail and shows different variations of DAO implementations and transaction demarcation. Most of these patterns can be directly translated to all other supported ORM tools. The later sections in this chapter then cover the other ORM technologies and show brief examples.

## `SessionFactory` Setup in a Spring Container

To avoid tying application objects to hard-coded resource lookups, you can define resources (such as a JDBC `DataSource` or a Hibernate `SessionFactory`) as beans in the Spring container. Application objects that need to access resources receive references to such predefined instances through bean references, as illustrated in the DAO definition in the next section.
The following excerpt from an XML application context definition shows how to set up a JDBC `DataSource` and a Hibernate `SessionFactory` on top of it:

```
<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
	<property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
	<property name="url" value="jdbc:hsqldb:hsql://localhost:9001"/>
	<property name="username" value="sa"/>
	<property name="password" value=""/>
</bean>

<bean id="mySessionFactory" class="org.springframework.orm.hibernate5.LocalSessionFactoryBean">
	<property name="dataSource" ref="myDataSource"/>
	<property name="mappingResources">
		<list>
			<value>product.hbm.xml</value>
		</list>
	</property>
	<property name="hibernateProperties">
		<value>
			hibernate.dialect=org.hibernate.dialect.HSQLDialect
		</value>
	</property>
</bean>
```

Switching from a local Jakarta Commons DBCP `BasicDataSource` to a JNDI-located `DataSource` (usually managed by an application server) is only a matter of configuration, as the following example shows:

```
<beans>
	<jee:jndi-lookup id="myDataSource" jndi-name="java:comp/env/jdbc/myds"/>
</beans>
```

You can also access a JNDI-located `SessionFactory`, using Spring’s `JndiObjectFactoryBean` / `<jee:jndi-lookup>` to retrieve and expose it. However, that is typically not common outside of an EJB context.

## Implementing DAOs Based on the Plain Hibernate API

Hibernate has a feature called contextual sessions, wherein Hibernate itself manages one current `Session` per transaction. This is roughly equivalent to Spring’s synchronization of one Hibernate `Session` per transaction. A corresponding DAO implementation resembles the following example, based on the plain Hibernate API:

```
public class ProductDaoImpl implements ProductDao {

	private SessionFactory sessionFactory;

	public void setSessionFactory(SessionFactory sessionFactory) {
		this.sessionFactory = sessionFactory;
	}

	public Collection loadProductsByCategory(String category) {
		return this.sessionFactory.getCurrentSession()
				.createQuery("from test.Product product where product.category=?")
				.setParameter(0, category)
				.list();
	}
}
```

```
class ProductDaoImpl(private val sessionFactory: SessionFactory) : ProductDao {

	fun loadProductsByCategory(category: String): Collection<*> {
		return sessionFactory.currentSession
				.createQuery("from test.Product product where product.category=?")
				.setParameter(0, category)
				.list()
	}
}
```

This style is similar to that of the Hibernate reference documentation and examples, except for holding the `SessionFactory` in an instance variable. We strongly recommend such an instance-based setup over the old-school `static` `HibernateUtil` class from Hibernate’s CaveatEmptor sample application. (In general, do not keep any resources in `static` variables unless absolutely necessary.)

The preceding DAO example follows the dependency injection pattern. It fits nicely into a Spring IoC container, as it would if coded against Spring’s `HibernateTemplate`. You can also set up such a DAO in plain Java (for example, in unit tests). To do so, instantiate it and call `setSessionFactory(..)` with the desired factory reference. As a Spring bean definition, the DAO would resemble the following:

```
<bean id="myProductDao" class="product.ProductDaoImpl">
	<property name="sessionFactory" ref="mySessionFactory"/>
</bean>
```

The main advantage of this DAO style is that it depends on the Hibernate API only. No import of any Spring class is required. This is appealing from a non-invasiveness perspective and may feel more natural to Hibernate developers.
However, the DAO throws plain `HibernateException` (which is unchecked, so it does not have to be declared or caught), which means that callers can treat exceptions only as being generally fatal — unless they want to depend on Hibernate’s own exception hierarchy. Catching specific causes (such as an optimistic locking failure) is not possible without tying the caller to the implementation strategy. This trade-off might be acceptable to applications that are strongly Hibernate-based, do not need any special exception treatment, or both.

Fortunately, Spring’s `LocalSessionFactoryBean` supports Hibernate’s `SessionFactory.getCurrentSession()` method for any Spring transaction strategy, returning the current Spring-managed transactional `Session`, even with `HibernateTransactionManager`. The standard behavior of that method remains to return the current `Session` associated with the ongoing JTA transaction, if any. This behavior applies regardless of whether you use Spring’s `JtaTransactionManager`, EJB container managed transactions (CMTs), or JTA.

In summary, you can implement DAOs based on the plain Hibernate API, while still being able to participate in Spring-managed transactions.

## Declarative Transaction Demarcation

We recommend that you use Spring’s declarative transaction support, which lets you replace explicit transaction demarcation API calls in your Java code with an AOP transaction interceptor. You can configure this transaction interceptor in a Spring container by using either Java annotations or XML. This declarative transaction capability lets you keep business services free of repetitive transaction demarcation code and focus on adding business logic, which is the real value of your application.

Before you continue, we strongly encourage you to read Declarative Transaction Management if you have not already done so.

You can annotate the service layer with `@Transactional` annotations and instruct the Spring container to find these annotations and provide transactional semantics for these annotated methods. The following example shows how to do so:

```
public class ProductServiceImpl implements ProductService {

	private ProductDao productDao;

	public void setProductDao(ProductDao productDao) {
		this.productDao = productDao;
	}

	@Transactional
	public void increasePriceOfAllProductsInCategory(final String category) {
		List productsToChange = this.productDao.loadProductsByCategory(category);
		// ...
	}

	@Transactional(readOnly = true)
	public List<Product> findAllProducts() {
		return this.productDao.findAllProducts();
	}
}
```

```
class ProductServiceImpl(private val productDao: ProductDao) : ProductService {

	@Transactional
	fun increasePriceOfAllProductsInCategory(category: String) {
		val productsToChange = productDao.loadProductsByCategory(category)
		// ...
	}

	@Transactional(readOnly = true)
	fun findAllProducts() = productDao.findAllProducts()
}
```

In the container, you need to set up the `HibernateTransactionManager` implementation (as a bean) and a `<tx:annotation-driven/>` entry, opting into `@Transactional` processing at runtime. The following example shows how to do so:

```
<!-- SessionFactory, DataSource, etc. omitted -->

<bean id="transactionManager"
		class="org.springframework.orm.hibernate5.HibernateTransactionManager">
	<property name="sessionFactory" ref="sessionFactory"/>
</bean>

<tx:annotation-driven/>

<bean id="myProductService" class="product.SimpleProductService">
	<property name="productDao" ref="myProductDao"/>
</bean>
```

## Programmatic Transaction Demarcation

You can demarcate transactions in a higher level of the application, on top of lower-level data access services that span any number of operations. No restrictions exist on the implementation of the surrounding business service. It needs only a Spring `PlatformTransactionManager`.
Again, the latter can come from anywhere, but preferably as a bean reference through a `setTransactionManager(..)` method. Also, the `productDAO` should be set by a `setProductDao(..)` method. The following pair of snippets show a transaction manager and a business service definition in a Spring application context and an example for a business method implementation:

```
<bean id="myTxManager" class="org.springframework.orm.hibernate5.HibernateTransactionManager">
	<property name="sessionFactory" ref="mySessionFactory"/>
</bean>

<bean id="myProductService" class="product.ProductServiceImpl">
	<property name="transactionManager" ref="myTxManager"/>
	<property name="productDao" ref="myProductDao"/>
</bean>
```

```
public class ProductServiceImpl implements ProductService {

	private TransactionTemplate transactionTemplate;

	private ProductDao productDao;

	public void setTransactionManager(PlatformTransactionManager transactionManager) {
		this.transactionTemplate = new TransactionTemplate(transactionManager);
	}

	public void setProductDao(ProductDao productDao) {
		this.productDao = productDao;
	}

	public void increasePriceOfAllProductsInCategory(final String category) {
		this.transactionTemplate.execute(new TransactionCallbackWithoutResult() {
			public void doInTransactionWithoutResult(TransactionStatus status) {
				List productsToChange = productDao.loadProductsByCategory(category);
				// do the price increase...
			}
		});
	}
}
```

```
class ProductServiceImpl(transactionManager: PlatformTransactionManager,
		private val productDao: ProductDao) : ProductService {

	private val transactionTemplate = TransactionTemplate(transactionManager)

	fun increasePriceOfAllProductsInCategory(category: String) {
		transactionTemplate.execute {
			val productsToChange = productDao.loadProductsByCategory(category)
			// do the price increase...
		}
	}
}
```

`TransactionInterceptor` lets any checked application exception be thrown with the callback code, while `TransactionTemplate` is restricted to unchecked exceptions within the callback. `TransactionTemplate` triggers a rollback in case of an unchecked application exception or if the transaction is marked rollback-only by the application (through `TransactionStatus`). By default, `TransactionInterceptor` behaves the same way but allows configurable rollback policies per method.

## Transaction Management Strategies

Both `TransactionTemplate` and `TransactionInterceptor` delegate the actual transaction handling to a `PlatformTransactionManager` instance, which for Hibernate applications can be a `HibernateTransactionManager` (for a single Hibernate `SessionFactory`, using a `ThreadLocal` `Session` under the hood) or a `JtaTransactionManager` (delegating to the JTA subsystem of the container). You can even use a custom `PlatformTransactionManager` implementation. Switching from native Hibernate transaction management to JTA (such as when facing distributed transaction requirements for certain deployments of your application) is only a matter of configuration (a configuration sketch follows below). You can replace the Hibernate transaction manager with Spring’s JTA transaction implementation. Both transaction demarcation and data access code work without changes, because they use the generic transaction management APIs.

For distributed transactions across multiple Hibernate session factories, you can combine `JtaTransactionManager` as a transaction strategy with multiple `LocalSessionFactoryBean` definitions. Each DAO then gets one specific `SessionFactory` reference passed into its corresponding bean property. If all underlying JDBC data sources are transactional container ones, a business service can demarcate transactions across any number of DAOs and any number of session factories without special regard, as long as it uses `JtaTransactionManager` as the strategy.
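As a sketch of that configuration-only switch, the Hibernate transaction manager bean shown earlier might be replaced as follows (the bean id follows the earlier examples):

```
<!-- replaces the HibernateTransactionManager definition shown earlier -->
<bean id="myTxManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>
```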
Both `HibernateTransactionManager` and `JtaTransactionManager` allow for proper JVM-level cache handling with Hibernate, without container-specific transaction manager lookup or a JCA connector (if you do not use EJB to initiate transactions).

`HibernateTransactionManager` can export the Hibernate JDBC `Connection` to plain JDBC access code for a specific `DataSource`. This ability allows for high-level transaction demarcation with mixed Hibernate and JDBC data access completely without JTA, provided you access only one database. `HibernateTransactionManager` automatically exposes the Hibernate transaction as a JDBC transaction if you have set up the passed-in `SessionFactory` with a `DataSource` through the `dataSource` property of the `LocalSessionFactoryBean` class. Alternatively, you can specify explicitly the `DataSource` for which the transactions are supposed to be exposed, through the `dataSource` property of the `HibernateTransactionManager` class.

## Comparing Container-managed and Locally Defined Resources

You can switch between a container-managed JNDI `SessionFactory` and a locally defined one without having to change a single line of application code. Whether to keep resource definitions in the container or locally within the application is mainly a matter of the transaction strategy that you use. Compared to a Spring-defined local `SessionFactory`, a manually registered JNDI `SessionFactory` does not provide any benefits. Deploying a `SessionFactory` through Hibernate’s JCA connector provides the added value of participating in the Jakarta EE server’s management infrastructure, but does not add actual value beyond that.

Spring’s transaction support is not bound to a container. When configured with any strategy other than JTA, transaction support also works in a stand-alone or test environment. Especially in the typical case of single-database transactions, Spring’s single-resource local transaction support is a lightweight and powerful alternative to JTA. When you use local EJB stateless session beans to drive transactions, you depend both on an EJB container and on JTA, even if you access only a single database and use only stateless session beans to provide declarative transactions through container-managed transactions. Direct use of JTA programmatically also requires a Jakarta EE environment. Spring-driven transactions can work as well with a locally defined Hibernate `SessionFactory` as they do with a local JDBC `DataSource`, provided they access a single database. Thus, you need only use Spring’s JTA transaction strategy when you have distributed transaction requirements. A JCA connector requires container-specific deployment steps, and (obviously) JCA support in the first place. This configuration requires more work than deploying a simple web application with local resource definitions and Spring-driven transactions.

All things considered, if you do not use EJBs, stick with local `SessionFactory` setup and Spring’s `HibernateTransactionManager`. You get all of the benefits, including proper transactional JVM-level caching and distributed transactions, without the inconvenience of container deployment. JNDI registration of a Hibernate `SessionFactory` through the JCA connector adds value only when used in conjunction with EJBs.

## Spurious Application Server Warnings with Hibernate

In some JTA environments with very strict `XADataSource` implementations (currently some WebLogic Server and WebSphere versions), when Hibernate is configured without regard to the JTA transaction manager for that environment, spurious warnings or exceptions can show up in the application server log.
These warnings or exceptions indicate that the connection being accessed is no longer valid or JDBC access is no longer valid, possibly because the transaction is no longer active. As an example, here is an actual exception from WebLogic:

> java.sql.SQLException: The transaction is no longer active - status: 'Committed'. No further JDBC access is allowed within this transaction.

Another common problem is a connection leak after JTA transactions, with Hibernate sessions (and potentially underlying JDBC connections) not getting closed properly.

You can resolve such issues by making Hibernate aware of the JTA transaction manager, to which it synchronizes (along with Spring). You have two options for doing this:

* Pass your Spring `JtaTransactionManager` bean to your Hibernate setup. The easiest way is a bean reference into the `jtaTransactionManager` property of your `LocalSessionFactoryBean` bean (see Hibernate Transaction Setup). Spring then makes the corresponding JTA strategies available to Hibernate.
* You may also configure Hibernate’s JTA-related properties explicitly, in particular `hibernate.transaction.coordinator_class`, `hibernate.connection.handling_mode`, and potentially `hibernate.transaction.jta.platform` in your `hibernateProperties` on `LocalSessionFactoryBean` (see Hibernate’s manual for details on those properties).

The remainder of this section describes the sequence of events that occur with and without Hibernate’s awareness of the JTA transaction manager.

When Hibernate is not configured with any awareness of the JTA transaction manager, the following events occur when a JTA transaction commits:

* The JTA transaction commits.
* Spring’s `JtaTransactionManager` is synchronized to the JTA transaction, so it is called back through an `afterCompletion` callback by the JTA transaction manager.
* Among other activities, this synchronization can trigger a callback by Spring to Hibernate, through Hibernate’s `afterTransactionCompletion` callback (used to clear the Hibernate cache), followed by an explicit `close()` call on the Hibernate session, which causes Hibernate to attempt to `close()` the JDBC Connection.
* In some environments, this `Connection.close()` call then triggers the warning or error, as the application server no longer considers the `Connection` to be usable, because the transaction has already been committed.

When Hibernate is configured with awareness of the JTA transaction manager, the following events occur when a JTA transaction commits:

* The JTA transaction is ready to commit.
* Spring’s `JtaTransactionManager` is synchronized to the JTA transaction, so the transaction is called back through a `beforeCompletion` callback by the JTA transaction manager.
* Spring is aware that Hibernate itself is synchronized to the JTA transaction and behaves differently than in the previous scenario. In particular, it aligns with Hibernate’s transactional resource management.
* Hibernate is synchronized to the JTA transaction, so the transaction is called back through an `afterCompletion` callback by the JTA transaction manager and can properly clear its cache.

Spring’s JPA support, available under the `org.springframework.orm.jpa` package, offers comprehensive support for the Java Persistence API in a manner similar to the integration with Hibernate while being aware of the underlying implementation in order to provide additional features.

## Three Options for JPA Setup in a Spring Environment

The Spring JPA support offers three ways of setting up the JPA `EntityManagerFactory` that is used by the application to obtain an entity manager.
### Using `LocalEntityManagerFactoryBean`

You can use this option only in simple deployment environments such as stand-alone applications and integration tests.

The `LocalEntityManagerFactoryBean` creates an `EntityManagerFactory` suitable for simple deployment environments where the application uses only JPA for data access. The factory bean uses the JPA `PersistenceProvider` auto-detection mechanism (according to JPA’s Java SE bootstrapping) and, in most cases, requires you to specify only the persistence unit name. The following XML example configures such a bean:

```
<beans>
	<bean id="myEmf" class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean">
		<property name="persistenceUnitName" value="myPersistenceUnit"/>
	</bean>
</beans>
```

This form of JPA deployment is the simplest and the most limited. You cannot refer to an existing JDBC `DataSource` bean definition, and no support for global transactions exists. Furthermore, weaving (byte-code transformation) of persistent classes is provider-specific, often requiring a specific JVM agent to be specified on startup. This option is sufficient only for stand-alone applications and test environments, for which the JPA specification is designed.

### Obtaining an EntityManagerFactory from JNDI

You can use this option when deploying to a Jakarta EE server. Check your server’s documentation on how to deploy a custom JPA provider into your server, allowing for a different provider than the server’s default.

Obtaining an `EntityManagerFactory` from JNDI (for example, in a Jakarta EE environment) is a matter of changing the XML configuration, as the following example shows:

```
<beans>
	<jee:jndi-lookup id="myEmf" jndi-name="persistence/myPersistenceUnit"/>
</beans>
```

This action assumes standard Jakarta EE bootstrapping. The Jakarta EE server auto-detects persistence units (in effect, `META-INF/persistence.xml` files in application jars) and `persistence-unit-ref` entries in the Jakarta EE deployment descriptor (for example, `web.xml`) and defines environment naming context locations for those persistence units.

In such a scenario, the entire persistence unit deployment, including the weaving (byte-code transformation) of persistent classes, is up to the Jakarta EE server. The JDBC `DataSource` is defined through a JNDI location in the `persistence.xml` file. `EntityManager` transactions are integrated with the server’s JTA subsystem. Spring merely uses the obtained `EntityManagerFactory`, passing it on to application objects through dependency injection and managing transactions for the persistence unit (typically through `JtaTransactionManager`).

If you use multiple persistence units in the same application, the bean names of such JNDI-retrieved persistence units should match the persistence unit names that the application uses to refer to them (for example, in `@PersistenceUnit` and `@PersistenceContext` annotations).

### Using `LocalContainerEntityManagerFactoryBean`

You can use this option for full JPA capabilities in a Spring-based application environment. This includes web containers such as Tomcat, stand-alone applications, and integration tests with sophisticated persistence requirements. The `LocalContainerEntityManagerFactoryBean` gives full control over `EntityManagerFactory` configuration and is appropriate for environments where fine-grained customization is required.

The `LocalContainerEntityManagerFactoryBean` creates a `PersistenceUnitInfo` instance based on the `persistence.xml` file, the supplied `dataSourceLookup` strategy, and the specified `loadTimeWeaver`. It is, thus, possible to work with custom data sources outside of JNDI and to control the weaving process.
The following example shows a typical bean definition for a `LocalContainerEntityManagerFactoryBean`:

```
<beans>
	<bean id="myEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
		<property name="dataSource" ref="someDataSource"/>
		<property name="loadTimeWeaver">
			<bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver"/>
		</property>
	</bean>
</beans>
```

The following example shows a typical `persistence.xml` file:

```
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
	<persistence-unit name="myUnit" transaction-type="RESOURCE_LOCAL">
		<mapping-file>META-INF/orm.xml</mapping-file>
		<exclude-unlisted-classes/>
	</persistence-unit>
</persistence>
```

Using the `LocalContainerEntityManagerFactoryBean` is the most powerful JPA setup option, allowing for flexible local configuration within the application. It supports links to an existing JDBC `DataSource`, supports both local and global transactions, and so on. However, it also imposes requirements on the runtime environment, such as the availability of a weaving-capable class loader if the persistence provider demands byte-code transformation.

This option may conflict with the built-in JPA capabilities of a Jakarta EE server. In a full Jakarta EE environment, consider obtaining your `EntityManagerFactory` from JNDI. Alternatively, specify a custom `persistenceXmlLocation` on your `LocalContainerEntityManagerFactoryBean` definition (for example, META-INF/my-persistence.xml) and include only a descriptor with that name in your application jar files. Because the Jakarta EE server looks only for default `META-INF/persistence.xml` files, it ignores such custom persistence units and, hence, avoids conflicts with a Spring-driven JPA setup upfront. (This applies to Resin 3.1, for example.)

The `LoadTimeWeaver` interface is a Spring-provided abstraction that lets JPA `ClassTransformer` instances be plugged in a specific manner, depending on whether the environment is a web container or application server. Hooking `ClassTransformers` through an agent is typically not efficient. The agents work against the entire virtual machine and inspect every class that is loaded, which is usually undesirable in a production server environment.

Spring provides a number of `LoadTimeWeaver` implementations for various environments, letting `ClassTransformer` instances be applied only for each class loader and not for each VM.

See the Spring configuration in the AOP chapter for more insight regarding the `LoadTimeWeaver` implementations and their setup, either generic or customized to various platforms (such as Tomcat, JBoss and WebSphere).

As described in Spring configuration, you can configure a context-wide `LoadTimeWeaver` by using the `@EnableLoadTimeWeaving` annotation or the `<context:load-time-weaver/>` XML element. Such a global weaver is automatically picked up by all JPA `LocalContainerEntityManagerFactoryBean` instances. A context-wide weaver is the preferred way of setting up a load-time weaver, delivering auto-detection of the platform (e.g. Tomcat’s weaving-capable class loader or Spring’s JVM agent) and automatic propagation of the weaver to all weaver-aware beans. However, you can, if needed, manually specify a dedicated weaver through the `loadTimeWeaver` property, as the following example shows:

```
<bean id="emf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
	<property name="loadTimeWeaver">
		<bean class="org.springframework.instrument.classloading.ReflectiveLoadTimeWeaver"/>
	</property>
</bean>
```

No matter how the LTW is configured, by using this technique, JPA applications relying on instrumentation can run in the target platform (for example, Tomcat) without needing an agent.
This is especially important when the hosting applications rely on different JPA implementations, because the JPA transformers are applied only at the class-loader level and are, thus, isolated from each other.

### Dealing with Multiple Persistence Units

For applications that rely on multiple persistence unit locations (stored in various JARs in the classpath, for example), Spring offers the `PersistenceUnitManager` to act as a central repository and to avoid the persistence unit discovery process, which can be expensive. The default implementation (`DefaultPersistenceUnitManager`) lets multiple locations be specified. These locations are parsed and later retrieved through the persistence unit name. (By default, the classpath is searched for `META-INF/persistence.xml` files.) The following example configures multiple locations:

```
<bean id="pum" class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
	<property name="persistenceXmlLocations">
		<list>
			<value>org/springframework/orm/jpa/domain/persistence-multi.xml</value>
			<value>classpath:/my/package/**/custom-persistence.xml</value>
			<value>classpath*:META-INF/persistence.xml</value>
		</list>
	</property>
	<property name="dataSources">
		<map>
			<entry key="localDataSource" value-ref="local-db"/>
			<entry key="remoteDataSource" value-ref="remote-db"/>
		</map>
	</property>
	<!-- if no datasource is specified, use this one -->
	<property name="defaultDataSource" ref="remoteDataSource"/>
</bean>

<bean id="emf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
	<property name="persistenceUnitManager" ref="pum"/>
	<property name="persistenceUnitName" value="myCustomUnit"/>
</bean>
```

The default implementation allows customization of the `PersistenceUnitInfo` instances (before they are fed to the JPA provider) either declaratively (through its properties, which affect all hosted units) or programmatically (through the `PersistenceUnitPostProcessor`, which allows persistence unit selection). If no `PersistenceUnitManager` is specified, one is created and used internally by `LocalContainerEntityManagerFactoryBean`.

### Background Bootstrapping

`LocalContainerEntityManagerFactoryBean` supports background bootstrapping through the `bootstrapExecutor` property, as the following example shows:

```
<bean id="emf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
	<property name="bootstrapExecutor">
		<bean class="org.springframework.core.task.SimpleAsyncTaskExecutor"/>
	</property>
</bean>
```

The actual JPA provider bootstrapping is handed off to the specified executor and then runs in parallel to the application bootstrap thread. The exposed `EntityManagerFactory` proxy can be injected into other application components and is even able to respond to `EntityManagerFactoryInfo` configuration inspection. However, once the actual JPA provider is being accessed by other components (for example, calling `createEntityManager`), those calls block until the background bootstrapping has completed. In particular, when you use Spring Data JPA, make sure to set up deferred bootstrapping for its repositories as well.

## Implementing DAOs Based on JPA: `EntityManagerFactory` and `EntityManager`

Although `EntityManagerFactory` instances are thread-safe, `EntityManager` instances are not.

It is possible to write code against the plain JPA without any Spring dependencies, by using an injected `EntityManagerFactory` or `EntityManager`. Spring can understand the `@PersistenceUnit` and `@PersistenceContext` annotations both at the field and the method level if a `PersistenceAnnotationBeanPostProcessor` is enabled.
The following example shows a plain JPA DAO implementation that uses the `@PersistenceUnit` annotation:

```
public class ProductDaoImpl implements ProductDao {

	private EntityManagerFactory emf;

	@PersistenceUnit
	public void setEntityManagerFactory(EntityManagerFactory emf) {
		this.emf = emf;
	}

	public Collection loadProductsByCategory(String category) {
		EntityManager em = this.emf.createEntityManager();
		try {
			Query query = em.createQuery("from Product as p where p.category = ?1");
			query.setParameter(1, category);
			return query.getResultList();
		} finally {
			if (em != null) {
				em.close();
			}
		}
	}
}
```

```
class ProductDaoImpl : ProductDao {

	private lateinit var emf: EntityManagerFactory

	@PersistenceUnit
	fun setEntityManagerFactory(emf: EntityManagerFactory) {
		this.emf = emf
	}

	fun loadProductsByCategory(category: String): Collection<*> {
		val em = this.emf.createEntityManager()
		try {
			val query = em.createQuery("from Product as p where p.category = ?1")
			query.setParameter(1, category)
			return query.resultList
		} finally {
			em.close()
		}
	}
}
```

The preceding DAO has no dependency on Spring and still fits nicely into a Spring application context. Moreover, the DAO takes advantage of annotations to require the injection of the default `EntityManagerFactory`, as the following example bean definition shows:

```
<!-- bean post-processor for JPA annotations -->
<bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/>

<bean id="myProductDao" class="product.ProductDaoImpl"/>
```

As an alternative to explicitly defining a `PersistenceAnnotationBeanPostProcessor`, consider using the Spring `context:annotation-config` XML element in your application context configuration. Doing so automatically registers all Spring standard post-processors for annotation-based configuration, including `PersistenceAnnotationBeanPostProcessor`. Consider the following example:

```
<!-- post-processors for all standard config annotations -->
<context:annotation-config/>
```

The main problem with such a DAO is that it always creates a new `EntityManager` through the factory. You can avoid this by requesting a transactional `EntityManager` (also called a “shared EntityManager”, because it is a shared, thread-safe proxy for the actual transactional EntityManager) to be injected instead of the factory. The following example shows how to do so:

```
public class ProductDaoImpl implements ProductDao {

	@PersistenceContext
	private EntityManager em;

	public Collection loadProductsByCategory(String category) {
		Query query = em.createQuery("from Product as p where p.category = :category");
		query.setParameter("category", category);
		return query.getResultList();
	}
}
```

```
class ProductDaoImpl : ProductDao {

	@PersistenceContext
	private lateinit var em: EntityManager

	fun loadProductsByCategory(category: String): Collection<*> {
		val query = em.createQuery("from Product as p where p.category = :category")
		query.setParameter("category", category)
		return query.resultList
	}
}
```

The `@PersistenceContext` annotation has an optional attribute called `type`, which defaults to `PersistenceContextType.TRANSACTION`. You can use this default to receive a shared `EntityManager` proxy. The alternative, `PersistenceContextType.EXTENDED`, is a completely different affair. This results in a so-called extended `EntityManager`, which is not thread-safe and, hence, must not be used in a concurrently accessed component, such as a Spring-managed singleton bean. Extended `EntityManager` instances are only supposed to be used in stateful components that, for example, reside in a session, with the lifecycle of the `EntityManager` not tied to a current transaction but rather being completely up to the application.

The injected `EntityManager` is Spring-managed (aware of the ongoing transaction).
Even though the new DAO implementation uses method-level injection of an `EntityManager` instead of an `EntityManagerFactory`, no change is required in the bean definition due to annotation usage.

The main advantage of this DAO style is that it depends only on the Java Persistence API. No import of any Spring class is required. Moreover, as the JPA annotations are understood, the injections are applied automatically by the Spring container. This is appealing from a non-invasiveness perspective and can feel more natural to JPA developers.

### Implementing DAOs Based on `@Autowired` (typically with constructor-based injection)

`@PersistenceUnit` and `@PersistenceContext` can only be declared on methods and fields. What about providing JPA resources via constructors and other `@Autowired` injection points?

`EntityManagerFactory` can easily be injected via constructors and `@Autowired` fields/methods as long as the target is defined as a bean, e.g. via `LocalContainerEntityManagerFactoryBean`. The injection point matches the original `EntityManagerFactory` definition by type as-is.

However, an `@PersistenceContext`-style shared `EntityManager` reference is not available for regular dependency injection out of the box. In order to make it available for type-based matching as required by `@Autowired`, consider defining a `SharedEntityManagerBean` as a companion for your `EntityManagerFactory` definition:

```
<bean id="em" class="org.springframework.orm.jpa.support.SharedEntityManagerBean">
    <property name="entityManagerFactory" ref="emf"/>
</bean>
```

Alternatively, you may define an `@Bean` method based on `SharedEntityManagerCreator`:

```
@Bean("em")
public static EntityManager sharedEntityManager(EntityManagerFactory emf) {
    return SharedEntityManagerCreator.createSharedEntityManager(emf);
}
```

In the case of multiple persistence units, each `EntityManagerFactory` definition needs to be accompanied by a corresponding `EntityManager` bean definition, ideally with qualifiers that match the distinct `EntityManagerFactory` definition in order to distinguish the persistence units via `@Autowired @Qualifier("…")`.

## Spring-driven JPA Transactions

We strongly encourage you to read Declarative Transaction Management, if you have not already done so, to get more detailed coverage of Spring’s declarative transaction support.

The recommended strategy for JPA is local transactions through JPA’s native transaction support. Spring’s `JpaTransactionManager` provides many capabilities known from local JDBC transactions (such as transaction-specific isolation levels and resource-level read-only optimizations) against any regular JDBC connection pool (no XA requirement).

Spring JPA also lets a configured `JpaTransactionManager` expose a JPA transaction to JDBC access code that accesses the same `DataSource`, provided that the registered `JpaDialect` supports retrieval of the underlying JDBC `Connection`. Spring provides dialects for the EclipseLink and Hibernate JPA implementations. See the next section for details on the `JpaDialect` mechanism.

## `JpaDialect` and `JpaVendorAdapter`

As an advanced feature, `JpaTransactionManager` and subclasses of `AbstractEntityManagerFactoryBean` allow a custom `JpaDialect` to be passed into the `jpaDialect` bean property.
A `JpaDialect` implementation can enable the following advanced features supported by Spring, usually in a vendor-specific manner:

* Applying specific transaction semantics (such as a custom isolation level or transaction timeout)
* Retrieving the transactional JDBC `Connection` (for exposure to JDBC-based DAOs)
* Advanced translation of `PersistenceException` to Spring’s `DataAccessException`

This is particularly valuable for special transaction semantics and for advanced translation of exceptions. The default implementation ( `DefaultJpaDialect` ) does not provide any special abilities and, if the features listed earlier are required, you have to specify the appropriate dialect.

As an even broader provider adaptation facility primarily for Spring’s full-featured `LocalContainerEntityManagerFactoryBean` setup, `JpaVendorAdapter` combines the capabilities of `JpaDialect` with other provider-specific defaults. Specifying a `HibernateJpaVendorAdapter` or `EclipseLinkJpaVendorAdapter` is the most convenient way of configuring Hibernate or EclipseLink, respectively, for a `LocalContainerEntityManagerFactoryBean`.

See the `JpaDialect` and `JpaVendorAdapter` javadoc for more details of its operations and how they are used within Spring’s JPA support.

## Setting up JPA with JTA Transaction Management

As an alternative to `JpaTransactionManager`, Spring also allows for multi-resource transaction coordination through JTA, either in a Jakarta EE environment or with a stand-alone transaction coordinator, such as Atomikos. Aside from choosing Spring’s `JtaTransactionManager`, you need to take a few further steps:

* The underlying JDBC connection pools need to be XA-capable and be integrated with your transaction coordinator. This is usually straightforward in a Jakarta EE environment, exposing a different kind of `DataSource` through JNDI. See your application server documentation for details. Analogously, a standalone transaction coordinator usually comes with special XA-integrated `DataSource` variants. Again, check its documentation.
* The JPA `EntityManagerFactory` setup needs to be configured for JTA. This is provider-specific, typically through special properties to be specified as `jpaProperties` on `LocalContainerEntityManagerFactoryBean`. In the case of Hibernate, these properties are even version-specific. See your Hibernate documentation for details.
* `HibernateJpaVendorAdapter` enforces certain Spring-oriented defaults, such as the connection release mode, `on-close`, which matches Hibernate’s own default in Hibernate 5.0 but not any more in Hibernate 5.1+. For a JTA setup, make sure to declare your persistence unit transaction type as "JTA". Alternatively, set Hibernate 5.2’s `hibernate.connection.handling_mode` property to `DELAYED_ACQUISITION_AND_RELEASE_AFTER_STATEMENT` to restore Hibernate’s own default. See Spurious Application Server Warnings with Hibernate for related notes.
* Alternatively, consider obtaining the `EntityManagerFactory` from your application server itself (that is, through a JNDI lookup instead of a locally declared `LocalContainerEntityManagerFactoryBean`). A server-provided `EntityManagerFactory` might require special definitions in your server configuration (making the deployment less portable) but is set up for the server’s JTA environment.

## Native Hibernate Setup and Native Hibernate Transactions for JPA Interaction

A native `LocalSessionFactoryBean` setup in combination with `HibernateTransactionManager` allows for interaction with `@PersistenceContext` and other JPA access code. A Hibernate `SessionFactory` natively implements JPA’s `EntityManagerFactory` interface now, and a Hibernate `Session` handle natively is a JPA `EntityManager`. Spring’s JPA support facilities automatically detect native Hibernate sessions.

Such native Hibernate setup can, therefore, serve as a replacement for a standard JPA `LocalContainerEntityManagerFactoryBean` and `JpaTransactionManager` combination in many scenarios, allowing for interaction with `SessionFactory.getCurrentSession()` (and also `HibernateTemplate`) next to `@PersistenceContext EntityManager` within the same local transaction.
Such a setup also provides stronger Hibernate integration and more configuration flexibility, because it is not constrained by JPA bootstrap contracts. You do not need `HibernateJpaVendorAdapter` configuration in such a scenario, since Spring’s native Hibernate setup provides even more features (for example, custom Hibernate Integrator setup, Hibernate 5.3 bean container integration, and stronger optimizations for read-only transactions). Last but not least, you can also express native Hibernate setup through `LocalSessionFactoryBuilder`, seamlessly integrating with `@Bean` style configuration (no `FactoryBean` involved).

This chapter describes Spring’s Object-XML Mapping support. Object-XML Mapping (O-X mapping for short) is the act of converting an XML document to and from an object. This conversion process is also known as XML Marshalling, or XML Serialization. This chapter uses these terms interchangeably.

Within the field of O-X mapping, a marshaller is responsible for serializing an object (graph) to XML. In similar fashion, an unmarshaller deserializes the XML to an object graph. This XML can take the form of a DOM document, an input or output stream, or a SAX handler.

Some of the benefits of using Spring for your O/X mapping needs are:

### Ease of configuration

Spring’s bean factory makes it easy to configure marshallers, without needing to construct JAXB contexts, JiBX binding factories, and so on. You can configure the marshallers as you would any other bean in your application context. Additionally, XML namespace-based configuration is available for a number of marshallers, making the configuration even simpler.

### Consistent Interfaces

Spring’s O-X mapping operates through two global interfaces: `Marshaller` and `Unmarshaller`. These abstractions let you switch O-X mapping frameworks with relative ease, with little or no change required on the classes that do the marshalling. This approach has the additional benefit of making it possible to do XML marshalling with a mix-and-match approach (for example, some marshalling performed using JAXB and some by XStream) in a non-intrusive fashion, letting you use the strength of each technology.

## `Marshaller` and `Unmarshaller`

As stated in the introduction, a marshaller serializes an object to XML, and an unmarshaller deserializes an XML stream to an object. This section describes the two Spring interfaces used for this purpose.

### `Marshaller`

Spring abstracts all marshalling operations behind the `org.springframework.oxm.Marshaller` interface, the main method of which follows:

```
public interface Marshaller {

    /**
     * Marshal the object graph with the given root into the provided Result.
     */
    void marshal(Object graph, Result result) throws XmlMappingException, IOException;
}
```

The `Marshaller` interface has one main method, which marshals the given object to a given `javax.xml.transform.Result`. The result is a tagging interface that basically represents an XML output abstraction. Concrete implementations wrap various XML representations, as the following table indicates:

| `Result` implementation | Wraps XML representation |
| --- | --- |
| `DOMResult` | `org.w3c.dom.Node` |
| `SAXResult` | `org.xml.sax.ContentHandler` |
| `StreamResult` | `java.io.File`, `java.io.OutputStream`, or `java.io.Writer` |

Although the `marshal()` method accepts a plain `Object` as its first parameter, most `Marshaller` implementations cannot handle arbitrary objects. Instead, an object class must be mapped in a mapping file, be marked with an annotation, be registered with the marshaller, or have a common base class.

### `Unmarshaller`

Similar to the `Marshaller`, we have the `org.springframework.oxm.Unmarshaller` interface:

```
public interface Unmarshaller {

    /**
     * Unmarshal the given provided Source into an object graph.
     */
    Object unmarshal(Source source) throws XmlMappingException, IOException;
}
```

This interface also has one method, which reads from the given `javax.xml.transform.Source` (an XML input abstraction) and returns the object read.
As with `Result`, `Source` is a tagging interface that has three concrete implementations. Each wraps a different XML representation, as the following table indicates:

| `Source` implementation | Wraps XML representation |
| --- | --- |
| `DOMSource` | `org.w3c.dom.Node` |
| `SAXSource` | `org.xml.sax.InputSource` |
| `StreamSource` | `java.io.File`, `java.io.InputStream`, or `java.io.Reader` |

Even though there are two separate marshalling interfaces ( `Marshaller` and `Unmarshaller` ), all implementations in Spring-WS implement both in one class. This means that you can wire up one marshaller class and refer to it both as a marshaller and as an unmarshaller in your `applicationContext.xml`.

### `XmlMappingException`

Spring converts exceptions from the underlying O-X mapping tool to its own exception hierarchy with the `XmlMappingException` as the root exception. These runtime exceptions wrap the original exception so that no information is lost. Additionally, the `MarshallingFailureException` and `UnmarshallingFailureException` provide a distinction between marshalling and unmarshalling operations, even though the underlying O-X mapping tool does not do so.

## Using `Marshaller` and `Unmarshaller`

You can use Spring’s OXM for a wide variety of situations. In the following example, we use it to marshal the settings of a Spring-managed application as an XML file. In the following example, we use a simple JavaBean to represent the settings:

```
public class Settings {

    private boolean fooEnabled;

    public boolean isFooEnabled() {
        return fooEnabled;
    }

    public void setFooEnabled(boolean fooEnabled) {
        this.fooEnabled = fooEnabled;
    }
}
```

```
class Settings {
    var isFooEnabled: Boolean = false
}
```

The application class uses this bean to store its settings. Besides a main method, the class has two methods: `saveSettings()` saves the settings bean to a file named `settings.xml`, and `loadSettings()` loads these settings again. The following `main()` method constructs a Spring application context and calls these two methods:

```
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.oxm.Marshaller;
import org.springframework.oxm.Unmarshaller;

public class Application {

    private static final String FILE_NAME = "settings.xml";

    private Settings settings = new Settings();

    private Marshaller marshaller;

    private Unmarshaller unmarshaller;

    public void setMarshaller(Marshaller marshaller) {
        this.marshaller = marshaller;
    }

    public void setUnmarshaller(Unmarshaller unmarshaller) {
        this.unmarshaller = unmarshaller;
    }

    public void saveSettings() throws IOException {
        try (FileOutputStream os = new FileOutputStream(FILE_NAME)) {
            this.marshaller.marshal(settings, new StreamResult(os));
        }
    }

    public void loadSettings() throws IOException {
        try (FileInputStream is = new FileInputStream(FILE_NAME)) {
            this.settings = (Settings) this.unmarshaller.unmarshal(new StreamSource(is));
        }
    }

    public static void main(String[] args) throws IOException {
        ApplicationContext appContext =
                new ClassPathXmlApplicationContext("applicationContext.xml");
        Application application = (Application) appContext.getBean("application");
        application.saveSettings();
        application.loadSettings();
    }
}
```

```
class Application {

    private var settings = Settings()

    lateinit var marshaller: Marshaller

    lateinit var unmarshaller: Unmarshaller

    fun saveSettings() {
        FileOutputStream(FILE_NAME).use { outputStream ->
            marshaller.marshal(settings, StreamResult(outputStream))
        }
    }

    fun loadSettings() {
        FileInputStream(FILE_NAME).use { inputStream ->
            settings = unmarshaller.unmarshal(StreamSource(inputStream)) as Settings
        }
    }
}

private const val FILE_NAME = "settings.xml"

fun main(args: Array<String>) {
    val appContext = ClassPathXmlApplicationContext("applicationContext.xml")
    val application = appContext.getBean("application") as Application
    application.saveSettings()
    application.loadSettings()
}
```

The `Application` requires both a `marshaller` and an `unmarshaller` property to be set. We can do so by using the following application context:

```
<beans>
    <bean id="application" class="Application">
        <property name="marshaller" ref="xstreamMarshaller" />
        <property name="unmarshaller" ref="xstreamMarshaller" />
    </bean>
    <bean id="xstreamMarshaller" class="org.springframework.oxm.xstream.XStreamMarshaller"/>
</beans>
```

This application context uses XStream, but we could have used any of the other marshaller instances described later in this chapter. Note that, by default, XStream does not require any further configuration, so the bean definition is rather simple. Also note that the `XStreamMarshaller` implements both `Marshaller` and `Unmarshaller`, so we can refer to the `xstreamMarshaller` bean in both the `marshaller` and `unmarshaller` properties of the application.

This sample application produces the following `settings.xml` file:

```
<?xml version="1.0" encoding="UTF-8"?>
<settings foo-enabled="false"/>
```

## XML Configuration Namespace

You can configure marshallers more concisely by using tags from the OXM namespace. To make these tags available, you must first reference the appropriate schema in the preamble of the XML configuration file. The following example shows how to do so:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:oxm="http://www.springframework.org/schema/oxm" (1)
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/oxm
        https://www.springframework.org/schema/oxm/spring-oxm.xsd"> (2)
```

1 | Reference the `oxm` schema. |
2 | Specify the `oxm` schema location. |

The schema makes several marshaller elements available. Each tag is explained in its respective marshaller’s section. As an example, though, the configuration of a JAXB2 marshaller might resemble the following:

```
<oxm:jaxb2-marshaller id="marshaller" contextPath="org.springframework.ws.samples.airline.schema"/>
```

## JAXB

The JAXB binding compiler translates a W3C XML Schema into one or more Java classes, a `jaxb.properties` file, and possibly some resource files. JAXB also offers a way to generate a schema from annotated Java classes.

Spring supports the JAXB 2.0 API as XML marshalling strategies, following the `Marshaller` and `Unmarshaller` interfaces described in `Marshaller` and `Unmarshaller`. The corresponding integration classes reside in the `org.springframework.oxm.jaxb` package.

### `Jaxb2Marshaller`

The `Jaxb2Marshaller` class implements both of Spring’s `Marshaller` and `Unmarshaller` interfaces. It requires a context path to operate. You can set the context path by setting the `contextPath` property. The context path is a list of colon-separated Java package names that contain schema derived classes. It also offers a `classesToBeBound` property, which allows you to set an array of classes to be supported by the marshaller.
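As a quick illustration of these two bootstrap styles in Java configuration, consider the following minimal sketch (the nested `Flight` class is a placeholder, not from the reference text):

```
import jakarta.xml.bind.annotation.XmlRootElement;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.oxm.jaxb.Jaxb2Marshaller;

@Configuration
public class OxmConfig {

    // A placeholder JAXB-annotated class to bind.
    @XmlRootElement
    public static class Flight {
        public String number;
    }

    @Bean
    public Jaxb2Marshaller jaxb2Marshaller() {
        Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
        // Either a colon-separated context path of packages with schema-derived classes:
        // marshaller.setContextPath("org.example.schema");
        // ...or an explicit list of classes to be bound:
        marshaller.setClassesToBeBound(Flight.class);
        return marshaller;
    }
}
```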
Schema validation is performed by specifying one or more schema resources to the bean, as the following example shows:

```
<beans>
    <bean id="jaxb2Marshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
        <property name="classesToBeBound">
            <list>
                <value>org.springframework.oxm.jaxb.Flight</value>
                <value>org.springframework.oxm.jaxb.Flights</value>
            </list>
        </property>
        <property name="schema" value="classpath:org/springframework/oxm/schema.xsd"/>
    </bean>
</beans>
```

The `jaxb2-marshaller` element configures a `org.springframework.oxm.jaxb.Jaxb2Marshaller`. Alternatively, you can provide the list of classes to bind to the marshaller by using the `class-to-be-bound` child element:

```
<oxm:jaxb2-marshaller id="marshaller">
    <oxm:class-to-be-bound name="org.springframework.ws.samples.airline.schema.Airport"/>
    <oxm:class-to-be-bound name="org.springframework.ws.samples.airline.schema.Flight"/>
    ...
</oxm:jaxb2-marshaller>
```

## JiBX

The JiBX framework offers a solution similar to that which Hibernate provides for ORM: A binding definition defines the rules for how your Java objects are converted to or from XML. After preparing the binding and compiling the classes, a JiBX binding compiler enhances the class files and adds code to handle converting instances of the classes from or to XML. For more information on JiBX, see the JiBX web site. The Spring integration classes reside in the `org.springframework.oxm.jibx` package.

### `JibxMarshaller`

The `JibxMarshaller` class implements both the `Marshaller` and `Unmarshaller` interfaces. To operate, it requires the name of the class to marshal, which you can set using the `targetClass` property. Optionally, you can set the binding name by setting the `bindingName` property. In the following example, we bind the `Flights` class:

```
<beans>
    <bean id="jibxFlightsMarshaller" class="org.springframework.oxm.jibx.JibxMarshaller">
        <property name="targetClass">org.springframework.oxm.jibx.Flights</property>
    </bean>
    ...
</beans>
```

A `JibxMarshaller` is configured for a single class. If you want to marshal multiple classes, you have to configure multiple `JibxMarshaller` instances with different `targetClass` property values.

The `jibx-marshaller` tag configures a `org.springframework.oxm.jibx.JibxMarshaller`, as the following example shows:

```
<oxm:jibx-marshaller id="marshaller" target-class="org.springframework.ws.samples.airline.schema.Flight"/>
```

## XStream

XStream is a simple library to serialize objects to XML and back again. It does not require any mapping and generates clean XML. For more information on XStream, see the XStream web site. The Spring integration classes reside in the `org.springframework.oxm.xstream` package.

### `XStreamMarshaller`

The `XStreamMarshaller` does not require any configuration and can be configured in an application context directly. To further customize the XML, you can set an alias map, which consists of string aliases mapped to classes, as the following example shows:

```
<beans>
    <bean id="xstreamMarshaller" class="org.springframework.oxm.xstream.XStreamMarshaller">
        <property name="aliases">
            <props>
                <prop key="Flight">org.springframework.oxm.xstream.Flight</prop>
            </props>
        </property>
    </bean>
    ...
</beans>
```

Note that XStream is an XML serialization library, not a data binding library. Therefore, it has limited namespace support. As a result, it is rather unsuitable for usage within Web Services.
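To show the `Marshaller` side of this programmatically, here is a minimal sketch that marshals an object to a `String` with `XStreamMarshaller` (the `Flight` class and alias are placeholders, not from the reference text):

```
import java.io.StringWriter;
import java.util.Collections;
import javax.xml.transform.stream.StreamResult;
import org.springframework.oxm.xstream.XStreamMarshaller;

public class XStreamExample {

    // A placeholder class to serialize.
    public static class Flight {
        public String number = "EWR-42";
    }

    public static void main(String[] args) throws Exception {
        XStreamMarshaller marshaller = new XStreamMarshaller();
        // Alias "flight" to the placeholder class, mirroring the XML configuration above.
        marshaller.setAliases(Collections.singletonMap("flight", Flight.class));
        // Normally invoked by the Spring container when used as a bean.
        marshaller.afterPropertiesSet();

        StringWriter writer = new StringWriter();
        marshaller.marshal(new Flight(), new StreamResult(writer));
        // Prints something like: <flight><number>EWR-42</number></flight>
        System.out.println(writer);
    }
}
```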
This part of the appendix lists XML schemas for data access, including the following:

* the `tx` schema
* the `jdbc` schema

## The `tx` Schema

The `tx` tags deal with configuring all of those beans in Spring’s comprehensive support for transactions. These tags are covered in the chapter entitled Transaction Management.

We strongly encourage you to look at the `spring-tx.xsd` file that ships with the Spring distribution. This file contains the XML Schema for Spring’s transaction configuration and covers all of the various elements in the `tx` namespace, including attribute defaults and other information.

In the interest of completeness, to use the elements in the `tx` schema, you need to have the following preamble at the top of your Spring XML configuration file. The text in the following snippet references the correct schema so that the tags in the `tx` namespace are available to you:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:tx="http://www.springframework.org/schema/tx" (1)
    xmlns:aop="http://www.springframework.org/schema/aop"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/tx
        https://www.springframework.org/schema/tx/spring-tx.xsd (2)
        http://www.springframework.org/schema/aop
        https://www.springframework.org/schema/aop/spring-aop.xsd">
```

Often, when you use the elements in the `tx` namespace, you are also using the elements from the `aop` namespace. The preceding XML snippet contains the relevant lines needed to reference the `aop` schema so that the elements in the `aop` namespace are available to you.

## The `jdbc` Schema

The `jdbc` elements let you quickly configure an embedded database or initialize an existing data source. These elements are documented in Embedded Database Support and Initializing a DataSource, respectively.

To use the elements in the `jdbc` schema, you need to have the following preamble at the top of your Spring XML configuration file. The text in the following snippet references the correct schema so that the elements in the `jdbc` namespace are available to you:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jdbc="http://www.springframework.org/schema/jdbc" (1)
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/jdbc
        https://www.springframework.org/schema/jdbc/spring-jdbc.xsd"> (2)
```

Spring Web MVC is the original web framework built on the Servlet API and has been included in the Spring Framework from the very beginning. The formal name, "Spring Web MVC," comes from the name of its source module ( `spring-webmvc` ), but it is more commonly known as "Spring MVC".

Parallel to Spring Web MVC, Spring Framework 5.0 introduced a reactive-stack web framework whose name, "Spring WebFlux," is also based on its source module ( `spring-webflux` ). This chapter covers Spring Web MVC. The next chapter covers Spring WebFlux. For baseline information and compatibility with Servlet container and Jakarta EE version ranges, see the Spring Framework Wiki.

Spring MVC, as many other web frameworks, is designed around the front controller pattern where a central `Servlet`, the `DispatcherServlet`, provides a shared algorithm for request processing, while actual work is performed by configurable delegate components. This model is flexible and supports diverse workflows.

The `DispatcherServlet`, as any `Servlet`, needs to be declared and mapped according to the Servlet specification by using Java configuration or in `web.xml`. In turn, the `DispatcherServlet` uses Spring configuration to discover the delegate components it needs for request mapping, view resolution, exception handling, and more.
The following example of the Java configuration registers and initializes the `DispatcherServlet`, which is auto-detected by the Servlet container (see Servlet Config):

```
public class MyWebApplicationInitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext servletContext) {

        // Load Spring web application configuration
        AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
        context.register(AppConfig.class);

        // Create and register the DispatcherServlet
        DispatcherServlet servlet = new DispatcherServlet(context);
        ServletRegistration.Dynamic registration = servletContext.addServlet("app", servlet);
        registration.setLoadOnStartup(1);
        registration.addMapping("/app/*");
    }
}
```

```
class MyWebApplicationInitializer : WebApplicationInitializer {

    override fun onStartup(servletContext: ServletContext) {

        // Load Spring web application configuration
        val context = AnnotationConfigWebApplicationContext()
        context.register(AppConfig::class.java)

        // Create and register the DispatcherServlet
        val servlet = DispatcherServlet(context)
        val registration = servletContext.addServlet("app", servlet)
        registration.setLoadOnStartup(1)
        registration.addMapping("/app/*")
    }
}
```

In addition to using the ServletContext API directly, you can also extend `AbstractAnnotationConfigDispatcherServletInitializer` and override specific methods (see the example under Context Hierarchy).

The following example of `web.xml` configuration registers and initializes the `DispatcherServlet`:

```
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/app-context.xml</param-value>
</context-param>

<servlet>
    <servlet-name>app</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value></param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>app</servlet-name>
    <url-pattern>/app/*</url-pattern>
</servlet-mapping>
```

Spring Boot follows a different initialization sequence. Rather than hooking into the lifecycle of the Servlet container, Spring Boot uses Spring configuration to bootstrap itself and the embedded Servlet container.

`DispatcherServlet` expects a `WebApplicationContext` (an extension of a plain `ApplicationContext`) for its own configuration. `WebApplicationContext` has a link to the `ServletContext` and the `Servlet` with which it is associated. It is also bound to the `ServletContext` such that applications can use static methods on `RequestContextUtils` to look up the `WebApplicationContext` if they need access to it.

For many applications, having a single `WebApplicationContext` is simple and suffices. It is also possible to have a context hierarchy where one root `WebApplicationContext` is shared across multiple `DispatcherServlet` (or other `Servlet`) instances, each with its own child `WebApplicationContext` configuration. See Additional Capabilities of the `ApplicationContext` for more on the context hierarchy feature.

The root `WebApplicationContext` typically contains infrastructure beans, such as data repositories and business services, that need to be shared across multiple `Servlet` instances. Those beans are effectively inherited and can be overridden (that is, re-declared) in the Servlet-specific child `WebApplicationContext`, which typically contains beans local to the given `Servlet`.
The following example configures a `WebApplicationContext` hierarchy:

```
public class MyWebAppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        return new Class<?>[] { RootConfig.class };
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        return new Class<?>[] { App1Config.class };
    }

    @Override
    protected String[] getServletMappings() {
        return new String[] { "/app1/*" };
    }
}
```

```
class MyWebAppInitializer : AbstractAnnotationConfigDispatcherServletInitializer() {

    override fun getRootConfigClasses(): Array<Class<*>> {
        return arrayOf(RootConfig::class.java)
    }

    override fun getServletConfigClasses(): Array<Class<*>> {
        return arrayOf(App1Config::class.java)
    }

    override fun getServletMappings(): Array<String> {
        return arrayOf("/app1/*")
    }
}
```

If an application context hierarchy is not required, applications can return all configuration through `getRootConfigClasses()` and `null` from `getServletConfigClasses()`.

The following example shows the `web.xml` equivalent:

```
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/root-context.xml</param-value>
</context-param>

<servlet>
    <servlet-name>app1</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/app1-context.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>app1</servlet-name>
    <url-pattern>/app1/*</url-pattern>
</servlet-mapping>
```

If an application context hierarchy is not required, applications may configure a “root” context only and leave the `contextConfigLocation` Servlet parameter empty.

The `DispatcherServlet` delegates to special beans to process requests and render the appropriate responses. By “special beans” we mean Spring-managed `Object` instances that implement framework contracts. Those usually come with built-in contracts, but you can customize their properties and extend or replace them. The following table lists the special beans detected by the `DispatcherServlet`:

| Bean type | Explanation |
| --- | --- |
| `HandlerMapping` | Maps a request to a handler along with a list of interceptors for pre- and post-processing. |
| `HandlerAdapter` | Helps the `DispatcherServlet` invoke a handler mapped to a request, regardless of how the handler is actually invoked. |
| `HandlerExceptionResolver` | Strategy to resolve exceptions, possibly mapping them to handlers, to HTML error views, or to other targets. |
| `ViewResolver` | Resolves logical `String`-based view names returned from a handler to an actual `View` with which to render to the response. |
| `LocaleResolver`, `LocaleContextResolver` | Resolves the `Locale` a client is using and possibly their time zone, in order to be able to offer internationalized views. |
| `ThemeResolver` | Resolves themes your web application can use (for example, to offer personalized layouts). |
| `MultipartResolver` | Abstraction for parsing a multi-part request (for example, a browser form file upload) with the help of some multipart parsing library. |
| `FlashMapManager` | Stores and retrieves the “input” and the “output” `FlashMap`, which can be used to pass attributes from one request to another, usually across a redirect. |

Applications can declare the infrastructure beans listed in Special Bean Types that are required to process requests. The `DispatcherServlet` checks the `WebApplicationContext` for each special bean. If there are no matching bean types, it falls back on the default types listed in `DispatcherServlet.properties`.

In most cases, the MVC Config is the best starting point. It declares the required beans in either Java or XML and provides a higher-level configuration callback API to customize it. Spring Boot relies on the MVC Java configuration to configure Spring MVC and provides many extra convenient options.

In a Servlet environment, you have the option of configuring the Servlet container programmatically as an alternative or in combination with a `web.xml` file.
The following example registers a `DispatcherServlet`:

```
import org.springframework.web.WebApplicationInitializer;

public class MyWebApplicationInitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext container) {
        XmlWebApplicationContext appContext = new XmlWebApplicationContext();
        appContext.setConfigLocation("/WEB-INF/spring/dispatcher-config.xml");

        ServletRegistration.Dynamic registration = container.addServlet("dispatcher", new DispatcherServlet(appContext));
        registration.setLoadOnStartup(1);
        registration.addMapping("/");
    }
}
```

```
import org.springframework.web.WebApplicationInitializer

class MyWebApplicationInitializer : WebApplicationInitializer {

    override fun onStartup(container: ServletContext) {
        val appContext = XmlWebApplicationContext()
        appContext.setConfigLocation("/WEB-INF/spring/dispatcher-config.xml")

        val registration = container.addServlet("dispatcher", DispatcherServlet(appContext))
        registration.setLoadOnStartup(1)
        registration.addMapping("/")
    }
}
```

`WebApplicationInitializer` is an interface provided by Spring MVC that ensures your implementation is detected and automatically used to initialize any Servlet 3 container. An abstract base class implementation of `WebApplicationInitializer` named `AbstractDispatcherServletInitializer` makes it even easier to register the `DispatcherServlet` by overriding methods to specify the servlet mapping and the location of the `DispatcherServlet` configuration.

This is recommended for applications that use Java-based Spring configuration, as the following example shows:

```
public class MyWebAppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        return null;
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        return new Class<?>[] { MyWebConfig.class };
    }

    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }
}
```

```
class MyWebAppInitializer : AbstractAnnotationConfigDispatcherServletInitializer() {

    override fun getRootConfigClasses(): Array<Class<*>>? {
        return null
    }

    override fun getServletConfigClasses(): Array<Class<*>>? {
        return arrayOf(MyWebConfig::class.java)
    }

    override fun getServletMappings(): Array<String> {
        return arrayOf("/")
    }
}
```

If you use XML-based Spring configuration, you should extend directly from `AbstractDispatcherServletInitializer`, as the following example shows:

```
public class MyWebAppInitializer extends AbstractDispatcherServletInitializer {

    @Override
    protected WebApplicationContext createRootApplicationContext() {
        return null;
    }

    @Override
    protected WebApplicationContext createServletApplicationContext() {
        XmlWebApplicationContext cxt = new XmlWebApplicationContext();
        cxt.setConfigLocation("/WEB-INF/spring/dispatcher-config.xml");
        return cxt;
    }

    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }
}
```

```
class MyWebAppInitializer : AbstractDispatcherServletInitializer() {

    override fun createRootApplicationContext(): WebApplicationContext? {
        return null
    }

    override fun createServletApplicationContext(): WebApplicationContext {
        return XmlWebApplicationContext().apply {
            setConfigLocation("/WEB-INF/spring/dispatcher-config.xml")
        }
    }

    override fun getServletMappings(): Array<String> {
        return arrayOf("/")
    }
}
```

`AbstractDispatcherServletInitializer` also provides a convenient way to add `Filter` instances and have them be automatically mapped to the `DispatcherServlet`, as the following example shows:

```
public class MyWebAppInitializer extends AbstractDispatcherServletInitializer {

    // ...

    @Override
    protected Filter[] getServletFilters() {
        return new Filter[] {
                new HiddenHttpMethodFilter(), new CharacterEncodingFilter() };
    }
}
```

```
class MyWebAppInitializer : AbstractDispatcherServletInitializer() {

    // ...

    override fun getServletFilters(): Array<Filter> {
        return arrayOf(HiddenHttpMethodFilter(), CharacterEncodingFilter())
    }
}
```

Each filter is added with a default name based on its concrete type and automatically mapped to the `DispatcherServlet`. The `isAsyncSupported` protected method of `AbstractDispatcherServletInitializer` provides a single place to enable async support on the `DispatcherServlet` and all filters mapped to it. By default, this flag is set to `true`. Finally, if you need to further customize the `DispatcherServlet` itself, you can override the `createDispatcherServlet` method.

The `DispatcherServlet` processes requests as follows:

* The `WebApplicationContext` is searched for and bound in the request as an attribute that the controller and other elements in the process can use. It is bound by default under the `DispatcherServlet.WEB_APPLICATION_CONTEXT_ATTRIBUTE` key.
* The locale resolver is bound to the request to let elements in the process resolve the locale to use when processing the request (rendering the view, preparing data, and so on). If you do not need locale resolving, you do not need the locale resolver.
* The theme resolver is bound to the request to let elements such as views determine which theme to use. If you do not use themes, you can ignore it.
* If you specify a multipart file resolver, the request is inspected for multiparts. If multiparts are found, the request is wrapped in a `MultipartHttpServletRequest` for further processing by other elements in the process. See Multipart Resolver for further information about multipart handling.
* An appropriate handler is searched for. If a handler is found, the execution chain associated with the handler (preprocessors, postprocessors, and controllers) is run to prepare a model for rendering. Alternatively, for annotated controllers, the response can be rendered (within the `HandlerAdapter`) instead of returning a view.
* If a model is returned, the view is rendered. If no model is returned (maybe due to a preprocessor or postprocessor intercepting the request, perhaps for security reasons), no view is rendered, because the request could already have been fulfilled.

The `HandlerExceptionResolver` beans declared in the `WebApplicationContext` are used to resolve exceptions thrown during request processing. Those exception resolvers allow customizing the logic to address exceptions. See Exceptions for more details.

For HTTP caching support, handlers can use the `checkNotModified` methods of `WebRequest`, along with further options for annotated controllers as described in HTTP Caching for Controllers.

You can customize individual `DispatcherServlet` instances by adding Servlet initialization parameters ( `init-param` elements) to the Servlet declaration in the `web.xml` file. The following table lists the supported parameters:

| Parameter | Explanation |
| --- | --- |
| `contextClass` | Class that implements `ConfigurableWebApplicationContext`, to be instantiated and locally configured by this Servlet. By default, `XmlWebApplicationContext` is used. |
| `contextConfigLocation` | String that is passed to the context instance (specified by `contextClass`) to indicate where contexts can be found. The string consists potentially of multiple strings (using a comma as a delimiter) to support multiple contexts. |
| `namespace` | Namespace of the `WebApplicationContext`. Defaults to `[servlet-name]-servlet`. |
| `throwExceptionIfNoHandlerFound` | Whether to throw a `NoHandlerFoundException` when no handler was found for a request. The exception can then be caught with a `HandlerExceptionResolver` (for example, by using an `@ExceptionHandler` controller method) and handled as any other. |

The Servlet API exposes the full request path as `requestURI` and further sub-divides it into `contextPath`, `servletPath`, and `pathInfo`, whose values vary depending on how a Servlet is mapped. From these inputs, Spring MVC needs to determine the lookup path to use for mapping handlers, which should exclude the `contextPath` and any `servletMapping` prefix, if applicable.

The `servletPath` and `pathInfo` are decoded, which makes them impossible to compare directly to the full `requestURI` in order to derive the lookup path, and that makes it necessary to decode the `requestURI`. However, this introduces its own issues, because the path may contain encoded reserved characters, such as `"/"` or `";"`, that can in turn alter the structure of the path after they are decoded, which can also lead to security issues. In addition, Servlet containers may normalize the `servletPath` to varying degrees, which makes it further impossible to perform `startsWith` comparisons against the `requestURI`.

This is why it is best to avoid reliance on the `servletPath`, which comes with the prefix-based `servletPath` mapping type. If the `DispatcherServlet` is mapped as the default Servlet with `"/"` or otherwise without a prefix with `"/*"` and the Servlet container is 4.0+, then Spring MVC is able to detect the Servlet mapping type and avoid use of the `servletPath` and `pathInfo` altogether. On a 3.1 Servlet container, assuming the same Servlet mapping types, the equivalent can be achieved by providing a `UrlPathHelper` with `alwaysUseFullPath=true` via Path Matching in the MVC config.
Fortunately, the default Servlet mapping `"/"` is a good choice. However, there is still an issue in that the `requestURI` needs to be decoded to make it possible to compare to controller mappings. This is again undesirable because of the potential to decode reserved characters that alter the path structure. If such characters are not expected, then you can reject them (like the Spring Security HTTP firewall), or you can configure `UrlPathHelper` with `urlDecode=false`, but controller mappings will need to match to the encoded path, which may not always work well. Furthermore, sometimes the `DispatcherServlet` needs to share the URL space with another Servlet and may need to be mapped by prefix.

The above issues are addressed when using `PathPatternParser` and parsed patterns, as an alternative to String path matching with `AntPathMatcher`. The `PathPatternParser` has been available for use in Spring MVC from version 5.3 and is enabled by default from version 6.0. Unlike `AntPathMatcher`, which needs either the lookup path decoded or the controller mapping encoded, a parsed `PathPattern` matches to a parsed representation of the path called `RequestPath`, one path segment at a time. This allows decoding and sanitizing path segment values individually without the risk of altering the structure of the path. Parsed `PathPattern` also supports the use of `servletPath` prefix mapping as long as a Servlet path mapping is used and the prefix is kept simple, i.e. it has no encoded characters. For pattern syntax details and comparison, see Pattern Comparison.

All `HandlerMapping` implementations support handler interceptors that are useful when you want to apply specific functionality to certain requests — for example, checking for a principal. Interceptors must implement `HandlerInterceptor` from the `org.springframework.web.servlet` package with three methods that should provide enough flexibility to do all kinds of pre-processing and post-processing:

* `preHandle(..)` : Before the actual handler is run
* `postHandle(..)` : After the handler is run
* `afterCompletion(..)` : After the complete request has finished

The `preHandle(..)` method returns a boolean value. You can use this method to break or continue the processing of the execution chain. When this method returns `true`, the handler execution chain continues. When it returns `false`, the `DispatcherServlet` assumes the interceptor itself has taken care of the request (and, for example, rendered an appropriate view) and does not continue executing the other interceptors and the actual handler in the execution chain.

See Interceptors in the section on MVC configuration for examples of how to configure interceptors. You can also register them directly by using setters on individual `HandlerMapping` implementations.

The `postHandle` method is less useful with `@ResponseBody` and `ResponseEntity` methods, for which the response is written and committed within the `HandlerAdapter` and before `postHandle`. That means it is too late to make any changes to the response, such as adding an extra header. For such scenarios, you can implement `ResponseBodyAdvice` and either declare it as a Controller Advice bean or configure it directly on `RequestMappingHandlerAdapter`.

If an exception occurs during request mapping or is thrown from a request handler (such as a `@Controller`), the `DispatcherServlet` delegates to a chain of `HandlerExceptionResolver` beans to resolve the exception and provide alternative handling, which is typically an error response.
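As a simple illustration of this mechanism (the handler and exception names below are hypothetical, not from the reference text), an `@ExceptionHandler` method on a controller advice participates in this resolution chain:

```
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
public class GlobalExceptionHandler {

    // Hypothetical application exception.
    public static class OrderNotFoundException extends RuntimeException {
        public OrderNotFoundException(String message) {
            super(message);
        }
    }

    // Invoked when a handler throws the exception; renders a 404 error response.
    @ExceptionHandler(OrderNotFoundException.class)
    public ResponseEntity<String> handleNotFound(OrderNotFoundException ex) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND).body(ex.getMessage());
    }
}
```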
The following table lists the available `HandlerExceptionResolver` implementations:

| `HandlerExceptionResolver` | Description |
| --- | --- |
| `SimpleMappingExceptionResolver` | A mapping between exception class names and error view names. Useful for rendering error pages in a browser application. |
| `DefaultHandlerExceptionResolver` | Resolves exceptions raised by Spring MVC and maps them to HTTP status codes. |
| `ResponseStatusExceptionResolver` | Resolves exceptions with the `@ResponseStatus` annotation and maps them to HTTP status codes based on the value in the annotation. |
| `ExceptionHandlerExceptionResolver` | Resolves exceptions by invoking an `@ExceptionHandler` method in a `@Controller` or a `@ControllerAdvice` class. |

## Chain of Resolvers

You can form an exception resolver chain by declaring multiple `HandlerExceptionResolver` beans in your Spring configuration and setting their `order` properties as needed. The higher the order property, the later the exception resolver is positioned.

The contract of `HandlerExceptionResolver` specifies that it can return:

* a `ModelAndView` that points to an error view.
* An empty `ModelAndView` if the exception was handled within the resolver.
* `null` if the exception remains unresolved, for subsequent resolvers to try, and, if the exception remains at the end, it is allowed to bubble up to the Servlet container.

The MVC Config automatically declares built-in resolvers for default Spring MVC exceptions, for `@ResponseStatus` annotated exceptions, and for support of `@ExceptionHandler` methods. You can customize that list or replace it.

## Container Error Page

If an exception remains unresolved by any `HandlerExceptionResolver` and is, therefore, left to propagate, or if the response status is set to an error status (that is, 4xx, 5xx), Servlet containers can render a default error page in HTML. To customize the default error page of the container, you can declare an error page mapping in `web.xml`. The following example shows how to do so:

```
<error-page>
    <location>/error</location>
</error-page>
```

Given the preceding example, when an exception bubbles up or the response has an error status, the Servlet container makes an ERROR dispatch within the container to the configured URL (for example, `/error` ). This is then processed by the `DispatcherServlet`, possibly mapping it to a `@Controller`, which could be implemented to return an error view name with a model or to render a JSON response, as the following example shows:

```
@RestController
public class ErrorController {

    @RequestMapping(path = "/error")
    public Map<String, Object> handle(HttpServletRequest request) {
        Map<String, Object> map = new HashMap<>();
        map.put("status", request.getAttribute("jakarta.servlet.error.status_code"));
        map.put("reason", request.getAttribute("jakarta.servlet.error.message"));
        return map;
    }
}
```

```
@RestController
class ErrorController {

    @RequestMapping(path = ["/error"])
    fun handle(request: HttpServletRequest): Map<String, Any> {
        val map = HashMap<String, Any>()
        map["status"] = request.getAttribute("jakarta.servlet.error.status_code")
        map["reason"] = request.getAttribute("jakarta.servlet.error.message")
        return map
    }
}
```

The Servlet API does not provide a way to create error page mappings in Java. You can, however, use both a `WebApplicationInitializer` and a minimal `web.xml`.

Spring MVC defines the `ViewResolver` and `View` interfaces that let you render models in a browser without tying you to a specific view technology. `ViewResolver` provides a mapping between view names and actual views. `View` addresses the preparation of data before handing over to a specific view technology.

The following table provides more details on the `ViewResolver` hierarchy:

| `ViewResolver` | Description |
| --- | --- |
| `AbstractCachingViewResolver` | Subclasses of `AbstractCachingViewResolver` cache view instances that they resolve. Caching improves performance of certain view technologies. |
| `UrlBasedViewResolver` | Simple implementation of the `ViewResolver` interface that effects the direct resolution of logical view names to URLs without an explicit mapping definition. |
| `InternalResourceViewResolver` | Convenient subclass of `UrlBasedViewResolver` that supports `InternalResourceView` (in effect, Servlets and JSPs) and subclasses such as `JstlView`. |
| `FreeMarkerViewResolver` | Convenient subclass of `UrlBasedViewResolver` that supports `FreeMarkerView` and custom subclasses of it. |
| `ContentNegotiatingViewResolver` | Implementation of the `ViewResolver` interface that resolves a view based on the request file name or `Accept` header. |
| `BeanNameViewResolver` | Implementation of the `ViewResolver` interface that interprets a view name as a bean name in the current application context. |

You can chain view resolvers by declaring more than one resolver bean and, if necessary, by setting the `order` property to specify ordering. Remember, the higher the order property, the later the view resolver is positioned in the chain.

The contract of a `ViewResolver` specifies that it can return `null` to indicate that the view could not be found. However, in the case of JSPs and `InternalResourceViewResolver`, the only way to figure out if a JSP exists is to perform a dispatch through `RequestDispatcher`. Therefore, you must always configure an `InternalResourceViewResolver` to be last in the overall order of view resolvers.

Configuring view resolution is as simple as adding `ViewResolver` beans to your Spring configuration.
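For instance, a minimal sketch in Java configuration (the `/WEB-INF/views/` location is an assumed convention, not from the reference text):

```
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.view.InternalResourceViewResolver;

@Configuration
public class ViewConfig {

    // Resolves a logical view name such as "index" to /WEB-INF/views/index.jsp.
    @Bean
    public InternalResourceViewResolver viewResolver() {
        InternalResourceViewResolver resolver = new InternalResourceViewResolver();
        resolver.setPrefix("/WEB-INF/views/");
        resolver.setSuffix(".jsp");
        return resolver;
    }
}
```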
The MVC Config provides a dedicated configuration API for View Resolvers and for adding logic-less View Controllers, which are useful for HTML template rendering without controller logic.

## Redirecting

The special `redirect:` prefix in a view name lets you perform a redirect. The `UrlBasedViewResolver` (and its subclasses) recognize this as an instruction that a redirect is needed. The rest of the view name is the redirect URL.

The net effect is the same as if the controller had returned a `RedirectView`, but now the controller itself can operate in terms of logical view names. A logical view name (such as `redirect:/myapp/some/resource` ) redirects relative to the current Servlet context, while a name such as `redirect:https://myhost.com/some/arbitrary/path` redirects to an absolute URL.

## Forwarding

You can also use a special `forward:` prefix for view names that are ultimately resolved by `UrlBasedViewResolver` and subclasses. This creates an `InternalResourceView`, which does a `RequestDispatcher.forward()`. Therefore, this prefix is not useful with `InternalResourceViewResolver` and `InternalResourceView` (for JSPs), but it can be helpful if you use another view technology but still want to force a forward of a resource to be handled by the Servlet/JSP engine. Note that you may also chain multiple view resolvers, instead.

## Content Negotiation

`ContentNegotiatingViewResolver` does not resolve views itself but rather delegates to other view resolvers and selects the view that resembles the representation requested by the client. The representation can be determined from the `Accept` header or from a query parameter (for example, `"/path?format=pdf"` ).

The `ContentNegotiatingViewResolver` selects an appropriate `View` to handle the request by comparing the request media types with the media type (also known as `Content-Type` ) supported by the `View` associated with each of its `ViewResolvers`. The first `View` in the list that has a compatible `Content-Type` returns the representation to the client. If a compatible view cannot be supplied by the `ViewResolver` chain, the list of views specified through the `DefaultViews` property is consulted. This latter option is appropriate for singleton `Views` that can render an appropriate representation of the current resource regardless of the logical view name. The `Accept` header can include wildcards (for example `text/*` ), in which case a `View` whose `Content-Type` is `text/xml` is a compatible match.

See View Resolvers under MVC Config for configuration details.

Most parts of Spring’s architecture support internationalization, as the Spring web MVC framework does. `DispatcherServlet` lets you automatically resolve messages by using the client’s locale. This is done with `LocaleResolver` objects.

When a request comes in, the `DispatcherServlet` looks for a locale resolver and, if it finds one, it tries to use it to set the locale. By using the `RequestContext.getLocale()` method, you can always retrieve the locale that was resolved by the locale resolver.

In addition to automatic locale resolution, you can also attach an interceptor to the handler mapping (see Interception for more information on handler mapping interceptors) to change the locale under specific circumstances (for example, based on a parameter in the request).

Locale resolvers and interceptors are defined in the `org.springframework.web.servlet.i18n` package and are configured in your application context in the normal way. The following selection of locale resolvers is included in Spring and is described in the sections that follow.
## Time Zone

In addition to obtaining the client’s locale, it is often useful to know its time zone. The `LocaleContextResolver` interface offers an extension to `LocaleResolver` that lets resolvers provide a richer `LocaleContext`, which may include time zone information.

When available, the user’s `TimeZone` can be obtained by using the `RequestContext.getTimeZone()` method. Time zone information is automatically used by any Date/Time `Converter` and `Formatter` objects that are registered with Spring’s `ConversionService`.

## Header Resolver

This locale resolver inspects the `accept-language` header in the request that was sent by the client (for example, a web browser). Usually, this header field contains the locale of the client’s operating system. Note that this resolver does not support time zone information.

## Cookie Resolver

This locale resolver inspects a `Cookie` that might exist on the client to see if a `Locale` or `TimeZone` is specified. If so, it uses the specified details. By using the properties of this locale resolver, you can specify the name of the cookie as well as the maximum age. The following example defines a `CookieLocaleResolver`:

```
<bean id="localeResolver" class="org.springframework.web.servlet.i18n.CookieLocaleResolver">

    <property name="cookieName" value="clientlanguage"/>

    <!-- in seconds. If set to -1, the cookie is not persisted (deleted when browser shuts down) -->
    <property name="cookieMaxAge" value="100000"/>

</bean>
```

The following table describes the properties of `CookieLocaleResolver`:

| Property | Default | Description |
| --- | --- | --- |
| `cookieName` | class name + LOCALE | The name of the cookie. |
| `cookieMaxAge` | Servlet container default | The maximum time a cookie persists on the client. If `-1` is specified, the cookie is not persisted and is available only until the client shuts down the browser. |
| `cookiePath` | / | Limits the visibility of the cookie to a certain part of your site. When `cookiePath` is specified, the cookie is visible only to that path and the paths below it. |

## Session Resolver

The `SessionLocaleResolver` lets you retrieve `Locale` and `TimeZone` from the session that might be associated with the user’s request. In contrast to `CookieLocaleResolver`, this strategy stores locally chosen locale settings in the Servlet container’s `HttpSession`. As a consequence, those settings are temporary for each session and are, therefore, lost when each session ends.

Note that there is no direct relationship with external session management mechanisms, such as the Spring Session project. The `SessionLocaleResolver` evaluates and modifies the corresponding `HttpSession` attributes against the current `HttpServletRequest`.

## Locale Interceptor

You can enable changing of locales by adding the `LocaleChangeInterceptor` to one of the `HandlerMapping` definitions. It detects a parameter in the request and changes the locale accordingly, calling the `setLocale` method on the `LocaleResolver` in the dispatcher’s application context. The next example shows that calls to all `*.view` resources that contain a parameter named `siteLanguage` now change the locale. So, for example, a request for the URL, `www.sf.net/home.view?siteLanguage=nl`, changes the site language to Dutch. The following example shows how to intercept the locale:

```
<bean id="localeChangeInterceptor"
        class="org.springframework.web.servlet.i18n.LocaleChangeInterceptor">
    <property name="paramName" value="siteLanguage"/>
</bean>

<bean id="localeResolver"
        class="org.springframework.web.servlet.i18n.CookieLocaleResolver"/>

<bean id="urlMapping"
        class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
    <property name="interceptors">
        <list>
            <ref bean="localeChangeInterceptor"/>
        </list>
    </property>
    <property name="mappings">
        <value>/**/*.view=someController</value>
    </property>
</bean>
```

You can apply Spring Web MVC framework themes to set the overall look-and-feel of your application, thereby enhancing user experience.
A theme is a collection of static resources, typically style sheets and images, that affect the visual style of the application.

As of 6.0, support for themes has been deprecated in favor of using CSS, without any special support on the server side.

## Defining a theme

To use themes in your web application, you must set up an implementation of the `org.springframework.ui.context.ThemeSource` interface. The `WebApplicationContext` interface extends `ThemeSource` but delegates its responsibilities to a dedicated implementation. By default, the delegate is an `org.springframework.ui.context.support.ResourceBundleThemeSource` implementation that loads properties files from the root of the classpath. To use a custom `ThemeSource` implementation or to configure the base name prefix of the `ResourceBundleThemeSource`, you can register a bean in the application context with the reserved name, `themeSource`. The web application context automatically detects a bean with that name and uses it.

When you use the `ResourceBundleThemeSource`, a theme is defined in a simple properties file. The properties file lists the resources that make up the theme, as the following example shows:

> styleSheet=/themes/cool/style.css
> background=/themes/cool/img/coolBg.jpg

The keys of the properties are the names that refer to the themed elements from view code. For a JSP, you typically do this using the `spring:theme` custom tag, which is very similar to the `spring:message` tag. The following JSP fragment uses the theme defined in the previous example to customize the look and feel:

```
<%@ taglib prefix="spring" uri="http://www.springframework.org/tags"%>
<html>
    <head>
        <link rel="stylesheet" href="<spring:theme code='styleSheet'/>" type="text/css"/>
    </head>
    <body style="background=<spring:theme code='background'/>">
        ...
    </body>
</html>
```

By default, the `ResourceBundleThemeSource` uses an empty base name prefix. As a result, the properties files are loaded from the root of the classpath. Thus, you would put the `cool.properties` theme definition in a directory at the root of the classpath (for example, in `/WEB-INF/classes` ). The `ResourceBundleThemeSource` uses the standard Java resource bundle loading mechanism, allowing for full internationalization of themes. For example, we could have a `/WEB-INF/classes/cool_nl.properties` file that references a special background image with Dutch text on it.

## Resolving Themes

After you define themes, as described in the preceding section, you decide which theme to use. The `DispatcherServlet` looks for a bean named `themeResolver` to find out which `ThemeResolver` implementation to use. A theme resolver works in much the same way as a `LocaleResolver`. It detects the theme to use for a particular request and can also alter the request’s theme. The following table describes the theme resolvers provided by Spring:

| Class | Description |
| --- | --- |
| `FixedThemeResolver` | Selects a fixed theme, set by using the `defaultThemeName` property. |
| `SessionThemeResolver` | The theme is maintained in the user’s HTTP session. It needs to be set only once for each session but is not persisted between sessions. |
| `CookieThemeResolver` | The selected theme is stored in a cookie on the client. |

Spring also provides a `ThemeChangeInterceptor` that allows theme changes on every request with a simple request parameter.

`MultipartResolver` from the `org.springframework.web.multipart` package is a strategy for parsing multipart requests, including file uploads. There is a container-based implementation for Servlet multipart request parsing. Note that the outdated `CommonsMultipartResolver` based on Apache Commons FileUpload is not available anymore, as of Spring Framework 6.0 with its new Servlet 5.0+ baseline.

To enable multipart handling, you need to declare a `MultipartResolver` bean in your `DispatcherServlet` Spring configuration with a name of `multipartResolver`.
The `DispatcherServlet` detects it and applies it to the incoming request. When a POST with a content type of `multipart/form-data` is received, the resolver parses the content and wraps the current `HttpServletRequest` as a `MultipartHttpServletRequest` to provide access to resolved files in addition to exposing parts as request parameters.

## Servlet Multipart Parsing

Servlet multipart parsing needs to be enabled through Servlet container configuration. To do so:

* In Java, set a `MultipartConfigElement` on the Servlet registration.
* In `web.xml`, add a `"<multipart-config>"` section to the servlet declaration.

The following example shows how to set a `MultipartConfigElement` on the Servlet registration:

```
public class AppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    // ...

    @Override
    protected void customizeRegistration(ServletRegistration.Dynamic registration) {

        // Optionally also set maxFileSize, maxRequestSize, fileSizeThreshold
        registration.setMultipartConfig(new MultipartConfigElement("/tmp"));
    }
}
```

```
class AppInitializer : AbstractAnnotationConfigDispatcherServletInitializer() {

    // ...

    override fun customizeRegistration(registration: ServletRegistration.Dynamic) {

        // Optionally also set maxFileSize, maxRequestSize, fileSizeThreshold
        registration.setMultipartConfig(MultipartConfigElement("/tmp"))
    }
}
```

Once the Servlet multipart configuration is in place, you can add a bean of type `StandardServletMultipartResolver` with a name of `multipartResolver`.

DEBUG-level logging in Spring MVC is designed to be compact, minimal, and human-friendly. It focuses on high-value bits of information that are useful over and over again versus others that are useful only when debugging a specific issue.

TRACE-level logging generally follows the same principles as DEBUG (and, for example, also should not be a fire hose) but can be used for debugging any issue. In addition, some log messages may show a different level of detail at TRACE versus DEBUG.

Good logging comes from the experience of using the logs. If you spot anything that does not meet the stated goals, please let us know.

## Sensitive Data

DEBUG and TRACE logging may log sensitive information. This is why request parameters and headers are masked by default, and their logging in full must be enabled explicitly through the `enableLoggingRequestDetails` property on `DispatcherServlet`.

The following example shows how to do so by using Java configuration:

```
public class MyInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        return ... ;
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        return ... ;
    }

    @Override
    protected String[] getServletMappings() {
        return ... ;
    }

    @Override
    protected void customizeRegistration(ServletRegistration.Dynamic registration) {
        registration.setInitParameter("enableLoggingRequestDetails", "true");
    }
}
```

```
class MyInitializer : AbstractAnnotationConfigDispatcherServletInitializer() {

    override fun getServletMappings(): Array<String> {
        return ...
    }

    override fun customizeRegistration(registration: ServletRegistration.Dynamic) {
        registration.setInitParameter("enableLoggingRequestDetails", "true")
    }
}
```

The `spring-web` module provides some useful filters:

## Form Data

Browsers can submit form data only through HTTP GET or HTTP POST, but non-browser clients can also use HTTP PUT, PATCH, and DELETE. The Servlet API requires `ServletRequest.getParameter*()` methods to support form field access only for HTTP POST.
The `spring-web` module provides `FormContentFilter` to intercept HTTP PUT, PATCH, and DELETE requests with a content type of `application/x-www-form-urlencoded`, read the form data from the body of the request, and wrap the `ServletRequest` to make the form data available through the `ServletRequest.getParameter*()` family of methods.

## Forwarded Headers

`ForwardedHeaderFilter` is a Servlet filter that modifies the request in order to a) change the host, port, and scheme based on `Forwarded` headers, and b) remove those headers to eliminate further impact. The filter relies on wrapping the request, and therefore it must be ordered ahead of other filters, such as `RequestContextFilter`, that should work with the modified and not the original request.

There are security considerations for forwarded headers, since an application cannot know if the headers were added by a proxy, as intended, or by a malicious client. This is why a proxy at the boundary of trust should be configured to remove untrusted `Forwarded` headers that come from the outside. You can also configure the `ForwardedHeaderFilter` with `removeOnly=true`, in which case it removes but does not use the headers.

In order to support asynchronous requests and error dispatches, this filter should be mapped with `DispatcherType.ASYNC` and also `DispatcherType.ERROR`. If using Spring Framework's `AbstractAnnotationConfigDispatcherServletInitializer`, all filters are automatically registered for all dispatch types. However, if registering the filter via `web.xml` or in Spring Boot via a `FilterRegistrationBean`, be sure to include `DispatcherType.ASYNC` and `DispatcherType.ERROR` in addition to `DispatcherType.REQUEST`.

## Shallow ETag

The `ShallowEtagHeaderFilter` creates a "shallow" ETag by caching the content written to the response and computing an MD5 hash from it. The next time a client sends a request, it does the same, but it also compares the computed value against the `If-None-Match` request header and, if the two are equal, returns a 304 (NOT_MODIFIED).

This strategy saves network bandwidth but not CPU, as the full response must be computed for each request. State-changing HTTP methods and other HTTP conditional request headers such as `If-Match` and `If-Unmodified-Since` are outside the scope of this filter. Other strategies at the controller level can avoid the computation and have broader support for HTTP conditional requests. See HTTP Caching.

This filter has a `writeWeakETag` parameter that configures the filter to write weak ETags similar to the following: `W/"02a2d595e6ed9a0b24f027f2b63b134d6"` (as defined in RFC 7232 Section 2.3).

In order to support asynchronous requests, this filter must be mapped with `DispatcherType.ASYNC` so that the filter can delay and successfully generate an ETag to the end of the last async dispatch. If using Spring Framework's `AbstractAnnotationConfigDispatcherServletInitializer`, all filters are automatically registered for all dispatch types. However, if registering the filter via `web.xml` or in Spring Boot via a `FilterRegistrationBean`, be sure to include `DispatcherType.ASYNC`.

## CORS

Spring MVC provides fine-grained support for CORS configuration through annotations on controllers. However, when used with Spring Security, we advise relying on the built-in `CorsFilter` that must be ordered ahead of Spring Security's chain of filters. See the sections on CORS and the CORS Filter for more details.

Spring MVC provides an annotation-based programming model where `@Controller` and `@RestController` components use annotations to express request mappings, request input, exception handling, and more. Annotated controllers have flexible method signatures and do not have to extend base classes nor implement specific interfaces.
The following example shows a controller defined by annotations:

```
@Controller
public class HelloController {

	@GetMapping("/hello")
	public String handle(Model model) {
		model.addAttribute("message", "Hello World!");
		return "index";
	}
}
```

```
@Controller
class HelloController {

	@GetMapping("/hello")
	fun handle(model: Model): String {
		model["message"] = "Hello World!"
		return "index"
	}
}
```

In the preceding example, the method accepts a `Model` and returns a view name as a `String`, but many other options exist and are explained later in this chapter.

Guides and tutorials on spring.io use the annotation-based programming model described in this section.

## URI patterns

`@RequestMapping` methods can be mapped using URL patterns. There are two alternatives:

* `PathPattern` — a pre-parsed pattern matched against the URL path also pre-parsed as `PathContainer`. Designed for web use, this solution deals effectively with encoding and path parameters, and matches efficiently.
* `AntPathMatcher` — match String patterns against a String path. This is the original solution also used in Spring configuration to select resources on the classpath, on the filesystem, and other locations. It is less efficient, and the String path input is a challenge for dealing effectively with encoding and other issues with URLs.

`PathPattern` is the recommended solution for web applications, and it is the only choice in Spring WebFlux. It was enabled for use in Spring MVC from version 5.3 and is enabled by default from version 6.0. See MVC config for customizations of path matching options.

`PathPattern` supports the same pattern syntax as `AntPathMatcher`. In addition, it also supports the capturing pattern, e.g. `{*spring}`, for matching 0 or more path segments at the end of a path. `PathPattern` also restricts the use of `**` for matching multiple path segments such that it's only allowed at the end of a pattern. This eliminates many cases of ambiguity when choosing the best matching pattern for a given request. For full pattern syntax please refer to PathPattern and AntPathMatcher.

Some example patterns:

* `"/resources/ima?e.png"` - match one character in a path segment
* `"/resources/*.png"` - match zero or more characters in a path segment
* `"/resources/**"` - match multiple path segments
* `"/projects/{project}/versions"` - match a path segment and capture it as a variable
* `"/projects/{project:[a-z]+}/versions"` - match and capture a variable with a regex

Captured URI variables can be accessed with `@PathVariable`; for example, a method mapped to `"/owners/{ownerId}/pets/{petId}"` can declare `@PathVariable Long ownerId` and `@PathVariable Long petId` parameters. You can explicitly name URI variables (for example, `@PathVariable("customId")`), but you can leave that detail out if the names are the same and your code is compiled with the `-parameters` compiler flag.

The syntax `{varName:regex}` declares a URI variable with a regular expression. For example, given the URL `"/spring-web-3.0.5.jar"`, the pattern `"/{name:[a-z-]+}-{version:\d\.\d\.\d}{ext:\.[a-z0-9]+}"` extracts the name, version, and file extension.

When multiple patterns match a URL, the best match must be selected. This is done with one of the following, depending on whether use of parsed `PathPattern` is enabled or not:

* `PathPattern.SPECIFICITY_COMPARATOR`
* `AntPathMatcher.getPatternComparator(String path)`

Both help to sort patterns with more specific ones on top. A pattern is more specific if it has a lower count of URI variables (counted as 1), single wildcards (counted as 1), and double wildcards (counted as 2). Given an equal score, the longer pattern is chosen. Given the same score and length, the pattern with more URI variables than wildcards is chosen.

The default mapping pattern (`/**`) is excluded from scoring and always sorted last.
Also, prefix patterns (such as `/public/**`) are considered less specific than other patterns that do not have double wildcards. For the full details, follow the above links to the pattern Comparators.

## Suffix Match

Starting in 5.3, by default Spring MVC no longer performs `.*` suffix pattern matching, where a controller mapped to `/person` is also implicitly mapped to `/person.*`. As a consequence, path extensions are no longer used to interpret the requested content type for the response — for example, `/person.pdf`, `/person.xml`, and so on.

Using file extensions in this way was necessary when browsers used to send `Accept` headers that were hard to interpret consistently. At present, that is no longer a necessity, and using the `Accept` header should be the preferred choice.

Over time, the use of file name extensions has proven problematic in a variety of ways. It can cause ambiguity when overlaid with the use of URI variables, path parameters, and URI encoding. Reasoning about URL-based authorization and security (see next section for more details) also becomes more difficult.

To completely disable the use of path extensions in versions prior to 5.3, set the following:

* `useSuffixPatternMatching(false)`, see PathMatchConfigurer
* `favorPathExtension(false)`, see ContentNegotiationConfigurer

Having a way to request content types other than through the `"Accept"` header can still be useful, e.g. when typing a URL in a browser. A safe alternative to path extensions is to use the query parameter strategy. If you must use file extensions, consider restricting them to a list of explicitly registered extensions through the `mediaTypes` property of ContentNegotiationConfigurer.

## Suffix Match and RFD

A reflected file download (RFD) attack is similar to XSS in that it relies on request input (for example, a query parameter or a URI variable) being reflected in the response. However, instead of inserting JavaScript into HTML, an RFD attack relies on the browser switching to perform a download and treating the response as an executable script when double-clicked later.

In Spring MVC, `@ResponseBody` and `ResponseEntity` methods are at risk, because they can render different content types, which clients can request through URL path extensions. Disabling suffix pattern matching and using path extensions for content negotiation lower the risk but are not sufficient to prevent RFD attacks.

To prevent RFD attacks, prior to rendering the response body, Spring MVC adds a `Content-Disposition:inline;filename=f.txt` header to suggest a fixed and safe download file. This is done only if the URL path contains a file extension that is neither allowed as safe nor explicitly registered for content negotiation. However, it can potentially have side effects when URLs are typed directly into a browser.

Many common path extensions are allowed as safe by default. Applications with custom `HttpMessageConverter` implementations can explicitly register file extensions for content negotiation to avoid having a `Content-Disposition` header added for those extensions. See Content Types.

See CVE-2015-5211 for additional recommendations related to RFD.

## Media Types

You can narrow request mappings based on the `consumes` and `produces` attributes, which match against the `Content-Type` and `Accept` request headers, respectively. The media type can specify a character set. Negated expressions are supported — for example, `!text/plain` means any content type other than "text/plain". You can declare a shared `produces` attribute at the class level.
Unlike most other request-mapping attributes, however, when used at the class level, a method-level `produces` attribute overrides rather than extends the class-level declaration.

`MediaType` provides constants for commonly used media types, such as `APPLICATION_JSON_VALUE` and `APPLICATION_XML_VALUE`.

## Parameters, headers

You can narrow request mappings based on request parameter conditions. You can test for the presence of a request parameter (`myParam`), for the absence of one (`!myParam`), or for a specific value (`myParam=myValue`); for example, `@GetMapping(path = "/pets/{petId}", params = "myParam=myValue")` tests for a specific value.

## HTTP HEAD, OPTIONS

`@GetMapping` (and `@RequestMapping(method=HttpMethod.GET)`) are implicitly mapped to and support HTTP HEAD transparently for request mapping. Controller methods do not need to change. An HTTP HEAD request is processed as if it were HTTP GET except that, instead of writing the body, the number of bytes is counted and the `Content-Length` header is set. A response wrapper, applied in `jakarta.servlet.http.HttpServlet`, ensures the `Content-Length` header is set to the number of bytes written (without actually writing to the response).

By default, HTTP OPTIONS is handled by setting the `Allow` response header to the list of HTTP methods listed in all `@RequestMapping` methods that have matching URL patterns. For a `@RequestMapping` without HTTP method declarations, the `Allow` header is set to `GET,HEAD,OPTIONS,POST,PUT,PATCH,DELETE`.

## Explicit Registrations

You can programmatically register handler methods, which you can use for dynamic registrations or for advanced cases, such as different instances of the same handler under different URLs. The following example registers a handler method:

```
@Configuration
class MyConfig {

	@Autowired
	fun setHandlerMapping(mapping: RequestMappingHandlerMapping, handler: UserHandler) { (1)
		val info = RequestMappingInfo.paths("/user/{id}").methods(RequestMethod.GET).build() (2)
		val method = UserHandler::class.java.getMethod("getUser", Long::class.java) (3)
		mapping.registerMapping(info, handler, method) (4)
	}
}
```

1. Inject the target handler and the handler mapping for controllers.
2. Prepare the request mapping metadata.
3. Get the handler method.
4. Add the registration.

Controller methods support a wide range of arguments. Reactive types are not supported for any arguments. JDK 8's `java.util.Optional` is supported as a method argument in combination with annotations that have a `required` attribute (for example, `@RequestParam`, `@RequestHeader`, and others) and is equivalent to `required=false`.

The following table describes the supported controller method return values. Reactive types are supported for all return values.

| Return value | Description |
| --- | --- |
| `@ResponseBody` | The return value is converted through `HttpMessageConverter` implementations and written to the response. See `@ResponseBody`. |
| `HttpEntity<B>`, `ResponseEntity<B>` | The return value specifies the full response (including HTTP headers and body) and is converted through `HttpMessageConverter` implementations and written to the response. See ResponseEntity. |
| `HttpHeaders` | For returning a response with headers and no body. |
| `@ModelAttribute` | An attribute to be added to the model, with the view name implicitly determined through a `RequestToViewNameTranslator`. Note that `@ModelAttribute` is optional. See "Other return values" at the end of this table. |
| `ModelAndView` object | The view and model attributes to use and, optionally, a response status. |
| `void` | A method with a `void` return type (or `null` return value) is considered to have fully handled the response if it also has a `ServletResponse` or `OutputStream` argument, or an `@ResponseStatus` annotation. The same is also true if the controller has made a positive `ETag` or `lastModified` timestamp check (see Controllers for details). If none of the above is true, a `void` return type can also indicate "no response body" for REST controllers or a default view name selection for HTML controllers. |
| `DeferredResult<V>` | Produce any of the preceding return values asynchronously from any thread — for example, as a result of some event or callback. See Asynchronous Requests and DeferredResult. |
| `Callable<V>` | Produce any of the above return values asynchronously in a Spring MVC-managed thread. See Asynchronous Requests and Callable. |
| `ListenableFuture<V>`, `java.util.concurrent.CompletionStage<V>`, `java.util.concurrent.CompletableFuture<V>` | Alternative to `DeferredResult`, as a convenience (for example, when an underlying service returns one of those). |
| `ResponseBodyEmitter`, `SseEmitter` | Emit a stream of objects asynchronously, to be written to the response with `HttpMessageConverter` implementations. Also supported as the body of a `ResponseEntity`. See Asynchronous Requests and HTTP Streaming. |
| `StreamingResponseBody` | Write to the response `OutputStream` asynchronously. Also supported as the body of a `ResponseEntity`. See Asynchronous Requests and HTTP Streaming. |
| Reactor and other reactive types registered via `ReactiveAdapterRegistry` | A single value type, e.g. `Mono`, is comparable to returning `DeferredResult`. A multi-value type, e.g. `Flux`, may be treated as a stream depending on the requested media type, e.g. "text/event-stream", "application/json+stream", or is otherwise collected to a List and rendered as a single value. See Asynchronous Requests and Reactive Types. |
| Other return values | A return value that does not match any of the above is treated as a model attribute, unless it is a simple type as determined by `BeanUtils#isSimpleProperty`, in which case it remains unresolved. |

## Matrix Variables

Matrix variables (URI path parameters) can appear in any path segment, with each variable separated by a semicolon and multiple values separated by commas (for example, `/cars;color=red,green;year=2012`). Multiple values can also be specified through repeated variable names (for example, `color=red;color=green;color=blue`).

If a URL is expected to contain matrix variables, the request mapping for a controller method must use a URI variable to mask that variable content and ensure the request can be matched successfully independent of matrix variable order and presence.
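As a concrete illustration, here is a minimal sketch of binding a matrix variable with `@MatrixVariable` (the `/pets` mapping and controller name are hypothetical):

```
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.MatrixVariable;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PetController {

	// GET /pets/42;q=11;r=22
	// The {petId} URI variable masks the matrix content, so petId == "42" and q == 11.
	// Requires matrix variables to be enabled (see the note below).
	@GetMapping("/pets/{petId}")
	public String findPet(@PathVariable String petId, @MatrixVariable int q) {
		return "petId=" + petId + ", q=" + q;
	}
}
```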
A matrix variable may also be defined as optional with a default value; for example, `@MatrixVariable(required=false, defaultValue="1") int q` yields `q=1` when the variable is absent.

Note that you need to enable the use of matrix variables. In the MVC Java configuration, you need to set a `UrlPathHelper` with `removeSemicolonContent=false` through Path Matching. In the MVC XML namespace, you can set `<mvc:annotation-driven enable-matrix-variables="true"/>`.

`@RequestParam`

You can use the `@RequestParam` annotation to bind Servlet request parameters (that is, query parameters or form data) to a method argument in a controller. By default, method parameters that use this annotation are required, but you can specify that a method parameter is optional by setting the `@RequestParam` annotation's `required` flag to `false` or by declaring the argument with a `java.util.Optional` wrapper.

Type conversion is automatically applied if the target method parameter type is not `String`. See Type Conversion. Declaring the argument type as an array or list allows for resolving multiple parameter values for the same parameter name.

When an `@RequestParam` annotation is declared as a `Map<String, String>` or `MultiValueMap<String, String>`, without a parameter name specified in the annotation, then the map is populated with the request parameter values for each given parameter name.

Note that use of `@RequestParam` is optional (for example, to set its attributes). By default, any argument that is a simple value type (as determined by BeanUtils#isSimpleProperty) and is not resolved by any other argument resolver is treated as if it were annotated with `@RequestParam`.

`@ModelAttribute`

You can use the `@ModelAttribute` annotation on a method argument to access an attribute from the model, or have it instantiated if not present, and bound to request data:

```
@PostMapping("/owners/{ownerId}/pets/{petId}/edit")
public String processSubmit(@ModelAttribute Pet pet) { (1)
	// method logic...
}
```

```
@PostMapping("/owners/{ownerId}/pets/{petId}/edit")
fun processSubmit(@ModelAttribute pet: Pet): String { (1)
	// method logic...
}
```

The `Pet` instance above is sourced in one of the following ways:

* Retrieved from the model where it may have been added by a @ModelAttribute method.
* Retrieved from the HTTP session if the model attribute was listed in the class-level `@SessionAttributes` annotation.
* Obtained through a `Converter` where the model attribute name matches the name of a request value such as a path variable or a request parameter (see next example).
* Instantiated using its default constructor.
* Instantiated through a "primary constructor" with arguments that match to Servlet request parameters. Argument names are determined through JavaBeans or through runtime-retained parameter names in the bytecode.

One alternative to using a @ModelAttribute method to supply it, or relying on the framework to create the model attribute, is to have a `Converter<String, T>` to provide the instance. This is applied when the model attribute name matches to the name of a request value such as a path variable or a request parameter, and there is a `Converter` from `String` to the model attribute type. In the following example, the model attribute name is `account`, which matches the URI path variable `account`, and there is a registered `Converter<String, Account>` which could load the `Account` from a data store:

```
@PutMapping("/accounts/{account}")
public String save(@ModelAttribute("account") Account account) { (1)
	// ...
}
```

```
@PutMapping("/accounts/{account}")
fun save(@ModelAttribute("account") account: Account): String { (1)
	// ...
}
```

After the model attribute instance is obtained, data binding is applied. The `WebDataBinder` class matches Servlet request parameter names (query parameters and form fields) to field names on the target `Object`.
Matching fields are populated after type conversion is applied, where necessary. For more on data binding (and validation), see Validation. For more on customizing data binding, see `DataBinder`.

Data binding can result in errors. By default, a `BindException` is raised. However, to check for such errors in the controller method, you can add a `BindingResult` argument immediately next to the `@ModelAttribute`; for example, `processSubmit(@ModelAttribute("pet") Pet pet, BindingResult result)`.

In some cases, you may want access to a model attribute without data binding. For such cases, you can inject the `Model` into the controller and access it directly or, alternatively, set `@ModelAttribute(binding=false)`, as the following example shows:

```
@ModelAttribute
public AccountForm setUpForm() {
	return new AccountForm();
}

@ModelAttribute
public Account findAccount(@PathVariable String accountId) {
	return accountRepository.findOne(accountId);
}

@PostMapping("update")
public String update(@Valid AccountForm form, BindingResult result,
		@ModelAttribute(binding=false) Account account) { (1)
	// ...
}
```

```
@ModelAttribute
fun setUpForm(): AccountForm {
	return AccountForm()
}

@ModelAttribute
fun findAccount(@PathVariable accountId: String): Account {
	return accountRepository.findOne(accountId)
}

@PostMapping("update")
fun update(@Valid form: AccountForm, result: BindingResult,
		@ModelAttribute(binding = false) account: Account): String { (1)
	// ...
}
```

You can automatically apply validation after data binding by adding the `jakarta.validation.Valid` annotation or Spring's `@Validated` annotation (see Bean Validation and Spring validation); for example, `processSubmit(@Valid @ModelAttribute("pet") Pet pet, BindingResult result)`.

## Redirect Attributes

By default, all model attributes are considered to be exposed as URI template variables in the redirect URL. Of the remaining attributes, those that are primitive types or collections or arrays of primitive types are automatically appended as query parameters.

Appending primitive type attributes as query parameters can be the desired result if a model instance was prepared specifically for the redirect. However, in annotated controllers, the model can contain additional attributes added for rendering purposes (for example, drop-down field values). To avoid the possibility of having such attributes appear in the URL, a `@RequestMapping` method can declare an argument of type `RedirectAttributes` and use it to specify the exact attributes to make available to `RedirectView`. If the method does redirect, the content of `RedirectAttributes` is used. Otherwise, the content of the model is used.

The `RequestMappingHandlerAdapter` provides a flag called `ignoreDefaultModelOnRedirect`, which you can use to indicate that the content of the default `Model` should never be used if a controller method redirects. Instead, the controller method should declare an attribute of type `RedirectAttributes` or, if it does not do so, no attributes should be passed on to `RedirectView`. Both the MVC namespace and the MVC Java configuration keep this flag set to `false`, to maintain backwards compatibility. However, for new applications, we recommend setting it to `true`.

Note that URI template variables from the present request are automatically made available when expanding a redirect URL, and you don't need to explicitly add them through `Model` or `RedirectAttributes`. The following example shows how to define a redirect:

```
@PostMapping("/files/{path}")
public String upload(...) {
	// ...
	return "redirect:files/{path}";
}
```

```
@PostMapping("/files/{path}")
fun upload(...): String {
	// ...
	return "redirect:files/{path}"
}
```

Another way of passing data to the redirect target is by using flash attributes.
Unlike other redirect attributes, flash attributes are saved in the HTTP session (and, hence, do not appear in the URL). See Flash Attributes for more information.

## Flash Attributes

Flash attributes provide a way for one request to store attributes that are intended for use in another. This is most commonly needed when redirecting — for example, the Post-Redirect-Get pattern. Flash attributes are saved temporarily before the redirect (typically in the session), to be made available to the request after the redirect, and are removed immediately.

Spring MVC has two main abstractions in support of flash attributes. `FlashMap` is used to hold flash attributes, while `FlashMapManager` is used to store, retrieve, and manage `FlashMap` instances.

Flash attribute support is always "on" and does not need to be enabled explicitly. However, if not used, it never causes HTTP session creation. On each request, there is an "input" `FlashMap` with attributes passed from a previous request (if any) and an "output" `FlashMap` with attributes to save for a subsequent request. Both `FlashMap` instances are accessible from anywhere in Spring MVC through static methods in `RequestContextUtils`.

Annotated controllers typically do not need to work with `FlashMap` directly. Instead, a `@RequestMapping` method can accept an argument of type `RedirectAttributes` and use it to add flash attributes for a redirect scenario. Flash attributes added through `RedirectAttributes` are automatically propagated to the "output" `FlashMap`. Similarly, after the redirect, attributes from the "input" `FlashMap` are automatically added to the `Model` of the controller that serves the target URL.

## Multipart

After a `MultipartResolver` has been enabled, the content of POST requests with `multipart/form-data` is parsed and accessible as regular request parameters. The following example accesses one regular form field and one uploaded file:

```
@Controller
public class FileUploadController {

	@PostMapping("/form")
	public String handleFormUpload(@RequestParam("name") String name,
			@RequestParam("file") MultipartFile file) throws IOException {

		if (!file.isEmpty()) {
			byte[] bytes = file.getBytes();
			// store the bytes somewhere
			return "redirect:uploadSuccess";
		}
		return "redirect:uploadFailure";
	}
}
```

```
@Controller
class FileUploadController {

	@PostMapping("/form")
	fun handleFormUpload(@RequestParam("name") name: String,
			@RequestParam("file") file: MultipartFile): String {

		if (!file.isEmpty) {
			val bytes = file.bytes
			// store the bytes somewhere
			return "redirect:uploadSuccess"
		}
		return "redirect:uploadFailure"
	}
}
```

Declaring the argument type as a `List<MultipartFile>` allows for resolving multiple files for the same parameter name.

When the `@RequestParam` annotation is declared as a `Map<String, MultipartFile>` or `MultiValueMap<String, MultipartFile>`, without a parameter name specified in the annotation, then the map is populated with the multipart files for each given parameter name.

With Servlet multipart parsing, you may also declare `jakarta.servlet.http.Part` instead of Spring's `MultipartFile`, as a method argument or collection value type.

You can also use multipart content as part of data binding to a command object. For example, the form field and file from the preceding example could be fields on a form object, as the following example shows:

```
class MyForm {

	private String name;

	private MultipartFile file;

	// ...
}

@Controller
public class FileUploadController {

	@PostMapping("/form")
	public String handleFormUpload(MyForm form, BindingResult errors) throws IOException {
		if (!form.getFile().isEmpty()) {
			byte[] bytes = form.getFile().getBytes();
			// store the bytes somewhere
			return "redirect:uploadSuccess";
		}
		return "redirect:uploadFailure";
	}
}
```

```
class MyForm(val name: String, val file: MultipartFile, ...)

@Controller
class FileUploadController {
	@PostMapping("/form")
	fun handleFormUpload(form: MyForm, errors: BindingResult): String {
		if (!form.file.isEmpty) {
			val bytes = form.file.bytes
			// store the bytes somewhere
			return "redirect:uploadSuccess"
		}
		return "redirect:uploadFailure"
	}
}
```

Multipart requests can also be submitted from non-browser clients in a RESTful service scenario. The following example shows a file with JSON:

> POST /someUrl
> Content-Type: multipart/mixed
>
> --edt7Tfrdusa7r3lNQc79vXuhIIMlatb7PQg7Vp
> Content-Disposition: form-data; name="meta-data"
> Content-Type: application/json; charset=UTF-8
> Content-Transfer-Encoding: 8bit
>
> {
> 	"name": "value"
> }
> --edt7Tfrdusa7r3lNQc79vXuhIIMlatb7PQg7Vp
> Content-Disposition: form-data; name="file-data"; filename="file.properties"
> Content-Type: text/xml
> Content-Transfer-Encoding: 8bit
> ... File Data ...

You can access the "meta-data" part with `@RequestParam` as a `String`, but you'll probably want it deserialized from JSON (similar to `@RequestBody`). Use the `@RequestPart` annotation to access a multipart after converting it with an HttpMessageConverter:

```
@PostMapping("/")
public String handle(@RequestPart("meta-data") MetaData metadata,
		@RequestPart("file-data") MultipartFile file) {
	// ...
}
```

```
@PostMapping("/")
fun handle(@RequestPart("meta-data") metadata: MetaData,
		@RequestPart("file-data") file: MultipartFile): String {
	// ...
}
```

You can use `@RequestPart` in combination with `jakarta.validation.Valid` or Spring's `@Validated` annotation, which causes Standard Bean Validation to be applied. Validation errors can be handled locally within the controller through an `Errors` or `BindingResult` argument:

```
@PostMapping("/")
public String handle(@Valid @RequestPart("meta-data") MetaData metadata, BindingResult result) {
	// ...
}
```

```
@PostMapping("/")
fun handle(@Valid @RequestPart("meta-data") metadata: MetaData, result: BindingResult): String {
	// ...
}
```

You can use the Message Converters option of the MVC Config to configure or customize message conversion.

`@RequestBody`

You can use the `@RequestBody` annotation to have the request body read and deserialized into an `Object` through an HttpMessageConverter. You can use `@RequestBody` in combination with `jakarta.validation.Valid` or Spring's `@Validated` annotation, with validation errors handled through a `BindingResult` argument:

```
@PostMapping("/accounts")
public void handle(@Valid @RequestBody Account account, BindingResult result) {
	// ...
}
```

`@ResponseBody`

You can use the `@ResponseBody` annotation on a method to have the return value serialized to the response body through an HttpMessageConverter. `@ResponseBody` is also supported at the class level, in which case it is inherited by all controller methods. This is the effect of `@RestController`, which is nothing more than a meta-annotation marked with `@Controller` and `@ResponseBody`.

You can use `@ResponseBody` with reactive types. See Asynchronous Requests and Reactive Types for more details. You can use the Message Converters option of the MVC Config to configure or customize message conversion. You can combine `@ResponseBody` methods with JSON serialization views. See Jackson JSON for details.

`DataBinder`

`@Controller` or `@ControllerAdvice` classes can have `@InitBinder` methods that initialize instances of `WebDataBinder`, and those, in turn, can:

* Convert String-based request values (such as request parameters, path variables, headers, cookies, and others) to the target type of controller method arguments.
* Register `java.beans.PropertyEditor` or Spring `Converter` and `Formatter` components.

In addition, you can use the MVC config to register `Converter` and `Formatter` types in a globally shared `FormattingConversionService`.

`@InitBinder` methods support many of the same arguments that `@RequestMapping` methods do, except for `@ModelAttribute` (command object) arguments. Typically, they are declared with a `WebDataBinder` argument (for registrations) and a `void` return value.
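For illustration, here is a minimal sketch of such a method, assuming a date-format registration via `CustomDateEditor` (the controller name is hypothetical):

```
import java.text.SimpleDateFormat;
import java.util.Date;

import org.springframework.beans.propertyeditors.CustomDateEditor;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.annotation.InitBinder;

@Controller
public class FormController {

	@InitBinder
	public void initBinder(WebDataBinder binder) {
		// Reject lenient parsing so a value like "2024-13-01" fails
		// instead of silently rolling over to the next year.
		SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
		dateFormat.setLenient(false);
		binder.registerCustomEditor(Date.class, new CustomDateEditor(dateFormat, false));
	}

	// ...
}
```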
Alternatively, when you use a `Formatter`-based setup through a shared `FormattingConversionService`, you can re-use the same approach and register controller-specific `Formatter` implementations; for example, `binder.addCustomFormatter(new DateFormatter("yyyy-MM-dd"))`.

## Model Design

In the context of web applications, data binding involves the binding of HTTP request parameters (that is, form data or query parameters) to properties in a model object and its nested objects. Only `public` properties following the JavaBeans naming conventions are exposed for data binding; for example, `public String getFirstName()` and `public void setFirstName(String)` methods for a `firstName` property.

The model object, and its nested object graph, is also sometimes referred to as a command object, form-backing object, or POJO (Plain Old Java Object).

By default, Spring permits binding to all public properties in the model object graph. This means you need to carefully consider what public properties the model has, since a client could target any public property path, even some that are not expected to be targeted for a given use case.

For example, given an HTTP form data endpoint, a malicious client could supply values for properties that exist in the model object graph but are not part of the HTML form presented in the browser. This could lead to data being set on the model object and any of its nested objects that is not expected to be updated.

The recommended approach is to use a dedicated model object that exposes only properties that are relevant for the form submission. For example, on a form for changing a user's email address, the model object should declare a minimum set of properties, such as in the following `ChangeEmailForm`.

```
public class ChangeEmailForm {

	private String oldEmailAddress;

	private String newEmailAddress;

	public void setOldEmailAddress(String oldEmailAddress) {
		this.oldEmailAddress = oldEmailAddress;
	}

	public String getOldEmailAddress() {
		return this.oldEmailAddress;
	}

	public void setNewEmailAddress(String newEmailAddress) {
		this.newEmailAddress = newEmailAddress;
	}

	public String getNewEmailAddress() {
		return this.newEmailAddress;
	}
}
```

If you cannot or do not want to use a dedicated model object for each data binding use case, you must limit the properties that are allowed for data binding. Ideally, you can achieve this by registering allowed field patterns via the `setAllowedFields()` method on `WebDataBinder`.

For example, to register allowed field patterns in your application, you can implement an `@InitBinder` method in a `@Controller` or `@ControllerAdvice` component, as shown below:

```
@Controller
public class ChangeEmailController {

	@InitBinder
	void initBinder(WebDataBinder binder) {
		binder.setAllowedFields("oldEmailAddress", "newEmailAddress");
	}

	// @RequestMapping methods, etc.
}
```

In addition to registering allowed patterns, it is also possible to register disallowed field patterns via the `setDisallowedFields()` method in `DataBinder` and its subclasses. Please note, however, that an "allow list" is safer than a "deny list". Consequently, `setAllowedFields()` should be favored over `setDisallowedFields()`.

Note that matching against allowed field patterns is case-sensitive; whereas, matching against disallowed field patterns is case-insensitive. In addition, a field matching a disallowed pattern will not be accepted even if it also happens to match a pattern in the allowed list.
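To illustrate the case-insensitivity point, here is a minimal sketch of a deny-list registration (the `admin*` pattern and advice class are hypothetical examples):

```
import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.InitBinder;

@ControllerAdvice
public class DataBindingAdvice {

	@InitBinder
	void initBinder(WebDataBinder binder) {
		// Matching against disallowed patterns is case-insensitive,
		// so this also blocks "ADMIN", "Admin", "adminRole", and so on.
		binder.setDisallowedFields("admin*");
	}
}
```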
`@Controller` and `@ControllerAdvice` classes can have `@ExceptionHandler` methods to handle exceptions from controller methods; for example, a method such as `@ExceptionHandler public ResponseEntity<String> handle(IOException ex)`.

The exception may match against a top-level exception being propagated (e.g. a direct `IOException` being thrown) or against a nested cause within a wrapper exception (e.g. an `IOException` wrapped inside an `IllegalStateException`). As of 5.3, this can match at arbitrary cause levels, whereas previously only an immediate cause was considered.

For matching exception types, preferably declare the target exception as a method argument, as the preceding example shows. When multiple exception methods match, a root exception match is generally preferred to a cause exception match. More specifically, the `ExceptionDepthComparator` is used to sort exceptions based on their depth from the thrown exception type.

Alternatively, the annotation declaration may narrow the exception types to match, as the following example shows:

```
@ExceptionHandler(FileSystemException::class, RemoteException::class)
fun handle(ex: IOException): ResponseEntity<String> {
	// ...
}
```

You can even use a list of specific exception types with a very generic argument signature, as the following example shows:

```
@ExceptionHandler(FileSystemException::class, RemoteException::class)
fun handle(ex: Exception): ResponseEntity<String> {
	// ...
}
```

We generally recommend that you be as specific as possible in the argument signature, reducing the potential for mismatches between root and cause exception types. Consider breaking a multi-matching method into individual `@ExceptionHandler` methods, each matching a single specific exception type through its signature.

In a multi-`@ControllerAdvice` arrangement, we recommend declaring your primary root exception mappings on a `@ControllerAdvice` prioritized with a corresponding order. While a root exception match is preferred to a cause, this is defined among the methods of a given controller or `@ControllerAdvice` class. This means a cause match on a higher-priority `@ControllerAdvice` bean is preferred to any match (for example, root) on a lower-priority `@ControllerAdvice` bean.

Last but not least, an `@ExceptionHandler` method implementation can choose to back out of dealing with a given exception instance by rethrowing it in its original form. This is useful in scenarios where you are interested only in root-level matches or in matches within a specific context that cannot be statically determined. A rethrown exception is propagated through the remaining resolution chain, as though the given `@ExceptionHandler` method had not matched in the first place.

Support for `@ExceptionHandler` methods in Spring MVC is built on the `DispatcherServlet`-level `HandlerExceptionResolver` mechanism.

`@ExceptionHandler` method signatures are flexible: they support many of the same arguments and return values as `@RequestMapping` methods.

`@ExceptionHandler`, `@InitBinder`, and `@ModelAttribute` methods apply only to the `@Controller` class, or class hierarchy, in which they are declared. If, instead, they are declared in an `@ControllerAdvice` or `@RestControllerAdvice` class, then they apply to any controller. Moreover, as of 5.3, `@ExceptionHandler` methods in `@ControllerAdvice` can be used to handle exceptions from any `@Controller` or any other handler.

`@ControllerAdvice` is meta-annotated with `@Component` and therefore can be registered as a Spring bean through component scanning.
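As an illustration of such a bean, here is a minimal sketch of a global exception-handling advice (the class name and exception choice are illustrative):

```
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
public class GlobalExceptionAdvice {

	// Handles IllegalArgumentException thrown by any @Controller in the application.
	@ExceptionHandler(IllegalArgumentException.class)
	public ResponseEntity<String> handleBadInput(IllegalArgumentException ex) {
		return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(ex.getMessage());
	}
}
```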
`@RestControllerAdvice` is meta-annotated with `@ControllerAdvice` and `@ResponseBody`, which means `@ExceptionHandler` methods will have their return value rendered via response body message conversion, rather than via HTML views.

On startup, `RequestMappingHandlerMapping` and `ExceptionHandlerExceptionResolver` detect controller advice beans and apply them at runtime. Global `@ExceptionHandler` methods, from an `@ControllerAdvice`, are applied after local ones, from the `@Controller`. By contrast, global `@ModelAttribute` and `@InitBinder` methods are applied before local ones.

The `@ControllerAdvice` annotation has attributes that let you narrow the set of controllers and handlers that they apply to. For example:

```
// Target all Controllers annotated with @RestController
@ControllerAdvice(annotations = [RestController::class])
class ExampleAdvice1

// Target all Controllers within specific packages
@ControllerAdvice("org.example.controllers")
class ExampleAdvice2

// Target all Controllers assignable to specific classes
@ControllerAdvice(assignableTypes = [ControllerInterface::class, AbstractController::class])
class ExampleAdvice3
```

Spring Web MVC includes WebMvc.fn, a lightweight functional programming model in which functions are used to route and handle requests and contracts are designed for immutability. It is an alternative to the annotation-based programming model but otherwise runs on the same DispatcherServlet.

In WebMvc.fn, an HTTP request is handled with a `HandlerFunction`: a function that takes `ServerRequest` and returns a `ServerResponse`. Both the request and the response object have immutable contracts that offer JDK 8-friendly access to the HTTP request and response. `HandlerFunction` is the equivalent of the body of a `@RequestMapping` method in the annotation-based programming model.

Incoming requests are routed to a handler function with a `RouterFunction`: a function that takes `ServerRequest` and returns an optional `HandlerFunction` (i.e. `Optional<HandlerFunction>`).

If you register the `RouterFunction` as a bean, for instance by exposing it in a `@Configuration` class, it will be auto-detected by the servlet, as explained in Running a Server.

`ServerRequest` and `ServerResponse` are immutable interfaces that offer JDK 8-friendly access to the HTTP request and response, including headers, body, method, and status code.

The following example extracts the body to a `String`:

```
String string = request.body(String.class);
```

```
val string = request.body<String>()
```

The following example extracts the body to a `List<Person>`, where `Person` objects are decoded from a serialized form, such as JSON or XML:

```
List<Person> people = request.body(new ParameterizedTypeReference<List<Person>>() {});
```

```
val people = request.body<List<Person>>()
```

The following example shows how to access parameters:

```
MultiValueMap<String, String> params = request.params();
```

```
val map = request.params()
```

The following example creates a 200 (OK) response with JSON content:

```
Person person = ...

ServerResponse.ok().contentType(MediaType.APPLICATION_JSON).body(person);
```

```
val person: Person = ...
ServerResponse.ok().contentType(MediaType.APPLICATION_JSON).body(person)
```

You can also use an asynchronous result as the body, in the form of a `CompletableFuture`, `Publisher`, or any other type supported by the `ReactiveAdapterRegistry`:

```
Mono<Person> person = webClient.get().retrieve().bodyToMono(Person.class);

ServerResponse.ok().contentType(MediaType.APPLICATION_JSON).body(person);
```

```
val person = webClient.get().retrieve().awaitBody<Person>()

ServerResponse.ok().contentType(MediaType.APPLICATION_JSON).body(person)
```

If not just the body, but also the status or headers are based on an asynchronous type, you can use the static `async` method on `ServerResponse`, which accepts `CompletableFuture<ServerResponse>`, `Publisher<ServerResponse>`, or any other asynchronous type supported by the `ReactiveAdapterRegistry`:

```
Mono<ServerResponse> asyncResponse = webClient.get().retrieve().bodyToMono(Person.class)
		.map(p -> ServerResponse.ok().header("Name", p.name()).body(p));

ServerResponse.async(asyncResponse);
```

Server-Sent Events can be provided via the static `sse` method on `ServerResponse`. The builder provided by that method allows you to send Strings, or other objects as JSON. For example:

```
public RouterFunction<ServerResponse> sse() {
	return route(GET("/sse"), request -> ServerResponse.sse(sseBuilder -> {
		// Save the sseBuilder object somewhere..
	}));
}

// In some other thread, sending a String
sseBuilder.send("Hello world");

// Customize the event by using the other methods
sseBuilder.id("42")
		.event("sse event")
		.data(person);
```

```
fun sse(): RouterFunction<ServerResponse> = router {
	GET("/sse") { request ->
		ServerResponse.sse { sseBuilder ->
			// Save the sseBuilder object somewhere..
		}
	}
}

// In some other thread, sending a String
sseBuilder.send("Hello world")

// Customize the event by using the other methods
sseBuilder.id("42")
		.event("sse event")
		.data(person)
```

The following example shows a simple "Hello World" handler function:

```
HandlerFunction<ServerResponse> helloWorld =
		request -> ServerResponse.ok().body("Hello World");
```

```
val helloWorld: (ServerRequest) -> ServerResponse =
		{ ServerResponse.ok().body("Hello World") }
```

Handler functions are often grouped in a handler class, as the following `PersonHandler` example shows:

```
public class PersonHandler {

	// ...

	public ServerResponse listPeople(ServerRequest request) { (1)
		List<Person> people = repository.allPeople();
		return ok().contentType(APPLICATION_JSON).body(people);
	}

	public ServerResponse createPerson(ServerRequest request) throws Exception { (2)
		Person person = request.body(Person.class);
		repository.savePerson(person);
		return ok().build();
	}

	public ServerResponse getPerson(ServerRequest request) { (3)
		int personId = Integer.parseInt(request.pathVariable("id"));
		Person person = repository.getPerson(personId);
		if (person != null) {
			return ok().contentType(APPLICATION_JSON).body(person);
		}
		else {
			return ServerResponse.notFound().build();
		}
	}
}
```

```
class PersonHandler(private val repository: PersonRepository) {

	fun listPeople(request: ServerRequest): ServerResponse { (1)
		val people: List<Person> = repository.allPeople()
		return ok().contentType(APPLICATION_JSON).body(people)
	}

	fun getPerson(request: ServerRequest): ServerResponse { (3)
		val personId = request.pathVariable("id").toInt()
		return repository.getPerson(personId)?.let { ok().contentType(APPLICATION_JSON).body(it) }
				?: ServerResponse.notFound().build()
	}
}
```

A handler can also apply validation before storing, as in this variant of `createPerson` (where `validate` delegates to a `Validator`):

```
public ServerResponse createPerson(ServerRequest request) {
	Person person = request.body(Person.class);
	validate(person); (2)
	repository.savePerson(person);
	return ok().build();
}
```

The following example shows a router function built with the `RouterFunctions.route()` builder and the Kotlin router DSL:

```
RouterFunction<ServerResponse> route = RouterFunctions.route()
		.GET("/hello-world", accept(MediaType.TEXT_PLAIN),
				request -> ServerResponse.ok().body("Hello World")).build();
```

```
val route = router {
	GET("/hello-world",
			accept(TEXT_PLAIN)) {
		ServerResponse.ok().body("Hello World")
	}
}
```

```
import org.springframework.http.MediaType.APPLICATION_JSON
import org.springframework.web.servlet.function.router

val otherRoute = router {
	// ...
}
```

## Running a Server

You typically run router functions in a `DispatcherServlet`-based setup through the MVC Config, which uses Spring configuration to declare the components required to process requests. The MVC Java configuration declares the following infrastructure components to support functional endpoints:

* `RouterFunctionMapping`: Detects one or more `RouterFunction<?>` beans in the Spring configuration, orders them, combines them, and routes requests to the resulting composed `RouterFunction`.
* `HandlerFunctionAdapter`: Simple adapter that lets the `DispatcherServlet` invoke a `HandlerFunction` that was mapped to a request.

The preceding components let functional endpoints fit within the `DispatcherServlet` request processing lifecycle and also (potentially) run side by side with annotated controllers, if any are declared. It is also how functional endpoints are enabled by the Spring Boot Web starter.

The following example shows an MVC Java configuration:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	@Override
	public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
		// configure message conversion...
	}
}
```

```
@Configuration
@EnableWebMvc
class WebConfig : WebMvcConfigurer {

	override fun configureMessageConverters(converters: List<HttpMessageConverter<*>>) {
		// configure message conversion...
	}
}
```

This section describes various options available in the Spring Framework to work with URIs.

## Relative Servlet Requests

You can use `ServletUriComponentsBuilder` to create URIs relative to the current request; for example, `ServletUriComponentsBuilder.fromRequest(request)` re-uses the scheme, host, port, path, and query string of the current request.

You can create URIs relative to the context path, as the following example shows:

```
URI uri = ServletUriComponentsBuilder.fromContextPath(request)
		.path("/accounts")
		.build()
		.toUri();
```

```
val uri = ServletUriComponentsBuilder.fromContextPath(request)
		.path("/accounts")
		.build()
		.toUri()
```

You can create URIs relative to a Servlet (for example, `/main/*`), as the following example shows:

```
URI uri = ServletUriComponentsBuilder.fromServletMapping(request)
		.path("/accounts")
		.build()
		.toUri();
```

```
val uri = ServletUriComponentsBuilder.fromServletMapping(request)
		.path("/accounts")
		.build()
		.toUri()
```

## Links to Controllers

Spring MVC provides a mechanism to prepare links to controller methods. For example, the following MVC controller allows for link creation:

```
@Controller
@RequestMapping("/hotels/{hotel}")
public class BookingController {

	@GetMapping("/bookings/{booking}")
	public ModelAndView getBooking(@PathVariable Long booking) {
		// ...
	}
}
```

```
@Controller
@RequestMapping("/hotels/{hotel}")
class BookingController {

	@GetMapping("/bookings/{booking}")
	fun getBooking(@PathVariable booking: Long): ModelAndView {
		// ...
	}
}
```

You can prepare a link by referring to the method by name, as the following example shows:

```
UriComponents uriComponents = MvcUriComponentsBuilder
		.fromMethodName(BookingController.class, "getBooking", 21).buildAndExpand(42);
```

```
val uriComponents = MvcUriComponentsBuilder
		.fromMethodName(BookingController::class.java, "getBooking", 21).buildAndExpand(42)
```

In the preceding example, we provide actual method argument values (in this case, the long value: `21`) to be used as a path variable and inserted into the URL. Furthermore, we provide the value, `42`, to fill in any remaining URI variables, such as the `hotel` variable inherited from the type-level request mapping. If the method had more arguments, we could supply `null` for arguments not needed for the URL. In general, only `@PathVariable` and `@RequestParam` arguments are relevant for constructing the URL.

There are additional ways to use `MvcUriComponentsBuilder`.
For example, you can use a technique akin to mock testing through proxies to avoid referring to the controller method by name, as the following example shows (the example assumes static import of `MvcUriComponentsBuilder.on`):

```
UriComponents uriComponents = MvcUriComponentsBuilder
		.fromMethodCall(on(BookingController.class).getBooking(21)).buildAndExpand(42);
```

```
val uriComponents = MvcUriComponentsBuilder
		.fromMethodCall(on(BookingController::class.java).getBooking(21)).buildAndExpand(42)
```

Controller method signatures are limited in their design when they are supposed to be usable for link creation with `fromMethodCall`. Aside from needing a proper parameter signature, there is a technical limitation on the return type (namely, generating a runtime proxy for link builder invocations), so the return type must not be `final`.

The earlier examples use static methods in `MvcUriComponentsBuilder`. Internally, they rely on `ServletUriComponentsBuilder` to prepare a base URL from the scheme, host, port, context path, and servlet path of the current request. This works well in most cases. However, sometimes, it can be insufficient. For example, you may be outside the context of a request (such as a batch process that prepares links) or perhaps you need to insert a path prefix (such as a locale prefix that was removed from the request path and needs to be re-inserted into links).

For such cases, you can use the static `fromXxx` overloaded methods that accept a `UriComponentsBuilder` to use a base URL. Alternatively, you can create an instance of `MvcUriComponentsBuilder` with a base URL and then use the instance-based `withXxx` methods. For example, the following listing uses `withMethodCall`:

```
UriComponentsBuilder base = ServletUriComponentsBuilder.fromCurrentContextPath().path("/en");
MvcUriComponentsBuilder builder = MvcUriComponentsBuilder.relativeTo(base);
builder.withMethodCall(on(BookingController.class).getBooking(21)).buildAndExpand(42);
```

```
val base = ServletUriComponentsBuilder.fromCurrentContextPath().path("/en")
val builder = MvcUriComponentsBuilder.relativeTo(base)
builder.withMethodCall(on(BookingController::class.java).getBooking(21)).buildAndExpand(42)
```

## Links in Views

In views such as Thymeleaf, FreeMarker, or JSP, you can build links to annotated controllers by referring to the implicitly or explicitly assigned name for each request mapping.

```
@RequestMapping("/people/{id}/addresses")
public class PersonAddressController {

	@RequestMapping("/{country}")
	public HttpEntity<PersonAddress> getAddress(@PathVariable String country) { ... }
}
```

```
@RequestMapping("/people/{id}/addresses")
class PersonAddressController {

	@RequestMapping("/{country}")
	fun getAddress(@PathVariable country: String): HttpEntity<PersonAddress> { ... }
}
```

Given the preceding controller, you can prepare a link from a JSP, as follows:

```
<%@ taglib uri="http://www.springframework.org/tags" prefix="s" %>
...
<a href="${s:mvcUrl('PAC#getAddress').arg(0,'US').buildAndExpand('123')}">Get Address</a>
```

The preceding example relies on the `mvcUrl` function declared in the Spring tag library (that is, META-INF/spring.tld), but it is easy to define your own function or prepare a similar one for other templating technologies.

Here is how this works. On startup, every `@RequestMapping` is assigned a default name through `HandlerMethodMappingNamingStrategy`, whose default implementation uses the capital letters of the class and the method name (for example, the `getThing` method in `ThingController` becomes "TC#getThing"). If there is a name clash, you can use `@RequestMapping(name="..")` to assign an explicit name or implement your own `HandlerMethodMappingNamingStrategy`.

Spring MVC has an extensive integration with Servlet asynchronous request processing:

* `DeferredResult` and `Callable` return values in controller methods provide basic support for a single asynchronous return value.
* Controllers can stream multiple values, including SSE and raw data.
* Controllers can use reactive clients and return reactive types for response handling.

For an overview of how this differs from Spring WebFlux, see the Async Spring MVC compared to WebFlux section below.

`DeferredResult`

Once the asynchronous request processing feature is enabled in the Servlet container, controller methods can wrap any supported controller method return value with `DeferredResult`, as the following example shows:

```
@GetMapping("/quotes")
@ResponseBody
public DeferredResult<String> quotes() {
	DeferredResult<String> deferredResult = new DeferredResult<>();
	// Save the deferredResult somewhere..
	return deferredResult;
}

// From some other thread...
deferredResult.setResult(result);
```

```
@GetMapping("/quotes")
@ResponseBody
fun quotes(): DeferredResult<String> {
	val deferredResult = DeferredResult<String>()
	// Save the deferredResult somewhere..
	return deferredResult
}

// From some other thread...
deferredResult.setResult(result)
```

The controller can produce the return value asynchronously, from a different thread — for example, in response to an external event (JMS message), a scheduled task, or other event.

`Callable`

A controller can wrap any supported return value with `java.util.concurrent.Callable`:

```
@PostMapping
public Callable<String> processUpload(final MultipartFile file) {
	return () -> "someView";
}
```

```
@PostMapping
fun processUpload(file: MultipartFile) = Callable<String> {
	// ...
	"someView"
}
```

The return value can then be obtained by running the given task through the configured `TaskExecutor`.

Here is a very concise overview of Servlet asynchronous request processing:

* A `ServletRequest` can be put in asynchronous mode by calling `request.startAsync()`. The main effect of doing so is that the Servlet (as well as any filters) can exit, but the response remains open to let processing complete later.
* The call to `request.startAsync()` returns `AsyncContext`, which you can use for further control over asynchronous processing. For example, it provides the `dispatch` method, which is similar to a forward from the Servlet API, except that it lets an application resume request processing on a Servlet container thread.
* The `ServletRequest` provides access to the current `DispatcherType`, which you can use to distinguish between processing the initial request, an asynchronous dispatch, a forward, and other dispatcher types.

`DeferredResult` processing works as follows:

* The controller returns a `DeferredResult` and saves it in some in-memory queue or list where it can be accessed.
* Spring MVC calls `request.startAsync()`.
* Meanwhile, the `DispatcherServlet` and all configured filters exit the request processing thread, but the response remains open.
* The application sets the `DeferredResult` from some thread, and Spring MVC dispatches the request back to the Servlet container.
* The `DispatcherServlet` is invoked again, and processing resumes with the asynchronously produced return value.

`Callable` processing works as follows:

* The controller returns a `Callable`.
* Spring MVC calls `request.startAsync()` and submits the `Callable` to a `TaskExecutor` for processing in a separate thread.
* Meanwhile, the `DispatcherServlet` and all filters exit the Servlet container thread, but the response remains open.
* Eventually the `Callable` produces a result, and Spring MVC dispatches the request back to the Servlet container to complete processing.
* The `DispatcherServlet` is invoked again, and processing resumes with the asynchronously produced return value from the `Callable`.

For further background and context, you can also read the blog posts that introduced asynchronous request processing support in Spring MVC 3.2.

### Exception Handling

When you use a `DeferredResult`, you can choose whether to call `setResult` or `setErrorResult` with an exception. In both cases, Spring MVC dispatches the request back to the Servlet container to complete processing. It is then treated either as if the controller method returned the given value or as if it produced the given exception. The exception then goes through the regular exception handling mechanism (for example, invoking `@ExceptionHandler` methods).

When you use `Callable`, similar processing logic occurs, the main difference being that the result is returned from the `Callable` or an exception is raised by it.

### Interception

`HandlerInterceptor` instances can be of type `AsyncHandlerInterceptor`, to receive the `afterConcurrentHandlingStarted` callback on the initial request that starts asynchronous processing (instead of `postHandle` and `afterCompletion`).

`HandlerInterceptor` implementations can also register a `CallableProcessingInterceptor` or a `DeferredResultProcessingInterceptor`, to integrate more deeply with the lifecycle of an asynchronous request (for example, to handle a timeout event). See the javadoc of `AsyncHandlerInterceptor` for more details.

`DeferredResult` provides `onTimeout(Runnable)` and `onCompletion(Runnable)` callbacks. See the javadoc of `DeferredResult` for more details. A `Callable` can be substituted with a `WebAsyncTask`, which exposes additional methods for timeout and completion callbacks (a short sketch of these callbacks appears at the end of this section).

### Async Spring MVC compared to WebFlux

The Servlet API was originally built for making a single pass through the Filter-Servlet chain. Asynchronous request processing lets applications exit the Filter-Servlet chain but leave the response open for further processing. The Spring MVC asynchronous support is built around that mechanism. When a controller returns a `DeferredResult`, the Filter-Servlet chain is exited, and the Servlet container thread is released. Later, when the `DeferredResult` is set, an `ASYNC` dispatch (to the same URL) is made, during which the controller is mapped again but, rather than invoking it, the `DeferredResult` value is used (as if the controller returned it) to resume processing.

By contrast, Spring WebFlux is neither built on the Servlet API, nor does it need such an asynchronous request processing feature, because it is asynchronous by design. Asynchronous handling is built into all framework contracts and is intrinsically supported through all stages of request processing.

From a programming model perspective, both Spring MVC and Spring WebFlux support asynchronous and Reactive Types as return values in controller methods. Spring MVC even supports streaming, including reactive back pressure. However, individual writes to the response remain blocking (and are performed on a separate thread), unlike WebFlux, which relies on non-blocking I/O and does not need an extra thread for each write.

Another fundamental difference is that Spring MVC does not support asynchronous or reactive types in controller method arguments (for example, `@RequestBody`, `@RequestPart`, and others), nor does it have any explicit support for asynchronous and reactive types as model attributes. Spring WebFlux does support all that.

Finally, from a configuration perspective, the asynchronous request processing feature must be enabled at the Servlet container level.
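To make the timeout and completion callbacks described under Interception concrete, here is a minimal sketch in the style of the earlier examples. The paths, the 5-second timeout, and the view names are illustrative, and the surrounding `@Controller` class and imports are omitted, as elsewhere in this chapter:

```
@GetMapping("/quotes-with-timeout")
@ResponseBody
public DeferredResult<String> quotesWithTimeout() {
	// 5-second timeout with a default result to use if the timeout elapses
	DeferredResult<String> deferredResult = new DeferredResult<>(5000L, "No quotes available");
	deferredResult.onTimeout(() -> {
		// for example, remove the DeferredResult from wherever it was saved
	});
	deferredResult.onCompletion(() -> {
		// invoked when processing completes for any reason (result, error, or timeout)
	});
	// Save the deferredResult somewhere..
	return deferredResult;
}

@PostMapping("/process")
public WebAsyncTask<String> process(MultipartFile file) {
	Callable<String> task = () -> "someView";
	WebAsyncTask<String> webAsyncTask = new WebAsyncTask<>(5000L, task);
	// Callable that supplies an alternative result if the task times out
	webAsyncTask.onTimeout(() -> "timeoutView");
	return webAsyncTask;
}
```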
## HTTP Streaming

You can use `DeferredResult` and `Callable` for a single asynchronous return value. What if you want to produce multiple asynchronous values and have those written to the response? This section describes how to do so.

### Objects

You can use the `ResponseBodyEmitter` return value to produce a stream of objects, where each object is serialized with an `HttpMessageConverter` and written to the response, as the following example shows:

```
@GetMapping("/events")
public ResponseBodyEmitter handle() {
	ResponseBodyEmitter emitter = new ResponseBodyEmitter();
	// Save the emitter somewhere..
	return emitter;
}
```

```
@GetMapping("/events")
fun handle() = ResponseBodyEmitter().apply {
	// Save the emitter somewhere..
}
```

You can also use `ResponseBodyEmitter` as the body in a `ResponseEntity`, letting you customize the status and headers of the response.

When an `emitter` throws an `IOException` (for example, if the remote client went away), applications are not responsible for cleaning up the connection and should not invoke `emitter.complete` or `emitter.completeWithError`. Instead, the servlet container automatically initiates an `AsyncListener` error notification, in which Spring MVC makes a `completeWithError` call. This call, in turn, performs one final `ASYNC` dispatch to the application, during which Spring MVC invokes the configured exception resolvers and completes the request.

### SSE

`SseEmitter` (a subclass of `ResponseBodyEmitter`) provides support for Server-Sent Events, where events sent from the server are formatted according to the W3C SSE specification. To produce an SSE stream from a controller, return `SseEmitter`, as the following example shows:

```
@GetMapping(path="/events", produces=MediaType.TEXT_EVENT_STREAM_VALUE)
public SseEmitter handle() {
	SseEmitter emitter = new SseEmitter();
	// Save the emitter somewhere..
	return emitter;
}
```

```
@GetMapping("/events", produces = [MediaType.TEXT_EVENT_STREAM_VALUE])
fun handle() = SseEmitter().apply {
	// Save the emitter somewhere..
}
```

While SSE is the main option for streaming into browsers, note that Internet Explorer does not support Server-Sent Events. Consider using Spring's WebSocket messaging with SockJS fallback transports (including SSE) that target a wide range of browsers.

See also the previous section for notes on exception handling.

### Raw Data

Sometimes, it is useful to bypass message conversion and stream directly to the response `OutputStream` (for example, for a file download). You can use the `StreamingResponseBody` return value type to do so, as the following example shows:

```
@GetMapping("/download")
public StreamingResponseBody handle() {
	return new StreamingResponseBody() {
		@Override
		public void writeTo(OutputStream outputStream) throws IOException {
			// write...
		}
	};
}
```

```
@GetMapping("/download")
fun handle() = StreamingResponseBody {
	// write...
}
```

You can also use `StreamingResponseBody` as the body in a `ResponseEntity` to customize the status and headers of the response.

## Reactive Types

Spring MVC supports use of reactive client libraries in a controller (also read Reactive Libraries in the WebFlux section). This includes the `WebClient` from `spring-webflux` and others, such as Spring Data reactive data repositories. In such scenarios, it is convenient to be able to return reactive types from the controller method.

Reactive return values are handled as follows:

* A single-value promise is adapted to a single asynchronous value, similar to using `DeferredResult`. Examples include `Mono` (Reactor) or `Single` (RxJava).
* A multi-value stream with a streaming media type (such as `application/x-ndjson` or `text/event-stream`) is adapted to a stream of asynchronous values, similar to using `ResponseBodyEmitter` or `SseEmitter`. Examples include `Flux` (Reactor) or `Observable` (RxJava). Applications can also return `Flux<ServerSentEvent>` or `Observable<ServerSentEvent>`.
* A multi-value stream with any other media type (such as `application/json`) is adapted to a single asynchronous value, similar to using `DeferredResult<List<?>>`.

Spring MVC supports Reactor and RxJava through the `ReactiveAdapterRegistry` from `spring-core`, which lets it adapt from multiple reactive libraries.

For streaming to the response, reactive back pressure is supported, but writes to the response are still blocking and are run on a separate thread through the configured `TaskExecutor`, to avoid blocking the upstream source (such as a `Flux` returned from `WebClient`). By default, a `SimpleAsyncTaskExecutor` is used for the blocking writes, but that is not suitable under load. If you plan to stream with a reactive type, you should use the MVC configuration to configure a task executor.

## Context Propagation

It is common to propagate context via `java.lang.ThreadLocal`. This works transparently for handling on the same thread, but requires additional work for asynchronous handling across multiple threads. The Micrometer Context Propagation library simplifies context propagation across threads, and across context mechanisms such as `ThreadLocal` values, Reactor context, GraphQL Java context, and others.

If Micrometer Context Propagation is present on the classpath, when a controller method returns a reactive type such as `Flux` or `Mono`, all `ThreadLocal` values, for which there is a registered `io.micrometer.ThreadLocalAccessor`, are written to the Reactor `Context` as key-value pairs, using the key assigned by the `ThreadLocalAccessor`.

For other asynchronous handling scenarios, you can use the Context Propagation library directly. For example:

```
// Capture ThreadLocal values from the main thread ...
ContextSnapshot snapshot = ContextSnapshot.captureAll();

// On a different thread: restore ThreadLocal values
try (ContextSnapshot.Scope scope = snapshot.setThreadLocals()) {
	// ...
}
```

For more details, see the documentation of the Micrometer Context Propagation library.

## Disconnects

The Servlet API does not provide any notification when a remote client goes away. Therefore, while streaming to the response, whether through `SseEmitter` or reactive types, it is important to send data periodically, since the write fails if the client has disconnected. The send could take the form of an empty (comment-only) SSE event or any other data that the other side would have to interpret as a heartbeat and ignore. Alternatively, consider using web messaging solutions (such as STOMP over WebSocket or WebSocket with SockJS) that have a built-in heartbeat mechanism.

## Configuration

The asynchronous request processing feature must be enabled at the Servlet container level. The MVC configuration also exposes several options for asynchronous requests.

### Servlet Container

Filter and Servlet declarations have an `asyncSupported` flag that needs to be set to `true` to enable asynchronous request processing. In addition, Filter mappings should be declared to handle the `ASYNC` `jakarta.servlet.DispatcherType`.

In Java configuration, when you use `AbstractAnnotationConfigDispatcherServletInitializer` to initialize the Servlet container, this is done automatically (see the sketch below).

In `web.xml` configuration, you can add `<async-supported>true</async-supported>` to the `DispatcherServlet` and to `Filter` declarations and add `<dispatcher>ASYNC</dispatcher>` to filter mappings.
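For reference, the following is a minimal sketch of such an initializer (the `WebConfig` class name is an illustrative `@EnableWebMvc` configuration class). `AbstractAnnotationConfigDispatcherServletInitializer` registers the `DispatcherServlet`, and any `Filter` instances returned from `getServletFilters()`, with async support enabled by default:

```
public class MyWebAppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

	@Override
	protected Class<?>[] getRootConfigClasses() {
		return null; // no root context in this minimal setup
	}

	@Override
	protected Class<?>[] getServletConfigClasses() {
		return new Class<?>[] { WebConfig.class }; // illustrative configuration class
	}

	@Override
	protected String[] getServletMappings() {
		return new String[] { "/" };
	}

	// isAsyncSupported() returns true by default; override it only to disable async support
}
```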
### Spring MVC

The MVC configuration exposes the following options related to asynchronous request processing:

* Java configuration: Use the `configureAsyncSupport` callback on `WebMvcConfigurer`.
* XML namespace: Use the `<async-support>` element under `<mvc:annotation-driven>`.

You can configure the following:

* The default timeout value for async requests, which, if not set, depends on the underlying Servlet container.
* The `AsyncTaskExecutor` to use for blocking writes when streaming with Reactive Types and for executing `Callable` instances returned from controller methods. We highly recommend configuring this property if you stream with reactive types or have controller methods that return `Callable`, since, by default, it is a `SimpleAsyncTaskExecutor`.
* `DeferredResultProcessingInterceptor` implementations and `CallableProcessingInterceptor` implementations.

Note that you can also set the default timeout value on a `DeferredResult`, a `ResponseBodyEmitter`, and an `SseEmitter`. For a `Callable`, you can use `WebAsyncTask` to provide a timeout value.

The use of view technologies in Spring MVC is pluggable. Whether you decide to use Thymeleaf, Groovy Markup Templates, JSPs, or other technologies is primarily a matter of a configuration change. This chapter covers view technologies integrated with Spring MVC. We assume you are already familiar with View Resolution.

The views of a Spring MVC application live within the internal trust boundaries of that application. Views have access to all the beans of your application context. As such, it is not recommended to use Spring MVC's template support in applications where the templates are editable by external sources, since this can have security implications.

## Thymeleaf

Thymeleaf is a modern server-side Java template engine that emphasizes natural HTML templates that can be previewed in a browser by double-clicking, which is very helpful for independent work on UI templates (for example, by a designer) without the need for a running server. If you want to replace JSPs, Thymeleaf offers one of the most extensive sets of features to make such a transition easier. Thymeleaf is actively developed and maintained. For a more complete introduction, see the Thymeleaf project home page.

The Thymeleaf integration with Spring MVC is managed by the Thymeleaf project. The configuration involves a few bean declarations, such as `ServletContextTemplateResolver`, `SpringTemplateEngine`, and `ThymeleafViewResolver`. See Thymeleaf+Spring for more details.

## FreeMarker

Apache FreeMarker is a templating engine for generating any kind of text output, from HTML to email. The Spring Framework has built-in integration for using Spring MVC with FreeMarker templates.

## View Configuration

The following example shows how to configure FreeMarker as a view technology:

```
<mvc:view-resolvers>
	<mvc:freemarker/>
</mvc:view-resolvers>

<!-- Configure FreeMarker...
-->
<mvc:freemarker-configurer>
	<mvc:template-loader-path location="/WEB-INF/freemarker"/>
</mvc:freemarker-configurer>
```

Alternatively, you can also declare the `FreeMarkerConfigurer` bean for full control over all properties, as the following example shows:

```
<bean id="freemarkerConfig" class="org.springframework.web.servlet.view.freemarker.FreeMarkerConfigurer">
	<property name="templateLoaderPath" value="/WEB-INF/freemarker/"/>
</bean>
```

Given the preceding configuration, if your controller returns the view name `welcome`, the resolver looks for the following template:

```
/WEB-INF/freemarker/welcome.ftl
```

## FreeMarker Configuration

You can pass FreeMarker 'Settings' and 'SharedVariables' directly to the FreeMarker `Configuration` object by setting the appropriate bean properties on the `FreeMarkerConfigurer` bean, as the following example shows:

```
<bean id="freemarkerConfig" class="org.springframework.web.servlet.view.freemarker.FreeMarkerConfigurer">
	<property name="templateLoaderPath" value="/WEB-INF/freemarker/"/>
	<property name="freemarkerVariables">
		<map>
			<entry key="xml_escape" value-ref="fmXmlEscape"/>
		</map>
	</property>
</bean>

<bean id="fmXmlEscape" class="freemarker.template.utility.XmlEscape"/>
```

### The Bind Macros

A standard set of macros is maintained within the `spring-webmvc.jar` file. The macro definitions live in the `spring.ftl` file, in the `org.springframework.web.servlet.view.freemarker` package.

### Simple Binding

In your HTML forms based on FreeMarker templates that act as a form view for a Spring MVC controller, you can use code similar to the next example to bind to field values and display error messages for each input field in similar fashion to the JSP equivalent. The following example shows a `personForm` view:

```
<!-- FreeMarker macros have to be imported into a namespace.
	We strongly recommend sticking to 'spring'. -->
<#import "/spring.ftl" as spring/>
<html>
	...
	<form action="" method="POST">
		Name:
		<@spring.bind "personForm.name"/>
		<input type="text"
			name="${spring.status.expression}"
			value="${spring.status.value?html}"/><br />
		<#list spring.status.errorMessages as error> <b>${error}</b> <br /> </#list>
		<br />
		...
		<input type="submit" value="submit"/>
	</form>
	...
</html>
```

`<@spring.bind>` requires a 'path' argument, which consists of the name of your command object (it is 'command', unless you changed it in your controller configuration) followed by a period and the name of the field on the command object to which you wish to bind. You can also use nested fields, such as `command.address.street`. The `bind` macro assumes the default HTML escaping behavior specified by the `ServletContext` parameter `defaultHtmlEscape` in `web.xml`.

An alternative form of the macro called `<@spring.bindEscaped>` takes a second argument that explicitly specifies whether HTML escaping should be used in the status error messages or values. You can set it to `true` or `false` as required. Additional form handling macros simplify the use of HTML escaping, and you should use these macros wherever possible. They are explained in the next section.

Additional convenience macros for FreeMarker simplify both binding and form generation (including validation error display). It is never necessary to use these macros to generate form input fields, and you can mix and match them with simple HTML or direct calls to the Spring bind macros that we highlighted previously.

The full set of macros and their FTL definitions can be found in `spring.ftl`. The most commonly used macros, such as `formInput`, `formTextarea`, `formSingleSelect`, `formMultiSelect`, `formRadioButtons`, `formCheckboxes`, and `showErrors`, are covered in the sections that follow.

The parameters to any of these macros have consistent meanings:

* `path` : The name of the field to bind to (for example, "command.name")
* `options` : A `Map` of all the available values that can be selected from in the input field. The keys to the map represent the values that are POSTed back from the form and bound to the command object.
  The objects stored in the map against the keys are the labels displayed on the form to the user and may be different from the corresponding values posted back by the form. Usually, such a map is supplied as reference data by the controller. You can use any `Map` implementation, depending on required behavior. For strictly sorted maps, you can use a `SortedMap` (such as a `TreeMap`) with a suitable `Comparator` and, for arbitrary Maps that should return values in insertion order, use a `LinkedHashMap` or a `LinkedMap` from `commons-collections`.
* `separator` : Where multiple options are available as discrete elements (radio buttons or checkboxes), the sequence of characters used to separate each one in the list (such as `<br>`).
* `attributes` : An additional string of arbitrary tags or text to be included within the HTML tag itself. This string is echoed literally by the macro. For example, in a `textarea` field, you may supply attributes (such as 'rows="5" cols="60"'), or you could pass style information such as 'style="border:1px solid silver"'.
* `classOrStyle` : For the `showErrors` macro, the name of the CSS class that the `span` element that wraps each error uses. If no information is supplied (or the value is empty), the errors are wrapped in `<b></b>` tags.

The following sections outline examples of the macros.

# Input Fields

The `formInput` macro takes the `path` parameter ( `command.name` ) and an additional `attributes` parameter (which is empty in the upcoming example). The macro, along with all other form generation macros, performs an implicit Spring bind on the path parameter. The binding remains valid until a new bind occurs, so the `showErrors` macro does not need to pass the path parameter again — it operates on the field for which a binding was last created.

The `showErrors` macro takes a separator parameter (the characters that are used to separate multiple errors on a given field) and also accepts a second parameter — this time, a class name or style attribute. Note that FreeMarker can specify default values for the attributes parameter. The following example shows how to use the `formInput` and `showErrors` macros:

```
<@spring.formInput "command.name"/>
<@spring.showErrors "<br>"/>
```

The next example shows the output of the form fragment, generating the name field and displaying a validation error after the form was submitted with no value in the field. Validation occurs through Spring's Validation framework. The generated HTML resembles the following example:

```
Name:
<input type="text" name="name" value="">
<br>
<b>required</b>
<br>
<br>
```

The `formTextarea` macro works the same way as the `formInput` macro and accepts the same parameter list. Commonly, the second parameter ( `attributes` ) is used to pass style information or `rows` and `cols` attributes for the `textarea`.

# Selection Fields

You can use four selection field macros to generate common UI value selection inputs in your HTML forms:

* `formSingleSelect`
* `formMultiSelect`
* `formRadioButtons`
* `formCheckboxes`

Each of the four macros accepts a `Map` of options that contains the value for the form field and the label that corresponds to that value. The value and the label can be the same.

The next example is for radio buttons in FTL. The form-backing object specifies a default value of 'London' for this field, so no validation is necessary. When the form is rendered, the entire list of cities to choose from is supplied as reference data in the model under the name 'cityMap'.
The following listing shows the example:

```
...
Town:
<@spring.formRadioButtons "command.address.town", cityMap, ""/><br><br>
```

The preceding listing renders a line of radio buttons, one for each value in `cityMap`, and uses a separator of `""`. No additional attributes are supplied (the last parameter to the macro is missing). The `cityMap` uses the same `String` for each key-value pair in the map. The map's keys are what the form actually submits as `POST` request parameters. The map values are the labels that the user sees. In the preceding example, given a list of three well-known cities and a default value in the form-backing object, the HTML resembles the following:

```
Town:
<input type="radio" name="address.town" value="London">London</input>
<input type="radio" name="address.town" value="Paris" checked="checked">Paris</input>
<input type="radio" name="address.town" value="New York">New York</input>
```

If your application expects to handle cities by internal codes (for example), you can create the map of codes with suitable keys, as the following example shows:

```
protected Map<String, ?> referenceData(HttpServletRequest request) throws Exception {
	Map<String, String> cityMap = new LinkedHashMap<>();
	cityMap.put("LDN", "London");
	cityMap.put("PRS", "Paris");
	cityMap.put("NYC", "New York");

	Map<String, Object> model = new HashMap<>();
	model.put("cityMap", cityMap);
	return model;
}
```

```
protected fun referenceData(request: HttpServletRequest): Map<String, *> {
	val cityMap = linkedMapOf(
			"LDN" to "London",
			"PRS" to "Paris",
			"NYC" to "New York"
	)
	return hashMapOf("cityMap" to cityMap)
}
```

The code now produces output where the radio values are the relevant codes, but the user still sees the more user-friendly city names, as follows:

```
Town:
<input type="radio" name="address.town" value="LDN">London</input>
<input type="radio" name="address.town" value="PRS" checked="checked">Paris</input>
<input type="radio" name="address.town" value="NYC">New York</input>
```

### HTML Escaping

Default usage of the form macros described earlier results in HTML elements that are HTML 4.01 compliant and that use the default value for HTML escaping defined in your `web.xml` file, as used by Spring's bind support. To make the elements be XHTML compliant or to override the default HTML escaping value, you can specify two variables in your template (or in your model, where they are visible to your templates). The advantage of specifying them in the templates is that they can be changed to different values later in the template processing to provide different behavior for different fields in your form.

To switch to XHTML compliance for your tags, specify a value of `true` for a model or context variable named `xhtmlCompliant`, as the following example shows:

```
<#-- for FreeMarker -->
<#assign xhtmlCompliant = true>
```

After processing this directive, any elements generated by the Spring macros are now XHTML compliant. In similar fashion, you can specify HTML escaping per field, as the following example shows:

```
<#-- until this point, default HTML escaping is used -->
<#assign htmlEscape = true>
<#-- next field will use HTML escaping -->
<@spring.formInput "command.name"/>
<#assign htmlEscape = false in spring>
<#-- all future fields will be bound with HTML escaping off -->
```

## Groovy Markup

The Groovy Markup Template Engine is primarily aimed at generating XML-like markup (XML, XHTML, HTML5, and others), but you can use it to generate any text-based content.
The Spring Framework has a built-in integration for using Spring MVC with Groovy Markup. The Groovy Markup Template Engine requires Groovy 2.3.1+.

The following example shows how to configure the Groovy Markup Template Engine:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	@Override
	public void configureViewResolvers(ViewResolverRegistry registry) {
		registry.groovy();
	}

	@Bean
	public GroovyMarkupConfigurer groovyMarkupConfigurer() {
		GroovyMarkupConfigurer configurer = new GroovyMarkupConfigurer();
		configurer.setResourceLoaderPath("/WEB-INF/");
		return configurer;
	}
}
```

```
@Configuration
@EnableWebMvc
class WebConfig : WebMvcConfigurer {

	override fun configureViewResolvers(registry: ViewResolverRegistry) {
		registry.groovy()
	}

	@Bean
	fun groovyMarkupConfigurer() = GroovyMarkupConfigurer().apply {
		resourceLoaderPath = "/WEB-INF/"
	}
}
```

The following example shows the same configuration in XML:

```
<mvc:view-resolvers>
	<mvc:groovy/>
</mvc:view-resolvers>

<!-- Configure the Groovy Markup Template Engine... -->
<mvc:groovy-configurer resource-loader-path="/WEB-INF/"/>
```

## Example

Unlike traditional template engines, Groovy Markup relies on a DSL that uses a builder syntax. The following example shows a sample template for an HTML page:

```
yieldUnescaped '<!DOCTYPE html>'
html(lang:'en') {
	head {
		meta('http-equiv':'"Content-Type" content="text/html; charset=utf-8"')
		title('My page')
	}
	body {
		p('This is an example of HTML contents')
	}
}
```

## Script Templates

The following example shows how to configure Mustache templates rendered with the Nashorn JavaScript engine, using XML configuration:

```
<mvc:view-resolvers>
	<mvc:script-template/>
</mvc:view-resolvers>

<mvc:script-template-configurer engine-name="nashorn" render-object="Mustache" render-function="render">
	<mvc:script location="mustache.js"/>
</mvc:script-template-configurer>
```

The controller looks no different for the Java and XML configurations, as the following example shows:

```
@Controller
public class SampleController {

	@GetMapping("/sample")
	public String test(Model model) {
		model.addAttribute("title", "Sample title");
		model.addAttribute("body", "Sample body");
		return "template";
	}
}
```

```
@Controller
class SampleController {

	@GetMapping("/sample")
	fun test(model: Model): String {
		model["title"] = "Sample title"
		model["body"] = "Sample body"
		return "template"
	}
}
```

The following example shows the Mustache template:

```
<html>
	<head>
		<title>{{title}}</title>
	</head>
	<body>
		<p>{{body}}</p>
	</body>
</html>
```

The render function is called with the template content (`String`), the view model (`Map`), and a `RenderingContext` that gives access to the application context, the locale, and the template loader.

If your templating technology requires some customization, you can provide a script that implements a custom render function. For example, Handlebars needs to compile templates before using them and requires a polyfill to emulate some browser facilities that are not available in the server-side script engine. `polyfill.js` defines only the `window` object needed by Handlebars to run properly, as follows:

```
var window = {};
```

This basic `render.js` implementation compiles the template before using it. A production-ready implementation should also cache any reused templates or pre-compiled templates. You can do so on the script side (and handle any customization you need — managing template engine configuration, for example).

## JSP and JSTL

The Spring Framework has a built-in integration for using Spring MVC with JSP and JSTL. When developing with JSPs, you typically declare an `InternalResourceViewResolver` bean. `InternalResourceViewResolver` can be used for dispatching to any Servlet resource but, in particular, for JSPs. As a best practice, we strongly encourage placing your JSP files in a directory under the `'WEB-INF'` directory so there can be no direct access by clients.
``` <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/> <property name="prefix" value="/WEB-INF/jsp/"/> <property name="suffix" value=".jsp"/> </bean> ``` ## JSPs versus JSTL When using the JSP Standard Tag Library (JSTL) you must use a special view class, the `JstlView` , as JSTL needs some preparation before things such as the I18N features can work. ## Spring’s JSP Tag Library Spring provides data binding of request parameters to command objects, as described in earlier chapters. To facilitate the development of JSP pages in combination with those data binding features, Spring provides a few tags that make things even easier. All Spring tags have HTML escaping features to enable or disable escaping of characters. The `spring.tld` tag library descriptor (TLD) is included in the `spring-webmvc.jar` . For a comprehensive reference on individual tags, browse the API reference or see the tag library description. ## Spring’s form tag library As of version 2.0, Spring provides a comprehensive set of data binding-aware tags for handling form elements when using JSP and Spring Web MVC. Each tag provides support for the set of attributes of its corresponding HTML tag counterpart, making the tags familiar and intuitive to use. The tag-generated HTML is HTML 4.01/XHTML 1.0 compliant. Unlike other form/input tag libraries, Spring’s form tag library is integrated with Spring Web MVC, giving the tags access to the command object and reference data your controller deals with. As we show in the following examples, the form tags make JSPs easier to develop, read, and maintain. We go through the form tags and look at an example of how each tag is used. We have included generated HTML snippets where certain tags require further commentary. ### Configuration The form tag library comes bundled in `spring-webmvc.jar` . The library descriptor is called `spring-form.tld` . To use the tags from this library, add the following directive to the top of your JSP page: ``` <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %> ``` where `form` is the tag name prefix you want to use for the tags from this library. ### The Form Tag This tag renders an HTML 'form' element and exposes a binding path to inner tags for binding. It puts the command object in the `PageContext` so that the command object can be accessed by inner tags. All the other tags in this library are nested tags of the `form` tag. Assume that we have a domain object called `User` . It is a JavaBean with properties such as `firstName` and `lastName` . We can use it as the form-backing object of our form controller, which returns `form.jsp` . The following example shows what `form.jsp` could look like: ``` <form:form> <table> <tr> <td>First Name:</td> <td><form:input path="firstName"/></td> </tr> <tr> <td>Last Name:</td> <td><form:input path="lastName"/></td> </tr> <tr> <td colspan="2"> <input type="submit" value="Save Changes"/> </td> </tr> </table> </form:form> ``` The `firstName` and `lastName` values are retrieved from the command object placed in the `PageContext` by the page controller. Keep reading to see more complex examples of how inner tags are used with the `form` tag. 
The following listing shows the generated HTML, which looks like a standard form:

```
<form method="POST">
	<table>
		<tr>
			<td>First Name:</td>
			<td><input name="firstName" type="text" value="Harry"/></td>
		</tr>
		<tr>
			<td>Last Name:</td>
			<td><input name="lastName" type="text" value="Potter"/></td>
		</tr>
		<tr>
			<td colspan="2">
				<input type="submit" value="Save Changes"/>
			</td>
		</tr>
	</table>
</form>
```

The preceding JSP assumes that the variable name of the form-backing object is `command`. If you have put the form-backing object into the model under another name (definitely a best practice), you can bind the form to the named variable, as the following example shows:

```
<form:form modelAttribute="user">
	<table>
		<tr>
			<td>First Name:</td>
			<td><form:input path="firstName"/></td>
		</tr>
		<tr>
			<td>Last Name:</td>
			<td><form:input path="lastName"/></td>
		</tr>
		<tr>
			<td colspan="2">
				<input type="submit" value="Save Changes"/>
			</td>
		</tr>
	</table>
</form:form>
```

### The `input` Tag

This tag renders an HTML `input` element with the bound value and `type='text'` by default. For an example of this tag, see The Form Tag. You can also use HTML5-specific types, such as `tel`, `date`, and others.

### The `checkbox` Tag

This tag renders an HTML `input` tag with the `type` set to `checkbox`.

Assume that our `User` has preferences such as newsletter subscription and a list of hobbies. The following example shows the `Preferences` class:

```
public class Preferences {

	private boolean receiveNewsletter;
	private String[] interests;
	private String favouriteWord;

	public boolean isReceiveNewsletter() {
		return receiveNewsletter;
	}

	public void setReceiveNewsletter(boolean receiveNewsletter) {
		this.receiveNewsletter = receiveNewsletter;
	}

	public String[] getInterests() {
		return interests;
	}

	public void setInterests(String[] interests) {
		this.interests = interests;
	}

	public String getFavouriteWord() {
		return favouriteWord;
	}

	public void setFavouriteWord(String favouriteWord) {
		this.favouriteWord = favouriteWord;
	}
}
```

```
class Preferences(
		var receiveNewsletter: Boolean,
		var interests: Array<String>,
		var favouriteWord: String
)
```

The corresponding `form.jsp` could then resemble the following:

```
<form:form>
	<table>
		<tr>
			<td>Subscribe to newsletter?:</td>
			<%-- Approach 1: Property is of type java.lang.Boolean --%>
			<td><form:checkbox path="preferences.receiveNewsletter"/></td>
		</tr>
		<tr>
			<td>Interests:</td>
			<%-- Approach 2: Property is of an array or of type java.util.Collection --%>
			<td>
				Quidditch: <form:checkbox path="preferences.interests" value="Quidditch"/>
				Herbology: <form:checkbox path="preferences.interests" value="Herbology"/>
				Defence Against the Dark Arts: <form:checkbox path="preferences.interests" value="Defence Against the Dark Arts"/>
			</td>
		</tr>
		<tr>
			<td>Favourite Word:</td>
			<%-- Approach 3: Property is of type java.lang.Object --%>
			<td>
				Magic: <form:checkbox path="preferences.favouriteWord" value="Magic"/>
			</td>
		</tr>
	</table>
</form:form>
```

There are three approaches to the `checkbox` tag, which should meet all your checkbox needs.

* Approach One: When the bound value is of type `java.lang.Boolean`, the `input(checkbox)` is marked as `checked` if the bound value is `true`. The `value` attribute corresponds to the resolved value of the `setValue(Object)` value property.
* Approach Two: When the bound value is of type `array` or `java.util.Collection`, the `input(checkbox)` is marked as `checked` if the configured `setValue(Object)` value is present in the bound `Collection`.
* Approach Three: For any other bound value type, the `input(checkbox)` is marked as `checked` if the configured `setValue(Object)` is equal to the bound value.

Note that, regardless of the approach, the same HTML structure is generated. The following HTML snippet defines some checkboxes:

```
<tr>
	<td>Interests:</td>
	<td>
		Quidditch: <input name="preferences.interests" type="checkbox" value="Quidditch"/>
		<input type="hidden" value="1" name="_preferences.interests"/>
		Herbology: <input name="preferences.interests" type="checkbox" value="Herbology"/>
		<input type="hidden" value="1" name="_preferences.interests"/>
		Defence Against the Dark Arts: <input name="preferences.interests" type="checkbox" value="Defence Against the Dark Arts"/>
		<input type="hidden" value="1" name="_preferences.interests"/>
	</td>
</tr>
```

You might not expect to see the additional hidden field after each checkbox. When a checkbox in an HTML page is not checked, its value is not sent to the server as part of the HTTP request parameters once the form is submitted, so we need a workaround for this quirk in HTML for Spring form data binding to work. The `checkbox` tag follows the existing Spring convention of including a hidden parameter prefixed by an underscore ( `_` ) for each checkbox. By doing this, you are effectively telling Spring that "the checkbox was visible in the form, and I want my object to which the form data binds to reflect the state of the checkbox, no matter what."

### The `checkboxes` Tag

This tag renders multiple HTML `input` tags with the `type` set to `checkbox`. This section builds on the example from the previous `checkbox` tag section. Sometimes, you prefer not to have to list all the possible hobbies in your JSP page. You would rather provide a list at runtime of the available options and pass that in to the tag. That is the purpose of the `checkboxes` tag. You can pass in an `Array`, a `List`, or a `Map` that contains the available options in the `items` property. Typically, the bound property is a collection so that it can hold multiple values selected by the user. The following example shows a JSP that uses this tag:

```
<form:form>
	<table>
		<tr>
			<td>Interests:</td>
			<td>
				<%-- Property is of an array or of type java.util.Collection --%>
				<form:checkboxes path="preferences.interests" items="${interestList}"/>
			</td>
		</tr>
	</table>
</form:form>
```

This example assumes that the `interestList` is a `List` available as a model attribute that contains strings of the values to be selected from. If you use a `Map`, the map entry key is used as the value, and the map entry's value is used as the label to be displayed. You can also use a custom object where you can provide the property names for the value by using `itemValue` and the label by using `itemLabel`.

### The `radiobutton` Tag

This tag renders an HTML `input` element with the `type` set to `radio`.

A typical usage pattern involves multiple tag instances bound to the same property but with different values, as the following example shows:

```
<tr>
	<td>Sex:</td>
	<td>
		Male: <form:radiobutton path="sex" value="M"/> <br/>
		Female: <form:radiobutton path="sex" value="F"/>
	</td>
</tr>
```

### The `radiobuttons` Tag

This tag renders multiple HTML `input` elements with the `type` set to `radio`.

As with the `checkboxes` tag, you might want to pass in the available options as a runtime variable. For this usage, you can use the `radiobuttons` tag. You pass in an `Array`, a `List`, or a `Map` that contains the available options in the `items` property.
If you use a `Map`, the map entry key is used as the value, and the map entry's value is used as the label to be displayed. You can also use a custom object where you can provide the property names for the value by using `itemValue` and the label by using `itemLabel`, as the following example shows:

```
<tr>
	<td>Sex:</td>
	<td><form:radiobuttons path="sex" items="${sexOptions}"/></td>
</tr>
```

### The `password` Tag

This tag renders an HTML `input` tag with the type set to `password` with the bound value.

```
<tr>
	<td>Password:</td>
	<td>
		<form:password path="password"/>
	</td>
</tr>
```

Note that, by default, the password value is not shown. If you do want the password value to be shown, you can set the value of the `showPassword` attribute to `true`, as the following example shows:

```
<tr>
	<td>Password:</td>
	<td>
		<form:password path="password" value="^76525bvHGq" showPassword="true"/>
	</td>
</tr>
```

### The `select` Tag

This tag renders an HTML 'select' element. It supports data binding to the selected option as well as the use of nested `option` and `options` tags.

Assume that a `User` has a list of skills. The corresponding HTML could be as follows:

```
<tr>
	<td>Skills:</td>
	<td><form:select path="skills" items="${skills}"/></td>
</tr>
```

If the `User`'s skills include Herbology, the HTML source of the 'Skills' row could be as follows:

```
<tr>
	<td>Skills:</td>
	<td>
		<select name="skills" multiple="true">
			<option value="Potions">Potions</option>
			<option value="Herbology" selected="selected">Herbology</option>
			<option value="Quidditch">Quidditch</option>
		</select>
	</td>
</tr>
```

### The `option` Tag

This tag renders an HTML `option` element. It sets `selected`, based on the bound value. The following HTML shows typical output for it:

```
<tr>
	<td>House:</td>
	<td>
		<form:select path="house">
			<form:option value="Gryffindor"/>
			<form:option value="Hufflepuff"/>
			<form:option value="Ravenclaw"/>
			<form:option value="Slytherin"/>
		</form:select>
	</td>
</tr>
```

If the `User`'s house was in Gryffindor, the HTML source of the 'House' row would be as follows:

```
<tr>
	<td>House:</td>
	<td>
		<select name="house">
			<option value="Gryffindor" selected="selected">Gryffindor</option>
			<option value="Hufflepuff">Hufflepuff</option>
			<option value="Ravenclaw">Ravenclaw</option>
			<option value="Slytherin">Slytherin</option>
		</select>
	</td>
</tr>
```

### The `options` Tag

This tag renders a list of HTML `option` elements. It sets the `selected` attribute, based on the bound value. The following HTML shows typical output for it:

```
<tr>
	<td>Country:</td>
	<td>
		<form:select path="country">
			<form:option value="-" label="--Please Select"/>
			<form:options items="${countryList}" itemValue="code" itemLabel="name"/>
		</form:select>
	</td>
</tr>
```

If the `User` lived in the UK, the HTML source of the 'Country' row would be as follows:

```
<tr>
	<td>Country:</td>
	<td>
		<select name="country">
			<option value="-">--Please Select</option>
			<option value="AT">Austria</option>
			<option value="UK" selected="selected">United Kingdom</option>
			<option value="US">United States</option>
		</select>
	</td>
</tr>
```

As the preceding example shows, the combined usage of an `option` tag with the `options` tag generates the same standard HTML but lets you explicitly specify a value in the JSP that is for display only (where it belongs), such as the default string in the example: "-- Please Select".

The `items` attribute is typically populated with a collection or array of item objects.
`itemValue` and `itemLabel` refer to bean properties of those item objects, if specified. Otherwise, the item objects themselves are turned into strings. Alternatively, you can specify a `Map` of items, in which case the map keys are interpreted as option values and the map values correspond to option labels. If `itemValue` or `itemLabel` (or both) happen to be specified as well, the item value property applies to the map key, and the item label property applies to the map value.

### The `textarea` Tag

This tag renders an HTML `textarea` element. The following HTML shows typical output for it:

```
<tr>
	<td>Notes:</td>
	<td><form:textarea path="notes" rows="3" cols="20"/></td>
	<td><form:errors path="notes"/></td>
</tr>
```

### The `hidden` Tag

This tag renders an HTML `input` tag with the `type` set to `hidden` with the bound value. To submit an unbound hidden value, use the HTML `input` tag with the `type` set to `hidden`. The following HTML shows typical output for it:

```
<form:hidden path="house"/>
```

If we choose to submit the `house` value as a hidden one, the HTML would be as follows:

```
<input name="house" type="hidden" value="Gryffindor"/>
```

### The `errors` Tag

This tag renders field errors in an HTML `span` element. It provides access to the errors created in your controller or those that were created by any validators associated with your controller.

Assume that we want to display all error messages for the `firstName` and `lastName` fields once we submit the form. We have a validator for instances of the `User` class called `UserValidator`, as the following example shows:

```
public class UserValidator implements Validator {

	public boolean supports(Class candidate) {
		return User.class.isAssignableFrom(candidate);
	}

	public void validate(Object obj, Errors errors) {
		ValidationUtils.rejectIfEmptyOrWhitespace(errors, "firstName", "required", "Field is required.");
		ValidationUtils.rejectIfEmptyOrWhitespace(errors, "lastName", "required", "Field is required.");
	}
}
```

```
class UserValidator : Validator {

	override fun supports(candidate: Class<*>): Boolean {
		return User::class.java.isAssignableFrom(candidate)
	}

	override fun validate(obj: Any, errors: Errors) {
		ValidationUtils.rejectIfEmptyOrWhitespace(errors, "firstName", "required", "Field is required.")
		ValidationUtils.rejectIfEmptyOrWhitespace(errors, "lastName", "required", "Field is required.")
	}
}
```

The `form.jsp` could be as follows:

```
<form:form>
	<table>
		<tr>
			<td>First Name:</td>
			<td><form:input path="firstName"/></td>
			<%-- Show errors for firstName field --%>
			<td><form:errors path="firstName"/></td>
		</tr>
		<tr>
			<td>Last Name:</td>
			<td><form:input path="lastName"/></td>
			<%-- Show errors for lastName field --%>
			<td><form:errors path="lastName"/></td>
		</tr>
		<tr>
			<td colspan="3">
				<input type="submit" value="Save Changes"/>
			</td>
		</tr>
	</table>
</form:form>
```

If we submit a form with empty values in the `firstName` and `lastName` fields, the HTML would be as follows:

```
<form method="POST">
	<table>
		<tr>
			<td>First Name:</td>
			<td><input name="firstName" type="text" value=""/></td>
			<%-- Associated errors to firstName field displayed --%>
			<td><span name="firstName.errors">Field is required.</span></td>
		</tr>
		<tr>
			<td>Last Name:</td>
			<td><input name="lastName" type="text" value=""/></td>
			<%-- Associated errors to lastName field displayed --%>
			<td><span name="lastName.errors">Field is required.</span></td>
		</tr>
		<tr>
			<td colspan="3">
				<input type="submit" value="Save Changes"/>
			</td>
		</tr>
	</table>
</form>
```

What if we want to display the entire list of errors for a given page? The next example shows that the `errors` tag also supports some basic wildcarding functionality.

* `path="*"` : Displays all errors.
* `path="lastName"` : Displays all errors associated with the `lastName` field.
* If `path` is omitted, only object errors are displayed.
The following example displays a list of errors at the top of the page, followed by field-specific errors next to the fields:

```
<form:form>
	<form:errors path="*" cssClass="errorBox"/>
	<table>
		<tr>
			<td>First Name:</td>
			<td><form:input path="firstName"/></td>
			<td><form:errors path="firstName"/></td>
		</tr>
		<tr>
			<td>Last Name:</td>
			<td><form:input path="lastName"/></td>
			<td><form:errors path="lastName"/></td>
		</tr>
		<tr>
			<td colspan="3">
				<input type="submit" value="Save Changes"/>
			</td>
		</tr>
	</table>
</form:form>
```

The HTML would be as follows:

```
<form method="POST">
	<span name="*.errors" class="errorBox">Field is required.<br/>Field is required.</span>
	<table>
		<tr>
			<td>First Name:</td>
			<td><input name="firstName" type="text" value=""/></td>
			<td><span name="firstName.errors">Field is required.</span></td>
		</tr>
		<tr>
			<td>Last Name:</td>
			<td><input name="lastName" type="text" value=""/></td>
			<td><span name="lastName.errors">Field is required.</span></td>
		</tr>
		<tr>
			<td colspan="3">
				<input type="submit" value="Save Changes"/>
			</td>
		</tr>
	</table>
</form>
```

The `spring-form.tld` tag library descriptor (TLD) is included in the `spring-webmvc.jar`. For a comprehensive reference on individual tags, browse the API reference or see the tag library description.

### HTTP Method Conversion

A key principle of REST is the use of the "Uniform Interface". This means that all resources (URLs) can be manipulated by using the same four HTTP methods: GET, PUT, POST, and DELETE. For each method, the HTTP specification defines the exact semantics. For instance, a GET should always be a safe operation, meaning that it has no side effects, and a PUT or DELETE should be idempotent, meaning that you can repeat these operations over and over again, but the end result should be the same. While HTTP defines these four methods, HTML only supports two: GET and POST. Fortunately, there are two possible workarounds: you can either use JavaScript to do your PUT or DELETE, or you can do a POST with the "real" method as an additional parameter (modeled as a hidden input field in an HTML form). Spring's `HiddenHttpMethodFilter` uses this latter trick. This filter is a plain Servlet filter and, therefore, it can be used in combination with any web framework (not just Spring MVC). Add this filter to your web.xml, and a POST with a hidden `method` parameter is converted into the corresponding HTTP method request.

To support HTTP method conversion, the Spring MVC form tag was updated to support setting the HTTP method. For example, the following snippet comes from the Pet Clinic sample:

```
<form:form method="delete">
	<p class="submit"><input type="submit" value="Delete Pet"/></p>
</form:form>
```

The preceding example performs an HTTP POST, with the "real" DELETE method hidden behind a request parameter. It is picked up by the `HiddenHttpMethodFilter`, which is defined in web.xml, as the following example shows:

```
<filter>
	<filter-name>httpMethodFilter</filter-name>
	<filter-class>org.springframework.web.filter.HiddenHttpMethodFilter</filter-class>
</filter>

<filter-mapping>
	<filter-name>httpMethodFilter</filter-name>
	<servlet-name>petclinic</servlet-name>
</filter-mapping>
```

The following example shows the corresponding `@Controller` method:

```
@RequestMapping(method = RequestMethod.DELETE)
public String deletePet(@PathVariable int ownerId, @PathVariable int petId) {
	this.clinic.deletePet(petId);
	return "redirect:/owners/" + ownerId;
}
```

```
@RequestMapping(method = [RequestMethod.DELETE])
fun deletePet(@PathVariable ownerId: Int, @PathVariable petId: Int): String {
	clinic.deletePet(petId)
	return "redirect:/owners/$ownerId"
}
```

### HTML5 Tags

The Spring form tag library allows entering dynamic attributes, which means you can enter any HTML5-specific attributes.
The form `input` tag supports entering a type attribute other than `text`. This is intended to allow rendering new HTML5-specific input types, such as `date`, `range`, and others. Note that entering `type='text'` is not required, since `text` is the default type.

## RSS and Atom

Both `AbstractAtomFeedView` and `AbstractRssFeedView` inherit from the `AbstractFeedView` base class and are used to provide Atom and RSS Feed views, respectively. They are based on the ROME project and are located in the `org.springframework.web.servlet.view.feed` package.

`AbstractAtomFeedView` requires you to implement the `buildFeedEntries()` method and optionally override the `buildFeedMetadata()` method (the default implementation is empty). The following example shows how to do so:

```
public class SampleContentAtomView extends AbstractAtomFeedView {

	@Override
	protected void buildFeedMetadata(Map<String, Object> model,
			Feed feed, HttpServletRequest request) {
		// implementation omitted
	}

	// buildFeedEntries() implementation omitted
}
```

```
class SampleContentAtomView : AbstractAtomFeedView() {

	override fun buildFeedMetadata(model: Map<String, Any>, feed: Feed, request: HttpServletRequest) {
		// implementation omitted
	}

	// buildFeedEntries() implementation omitted
}
```

Similar requirements apply for implementing `AbstractRssFeedView`, as the following example shows:

```
public class SampleContentRssView extends AbstractRssFeedView {

	@Override
	protected void buildFeedMetadata(Map<String, Object> model,
			Channel feed, HttpServletRequest request) {
		// implementation omitted
	}

	// buildFeedItems() implementation omitted
}
```

```
class SampleContentRssView : AbstractRssFeedView() {

	override fun buildFeedMetadata(model: Map<String, Any>, feed: Channel, request: HttpServletRequest) {
		// implementation omitted
	}

	// buildFeedItems() implementation omitted
}
```

The `buildFeedItems()` and `buildFeedEntries()` methods pass in the HTTP request, in case you need to access the Locale. The HTTP response is passed in only for the setting of cookies or other HTTP headers. The feed is automatically written to the response object after the method returns.

For an example of creating an Atom view, see the Spring Team Blog entry on that topic.

Spring offers ways to return output other than HTML, including PDF and Excel spreadsheets. This section describes how to use those features.

## Introduction to Document Views

An HTML page is not always the best way for the user to view the model output, and Spring makes it simple to generate a PDF document or an Excel spreadsheet dynamically from the model data. The document is the view and is streamed from the server with the correct content type, to (hopefully) enable the client PC to run their spreadsheet or PDF viewer application in response.

In order to use Excel views, you need to add the Apache POI library to your classpath. For PDF generation, you need to add (preferably) the OpenPDF library.

You should use the latest versions of the underlying document-generation libraries, if possible. In particular, we strongly recommend OpenPDF (for example, OpenPDF 1.2.12) instead of the outdated original iText 2.1.7, since OpenPDF is actively maintained and fixes an important vulnerability for untrusted PDF content.
## PDF Views

A simple PDF view for a word list could extend `org.springframework.web.servlet.view.document.AbstractPdfView` and implement the `buildPdfDocument()` method, as the following example shows:

```
public class PdfWordList extends AbstractPdfView {

	protected void buildPdfDocument(Map<String, Object> model, Document doc, PdfWriter writer,
			HttpServletRequest request, HttpServletResponse response) throws Exception {

		List<String> words = (List<String>) model.get("wordList");
		for (String word : words) {
			doc.add(new Paragraph(word));
		}
	}
}
```

```
class PdfWordList : AbstractPdfView() {

	override fun buildPdfDocument(model: Map<String, Any>, doc: Document, writer: PdfWriter,
			request: HttpServletRequest, response: HttpServletResponse) {

		val words = model["wordList"] as List<String>
		for (word in words) {
			doc.add(Paragraph(word))
		}
	}
}
```

A controller can return such a view either from an external view definition (referencing it by name) or as a `View` instance from the handler method.

## Excel Views

Since Spring Framework 4.2, `org.springframework.web.servlet.view.document.AbstractXlsView` is provided as a base class for Excel views. It is based on Apache POI, with specialized subclasses (`AbstractXlsxView` and `AbstractXlsxStreamingView`) that supersede the outdated `AbstractExcelView` class.

The programming model is similar to `AbstractPdfView`, with `buildExcelDocument()` as the central template method and controllers being able to return such a view from an external definition (by name) or as a `View` instance from the handler method.

Spring offers support for the Jackson JSON library.

## Jackson-based JSON MVC Views

`MappingJackson2JsonView` uses the Jackson library's `ObjectMapper` to render the response content as JSON. By default, the entire contents of the model map (with the exception of framework-specific classes) are encoded as JSON. For cases where the contents of the map need to be filtered, you can specify a specific set of model attributes to encode by using the `modelKeys` property. You can also use the `extractValueFromSingleKeyModel` property to have the value in single-key models extracted and serialized directly rather than as a map of model attributes.

You can customize JSON mapping as needed by using Jackson's provided annotations. When you need further control, you can inject a custom `ObjectMapper` through the `ObjectMapper` property, for cases where you need to provide custom JSON serializers and deserializers for specific types.

## Jackson-based XML Views

`MappingJackson2XmlView` uses the Jackson XML extension's `XmlMapper` to render the response content as XML. If the model contains multiple entries, you should explicitly set the object to be serialized by using the `modelKey` bean property. If the model contains a single entry, it is serialized automatically.

You can customize XML mapping as needed by using JAXB or Jackson's provided annotations. When you need further control, you can inject a custom `XmlMapper` through the `ObjectMapper` property, for cases where you need to provide custom XML serializers and deserializers for specific types.

## XML Marshalling

The `MarshallingView` uses an XML `Marshaller` (defined in the `org.springframework.oxm` package) to render the response content as XML. You can explicitly set the object to be marshalled by using a `MarshallingView` instance's `modelKey` bean property. Alternatively, the view iterates over all model properties and marshals the first type that is supported by the `Marshaller`.
For more information on the functionality in the `org.springframework.oxm` package, see Marshalling XML using O/X Mappers.

## XSLT Views

XSLT is a transformation language for XML and is popular as a view technology within web applications. XSLT can be a good choice as a view technology if your application naturally deals with XML or if your model can easily be converted to XML. The following section shows how to produce an XML document as model data and have it transformed with XSLT in a Spring Web MVC application.

This example is a trivial Spring application that creates a list of words in the `Controller` and adds them to the model map. The map is returned, along with the view name of our XSLT view. See Annotated Controllers for details of Spring Web MVC's `Controller` interface. The XSLT controller turns the list of words into a simple XML document ready for transformation.

## Beans

Configuration is standard for a simple Spring web application: the MVC configuration has to define an `XsltViewResolver` bean and regular MVC annotation configuration. The following example shows how to do so:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	@Bean
	public XsltViewResolver xsltViewResolver() {
		XsltViewResolver viewResolver = new XsltViewResolver();
		viewResolver.setPrefix("/WEB-INF/xsl/");
		viewResolver.setSuffix(".xslt");
		return viewResolver;
	}
}
```

```
@Configuration
@EnableWebMvc
class WebConfig : WebMvcConfigurer {

	@Bean
	fun xsltViewResolver() = XsltViewResolver().apply {
		setPrefix("/WEB-INF/xsl/")
		setSuffix(".xslt")
	}
}
```

## Controller

We also need a Controller that encapsulates our word-generation logic. The controller logic is encapsulated in a `@Controller` class, with the handler method being defined as follows:

```
@Controller
public class XsltController {

	@RequestMapping("/")
	public String home(Model model) throws Exception {
		Document document = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
		Element root = document.createElement("wordList");

		List<String> words = Arrays.asList("Hello", "Spring", "Framework");
		for (String word : words) {
			Element wordNode = document.createElement("word");
			Text textNode = document.createTextNode(word);
			wordNode.appendChild(textNode);
			root.appendChild(wordNode);
		}

		model.addAttribute("wordList", root);
		return "home";
	}
}
```

```
@Controller
class XsltController {

	@RequestMapping("/")
	fun home(model: Model): String {
		val document = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument()
		val root = document.createElement("wordList")

		val words = listOf("Hello", "Spring", "Framework")
		for (word in words) {
			val wordNode = document.createElement("word")
			val textNode = document.createTextNode(word)
			wordNode.appendChild(textNode)
			root.appendChild(wordNode)
		}

		model["wordList"] = root
		return "home"
	}
}
```

So far, we have only created a DOM document and added it to the Model map. Note that you can also load an XML file as a `Resource` and use it instead of a custom DOM document. There are software packages available that automatically 'domify' an object graph, but, within Spring, you have complete flexibility to create the DOM from your model in any way you choose. This prevents the transformation of XML playing too great a part in the structure of your model data, which is a danger when using tools to manage the DOMification process.

## Transformation

Finally, the `XsltViewResolver` resolves the "home" XSLT template file and merges the DOM document into it to generate our view. As shown in the `XsltViewResolver` configuration, XSLT templates live in the `war` file in the `WEB-INF/xsl` directory and end with an `xslt` file extension.
The following example shows an XSLT transform:

```
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

	<xsl:output method="html" omit-xml-declaration="yes"/>

	<xsl:template match="/">
		<html>
			<head><title>Hello!</title></head>
			<body>
				<h1>My First Words</h1>
				<ul>
					<xsl:apply-templates/>
				</ul>
			</body>
		</html>
	</xsl:template>

	<xsl:template match="word">
		<li><xsl:value-of select="."/></li>
	</xsl:template>

</xsl:stylesheet>
```

The preceding transform is rendered as the following HTML:

```
<html>
	<head>
		<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
		<title>Hello!</title>
	</head>
	<body>
		<h1>My First Words</h1>
		<ul>
			<li>Hello</li>
			<li>Spring</li>
			<li>Framework</li>
		</ul>
	</body>
</html>
```

## MVC Config

The MVC Java configuration and the MVC XML namespace provide default configuration suitable for most applications and a configuration API to customize it. For more advanced customizations, which are not available in the configuration API, see Advanced Java Config and Advanced XML Config.

You do not need to understand the underlying beans created by the MVC Java configuration and the MVC namespace. If you want to learn more, see Special Bean Types and Web MVC Config.

In Java configuration, you can use the `@EnableWebMvc` annotation to enable MVC configuration, as the following example shows:

```
@Configuration
@EnableWebMvc
public class WebConfig {
}
```

```
@Configuration
@EnableWebMvc
class WebConfig
```

In XML configuration, you can use the `<mvc:annotation-driven>` element to enable MVC configuration, as the following example shows:

```
<mvc:annotation-driven/>
```

The preceding example registers a number of Spring MVC infrastructure beans and adapts to dependencies available on the classpath (for example, payload converters for JSON, XML, and others).

In Java configuration, you can implement the `WebMvcConfigurer` interface, as shown in the sketch at the end of this section. In XML, you can check attributes and sub-elements of `<mvc:annotation-driven/>`. You can view the Spring MVC XML schema or use the code completion feature of your IDE to discover what attributes and sub-elements are available.
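The Java configuration example appears to have been lost from the text above; a minimal sketch of implementing `WebMvcConfigurer` (the class name is illustrative) might look like the following:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	// Override WebMvcConfigurer default methods here to customize the MVC
	// configuration, for example addFormatters(), addInterceptors(),
	// or configureMessageConverters().
}
```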
To do the same in XML config, use the following:

```
<mvc:annotation-driven conversion-service="conversionService"/>

<bean id="conversionService"
		class="org.springframework.format.support.FormattingConversionServiceFactoryBean">
	<property name="converters">
		<set>
			<bean class="org.example.MyConverter"/>
		</set>
	</property>
	<property name="formatters">
		<set>
			<bean class="org.example.MyFormatter"/>
			<bean class="org.example.MyAnnotationFormatterFactory"/>
		</set>
	</property>
	<property name="formatterRegistrars">
		<set>
			<bean class="org.example.MyFormatterRegistrar"/>
		</set>
	</property>
</bean>
```

See the `FormatterRegistrar` SPI and the `FormattingConversionServiceFactoryBean` for more information on when to use `FormatterRegistrar` implementations.

In XML configuration, you can plug in a global `Validator` instance through the `validator` attribute:

```
<mvc:annotation-driven validator="globalValidator"/>
```

## Interceptors

In Java configuration, you can register interceptors to apply to incoming requests, as the following example shows:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	@Override
	public void addInterceptors(InterceptorRegistry registry) {
		registry.addInterceptor(new LocaleChangeInterceptor());
		registry.addInterceptor(new ThemeChangeInterceptor()).addPathPatterns("/**").excludePathPatterns("/admin/**");
	}
}
```

```
@Configuration
@EnableWebMvc
class WebConfig : WebMvcConfigurer {

	override fun addInterceptors(registry: InterceptorRegistry) {
		registry.addInterceptor(LocaleChangeInterceptor())
		registry.addInterceptor(ThemeChangeInterceptor()).addPathPatterns("/**").excludePathPatterns("/admin/**")
	}
}
```

```
<mvc:interceptors>
	<bean class="org.springframework.web.servlet.i18n.LocaleChangeInterceptor"/>
	<mvc:interceptor>
		<mvc:mapping path="/**"/>
		<mvc:exclude-mapping path="/admin/**"/>
		<bean class="org.springframework.web.servlet.theme.ThemeChangeInterceptor"/>
	</mvc:interceptor>
</mvc:interceptors>
```

Interceptors are not ideally suited as a security layer due to the potential for a mismatch with annotated controller path matching, which can also match trailing slashes and path extensions transparently, along with other path matching options. Many of these options have been deprecated but the potential for a mismatch remains. Generally, we recommend using Spring Security, which includes a dedicated `MvcRequestMatcher` to align with Spring MVC path matching and also has a security firewall that blocks many unwanted characters in URL paths.

The XML config declares interceptors as `MappedInterceptor` beans, which are in turn detected by any `HandlerMapping` bean.

## Content Types

You can configure how Spring MVC determines the requested media types from the request (for example, `Accept` header, URL path extension, query parameter, and others). By default, only the `Accept` header is checked.

If you must use URL-based content type resolution, consider using the query parameter strategy over path extensions. See Suffix Match and Suffix Match and RFD for more details.

In Java configuration, you can customize requested content type resolution, as the following example shows:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	@Override
	public void configureContentNegotiation(ContentNegotiationConfigurer configurer) {
		configurer.mediaType("json", MediaType.APPLICATION_JSON);
		configurer.mediaType("xml", MediaType.APPLICATION_XML);
	}
}
```

```
@Configuration
@EnableWebMvc
class WebConfig : WebMvcConfigurer {

	override fun configureContentNegotiation(configurer: ContentNegotiationConfigurer) {
		configurer.mediaType("json", MediaType.APPLICATION_JSON)
		configurer.mediaType("xml", MediaType.APPLICATION_XML)
	}
}
```

```
<mvc:annotation-driven content-negotiation-manager="contentNegotiationManager"/>

<bean id="contentNegotiationManager" class="org.springframework.web.accept.ContentNegotiationManagerFactoryBean">
	<property name="mediaTypes">
		<value>
			json=application/json
			xml=application/xml
		</value>
	</property>
</bean>
```

## Message Converters

You can set the `HttpMessageConverter` instances to use in Java configuration, replacing the ones used by default, by overriding `configureMessageConverters()`.
You can also customize the list of configured message converters at the end by overriding `extendMessageConverters()`.

In a Spring Boot application, the `WebMvcAutoConfiguration` adds any `HttpMessageConverter` beans it detects, in addition to the default converters.

The following example adds XML and Jackson JSON converters with a customized `ObjectMapper` instead of the default ones:

```
@Configuration
@EnableWebMvc
public class WebConfiguration implements WebMvcConfigurer {

	@Override
	public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
		Jackson2ObjectMapperBuilder builder = new Jackson2ObjectMapperBuilder()
				.indentOutput(true)
				.dateFormat(new SimpleDateFormat("yyyy-MM-dd"))
				.modulesToInstall(new ParameterNamesModule());
		converters.add(new MappingJackson2HttpMessageConverter(builder.build()));
		converters.add(new MappingJackson2XmlHttpMessageConverter(builder.createXmlMapper(true).build()));
	}
}
```

```
@Configuration
@EnableWebMvc
class WebConfiguration : WebMvcConfigurer {

	override fun configureMessageConverters(converters: MutableList<HttpMessageConverter<*>>) {
		val builder = Jackson2ObjectMapperBuilder()
				.indentOutput(true)
				.dateFormat(SimpleDateFormat("yyyy-MM-dd"))
				.modulesToInstall(ParameterNamesModule())
		converters.add(MappingJackson2HttpMessageConverter(builder.build()))
		converters.add(MappingJackson2XmlHttpMessageConverter(builder.createXmlMapper(true).build()))
	}
}
```

In the preceding example, `Jackson2ObjectMapperBuilder` is used to create a common configuration for both `MappingJackson2HttpMessageConverter` and `MappingJackson2XmlHttpMessageConverter` with indentation enabled, a customized date format, and the registration of `jackson-module-parameter-names`, which adds support for accessing parameter names (a feature added in Java 8).

This builder customizes Jackson's default properties and automatically registers well-known modules if they are detected on the classpath, for example:

* jackson-datatype-joda: Support for Joda-Time types.

Enabling indentation with Jackson XML support requires the `woodstox-core-asl` dependency in addition to the `jackson-dataformat-xml` one.

Other interesting Jackson modules are available:

* jackson-datatype-money: Support for `javax.money` types (unofficial module).
* jackson-datatype-hibernate: Support for Hibernate-specific types and properties (including lazy-loading aspects).

```
<mvc:annotation-driven>
	<mvc:message-converters>
		<bean class="org.springframework.http.converter.json.MappingJackson2HttpMessageConverter">
			<property name="objectMapper" ref="objectMapper"/>
		</bean>
		<bean class="org.springframework.http.converter.xml.MappingJackson2XmlHttpMessageConverter">
			<property name="objectMapper" ref="xmlMapper"/>
		</bean>
	</mvc:message-converters>
</mvc:annotation-driven>

<bean id="objectMapper" class="org.springframework.http.converter.json.Jackson2ObjectMapperFactoryBean"
		p:indentOutput="true"
		p:simpleDateFormat="yyyy-MM-dd"
		p:modulesToInstall="com.fasterxml.jackson.module.paramnames.ParameterNamesModule"/>

<bean id="xmlMapper" parent="objectMapper" p:createXmlMapper="true"/>
```

## View Controllers

This is a shortcut for defining a `ParameterizableViewController` that immediately forwards to a view when invoked. You can use it in static cases when there is no Java controller logic to run before the view generates the response.
The following example of Java configuration forwards a request for `/` to a view called `home`:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	@Override
	public void addViewControllers(ViewControllerRegistry registry) {
		registry.addViewController("/").setViewName("home");
	}
}
```

```
@Configuration
@EnableWebMvc
class WebConfig : WebMvcConfigurer {

	override fun addViewControllers(registry: ViewControllerRegistry) {
		registry.addViewController("/").setViewName("home")
	}
}
```

The following example achieves the same thing as the preceding example, but with XML, by using the `<mvc:view-controller>` element:

```
<mvc:view-controller path="/" view-name="home"/>
```

If an `@RequestMapping` method is mapped to a URL for any HTTP method, then a view controller cannot be used to handle the same URL. This is because a match by URL to an annotated controller is considered a strong enough indication of endpoint ownership so that a 405 (METHOD_NOT_ALLOWED), a 415 (UNSUPPORTED_MEDIA_TYPE), or similar response can be sent to the client to help with debugging. For this reason, it is recommended to avoid splitting URL handling across an annotated controller and a view controller.

## View Resolvers

The MVC configuration simplifies the registration of view resolvers. The following Java configuration example configures content negotiation view resolution by using JSP and Jackson as a default `View` for JSON rendering:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	@Override
	public void configureViewResolvers(ViewResolverRegistry registry) {
		registry.enableContentNegotiation(new MappingJackson2JsonView());
		registry.jsp();
	}
}
```

```
@Configuration
@EnableWebMvc
class WebConfig : WebMvcConfigurer {

	override fun configureViewResolvers(registry: ViewResolverRegistry) {
		registry.enableContentNegotiation(MappingJackson2JsonView())
		registry.jsp()
	}
}
```

Note, however, that FreeMarker, Groovy Markup, and script templates also require configuration of the underlying view technology. The MVC namespace provides dedicated elements. The following example works with FreeMarker:

```
<mvc:freemarker-configurer>
	<mvc:template-loader-path location="/freemarker"/>
</mvc:freemarker-configurer>
```

In Java configuration, you can add the respective `Configurer` bean, as the following example shows:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	@Override
	public void configureViewResolvers(ViewResolverRegistry registry) {
		registry.enableContentNegotiation(new MappingJackson2JsonView());
		registry.freeMarker().cache(false);
	}

	@Bean
	public FreeMarkerConfigurer freeMarkerConfigurer() {
		FreeMarkerConfigurer configurer = new FreeMarkerConfigurer();
		configurer.setTemplateLoaderPath("/freemarker");
		return configurer;
	}
}
```

```
@Configuration
@EnableWebMvc
class WebConfig : WebMvcConfigurer {

	override fun configureViewResolvers(registry: ViewResolverRegistry) {
		registry.enableContentNegotiation(MappingJackson2JsonView())
		registry.freeMarker().cache(false)
	}

	@Bean
	fun freeMarkerConfigurer() = FreeMarkerConfigurer().apply {
		setTemplateLoaderPath("/freemarker")
	}
}
```

## Static Resources

This option provides a convenient way to serve static resources. The last-modified information is deduced from `Resource#lastModified` so that HTTP conditional requests are supported with `"Last-Modified"` headers. The following listing shows how to do so with Java configuration:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	@Override
	public void addResourceHandlers(ResourceHandlerRegistry registry) {
		registry.addResourceHandler("/resources/**")
				.addResourceLocations("/public", "classpath:/static/")
				.setCacheControl(CacheControl.maxAge(Duration.ofDays(365)));
	}
}
```

```
<mvc:resources mapping="/resources/**"
	location="/public, classpath:/static/"
	cache-period="31556926" />
```

To serve versioned resource URLs, you can use a `VersionResourceResolver`; a `ContentVersionStrategy` (MD5 hash) is a good choice — with some notable exceptions, such as JavaScript resources used with a module loader.
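The Java configuration counterpart of the XML example that follows appears to be missing from the text; a minimal sketch, assuming the same `/resources/**` mapping and a content-based version strategy, might look like this:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	@Override
	public void addResourceHandlers(ResourceHandlerRegistry registry) {
		registry.addResourceHandler("/resources/**")
				.addResourceLocations("/public/")
				// true enables caching of resolved resources in the chain
				.resourceChain(true)
				.addResolver(new VersionResourceResolver().addContentVersionStrategy("/**"));
	}
}
```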
The following example shows the equivalent in XML configuration:

```
<mvc:resources mapping="/resources/**" location="/public/">
	<mvc:resource-chain resource-cache="true">
		<mvc:resolvers>
			<mvc:version-resolver>
				<mvc:content-version-strategy patterns="/**"/>
			</mvc:version-resolver>
		</mvc:resolvers>
	</mvc:resource-chain>
</mvc:resources>
```

You can then use `ResourceUrlProvider` to rewrite URLs and apply the full chain of resolvers and transformers — for example, to insert versions. The MVC configuration provides a `ResourceUrlProvider` bean so that it can be injected into others. You can also make the rewrite transparent with the `ResourceUrlEncodingFilter` for Thymeleaf, JSPs, FreeMarker, and others with URL tags that rely on `HttpServletResponse#encodeURL`.

Note that, when using both `EncodedResourceResolver` (for example, for serving gzipped or brotli-encoded resources) and `VersionResourceResolver`, you must register them in this order. That ensures content-based versions are always computed reliably, based on the unencoded file.

For WebJars, versioned URLs (like `/webjars/jquery/1.2.0/jquery.min.js`) are also supported.

## Default Servlet

Spring MVC allows for mapping the `DispatcherServlet` to `/` (thus overriding the mapping of the container's default Servlet), while still allowing static resource requests to be handled by the container's default Servlet. It configures a `DefaultServletHttpRequestHandler` with a URL mapping of `/**` and the lowest priority relative to other URL mappings.

This handler forwards all requests to the default Servlet. Therefore, it must remain last in the order of all other URL `HandlerMappings`. That is the case with the default MVC configuration. Alternatively, if you set up your own customized `HandlerMapping` instance, be sure to set its `order` property to a value lower than that of the `DefaultServletHttpRequestHandler`, which is `Integer.MAX_VALUE`.

The following example shows how to enable the feature by using the default setup:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	@Override
	public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
		configurer.enable();
	}
}
```

```
@Configuration
@EnableWebMvc
class WebConfig : WebMvcConfigurer {

	override fun configureDefaultServletHandling(configurer: DefaultServletHandlerConfigurer) {
		configurer.enable()
	}
}
```

```
<mvc:default-servlet-handler/>
```

The caveat to overriding the `/` Servlet mapping is that the `RequestDispatcher` for the default Servlet must be retrieved by name rather than by path. The `DefaultServletHttpRequestHandler` tries to auto-detect the default Servlet for the container at startup time, using a list of known names for most of the major Servlet containers (including Tomcat, Jetty, GlassFish, JBoss, Resin, WebLogic, and WebSphere). If the default Servlet has been custom-configured with a different name, or if a different Servlet container is being used where the default Servlet name is unknown, then you must explicitly provide the default Servlet's name, as the following example shows:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	@Override
	public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
		configurer.enable("myCustomDefaultServlet");
	}
}
```

```
@Configuration
@EnableWebMvc
class WebConfig : WebMvcConfigurer {

	override fun configureDefaultServletHandling(configurer: DefaultServletHandlerConfigurer) {
		configurer.enable("myCustomDefaultServlet")
	}
}
```

```
<mvc:default-servlet-handler default-servlet-name="myCustomDefaultServlet"/>
```

## Path Matching

You can customize options related to path matching and treatment of the URL. For details on the individual options, see the `PathMatchConfigurer` javadoc.
The following example shows how to customize path matching in Java configuration:

```
@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

	@Override
	public void configurePathMatch(PathMatchConfigurer configurer) {
		configurer.addPathPrefix("/api", HandlerTypePredicate.forAnnotation(RestController.class));
	}

	private PathPatternParser patternParser() {
		// ...
	}
}
```

```
@Configuration
@EnableWebMvc
class WebConfig : WebMvcConfigurer {

	override fun configurePathMatch(configurer: PathMatchConfigurer) {
		configurer.addPathPrefix("/api", HandlerTypePredicate.forAnnotation(RestController::class.java))
	}

	fun patternParser(): PathPatternParser {
		// ...
	}
}
```

The following example shows how to customize path matching in XML configuration:

```
<mvc:annotation-driven>
	<mvc:path-matching path-helper="pathHelper" path-matcher="pathMatcher"/>
</mvc:annotation-driven>

<bean id="pathHelper" class="org.example.app.MyPathHelper"/>
<bean id="pathMatcher" class="org.example.app.MyPathMatcher"/>
```

## Advanced Java Config

`@EnableWebMvc` imports `DelegatingWebMvcConfiguration`, which:

* Provides default Spring configuration for Spring MVC applications
* Detects and delegates to `WebMvcConfigurer` implementations to customize that configuration.

For advanced mode, you can remove `@EnableWebMvc` and extend directly from `DelegatingWebMvcConfiguration` instead of implementing `WebMvcConfigurer`, as the following example shows:

```
@Configuration
public class WebConfig extends DelegatingWebMvcConfiguration {
}
```

```
@Configuration
class WebConfig : DelegatingWebMvcConfiguration()
```

## Advanced XML Config

The MVC namespace does not have an advanced mode. If you need to customize a property on a bean that you cannot change otherwise, you can use the `BeanPostProcessor` lifecycle hook of the Spring `ApplicationContext`, as the following example shows:

```
@Component
public class MyPostProcessor implements BeanPostProcessor {

	public Object postProcessBeforeInitialization(Object bean, String name) throws BeansException {
		// ...
		return bean;
	}
}
```

```
@Component
class MyPostProcessor : BeanPostProcessor {

	override fun postProcessBeforeInitialization(bean: Any, name: String): Any {
		// ...
		return bean
	}
}
```

Note that you need to declare `MyPostProcessor` as a bean, either explicitly in XML or by letting it be detected through a `<component-scan/>` declaration.

## HTTP/2

Servlet 4 containers are required to support HTTP/2, and Spring Framework 5 is compatible with Servlet API 4. From a programming model perspective, there is nothing specific that applications need to do. However, there are considerations related to server configuration. For more details, see the HTTP/2 wiki page.

The Servlet API does expose one construct related to HTTP/2. You can use the `jakarta.servlet.http.PushBuilder` to proactively push resources to clients, and it is supported as a method argument to `@RequestMapping` methods.

## REST Clients

This section describes options for client-side access to REST endpoints.

`RestTemplate`

`RestTemplate` is a synchronous client to perform HTTP requests. It is the original Spring REST client and exposes a simple, template-method API over underlying HTTP client libraries.

As of 5.0, the `RestTemplate` is in maintenance mode, with only minor requests for changes and bugs to be accepted. Please consider using the `WebClient`, which offers a more modern API.

## Testing

This section summarizes the options available in `spring-test` for Spring MVC applications.

* Servlet API Mocks: Mock implementations of Servlet API contracts for unit testing controllers, filters, and other web components. See Servlet API mock objects for more details.
* TestContext Framework: Support for loading Spring configuration in JUnit and TestNG tests, including efficient caching of the loaded configuration across test methods and support for loading a `WebApplicationContext` with a `MockServletContext`. See TestContext Framework for more details.
* Spring MVC Test: A framework, also known as `MockMvc`, for testing annotated controllers through the `DispatcherServlet`, complete with the Spring MVC infrastructure but without an HTTP server. See Spring MVC Test for more details.
* Client-side REST: `spring-test` provides a `MockRestServiceServer` that you can use as a mock server for testing client-side code that internally uses the `RestTemplate`. See Client REST Tests for more details.
* `WebTestClient`: Built for testing WebFlux applications, but it can also be used for end-to-end integration testing, to any server, over an HTTP connection. It is a non-blocking, reactive client and is well suited for testing asynchronous and streaming scenarios. See `WebTestClient` for more details.

## WebSockets

This part of the reference documentation covers support for Servlet stack, WebSocket messaging that includes raw WebSocket interactions, WebSocket emulation through SockJS, and publish-subscribe messaging through STOMP as a sub-protocol over WebSocket.

`WebSocketHandler`

Creating a WebSocket server is as simple as implementing `WebSocketHandler` or, more likely, extending either `TextWebSocketHandler` or `BinaryWebSocketHandler`. The following example uses `TextWebSocketHandler`:

```
import org.springframework.web.socket.WebSocketHandler;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.TextMessage;

public class MyHandler extends TextWebSocketHandler {

	@Override
	public void handleTextMessage(WebSocketSession session, TextMessage message) {
		// ...
	}
}
```

There is dedicated WebSocket Java configuration and XML namespace support for mapping the preceding WebSocket handler to a specific URL, as the following XML example shows:

```
<websocket:handlers>
	<websocket:mapping path="/myHandler" handler="myHandler"/>
</websocket:handlers>
```

Whether you use the `WebSocketHandler` API directly or indirectly (for example, through STOMP messaging), the application must synchronize the sending of messages, since the underlying standard WebSocket session (JSR-356) does not allow concurrent sending. One option is to wrap the `WebSocketSession` with `ConcurrentWebSocketSessionDecorator`.

## WebSocket Handshake

The easiest way to customize the initial HTTP WebSocket handshake request is through a `HandshakeInterceptor`, which exposes methods for "before" and "after" the handshake. You can use such an interceptor to preclude the handshake or to make any attributes available to the `WebSocketSession`. The following example uses a built-in interceptor to pass HTTP session attributes to the WebSocket session:

```
@Configuration
@EnableWebSocket
public class WebSocketConfiguration implements WebSocketConfigurer {

	@Override
	public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
		registry.addHandler(new MyHandler(), "/myHandler")
				.addInterceptors(new HttpSessionHandshakeInterceptor());
	}
}
```

```
<websocket:handlers>
	<websocket:mapping path="/myHandler" handler="myHandler"/>
	<websocket:handshake-interceptors>
		<bean class="org.springframework.web.socket.server.support.HttpSessionHandshakeInterceptor"/>
	</websocket:handshake-interceptors>
</websocket:handlers>
```

A more advanced option is to extend the `DefaultHandshakeHandler` that performs the steps of the WebSocket handshake, including validating the client origin, negotiating a sub-protocol, and other details. An application may also need to use this option if it needs to configure a custom `RequestUpgradeStrategy` in order to adapt to a WebSocket server engine and version that is not yet supported (see Deployment for more on this subject). Both the Java configuration and XML namespace make it possible to configure a custom `HandshakeHandler`.

Spring provides a `WebSocketHandlerDecorator` base class that you can use to decorate a `WebSocketHandler` with additional behavior. Logging and exception handling implementations are provided and added by default when using the WebSocket Java configuration or the XML namespace.

## Deployment

The Spring WebSocket API is easy to integrate into a Spring MVC application where the `DispatcherServlet` serves both HTTP WebSocket handshake and other HTTP requests.
It is also easy to integrate into other HTTP processing scenarios by invoking `WebSocketHttpRequestHandler`. This is convenient and easy to understand. However, special considerations apply with regards to JSR-356 runtimes.

The Jakarta WebSocket API (JSR-356) provides two deployment mechanisms. The first involves a Servlet container classpath scan (a Servlet 3 feature) at startup. The other is a registration API to use at Servlet container initialization. Neither of these mechanisms makes it possible to use a single "front controller" for all HTTP processing — including WebSocket handshake and all other HTTP requests — such as Spring MVC's `DispatcherServlet`.

This is a significant limitation of JSR-356 that Spring's WebSocket support addresses with server-specific `RequestUpgradeStrategy` implementations even when running in a JSR-356 runtime. Such strategies currently exist for Tomcat, Jetty, GlassFish, WebLogic, WebSphere, and Undertow (and WildFly). As of Jakarta WebSocket 2.1, a standard request upgrade strategy is available, which Spring chooses on Jakarta EE 10 based web containers such as Tomcat 10.1 and Jetty 12.

A secondary consideration is that Servlet containers with JSR-356 support are expected to perform a `ServletContainerInitializer` (SCI) scan that can slow down application startup — in some cases, dramatically. If a significant impact is observed after an upgrade to a Servlet container version with JSR-356 support, it should be possible to selectively enable or disable web fragments (and SCI scanning) through the use of the `<absolute-ordering/>` element in `web.xml`, as the following example shows:

```
<absolute-ordering/>
```

You can then selectively enable web fragments by name, such as Spring's own `SpringServletContainerInitializer` that provides support for the Servlet 3 Java initialization API. The following example shows how to do so:

```
<absolute-ordering>
	<name>spring_web</name>
</absolute-ordering>
```

## Server Configuration

Each underlying WebSocket engine exposes configuration properties that control runtime characteristics, such as message buffer sizes, idle timeout, and others.

For Tomcat, WildFly, and GlassFish, you can add a `ServletServerContainerFactoryBean` to your WebSocket Java config, as the following example shows:

```
@Configuration
@EnableWebSocket
public class WebSocketConfiguration {

	@Bean
	public ServletServerContainerFactoryBean createWebSocketContainer() {
		ServletServerContainerFactoryBean container = new ServletServerContainerFactoryBean();
		container.setMaxTextMessageBufferSize(8192);
		container.setMaxBinaryMessageBufferSize(8192);
		return container;
	}
}
```

```
<bean class="org.springframework...ServletServerContainerFactoryBean">
	<property name="maxTextMessageBufferSize" value="8192"/>
	<property name="maxBinaryMessageBufferSize" value="8192"/>
</bean>
```

For client-side WebSocket configuration, you should use `WebSocketContainerFactoryBean` (XML) or `ContainerProvider.getWebSocketContainer()` (Java configuration).

For Jetty, you need to supply a pre-configured Jetty `WebSocketServerFactory` and plug that into Spring's `DefaultHandshakeHandler` through your WebSocket Java config.
The following example shows how to do so:

```
@Configuration
@EnableWebSocket
public class WebSocketConfiguration implements WebSocketConfigurer {

	@Override
	public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
		registry.addHandler(echoWebSocketHandler(), "/echo").setHandshakeHandler(handshakeHandler());
	}

	// The echoWebSocketHandler() and handshakeHandler() bean methods are not
	// shown here; see the XML equivalent below for the Jetty upgrade strategy setup.
}
```

```
<websocket:handlers>
	<websocket:mapping path="/echo" handler="echoHandler"/>
	<websocket:handshake-handler ref="handshakeHandler"/>
</websocket:handlers>

<bean id="handshakeHandler" class="org.springframework...DefaultHandshakeHandler">
	<constructor-arg ref="upgradeStrategy"/>
</bean>

<bean id="upgradeStrategy" class="org.springframework...JettyRequestUpgradeStrategy">
	<constructor-arg ref="serverFactory"/>
</bean>

<bean id="serverFactory" class="org.eclipse.jetty...WebSocketServerFactory">
	<constructor-arg>
		<bean class="org.eclipse.jetty...WebSocketPolicy">
			<constructor-arg value="SERVER"/>
			<property name="inputBufferSize" value="8092"/>
			<property name="idleTimeout" value="600000"/>
		</bean>
	</constructor-arg>
</bean>
```

## Allowed Origins

As of Spring Framework 4.1.5, the default behavior for WebSocket and SockJS is to accept only same-origin requests. It is also possible to allow all or a specified list of origins. This check is mostly designed for browser clients. Nothing prevents other types of clients from modifying the `Origin` header value (see RFC 6454: The Web Origin Concept for more details).

The three possible behaviors are:

* Allow only same-origin requests (default): In this mode, when SockJS is enabled, the Iframe HTTP response header `X-Frame-Options` is set to `SAMEORIGIN`, and JSONP transport is disabled, since it does not allow checking the origin of a request. As a consequence, IE6 and IE7 are not supported when this mode is enabled.
* Allow a specified list of origins: Each allowed origin must start with `http://` or `https://`. In this mode, when SockJS is enabled, IFrame transport is disabled. As a consequence, IE6 through IE9 are not supported when this mode is enabled.
* Allow all origins: To enable this mode, you should provide `*` as the allowed origin value. In this mode, all transports are available.

You can configure WebSocket and SockJS allowed origins, as the following example shows:

```
<websocket:handlers allowed-origins="https://mydomain.com">
	<websocket:mapping path="/myHandler" handler="myHandler" />
</websocket:handlers>
```

## SockJS Fallback

Over the public Internet, restrictive proxies outside your control may preclude WebSocket interactions, either because they are not configured to pass on the `Upgrade` header or because they close long-lived connections that appear to be idle.

The solution to this problem is WebSocket emulation — that is, attempting to use WebSocket first and then falling back on HTTP-based techniques that emulate a WebSocket interaction and expose the same application-level API.

On the Servlet stack, the Spring Framework provides both server (and also client) support for the SockJS protocol. The goal of SockJS is to let applications use a WebSocket API but fall back to non-WebSocket alternatives when necessary at runtime, without the need to change application code.

SockJS consists of:

* The SockJS protocol defined in the form of executable narrated tests.
* The SockJS JavaScript client — a client library for use in browsers.
* SockJS server implementations, including one in the Spring Framework `spring-websocket` module.
* A SockJS Java client in the `spring-websocket` module (since version 4.1).

SockJS is designed for use in browsers. It uses a variety of techniques to support a wide range of browser versions.
For the full list of SockJS transport types and browsers, see the SockJS client page. Transports fall into three general categories: WebSocket, HTTP Streaming, and HTTP Long Polling. For an overview of these categories, see this blog post.

The SockJS client begins by sending `GET /info` to obtain basic information from the server. After that, it must decide what transport to use. If possible, WebSocket is used. If not, in most browsers, there is at least one HTTP streaming option. If not, then HTTP (long) polling is used.

All transport requests have the following URL structure:

> https://host:port/myApp/myEndpoint/{server-id}/{session-id}/{transport}

where:

* `{server-id}` is useful for routing requests in a cluster but is not used otherwise.
* `{session-id}` correlates HTTP requests belonging to a SockJS session.
* `{transport}` indicates the transport type (for example, `websocket`, `xhr-streaming`, and others).

The WebSocket transport needs only a single HTTP request to do the WebSocket handshake. All messages thereafter are exchanged on that socket.

HTTP transports require more requests. Ajax/XHR streaming, for example, relies on one long-running request for server-to-client messages and additional HTTP POST requests for client-to-server messages. Long polling is similar, except that it ends the current request after each server-to-client send.

SockJS adds minimal message framing. For example, the server sends the letter `o` ("open" frame) initially, messages are sent as `a["message1","message2"]` (JSON-encoded array), the letter `h` ("heartbeat" frame) if no messages flow for 25 seconds (by default), and the letter `c` ("close" frame) to close the session.

To learn more, run an example in a browser and watch the HTTP requests. The SockJS client allows fixing the list of transports, so it is possible to see each transport one at a time. The SockJS client also provides a debug flag, which enables helpful messages in the browser console. On the server side, you can enable `TRACE` logging for `org.springframework.web.socket`. For even more detail, see the SockJS protocol narrated test.

## Enabling SockJS

You can enable SockJS through Java configuration, as the following example shows:

```
@Configuration
@EnableWebSocket
public class WebSocketConfiguration implements WebSocketConfigurer {

	@Override
	public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
		registry.addHandler(myHandler(), "/myHandler").withSockJS();
	}
}
```

```
<websocket:handlers>
	<websocket:mapping path="/myHandler" handler="myHandler"/>
	<websocket:sockjs/>
</websocket:handlers>
```

Behind the scenes, SockJS requests are handled by `SockJsHttpRequestHandler`.

On the browser side, applications can use the `sockjs-client` (version 1.0.x). It emulates the W3C WebSocket API and communicates with the server to select the best transport option, depending on the browser in which it runs. See the sockjs-client page and the list of transport types supported by browser. The client also provides several configuration options — for example, to specify which transports to include.

## IE 8 and 9

Internet Explorer 8 and 9 remain in use. They are a key reason for having SockJS. This section covers important considerations about running in those browsers.

The SockJS client supports Ajax/XHR streaming in IE 8 and 9 by using Microsoft's `XDomainRequest`. That works across domains but does not support sending cookies. Cookies are often essential for Java applications. However, since the SockJS client can be used with many server types (not just Java ones), it needs to know whether cookies matter. If so, the SockJS client prefers Ajax/XHR for streaming.
Otherwise, it relies on an iframe-based technique.

The first `/info` request from the SockJS client is a request for information that can influence the client's choice of transports. One of those details is whether the server application relies on cookies (for example, for authentication purposes or clustering with sticky sessions). Spring's SockJS support includes a property called `sessionCookieNeeded`. It is enabled by default, since most Java applications rely on the `JSESSIONID` cookie. If your application does not need it, you can turn off this option, and the SockJS client should then choose `xdr-streaming` in IE 8 and 9.

If you do use an iframe-based transport, keep in mind that browsers can be instructed to block the use of IFrames on a given page by setting the HTTP response header `X-Frame-Options` to `DENY`, `SAMEORIGIN`, or `ALLOW-FROM <origin>`. This is used to prevent clickjacking.

If your application adds the `X-Frame-Options` response header (as it should!) and relies on an iframe-based transport, you need to set the header value to `SAMEORIGIN` or `ALLOW-FROM <origin>`. The Spring SockJS support also needs to know the location of the SockJS client, because it is loaded from the iframe. By default, the iframe is set to download the SockJS client from a CDN location. It is a good idea to configure this option to use a URL from the same origin as the application.

The following example shows how to do so in Java configuration:

```
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

	@Override
	public void registerStompEndpoints(StompEndpointRegistry registry) {
		registry.addEndpoint("/portfolio").withSockJS()
				.setClientLibraryUrl("http://localhost:8080/myapp/js/sockjs-client.js");
	}
}
```

The XML namespace provides a similar option through the `<websocket:sockjs>` element.

During initial development, do enable the SockJS client `devel` mode that prevents the browser from caching SockJS requests (like the iframe) that would otherwise be cached.

## Heartbeats

The SockJS protocol requires servers to send heartbeat messages to preclude proxies from concluding that a connection is hung. The Spring SockJS configuration has a property called `heartbeatTime` that you can use to customize the frequency. By default, a heartbeat is sent after 25 seconds, assuming no other messages were sent on that connection. This 25-second value is in line with the following IETF recommendation for public Internet applications.

When using STOMP over WebSocket and SockJS, if the STOMP client and server negotiate heartbeats to be exchanged, the SockJS heartbeats are disabled.

The Spring SockJS support also lets you configure the `TaskScheduler` to schedule heartbeat tasks. The task scheduler is backed by a thread pool, with default settings based on the number of available processors. You should consider customizing the settings according to your specific needs.

## Client Disconnects

HTTP streaming and HTTP long polling SockJS transports require a connection to remain open longer than usual. For an overview of these techniques, see this blog post.

In Servlet containers, this is done through Servlet 3 asynchronous support that allows exiting the Servlet container thread, processing a request, and continuing to write to the response from another thread.

A specific issue is that the Servlet API does not provide notifications for a client that has gone away. See eclipse-ee4j/servlet-api#44. However, Servlet containers raise an exception on subsequent attempts to write to the response.
Since Spring's SockJS Service supports server-sent heartbeats (every 25 seconds by default), that means a client disconnect is usually detected within that time period (or earlier, if messages are sent more frequently).

As a result, network I/O failures can occur because a client has disconnected, which can fill the log with unnecessary stack traces. Spring makes a best effort to identify such network failures that represent client disconnects (specific to each server) and log a minimal message by using the dedicated log category `DISCONNECTED_CLIENT_LOG_CATEGORY`, defined in `AbstractSockJsSession`.

## SockJS and CORS

If you allow cross-origin requests (see Allowed Origins), the SockJS protocol uses CORS for cross-domain support in the XHR streaming and polling transports. Therefore, CORS headers are added automatically, unless the presence of CORS headers in the response is detected. So, if an application is already configured to provide CORS support (for example, through a Servlet Filter), Spring's `SockJsService` skips this part.

It is also possible to disable the addition of these CORS headers by setting the `suppressCors` property in Spring's `SockJsService`.

SockJS expects the following headers and values:

* `Access-Control-Allow-Origin`: Initialized from the value of the `Origin` request header.
* `Access-Control-Allow-Credentials`: Always set to `true`.
* `Access-Control-Request-Headers`: Initialized from values from the equivalent request header.
* `Access-Control-Allow-Methods`: The HTTP methods a transport supports (see the `TransportType` enum).
* `Access-Control-Max-Age`: Set to 31536000 (1 year).

For the exact implementation, see `addCorsHeaders` in `AbstractSockJsService` and the `TransportType` enum in the source code.

Alternatively, if the CORS configuration allows it, consider excluding URLs with the SockJS endpoint prefix, thus letting Spring's `SockJsService` handle it.

`SockJsClient`

Spring provides a SockJS Java client to connect to remote SockJS endpoints without using a browser. This can be especially useful when there is a need for bidirectional communication between two servers over a public network (that is, where network proxies can preclude the use of the WebSocket protocol). A SockJS Java client is also very useful for testing purposes (for example, to simulate a large number of concurrent users).

The SockJS Java client supports the `websocket`, `xhr-streaming`, and `xhr-polling` transports. The remaining ones only make sense for use in a browser.

You can configure the `WebSocketTransport` with:

* `StandardWebSocketClient` in a JSR-356 runtime.
* `JettyWebSocketClient` by using the Jetty 9+ native WebSocket API.
* Any implementation of Spring's `WebSocketClient`.

An `XhrTransport`, by definition, supports both `xhr-streaming` and `xhr-polling`, since, from a client perspective, there is no difference other than in the URL used to connect to the server. At present, there are two implementations:

* `RestTemplateXhrTransport` uses Spring's `RestTemplate` for HTTP requests.
* `JettyXhrTransport` uses Jetty's `HttpClient` for HTTP requests.

The following example shows how to create a SockJS client and connect to a SockJS endpoint:

```
List<Transport> transports = new ArrayList<>(2);
transports.add(new WebSocketTransport(new StandardWebSocketClient()));
transports.add(new RestTemplateXhrTransport());

SockJsClient sockJsClient = new SockJsClient(transports);
sockJsClient.doHandshake(new MyWebSocketHandler(), "ws://example.com:8080/sockjs");
```

SockJS uses JSON formatted arrays for messages.
By default, Jackson 2 is used and needs to be on the classpath. Alternatively, you can configure a custom implementation of `SockJsMessageCodec`.

To use `SockJsClient` to simulate a large number of concurrent users, you need to configure the underlying HTTP client (for XHR transports) to allow a sufficient number of connections and threads. The following example shows how to do so with Jetty:

```
HttpClient jettyHttpClient = new HttpClient();
jettyHttpClient.setMaxConnectionsPerDestination(1000);
jettyHttpClient.setExecutor(new QueuedThreadPool(1000));
```

The following example shows the server-side SockJS-related properties (see javadoc for details) that you should also consider customizing:

```
@Configuration
public class WebSocketConfig extends WebSocketMessageBrokerConfigurationSupport {

	@Override
	public void registerStompEndpoints(StompEndpointRegistry registry) {
		registry.addEndpoint("/sockjs").withSockJS()
				.setStreamBytesLimit(512 * 1024) // (1)
				.setHttpMessageCacheSize(1000) // (2)
				.setDisconnectDelay(30 * 1000); // (3)
	}
}
```

1. Set the `streamBytesLimit` property to 512KB (the default is 128KB).
2. Set the `httpMessageCacheSize` property to 1,000 (the default is 100).
3. Set the `disconnectDelay` property to 30 seconds (the default is five seconds).

## STOMP

The WebSocket protocol defines two types of messages (text and binary), but their content is undefined. The protocol defines a mechanism for client and server to negotiate a sub-protocol (that is, a higher-level messaging protocol) to use on top of WebSocket to define what kind of messages each can send, what the format is, the content of each message, and so on. The use of a sub-protocol is optional but, either way, the client and the server need to agree on some protocol that defines message content.

STOMP (Simple Text Oriented Messaging Protocol) was originally created for scripting languages (such as Ruby, Python, and Perl) to connect to enterprise message brokers. It is designed to address a minimal subset of commonly used messaging patterns. STOMP can be used over any reliable two-way streaming network protocol, such as TCP and WebSocket. Although STOMP is a text-oriented protocol, message payloads can be either text or binary.

STOMP is a frame-based protocol whose frames are modeled on HTTP. The following listing shows the structure of a STOMP frame:

```
COMMAND
header1:value1
header2:value2

Body^@
```

Clients can use the `SEND` or `SUBSCRIBE` commands to send or subscribe for messages, along with a `destination` header that describes what the message is about and who should receive it. This enables a simple publish-subscribe mechanism that you can use to send messages through the broker to other connected clients or to send messages to the server to request that some work be performed.

When you use Spring's STOMP support, the Spring WebSocket application acts as the STOMP broker to clients. Messages are routed to `@Controller` message-handling methods or to a simple in-memory broker that keeps track of subscriptions and broadcasts messages to subscribed users. You can also configure Spring to work with a dedicated STOMP broker (such as RabbitMQ, ActiveMQ, and others) for the actual broadcasting of messages. In that case, Spring maintains TCP connections to the broker, relays messages to it, and passes messages from it down to connected WebSocket clients. Thus, Spring web applications can rely on unified HTTP-based security, common validation, and a familiar programming model for message handling.
The following example shows a client subscribing to receive stock quotes, which the server may emit periodically (for example, via a scheduled task that sends messages through a `SimpMessagingTemplate` to the broker):

```
SUBSCRIBE
id:sub-1
destination:/topic/price.stock.*

^@
```

The following example shows a client that sends a trade request, which the server can handle through an `@MessageMapping` method:

```
SEND
destination:/queue/trade
content-type:application/json
content-length:44

{"action":"BUY","ticker":"MMM","shares":44}^@
```

After the trade request is executed, the server can broadcast a trade confirmation message and details down to the client.

The meaning of a destination is intentionally left opaque in the STOMP spec. It can be any string, and it is entirely up to STOMP servers to define the semantics and the syntax of the destinations that they support. It is very common, however, for destinations to be path-like strings where `/topic/..` implies publish-subscribe (one-to-many) and `/queue/` implies point-to-point (one-to-one) message exchanges.

STOMP servers can use the `MESSAGE` command to broadcast messages to all subscribers. The following example shows a server sending a stock quote to a subscribed client:

```
MESSAGE
message-id:nxahklf6-1
subscription:sub-1
destination:/topic/price.stock.MMM

{"ticker":"MMM","price":129.45}^@
```

A server cannot send unsolicited messages. All messages from a server must be in response to a specific client subscription, and the `subscription` header of the server message must match the `id` header of the client subscription.

The preceding overview is intended to provide the most basic understanding of the STOMP protocol. We recommend reviewing the protocol specification in full.

## Benefits

Using STOMP as a sub-protocol lets the Spring Framework and Spring Security provide a richer programming model versus using raw WebSockets. The same point can be made about HTTP versus raw TCP and how it lets Spring MVC and other web frameworks provide rich functionality. The following is a list of benefits:

* No need to invent a custom messaging protocol and message format.
* STOMP clients, including a Java client in the Spring Framework, are available.
* You can (optionally) use message brokers (such as RabbitMQ, ActiveMQ, and others) to manage subscriptions and broadcast messages.
* Application logic can be organized in any number of `@Controller` instances and messages can be routed to them based on the STOMP destination header versus handling raw WebSocket messages with a single `WebSocketHandler` for a given connection.
* You can use Spring Security to secure messages based on STOMP destinations and message types.

## Enable STOMP

STOMP over WebSocket support is available in the `spring-messaging` and `spring-websocket` modules. Once you have those dependencies, you can expose a STOMP endpoint over WebSocket, as the following example shows:

```
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfiguration implements WebSocketMessageBrokerConfigurer {

	@Override
	public void registerStompEndpoints(StompEndpointRegistry registry) {
		registry.addEndpoint("/portfolio"); // (1)
	}

	@Override
	public void configureMessageBroker(MessageBrokerRegistry config) {
		config.setApplicationDestinationPrefixes("/app"); // (2)
		config.enableSimpleBroker("/topic", "/queue"); // (3)
	}
}
```

1. `/portfolio` is the HTTP URL for the endpoint to which a WebSocket (or SockJS) client needs to connect for the WebSocket handshake.
2. STOMP messages whose destination header begins with `/app` are routed to `@MessageMapping` methods in `@Controller` classes.
3. Use the built-in message broker for subscriptions and broadcasting and route messages whose destination header begins with `/topic` or `/queue` to the broker.

```
<websocket:message-broker application-destination-prefix="/app">
	<websocket:stomp-endpoint path="/portfolio" />
	<websocket:simple-broker prefix="/topic, /queue"/>
</websocket:message-broker>
```

For the built-in simple broker, the `/topic` and `/queue` prefixes do not have any special meaning.

To connect from a browser, for STOMP, you can use `stomp-js/stompjs`, which is the most actively maintained JavaScript library. The following example code is based on it:

```
const stompClient = new StompJs.Client({
	brokerURL: 'ws://domain.com/portfolio',
	onConnect: () => {
		// ...
	}
});
```

Alternatively, if you connect through SockJS, you can enable the SockJS Fallback on the server side with `registry.addEndpoint("/portfolio").withSockJS()` and, on the JavaScript side, by following these instructions.

Note that `stompClient` in the preceding example does not need to specify `login` and `passcode` headers. Even if it did, they would be ignored (or, rather, overridden) on the server side. See Connecting to a Broker and Authentication for more information on authentication.

For more example code see:

* Using WebSocket to build an interactive web application — a getting started guide.
* Stock Portfolio — a sample application.

## WebSocket Server

To configure the underlying WebSocket server, the information in Server Configuration applies. For Jetty, however, you need to set the `HandshakeHandler` and `WebSocketPolicy` through the `StompEndpointRegistry`:

```
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfiguration implements WebSocketMessageBrokerConfigurer {

	@Override
	public void registerStompEndpoints(StompEndpointRegistry registry) {
		registry.addEndpoint("/portfolio").setHandshakeHandler(handshakeHandler());
	}

	// The handshakeHandler() bean method is not shown here; see Server Configuration.
}
```

## Flow of Messages

Once a STOMP endpoint is exposed, the Spring application becomes a STOMP broker for connected clients. This section describes the flow of messages on the server side.

The `spring-messaging` module contains foundational support for messaging applications that originated in Spring Integration and was later extracted and incorporated into the Spring Framework for broader use across many Spring projects and application scenarios. The following list briefly describes a few of the available messaging abstractions:

* Message: Simple representation for a message, including headers and payload.
* MessageHandler: Contract for handling a message.
* MessageChannel: Contract for sending a message that enables loose coupling between producers and consumers.
* SubscribableChannel: `MessageChannel` with `MessageHandler` subscribers.
* ExecutorSubscribableChannel: `SubscribableChannel` that uses an `Executor` for delivering messages.

Both the Java configuration (that is, `@EnableWebSocketMessageBroker`) and the XML namespace configuration (that is, `<websocket:message-broker>`) use the preceding components to assemble a message workflow.

The following diagram shows the components used when the simple built-in message broker is enabled:

The preceding diagram shows three message channels:

* `clientInboundChannel`: For passing messages received from WebSocket clients.
* `clientOutboundChannel`: For sending server messages to WebSocket clients.
* `brokerChannel`: For sending messages to the message broker from within server-side application code.

The next diagram shows the components used when an external broker (such as RabbitMQ) is configured for managing subscriptions and broadcasting messages:

The main difference between the two preceding diagrams is the use of the "broker relay" for passing messages up to the external STOMP broker over TCP and for passing messages down from the broker to subscribed clients.
When messages are received from a WebSocket connection, they are decoded to STOMP frames, turned into a Spring `Message` representation, and sent to the `clientInboundChannel` for further processing. For example, STOMP messages whose destination headers start with `/app` may be routed to `@MessageMapping` methods in annotated controllers, while `/topic` and `/queue` messages may be routed directly to the message broker.

An annotated `@Controller` that handles a STOMP message from a client may send a message to the message broker through the `brokerChannel`, and the broker broadcasts the message to matching subscribers through the `clientOutboundChannel`. The same controller can also do the same in response to HTTP requests, so a client can perform an HTTP POST, and then a `@PostMapping` method can send a message to the message broker to broadcast to subscribed clients.

We can trace the flow through a simple example. Consider the following example, which sets up a server:

```
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfiguration implements WebSocketMessageBrokerConfigurer {

	@Override
	public void registerStompEndpoints(StompEndpointRegistry registry) {
		registry.addEndpoint("/portfolio");
	}

	@Override
	public void configureMessageBroker(MessageBrokerRegistry registry) {
		registry.setApplicationDestinationPrefixes("/app");
		registry.enableSimpleBroker("/topic");
	}
}

@Controller
public class GreetingController {

	@MessageMapping("/greeting")
	public String handle(String greeting) {
		return "[" + getTimestamp() + "]: " + greeting;
	}

	// Simple timestamp helper so the example compiles.
	private String getTimestamp() {
		return java.time.LocalTime.now().toString();
	}
}
```

The preceding example supports the following flow:

* The client connects to `localhost:8080/portfolio` and, once a WebSocket connection is established, STOMP frames begin to flow on it.
* The client sends a SUBSCRIBE frame with a destination header of `/topic/greeting`. Once received and decoded, the message is sent to the `clientInboundChannel` and is then routed to the message broker, which stores the client subscription.
* The client sends a SEND frame to `/app/greeting`. The `/app` prefix helps to route it to annotated controllers. After the `/app` prefix is stripped, the remaining `/greeting` part of the destination is mapped to the `@MessageMapping` method in `GreetingController`.
* The value returned from `GreetingController` is turned into a Spring `Message` with a payload based on the return value and a default destination header of `/topic/greeting` (derived from the input destination with `/app` replaced by `/topic`). The resulting message is sent to the `brokerChannel` and handled by the message broker.
* The message broker finds all matching subscribers and sends a MESSAGE frame to each one through the `clientOutboundChannel`, from where messages are encoded as STOMP frames and sent on the WebSocket connection.

The next section provides more details on annotated methods, including the kinds of arguments and return values that are supported.

## Annotated Controllers

Applications can use annotated `@Controller` classes to handle messages from clients. Such classes can declare `@MessageMapping`, `@SubscribeMapping`, and `@ExceptionHandler` methods, as described in the following topics:

`@MessageMapping`

You can use `@MessageMapping` to annotate methods that route messages based on their destination. It is supported at the method level as well as at the type level. At the type level, `@MessageMapping` is used to express shared mappings across all methods in a controller.

By default, the mapping values are Ant-style path patterns (for example `/thing*`, `/thing/**`), including support for template variables (for example, `/thing/{id}`). The values can be referenced through `@DestinationVariable` method arguments, as shown in the sketch below. Applications can also switch to a dot-separated destination convention for mappings, as explained in Dots as Separators.
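A brief sketch of such a destination mapping with a template variable (the controller name and destination are illustrative, not from the original text):

```
import org.springframework.messaging.handler.annotation.DestinationVariable;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.stereotype.Controller;

@Controller
public class ThingController {

	// Handles messages sent to /app/thing/{id}; the {id} segment of the
	// destination is bound to the @DestinationVariable argument.
	@MessageMapping("/thing/{id}")
	public void handleThing(@DestinationVariable String id) {
		// ...
	}
}
```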
### Supported Method Arguments

Supported method arguments include `Message`, `MessageHeaders`, `MessageHeaderAccessor`, the payload (optionally annotated with `@Payload`), `@Header` and `@Headers` arguments, `@DestinationVariable` arguments, and `java.security.Principal`.

By default, the return value from a `@MessageMapping` method is serialized to a payload through a matching `MessageConverter` and sent as a `Message` to the `brokerChannel`, from where it is broadcast to subscribers. The destination of the outbound message is the same as that of the inbound message but prefixed with `/topic`.

You can use the `@SendTo` and `@SendToUser` annotations to customize the destination of the output message. `@SendTo` is used to customize the target destination or to specify multiple destinations. `@SendToUser` is used to direct the output message to only the user associated with the input message. See User Destinations.

You can use both `@SendTo` and `@SendToUser` at the same time on the same method, and both are supported at the class level, in which case they act as a default for methods in the class. However, keep in mind that any method-level `@SendTo` or `@SendToUser` annotations override any such annotations at the class level.

Messages can be handled asynchronously, and a `@MessageMapping` method can return `ListenableFuture`, `CompletableFuture`, or `CompletionStage`.

Note that `@SendTo` and `@SendToUser` are merely a convenience that amounts to using the `SimpMessagingTemplate` to send messages. If necessary, for more advanced scenarios, `@MessageMapping` methods can fall back on using the `SimpMessagingTemplate` directly. This can be done instead of, or possibly in addition to, returning a value. See Sending Messages.

`@SubscribeMapping`

`@SubscribeMapping` is similar to `@MessageMapping` but narrows the mapping to subscription messages only. It supports the same method arguments as `@MessageMapping`. However, for the return value, by default, a message is sent directly to the client (through the `clientOutboundChannel`, in response to the subscription) and not to the broker (through the `brokerChannel`, as a broadcast to matching subscriptions). Adding `@SendTo` or `@SendToUser` overrides this behavior and sends to the broker instead.

When is this useful? Assume that the broker is mapped to `/topic` and `/queue`, while application controllers are mapped to `/app`. In this setup, the broker stores all subscriptions to `/topic` and `/queue` that are intended for repeated broadcasts, and there is no need for the application to get involved. A client could also subscribe to some `/app` destination, and a controller could return a value in response to that subscription without the broker being involved and without the subscription being stored or used again (effectively a one-time request-reply exchange). One use case for this is populating a UI with initial data on startup.

When is this not useful? Do not try to map broker and controllers to the same destination prefix unless you want both to independently process messages, including subscriptions, for some reason. Inbound messages are handled in parallel. There are no guarantees whether a broker or a controller processes a given message first. If the goal is to be notified when a subscription is stored and ready for broadcasts, a client should ask for a receipt if the server supports it (the simple broker does not). For example, with the Java STOMP client, you could do the following to add a receipt:

```
@Autowired
private TaskScheduler messageBrokerTaskScheduler;

// During initialization..
stompClient.setTaskScheduler(this.messageBrokerTaskScheduler);

// When subscribing..
StompHeaders headers = new StompHeaders();
headers.setDestination("/topic/...");
headers.setReceipt("r1");

FrameHandler handler = ...;
stompSession.subscribe(headers, handler).addReceiptTask(receiptHeaders -> {
	// Subscription ready...
});
```

A server side option is to register an `ExecutorChannelInterceptor` on the `brokerChannel` and implement the `afterMessageHandled` method that is invoked after messages, including subscriptions, have been handled.

`@MessageExceptionHandler`

An application can use `@MessageExceptionHandler` methods to handle exceptions from `@MessageMapping` methods. You can declare exceptions in the annotation itself or through a method argument if you want to get access to the exception instance. The following example declares an exception through a method argument:

```
@Controller
public class MyController {

	// ...

	@MessageExceptionHandler
	public ApplicationError handleException(MyException exception) {
		// ...
		return appError;
	}
}
```

`@MessageExceptionHandler` methods support flexible method signatures and support the same method argument types and return values as `@MessageMapping` methods.

Typically, `@MessageExceptionHandler` methods apply within the `@Controller` class (or class hierarchy) in which they are declared. If you want such methods to apply more globally (across controllers), you can declare them in a class marked with `@ControllerAdvice`. This is comparable to the similar support available in Spring MVC.

## Sending Messages

What if you want to send messages to connected clients from any part of the application? Any application component can send messages to the `brokerChannel`. The easiest way to do so is to inject a `SimpMessagingTemplate` and use it to send messages. Typically, you would inject it by type, as the following example shows:

```
@Controller
public class GreetingController {

	private SimpMessagingTemplate template;

	@Autowired
	public GreetingController(SimpMessagingTemplate template) {
		this.template = template;
	}

	@RequestMapping(path="/greetings", method=POST)
	public void greet(String greeting) {
		String text = "[" + getTimestamp() + "]:" + greeting;
		this.template.convertAndSend("/topic/greetings", text);
	}
}
```

However, you can also qualify it by its name (`brokerMessagingTemplate`), if another bean of the same type exists.

## Simple Broker

The built-in simple message broker handles subscription requests from clients, stores them in memory, and broadcasts messages to connected clients that have matching destinations. The broker supports path-like destinations, including subscriptions to Ant-style destination patterns.

Applications can also use dot-separated (rather than slash-separated) destinations. See Dots as Separators.

If configured with a task scheduler, the simple broker supports STOMP heartbeats. To configure a scheduler, you can declare your own `TaskScheduler` bean and set it through the `MessageBrokerRegistry`. Alternatively, you can use the one that is automatically declared in the built-in WebSocket configuration; however, you'll need `@Lazy` to avoid a cycle between the built-in WebSocket configuration and your `WebSocketMessageBrokerConfigurer`. For example:

```
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfiguration implements WebSocketMessageBrokerConfigurer {

	private TaskScheduler messageBrokerTaskScheduler;

	@Autowired
	public void setMessageBrokerTaskScheduler(@Lazy TaskScheduler taskScheduler) {
		this.messageBrokerTaskScheduler = taskScheduler;
	}

	@Override
	public void configureMessageBroker(MessageBrokerRegistry registry) {
		registry.enableSimpleBroker("/queue/", "/topic/")
				.setHeartbeatValue(new long[] {10000, 20000})
				.setTaskScheduler(this.messageBrokerTaskScheduler);
	}
}
```

## External Broker

The simple broker is great for getting started but supports only a subset of STOMP commands (it does not support acks, receipts, and some other features), relies on a simple message-sending loop, and is not suitable for clustering. As an alternative, you can upgrade your applications to use a full-featured message broker.
See the STOMP documentation for your message broker of choice (such as RabbitMQ, ActiveMQ, and others), install the broker, and run it with STOMP support enabled. Then you can enable the STOMP broker relay (instead of the simple broker) in the Spring configuration. The following example configuration enables a full-featured broker:

```
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

	@Override
	public void registerStompEndpoints(StompEndpointRegistry registry) {
		registry.addEndpoint("/portfolio").withSockJS();
	}

	@Override
	public void configureMessageBroker(MessageBrokerRegistry registry) {
		registry.enableStompBrokerRelay("/topic", "/queue");
		registry.setApplicationDestinationPrefixes("/app");
	}
}
```

```
<websocket:message-broker application-destination-prefix="/app">
	<websocket:stomp-endpoint path="/portfolio">
		<websocket:sockjs/>
	</websocket:stomp-endpoint>
	<websocket:stomp-broker-relay prefix="/topic,/queue" />
</websocket:message-broker>
```

The STOMP broker relay in the preceding configuration is a Spring `MessageHandler` that handles messages by forwarding them to an external message broker. To do so, it establishes TCP connections to the broker, forwards all messages to it, and then forwards all messages received from the broker to clients through their WebSocket sessions. Essentially, it acts as a “relay” that forwards messages in both directions.

Furthermore, application components (such as HTTP request handling methods, business services, and others) can also send messages to the broker relay, as described in Sending Messages, to broadcast messages to subscribed WebSocket clients. In effect, the broker relay enables robust and scalable message broadcasting.

A STOMP broker relay maintains a single “system” TCP connection to the broker. This connection is used for messages originating from the server-side application only, not for receiving messages. You can configure the STOMP credentials (that is, the STOMP frame `login` and `passcode` headers) for this connection. This is exposed in both the XML namespace and Java configuration as the `systemLogin` and `systemPasscode` properties with default values of `guest` and `guest`.

The STOMP broker relay also creates a separate TCP connection for every connected WebSocket client. You can configure the STOMP credentials that are used for all TCP connections created on behalf of clients. This is exposed in both the XML namespace and Java configuration as the `clientLogin` and `clientPasscode` properties with default values of `guest` and `guest`.

The STOMP broker relay always sets the `login` and `passcode` headers on every `CONNECT` frame that it forwards to the broker on behalf of clients, so WebSocket clients need not set those headers; they are ignored.

The STOMP broker relay also sends and receives heartbeats to and from the message broker over the “system” TCP connection. You can configure the intervals for sending and receiving heartbeats (10 seconds each by default). If connectivity to the broker is lost, the broker relay continues to try to reconnect, every 5 seconds, until it succeeds.

Any Spring bean can implement `ApplicationListener<BrokerAvailabilityEvent>` to receive notifications when the “system” connection to the broker is lost and re-established. For example, a Stock Quote service that broadcasts stock quotes can stop trying to send messages when there is no active “system” connection.

By default, the STOMP broker relay always connects, and reconnects as needed if connectivity is lost, to the same host and port. If you wish to supply multiple addresses, on each attempt to connect, you can configure a supplier of addresses, instead of a fixed host and port.
The following example shows how to do that:

```
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

	@Override
	public void configureMessageBroker(MessageBrokerRegistry registry) {
		registry.enableStompBrokerRelay("/queue/", "/topic/").setTcpClient(createTcpClient());
		registry.setApplicationDestinationPrefixes("/app");
	}

	private ReactorNettyTcpClient<byte[]> createTcpClient() {
		return new ReactorNettyTcpClient<>(
				client -> client.addressSupplier(() -> ... ),
				new StompReactorNettyCodec());
	}
}
```

You can also configure the STOMP broker relay with a `virtualHost` property. The value of this property is set as the `host` header of every `CONNECT` frame and can be useful (for example, in a cloud environment where the actual host to which the TCP connection is established differs from the host that provides the cloud-based STOMP service).

When messages are routed to `@MessageMapping` methods, they are matched with `AntPathMatcher`. By default, patterns are expected to use slash (`/`) as the separator. This is a good convention in web applications and similar to HTTP URLs. However, if you are more used to messaging conventions, you can switch to using dot (`.`) as the separator. The following example shows how to do so in Java configuration:

```
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

	@Override
	public void configureMessageBroker(MessageBrokerRegistry registry) {
		registry.setPathMatcher(new AntPathMatcher("."));
		registry.enableStompBrokerRelay("/queue", "/topic");
		registry.setApplicationDestinationPrefixes("/app");
	}
}
```

```
<websocket:message-broker application-destination-prefix="/app" path-matcher="pathMatcher">
	<websocket:stomp-endpoint path="/stomp"/>
	<websocket:stomp-broker-relay prefix="/topic,/queue" />
</websocket:message-broker>

<bean id="pathMatcher" class="org.springframework.util.AntPathMatcher">
	<constructor-arg index="0" value="."/>
</bean>
```

After that, a controller can use a dot (`.`) as the separator in `@MessageMapping` methods, as the following example shows:

```
@Controller
@MessageMapping("red")
public class RedController {

	@MessageMapping("blue.{green}")
	public void handleGreen(@DestinationVariable String green) {
		// ...
	}
}
```

The client can now send a message to `/app/red.blue.green123`.

In the preceding example, we did not change the prefixes on the “broker relay”, because those depend entirely on the external message broker. See the STOMP documentation pages for the broker you use to see what conventions it supports for the destination header. The “simple broker”, on the other hand, does rely on the configured `PathMatcher`, so, if you switch the separator, that change also applies to the broker and the way the broker matches destinations from a message to patterns in subscriptions.

Every STOMP over WebSocket messaging session begins with an HTTP request. That can be a request to upgrade to WebSockets (that is, a WebSocket handshake) or, in the case of SockJS fallbacks, a series of SockJS HTTP transport requests. Many web applications already have authentication and authorization in place to secure HTTP requests. Typically, a user is authenticated through Spring Security by using some mechanism such as a login page, HTTP basic authentication, or another way. The security context for the authenticated user is saved in the HTTP session and is associated with subsequent requests in the same cookie-based session.
Therefore, for a WebSocket handshake or for SockJS HTTP transport requests, typically, there is already an authenticated user accessible through `HttpServletRequest#getUserPrincipal()`. Spring automatically associates that user with a WebSocket or SockJS session created for them and, subsequently, with all STOMP messages transported over that session through a user header.

In short, a typical web application needs to do nothing beyond what it already does for security. The user is authenticated at the HTTP request level with a security context that is maintained through a cookie-based HTTP session (which is then associated with WebSocket or SockJS sessions created for that user) and results in a user header being stamped on every `Message` flowing through the application.

The STOMP protocol does have `login` and `passcode` headers on the `CONNECT` frame. Those were originally designed for, and are needed for, STOMP over TCP. However, for STOMP over WebSocket, by default, Spring ignores authentication headers at the STOMP protocol level and assumes that the user is already authenticated at the HTTP transport level. The expectation is that the WebSocket or SockJS session contains the authenticated user.

Spring Security OAuth provides support for token-based security, including JSON Web Token (JWT). You can use this as the authentication mechanism in web applications, including STOMP over WebSocket interactions, as described in the previous section (that is, to maintain identity through a cookie-based session).

At the same time, cookie-based sessions are not always the best fit (for example, in applications that do not maintain a server-side session or in mobile applications where it is common to use headers for authentication). The WebSocket protocol, RFC 6455, "doesn’t prescribe any particular way that servers can authenticate clients during the WebSocket handshake." In practice, however, browser clients can use only standard authentication headers (that is, basic HTTP authentication) or cookies and cannot (for example) provide custom headers. Likewise, the SockJS JavaScript client does not provide a way to send HTTP headers with SockJS transport requests. See sockjs-client issue 196. Instead, it does allow sending query parameters that you can use to send a token, but that has its own drawbacks (for example, the token may be inadvertently logged with the URL in server logs).

The preceding limitations are for browser-based clients and do not apply to the Spring Java-based STOMP client, which does support sending headers with both WebSocket and SockJS requests.

Therefore, applications that wish to avoid the use of cookies may not have any good alternatives for authentication at the HTTP protocol level. Instead of using cookies, they may prefer to authenticate with headers at the STOMP messaging protocol level. Doing so requires two simple steps:

* Use the STOMP client to pass authentication headers at connect time.
* Process the authentication headers with a `ChannelInterceptor`.

The next example uses server-side configuration to register a custom authentication interceptor. Note that an interceptor needs only to authenticate and set the user header on the CONNECT `Message`. Spring notes and saves the authenticated user and associates it with subsequent STOMP messages on the same session.
The following example shows how to register a custom authentication interceptor:

```
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

	@Override
	public void configureClientInboundChannel(ChannelRegistration registration) {
		registration.interceptors(new ChannelInterceptor() {
			@Override
			public Message<?> preSend(Message<?> message, MessageChannel channel) {
				StompHeaderAccessor accessor =
						MessageHeaderAccessor.getAccessor(message, StompHeaderAccessor.class);
				if (StompCommand.CONNECT.equals(accessor.getCommand())) {
					Authentication user = ... ; // access authentication header(s)
					accessor.setUser(user);
				}
				return message;
			}
		});
	}
}
```

Also, note that, when you use Spring Security’s authorization for messages, at present, you need to ensure that the authentication `ChannelInterceptor` config is ordered ahead of Spring Security’s. This is best done by declaring the custom interceptor in its own implementation of `WebSocketMessageBrokerConfigurer` that is marked with `@Order(Ordered.HIGHEST_PRECEDENCE + 99)`.

Spring Security provides WebSocket sub-protocol authorization that uses a `ChannelInterceptor` to authorize messages based on the user header in them. Also, Spring Session provides WebSocket integration that ensures the user’s HTTP session does not expire while the WebSocket session is still active.

An application can send messages that target a specific user, and Spring’s STOMP support recognizes destinations prefixed with `/user/` for this purpose. For example, a client might subscribe to the `/user/queue/position-updates` destination. The `UserDestinationMessageHandler` handles this destination and transforms it into a destination unique to the user session (such as `/queue/position-updates-user123`). This provides the convenience of subscribing to a generically named destination while, at the same time, ensuring no collisions with other users who subscribe to the same destination so that each user can receive unique stock position updates.

When working with user destinations, it is important to configure broker and application destination prefixes as shown in Enable STOMP, or otherwise the broker would handle "/user" prefixed messages that should only be handled by the `UserDestinationMessageHandler`.

On the sending side, messages can be sent to a destination such as `/user/{username}/queue/position-updates`, which in turn is translated by the `UserDestinationMessageHandler` into one or more destinations, one for each session associated with the user. This lets any component within the application send messages that target a specific user without necessarily knowing anything more than their name and the generic destination. This is also supported through an annotation and a messaging template.

A message-handling method can send messages to the user associated with the message being handled through the `@SendToUser` annotation (also supported on the class level to share a common destination), as the following example shows:

```
@Controller
public class PortfolioController {

	@MessageMapping("/trade")
	@SendToUser("/queue/position-updates")
	public TradeResult executeTrade(Trade trade, Principal principal) {
		// ...
		return tradeResult;
	}
}
```

If the user has more than one session, by default, all of the sessions subscribed to the given destination are targeted. However, sometimes, it may be necessary to target only the session that sent the message being handled.
You can do so by setting the `broadcast` attribute to false, as the following example shows:

```
@Controller
public class MyController {

	@MessageMapping("/action")
	public void handleAction() throws Exception {
		// raise MyBusinessException here
	}

	@MessageExceptionHandler
	@SendToUser(destinations="/queue/errors", broadcast=false)
	public ApplicationError handleException(MyBusinessException exception) {
		// ...
		return appError;
	}
}
```

While user destinations generally imply an authenticated user, it is not strictly required. A WebSocket session that is not associated with an authenticated user can subscribe to a user destination. In such cases, the `@SendToUser` annotation behaves exactly the same as with `broadcast=false` (that is, targeting only the session that sent the message being handled).

You can send a message to user destinations from any application component by, for example, injecting the `SimpMessagingTemplate` created by the Java configuration or the XML namespace. (The bean name is `brokerMessagingTemplate`, if required for qualification with `@Qualifier`.) The following example shows how to do so:

```
@Service
public class TradeServiceImpl implements TradeService {

	private final SimpMessagingTemplate messagingTemplate;

	@Autowired
	public TradeServiceImpl(SimpMessagingTemplate messagingTemplate) {
		this.messagingTemplate = messagingTemplate;
	}

	public void afterTradeExecuted(Trade trade) {
		this.messagingTemplate.convertAndSendToUser(
				trade.getUserName(), "/queue/position-updates", trade.getResult());
	}
}
```

When you use user destinations with an external message broker, you should check the broker documentation on how to manage inactive queues, so that, when the user session is over, all unique user queues are removed. For example, RabbitMQ creates auto-delete queues when you use destinations such as `/exchange/amq.direct/position-updates`.

In a multi-application server scenario, a user destination may remain unresolved because the user is connected to a different server. In such cases, you can configure a destination to broadcast unresolved messages so that other servers have a chance to try. This can be done through the `userDestinationBroadcast` property of the `MessageBrokerRegistry` in Java configuration and the `user-destination-broadcast` attribute of the `message-broker` element in XML.

Messages from the broker are published to the `clientOutboundChannel`, from where they are written to WebSocket sessions. As the channel is backed by a `ThreadPoolExecutor`, messages are processed in different threads, and the resulting sequence received by the client may not match the exact order of publication. If this is an issue, enable the `setPreservePublishOrder` flag, as the following example shows:

```
@Configuration
@EnableWebSocketMessageBroker
public class MyConfig implements WebSocketMessageBrokerConfigurer {

	@Override
	public void configureMessageBroker(MessageBrokerRegistry registry) {
		// ...
		registry.setPreservePublishOrder(true);
	}
}
```

```
<websocket:message-broker preserve-publish-order="true">
	<!-- ... -->
</websocket:message-broker>
```

When the flag is set, messages within the same client session are published to the `clientOutboundChannel` one at a time, so that the order of publication is guaranteed. Note that this incurs a small performance overhead, so you should enable it only if it is required.

Several `ApplicationContext` events are published and can be received by implementing Spring’s `ApplicationListener` interface:

* `BrokerAvailabilityEvent`: Indicates when the broker becomes available or unavailable. While the “simple” broker becomes available immediately on startup and remains so while the application is running, the STOMP “broker relay” can lose its connection to the full-featured broker (for example, if the broker is restarted). The broker relay has reconnect logic and re-establishes the “system” connection to the broker when it comes back.
As a result, this event is published whenever the state changes from connected to disconnected and vice-versa. Components that send messages to the broker should subscribe to this event and avoid sending messages at times when the broker is not available. In any case, they should be prepared to handle `MessageDeliveryException` when sending a message.
* `SessionConnectEvent`: Published when a new STOMP CONNECT is received to indicate the start of a new client session. The event contains the message that represents the connect, including the session ID, user information (if any), and any custom headers the client sent. This is useful for tracking client sessions. Components subscribed to this event can wrap the contained message with `SimpMessageHeaderAccessor` or `StompHeaderAccessor`.
* `SessionConnectedEvent`: Published shortly after a `SessionConnectEvent` when the broker has sent a STOMP CONNECTED frame in response to the CONNECT. At this point, the STOMP session can be considered fully established.
* `SessionSubscribeEvent`: Published when a new STOMP SUBSCRIBE is received.
* `SessionUnsubscribeEvent`: Published when a new STOMP UNSUBSCRIBE is received.
* `SessionDisconnectEvent`: Published when a STOMP session ends. The DISCONNECT may have been sent from the client, or it may be automatically generated when the WebSocket session is closed.

When you use a full-featured broker, the STOMP “broker relay” automatically reconnects the “system” connection if the broker becomes temporarily unavailable. Client connections, however, are not automatically reconnected. Assuming heartbeats are enabled, the client typically notices the broker is not responding within 10 seconds. Clients need to implement their own reconnecting logic.

Events provide notifications for the lifecycle of a STOMP connection but not for every client message. Applications can also register a `ChannelInterceptor` to intercept any message and in any part of the processing chain. The following example shows how to intercept inbound messages from clients:

```
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

	@Override
	public void configureClientInboundChannel(ChannelRegistration registration) {
		registration.interceptors(new MyChannelInterceptor());
	}
}
```

A custom `ChannelInterceptor` can use `StompHeaderAccessor` or `SimpMessageHeaderAccessor` to access information about the message, as the following example shows:

```
public class MyChannelInterceptor implements ChannelInterceptor {

	@Override
	public Message<?> preSend(Message<?> message, MessageChannel channel) {
		StompHeaderAccessor accessor = StompHeaderAccessor.wrap(message);
		StompCommand command = accessor.getCommand();
		// ...
		return message;
	}
}
```

Applications can also implement `ExecutorChannelInterceptor`, which is a sub-interface of `ChannelInterceptor` with callbacks in the thread in which the messages are handled. While a `ChannelInterceptor` is invoked once for each message sent to a channel, the `ExecutorChannelInterceptor` provides hooks in the thread of each `MessageHandler` subscribed to messages from the channel. Note that, as with the `SessionDisconnectEvent` described earlier, a DISCONNECT message can be from the client, or it can be automatically generated when the WebSocket session is closed, so, in some cases, an interceptor may intercept a message more than once for each session.

Spring provides a STOMP over WebSocket client and a STOMP over TCP client. To begin, you can create and configure `WebSocketStompClient`, as the following example shows:

```
WebSocketClient webSocketClient = new StandardWebSocketClient();
WebSocketStompClient stompClient = new WebSocketStompClient(webSocketClient);
stompClient.setMessageConverter(new StringMessageConverter());
stompClient.setTaskScheduler(taskScheduler); // for heartbeats
```

In the preceding example, you could replace `StandardWebSocketClient` with `SockJsClient`, since that is also an implementation of `WebSocketClient`. The `SockJsClient` can use WebSocket or HTTP-based transport as a fallback. For more details, see `SockJsClient`.
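For reference, the following is a minimal sketch of constructing a `SockJsClient` for that purpose; the transport list (WebSocket first, with an XHR fallback) is a typical setup, shown here as an assumption rather than a required configuration:

```
import java.util.ArrayList;
import java.util.List;

import org.springframework.web.socket.client.Transport;
import org.springframework.web.socket.client.standard.StandardWebSocketClient;
import org.springframework.web.socket.messaging.WebSocketStompClient;
import org.springframework.web.socket.sockjs.client.RestTemplateXhrTransport;
import org.springframework.web.socket.sockjs.client.SockJsClient;
import org.springframework.web.socket.sockjs.client.WebSocketTransport;

// SockJS tries the transports in order: WebSocket first, then XHR fallback.
List<Transport> transports = new ArrayList<>();
transports.add(new WebSocketTransport(new StandardWebSocketClient()));
transports.add(new RestTemplateXhrTransport());

SockJsClient sockJsClient = new SockJsClient(transports);
WebSocketStompClient stompClient = new WebSocketStompClient(sockJsClient);
```

Note that a SockJS client connects to the HTTP URL of the SockJS endpoint (an `http://` rather than a `ws://` scheme).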
Next, you can establish a connection and provide a handler for the STOMP session, as the following example shows:

```
String url = "ws://127.0.0.1:8080/endpoint";
StompSessionHandler sessionHandler = new MyStompSessionHandler();
stompClient.connect(url, sessionHandler);
```

When the session is ready for use, the handler is notified, as the following example shows:

```
public class MyStompSessionHandler extends StompSessionHandlerAdapter {

	@Override
	public void afterConnected(StompSession session, StompHeaders connectedHeaders) {
		// ...
	}
}
```

Once the session is established, any payload can be sent and is serialized with the configured `MessageConverter`, as the following example shows:

```
session.send("/topic/something", "payload");
```

You can also subscribe to destinations. The `subscribe` methods require a handler for messages on the subscription and return a `Subscription` handle that you can use to unsubscribe. For each received message, the handler can specify the target `Object` type to which the payload should be deserialized, as the following example shows:

```
session.subscribe("/topic/something", new StompFrameHandler() {

	@Override
	public Type getPayloadType(StompHeaders headers) {
		return String.class;
	}

	@Override
	public void handleFrame(StompHeaders headers, Object payload) {
		// ...
	}
});
```

To enable STOMP heartbeats, you can configure `WebSocketStompClient` with a `TaskScheduler` and optionally customize the heartbeat intervals (10 seconds for write inactivity, which causes a heartbeat to be sent, and 10 seconds for read inactivity, which closes the connection).

`WebSocketStompClient` sends a heartbeat only in case of inactivity, that is, when no other messages are sent. This can present a challenge when using an external broker, since messages with a non-broker destination represent activity but are not actually forwarded to the broker. In that case, you can configure a `TaskScheduler` when initializing the external broker, which ensures that a heartbeat is forwarded to the broker even when only messages with a non-broker destination are sent.

The STOMP protocol also supports receipts, where the client must add a `receipt` header to which the server responds with a RECEIPT frame after the send or subscribe is processed. To support this, the `StompSession` offers `setAutoReceipt(boolean)`, which causes a `receipt` header to be added on every subsequent send or subscribe event. Alternatively, you can also manually add a receipt header to the `StompHeaders`. Both send and subscribe return an instance of `Receiptable` that you can use to register receipt success and failure callbacks. For this feature, you must configure the client with a `TaskScheduler` and the amount of time before a receipt expires (15 seconds by default).

Note that `StompSessionHandler` itself is a `StompFrameHandler`, which lets it handle ERROR frames, in addition to the `handleException` callback for exceptions from the handling of messages and `handleTransportError` for transport-level errors, including `ConnectionLostException`.

Each WebSocket session has a map of attributes. The map is attached as a header to inbound client messages and may be accessed from a controller method, as the following example shows:

```
@Controller
public class MyController {

	@MessageMapping("/action")
	public void handle(SimpMessageHeaderAccessor headerAccessor) {
		Map<String, Object> attrs = headerAccessor.getSessionAttributes();
		// ...
	}
}
```

You can declare a Spring-managed bean in the `websocket` scope.
You can inject WebSocket-scoped beans into controllers and any channel interceptors registered on the `clientInboundChannel`. Those are typically singletons and live longer than any individual WebSocket session. Therefore, you need to use a scope proxy mode for WebSocket-scoped beans, as the following example shows:

```
@Component
@Scope(scopeName = "websocket", proxyMode = ScopedProxyMode.TARGET_CLASS)
public class MyBean {

	@PostConstruct
	public void init() {
		// Invoked after dependencies injected
	}

	// ...

	@PreDestroy
	public void destroy() {
		// Invoked when the WebSocket session ends
	}
}
```

```
@Controller
public class MyController {

	private final MyBean myBean;

	@Autowired
	public MyController(MyBean myBean) {
		this.myBean = myBean;
	}

	@MessageMapping("/action")
	public void handle() {
		// this.myBean from the current WebSocket session
	}
}
```

As with any custom scope, Spring initializes a new `MyBean` instance the first time it is accessed from the controller and stores the instance in the WebSocket session attributes. The same instance is subsequently returned until the session ends. WebSocket-scoped beans have all Spring lifecycle methods invoked, as shown in the preceding examples.

There is no silver bullet when it comes to performance. Many factors affect it, including the size and volume of messages, whether application methods perform work that requires blocking, and external factors (such as network speed and other issues). The goal of this section is to provide an overview of the available configuration options along with some thoughts on how to reason about scaling.

In a messaging application, messages are passed through channels for asynchronous executions that are backed by thread pools. Configuring such an application requires good knowledge of the channels and the flow of messages. Therefore, it is recommended to review Flow of Messages.

The obvious place to start is to configure the thread pools that back the `clientInboundChannel` and the `clientOutboundChannel`. By default, both are configured at twice the number of available processors.

If the handling of messages in annotated methods is mainly CPU-bound, the number of threads for the `clientInboundChannel` should remain close to the number of processors. If the work they do is more IO-bound and requires blocking or waiting on a database or other external system, the thread pool size probably needs to be increased.

On the `clientOutboundChannel` side, it is all about sending messages to WebSocket clients. If clients are on a fast network, the number of threads should remain close to the number of available processors. If they are slow or on low bandwidth, they take longer to consume messages and put a burden on the thread pool. Therefore, increasing the thread pool size becomes necessary.

While the workload for the `clientInboundChannel` is possible to predict — after all, it is based on what the application does — how to configure the `clientOutboundChannel` is harder, as it is based on factors beyond the control of the application. For this reason, two additional properties relate to the sending of messages: `sendTimeLimit` and `sendBufferSizeLimit`. You can use those methods to configure how long a send is allowed to take and how much data can be buffered when sending messages to a client.

The general idea is that, at any given time, only a single thread can be used to send to a client. All additional messages, meanwhile, get buffered, and you can use these properties to decide how long sending a message is allowed to take and how much data can be buffered in the meantime.
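The same limits can also be set in Java configuration through `WebSocketTransportRegistration`. A minimal sketch, using values that mirror the XML example that follows:

```
import org.springframework.context.annotation.Configuration;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;
import org.springframework.web.socket.config.annotation.WebSocketTransportRegistration;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

	@Override
	public void configureWebSocketTransport(WebSocketTransportRegistration registration) {
		registration.setSendTimeLimit(15 * 1000)       // how long a send may take, in ms
				.setSendBufferSizeLimit(512 * 1024);   // how much data may be buffered, in bytes
	}
}
```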
See the javadoc and documentation of the XML schema for important additional details.

```
<websocket:message-broker>
	<websocket:transport send-timeout="15000" send-buffer-size="524288" />
	<!-- ... -->
</websocket:message-broker>
```

You can also use the WebSocket transport configuration shown earlier to configure the maximum allowed size for incoming STOMP messages. In theory, a WebSocket message can be almost unlimited in size. In practice, WebSocket servers impose limits — for example, 8K on Tomcat and 64K on Jetty. For this reason, STOMP clients (such as the JavaScript webstomp-client and others) split larger STOMP messages at 16K boundaries and send them as multiple WebSocket messages, which requires the server to buffer and re-assemble.

Spring’s STOMP-over-WebSocket support does this, so applications can configure the maximum size for STOMP messages irrespective of WebSocket server-specific message sizes. Keep in mind that the WebSocket message size is automatically adjusted, if necessary, to ensure they can carry 16K WebSocket messages at a minimum.

The following example shows one possible configuration:

```
<websocket:message-broker>
	<websocket:transport message-size="131072" />
	<!-- ... -->
</websocket:message-broker>
```

An important point about scaling involves using multiple application instances. Currently, you cannot do that with the simple broker. However, when you use a full-featured broker (such as RabbitMQ), each application instance connects to the broker, and messages broadcast from one application instance can be broadcast through the broker to WebSocket clients connected through any other application instances.

When you use `@EnableWebSocketMessageBroker` (or the XML namespace equivalent), key infrastructure components automatically gather statistics and counters that provide important insight into the internal state of the application. The configuration also declares a bean of type `WebSocketMessageBrokerStats` that gathers all available information in one place and by default logs it at the `INFO` level once every 30 minutes. This bean can be exported to JMX through Spring’s `MBeanExporter` for viewing at runtime (for example, through JDK’s `jconsole`). The following list summarizes the available information:

* Client WebSocket Sessions
	* Current: Indicates how many client sessions there are currently, with the count further broken down by WebSocket versus HTTP streaming and polling SockJS sessions.
	* Total: Indicates how many total sessions have been established.
	* Abnormally Closed
		* Connect Failures: Sessions that got established but were closed after not having received any messages within 60 seconds. This is usually an indication of proxy or network issues.
		* Send Limit Exceeded: Sessions closed after exceeding the configured send timeout or the send buffer limits, which can occur with slow clients (see the previous section).
		* Transport Errors: Sessions closed after a transport error, such as failure to read or write to a WebSocket connection or HTTP request or response.
	* STOMP Frames: The total number of CONNECT, CONNECTED, and DISCONNECT frames processed, indicating how many clients connected on the STOMP level. Note that the DISCONNECT count may be lower when sessions get closed abnormally or when clients close without sending a DISCONNECT frame.
* STOMP Broker Relay
	* TCP Connections: Indicates how many TCP connections on behalf of client WebSocket sessions are established to the broker.
This should be equal to the number of client WebSocket sessions, plus one additional shared “system” connection for sending messages from within the application.
	* STOMP Frames: The total number of CONNECT, CONNECTED, and DISCONNECT frames forwarded to or received from the broker on behalf of clients. Note that a DISCONNECT frame is sent to the broker regardless of how the client WebSocket session was closed. Therefore, a lower DISCONNECT frame count is an indication that the broker is pro-actively closing connections (maybe because of a heartbeat that did not arrive in time, an invalid input frame, or another issue).
* Client Inbound Channel: Statistics from the thread pool that backs the `clientInboundChannel` that provide insight into the health of incoming message processing. Tasks queueing up here is an indication that the application may be too slow to handle messages. If there are I/O-bound tasks (for example, slow database queries, HTTP requests to third-party REST APIs, and so on), consider increasing the thread pool size.
* Client Outbound Channel: Statistics from the thread pool that backs the `clientOutboundChannel` that provide insight into the health of broadcasting messages to clients. Tasks queueing up here is an indication that clients are too slow to consume messages. One way to address this is to increase the thread pool size to accommodate the expected number of concurrent slow clients. Another option is to reduce the send timeout and send buffer size limits (see the previous section).
* SockJS Task Scheduler: Statistics from the thread pool of the SockJS task scheduler that is used to send heartbeats. Note that, when heartbeats are negotiated on the STOMP level, the SockJS heartbeats are disabled.

There are two main approaches to testing applications when you use Spring’s STOMP-over-WebSocket support. The first is to write server-side tests to verify the functionality of controllers and their annotated message-handling methods. The second is to write full end-to-end tests that involve running a client and a server.

The two approaches are not mutually exclusive. On the contrary, each has a place in an overall test strategy. Server-side tests are more focused and easier to write and maintain. End-to-end integration tests, on the other hand, are more complete and test much more, but they are also more involved to write and maintain.

The simplest form of server-side tests is to write controller unit tests. However, this is not useful enough, since much of what a controller does depends on its annotations. Pure unit tests simply cannot test that.

Ideally, controllers under test should be invoked as they are at runtime, much like the approach to testing controllers that handle HTTP requests by using the Spring MVC Test framework — that is, without running a Servlet container but relying on the Spring Framework to invoke the annotated controllers. As with Spring MVC Test, you have two possible alternatives here, either a “context-based” or a “standalone” setup:

* Load the actual Spring configuration with the help of the Spring TestContext framework, inject `clientInboundChannel` as a test field, and use it to send messages to be handled by controller methods.
* Manually set up the minimum Spring framework infrastructure required to invoke controllers (namely the `SimpAnnotationMethodMessageHandler`) and pass messages for controllers directly to it.

Both of these setup scenarios are demonstrated in the tests for the stock portfolio sample application.
The second approach is to create end-to-end integration tests. For that, you need to run a WebSocket server in embedded mode and connect to it as a WebSocket client that sends WebSocket messages containing STOMP frames. The tests for the stock portfolio sample application also demonstrate this approach by using Tomcat as the embedded WebSocket server and a simple STOMP client for test purposes.

This chapter details Spring’s integration with third-party web frameworks.

One of the core value propositions of the Spring Framework is that of enabling choice. In a general sense, Spring does not force you to use or buy into any particular architecture, technology, or methodology (although it certainly recommends some over others). This freedom to pick and choose the architecture, technology, or methodology that is most relevant to a developer and their development team is arguably most evident in the web area, where Spring provides its own web frameworks (Spring MVC and Spring WebFlux) while, at the same time, supporting integration with a number of popular third-party web frameworks.

Before diving into the integration specifics of each supported web framework, let us first take a look at common Spring configuration that is not specific to any one web framework. (This section is equally applicable to Spring’s own web framework variants.)

One of the concepts (for want of a better word) espoused by Spring’s lightweight application model is that of a layered architecture. Remember that in a "classic" layered architecture, the web layer is but one of many layers. It serves as one of the entry points into a server-side application, and it delegates to service objects (facades) that are defined in a service layer to satisfy business-specific (and presentation-technology agnostic) use cases. In Spring, these service objects, any other business-specific objects, data-access objects, and others exist in a distinct "business context", which contains no web or presentation layer objects (presentation objects, such as Spring MVC controllers, are typically configured in a distinct "presentation context"). This section details how you can configure a Spring container (a `WebApplicationContext`) that contains all of the 'business beans' in your application.

Moving on to specifics, all you need to do is declare a `ContextLoaderListener` in the standard Jakarta EE servlet `web.xml` file of your web application and add a `<context-param/>` section (in the same file) that defines which set of Spring XML configuration files to load.

Consider the following `<listener/>` configuration:

```
<listener>
	<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
```

Further consider the following `<context-param/>` configuration:

```
<context-param>
	<param-name>contextConfigLocation</param-name>
	<param-value>/WEB-INF/applicationContext*.xml</param-value>
</context-param>
```

If you do not specify the context parameter, the `ContextLoaderListener` looks for a file called `/WEB-INF/applicationContext.xml` to load. Once the context files are loaded, Spring creates a `WebApplicationContext` object based on the bean definitions and stores it in the `ServletContext` of the web application.

All Java web frameworks are built on top of the Servlet API, so you can use the following code snippet to get access to this "business context" `ApplicationContext` created by the `ContextLoaderListener`:

```
WebApplicationContext ctx = WebApplicationContextUtils.getWebApplicationContext(servletContext);
```

The `WebApplicationContextUtils` class is for convenience, so you need not remember the name of the `ServletContext` attribute.
Its `getWebApplicationContext()` method returns `null` if an object does not exist under the `WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE` key. Rather than risk getting `NullPointerExceptions` in your application, it is better to use the `getRequiredWebApplicationContext()` method. This method throws an exception when the `ApplicationContext` is missing.

Once you have a reference to the `WebApplicationContext`, you can retrieve beans by their name or type. Most developers retrieve beans by name and then cast them to one of their implemented interfaces. Fortunately, most of the frameworks in this section have simpler ways of looking up beans. Not only do they make it easy to get beans from a Spring container, but they also let you use dependency injection on their controllers. Each web framework section has more detail on its specific integration strategies.

## JSF

JavaServer Faces (JSF) is the JCP’s standard component-based, event-driven web user interface framework. It is an official part of the Jakarta EE umbrella but also individually usable, e.g. through embedding Mojarra or MyFaces within Tomcat.

Please note that recent versions of JSF became closely tied to CDI infrastructure in application servers, with some new JSF functionality only working in such an environment. Spring’s JSF support is not actively evolved anymore and primarily exists for migration purposes when modernizing older JSF-based applications.

The key element in Spring’s JSF integration is the JSF `ELResolver` mechanism.

### Spring Bean Resolver

`SpringBeanFacesELResolver` is a JSF-compliant `ELResolver` implementation, integrating with the standard Unified EL as used by JSF and JSP. It delegates to Spring’s "business context" first and then to the default resolver of the underlying JSF implementation.

Configuration-wise, you can define `SpringBeanFacesELResolver` in your JSF `faces-config.xml` file, as the following example shows:

```
<faces-config>
	<application>
		<el-resolver>org.springframework.web.jsf.el.SpringBeanFacesELResolver</el-resolver>
		...
	</application>
</faces-config>
```

`FacesContextUtils`

A custom `ELResolver` works well when mapping your properties to beans in `faces-config.xml`, but, at times, you may need to explicitly grab a bean. The `FacesContextUtils` class makes this easy. It is similar to `WebApplicationContextUtils`, except that it takes a `FacesContext` parameter rather than a `ServletContext` parameter.

The following example shows how to use `FacesContextUtils`:

```
ApplicationContext ctx = FacesContextUtils.getWebApplicationContext(FacesContext.getCurrentInstance());
```

## Apache Struts

Invented by <NAME>, Struts is an open-source project hosted by the Apache Software Foundation. Struts 1.x greatly simplified the JSP/Servlet programming paradigm and won over many developers who were using proprietary frameworks. It simplified the programming model; it was open source; and it had a large community, which let the project grow and become popular among Java web developers.

As a successor to the original Struts 1.x, check out Struts 2.x or more recent versions as well as the Struts-provided Spring Plugin for built-in Spring integration.

## Apache Tapestry

Tapestry is a "Component oriented framework for creating dynamic, robust, highly scalable web applications in Java."

While Spring has its own powerful web layer, there are a number of unique advantages to building an enterprise Java application by using a combination of Tapestry for the web user interface and the Spring container for the lower layers.
For more information, see Tapestry’s dedicated integration module for Spring.

The original web framework included in the Spring Framework, Spring Web MVC, was purpose-built for the Servlet API and Servlet containers. The reactive-stack web framework, Spring WebFlux, was added later in version 5.0. It is fully non-blocking, supports Reactive Streams back pressure, and runs on such servers as Netty, Undertow, and Servlet containers.

Both web frameworks mirror the names of their source modules (spring-webmvc and spring-webflux) and co-exist side by side in the Spring Framework. Each module is optional. Applications can use one or the other module or, in some cases, both — for example, Spring MVC controllers with the reactive `WebClient`.

Why was Spring WebFlux created?

Part of the answer is the need for a non-blocking web stack to handle concurrency with a small number of threads and scale with fewer hardware resources. Servlet non-blocking I/O leads away from the rest of the Servlet API, where contracts are synchronous (`Filter`, `Servlet`) or blocking (`getParameter`, `getPart`). This was the motivation for a new common API to serve as a foundation across any non-blocking runtime. That is important because of servers (such as Netty) that are well-established in the async, non-blocking space.

The other part of the answer is functional programming. Much as the addition of annotations in Java 5 created opportunities (such as annotated REST controllers or unit tests), the addition of lambda expressions in Java 8 created opportunities for functional APIs in Java. This is a boon for non-blocking applications and continuation-style APIs (as popularized by `CompletableFuture` and ReactiveX) that allow declarative composition of asynchronous logic. At the programming-model level, Java 8 enabled Spring WebFlux to offer functional web endpoints alongside annotated controllers.

## Define “Reactive”

We touched on “non-blocking” and “functional”, but what does reactive mean?

The term, “reactive,” refers to programming models that are built around reacting to change — network components reacting to I/O events, UI controllers reacting to mouse events, and others. In that sense, non-blocking is reactive, because, instead of being blocked, we are now in the mode of reacting to notifications as operations complete or data becomes available.

There is also another important mechanism that we on the Spring team associate with “reactive”, and that is non-blocking back pressure. In synchronous, imperative code, blocking calls serve as a natural form of back pressure that forces the caller to wait. In non-blocking code, it becomes important to control the rate of events so that a fast producer does not overwhelm its destination.

Reactive Streams is a small spec (also adopted in Java 9) that defines the interaction between asynchronous components with back pressure. For example, a data repository (acting as Publisher) can produce data that an HTTP server (acting as Subscriber) can then write to the response. The main purpose of Reactive Streams is to let the subscriber control how quickly or how slowly the publisher produces data.

Common question: what if a publisher cannot slow down? The purpose of Reactive Streams is only to establish the mechanism and a boundary. If a publisher cannot slow down, it has to decide whether to buffer, drop, or fail.

## Reactive API

Reactive Streams plays an important role for interoperability. It is of interest to libraries and infrastructure components but less useful as an application API, because it is too low-level.
Applications need a higher-level and richer, functional API to compose async logic — similar to the Java 8 `Stream` API but not only for collections. This is the role that reactive libraries play.

Reactor is the reactive library of choice for Spring WebFlux. It provides the `Mono` and `Flux` API types to work on data sequences of 0..1 (`Mono`) and 0..N (`Flux`) through a rich set of operators aligned with the ReactiveX vocabulary of operators. Reactor is a Reactive Streams library and, therefore, all of its operators support non-blocking back pressure. Reactor has a strong focus on server-side Java. It is developed in close collaboration with Spring.

WebFlux requires Reactor as a core dependency, but it is interoperable with other reactive libraries via Reactive Streams. As a general rule, a WebFlux API accepts a plain `Publisher` as input, adapts it to a Reactor type internally, uses that, and returns either a `Flux` or a `Mono` as output. So, you can pass any `Publisher` as input and you can apply operations on the output, but you need to adapt the output for use with another reactive library. Whenever feasible (for example, annotated controllers), WebFlux adapts transparently to the use of RxJava or another reactive library. See Reactive Libraries for more details.

In addition to Reactive APIs, WebFlux can also be used with Coroutines APIs in Kotlin, which provide a more imperative style of programming. The following Kotlin code samples will be provided with Coroutines APIs.

## Programming Models

The `spring-web` module contains the reactive foundation that underlies Spring WebFlux, including HTTP abstractions, Reactive Streams adapters for supported servers, codecs, and a core `WebHandler` API comparable to the Servlet API but with non-blocking contracts.

On that foundation, Spring WebFlux provides a choice of two programming models:

* Annotated Controllers: Consistent with Spring MVC and based on the same annotations from the `spring-web` module. Both Spring MVC and WebFlux controllers support reactive (Reactor and RxJava) return types, and, as a result, it is not easy to tell them apart. One notable difference is that WebFlux also supports reactive `@RequestBody` arguments.
* Functional Endpoints: Lambda-based, lightweight, and functional programming model. You can think of this as a small library or a set of utilities that an application can use to route and handle requests. The big difference with annotated controllers is that the application is in charge of request handling from start to finish versus declaring intent through annotations and being called back.

## Applicability

Spring MVC or WebFlux? A natural question to ask but one that sets up an unsound dichotomy. Actually, both work together to expand the range of available options. The two are designed for continuity and consistency with each other, they are available side by side, and feedback from each side benefits both sides. The following diagram shows how the two relate, what they have in common, and what each supports uniquely:

We suggest that you consider the following specific points:

* If you have a Spring MVC application that works fine, there is no need to change. Imperative programming is the easiest way to write, understand, and debug code. You have maximum choice of libraries, since, historically, most are blocking.
* If you are already shopping for a non-blocking web stack, Spring WebFlux offers the same execution model benefits as others in this space and also provides a choice of servers (Netty, Tomcat, Jetty, Undertow, and Servlet containers), a choice of programming models (annotated controllers and functional web endpoints), and a choice of reactive libraries (Reactor, RxJava, or other). * If you are interested in a lightweight, functional web framework for use with Java 8 lambdas or Kotlin, you can use the Spring WebFlux functional web endpoints. That can also be a good choice for smaller applications or microservices with less complex requirements that can benefit from greater transparency and control. * In a microservice architecture, you can have a mix of applications with either Spring MVC or Spring WebFlux controllers or with Spring WebFlux functional endpoints. Having support for the same annotation-based programming model in both frameworks makes it easier to re-use knowledge while also selecting the right tool for the right job. * A simple way to evaluate an application is to check its dependencies. If you have blocking persistence APIs (JPA, JDBC) or networking APIs to use, Spring MVC is the best choice for common architectures at least. It is technically feasible with both Reactor and RxJava to perform blocking calls on a separate thread but you would not be making the most of a non-blocking web stack. * If you have a Spring MVC application with calls to remote services, try the reactive `WebClient` . You can return reactive types (Reactor, RxJava, or other) directly from Spring MVC controller methods. The greater the latency per call or the interdependency among calls, the more dramatic the benefits. Spring MVC controllers can call other reactive components too. * If you have a large team, keep in mind the steep learning curve in the shift to non-blocking, functional, and declarative programming. A practical way to start without a full switch is to use the reactive `WebClient` . Beyond that, start small and measure the benefits. We expect that, for a wide range of applications, the shift is unnecessary. If you are unsure what benefits to look for, start by learning about how non-blocking I/O works (for example, concurrency on single-threaded Node.js) and its effects. ## Servers Spring WebFlux is supported on Tomcat, Jetty, Servlet containers, as well as on non-Servlet runtimes such as Netty and Undertow. All servers are adapted to a low-level, common API so that higher-level programming models can be supported across servers. Spring WebFlux does not have built-in support to start or stop a server. However, it is easy to assemble an application from Spring configuration and WebFlux infrastructure and run it with a few lines of code. Spring Boot has a WebFlux starter that automates these steps. By default, the starter uses Netty, but it is easy to switch to Tomcat, Jetty, or Undertow by changing your Maven or Gradle dependencies. Spring Boot defaults to Netty, because it is more widely used in the asynchronous, non-blocking space and lets a client and a server share resources. Tomcat and Jetty can be used with both Spring MVC and WebFlux. Keep in mind, however, that the way they are used is very different. Spring MVC relies on Servlet blocking I/O and lets applications use the Servlet API directly if they need to. Spring WebFlux relies on Servlet non-blocking I/O and uses the Servlet API behind a low-level adapter. It is not exposed for direct use. 
It is strongly advised not to map Servlet filters or directly manipulate the Servlet API in the context of a WebFlux application. For the reasons listed above, mixing blocking I/O and non-blocking I/O in the same context will cause runtime issues.

For Undertow, Spring WebFlux uses Undertow APIs directly without the Servlet API.

## Performance

Performance has many characteristics and meanings. Reactive and non-blocking generally do not make applications run faster. They can in some cases – for example, if using the `WebClient` to run remote calls in parallel. However, it requires more work to do things the non-blocking way, and that can slightly increase the required processing time.

The key expected benefit of reactive and non-blocking is the ability to scale with a small, fixed number of threads and less memory. That makes applications more resilient under load, because they scale in a more predictable way. In order to observe those benefits, however, you need to have some latency (including a mix of slow and unpredictable network I/O). That is where the reactive stack begins to show its strengths, and the differences can be dramatic.

## Concurrency Model

Both Spring MVC and Spring WebFlux support annotated controllers, but there is a key difference in the concurrency model and the default assumptions for blocking and threads.

In Spring MVC (and servlet applications in general), it is assumed that applications can block the current thread (for example, for remote calls). For this reason, servlet containers use a large thread pool to absorb potential blocking during request handling.

In Spring WebFlux (and non-blocking servers in general), it is assumed that applications do not block. Therefore, non-blocking servers use a small, fixed-size thread pool (event loop workers) to handle requests.

“To scale” and “small number of threads” may sound contradictory, but to never block the current thread (and rely on callbacks instead) means that you do not need extra threads, as there are no blocking calls to absorb.

### Invoking a Blocking API

What if you do need to use a blocking library? Both Reactor and RxJava provide the `publishOn` operator to continue processing on a different thread. That means there is an easy escape hatch. Keep in mind, however, that blocking APIs are not a good fit for this concurrency model.

### Mutable State

In Reactor and RxJava, you declare logic through operators. At runtime, a reactive pipeline is formed where data is processed sequentially, in distinct stages. A key benefit of this is that it frees applications from having to protect mutable state because application code within that pipeline is never invoked concurrently.

### Threading Model

What threads should you expect to see on a server running with Spring WebFlux?

* On a “vanilla” Spring WebFlux server (for example, no data access or other optional dependencies), you can expect one thread for the server and several others for request processing (typically as many as the number of CPU cores). Servlet containers, however, may start with more threads (for example, 10 on Tomcat), in support of both servlet (blocking) I/O and servlet 3.1 (non-blocking) I/O usage.
* The reactive `WebClient` operates in event loop style. So you can see a small, fixed number of processing threads related to that (for example, `reactor-http-nio-` with the Reactor Netty connector). However, if Reactor Netty is used for both client and server, the two share event loop resources by default.
* Reactor and RxJava provide thread pool abstractions, called schedulers, to use with the `publishOn` operator that is used to switch processing to a different thread pool. The schedulers have names that suggest a specific concurrency strategy — for example, “parallel” (for CPU-bound work with a limited number of threads) or “elastic” (for I/O-bound work with a large number of threads). If you see such threads, it means some code is using a specific thread pool `Scheduler` strategy.
* Data access libraries and other third-party dependencies can also create and use threads of their own.

### Configuring

The Spring Framework does not provide support for starting and stopping servers. To configure the threading model for a server, you need to use server-specific configuration APIs, or, if you use Spring Boot, check the Spring Boot configuration options for each server. You can configure the `WebClient` directly. For all other libraries, see their respective documentation.

The `spring-web` module contains the following foundational support for reactive web applications:

* For server request processing, there are two levels of support.
	* `HttpHandler`: Basic contract for HTTP request handling with non-blocking I/O and Reactive Streams back pressure, along with adapters for Reactor Netty, Undertow, Tomcat, Jetty, and any Servlet container.
	* `WebHandler` API: Slightly higher level, general-purpose web API for request handling, on top of which concrete programming models such as annotated controllers and functional endpoints are built.
* For the client side, there is a basic `ClientHttpConnector` contract to perform HTTP requests with non-blocking I/O and Reactive Streams back pressure, along with adapters for Reactor Netty, reactive Jetty HttpClient, and Apache HttpComponents. The higher-level `WebClient` used in applications builds on this basic contract.
* For client and server, codecs for serialization and deserialization of HTTP request and response content.

`HttpHandler`

HttpHandler is a simple contract with a single method to handle a request and a response. It is intentionally minimal, and its main and only purpose is to be a minimal abstraction over different HTTP server APIs.

The following table describes the supported server APIs:

| Server name | Server API used | Reactive Streams support |
| --- | --- | --- |
| Netty | Netty API | Reactor Netty |
| Undertow | Undertow API | spring-web: Undertow to Reactive Streams bridge |
| Tomcat | Servlet non-blocking I/O; Tomcat API to read and write ByteBuffers vs byte[] | spring-web: Servlet non-blocking I/O to Reactive Streams bridge |
| Jetty | Servlet non-blocking I/O; Jetty API to write ByteBuffers vs byte[] | spring-web: Servlet non-blocking I/O to Reactive Streams bridge |
| Servlet containers | Servlet non-blocking I/O | spring-web: Servlet non-blocking I/O to Reactive Streams bridge |

The following table describes server dependencies (also see supported versions):

| Server name | Group id | Artifact name |
| --- | --- | --- |
| Reactor Netty | io.projectreactor.netty | reactor-netty |
| Undertow | io.undertow | undertow-core |
| Tomcat | org.apache.tomcat.embed | tomcat-embed-core |
| Jetty | org.eclipse.jetty | jetty-server, jetty-servlet |

The code snippets below show using the `HttpHandler` adapters with each server API:

Reactor Netty

```
HttpHandler handler = ...
ReactorHttpHandlerAdapter adapter = new ReactorHttpHandlerAdapter(handler);
HttpServer.create().host(host).port(port).handle(adapter).bindNow();
```

```
val handler: HttpHandler = ...
val adapter = ReactorHttpHandlerAdapter(handler)
HttpServer.create().host(host).port(port).handle(adapter).bindNow()
```

Undertow

```
HttpHandler handler = ...
UndertowHttpHandlerAdapter adapter = new UndertowHttpHandlerAdapter(handler);
Undertow server = Undertow.builder().addHttpListener(port, host).setHandler(adapter).build();
server.start();
```

```
val handler: HttpHandler = ...
val adapter = UndertowHttpHandlerAdapter(handler)
val server = Undertow.builder().addHttpListener(port, host).setHandler(adapter).build()
server.start()
```

Tomcat

```
HttpHandler handler = ...
Servlet servlet = new TomcatHttpHandlerAdapter(handler);

Tomcat server = new Tomcat();
File base = new File(System.getProperty("java.io.tmpdir"));
Context rootContext = server.addContext("", base.getAbsolutePath());
Tomcat.addServlet(rootContext, "main", servlet);
rootContext.addServletMappingDecoded("/", "main");
server.setHost(host);
server.setPort(port);
server.start();
```

```
val handler: HttpHandler = ...
val servlet = TomcatHttpHandlerAdapter(handler)

val server = Tomcat()
val base = File(System.getProperty("java.io.tmpdir"))
val rootContext = server.addContext("", base.absolutePath)
Tomcat.addServlet(rootContext, "main", servlet)
rootContext.addServletMappingDecoded("/", "main")
server.host = host
server.setPort(port)
server.start()
```

Jetty

```
HttpHandler handler = ...
Servlet servlet = new JettyHttpHandlerAdapter(handler);

Server server = new Server();

ServletContextHandler contextHandler = new ServletContextHandler(server, "");
contextHandler.addServlet(new ServletHolder(servlet), "/");
contextHandler.start();

ServerConnector connector = new ServerConnector(server);
connector.setHost(host);
connector.setPort(port);
server.addConnector(connector);

server.start();
```

```
val handler: HttpHandler = ...
val servlet = JettyHttpHandlerAdapter(handler)

val server = Server()

val contextHandler = ServletContextHandler(server, "")
contextHandler.addServlet(ServletHolder(servlet), "/")
contextHandler.start()

val connector = ServerConnector(server)
connector.host = host
connector.port = port
server.addConnector(connector)

server.start()
```

Servlet Container

To deploy as a WAR to any Servlet container, you can extend and include `AbstractReactiveWebInitializer` in the WAR. That class wraps an `HttpHandler` with `ServletHttpHandlerAdapter` and registers that as a `Servlet`.

`WebHandler` API

The `org.springframework.web.server` package builds on the `HttpHandler` contract to provide a general-purpose web API for processing requests through a chain of multiple `WebExceptionHandler` components, multiple `WebFilter` components, and a single `WebHandler` component. The chain can be put together with `WebHttpHandlerBuilder` by pointing to a Spring `ApplicationContext` where components are auto-detected, and/or by registering components with the builder.

While `HttpHandler` has a simple goal to abstract the use of different HTTP servers, the `WebHandler` API aims to provide a broader set of features commonly used in web applications, such as:

* User session with attributes.
* Request attributes.
* Resolved `Locale` or `Principal` for the request.
* Access to parsed and cached form data.
* Abstractions for multipart data.
* And more.

### Special bean types

The table below lists the components that `WebHttpHandlerBuilder` can auto-detect in a Spring `ApplicationContext`, or that can be registered directly with it:

| Bean name | Bean type | Count | Description |
| --- | --- | --- | --- |
| `<any>` | `WebExceptionHandler` | 0..N | Provide handling for exceptions from the chain of `WebFilter` instances and the target `WebHandler`. |
| `<any>` | `WebFilter` | 0..N | Apply interception-style logic before and after the rest of the filter chain and the target `WebHandler`. |
| `webHandler` | `WebHandler` | 1 | The handler for the request. |
| `webSessionManager` | `WebSessionManager` | 0..1 | The manager for `WebSession` instances exposed through a method on `ServerWebExchange`. |
| `serverCodecConfigurer` | `ServerCodecConfigurer` | 0..1 | Access to `HttpMessageReader` instances for parsing form data and multipart data. |
| `localeContextResolver` | `LocaleContextResolver` | 0..1 | The resolver for `LocaleContext` exposed through a method on `ServerWebExchange`. |
| `forwardedHeaderTransformer` | `ForwardedHeaderTransformer` | 0..1 | For processing forwarded type headers, either by extracting and removing them or by removing them only. |

### Form Data

`ServerWebExchange` exposes the following method for accessing form data:

```
Mono<MultiValueMap<String, String>> getFormData();
```

```
suspend fun getFormData(): MultiValueMap<String, String>
```

The `DefaultServerWebExchange` uses the configured `HttpMessageReader` to parse form data (`application/x-www-form-urlencoded`) into a `MultiValueMap`. By default, `FormHttpMessageReader` is configured for use by the `ServerCodecConfigurer` bean (see the Web Handler API).

### Multipart Data

`ServerWebExchange` exposes the following method for accessing multipart data:

```
Mono<MultiValueMap<String, Part>> getMultipartData();
```

```
suspend fun getMultipartData(): MultiValueMap<String, Part>
```

The `DefaultServerWebExchange` uses the configured `HttpMessageReader<MultiValueMap<String, Part>>` to parse `multipart/form-data`, `multipart/mixed`, and `multipart/related` content into a `MultiValueMap`.
By default, this is the `DefaultPartHttpMessageReader`, which does not have any third-party dependencies. Alternatively, the `SynchronossPartHttpMessageReader` can be used, which is based on the Synchronoss NIO Multipart library. Both are configured through the `ServerCodecConfigurer` bean (see the Web Handler API).

To parse multipart data in streaming fashion, you can use the `Flux<PartEvent>` returned from the `PartEventHttpMessageReader` instead of using `@RequestPart`, as that implies `Map`-like access to individual parts by name and, hence, requires parsing multipart data in full. By contrast, you can use `@RequestBody` to decode the content to `Flux<PartEvent>` without collecting to a `MultiValueMap`.

### Forwarded Headers

`ForwardedHeaderTransformer` is a component that modifies the host, port, and scheme of the request, based on forwarded headers, and then removes those headers. If you declare it as a bean with the name `forwardedHeaderTransformer`, it will be detected and used.

There are security considerations for forwarded headers, since an application cannot know if the headers were added by a proxy, as intended, or by a malicious client. This is why a proxy at the boundary of trust should be configured to remove untrusted forwarded traffic coming from the outside. You can also configure the `ForwardedHeaderTransformer` with `removeOnly=true`, in which case it removes but does not use the headers.

## Filters

In the `WebHandler` API, you can use a `WebFilter` to apply interception-style logic before and after the rest of the processing chain of filters and the target `WebHandler`. When using the WebFlux Config, registering a `WebFilter` is as simple as declaring it as a Spring bean and (optionally) expressing precedence by using `@Order` on the bean declaration or by implementing `Ordered`.

Spring WebFlux provides fine-grained support for CORS configuration through annotations on controllers. However, when you use it with Spring Security, we advise relying on the built-in `CorsFilter`, which must be ordered ahead of Spring Security’s chain of filters. See the section on CORS and the CORS `WebFilter` for more details.

## Exceptions

In the `WebHandler` API, you can use a `WebExceptionHandler` to handle exceptions from the chain of `WebFilter` instances and the target `WebHandler`. When using the WebFlux Config, registering a `WebExceptionHandler` is as simple as declaring it as a Spring bean and (optionally) expressing precedence by using `@Order` on the bean declaration or by implementing `Ordered`.

The following table describes the available `WebExceptionHandler` implementations:

| Exception Handler | Description |
| --- | --- |
| `ResponseStatusExceptionHandler` | Provides handling for exceptions of type `ResponseStatusException` by setting the response to the HTTP status code of the exception. |
| `WebFluxResponseStatusExceptionHandler` | Extension of `ResponseStatusExceptionHandler` that can also determine the HTTP status code of an `@ResponseStatus` annotation on any exception. This handler is declared in the WebFlux Config. |

## Codecs

The `spring-web` and `spring-core` modules provide support for serializing and deserializing byte content to and from higher level objects through non-blocking I/O with Reactive Streams back pressure. The following describes this support:

* `Encoder` and `Decoder` are low level contracts to encode and decode content independent of HTTP.
* `HttpMessageReader` and `HttpMessageWriter` are contracts to encode and decode HTTP message content.
* An `Encoder` can be wrapped with `EncoderHttpMessageWriter` to adapt it for use in a web application, while a `Decoder` can be wrapped with `DecoderHttpMessageReader`.
* `DataBuffer` abstracts different byte buffer representations (e.g. Netty `ByteBuf`, `java.nio.ByteBuffer`, etc.) and is what all codecs work on. See Data Buffers and Codecs in the "Spring Core" section for more on this topic.
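As a minimal sketch of the adapter relationship in the list above (the choice of the Jackson JSON codec here is just an example), an `Encoder` or `Decoder` can be adapted for web use as follows:

```
import org.springframework.http.codec.DecoderHttpMessageReader;
import org.springframework.http.codec.EncoderHttpMessageWriter;
import org.springframework.http.codec.HttpMessageReader;
import org.springframework.http.codec.HttpMessageWriter;
import org.springframework.http.codec.json.Jackson2JsonDecoder;
import org.springframework.http.codec.json.Jackson2JsonEncoder;

public class CodecAdapters {

	public static void main(String[] args) {
		// An Encoder is HTTP-agnostic; EncoderHttpMessageWriter adapts it
		// so it can write HTTP response content.
		HttpMessageWriter<Object> writer =
				new EncoderHttpMessageWriter<>(new Jackson2JsonEncoder());

		// Likewise, DecoderHttpMessageReader adapts a Decoder for reading
		// HTTP request content.
		HttpMessageReader<Object> reader =
				new DecoderHttpMessageReader<>(new Jackson2JsonDecoder());

		System.out.println(writer.getWritableMediaTypes());
		System.out.println(reader.getReadableMediaTypes());
	}
}
```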
The `spring-core` module provides `byte[]`, `ByteBuffer`, `DataBuffer`, `Resource`, and `String` encoder and decoder implementations. The `spring-web` module provides Jackson JSON, Jackson Smile, JAXB2, Protocol Buffers, and other encoders and decoders, along with web-only HTTP message reader and writer implementations for form data, multipart content, server-sent events, and others.

`ClientCodecConfigurer` and `ServerCodecConfigurer` are typically used to configure and customize the codecs to use in an application. See the section on configuring HTTP message codecs.

### Jackson JSON

JSON and binary JSON (Smile) are both supported when the Jackson library is present.

The `Jackson2Decoder` works as follows:

* Jackson’s asynchronous, non-blocking parser is used to aggregate a stream of byte chunks into `TokenBuffer`s, each representing a JSON object.
* Each `TokenBuffer` is passed to Jackson’s `ObjectMapper` to create a higher level object.
* When decoding to a single-value publisher (e.g. `Mono`), there is one `TokenBuffer`.
* When decoding to a multi-value publisher (e.g. `Flux`), each `TokenBuffer` is passed to the `ObjectMapper` as soon as enough bytes are received for a fully formed object. The input content can be a JSON array, or any line-delimited JSON format such as NDJSON, JSON Lines, or JSON Text Sequences.

The `Jackson2Encoder` works as follows:

* For a single-value publisher (e.g. `Mono`), simply serialize it through the `ObjectMapper`.
* For a multi-value publisher with `application/json`, by default collect the values with `Flux#collectToList()` and then serialize the resulting collection.
* For a multi-value publisher with a streaming media type such as `application/x-ndjson` or `application/stream+x-jackson-smile`, encode, write, and flush each value individually using a line-delimited JSON format. Other streaming media types may be registered with the encoder.
* For SSE, the `Jackson2Encoder` is invoked per event and the output is flushed to ensure delivery without delay.

### Form Data

`FormHttpMessageReader` and `FormHttpMessageWriter` support decoding and encoding `application/x-www-form-urlencoded` content.

On the server side, where form content often needs to be accessed from multiple places, `ServerWebExchange` provides a dedicated `getFormData()` method that parses the content through `FormHttpMessageReader` and then caches the result for repeated access. See Form Data in the `WebHandler` API section.

Once `getFormData()` is used, the original raw content can no longer be read from the request body. For this reason, applications are expected to go through `ServerWebExchange` consistently for access to the cached form data versus reading from the raw request body.

### Multipart

`MultipartHttpMessageReader` and `MultipartHttpMessageWriter` support decoding and encoding "multipart/form-data", "multipart/mixed", and "multipart/related" content. In turn, `MultipartHttpMessageReader` delegates to another `HttpMessageReader` for the actual parsing to a `Flux<Part>` and then simply collects the parts into a `MultiValueMap`. By default, the `DefaultPartHttpMessageReader` is used, but this can be changed through the `ServerCodecConfigurer`. For more information about the `DefaultPartHttpMessageReader`, refer to its javadoc.

On the server side, where multipart form content may need to be accessed from multiple places, `ServerWebExchange` provides a dedicated `getMultipartData()` method that parses the content through `MultipartHttpMessageReader` and then caches the result for repeated access. See Multipart Data in the `WebHandler` API section.

Once `getMultipartData()` is used, the original raw content can no longer be read from the request body.
For this reason, applications have to consistently use `getMultipartData()` for repeated, `Map`-like access to parts, or otherwise rely on the configured `HttpMessageReader` for one-time access to `Flux<Part>`.

### Limits

`Decoder` and `HttpMessageReader` implementations that buffer some or all of the input stream can be configured with a limit on the maximum number of bytes to buffer in memory. In some cases, buffering occurs because input is aggregated and represented as a single object — for example, a controller method with `@RequestBody byte[]`, `x-www-form-urlencoded` data, and so on. Buffering can also occur with streaming, when splitting the input stream — for example, delimited text, a stream of JSON objects, and so on. For those streaming cases, the limit applies to the number of bytes associated with one object in the stream.

To configure buffer sizes, check whether a given `Decoder` or `HttpMessageReader` exposes a `maxInMemorySize` property; if so, its Javadoc has details about default values. On the server side, `ServerCodecConfigurer` provides a single place from which to set all codecs (see HTTP message codecs). On the client side, the limit for all codecs can be changed in `WebClient.Builder`.

For multipart parsing, the `maxInMemorySize` property limits the size of non-file parts. For file parts, it determines the threshold at which the part is written to disk. For file parts written to disk, there is an additional `maxDiskUsagePerPart` property to limit the amount of disk space per part. There is also a `maxParts` property to limit the overall number of parts in a multipart request. To configure all three in WebFlux, you’ll need to supply a pre-configured instance of `MultipartHttpMessageReader` to `ServerCodecConfigurer`.

### Streaming

When streaming to the HTTP response (for example, `text/event-stream`, `application/x-ndjson`), it is important to send data periodically, in order to reliably detect a disconnected client sooner rather than later. Such a send could be a comment-only, empty SSE event or any other "no-op" data that would effectively serve as a heartbeat.

`DataBuffer`

`DataBuffer` is the representation for a byte buffer in WebFlux. The Spring Core part of this reference has more on that in the section on Data Buffers and Codecs. The key point to understand is that on some servers, like Netty, byte buffers are pooled and reference counted, and must be released when consumed to avoid memory leaks.

WebFlux applications generally do not need to be concerned with such issues, unless they consume or produce data buffers directly, as opposed to relying on codecs to convert to and from higher level objects, or unless they choose to create custom codecs. For such cases, please review the information in Data Buffers and Codecs, especially the section on Using DataBuffer.

## Logging

`DEBUG` level logging in Spring WebFlux is designed to be compact, minimal, and human-friendly. It focuses on high-value bits of information that are useful over and over again versus others that are useful only when debugging a specific issue.

`TRACE` level logging generally follows the same principles as `DEBUG` (and, for example, also should not be a firehose) but can be used for debugging any issue. In addition, some log messages may show a different level of detail at `TRACE` versus `DEBUG`.

Good logging comes from the experience of using the logs. If you spot anything that does not meet the stated goals, please let us know.

### Log Id

In WebFlux, a single request can be run over multiple threads, and the thread ID is not useful for correlating log messages that belong to a specific request.
This is why WebFlux log messages are prefixed with a request-specific ID by default. On the server side, the log ID is stored in the `ServerWebExchange` attribute (`LOG_ID_ATTRIBUTE`), while a fully formatted prefix based on that ID is available from `ServerWebExchange#getLogPrefix()`. On the `WebClient` side, the log ID is stored in the `ClientRequest` attribute (`LOG_ID_ATTRIBUTE`), while a fully formatted prefix is available from `ClientRequest#logPrefix()`.

### Sensitive Data

`DEBUG` and `TRACE` logging can log sensitive information. This is why form parameters and headers are masked by default, and you must explicitly enable their logging in full. The following example shows how to do so for server-side requests:

```
@Configuration
@EnableWebFlux
class MyConfig implements WebFluxConfigurer {

	@Override
	public void configureHttpMessageCodecs(ServerCodecConfigurer configurer) {
		configurer.defaultCodecs().enableLoggingRequestDetails(true);
	}
}
```

```
@Configuration
@EnableWebFlux
class MyConfig : WebFluxConfigurer {

	override fun configureHttpMessageCodecs(configurer: ServerCodecConfigurer) {
		configurer.defaultCodecs().enableLoggingRequestDetails(true)
	}
}
```

The following example shows how to do the same for client-side requests:

```
Consumer<ClientCodecConfigurer> consumer = configurer ->
		configurer.defaultCodecs().enableLoggingRequestDetails(true);

WebClient webClient = WebClient.builder()
		.exchangeStrategies(strategies -> strategies.codecs(consumer))
		.build();
```

```
val consumer: (ClientCodecConfigurer) -> Unit = { configurer ->
	configurer.defaultCodecs().enableLoggingRequestDetails(true)
}

val webClient = WebClient.builder()
		.exchangeStrategies({ strategies -> strategies.codecs(consumer) })
		.build()
```

### Appenders

Logging libraries such as SLF4J and Log4J 2 provide asynchronous loggers that avoid blocking. While those have their own drawbacks, such as potentially dropping messages that could not be queued for logging, they are the best available options currently for use in a reactive, non-blocking application.

### Custom codecs

Applications can register custom codecs for supporting additional media types, or specific behaviors that are not supported by the default codecs. Some configuration options expressed by developers are enforced on default codecs. Custom codecs might want to get a chance to align with those preferences, like enforcing buffering limits or logging sensitive data. The following example shows how to do so:

```
WebClient webClient = WebClient.builder()
		.codecs(configurer -> {
			CustomDecoder decoder = new CustomDecoder();
			configurer.customCodecs().registerWithDefaultConfig(decoder);
		})
		.build();
```

```
val webClient = WebClient.builder()
		.codecs({ configurer ->
			val decoder = CustomDecoder()
			configurer.customCodecs().registerWithDefaultConfig(decoder)
		})
		.build()
```

# DispatcherHandler

Spring WebFlux, similarly to Spring MVC, is designed around the front controller pattern, where a central `WebHandler`, the `DispatcherHandler`, provides a shared algorithm for request processing, while actual work is performed by configurable, delegate components. This model is flexible and supports diverse workflows.

`DispatcherHandler` discovers the delegate components it needs from Spring configuration. It is also designed to be a Spring bean itself and implements `ApplicationContextAware` for access to the context in which it runs. If `DispatcherHandler` is declared with a bean name of `webHandler`, it is, in turn, discovered by `WebHttpHandlerBuilder`, which puts together a request-processing chain, as described in `WebHandler` API.
Spring configuration in a WebFlux application typically contains:

* `DispatcherHandler` with the bean name `webHandler`
* `WebFilter` and `WebExceptionHandler` beans
* Others

The configuration is given to `WebHttpHandlerBuilder` to build the processing chain, as the following example shows:

```
ApplicationContext context = ...
HttpHandler handler = WebHttpHandlerBuilder.applicationContext(context).build();
```

```
val context: ApplicationContext = ...
val handler = WebHttpHandlerBuilder.applicationContext(context).build()
```

The resulting `HttpHandler` is ready for use with a server adapter.

## Special Bean Types

The `DispatcherHandler` delegates to special beans to process requests and render the appropriate responses. By “special beans,” we mean Spring-managed `Object` instances that implement WebFlux framework contracts. Those usually come with built-in contracts, but you can customize their properties, extend them, or replace them.

The following table lists the special beans detected by the `DispatcherHandler`. Note that there are also some other beans detected at a lower level (see Special bean types in the Web Handler API).

| Bean type | Explanation |
| --- | --- |
| `HandlerMapping` | Map a request to a handler. The mapping is based on some criteria, the details of which vary by `HandlerMapping` implementation (annotated controllers, simple URL pattern mappings, and others). |
| `HandlerAdapter` | Help the `DispatcherHandler` to invoke a handler mapped to a request, regardless of how the handler is actually invoked. |
| `HandlerResultHandler` | Process the result from the handler invocation and finalize the response. |

## WebFlux Config

Applications can declare the infrastructure beans (listed under Web Handler API and `DispatcherHandler`) that are required to process requests. However, in most cases, the WebFlux Config is the best starting point. It declares the required beans and provides a higher-level configuration callback API to customize it.

Spring Boot relies on the WebFlux config to configure Spring WebFlux and also provides many extra convenient options.

`DispatcherHandler` processes requests as follows:

* Each `HandlerMapping` is asked to find a matching handler, and the first match is used.
* If a handler is found, it is run through an appropriate `HandlerAdapter`, which exposes the return value from the execution as `HandlerResult`.
* The `HandlerResult` is given to an appropriate `HandlerResultHandler` to complete processing by writing to the response directly or by using a view to render.

## Result Handling

The return value from the invocation of a handler, through a `HandlerAdapter`, is wrapped as a `HandlerResult`, along with some additional context, and passed to the first `HandlerResultHandler` that claims support for it. The following table shows the available `HandlerResultHandler` implementations, all of which are declared in the WebFlux Config:

| Result Handler Type | Return Values | Default Order |
| --- | --- | --- |
| `ResponseEntityResultHandler` | `ResponseEntity`, typically from `@Controller` instances | 0 |
| `ServerResponseResultHandler` | `ServerResponse`, typically from functional endpoints | 0 |
| `ResponseBodyResultHandler` | Return values from `@ResponseBody` methods or `@RestController` classes | 100 |
| `ViewResolutionResultHandler` | `CharSequence`, `View`, `Model`, `Map`, `Rendering`, or any other `Object` treated as a model attribute | `Integer.MAX_VALUE` |

## Exceptions

`HandlerAdapter` implementations can internally handle exceptions from invoking a request handler, such as a controller method. However, an exception may be deferred if the request handler returns an asynchronous value. A `HandlerAdapter` may expose its exception handling mechanism as a `DispatchExceptionHandler` set on the `HandlerResult` it returns. When that’s set, `DispatcherHandler` will also apply it to the handling of the result.

A `HandlerAdapter` may also choose to implement `DispatchExceptionHandler`. In that case, `DispatcherHandler` will apply it to exceptions that arise before a handler is mapped, e.g. during handler mapping, or earlier, e.g. in a `WebFilter`. See also Exceptions in the “Annotated Controller” section or Exceptions in the WebHandler API section.

## View Resolution

View resolution enables rendering to a browser with an HTML template and a model without tying you to a specific view technology.
In Spring WebFlux, view resolution is supported through a dedicated `HandlerResultHandler` that uses `ViewResolver` instances to map a String (representing a logical view name) to a `View` instance. The `View` is then used to render the response.

### Handling

The `HandlerResult` passed into `ViewResolutionResultHandler` contains the return value from the handler and the model that contains attributes added during request handling. The return value is processed as one of the following:

* `String`, `CharSequence`: A logical view name to be resolved to a `View` through the list of configured `ViewResolver` implementations.
* `void`: Select a default view name based on the request path, minus the leading and trailing slash, and resolve it to a `View`. The same also happens when a view name was not provided (for example, a model attribute was returned) or an async return value (for example, `Mono` completed empty).
* `Rendering`: API for view resolution scenarios. Explore the options in your IDE with code completion.
* `Model`, `Map`: Extra model attributes to be added to the model for the request.
* Any other: Any other return value (except for simple types, as determined by BeanUtils#isSimpleProperty) is treated as a model attribute to be added to the model. The attribute name is derived from the class name by using conventions, unless a handler method `@ModelAttribute` annotation is present.

The model can contain asynchronous, reactive types (for example, from Reactor or RxJava). Prior to rendering, `AbstractView` resolves such model attributes into concrete values and updates the model. Single-value reactive types are resolved to a single value or no value (if empty), while multi-value reactive types (for example, `Flux<T>`) are collected and resolved to `List<T>`.

Configuring view resolution is as simple as adding a `ViewResolutionResultHandler` bean to your Spring configuration. WebFlux Config provides a dedicated configuration API for view resolution. See View Technologies for more on the view technologies integrated with Spring WebFlux.

### Redirecting

The special `redirect:` prefix in a view name lets you perform a redirect. The `UrlBasedViewResolver` (and sub-classes) recognize this as an instruction that a redirect is needed. The rest of the view name is the redirect URL. The net effect is the same as if the controller had returned a `RedirectView` or `Rendering.redirectTo("abc").build()`, but now the controller itself can operate in terms of logical view names. A view name such as `redirect:/some/resource` is relative to the current application, while a view name such as `redirect:https://example.com/arbitrary/path` redirects to an absolute URL.

### Content Negotiation

`ViewResolutionResultHandler` supports content negotiation. It compares the request media types with the media types supported by each selected `View`. The first `View` that supports the requested media type(s) is used.

In order to support media types such as JSON and XML, Spring WebFlux provides `HttpMessageWriterView`, which is a special `View` that renders through an `HttpMessageWriter`. Typically, you would configure these as default views through the WebFlux Configuration. Default views are always selected and used if they match the requested media type.

## Annotated Controllers

Spring WebFlux provides an annotation-based programming model, where `@Controller` and `@RestController` components use annotations to express request mappings, request input, handle exceptions, and more. Annotated controllers have flexible method signatures and do not have to extend base classes nor implement specific interfaces.
The following listing shows a basic example:

```
@RestController
public class HelloController {

	@GetMapping("/hello")
	public String handle() {
		return "Hello WebFlux";
	}
}
```

```
@RestController
class HelloController {

	@GetMapping("/hello")
	fun handle() = "Hello WebFlux"
}
```

In the preceding example, the method returns a `String` to be written to the response body.

## URI Patterns

You can map requests by using glob patterns and wildcards:

| Pattern | Description | Example |
| --- | --- | --- |
| `?` | Matches one character | `/pages/t?st.html` matches `/pages/test.html` and `/pages/t3st.html` |
| `*` | Matches zero or more characters within a path segment | `/resources/*.png` matches `/resources/file.png` |
| `**` | Matches zero or more path segments until the end of the path | `/resources/**` matches `/resources/file.png` and `/resources/images/file.png` |
| `{name}` | Matches a path segment and captures it as a variable named "name" | `/projects/{project}/versions` matches `/projects/spring/versions` and captures `project=spring` |

Captured URI variables can be accessed with `@PathVariable`, as the following example shows:

```
@RestController
@RequestMapping("/owners/{ownerId}") (1)
public class OwnerController {

	@GetMapping("/pets/{petId}") (2)
	public Pet findPet(@PathVariable Long ownerId, @PathVariable Long petId) {
		// ...
	}
}
```

You can explicitly name URI variables (for example, `@PathVariable("petId")`), but you can leave that detail out if the names are the same and you compile your code with the `-parameters` compiler flag.

The syntax `{*varName}` declares a URI variable that matches zero or more remaining path segments. For example, `/resources/{*path}` matches all files under `/resources/`, and the `"path"` variable captures the complete path under `/resources`.

The syntax `{varName:regex}` declares a URI variable with a regular expression. For example, given a URL of `/spring-web-3.0.5.jar`, a mapping of `/{name:[a-z-]+}-{version:\d\.\d\.\d}{ext:\.[a-z]+}` extracts the name, version, and file extension. Spring WebFlux uses `PathPattern` and the `PathPatternParser` for URL path matching support.

Spring WebFlux does not support suffix pattern matching — unlike Spring MVC, where a mapping such as `/person` also matches to `/person.*`. For URL-based content negotiation, if needed, we recommend using a query parameter, which is simpler, more explicit, and less vulnerable to URL path based exploits.

When multiple patterns match a URL, they must be compared to find the best match. This is done with `PathPattern.SPECIFICITY_COMPARATOR`, which looks for patterns that are more specific. For every pattern, a score is computed, based on the number of URI variables and wildcards, where a URI variable scores lower than a wildcard. A pattern with a lower total score wins. If two patterns have the same score, the longer is chosen. Catch-all patterns (for example, `**`, `{*varName}`) are excluded from the scoring and are always sorted last instead. If two patterns are both catch-all, the longer is chosen.

## Producible Media Types

You can narrow request mappings based on the content types that a method produces, as declared through the `produces` attribute. The media type can specify a character set. Negated expressions are supported; for example, `!text/plain` means any content type other than `text/plain`. You can declare a shared `produces` attribute at the class level. Unlike most other request mapping attributes, however, when used at the class level, a method-level `produces` attribute overrides rather than extends the class-level declaration. `MediaType` provides constants for commonly used media types, e.g. `APPLICATION_JSON_VALUE` and `APPLICATION_XML_VALUE`.

## Parameters and Headers

You can narrow request mappings based on query parameter conditions. You can test for the presence of a query parameter (`myParam`), for its absence (`!myParam`), or for a specific value (`myParam=myValue`). For example, a mapping declared with `params = "myParam=myValue"` matches only when `myParam` equals `myValue`.

## HTTP HEAD, OPTIONS

`@GetMapping` (and `@RequestMapping(method=HttpMethod.GET)`) support HTTP HEAD transparently for request mapping purposes. Controller methods need not change. A response wrapper, applied in the `HttpHandler` server adapter, ensures a `Content-Length` header is set to the number of bytes written, without actually writing to the response.

By default, HTTP OPTIONS is handled by setting the `Allow` response header to the list of HTTP methods listed in all `@RequestMapping` methods with matching URL patterns.
For a `@RequestMapping` without HTTP method declarations, the `Allow` header is set to `GET,HEAD,POST,PUT,PATCH,DELETE,OPTIONS`.

You can programmatically register handler methods, which can be used for dynamic registrations or for advanced cases, such as different instances of the same handler under different URLs. The following example shows how to do so:

```
@Autowired
fun setHandlerMapping(mapping: RequestMappingHandlerMapping, handler: UserHandler) { (1)
	val info = RequestMappingInfo.paths("/user/{id}").methods(RequestMethod.GET).build() (2)
	val method = UserHandler::class.java.getMethod("getUser", Long::class.java) (3)
	mapping.registerMapping(info, handler, method) (4)
}
```

## Handler Methods

Controller methods support a wide range of method arguments. Reactive types (Reactor, RxJava, or other) are supported on arguments that require blocking I/O (for example, reading the request body) to be resolved. Reactive types are not expected on arguments that do not require blocking.

JDK 1.8’s `java.util.Optional` is supported as a method argument in combination with annotations that have a `required` attribute (for example, `@RequestParam`, `@RequestHeader`, and others) and is equivalent to `required=false`.

The following return values are supported:

* `@ResponseBody`: The return value is encoded through `HttpMessageWriter` instances and written to the response. See `@ResponseBody`.
* `HttpEntity<B>`, `ResponseEntity<B>`: The return value specifies the full response, including HTTP headers, and the body is encoded through `HttpMessageWriter` instances and written to the response. See `ResponseEntity`.
* `HttpHeaders`: For returning a response with headers and no body.
* `Rendering`: An API for model and view rendering scenarios.
* `void`: A method with a `void`, possibly asynchronous (for example, `Mono<Void>`), return type (or a `null` return value) is considered to have fully handled the response if it also has a `ServerHttpResponse`, a `ServerWebExchange` argument, or an `@ResponseStatus` annotation. The same is also true if the controller has made a positive ETag or `lastModified` timestamp check. See Controllers for details. If none of the above is true, a `void` return type can also indicate “no response body” for REST controllers or default view name selection for HTML controllers.
* `Flux<ServerSentEvent>`, `Observable<ServerSentEvent>`, or other reactive type: Emit server-sent events. The `ServerSentEvent` wrapper can be omitted when only data needs to be written (however, `text/event-stream` must be requested or declared in the mapping through the `produces` attribute).
* Other return values: Any other return value is treated as a model attribute to be added to the model, unless it is a simple type (as determined by BeanUtils#isSimpleProperty), in which case it remains unresolved.

## Matrix Variables

Name-value pairs can appear in path segments, as in `"/cars;color=red,green;year=2012"`. Multiple values can also be specified through repeated variable names, as in `"color=red;color=green;color=blue"`.

Unlike Spring MVC, in WebFlux, the presence or absence of matrix variables in a URL does not affect request mappings. In other words, you are not required to use a URI variable to mask variable content. That said, if you want to access matrix variables from a controller method, you need to add a URI variable to the path segment where matrix variables are expected. A matrix variable may be defined as optional and given a default value.

## `@RequestParam`

The Servlet API “request parameter” concept conflates query parameters, form data, and multiparts into one. However, in WebFlux, each is accessed individually through `ServerWebExchange`.

Method parameters that use the `@RequestParam` annotation are required by default, but you can specify that a method parameter is optional by setting the required flag of a `@RequestParam` to `false` or by declaring the argument with a `java.util.Optional` wrapper. Type conversion is applied automatically if the target method parameter type is not `String`. See Type Conversion. When a `@RequestParam` annotation is declared on a `Map<String, String>` or `MultiValueMap<String, String>` argument, the map is populated with all query parameters.

## `@ModelAttribute`

```
@PostMapping("/owners/{ownerId}/pets/{petId}/edit")
public String processSubmit(@ModelAttribute Pet pet) { } (1)
```

```
@PostMapping("/owners/{ownerId}/pets/{petId}/edit")
fun processSubmit(@ModelAttribute pet: Pet): String { } (1)
```

The `Pet` instance in the preceding example is resolved as follows:

* From the model if already added through `Model`.
* From the HTTP session through `@SessionAttributes`.
* From the invocation of a default constructor.
* From the invocation of a “primary constructor” with arguments that match query parameters or form fields. Argument names are determined through JavaBeans or through runtime-retained parameter names in the bytecode.

After the model attribute instance is obtained, data binding is applied.
The `WebExchangeDataBinder` class matches names of query parameters and form fields to field names on the target `Object`. Matching fields are populated after type conversion is applied, where necessary. For more on data binding (and validation), see Validation. For more on customizing data binding, see `DataBinder`.

Data binding can result in errors. By default, a `WebExchangeBindException` is raised but, to check for such errors in the controller method, you can add a `BindingResult` argument immediately next to the `@ModelAttribute` argument. You can automatically apply validation after data binding by adding the `javax.validation.Valid` annotation or Spring’s `@Validated` annotation (see also Bean Validation and Spring validation).

Spring WebFlux, unlike Spring MVC, supports reactive types in the model — for example, `Mono<Account>` or `io.reactivex.Single<Account>`. You can declare a `@ModelAttribute` argument with or without a reactive type wrapper, and it will be resolved accordingly, to the actual value if necessary. However, note that, to use a `BindingResult` argument, you must declare the `@ModelAttribute` argument before it without a reactive type wrapper, as shown earlier. Alternatively, you can handle any errors through the reactive type, as the following example shows:

```
@PostMapping("/owners/{ownerId}/pets/{petId}/edit")
public Mono<String> processSubmit(@Valid @ModelAttribute("pet") Mono<Pet> petMono) {
	return petMono
		.flatMap(pet -> {
			// ...
		})
		.onErrorResume(ex -> {
			// ...
		});
}
```

```
@PostMapping("/owners/{ownerId}/pets/{petId}/edit")
fun processSubmit(@Valid @ModelAttribute("pet") petMono: Mono<Pet>): Mono<String> {
	return petMono
		.flatMap { pet ->
			// ...
		}
		.onErrorResume { ex ->
			// ...
		}
}
```

As explained in Multipart Data, `ServerWebExchange` provides access to multipart content. The best way to handle a file upload form (for example, from a browser) in a controller is through data binding to a command object, as the following example shows:

```
class MyForm {

	private String name;

	private MultipartFile file;

	// ...
}

@PostMapping("/form")
public String handleFormUpload(MyForm form, BindingResult errors) {
	// ...
}
```

```
class MyForm(
	val name: String,
	val file: MultipartFile)

@PostMapping("/form")
fun handleFormUpload(form: MyForm, errors: BindingResult): String {
	// ...
}
```

You can also submit multipart requests from non-browser clients in a RESTful service scenario. The following example uses a file along with JSON:

```
POST /someUrl
Content-Type: multipart/mixed

--edt7Tfrdusa7r3lNQc79vXuhIIMlatb7PQg7Vp
Content-Disposition: form-data; name="meta-data"
Content-Type: application/json; charset=UTF-8
Content-Transfer-Encoding: 8bit

{
	"name": "value"
}
--edt7Tfrdusa7r3lNQc79vXuhIIMlatb7PQg7Vp
Content-Disposition: form-data; name="file-data"; filename="file.properties"
Content-Type: text/xml
Content-Transfer-Encoding: 8bit

... File Data ...
```

You can access individual parts with `@RequestPart`, as the following example shows:

```
@PostMapping("/")
public String handle(@RequestPart("meta-data") Part metadata, (1)
		@RequestPart("file-data") FilePart file) { (2)
	// ...
}
```

```
@PostMapping("/")
fun handle(@RequestPart("meta-data") metadata: Part, (1)
		@RequestPart("file-data") file: FilePart): String { (2)
	// ...
}
```

To deserialize the raw part content (for example, to JSON — similar to `@RequestBody`), you can declare a concrete target `Object`, instead of `Part`, as the following example shows:

```
@PostMapping("/")
public String handle(@RequestPart("meta-data") MetaData metadata) { (1)
	// ...
}
```

```
@PostMapping("/")
fun handle(@RequestPart("meta-data") metadata: MetaData): String { (1)
	// ...
}
```

You can use `@RequestPart` in combination with `javax.validation.Valid` or Spring’s `@Validated` annotation, which causes Standard Bean Validation to be applied. Validation errors lead to a `WebExchangeBindException` that results in a 400 (BAD_REQUEST) response. The exception contains a `BindingResult` with the error details and can also be handled in the controller method by declaring the argument with an async wrapper and then using error related operators:

```
@PostMapping("/")
public String handle(@Valid @RequestPart("meta-data") Mono<MetaData> metadata) {
	// use one of the onError* operators...
}
```

```
@PostMapping("/")
fun handle(@Valid @RequestPart("meta-data") metadata: MetaData): String {
	// ...
}
```

To access all multipart data as a `MultiValueMap`, you can use `@RequestBody`, as the following example shows:

```
@PostMapping("/")
public String handle(@RequestBody Mono<MultiValueMap<String, Part>> parts) { (1)
	// ...
}
```

```
@PostMapping("/")
fun handle(@RequestBody parts: MultiValueMap<String, Part>): String { (1)
	// ...
}
```

`PartEvent`

To access multipart data sequentially, in a streaming fashion, you can use `@RequestBody` with `Flux<PartEvent>` (or `Flow<PartEvent>` in Kotlin). Each part in a multipart HTTP message will produce at least one `PartEvent` containing both headers and a buffer with the contents of the part.

* Form fields will produce a single `FormPartEvent`, containing the value of the field.
* File uploads will produce one or more `FilePartEvent` objects, containing the filename used when uploading. If the file is large enough to be split across multiple buffers, the first `FilePartEvent` will be followed by subsequent events.

Received part events can also be relayed to another service by using the `WebClient`. See Multipart Data.

`@RequestBody`

Unlike Spring MVC, in WebFlux, the `@RequestBody` method argument supports reactive types and fully non-blocking reading and (client-to-server) streaming.

```
@PostMapping("/accounts")
public void handle(@RequestBody Mono<Account> account) {
	// ...
}
```

```
@PostMapping("/accounts")
fun handle(@RequestBody accounts: Flow<Account>) {
	// ...
}
```

You can use the HTTP message codecs option of the WebFlux Config to configure or customize message readers.

You can use `@RequestBody` in combination with `javax.validation.Valid` or Spring’s `@Validated` annotation, which results in a 400 (BAD_REQUEST) response in case of validation errors. The exception contains a `BindingResult` with error details and can be handled in the controller method by declaring the argument with an async wrapper and then using error related operators:

```
@PostMapping("/accounts")
public void handle(@Valid @RequestBody Mono<Account> account) {
	// use one of the onError* operators...
}
```

`@ResponseBody`

You can use the `@ResponseBody` annotation on a method to have the return value serialized to the response body through an `HttpMessageWriter`. `@ResponseBody` is also supported at the class level, in which case it is inherited by all controller methods. This is the effect of `@RestController`, which is nothing more than a meta-annotation marked with `@Controller` and `@ResponseBody`.

`@ResponseBody` supports reactive types, which means you can return Reactor or RxJava types and have the asynchronous values they produce rendered to the response. For additional details, see Streaming and JSON rendering.

You can combine `@ResponseBody` methods with JSON serialization views. See Jackson JSON for details.

You can use the HTTP message codecs option of the WebFlux Config to configure or customize message writing.
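As a minimal, illustrative sketch of the `@ResponseBody` usage described above (the controller, the `Account` type, and the repository are assumptions for the example, not part of the framework):

```
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.ResponseBody;

import reactor.core.publisher.Mono;

@Controller
public class AccountController {

	// Hypothetical domain type and repository, for illustration only.
	static class Account {
	}

	interface AccountRepository {
		Mono<Account> findById(Long id);
	}

	private final AccountRepository repository;

	public AccountController(AccountRepository repository) {
		this.repository = repository;
	}

	// The returned Mono<Account> is encoded through an HttpMessageWriter
	// (for example, as JSON) and written to the response body.
	@GetMapping("/accounts/{id}")
	@ResponseBody
	public Mono<Account> getAccount(@PathVariable Long id) {
		return repository.findById(id);
	}
}
```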
`DataBinder`

`@Controller` or `@ControllerAdvice` classes can have `@InitBinder` methods to initialize instances of `WebDataBinder`. Those, in turn, are used to:

* Convert `String`-based request values (such as request parameters, path variables, headers, cookies, and others) to the target type of controller method arguments.
* Format `Object` values as `String` values when rendering HTML forms.

`@InitBinder` methods can register controller-specific `java.beans.PropertyEditor` or Spring `Converter` and `Formatter` components. In addition, you can use the WebFlux Java configuration to register `Converter` and `Formatter` types in a globally shared `FormattingConversionService`.

`@InitBinder` methods support many of the same arguments that `@RequestMapping` methods do, except for `@ModelAttribute` (command object) arguments. Typically, they are declared with a `WebDataBinder` argument, for registrations, and a `void` return value. Alternatively, when using a `Formatter`-based setup through a shared `FormattingConversionService`, you can re-use the same approach and register controller-specific `Formatter` instances.

`@Controller` and `@ControllerAdvice` classes can have `@ExceptionHandler` methods to handle exceptions from controller methods (a minimal sketch follows at the end of this section). The exception can match against a top-level exception being propagated (that is, a direct `IOException` being thrown) or against the immediate cause within a top-level wrapper exception (for example, an `IOException` wrapped inside an `IllegalStateException`). For matching exception types, preferably declare the target exception as a method argument, as the sketch below shows. Alternatively, the annotation declaration can narrow the exception types to match. We generally recommend being as specific as possible in the argument signature and declaring your primary root exception mappings on a `@ControllerAdvice` prioritized with a corresponding order. See the MVC section for details.

Support for `@ExceptionHandler` methods in Spring WebFlux is provided by the `HandlerAdapter` for `@RequestMapping` methods. See `DispatcherHandler` for more detail. `@ExceptionHandler` methods support the same method arguments as `@RequestMapping` methods, except the request body might have been consumed already. `@ExceptionHandler` methods support the same return values as `@RequestMapping` methods.

Typically, the `@ExceptionHandler`, `@InitBinder`, and `@ModelAttribute` methods apply within the `@Controller` class (or class hierarchy) in which they are declared. If you want such methods to apply more globally (across controllers), you can declare them in a class annotated with `@ControllerAdvice` or `@RestControllerAdvice`.

`@ControllerAdvice` is annotated with `@Component`, which means that such classes can be registered as Spring beans through component scanning. `@RestControllerAdvice` is a composed annotation that is annotated with both `@ControllerAdvice` and `@ResponseBody`, which essentially means `@ExceptionHandler` methods are rendered to the response body through message conversion (versus view resolution or template rendering).

On startup, the infrastructure classes for `@RequestMapping` and `@ExceptionHandler` methods detect Spring beans annotated with `@ControllerAdvice` and then apply their methods at runtime. Global `@ExceptionHandler` methods (from a `@ControllerAdvice`) are applied after local ones (from the `@Controller`). By contrast, global `@ModelAttribute` and `@InitBinder` methods are applied before local ones.
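To make the `@ExceptionHandler` discussion above concrete, here is a minimal sketch of a controller-local handler (the exception type and the response body are illustrative choices, not prescribed by the framework):

```
import java.io.IOException;

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.ExceptionHandler;

@Controller
public class SimpleController {

	// Matches a direct IOException, or an IOException that is the
	// immediate cause of a wrapper exception, as described above.
	@ExceptionHandler(IOException.class)
	public ResponseEntity<String> handle(IOException ex) {
		return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
				.body("Could not read request: " + ex.getMessage());
	}
}
```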
By default, `@ControllerAdvice` methods apply to every request (that is, all controllers), but you can narrow that down to a subset of controllers by using attributes on the annotation, as the following example shows:

```
// Target all Controllers assignable to specific classes
@ControllerAdvice(assignableTypes = [ControllerInterface::class, AbstractController::class])
class ExampleAdvice3
```

## Functional Endpoints

Spring WebFlux includes WebFlux.fn, a lightweight functional programming model in which functions are used to route and handle requests and contracts are designed for immutability. It is an alternative to the annotation-based programming model but otherwise runs on the same Reactive Core foundation.

In WebFlux.fn, an HTTP request is handled with a `HandlerFunction`: a function that takes `ServerRequest` and returns a delayed `ServerResponse` (i.e. `Mono<ServerResponse>`). Both the request and the response object have immutable contracts that offer JDK 8-friendly access to the HTTP request and response. `HandlerFunction` is the equivalent of the body of a `@RequestMapping` method in the annotation-based programming model.

Incoming requests are routed to a handler function with a `RouterFunction`: a function that takes `ServerRequest` and returns a delayed `HandlerFunction` (i.e. `Mono<HandlerFunction>`). The following shows the signature of an example handler function:

```
public Mono<ServerResponse> listPeople(ServerRequest request) {
	// ...
}
```

One way to run a `RouterFunction` is to turn it into an `HttpHandler` and install it through one of the built-in server adapters. Most applications can run through the WebFlux Java configuration; see Running a Server.

`ServerRequest` and `ServerResponse` are immutable interfaces that offer JDK 8-friendly access to the HTTP request and response. Both request and response provide Reactive Streams back pressure against the body streams. The request body is represented with a Reactor `Flux` or `Mono`. The response body is represented with any Reactive Streams `Publisher`, including `Flux` and `Mono`. For more on that, see Reactive Libraries.

The following example extracts the request body to a `Mono<String>`:

```
Mono<String> string = request.bodyToMono(String.class);
```

```
val string = request.awaitBody<String>()
```

The following example extracts the body to a `Flux<Person>` (or a `Flow<Person>` in Kotlin), where `Person` objects are decoded from some serialized form, such as JSON or XML:

```
Flux<Person> people = request.bodyToFlux(Person.class);
```

```
val people = request.bodyToFlow<Person>()
```

The preceding examples are shortcuts that use the more general `ServerRequest.body(BodyExtractor)`, which accepts the `BodyExtractor` functional strategy interface. The utility class `BodyExtractors` provides access to a number of instances.
For example, the preceding examples can also be written as follows:

```
Mono<String> string = request.body(BodyExtractors.toMono(String.class));
Flux<Person> people = request.body(BodyExtractors.toFlux(Person.class));
```

```
val string = request.body(BodyExtractors.toMono(String::class.java)).awaitSingle()
val people = request.body(BodyExtractors.toFlux(Person::class.java)).asFlow()
```

The following example shows how to access form data:

```
Mono<MultiValueMap<String, String>> map = request.formData();
```

```
val map = request.awaitFormData()
```

The following example shows how to access multipart data as a map:

```
Mono<MultiValueMap<String, Part>> map = request.multipartData();
```

```
val map = request.awaitMultipartData()
```

Multipart data can also be accessed one part at a time, in streaming fashion. Note that the body contents of the `PartEvent` objects must be completely consumed, relayed, or released to avoid memory leaks.

The following example creates a JSON response body from a `Mono`:

```
Mono<Person> person = ...
ServerResponse.ok().contentType(MediaType.APPLICATION_JSON).body(person, Person.class);
```

Depending on the codec used, it is possible to pass hint parameters to customize how the body is serialized or deserialized. For example, to specify a Jackson JSON view:

```
ServerResponse.ok().hint(Jackson2CodecSupport.JSON_VIEW_HINT, MyJacksonView.class).body(...);
```

```
ServerResponse.ok().hint(Jackson2CodecSupport.JSON_VIEW_HINT, MyJacksonView::class.java).body(...)
```

Handler functions can be written as lambdas, as the following example shows:

```
HandlerFunction<ServerResponse> helloWorld =
	request -> ServerResponse.ok().bodyValue("Hello World");
```

```
val helloWorld = HandlerFunction<ServerResponse> {
	ServerResponse.ok().bodyValue("Hello World")
}
```

More typically, related handler functions are grouped into a handler class:

```
public class PersonHandler {

	private final PersonRepository repository;

	public PersonHandler(PersonRepository repository) {
		this.repository = repository;
	}

	public Mono<ServerResponse> listPeople(ServerRequest request) { (1)
		Flux<Person> people = repository.allPeople();
		return ok().contentType(APPLICATION_JSON).body(people, Person.class);
	}

	public Mono<ServerResponse> createPerson(ServerRequest request) { (2)
		Mono<Person> person = request.bodyToMono(Person.class);
		return ok().build(repository.savePerson(person));
	}

	public Mono<ServerResponse> getPerson(ServerRequest request) { (3)
		int personId = Integer.valueOf(request.pathVariable("id"));
		return repository.getPerson(personId)
			.flatMap(person -> ok().contentType(APPLICATION_JSON).bodyValue(person))
			.switchIfEmpty(ServerResponse.notFound().build());
	}
}
```

(2) `createPerson` is a handler function that stores a new `Person` contained in the request body. Note that `PersonRepository.savePerson(Person)` returns `Mono<Void>`: an empty `Mono` that emits a completion signal when the person has been read from the request and stored. So we use the `build(Publisher<Void>)` method to send a response when that completion signal is received (that is, when the `Person` has been saved).

```
class PersonHandler(private val repository: PersonRepository) {

	suspend fun listPeople(request: ServerRequest): ServerResponse { (1)
		val people: Flow<Person> = repository.allPeople()
		return ok().contentType(APPLICATION_JSON).bodyAndAwait(people)
	}

	suspend fun createPerson(request: ServerRequest): ServerResponse { (2)
		val person = request.awaitBody<Person>()
		repository.savePerson(person)
		return ok().buildAndAwait()
	}

	suspend fun getPerson(request: ServerRequest): ServerResponse { (3)
		val personId = request.pathVariable("id").toInt()
		return repository.getPerson(personId)?.let {
			ok().contentType(APPLICATION_JSON).bodyValueAndAwait(it)
		} ?: ServerResponse.notFound().buildAndAwait()
	}
}
```

(2) `createPerson` is a handler function that stores a new `Person` contained in the request body. Note that `PersonRepository.savePerson(Person)` is a suspending function with no return type.
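The handler methods shown above can then be grouped into a single router function. The following sketch assumes the `PersonHandler` and `PersonRepository` from the preceding example:

```
import static org.springframework.http.MediaType.APPLICATION_JSON;
import static org.springframework.web.reactive.function.server.RequestPredicates.accept;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;

PersonRepository repository = ...
PersonHandler handler = new PersonHandler(repository);

// Route GET and POST requests to the corresponding handler methods.
RouterFunction<ServerResponse> route = route()
	.GET("/person/{id}", accept(APPLICATION_JSON), handler::getPerson)
	.GET("/person", accept(APPLICATION_JSON), handler::listPeople)
	.POST("/person", handler::createPerson)
	.build();
```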
The following example shows a router function created with the `RouterFunctions.route()` builder:

```
RouterFunction<ServerResponse> route = RouterFunctions.route()
	.GET("/hello-world", accept(MediaType.TEXT_PLAIN),
		request -> ServerResponse.ok().bodyValue("Hello World")).build();
```

```
val route = coRouter {
	GET("/hello-world", accept(TEXT_PLAIN)) {
		ServerResponse.ok().bodyValueAndAwait("Hello World")
	}
}
```

## Running a Server

How do you run a router function in an HTTP server? A simple option is to convert a router function to an `HttpHandler` by using one of the following:

* `RouterFunctions.toHttpHandler(RouterFunction)`
* `RouterFunctions.toHttpHandler(RouterFunction, HandlerStrategies)`

You can then use the returned `HttpHandler` with a number of server adapters by following HttpHandler for server-specific instructions.

A more typical option, also used by Spring Boot, is to run with a `DispatcherHandler`-based setup through the WebFlux Config, which uses Spring configuration to declare the components required to process requests. The WebFlux Java configuration declares the following infrastructure components to support functional endpoints:

* `HandlerFunctionAdapter`: Simple adapter that lets `DispatcherHandler` invoke a `HandlerFunction` that was mapped to a request.
* `ServerResponseResultHandler`: Handles the result from the invocation of a `HandlerFunction` by invoking the `writeTo` method of the `ServerResponse`.

The preceding components let functional endpoints fit within the `DispatcherHandler` request processing lifecycle and also (potentially) run side by side with annotated controllers, if any are declared. It is also how functional endpoints are enabled by the Spring Boot WebFlux starter.

The following example shows a WebFlux Java configuration (see DispatcherHandler for how to run it):

```
@Configuration
@EnableWebFlux
public class WebConfig implements WebFluxConfigurer {

	@Override
	public void configureHttpMessageCodecs(ServerCodecConfigurer configurer) {
		// configure message conversion...
	}
}
```

```
@Configuration
@EnableWebFlux
class WebConfig : WebFluxConfigurer {

	override fun configureHttpMessageCodecs(configurer: ServerCodecConfigurer) {
		// configure message conversion...
	}
}
```

This section describes various options available in the Spring Framework to prepare URIs.

## View Technologies

The use of view technologies in Spring WebFlux is pluggable. Whether you decide to use Thymeleaf, FreeMarker, or some other view technology is primarily a matter of a configuration change. This chapter covers the view technologies integrated with Spring WebFlux. We assume you are already familiar with View Resolution.

## Thymeleaf

Thymeleaf is a modern server-side Java template engine that emphasizes natural HTML templates that can be previewed in a browser by double-clicking, which is very helpful for independent work on UI templates (for example, by a designer) without the need for a running server. Thymeleaf offers an extensive set of features, and it is actively developed and maintained. For a more complete introduction, see the Thymeleaf project home page.

The Thymeleaf integration with Spring WebFlux is managed by the Thymeleaf project. The configuration involves a few bean declarations, such as `SpringResourceTemplateResolver`, `SpringWebFluxTemplateEngine`, and `ThymeleafReactiveViewResolver`. For more details, see Thymeleaf+Spring and the WebFlux integration announcement.
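A sketch of those bean declarations might look as follows (the template location and suffix are illustrative; see the Thymeleaf documentation for the authoritative setup):

```
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.thymeleaf.spring5.SpringWebFluxTemplateEngine;
import org.thymeleaf.spring5.templateresolver.SpringResourceTemplateResolver;
import org.thymeleaf.spring5.view.reactive.ThymeleafReactiveViewResolver;

@Configuration
public class ThymeleafConfig implements ApplicationContextAware {

	private ApplicationContext applicationContext;

	@Override
	public void setApplicationContext(ApplicationContext applicationContext) {
		this.applicationContext = applicationContext;
	}

	@Bean
	public SpringResourceTemplateResolver templateResolver() {
		SpringResourceTemplateResolver resolver = new SpringResourceTemplateResolver();
		resolver.setApplicationContext(this.applicationContext);
		resolver.setPrefix("classpath:/templates/"); // illustrative location
		resolver.setSuffix(".html");
		return resolver;
	}

	@Bean
	public SpringWebFluxTemplateEngine templateEngine() {
		SpringWebFluxTemplateEngine engine = new SpringWebFluxTemplateEngine();
		engine.setTemplateResolver(templateResolver());
		return engine;
	}

	@Bean
	public ThymeleafReactiveViewResolver viewResolver() {
		ThymeleafReactiveViewResolver resolver = new ThymeleafReactiveViewResolver();
		resolver.setTemplateEngine(templateEngine());
		return resolver;
	}
}
```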
## FreeMarker

### View Configuration

With the configuration shown below, a view name of `freemarker/welcome` resolves to the template at `classpath:/templates/freemarker/welcome.ftl`.

### FreeMarker Configuration

The following example declares a `FreeMarkerConfigurer` bean and exposes a shared `xml_escape` variable to templates:

```
@Configuration
public class FreeMarkerConfig {

	@Bean
	public FreeMarkerConfigurer freeMarkerConfigurer() {
		Map<String, Object> variables = new HashMap<>();
		variables.put("xml_escape", new XmlEscape());

		FreeMarkerConfigurer configurer = new FreeMarkerConfigurer();
		configurer.setTemplateLoaderPath("classpath:/templates");
		configurer.setFreemarkerVariables(variables);
		return configurer;
	}
}
```

```
@Configuration
class FreeMarkerConfig {

	@Bean
	fun freeMarkerConfigurer() = FreeMarkerConfigurer().apply {
		setTemplateLoaderPath("classpath:/templates")
		setFreemarkerVariables(mapOf("xml_escape" to XmlEscape()))
	}
}
```

### Form Handling

# The Bind Macros

Spring maintains a set of bind macros in the `org.springframework.web.reactive.result.view.freemarker` package. For additional details on binding support, see Simple Binding for Spring MVC.

# Form Macros

For details on Spring’s form macro support for FreeMarker templates, consult the Spring MVC documentation.

## Script Views

### Requirements

### Script Templates

The `render` function is called with the template content, the view model, and a rendering context argument.

`polyfill.js` defines only the `window` object needed by Handlebars to run properly, as the following snippet shows:

`var window = {};`

This basic `render.js` implementation compiles the template before using it. A production-ready implementation should also store and reuse cached templates or pre-compiled templates. This can be done on the script side, along with any customization you need (managing template engine configuration, for example).

## JSON and XML

For Content Negotiation purposes, it is useful to be able to alternate between rendering a model with an HTML template or as other formats (such as JSON or XML), depending on the content type requested by the client. To support doing so, Spring WebFlux provides the `HttpMessageWriterView`, which you can use to plug in any of the available Codecs from `spring-web`, such as `Jackson2JsonEncoder`, `Jackson2SmileEncoder`, or `Jaxb2XmlEncoder`.

Unlike other view technologies, `HttpMessageWriterView` does not require a `ViewResolver` but is instead configured as a default view. You can configure one or more such default views, wrapping different `HttpMessageWriter` instances or `Encoder` instances. The one that matches the requested content type is used at runtime.

In most cases, a model contains multiple attributes. To determine which one to serialize, you can configure `HttpMessageWriterView` with the name of the model attribute to use for rendering. If the model contains only one attribute, that one is used.

The WebFlux Java configuration declares the components that are required to process requests with annotated controllers or functional endpoints, and it offers an API to customize the configuration. That means you do not need to understand the underlying beans created by the Java configuration. However, if you want to understand them, you can see them in `WebFluxConfigurationSupport` or read more about what they are in Special Bean Types.

For more advanced customizations, not available in the configuration API, you can gain full control over the configuration through the Advanced Configuration Mode.

## Enabling WebFlux Config

You can use the `@EnableWebFlux` annotation in your Java config, as the following example shows:

```
@Configuration
@EnableWebFlux
class WebConfig
```

The preceding example registers a number of Spring WebFlux infrastructure beans and adapts to dependencies available on the classpath — for JSON, XML, and others.
## WebFlux config API

In your Java configuration, you can implement the `WebFluxConfigurer` interface, as the preceding sketch shows.

## Conversion, formatting

By default, formatters for various number and date types are installed. To customize the formatters and converters that are used, override the `addFormatters` method of `WebFluxConfigurer`.

## Validation

By default, if Bean Validation is present on the classpath (for example, Hibernate Validator), a global validator is registered for use with `@Valid` and `@Validated` on `@Controller` method arguments. You can customize the validator instance by overriding the `getValidator()` method.

## Content Type Resolvers

You can configure how Spring WebFlux determines the requested media types for `@Controller` instances from the request. By default, only the `Accept` header is checked, but you can also enable a query parameter-based strategy. The following example shows how to customize the requested content type resolution:

```
@Override
public void configureContentTypeResolver(RequestedContentTypeResolverBuilder builder) {
	// ...
}
```

```
override fun configureContentTypeResolver(builder: RequestedContentTypeResolverBuilder) {
	// ...
}
```

## HTTP message codecs

The following example shows how to customize how the request and response body are read and written:

```
override fun configureHttpMessageCodecs(configurer: ServerCodecConfigurer) {
	// ...
}
```

`ServerCodecConfigurer` provides a set of default readers and writers. You can use it to add more readers and writers, customize the default ones, or replace the default ones completely.

For Jackson JSON and XML, consider using `Jackson2ObjectMapperBuilder`, which customizes Jackson's default properties and automatically registers the following well-known modules when they are detected on the classpath:

* `jackson-datatype-joda`: Support for Joda-Time types.
* `jackson-datatype-jsr310`: Support for Java 8 Date and Time API types.
* `jackson-datatype-jdk8`: Support for other Java 8 types, such as `Optional`.

## View Resolvers

The following example shows how to configure view resolution:

```
override fun configureViewResolvers(registry: ViewResolverRegistry) {
	// ...
}
```

The `ViewResolverRegistry` has shortcuts for view technologies with which the Spring Framework integrates. The following example uses FreeMarker (which also requires configuring the underlying FreeMarker view technology):

```
override fun configureViewResolvers(registry: ViewResolverRegistry) {
	registry.freeMarker()
}

@Bean
fun freeMarkerConfigurer() = FreeMarkerConfigurer().apply {
	setTemplateLoaderPath("classpath:/templates")
}
```

You can also plug in any `ViewResolver` implementation, as the following example shows:

```
@Override
public void configureViewResolvers(ViewResolverRegistry registry) {
	ViewResolver resolver = ... ;
	registry.viewResolver(resolver);
}
```

```
override fun configureViewResolvers(registry: ViewResolverRegistry) {
	val resolver: ViewResolver = ...
	registry.viewResolver(resolver)
}
```

To support Content Negotiation and rendering other formats through view resolution (besides HTML), you can configure one or more default views based on the `HttpMessageWriterView` implementation, which accepts any of the available Codecs from `spring-web`. The following example shows how to do so:

```
@Override
public void configureViewResolvers(ViewResolverRegistry registry) {
	registry.freeMarker();
	Jackson2JsonEncoder encoder = new Jackson2JsonEncoder();
	registry.defaultViews(new HttpMessageWriterView(encoder));
}
```

```
override fun configureViewResolvers(registry: ViewResolverRegistry) {
	registry.freeMarker()
	val encoder = Jackson2JsonEncoder()
	registry.defaultViews(HttpMessageWriterView(encoder))
}
```

See View Technologies for more on the view technologies that are integrated with Spring WebFlux.

## Static Resources

```
@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
	registry.addResourceHandler("/resources/**")
			.addResourceLocations("/public", "classpath:/static/")
			.setCacheControl(CacheControl.maxAge(365, TimeUnit.DAYS));
}
```

To serve versioned resource URLs, you can use a `VersionResourceResolver`. A content-based version strategy (MD5 hash) is a good choice, with some notable exceptions (such as JavaScript resources used with a module loader). You can use `ResourceUrlProvider` to rewrite URLs and apply the full chain of resolvers and transformers (for example, to insert versions).
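A sketch of such a version-resolver registration in Java configuration (assuming resources under `classpath:/static/`; the URL patterns are illustrative):

```
@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
	registry.addResourceHandler("/resources/**")
			.addResourceLocations("classpath:/static/")
			// content-based versions: resource URLs carry an MD5 hash of the file content
			.resourceChain(true)
			.addResolver(new VersionResourceResolver().addContentVersionStrategy("/**"));
}
```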
The WebFlux configuration provides a `ResourceUrlProvider` so that it can be injected into other components. Unlike Spring MVC, at present, in WebFlux, there is no way to transparently rewrite static resource URLs, since there are no view technologies that can make use of a non-blocking chain of resolvers and transformers. When serving only local resources, the workaround is to use `ResourceUrlProvider` directly (for example, through a custom element) and block.

Note that, when using both `EncodedResourceResolver` (for example, for Gzip or Brotli encoded resources) and `VersionResourceResolver`, they must be registered in that order, to ensure content-based versions are always computed reliably based on the unencoded file.

For WebJars, versioned URLs are also supported through the `WebJarsResourceResolver`, which is automatically registered when the `org.webjars:webjars-locator-core` library is present on the classpath.

## Path Matching

You can customize options related to path matching. For details on the individual options, see the `PathMatchConfigurer` javadoc. The following example shows how to use `PathMatchConfigurer`:

```
@Override
public void configurePathMatch(PathMatchConfigurer configurer) {
	configurer
		.setUseCaseSensitiveMatch(true)
		.addPathPrefix("/api", HandlerTypePredicate.forAnnotation(RestController.class));
}
```

```
override fun configurePathMatch(configurer: PathMatchConfigurer) {
	configurer
		.setUseCaseSensitiveMatch(true)
		.addPathPrefix("/api", HandlerTypePredicate.forAnnotation(RestController::class.java))
}
```

## WebSocketService

The WebFlux Java config declares a `WebSocketHandlerAdapter` bean, which provides support for the invocation of WebSocket handlers. That means all that remains to do in order to handle a WebSocket handshake request is to map a `WebSocketHandler` to a URL via `SimpleUrlHandlerMapping`.

In some cases, it may be necessary to create the `WebSocketHandlerAdapter` bean with a provided `WebSocketService`, which allows configuring WebSocket server properties. For example:

```
@Override
public WebSocketService getWebSocketService() {
	TomcatRequestUpgradeStrategy strategy = new TomcatRequestUpgradeStrategy();
	strategy.setMaxSessionIdleTimeout(0L);
	return new HandshakeWebSocketService(strategy);
}
```

## Advanced Configuration Mode

`@EnableWebFlux` imports `DelegatingWebFluxConfiguration`, which:

* Provides default Spring configuration for WebFlux applications
* Detects and delegates to `WebFluxConfigurer` implementations to customize that configuration

For advanced mode, you can remove `@EnableWebFlux` and extend directly from `DelegatingWebFluxConfiguration` instead of implementing `WebFluxConfigurer`, as the following example shows:

```
@Configuration
public class WebConfig extends DelegatingWebFluxConfiguration {
}
```

```
@Configuration
class WebConfig : DelegatingWebFluxConfiguration {
}
```

Spring WebFlux includes a client to perform HTTP requests with. `WebClient` has a functional, fluent API based on Reactor, see Reactive Libraries, which enables declarative composition of asynchronous logic without the need to deal with threads or concurrency. It is fully non-blocking, it supports streaming, and relies on the same codecs that are also used to encode and decode request and response content on the server side.

`WebClient` needs an HTTP client library to perform requests with. There is built-in support for the following:

* Reactor Netty
* JDK `HttpClient`
* Jetty Reactive `HttpClient`
* Apache HttpComponents

Others can be plugged in via `ClientHttpConnector`.

The simplest way to create a `WebClient` is through one of the static factory methods:

* `WebClient.create()`
* `WebClient.create(String baseUrl)`

You can also use `WebClient.builder()` with further options:

* `uriBuilderFactory`: Customized `UriBuilderFactory` to use as a base URL.
* `defaultUriVariables`: default values to use when expanding URI templates.
* `defaultHeader`: Headers for every request.
* `defaultCookie`: Cookies for every request.
* `defaultRequest`: `Consumer` to customize every request.
* `filter`: Client filter for every request.
* `exchangeStrategies` : HTTP message reader/writer customizations. * `clientConnector` : HTTP client library settings. * `observationRegistry` : the registry to use for enabling Observability support. * ``` observationConvention ``` : an optional, custom convention to extract metadata for recorded observations. For example: ``` WebClient client = WebClient.builder() .codecs(configurer -> ... ) .build(); ``` ``` val webClient = WebClient.builder() .codecs { configurer -> ... } .build() ``` Once built, a `WebClient` is immutable. However, you can clone it and build a modified copy as follows: ``` WebClient client1 = WebClient.builder() .filter(filterA).filter(filterB).build(); WebClient client2 = client1.mutate() .filter(filterC).filter(filterD).build(); ``` val client1 = WebClient.builder() .filter(filterA).filter(filterB).build() val client2 = client1.mutate() .filter(filterC).filter(filterD).build() ## MaxInMemorySize Codecs have limits for buffering data in memory to avoid application memory issues. By default those are set to 256KB. If that’s not enough you’ll get the following error: > org.springframework.core.io.buffer.DataBufferLimitException: Exceeded limit on max bytes to buffer To change the limit for default codecs, use the following: ``` WebClient webClient = WebClient.builder() .codecs(configurer -> configurer.defaultCodecs().maxInMemorySize(2 * 1024 * 1024)) .build(); ``` ``` val webClient = WebClient.builder() .codecs { configurer -> configurer.defaultCodecs().maxInMemorySize(2 * 1024 * 1024) } .build() ``` ## Reactor Netty To customize Reactor Netty settings, provide a pre-configured `HttpClient` : ``` HttpClient httpClient = HttpClient.create().secure(sslSpec -> ...); ``` val httpClient = HttpClient.create().secure { ... } val webClient = WebClient.builder() .clientConnector(ReactorClientHttpConnector(httpClient)) .build() ``` ### Resources By default, `HttpClient` participates in the global Reactor Netty resources held in ``` reactor.netty.http.HttpResources ``` , including event loop threads and a connection pool. This is the recommended mode, since fixed, shared resources are preferred for event loop concurrency. In this mode global resources remain active until the process exits. If the server is timed with the process, there is typically no need for an explicit shutdown. However, if the server can start or stop in-process (for example, a Spring MVC application deployed as a WAR), you can declare a Spring-managed bean of type ``` ReactorResourceFactory ``` with `globalResources=true` (the default) to ensure that the Reactor Netty global resources are shut down when the Spring `ApplicationContext` is closed, as the following example shows: ``` @Bean public ReactorResourceFactory reactorResourceFactory() { return new ReactorResourceFactory(); } ``` ``` @Bean fun reactorResourceFactory() = ReactorResourceFactory() ``` You can also choose not to participate in the global Reactor Netty resources. However, in this mode, the burden is on you to ensure that all Reactor Netty client and server instances use shared resources, as the following example shows: ``` @Bean public ReactorResourceFactory resourceFactory() { ReactorResourceFactory factory = new ReactorResourceFactory(); factory.setUseGlobalResources(false); (1) return factory; } Function<HttpClient, HttpClient> mapper = client -> { // Further customizations... 
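	// for example (hypothetical customization): enable wire-level logging
	// return client.wiretap(true);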
}; ClientHttpConnector connector = new ReactorClientHttpConnector(resourceFactory(), mapper); (2) return WebClient.builder().clientConnector(connector).build(); (3) } ``` ``` @Bean fun resourceFactory() = ReactorResourceFactory().apply { isUseGlobalResources = false (1) } val mapper: (HttpClient) -> HttpClient = { // Further customizations... } val connector = ReactorClientHttpConnector(resourceFactory(), mapper) (2) return WebClient.builder().clientConnector(connector).build() (3) } ``` ### Timeouts To configure a connection timeout: ``` import io.netty.channel.ChannelOption; HttpClient httpClient = HttpClient.create() .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10000); ``` import io.netty.channel.ChannelOption val httpClient = HttpClient.create() .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10000); To configure a read or write timeout: ``` import io.netty.handler.timeout.ReadTimeoutHandler; import io.netty.handler.timeout.WriteTimeoutHandler; HttpClient httpClient = HttpClient.create() .doOnConnected(conn -> conn .addHandlerLast(new ReadTimeoutHandler(10)) .addHandlerLast(new WriteTimeoutHandler(10))); ``` import io.netty.handler.timeout.ReadTimeoutHandler import io.netty.handler.timeout.WriteTimeoutHandler val httpClient = HttpClient.create() .doOnConnected { conn -> conn .addHandlerLast(ReadTimeoutHandler(10)) .addHandlerLast(WriteTimeoutHandler(10)) } To configure a response timeout for all requests: To configure a response timeout for a specific request: ``` WebClient.create().get() .uri("https://example.org/path") .httpRequest(httpRequest -> { HttpClientRequest reactorRequest = httpRequest.getNativeRequest(); reactorRequest.responseTimeout(Duration.ofSeconds(2)); }) .retrieve() .bodyToMono(String.class); ``` ``` WebClient.create().get() .uri("https://example.org/path") .httpRequest { httpRequest: ClientHttpRequest -> val reactorRequest = httpRequest.getNativeRequest<HttpClientRequest>() reactorRequest.responseTimeout(Duration.ofSeconds(2)) } .retrieve() .bodyToMono(String::class.java) ``` ## JDK HttpClient The following example shows how to customize the JDK `HttpClient` : ClientHttpConnector connector = new JdkClientHttpConnector(httpClient, new DefaultDataBufferFactory()); val connector = JdkClientHttpConnector(httpClient, DefaultDataBufferFactory()) val webClient = WebClient.builder().clientConnector(connector).build() ``` ## Jetty The following example shows how to customize Jetty `HttpClient` settings: ``` HttpClient httpClient = new HttpClient(); httpClient.setCookieStore(...); WebClient webClient = WebClient.builder() .clientConnector(new JettyClientHttpConnector(httpClient)) .build(); ``` ``` val httpClient = HttpClient() httpClient.cookieStore = ... By default, `HttpClient` creates its own resources ( `Executor` , `ByteBufferPool` , `Scheduler` ), which remain active until the process exits or `stop()` is called. You can share resources between multiple instances of the Jetty client (and server) and ensure that the resources are shut down when the Spring `ApplicationContext` is closed by declaring a Spring-managed bean of type `JettyResourceFactory` , as the following example shows: ``` @Bean public JettyResourceFactory resourceFactory() { return new JettyResourceFactory(); } HttpClient httpClient = new HttpClient(); // Further customizations... 
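// for example (hypothetical tuning): httpClient.setMaxConnectionsPerDestination(128);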
ClientHttpConnector connector = new JettyClientHttpConnector(httpClient, resourceFactory()); (1) return WebClient.builder().clientConnector(connector).build(); (2) } ``` ``` @Bean fun resourceFactory() = JettyResourceFactory() val httpClient = HttpClient() // Further customizations... val connector = JettyClientHttpConnector(httpClient, resourceFactory()) (1) return WebClient.builder().clientConnector(connector).build() (2) } ``` ## HttpComponents The following example shows how to customize Apache HttpComponents `HttpClient` settings: ``` HttpAsyncClientBuilder clientBuilder = HttpAsyncClients.custom(); clientBuilder.setDefaultRequestConfig(...); CloseableHttpAsyncClient client = clientBuilder.build(); ClientHttpConnector connector = new HttpComponentsClientHttpConnector(client); ``` val client = HttpAsyncClients.custom().apply { setDefaultRequestConfig(...) }.build() val connector = HttpComponentsClientHttpConnector(client) val webClient = WebClient.builder().clientConnector(connector).build() ``` # retrieve() `retrieve()` The `retrieve()` method can be used to declare how to extract the response. For example: Mono<ResponseEntity<Person>> result = client.get() .uri("/persons/{id}", id).accept(MediaType.APPLICATION_JSON) .retrieve() .toEntity(Person.class); ``` val result = client.get() .uri("/persons/{id}", id).accept(MediaType.APPLICATION_JSON) .retrieve() .toEntity<Person>().awaitSingle() ``` Or to get only the body: Mono<Person> result = client.get() .uri("/persons/{id}", id).accept(MediaType.APPLICATION_JSON) .retrieve() .bodyToMono(Person.class); ``` val result = client.get() .uri("/persons/{id}", id).accept(MediaType.APPLICATION_JSON) .retrieve() .awaitBody<Person>() ``` To get a stream of decoded objects: ``` Flux<Quote> result = client.get() .uri("/quotes").accept(MediaType.TEXT_EVENT_STREAM) .retrieve() .bodyToFlux(Quote.class); ``` ``` val result = client.get() .uri("/quotes").accept(MediaType.TEXT_EVENT_STREAM) .retrieve() .bodyToFlow<Quote>() ``` By default, 4xx or 5xx responses result in an , including sub-classes for specific HTTP status codes. To customize the handling of error responses, use `onStatus` handlers as follows: ``` Mono<Person> result = client.get() .uri("/persons/{id}", id).accept(MediaType.APPLICATION_JSON) .retrieve() .onStatus(HttpStatus::is4xxClientError, response -> ...) .onStatus(HttpStatus::is5xxServerError, response -> ...) .bodyToMono(Person.class); ``` ``` val result = client.get() .uri("/persons/{id}", id).accept(MediaType.APPLICATION_JSON) .retrieve() .onStatus(HttpStatus::is4xxClientError) { ... } .onStatus(HttpStatus::is5xxServerError) { ... 
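		// build and return a Mono<Throwable> describing the error response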
} .awaitBody<Person>() ``` The `exchangeToMono()` and `exchangeToFlux()` methods (or `awaitExchange { }` and `exchangeToFlow { }` in Kotlin) are useful for more advanced cases that require more control, such as to decode the response differently depending on the response status: ``` Mono<Person> entityMono = client.get() .uri("/persons/1") .accept(MediaType.APPLICATION_JSON) .exchangeToMono(response -> { if (response.statusCode().equals(HttpStatus.OK)) { return response.bodyToMono(Person.class); } else { // Turn to error return response.createError(); } }); ``` ``` val entity = client.get() .uri("/persons/1") .accept(MediaType.APPLICATION_JSON) .awaitExchange { if (response.statusCode() == HttpStatus.OK) { return response.awaitBody<Person>() } else { throw response.createExceptionAndAwait() } } ``` When using the above, after the returned `Mono` or `Flux` completes, the response body is checked and if not consumed it is released to prevent memory and connection leaks. Therefore the response cannot be decoded further downstream. It is up to the provided function to declare how to decode the response if needed. The request body can be encoded from any asynchronous type handled by , like `Mono` or Kotlin Coroutines `Deferred` as the following example shows: ``` Mono<Person> personMono = ... ; ``` val personDeferred: Deferred<Person> = ... You can also have a stream of objects be encoded, as the following example shows: ``` Flux<Person> personFlux = ... ; Mono<Void> result = client.post() .uri("/persons/{id}", id) .contentType(MediaType.APPLICATION_STREAM_JSON) .body(personFlux, Person.class) .retrieve() .bodyToMono(Void.class); ``` ``` val people: Flow<Person> = ... Alternatively, if you have the actual value, you can use the `bodyValue` shortcut method, as the following example shows: ``` Person person = ... ; To send form data, you can provide a as the body. Note that the content is automatically set to by the ``` MultiValueMap<String, String> formData = ... ; Mono<Void> result = client.post() .uri("/path", id) .bodyValue(formData) .retrieve() .bodyToMono(Void.class); ``` ``` val formData: MultiValueMap<String, String> = ... client.post() .uri("/path", id) .bodyValue(formData) .retrieve() .awaitBody<Unit>() ``` You can also supply form data in-line by using `BodyInserters` , as the following example shows: client.post() .uri("/path", id) .body(fromFormData("k1", "v1").with("k2", "v2")) .retrieve() .awaitBody<Unit>() ``` ## Multipart Data whose values are either `Object` instances that represent part content or `HttpEntity` instances that represent the content and headers for a part. `MultipartBodyBuilder` provides a convenient API to prepare a multipart request. The following example shows how to create a ``` MultipartBodyBuilder builder = new MultipartBodyBuilder(); builder.part("fieldPart", "fieldValue"); builder.part("filePart1", new FileSystemResource("...logo.png")); builder.part("jsonPart", new Person("Jason")); builder.part("myPart", part); // Part from a server request MultiValueMap<String, HttpEntity<?>> parts = builder.build(); ``` ``` val builder = MultipartBodyBuilder().apply { part("fieldPart", "fieldValue") part("filePart1", FileSystemResource("...logo.png")) part("jsonPart", Person("Jason")) part("myPart", part) // Part from a server request } val parts = builder.build() ``` In most cases, you do not have to specify the `Content-Type` for each part. 
The content type is determined automatically based on the `HttpMessageWriter` chosen to serialize it or, in the case of a `Resource`, based on the file extension. If necessary, you can explicitly provide the `MediaType` to use for each part through one of the overloaded builder `part` methods.

Once a `MultiValueMap` is prepared, the easiest way to pass it to the `WebClient` is through the `body` method, as the following example shows:

```
MultipartBodyBuilder builder = ...;

Mono<Void> result = client.post()
	.uri("/path", id)
	.body(builder.build())
	.retrieve()
	.bodyToMono(Void.class);
```

```
val builder: MultipartBodyBuilder = ...

client.post()
	.uri("/path", id)
	.body(builder.build())
	.retrieve()
	.awaitBody<Unit>()
```

If the `MultiValueMap` contains at least one non-`String` value, which could also represent regular form data (that is, `application/x-www-form-urlencoded`), you need not set the `Content-Type` to `multipart/form-data`. This is always the case when using `MultipartBodyBuilder`, which ensures an `HttpEntity` wrapper.

As an alternative to `MultipartBodyBuilder`, you can also provide multipart content, inline-style, through the built-in `BodyInserters`, as the following example shows:

```
Mono<Void> result = client.post()
	.uri("/path", id)
	.body(fromMultipartData("fieldPart", "value").with("filePart", resource))
	.retrieve()
	.bodyToMono(Void.class);
```

```
client.post()
	.uri("/path", id)
	.body(fromMultipartData("fieldPart", "value").with("filePart", resource))
	.retrieve()
	.awaitBody<Unit>()
```

`PartEvent`

To stream multipart data sequentially, you can provide multipart content through `PartEvent` objects:

* Form fields can be created via `FormPartEvent::create`.
* File uploads can be created via `FilePartEvent::create`.

You can concatenate the streams returned from those methods via `Flux::concat` and create a request for the `WebClient`. For instance, this sample will POST a multipart form containing a form field and a file:

```
Resource resource = ...

Mono<String> result = webClient
	.post()
	.uri("https://example.com")
	.body(Flux.concat(
			FormPartEvent.create("field", "field value"),
			FilePartEvent.create("file", resource)
	), PartEvent.class)
	.retrieve()
	.bodyToMono(String.class);
```

```
var resource: Resource = ...

var result: Mono<String> = webClient
	.post()
	.uri("https://example.com")
	.body(
		Flux.concat(
			FormPartEvent.create("field", "field value"),
			FilePartEvent.create("file", resource)
		)
	)
	.retrieve()
	.bodyToMono()
```

On the server side, `PartEvent` objects that are received via `@RequestBody` or `ServerRequest::bodyToFlux(PartEvent.class)` can be relayed to another service via the `WebClient`.

You can register a client filter (`ExchangeFilterFunction`) through the `WebClient.Builder` in order to intercept and modify requests, as the following example shows:

```
WebClient client = WebClient.builder()
	.filter((request, next) -> {
		ClientRequest filtered = ClientRequest.from(request)
			.header("foo", "bar")
			.build();
		return next.exchange(filtered);
	})
	.build();
```

```
val client = WebClient.builder()
	.filter { request, next ->
		val filtered = ClientRequest.from(request)
			.header("foo", "bar")
			.build()
		next.exchange(filtered)
	}
	.build()
```

This can be used for cross-cutting concerns, such as authentication.
The following example uses a filter for basic authentication through a static factory method: WebClient client = WebClient.builder() .filter(basicAuthentication("user", "password")) .build(); ``` ``` import org.springframework.web.reactive.function.client.ExchangeFilterFunctions.basicAuthentication val client = WebClient.builder() .filter(basicAuthentication("user", "password")) .build() ``` Filters can be added or removed by mutating an existing `WebClient` instance, resulting in a new `WebClient` instance that does not affect the original one. For example: WebClient client = webClient.mutate() .filters(filterList -> { filterList.add(0, basicAuthentication("user", "password")); }) .build(); ``` ``` val client = webClient.mutate() .filters { it.add(0, basicAuthentication("user", "password")) } .build() ``` `WebClient` is a thin facade around the chain of filters followed by an `ExchangeFunction` . It provides a workflow to make requests, to encode to and from higher level objects, and it helps to ensure that response content is always consumed. When filters handle the response in some way, extra care must be taken to always consume its content or to otherwise propagate it downstream to the `WebClient` which will ensure the same. Below is a filter that handles the `UNAUTHORIZED` status code but ensures that any response content, whether expected or not, is released: ``` public ExchangeFilterFunction renewTokenFilter() { return (request, next) -> next.exchange(request).flatMap(response -> { if (response.statusCode().value() == HttpStatus.UNAUTHORIZED.value()) { return response.releaseBody() .then(renewToken()) .flatMap(token -> { ClientRequest newRequest = ClientRequest.from(request).build(); return next.exchange(newRequest); }); } else { return Mono.just(response); } }); } ``` ``` fun renewTokenFilter(): ExchangeFilterFunction? { return ExchangeFilterFunction { request: ClientRequest?, next: ExchangeFunction -> next.exchange(request!!).flatMap { response: ClientResponse -> if (response.statusCode().value() == HttpStatus.UNAUTHORIZED.value()) { return@flatMap response.releaseBody() .then(renewToken()) .flatMap { token: String? -> val newRequest = ClientRequest.from(request).build() next.exchange(newRequest) } } else { return@flatMap Mono.just(response) } } } } ``` You can add attributes to a request. This is convenient if you want to pass information through the filter chain and influence the behavior of filters for a given request. For example: ``` WebClient client = WebClient.builder() .filter((request, next) -> { Optional<Object> usr = request.attribute("myAttribute"); // ... }) .build(); client.get().uri("https://example.org/") .attribute("myAttribute", "...") .retrieve() .bodyToMono(Void.class); ``` val client = WebClient.builder() .filter { request, _ -> val usr = request.attributes()["myAttribute"]; // ... } .build() client.get().uri("https://example.org/") .attribute("myAttribute", "...") .retrieve() .awaitBody<Unit>() ``` Note that you can configure a `defaultRequest` callback globally at the `WebClient.Builder` level which lets you insert attributes into all requests, which could be used for example in a Spring MVC application to populate request attributes based on `ThreadLocal` data. Attributes provide a convenient way to pass information to the filter chain but they only influence the current request. If you want to pass information that propagates to additional requests that are nested, e.g. via `flatMap` , or executed after, e.g. 
via `concatMap` , then you’ll need to use the Reactor `Context` . The Reactor `Context` needs to be populated at the end of a reactive chain in order to apply to all operations. For example: ``` WebClient client = WebClient.builder() .filter((request, next) -> Mono.deferContextual(contextView -> { String value = contextView.get("foo"); // ... })) .build(); client.get().uri("https://example.org/") .retrieve() .bodyToMono(String.class) .flatMap(body -> { // perform nested request (context propagates automatically)... }) .contextWrite(context -> context.put("foo", ...)); ``` `WebClient` can be used in synchronous style by blocking at the end for the result: ``` Person person = client.get().uri("/person/{id}", i).retrieve() .bodyToMono(Person.class) .block(); List<Person> persons = client.get().uri("/persons").retrieve() .bodyToFlux(Person.class) .collectList() .block(); ``` ``` val person = runBlocking { client.get().uri("/person/{id}", i).retrieve() .awaitBody<Person>() } val persons = runBlocking { client.get().uri("/persons").retrieve() .bodyToFlow<Person>() .toList() } ``` However if multiple calls need to be made, it’s more efficient to avoid blocking on each response individually, and instead wait for the combined result: ``` Mono<Person> personMono = client.get().uri("/person/{id}", personId) .retrieve().bodyToMono(Person.class); Mono<List<Hobby>> hobbiesMono = client.get().uri("/person/{id}/hobbies", personId) .retrieve().bodyToFlux(Hobby.class).collectList(); Map<String, Object> data = Mono.zip(personMono, hobbiesMono, (person, hobbies) -> { Map<String, String> map = new LinkedHashMap<>(); map.put("person", person); map.put("hobbies", hobbies); return map; }) .block(); ``` ``` val data = runBlocking { val personDeferred = async { client.get().uri("/person/{id}", personId) .retrieve().awaitBody<Person>() } val hobbiesDeferred = async { client.get().uri("/person/{id}/hobbies", personId) .retrieve().bodyToFlow<Hobby>().toList() } mapOf("person" to personDeferred.await(), "hobbies" to hobbiesDeferred.await()) } ``` The above is merely one example. There are lots of other patterns and operators for putting together a reactive pipeline that makes many remote calls, potentially some nested, interdependent, without ever blocking until the end. To test code that uses the `WebClient` , you can use a mock web server, such as the OkHttp MockWebServer. To see an example of its use, check out ``` WebClientIntegrationTests ``` in the Spring Framework test suite or the `static-server` sample in the OkHttp repository. This part of the reference documentation covers support for reactive-stack WebSocket messaging. ## WebSocket API ### Server To create a WebSocket server, you can first create a `WebSocketHandler` . The following example shows how to do so: ``` import org.springframework.web.reactive.socket.WebSocketHandler; import org.springframework.web.reactive.socket.WebSocketSession; public class MyWebSocketHandler implements WebSocketHandler { @Override public Mono<Void> handle(WebSocketSession session) { // ... 
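		// compose the inbound (session.receive()) and outbound (session.send(...))
		// streams and return a Mono<Void> that completes when handling is done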
} } ``` ``` import org.springframework.web.reactive.socket.WebSocketHandler import org.springframework.web.reactive.socket.WebSocketSession class MyWebSocketHandler : WebSocketHandler { Then you can map it to a URL: @Bean public HandlerMapping handlerMapping() { Map<String, WebSocketHandler> map = new HashMap<>(); map.put("/path", new MyWebSocketHandler()); int order = -1; // before annotated controllers return new SimpleUrlHandlerMapping(map, order); } } ``` @Bean fun handlerMapping(): HandlerMapping { val map = mapOf("/path" to MyWebSocketHandler()) val order = -1 // before annotated controllers return SimpleUrlHandlerMapping(map, order) } } ``` If using the WebFlux Config there is nothing further to do, or otherwise if not using the WebFlux config you’ll need to declare a as shown below: @Bean public WebSocketHandlerAdapter handlerAdapter() { return new WebSocketHandlerAdapter(); } } ``` @Bean fun handlerAdapter() = WebSocketHandlerAdapter() } ``` `WebSocketHandler` The `handle` method of `WebSocketHandler` takes `WebSocketSession` and returns `Mono<Void>` to indicate when application handling of the session is complete. The session is handled through two streams, one for inbound and one for outbound messages. The following table describes the two methods that handle the streams: WebSocketSession method Description Provides access to the inbound message stream and completes when the connection is closed. Takes a source for outgoing messages, writes the messages, and returns a A `WebSocketHandler` must compose the inbound and outbound streams into a unified flow and return a `Mono<Void>` that reflects the completion of that flow. Depending on application requirements, the unified flow completes when: Either the inbound or the outbound message stream completes. * The inbound stream completes (that is, the connection closed), while the outbound stream is infinite. * At a chosen point, through the `close` method of `WebSocketSession` . When inbound and outbound message streams are composed together, there is no need to check if the connection is open, since Reactive Streams signals end activity. The inbound stream receives a completion or error signal, and the outbound stream receives a cancellation signal. The most basic implementation of a handler is one that handles the inbound stream. The following example shows such an implementation: @Override public Mono<Void> handle(WebSocketSession session) { return session.receive() (1) .doOnNext(message -> { // ... (2) }) .concatMap(message -> { // ... (3) }) .then(); (4) } } ``` override fun handle(session: WebSocketSession): Mono<Void> { return session.receive() (1) .doOnNext { // ... (2) } .concatMap { // ... (3) } .then() (4) } } ``` For nested, asynchronous operations, you may need to call | | --- | The following implementation combines the inbound and outbound streams: Flux<WebSocketMessage> output = session.receive() (1) .doOnNext(message -> { // ... }) .concatMap(message -> { // ... }) .map(value -> session.textMessage("Echo " + value)); (2) return session.send(output); (3) } } ``` val output = session.receive() (1) .doOnNext { // ... } .concatMap { // ... } .map { session.textMessage("Echo $it") } (2) return session.send(output) (3) } } ``` Inbound and outbound streams can be independent and be joined only for completion, as the following example shows: Mono<Void> input = session.receive() (1) .doOnNext(message -> { // ... }) .concatMap(message -> { // ... }) .then(); Flux<String> source = ... 
; Mono<Void> output = session.send(source.map(session::textMessage)); (2) return Mono.zip(input, output).then(); (3) } } ``` val input = session.receive() (1) .doOnNext { // ... } .concatMap { // ... } .then() val source: Flux<String> = ... val output = session.send(source.map(session::textMessage)) (2) return Mono.zip(input, output).then() (3) } } ``` `DataBuffer` `DataBuffer` is the representation for a byte buffer in WebFlux. The Spring Core part of the reference has more on that in the section on Data Buffers and Codecs. The key point to understand is that on some servers like Netty, byte buffers are pooled and reference counted, and must be released when consumed to avoid memory leaks. When running on Netty, applications must use ``` DataBufferUtils.retain(dataBuffer) ``` if they wish to hold on input data buffers in order to ensure they are not released, and subsequently use when the buffers are consumed. ### Handshake delegates to a `WebSocketService` . By default, that is an instance of , which performs basic checks on the WebSocket request and then uses for the server in use. Currently, there is built-in support for Reactor Netty, Tomcat, Jetty, and Undertow. exposes a ``` sessionAttributePredicate ``` property that allows setting a `Predicate<String>` to extract attributes from the `WebSession` and insert them into the attributes of the `WebSocketSession` . ### Server Configuration for each server exposes configuration specific to the underlying WebSocket server engine. When using the WebFlux Java config you can customize such properties as shown in the corresponding section of the WebFlux Config, or otherwise if not using the WebFlux config, use the below: @Bean public WebSocketHandlerAdapter handlerAdapter() { return new WebSocketHandlerAdapter(webSocketService()); } @Bean public WebSocketService webSocketService() { TomcatRequestUpgradeStrategy strategy = new TomcatRequestUpgradeStrategy(); strategy.setMaxSessionIdleTimeout(0L); return new HandshakeWebSocketService(strategy); } } ``` @Bean fun handlerAdapter() = WebSocketHandlerAdapter(webSocketService()) Check the upgrade strategy for your server to see what options are available. Currently, only Tomcat and Jetty expose such options. The easiest way to configure CORS and restrict access to a WebSocket endpoint is to have your `WebSocketHandler` implement ``` CorsConfigurationSource ``` and return a `CorsConfiguration` with allowed origins, headers, and other details. If you cannot do that, you can also set the `corsConfigurations` property on the `SimpleUrlHandler` to specify CORS settings by URL pattern. If both are specified, they are combined by using the `combine` method on `CorsConfiguration` . ### Client Spring WebFlux provides a `WebSocketClient` abstraction with implementations for Reactor Netty, Tomcat, Jetty, Undertow, and standard Java (that is, JSR-356). The Tomcat client is effectively an extension of the standard Java one with some extra functionality in the | | --- | To start a WebSocket session, you can create an instance of the client and use its `execute` methods: ``` WebSocketClient client = new ReactorNettyWebSocketClient(); URI url = new URI("ws://localhost:8080/path"); client.execute(url, session -> session.receive() .doOnNext(System.out::println) .then()); ``` val url = URI("ws://localhost:8080/path") client.execute(url) { session -> session.receive() .doOnNext(::println) .then() } ``` Some clients, such as Jetty, implement `Lifecycle` and need to be stopped and started before you can use them. 
All clients have constructor options related to configuration of the underlying WebSocket client. The `spring-test` module provides mock implementations of `ServerHttpRequest` , `ServerHttpResponse` , and `ServerWebExchange` . See Spring Web Reactive for a discussion of mock objects. `WebTestClient` builds on these mock request and response objects to provide support for testing WebFlux applications without an HTTP server. You can use the `WebTestClient` for end-to-end integration tests, too. This section describes Spring Framework’s support for the RSocket protocol. RSocket is an application protocol for multiplexed, duplex communication over TCP, WebSocket, and other byte stream transports, using one of the following interaction models: * `Request-Response` — send one message and receive one back. * `Request-Stream` — send one message and receive a stream of messages back. * `Channel` — send streams of messages in both directions. * `Fire-and-Forget` — send a one-way message. Once the initial connection is made, the "client" vs "server" distinction is lost as both sides become symmetrical and each side can initiate one of the above interactions. This is why in the protocol calls the participating sides "requester" and "responder" while the above interactions are called "request streams" or simply "requests". These are the key features and benefits of the RSocket protocol: Reactive Streams semantics across network boundary — for streaming requests such as `Request-Stream` and `Channel` , back pressure signals travel between requester and responder, allowing a requester to slow down a responder at the source, hence reducing reliance on network layer congestion control, and the need for buffering at the network level or at any level. * Request throttling — this feature is named "Leasing" after the `LEASE` frame that can be sent from each end to limit the total number of requests allowed by other end for a given time. Leases are renewed periodically. * Session resumption — this is designed for loss of connectivity and requires some state to be maintained. The state management is transparent for applications, and works well in combination with back pressure which can stop a producer when possible and reduce the amount of state required. * Fragmentation and re-assembly of large messages. * Keepalive (heartbeats). RSocket has implementations in multiple languages. The Java library is built on Project Reactor, and Reactor Netty for the transport. That means signals from Reactive Streams Publishers in your application propagate transparently through RSocket across the network. ### The Protocol One of the benefits of RSocket is that it has well defined behavior on the wire and an easy to read specification along with some protocol extensions. Therefore it is a good idea to read the spec, independent of language implementations and higher level framework APIs. This section provides a succinct overview to establish some context. Connecting Initially a client connects to a server via some low level streaming transport such as TCP or WebSocket and sends a `SETUP` frame to the server to set parameters for the connection. The server may reject the `SETUP` frame, but generally after it is sent (for the client) and received (for the server), both sides can begin to make requests, unless `SETUP` indicates use of leasing semantics to limit the number of requests, in which case both sides must wait for a `LEASE` frame from the other end to permit making requests. 
Making Requests Once a connection is established, both sides may initiate a request through one of the frames `REQUEST_RESPONSE` , `REQUEST_STREAM` , `REQUEST_CHANNEL` , or `REQUEST_FNF` . Each of those frames carries one message from the requester to the responder. The responder may then return `PAYLOAD` frames with response messages, and in the case of `REQUEST_CHANNEL` the requester may also send `PAYLOAD` frames with more request messages. When a request involves a stream of messages such as `Request-Stream` and `Channel` , the responder must respect demand signals from the requester. Demand is expressed as a number of messages. Initial demand is specified in `REQUEST_STREAM` and `REQUEST_CHANNEL` frames. Subsequent demand is signaled via `REQUEST_N` frames. Each side may also send metadata notifications, via the `METADATA_PUSH` frame, that do not pertain to any individual request but rather to the connection as a whole. Message Format RSocket messages contain data and metadata. Metadata can be used to send a route, a security token, etc. Data and metadata can be formatted differently. Mime types for each are declared in the `SETUP` frame and apply to all requests on a given connection. While all messages can have metadata, typically metadata such as a route are per-request and therefore only included in the first message on a request, i.e. with one of the frames `REQUEST_RESPONSE` , `REQUEST_STREAM` , `REQUEST_CHANNEL` , or `REQUEST_FNF` . Protocol extensions define common metadata formats for use in applications: Composite Metadata-- multiple, independently formatted metadata entries. * Routing — the route for a request. ### Java Implementation The Java implementation for RSocket is built on Project Reactor. The transports for TCP and WebSocket are built on Reactor Netty. As a Reactive Streams library, Reactor simplifies the job of implementing the protocol. For applications it is a natural fit to use `Flux` and `Mono` with declarative operators and transparent back pressure support. The API in RSocket Java is intentionally minimal and basic. It focuses on protocol features and leaves the application programming model (e.g. RPC codegen vs other) as a higher level, independent concern. The main contract io.rsocket.RSocket models the four request interaction types with `Mono` representing a promise for a single message, `Flux` a stream of messages, and `io.rsocket.Payload` the actual message with access to data and metadata as byte buffers. The `RSocket` contract is used symmetrically. For requesting, the application is given an `RSocket` to perform requests with. For responding, the application implements `RSocket` to handle requests. This is not meant to be a thorough introduction. For the most part, Spring applications will not have to use its API directly. However it may be important to see or experiment with RSocket independent of Spring. The RSocket Java repository contains a number of sample apps that demonstrate its API and protocol features. ### Spring Support The `spring-messaging` module contains the following: RSocketRequester — fluent API to make requests through an `io.rsocket.RSocket` with data and metadata encoding/decoding. * Annotated Responders — `@MessageMapping` annotated handler methods for responding. The `spring-web` module contains `Encoder` and `Decoder` implementations such as Jackson CBOR/JSON, and Protobuf that RSocket applications will likely need. It also contains the `PathPatternParser` that can be plugged in for efficient route matching. 
Spring Boot 2.2 supports standing up an RSocket server over TCP or WebSocket, including the option to expose RSocket over WebSocket in a WebFlux server. There is also client support and auto-configuration for an and `RSocketStrategies` . See the RSocket section in the Spring Boot reference for more details. Spring Security 5.2 provides RSocket support. Spring Integration 5.2 provides inbound and outbound gateways to interact with RSocket clients and servers. See the Spring Integration Reference Manual for more details. Spring Cloud Gateway supports RSocket connections. ## RSocketRequester `RSocketRequester` provides a fluent API to perform RSocket requests, accepting and returning objects for data and metadata instead of low level data buffers. It can be used symmetrically, to make requests from clients and to make requests from servers. ### Client Requester To obtain an `RSocketRequester` on the client side is to connect to a server which involves sending an RSocket `SETUP` frame with connection settings. `RSocketRequester` provides a builder that helps to prepare an including connection settings for the `SETUP` frame. This is the most basic way to connect with default settings: ``` RSocketRequester requester = RSocketRequester.builder().tcp("localhost", 7000); URI url = URI.create("https://example.org:8080/rsocket"); RSocketRequester requester = RSocketRequester.builder().webSocket(url); ``` ``` val requester = RSocketRequester.builder().tcp("localhost", 7000) URI url = URI.create("https://example.org:8080/rsocket"); val requester = RSocketRequester.builder().webSocket(url) ``` The above does not connect immediately. When requests are made, a shared connection is established transparently and used. # Connection Setup provides the following to customize the initial `SETUP` frame: ``` dataMimeType(MimeType) ``` — set the mime type for data on the connection. * ``` metadataMimeType(MimeType) ``` — set the mime type for metadata on the connection. * `setupData(Object)` — data to include in the `SETUP` . * ``` setupRoute(String, Object…​) ``` — route in the metadata to include in the `SETUP` . * ``` setupMetadata(Object, MimeType) ``` — other metadata to include in the `SETUP` . For data, the default mime type is derived from the first configured `Decoder` . For metadata, the default mime type is composite metadata which allows multiple metadata value and mime type pairs per request. Typically both don’t need to be changed. Data and metadata in the `SETUP` frame is optional. On the server side, @ConnectMapping methods can be used to handle the start of a connection and the content of the `SETUP` frame. Metadata may be used for connection level security. # Strategies accepts `RSocketStrategies` to configure the requester. You’ll need to use this to provide encoders and decoders for (de)-serialization of data and metadata values. By default only the basic codecs from `spring-core` for `String` , `byte[]` , and `ByteBuffer` are registered. 
Adding `spring-web` provides access to more that can be registered as follows: ``` RSocketStrategies strategies = RSocketStrategies.builder() .encoders(encoders -> encoders.add(new Jackson2CborEncoder())) .decoders(decoders -> decoders.add(new Jackson2CborDecoder())) .build(); RSocketRequester requester = RSocketRequester.builder() .rsocketStrategies(strategies) .tcp("localhost", 7000); ``` ``` val strategies = RSocketStrategies.builder() .encoders { it.add(Jackson2CborEncoder()) } .decoders { it.add(Jackson2CborDecoder()) } .build() val requester = RSocketRequester.builder() .rsocketStrategies(strategies) .tcp("localhost", 7000) ``` `RSocketStrategies` is designed for re-use. In some scenarios, e.g. client and server in the same application, it may be preferable to declare it in Spring configuration. # Client Responders can be used to configure responders to requests from the server. You can use annotated handlers for client-side responding based on the same infrastructure that’s used on a server, but registered programmatically as follows: ``` RSocketStrategies strategies = RSocketStrategies.builder() .routeMatcher(new PathPatternRouteMatcher()) (1) .build(); RSocketRequester requester = RSocketRequester.builder() .rsocketConnector(connector -> connector.acceptor(responder)) (3) .tcp("localhost", 7000); ``` ``` val strategies = RSocketStrategies.builder() .routeMatcher(PathPatternRouteMatcher()) (1) .build() val requester = RSocketRequester.builder() .rsocketConnector { it.acceptor(responder) } (3) .tcp("localhost", 7000) ``` Note the above is only a shortcut designed for programmatic registration of client responders. For alternative scenarios, where client responders are in Spring configuration, you can still declare as a Spring bean and then apply as follows: RSocketRequester requester = RSocketRequester.builder() .rsocketConnector(connector -> connector.acceptor(handler.responder())) .tcp("localhost", 7000); ``` val requester = RSocketRequester.builder() .rsocketConnector { it.acceptor(handler.responder()) } .tcp("localhost", 7000) ``` For the above you may also need to use `setHandlerPredicate` in to switch to a different strategy for detecting client responders, e.g. based on a custom annotation such as ``` @RSocketClientResponder ``` vs the default `@Controller` . This is necessary in scenarios with client and server, or multiple clients in the same application. See also Annotated Responders, for more on the programming model. ``` RSocketRequesterBuilder ``` provides a callback to expose the underlying for further configuration options for keepalive intervals, session resumption, interceptors, and more. You can configure options at that level as follows: ``` RSocketRequester requester = RSocketRequester.builder() .rsocketConnector(connector -> { // ... }) .tcp("localhost", 7000); ``` ``` val requester = RSocketRequester.builder() .rsocketConnector { //... } .tcp("localhost", 7000) ``` ### Server Requester To make requests from a server to connected clients is a matter of obtaining the requester for the connected client from the server. In Annotated Responders, `@ConnectMapping` and `@MessageMapping` methods support an `RSocketRequester` argument. Use it to access the requester for the connection. Keep in mind that `@ConnectMapping` methods are essentially handlers of the `SETUP` frame which must be handled before requests can begin. Therefore, requests at the very start must be decoupled from handling. 
For example: ``` @ConnectMapping Mono<Void> handle(RSocketRequester requester) { requester.route("status").data("5") .retrieveFlux(StatusReport.class) .subscribe(bar -> { (1) // ... }); return ... (2) } ``` 1 | Start the request asynchronously, independent from handling. | | --- | --- | 2 | Perform handling and return completion | ``` @ConnectMapping suspend fun handle(requester: RSocketRequester) { GlobalScope.launch { requester.route("status").data("5").retrieveFlow<StatusReport>().collect { (1) // ... } } /// ... (2) } ``` 1 | Start the request asynchronously, independent from handling. | | --- | --- | 2 | Perform handling in the suspending function. | ### Requests ``` ViewBox viewBox = ... ; Flux<AirportLocation> locations = requester.route("locate.radars.within") (1) .data(viewBox) (2) .retrieveFlux(AirportLocation.class); (3) ``` ``` val viewBox: ViewBox = ... val locations = requester.route("locate.radars.within") (1) .data(viewBox) (2) .retrieveFlow<AirportLocation>() (3) ``` The interaction type is determined implicitly from the cardinality of the input and output. The above example is a `Request-Stream` because one value is sent and a stream of values is received. For the most part you don’t need to think about this as long as the choice of input and output matches an RSocket interaction type and the types of input and output expected by the responder. The only example of an invalid combination is many-to-one. The `data(Object)` method also accepts any Reactive Streams `Publisher` , including `Flux` and `Mono` , as well as any other producer of value(s) that is registered in the . For a multi-value `Publisher` such as `Flux` which produces the same types of values, consider using one of the overloaded `data` methods to avoid having type checks and `Encoder` lookup on every element: ``` data(Object producer, Class<?> elementClass); data(Object producer, ParameterizedTypeReference<?> elementTypeRef); ``` The `data(Object)` step is optional. Skip it for requests that don’t send data: ``` Mono<AirportLocation> location = requester.route("find.radar.EWR")) .retrieveMono(AirportLocation.class); ``` ``` import org.springframework.messaging.rsocket.retrieveAndAwait val location = requester.route("find.radar.EWR") .retrieveAndAwait<AirportLocation>() ``` Extra metadata values can be added if using composite metadata (the default) and if the values are supported by a registered `Encoder` . For example: ``` String securityToken = ... ; ViewBox viewBox = ... ; MimeType mimeType = MimeType.valueOf("message/x.rsocket.authentication.bearer.v0"); Flux<AirportLocation> locations = requester.route("locate.radars.within") .metadata(securityToken, mimeType) .data(viewBox) .retrieveFlux(AirportLocation.class); ``` ``` import org.springframework.messaging.rsocket.retrieveFlow val requester: RSocketRequester = ... val securityToken: String = ... val viewBox: ViewBox = ... val mimeType = MimeType.valueOf("message/x.rsocket.authentication.bearer.v0") val locations = requester.route("locate.radars.within") .metadata(securityToken, mimeType) .data(viewBox) .retrieveFlow<AirportLocation>() ``` For `Fire-and-Forget` use the `send()` method that returns `Mono<Void>` . Note that the `Mono` indicates only that the message was successfully sent, and not that it was handled. For `Metadata-Push` use the `sendMetadata()` method with a `Mono<Void>` return value. ## Annotated Responders RSocket responders can be implemented as `@MessageMapping` and `@ConnectMapping` methods. 
`@MessageMapping` methods handle individual requests, while `@ConnectMapping` methods handle connection-level events (setup and metadata push). Annotated responders are supported symmetrically, for responding from the server side and for responding from the client side.

### Server Responders

To use annotated responders on the server side, add `RSocketMessageHandler` to your Spring configuration to detect `@Controller` beans with `@MessageMapping` and `@ConnectMapping` methods:

```
@Bean
fun rsocketMessageHandler() = RSocketMessageHandler().apply {
	routeMatcher = PathPatternRouteMatcher()
}
```

Then start an RSocket server through the Java RSocket API and plug the `RSocketMessageHandler` in for the responder, as follows:

```
CloseableChannel server = RSocketServer.create(handler.responder())
	.bind(TcpServerTransport.create("localhost", 7000))
	.block();
```

```
val server = RSocketServer.create(handler.responder())
	.bind(TcpServerTransport.create("localhost", 7000))
	.awaitSingle()
```

`RSocketMessageHandler` supports composite and routing metadata by default. You can set its MetadataExtractor if you need to switch to a different mime type or register additional metadata mime types.

You'll need to set the `Encoder` and `Decoder` instances required for the metadata and data formats to support. You'll likely need the `spring-web` module for codec implementations.

By default, `SimpleRouteMatcher` is used for matching routes via `AntPathMatcher`. We recommend plugging in the `PathPatternRouteMatcher` from `spring-web` for efficient route matching. RSocket routes can be hierarchical but are not URL paths. Both route matchers are configured to use "." as separator by default, and there is no URL decoding as with HTTP URLs.

`RSocketMessageHandler` can be configured via `RSocketStrategies`, which may be useful if you need to share configuration between a client and a server in the same process:

```
@Bean
public RSocketStrategies rsocketStrategies() {
	return RSocketStrategies.builder()
		.encoders(encoders -> encoders.add(new Jackson2CborEncoder()))
		.decoders(decoders -> decoders.add(new Jackson2CborDecoder()))
		.routeMatcher(new PathPatternRouteMatcher())
		.build();
}
```

```
@Bean
fun rsocketMessageHandler() = RSocketMessageHandler().apply {
	rSocketStrategies = rsocketStrategies()
}

@Bean
fun rsocketStrategies() = RSocketStrategies.builder()
	.encoders { it.add(Jackson2CborEncoder()) }
	.decoders { it.add(Jackson2CborDecoder()) }
	.routeMatcher(PathPatternRouteMatcher())
	.build()
```

### Client Responders

Annotated responders on the client side need to be configured in the `RSocketRequester.Builder`. For details, see Client Responders.

### @MessageMapping

Once server or client responder configuration is in place, `@MessageMapping` methods can be used as follows:

```
@Controller
public class RadarsController {

	@MessageMapping("locate.radars.within")
	public Flux<AirportLocation> radars(MapRequest request) {
		// ...
	}
}
```

```
@Controller
class RadarsController {

	@MessageMapping("locate.radars.within")
	fun radars(request: MapRequest): Flow<AirportLocation> {
		// ...
	}
}
```

The above `@MessageMapping` method responds to a Request-Stream interaction having the route "locate.radars.within". It supports a flexible method signature with the option to use the following method arguments:

| Method Argument | Description |
| --- | --- |
| `@Payload` | The payload of the request. This can be a concrete value or an asynchronous type such as `Mono` or `Flux`. |
| `RSocketRequester` | A requester for making requests to the remote end. |
| `@DestinationVariable` | A value extracted from the route, based on variables in the mapping pattern. |
| `@Header` | A metadata value registered for extraction, as described in MetadataExtractor. |
| `@Headers Map<String, Object>` | All metadata values registered for extraction. |

The return value is expected to be one or more Objects to be serialized as response payloads. That can be asynchronous types like `Mono` or `Flux`, a concrete value, or either `void` or a no-value asynchronous type such as `Mono<Void>`.

The RSocket interaction type that an `@MessageMapping` method supports is determined from the cardinality of the input (i.e. the
the `@Payload` argument) and of the output, where cardinality means the following:

| Cardinality | Description |
| --- | --- |
| 1 | Either an explicit value, or a single-value asynchronous type such as `Mono<T>`. |
| Many | A multi-value asynchronous type such as `Flux<T>`. |
| 0 | For input this means the method has no `@Payload` argument. For output this is `void` or a no-value asynchronous type such as `Mono<Void>`. |

The table below shows all input and output cardinality combinations and the corresponding interaction type(s):

| Input Cardinality | Output Cardinality | Interaction Types |
| --- | --- | --- |
| 0, 1 | 0 | Fire-and-Forget, Request-Response |
| 0, 1 | 1 | Request-Response |
| 0, 1 | Many | Request-Stream |
| Many | 0, 1, Many | Request-Channel |

### @ConnectMapping

`@ConnectMapping` handles the `SETUP` frame at the start of an RSocket connection, and any subsequent metadata push notifications through the `METADATA_PUSH` frame, i.e. `metadataPush(Payload)` in `io.rsocket.RSocket`.

`@ConnectMapping` methods support the same arguments as @MessageMapping, but based on metadata and data from the `SETUP` and `METADATA_PUSH` frames. `@ConnectMapping` can have a pattern to narrow handling to specific connections that have a route in the metadata; if no patterns are declared, all connections match.

`@ConnectMapping` methods cannot return data and must be declared with `void` or `Mono<Void>` as the return value. If handling returns an error for a new connection, the connection is rejected. Handling must not be held up to make requests to the `RSocketRequester` for the connection. See Server Requester for details.

## MetadataExtractor

Responders must interpret metadata. Composite metadata allows independently formatted metadata values (e.g. for routing, security, tracing), each with its own mime type. Applications need a way to configure the metadata mime types to support, and a way to access extracted values. `MetadataExtractor` is a contract to take serialized metadata and return decoded name-value pairs that can then be accessed like headers by name, for example via `@Header` in annotated handler methods.

`DefaultMetadataExtractor` can be given `Decoder` instances to decode metadata. Out of the box it has built-in support for "message/x.rsocket.routing.v0", which it decodes to `String` and saves under the "route" key. For any other mime type you'll need to provide a `Decoder` and register the mime type as follows:

```
DefaultMetadataExtractor extractor = new DefaultMetadataExtractor(metadataDecoders);
extractor.metadataToExtract(fooMimeType, Foo.class, "foo");
```

```
val extractor = DefaultMetadataExtractor(metadataDecoders)
extractor.metadataToExtract<Foo>(fooMimeType, "foo")
```

Composite metadata works well to combine independent metadata values. However, the requester might not support composite metadata, or may choose not to use it. For this, `DefaultMetadataExtractor` may need custom logic to map the decoded value to the output map. Here is an example where JSON is used for metadata:

```
DefaultMetadataExtractor extractor = new DefaultMetadataExtractor(metadataDecoders);
extractor.metadataToExtract(
		MimeType.valueOf("application/vnd.myapp.metadata+json"),
		new ParameterizedTypeReference<Map<String, String>>() {},
		(jsonMap, outputMap) -> {
			outputMap.putAll(jsonMap);
		});
```

```
val extractor = DefaultMetadataExtractor(metadataDecoders)
extractor.metadataToExtract<Map<String, String>>(MimeType.valueOf("application/vnd.myapp.metadata+json")) { jsonMap, outputMap ->
	outputMap.putAll(jsonMap)
}
```

When configuring `MetadataExtractor` through `RSocketStrategies`, you can let `RSocketStrategies.Builder` create the extractor with the configured decoders and simply use a callback to customize registrations, as follows:

```
RSocketStrategies strategies = RSocketStrategies.builder()
		.metadataExtractorRegistry(registry -> {
			registry.metadataToExtract(fooMimeType, Foo.class, "foo");
			// ...
		})
		.build();
```
```
val strategies = RSocketStrategies.builder()
		.metadataExtractorRegistry { registry: MetadataExtractorRegistry ->
			registry.metadataToExtract<Foo>(fooMimeType, "foo")
			// ...
		}
		.build()
```

## RSocket Interface

The Spring Framework lets you define an RSocket service as a Java interface with annotated methods for RSocket exchanges. You can then generate a proxy that implements this interface and performs the exchanges. This helps to simplify RSocket remote access by wrapping the use of the underlying RSocketRequester.

One, declare an interface with `@RSocketExchange` methods:

```
interface RadarService {

	@RSocketExchange("radars")
	Flux<AirportLocation> getRadars(@Payload MapRequest request);

	// more RSocket exchange methods...
}
```

Two, create a proxy that performs the declared RSocket exchanges:

```
RSocketRequester requester = ... ;
RSocketServiceProxyFactory factory = RSocketServiceProxyFactory.builder(requester).build();

RadarService service = factory.createClient(RadarService.class);
```

Annotated, RSocket exchange methods support flexible method signatures with the following method parameters:

| Method argument | Description |
| --- | --- |
| `@DestinationVariable` | Add a route variable to expand into the route from the annotation. |
| `@Payload` | Set the input payload(s) for the request. |
| `Object`, if followed by `MimeType` | The value for a metadata entry in the input payload, where the next argument is the metadata entry's `MimeType`. |

# Reactive Libraries

`spring-webflux` depends on `reactor-core` and uses it internally to compose asynchronous logic and to provide Reactive Streams support. Generally, WebFlux APIs return `Flux` or `Mono` (since those are used internally) and leniently accept any Reactive Streams `Publisher` implementation as input. The use of `Flux` versus `Mono` is important, because it helps to express cardinality — for example, whether a single or multiple asynchronous values are expected, and that can be essential for making decisions (for example, when encoding or decoding HTTP messages).

For annotated controllers, WebFlux transparently adapts to the reactive library chosen by the application. This is done with the help of the `ReactiveAdapterRegistry`, which provides pluggable support for reactive libraries and other asynchronous types. The registry has built-in support for RxJava 3, Kotlin coroutines, and SmallRye Mutiny, but you can register others, too.

For functional APIs (such as WebFlux.fn, the `WebClient`, and others), the general rules for WebFlux APIs apply — `Flux` and `Mono` as return values and a Reactive Streams `Publisher` as input. When a `Publisher`, whether custom or from another reactive library, is provided, it can be treated only as a stream with unknown semantics (0..N). If, however, the semantics are known, you can wrap it with `Flux` or `Mono.from(Publisher)` instead of passing the raw `Publisher`.

For example, given a `Publisher` that is not a `Mono`, the Jackson JSON message writer expects multiple values. If the media type implies an infinite stream (for example, `application/json+stream`), values are written and flushed individually. Otherwise, values are buffered into a list and rendered as a JSON array.
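A small illustration of that wrapping (with `repository.findPerson` as a hypothetical source known to emit at most one value):

```
Publisher<Person> publisher = repository.findPerson("42"); // assumed 0..1 source
Mono<Person> person = Mono.from(publisher); // declares 0..1 semantics instead of 0..N
```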
# Integration

This part of the reference documentation covers the Spring Framework's integration with a number of technologies.

Section Summary:

* REST Clients
* JMS (Java Message Service)
* JMX
* Email
* Task Execution and Scheduling
* Cache Abstraction
* Observability Support
* Appendix

# REST Clients

The Spring Framework provides the following choices for making calls to REST endpoints:

* `WebClient` - non-blocking, reactive client with fluent API.
* `RestTemplate` - synchronous client with template method API.
* HTTP Interface - annotated interface with generated, dynamic proxy implementation.

`RestTemplate`

The `RestTemplate` provides a higher-level API over HTTP client libraries. It makes it easy to invoke REST endpoints in a single line. It exposes the following groups of overloaded methods:

`RestTemplate` is in maintenance mode, with only minor change requests and bug fixes accepted. Please consider using `WebClient` instead.

| Method group | Description |
| --- | --- |
| `getForObject` | Retrieves a representation via GET. |
| `getForEntity` | Retrieves a `ResponseEntity` (that is, status, headers, and body) by using GET. |
| `headForHeaders` | Retrieves all headers for a resource by using HEAD. |
| `postForLocation` | Creates a new resource by using POST and returns the `Location` header from the response. |
| `postForObject` | Creates a new resource by using POST and returns the representation from the response. |
| `put` | Creates or updates a resource by using PUT. |
| `patchForObject` | Updates a resource by using PATCH and returns the representation from the response. |
| `delete` | Deletes the resources at the specified URI by using DELETE. |
| `optionsForAllow` | Retrieves allowed HTTP methods for a resource by using ALLOW. |
| `exchange` | More generalized (and less opinionated) versions of the preceding methods that provide extra flexibility when needed. It accepts a `RequestEntity` (including HTTP method, URL, headers, and body as input) and returns a `ResponseEntity`. |
| `execute` | The most generalized way to perform a request, with full control over request preparation and response extraction through callback interfaces. |

### Initialization

The default constructor uses `java.net.HttpURLConnection` to perform requests. You can switch to a different HTTP library with an implementation of `ClientHttpRequestFactory`. There is built-in support for the following:

* Apache HttpComponents
* Netty
* OkHttp

For example, to switch to Apache HttpComponents, you can use the following:

```
RestTemplate template = new RestTemplate(new HttpComponentsClientHttpRequestFactory());
```

Each `ClientHttpRequestFactory` exposes configuration options specific to the underlying HTTP client library — for example, for credentials, connection pooling, and other details.

Note that the `java.net` implementation for HTTP requests can raise an exception when accessing the status of a response that represents an error (such as 401). If this is an issue, switch to another HTTP client library.

`RestTemplate` can be instrumented for observability, in order to produce metrics and traces. See the RestTemplate Observability support section.

# URIs

Many of the `RestTemplate` methods accept a URI template and URI template variables, either as a `String` variable argument or as a `Map<String, String>`. The following example uses a `String` variable argument:

```
String result = restTemplate.getForObject(
		"https://example.com/hotels/{hotel}/bookings/{booking}", String.class, "42", "21");
```

The following example uses a `Map<String, String>`:

```
Map<String, String> vars = Collections.singletonMap("hotel", "42");

String result = restTemplate.getForObject(
		"https://example.com/hotels/{hotel}/rooms/{hotel}", String.class, vars);
```

Keep in mind URI templates are automatically encoded, as the following example shows:

```
restTemplate.getForObject("https://example.com/hotel list", String.class);

// Results in request to "https://example.com/hotel%20list"
```

You can use the `uriTemplateHandler` property of `RestTemplate` to customize how URIs are encoded. Alternatively, you can prepare a `java.net.URI` and pass it into one of the `RestTemplate` methods that accepts a `URI`. For more details on working with and encoding URIs, see URI Links.

# Headers

You can use the `exchange()` methods to specify request headers, as the following example shows:

```
String uriTemplate = "https://example.com/hotels/{hotel}";
URI uri = UriComponentsBuilder.fromUriString(uriTemplate).build(42);

RequestEntity<Void> requestEntity = RequestEntity.get(uri)
		.header("MyRequestHeader", "MyValue")
		.build();

ResponseEntity<String> response = template.exchange(requestEntity, String.class);

String responseHeader = response.getHeaders().getFirst("MyResponseHeader");
String body = response.getBody();
```

You can obtain response headers through many `RestTemplate` method variants that return `ResponseEntity`.

### Body

Objects passed into and returned from `RestTemplate` methods are converted to and from raw content with the help of an `HttpMessageConverter`.

On a POST, an input object is serialized to the request body, as the following example shows:

> URI location = template.postForLocation("https://example.com/people", person);

You need not explicitly set the Content-Type header of the request. In most cases, you can find a compatible message converter based on the source `Object` type, and the chosen message converter sets the content type accordingly. If necessary, you can use the `exchange` methods to explicitly provide the `Content-Type` request header, and that, in turn, influences what message converter is selected.
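For instance, a brief sketch of providing the `Content-Type` explicitly through `exchange` (assuming a `person` object and the `template` from the earlier examples):

```
// Forcing JSON so that a JSON-capable converter is selected for the body
RequestEntity<Person> request = RequestEntity
		.post(URI.create("https://example.com/people"))
		.contentType(MediaType.APPLICATION_JSON)
		.body(person);

ResponseEntity<Void> response = template.exchange(request, Void.class);
```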
On a GET, the body of the response is deserialized to an output `Object`, as the following example shows:

> Person person = restTemplate.getForObject("https://example.com/people/{id}", Person.class, 42);

The `Accept` header of the request does not need to be explicitly set. In most cases, a compatible message converter can be found based on the expected response type, which then helps to populate the `Accept` header. If necessary, you can use the `exchange` methods to provide the `Accept` header explicitly.

By default, `RestTemplate` registers all built-in message converters, depending on classpath checks that help to determine what optional conversion libraries are present. You can also set the message converters to use explicitly.

# Message Conversion

The `spring-web` module contains the `HttpMessageConverter` contract for reading and writing the body of HTTP requests and responses through `InputStream` and `OutputStream`. `HttpMessageConverter` instances are used on the client side (for example, in the `RestTemplate`) and on the server side (for example, in Spring MVC REST controllers).

Concrete implementations for the main media (MIME) types are provided in the framework and are, by default, registered with the `RestTemplate` on the client side and with `RequestMappingHandlerAdapter` on the server side (see Configuring Message Converters).

The implementations of `HttpMessageConverter` are described in the following sections. For all converters, a default media type is used, but you can override it by setting the `supportedMediaTypes` bean property. The following table describes each implementation:

| MessageConverter | Description |
| --- | --- |
| `StringHttpMessageConverter` | Can read and write `String` instances. By default, it supports all text media types (`text/*`) and writes with a `Content-Type` of `text/plain`. |
| `FormHttpMessageConverter` | Can read and write form data. By default, it reads and writes the `application/x-www-form-urlencoded` media type, using a `MultiValueMap<String, String>`. It can also write (but not read) multipart data from a `MultiValueMap<String, Object>`. |
| `ByteArrayHttpMessageConverter` | Can read and write byte arrays. By default, it supports all media types (`*/*`) and writes with a `Content-Type` of `application/octet-stream`. |
| `MarshallingHttpMessageConverter` | Can read and write XML by using Spring's `Marshaller` and `Unmarshaller` abstractions from the `org.springframework.oxm` package. |
| `MappingJackson2HttpMessageConverter` | Can read and write JSON by using Jackson's `ObjectMapper`. |
| `MappingJackson2XmlHttpMessageConverter` | Can read and write XML by using the Jackson XML extension's `XmlMapper`. |
| `SourceHttpMessageConverter` | Can read and write `javax.xml.transform.Source` from the HTTP request and response. |
| `BufferedImageHttpMessageConverter` | Can read and write `java.awt.image.BufferedImage` from the HTTP request and response. |

### Jackson JSON Views

You can specify a Jackson JSON View to serialize only a subset of the object properties, as the following example shows:

```
MappingJacksonValue value = new MappingJacksonValue(new User("eric", "7!jd#h23"));
value.setSerializationView(User.WithoutPasswordView.class);

RequestEntity<MappingJacksonValue> requestEntity =
		RequestEntity.post(new URI("https://example.com/user")).body(value);
```

### Multipart

To send multipart data, you need to provide a `MultiValueMap<String, Object>` whose values may be an `Object` for part content, a `Resource` for a file part, or an `HttpEntity` for part content with headers. For example:

```
MultiValueMap<String, Object> parts = new LinkedMultiValueMap<>();

parts.add("fieldPart", "fieldValue");
parts.add("filePart", new FileSystemResource("...logo.png"));
parts.add("jsonPart", new Person("Jason"));

HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_XML);
parts.add("xmlPart", new HttpEntity<>(myBean, headers));
```

In most cases, you do not have to specify the `Content-Type` for each part. The content type is determined automatically based on the `HttpMessageConverter` chosen to serialize it or, in the case of a `Resource`, based on the file extension. If necessary, you can explicitly provide the `MediaType` with an `HttpEntity` wrapper.

Once the `MultiValueMap` is ready, you can pass it to the `RestTemplate`, as shown below:

```
MultiValueMap<String, Object> parts = ...;
template.postForObject("https://example.com/upload", parts, Void.class);
```

If the `MultiValueMap` contains at least one non-`String` value, the `Content-Type` is set to `multipart/form-data` by the `FormHttpMessageConverter`. If the `MultiValueMap` has only `String` values, the `Content-Type` defaults to `application/x-www-form-urlencoded`. If necessary, the `Content-Type` may also be set explicitly.
# HTTP Interface

The Spring Framework lets you define an HTTP service as a Java interface with annotated methods for HTTP exchanges. You can then generate a proxy that implements this interface and performs the exchanges. This helps to simplify HTTP remote access, which often involves a facade that wraps the details of using the underlying HTTP client.

One, declare an interface with `@HttpExchange` methods:

```
interface RepositoryService {

	@GetExchange("/repos/{owner}/{repo}")
	Repository getRepository(@PathVariable String owner, @PathVariable String repo);

	// more HTTP exchange methods...
}
```

Two, create a proxy that performs the declared HTTP exchanges:

```
WebClient client = WebClient.builder().baseUrl("https://api.github.com/").build();
HttpServiceProxyFactory factory = HttpServiceProxyFactory.builder(WebClientAdapter.forClient(client)).build();

RepositoryService service = factory.createClient(RepositoryService.class);
```

`@HttpExchange` is supported at the type level, where it applies to all methods:

```
@HttpExchange(url = "/repos/{owner}/{repo}", accept = "application/vnd.github.v3+json")
interface RepositoryService {

	@GetExchange
	Repository getRepository(@PathVariable String owner, @PathVariable String repo);

	@PatchExchange(contentType = MediaType.APPLICATION_FORM_URLENCODED_VALUE)
	void updateRepository(@PathVariable String owner, @PathVariable String repo,
			@RequestParam String name, @RequestParam String description,
			@RequestParam String homepage);
}
```

Annotated, HTTP exchange methods support flexible method signatures with the following method parameters:

| Method argument | Description |
| --- | --- |
| `URI` | Dynamically set the URL for the request, overriding the annotation's `url` attribute. |
| `HttpMethod` | Dynamically set the HTTP method for the request, overriding the annotation's `method` attribute. |
| `@RequestHeader` | Add a request header or multiple headers. |
| `@PathVariable` | Add a variable for expanding a placeholder in the request URL. |
| `@RequestBody` | Provide the body of the request, either as an object to serialize or as a Reactive Streams `Publisher` such as `Mono` or `Flux`. |
| `@RequestParam` | Add a request parameter or multiple parameters. |
| `@CookieValue` | Add a cookie or multiple cookies. |

Annotated, HTTP exchange methods support the following return values:

| Method return value | Description |
| --- | --- |
| `void`, `Mono<Void>` | Perform the given request and release the response content, if any. |
| `HttpHeaders`, `Mono<HttpHeaders>` | Perform the given request, release the response content, if any, and return the response headers. |
| `<T>`, `Mono<T>` | Perform the given request and decode the response content to the declared return type. |
| `<T>`, `Flux<T>` | Perform the given request and decode the response content to a stream of the declared element type. |
| `ResponseEntity<Void>`, `Mono<ResponseEntity<Void>>` | Perform the given request, release the response content, if any, and return a `ResponseEntity` with the status and headers. |
| `ResponseEntity<T>`, `Mono<ResponseEntity<T>>` | Perform the given request, decode the response content, and return a `ResponseEntity` with the status, headers, and decoded body. |

You can also use any other async or reactive types registered in the `ReactiveAdapterRegistry`.

By default, `WebClient` raises `WebClientResponseException` for 4xx and 5xx HTTP status codes. To customize this, you can register a response status handler that applies to all responses performed through the client:

```
WebClient webClient = WebClient.builder()
		.defaultStatusHandler(HttpStatusCode::isError, resp -> ...)
		.build();

WebClientAdapter clientAdapter = WebClientAdapter.forClient(webClient);
HttpServiceProxyFactory factory = HttpServiceProxyFactory
		.builder(clientAdapter).build();
```

For more details and options, such as suppressing error status codes, see the Javadoc of `defaultStatusHandler` in `WebClient.Builder`.
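Invoking a method on the generated proxy then performs the declared exchange. A brief, hedged sketch using the `RepositoryService` from above:

```
// The proxy performs the underlying WebClient exchange declared on the interface
Repository repository = service.getRepository("spring-projects", "spring-framework");
```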
# JMS (Java Message Service)

Spring provides a JMS integration framework that simplifies the use of the JMS API in much the same way as Spring's integration does for the JDBC API.

JMS can be roughly divided into two areas of functionality, namely the production and consumption of messages. The `JmsTemplate` class is used for message production and synchronous message reception. For asynchronous reception similar to Jakarta EE's message-driven bean style, Spring provides a number of message-listener containers that you can use to create Message-Driven POJOs (MDPs). Spring also provides a declarative way to create message listeners.

The `org.springframework.jms.core` package provides the core functionality for using JMS. It contains JMS template classes that simplify the use of the JMS API by handling the creation and release of resources, much like the `JdbcTemplate` does for JDBC. The design principle common to Spring template classes is to provide helper methods to perform common operations and, for more sophisticated usage, delegate the essence of the processing task to user-implemented callback interfaces. The JMS template follows the same design. The classes offer various convenience methods for sending messages, consuming messages synchronously, and exposing the JMS session and message producer to the user.

The `org.springframework.jms.support` package provides `JMSException` translation functionality. The translation converts the checked `JMSException` hierarchy to a mirrored hierarchy of unchecked exceptions. If any provider-specific subclasses of the checked `jakarta.jms.JMSException` exist, the exception is wrapped in the unchecked `UncategorizedJmsException`.

The `org.springframework.jms.support.converter` package provides a `MessageConverter` abstraction to convert between Java objects and JMS messages.

The `org.springframework.jms.support.destination` package provides various strategies for managing JMS destinations, such as providing a service locator for destinations stored in JNDI.

The `org.springframework.jms.annotation` package provides the necessary infrastructure to support annotation-driven listener endpoints by using `@JmsListener`.

The `org.springframework.jms.config` package provides the parser implementation for the `jms` namespace as well as the java config support to configure listener containers and create listener endpoints.

Finally, the `org.springframework.jms.connection` package provides an implementation of the `ConnectionFactory` suitable for use in standalone applications. It also contains an implementation of Spring's `PlatformTransactionManager` for JMS (the cunningly named `JmsTransactionManager`). This allows for seamless integration of JMS as a transactional resource into Spring's transaction management mechanisms.

This section describes how to use Spring's JMS components.

`JmsTemplate`

The `JmsTemplate` class is the central class in the JMS core package. It simplifies the use of JMS, since it handles the creation and release of resources when sending or synchronously receiving messages.

Code that uses the `JmsTemplate` needs only to implement callback interfaces that give them a clearly defined high-level contract. The `MessageCreator` callback interface creates a message when given a `Session` provided by the calling code in `JmsTemplate`. To allow for more complex usage of the JMS API, `SessionCallback` provides the JMS session, and `ProducerCallback` exposes a `Session` and `MessageProducer` pair.

The JMS API exposes two types of send methods, one that takes delivery mode, priority, and time-to-live as Quality of Service (QOS) parameters and one that takes no QOS parameters and uses default values. Since `JmsTemplate` has many send methods, the QOS parameters have been exposed as bean properties to avoid duplication in the number of send methods. Similarly, the timeout value for synchronous receive calls is set by using the `setReceiveTimeout` property.

Some JMS providers allow the setting of default QOS values administratively through the configuration of the `ConnectionFactory`. This has the effect that a call to a `MessageProducer` instance's `send` method (`send(Destination destination, Message message)`) uses different QOS default values than those specified in the JMS specification. In order to provide consistent management of QOS values, the `JmsTemplate` must, therefore, be specifically enabled to use its own QOS values by setting the boolean property `isExplicitQosEnabled` to `true`.

For convenience, `JmsTemplate` also exposes a basic request-reply operation that allows for sending a message and waiting for a reply on a temporary queue that is created as part of the operation.

Instances of the `JmsTemplate` class are thread-safe, once configured.

As of Spring Framework 4.1, `JmsMessagingTemplate` is built on top of `JmsTemplate` and provides an integration with the messaging abstraction — that is, `org.springframework.messaging.Message`. This lets you create the message to send in a generic manner.
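For instance, a minimal sketch of sending through the messaging abstraction (`orderQueue` and `Order` are assumed names):

```
// JmsMessagingTemplate converts the payload and maps spring-messaging headers to JMS
JmsMessagingTemplate messagingTemplate = new JmsMessagingTemplate(connectionFactory);
messagingTemplate.convertAndSend("orderQueue", new Order("42"));
```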
## Connections

The `JmsTemplate` requires a reference to a `ConnectionFactory`. The `ConnectionFactory` is part of the JMS specification and serves as the entry point for working with JMS. It is used by the client application as a factory to create connections with the JMS provider and encapsulates various configuration parameters, many of which are vendor-specific, such as SSL configuration options.

When using JMS inside an EJB, the vendor provides implementations of the JMS interfaces so that they can participate in declarative transaction management and perform pooling of connections and sessions. In order to use this implementation, Jakarta EE containers typically require that you declare a JMS connection factory as a `resource-ref` inside the EJB or servlet deployment descriptors. To ensure the use of these features with the `JmsTemplate` inside an EJB, the client application should ensure that it references the managed implementation of the `ConnectionFactory`.

### Caching Messaging Resources

The standard API involves creating many intermediate objects. To send a message, the following 'API' walk is performed:

> ConnectionFactory->Connection->Session->MessageProducer->send

Between the `ConnectionFactory` and the `Send` operation, three intermediate objects are created and destroyed. To optimize the resource usage and increase performance, Spring provides two implementations of `ConnectionFactory`.

Spring provides an implementation of the `ConnectionFactory` interface, `SingleConnectionFactory`, that returns the same `Connection` on all `createConnection()` calls and ignores calls to `close()`. This is useful for testing and standalone environments so that the same connection can be used for multiple `JmsTemplate` calls that may span any number of transactions. `SingleConnectionFactory` takes a reference to a standard `ConnectionFactory` that would typically come from JNDI.

`CachingConnectionFactory` extends the functionality of `SingleConnectionFactory` and adds the caching of `Session`, `MessageProducer`, and `MessageConsumer` instances. The initial cache size is set to `1`. You can use the `sessionCacheSize` property to increase the number of cached sessions. Note that the number of actual cached sessions is more than that number, as sessions are cached based on their acknowledgment mode, so there can be up to four cached session instances (one for each acknowledgment mode) when `sessionCacheSize` is set to one. `MessageProducer` and `MessageConsumer` instances are cached within their owning session and also take into account the unique properties of the producers and consumers when caching. MessageProducers are cached based on their destination. MessageConsumers are cached based on a key composed of the destination, selector, noLocal delivery flag, and the durable subscription name (if creating durable consumers).
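A minimal wiring sketch (the target `vendorConnectionFactory` is an assumed, broker-specific factory):

```
// Cache sessions (and, within them, producers and consumers) around the vendor factory
CachingConnectionFactory cachingFactory = new CachingConnectionFactory(vendorConnectionFactory);
cachingFactory.setSessionCacheSize(10); // applies per acknowledgment mode
JmsTemplate template = new JmsTemplate(cachingFactory);
```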
## Destination Management

Destinations, like `ConnectionFactory` instances, are JMS administered objects that you can store and retrieve in JNDI. When configuring a Spring application context, you can use the JNDI factory class `JndiObjectFactoryBean` or `<jee:jndi-lookup>` to perform dependency injection on your object's references to JMS destinations. However, this strategy is often cumbersome if there are a large number of destinations in the application or if there are advanced destination management features unique to the JMS provider. Examples of such advanced destination management include the creation of dynamic destinations or support for a hierarchical namespace of destinations.

The `JmsTemplate` delegates the resolution of a destination name to a JMS destination object that implements the `DestinationResolver` interface. `DynamicDestinationResolver` is the default implementation used by `JmsTemplate` and accommodates resolving dynamic destinations. A `JndiDestinationResolver` is also provided to act as a service locator for destinations contained in JNDI and optionally falls back to the behavior contained in `DynamicDestinationResolver`.

Quite often, the destinations used in a JMS application are only known at runtime and, therefore, cannot be administratively created when the application is deployed. This is often because there is shared application logic between interacting system components that create destinations at runtime according to a well-known naming convention. Even though the creation of dynamic destinations is not part of the JMS specification, most vendors have provided this functionality. Dynamic destinations are created with a user-defined name, which differentiates them from temporary destinations, and are often not registered in JNDI. The API used to create dynamic destinations varies from provider to provider, since the properties associated with the destination are vendor-specific. However, a simple implementation choice that is sometimes made by vendors is to disregard the warnings in the JMS specification and to use the `TopicSession` method `createTopic(String topicName)` or the `QueueSession` method `createQueue(String queueName)` to create a new destination with default destination properties. Depending on the vendor implementation, `DynamicDestinationResolver` can then also create a physical destination instead of only resolving one.

The boolean property `pubSubDomain` is used to configure the `JmsTemplate` with knowledge of what JMS domain is being used. By default, the value of this property is false, indicating that the point-to-point domain, `Queues`, is to be used. This property (used by `JmsTemplate`) determines the behavior of dynamic destination resolution through implementations of the `DestinationResolver` interface.

You can also configure the `JmsTemplate` with a default destination through the property `defaultDestination`. The default destination is used with send and receive operations that do not refer to a specific destination.

## Message Listener Containers

One of the most common uses of JMS messages in the EJB world is to drive message-driven beans (MDBs). Spring offers a solution to create message-driven POJOs (MDPs) in a way that does not tie a user to an EJB container. (See Asynchronous reception: Message-Driven POJOs for detailed coverage of Spring's MDP support.) Since Spring Framework 4.1, endpoint methods can be annotated with `@JmsListener` — see Annotation-driven Listener Endpoints for more details.

A message listener container is used to receive messages from a JMS message queue and drive the `MessageListener` that is injected into it. The listener container is responsible for all threading of message reception and dispatches into the listener for processing. A message listener container is the intermediary between an MDP and a messaging provider and takes care of registering to receive messages, participating in transactions, resource acquisition and release, exception conversion, and so on. This lets you write the (possibly complex) business logic associated with receiving a message (and possibly respond to it) and delegates boilerplate JMS infrastructure concerns to the framework.

There are two standard JMS message listener containers packaged with Spring, each with its specialized feature set.
### `SimpleMessageListenerContainer`

This message listener container is the simpler of the two standard flavors. It creates a fixed number of JMS sessions and consumers at startup, registers the listener by using the standard JMS `MessageConsumer.setMessageListener()` method, and leaves it up to the JMS provider to perform listener callbacks. This variant does not allow for dynamic adaption to runtime demands or for participation in externally managed transactions. Compatibility-wise, it stays very close to the spirit of the standalone JMS specification, but is generally not compatible with Jakarta EE's JMS restrictions.

### `DefaultMessageListenerContainer`

This message listener container is used in most cases. In contrast to `SimpleMessageListenerContainer`, this container variant allows for dynamic adaptation to runtime demands and is able to participate in externally managed transactions. Each received message is registered with an XA transaction when configured with a `JtaTransactionManager`. As a result, processing may take advantage of XA transaction semantics. This listener container strikes a good balance between low requirements on the JMS provider, advanced functionality (such as participation in externally managed transactions), and compatibility with Jakarta EE environments.

You can customize the cache level of the container. Note that, when no caching is enabled, a new connection and a new session is created for each message reception. Combining this with a non-durable subscription under high loads may lead to message loss. Make sure to use a proper cache level in such a case. This container also has recoverable capabilities when the broker goes down. By default, a simple `BackOff` implementation retries every five seconds. You can specify a custom `BackOff` implementation for more fine-grained recovery options. See `ExponentialBackOff` for an example.

## Transaction Management

Spring provides a `JmsTransactionManager` that manages transactions for a single JMS `ConnectionFactory`. This lets JMS applications leverage the managed-transaction features of Spring, as described in the Transaction Management section of the Data Access chapter. The `JmsTransactionManager` performs local resource transactions, binding a JMS Connection/Session pair from the specified `ConnectionFactory` to the thread. `JmsTemplate` automatically detects such transactional resources and operates on them accordingly.

In a Jakarta EE environment, the `ConnectionFactory` pools Connection and Session instances, so those resources are efficiently reused across transactions. In a standalone environment, using Spring's `SingleConnectionFactory` results in a shared JMS `Connection`, with each transaction having its own independent `Session`. Alternatively, consider the use of a provider-specific pooling adapter, such as ActiveMQ's `PooledConnectionFactory` class.
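A minimal configuration sketch (assuming a `connectionFactory` bean is defined elsewhere):

```
// Enables local JMS transactions; JmsTemplate detects and joins them automatically
@Bean
public JmsTransactionManager jmsTransactionManager(ConnectionFactory connectionFactory) {
	return new JmsTransactionManager(connectionFactory);
}
```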
You can also use `JmsTemplate` with the `JtaTransactionManager` and an XA-capable JMS `ConnectionFactory` to perform distributed transactions. Note that this requires the use of a JTA transaction manager as well as a properly XA-configured ConnectionFactory. (Check your Jakarta EE server's or JMS provider's documentation.)

Reusing code across a managed and unmanaged transactional environment can be confusing when using the JMS API to create a `Session` from a `Connection`. This is because the JMS API has only one factory method to create a `Session`, and it requires values for the transaction and acknowledgment modes. In a managed environment, setting these values is the responsibility of the environment's transactional infrastructure, so these values are ignored by the vendor's wrapper to the JMS Connection. When you use the `JmsTemplate` in an unmanaged environment, you can specify these values through the use of the properties `sessionTransacted` and `sessionAcknowledgeMode`. When you use a `PlatformTransactionManager` with `JmsTemplate`, the template is always given a transactional JMS `Session`.

## Sending a Message

The `JmsTemplate` contains many convenience methods to send a message. Some send methods specify the destination by using a `jakarta.jms.Destination` object, and others specify the destination by using a `String` in a JNDI lookup. The `send` method that takes no destination argument uses the default destination.

The following example uses the `MessageCreator` callback to create a text message from the supplied `Session` object:

```
import jakarta.jms.ConnectionFactory;
import jakarta.jms.JMSException;
import jakarta.jms.Message;
import jakarta.jms.Queue;
import jakarta.jms.Session;

import org.springframework.jms.core.MessageCreator;
import org.springframework.jms.core.JmsTemplate;

public class JmsQueueSender {

	private JmsTemplate jmsTemplate;
	private Queue queue;

	public void setConnectionFactory(ConnectionFactory cf) {
		this.jmsTemplate = new JmsTemplate(cf);
	}

	public void setQueue(Queue queue) {
		this.queue = queue;
	}

	public void simpleSend() {
		this.jmsTemplate.send(this.queue, new MessageCreator() {
			public Message createMessage(Session session) throws JMSException {
				return session.createTextMessage("hello queue world");
			}
		});
	}
}
```

In the preceding example, the `JmsTemplate` is constructed by passing a reference to a `ConnectionFactory`. As an alternative, a zero-argument constructor and a `connectionFactory` setter are provided and can be used for constructing the instance in JavaBean style (using a `BeanFactory` or plain Java code). Alternatively, consider deriving from Spring's `JmsGatewaySupport` convenience base class, which provides pre-built bean properties for JMS configuration.

The `send(String destinationName, MessageCreator creator)` method lets you send a message by using the string name of the destination. If these names are registered in JNDI, you should set the `destinationResolver` property of the template to an instance of `JndiDestinationResolver`.

If you created the `JmsTemplate` and specified a default destination, the `send(MessageCreator c)` method sends a message to that destination.

## Using Message Converters

To facilitate the sending of domain model objects, the `JmsTemplate` has various send methods that take a Java object as an argument for a message's data content. The overloaded `convertAndSend()` and `receiveAndConvert()` methods in `JmsTemplate` delegate the conversion process to an instance of the `MessageConverter` interface. This interface defines a simple contract to convert between Java objects and JMS messages. The default implementation (`SimpleMessageConverter`) supports conversion between `String` and `TextMessage`, `byte[]` and `BytesMessage`, and `java.util.Map` and `MapMessage`. By using the converter, you and your application code can focus on the business object that is being sent or received through JMS and not be concerned with the details of how it is represented as a JMS message.

The sandbox currently includes a `MapMessageConverter`, which uses reflection to convert between a JavaBean and a `MapMessage`. Other popular implementation choices you might implement yourself are converters that use an existing XML marshalling package (such as JAXB or XStream) to create a `TextMessage` that represents the object.
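For instance, a minimal round trip through the default converter (`testQueue` as in the examples above):

```
// SimpleMessageConverter maps String <-> TextMessage transparently
jmsTemplate.convertAndSend("testQueue", "Hello queue world");
String text = (String) jmsTemplate.receiveAndConvert("testQueue");
```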
To accommodate the setting of a message's properties, headers, and body that cannot be generically encapsulated inside a converter class, the `MessagePostProcessor` interface gives you access to the message after it has been converted but before it is sent. The following example shows how to modify a message header and a property after a `java.util.Map` is converted to a message:

```
public void sendWithConversion() {
	Map<String, Object> map = new HashMap<>();
	map.put("Name", "Mark");
	map.put("Age", 47);
	jmsTemplate.convertAndSend("testQueue", map, new MessagePostProcessor() {
		public Message postProcessMessage(Message message) throws JMSException {
			message.setIntProperty("AccountID", 1234);
			message.setJMSCorrelationID("123-00001");
			return message;
		}
	});
}
```

This results in a message of the following form:

> MapMessage={ Header={ ... standard headers ... CorrelationID={123-00001} } Properties={ AccountID={Integer:1234} } Fields={ Name={String:Mark} Age={Integer:47} } }

Using `SessionCallback` and `ProducerCallback`

While the send operations cover many common usage scenarios, you might sometimes want to perform multiple operations on a JMS `Session` or `MessageProducer`. The `SessionCallback` and `ProducerCallback` expose the JMS `Session` and `Session`/`MessageProducer` pair, respectively. The `execute()` methods on `JmsTemplate` run these callback methods.

## Receiving a Message

This section describes how to receive messages with JMS in Spring.

### Synchronous Reception

While JMS is typically associated with asynchronous processing, you can consume messages synchronously. The overloaded `receive(..)` methods provide this functionality. During a synchronous receive, the calling thread blocks until a message becomes available. This can be a dangerous operation, since the calling thread can potentially be blocked indefinitely. The `receiveTimeout` property specifies how long the receiver should wait before giving up waiting for a message.

### Asynchronous reception: Message-Driven POJOs

Spring also supports annotated-listener endpoints through the use of the `@JmsListener` annotation.

In a fashion similar to a Message-Driven Bean (MDB) in the EJB world, the Message-Driven POJO (MDP) acts as a receiver for JMS messages. The one restriction (but see Using `MessageListenerAdapter`) on an MDP is that it must implement the `jakarta.jms.MessageListener` interface. Note that, if your POJO receives messages on multiple threads, it is important to ensure that your implementation is thread-safe.

The following example shows a simple implementation of an MDP:

```
import jakarta.jms.JMSException;
import jakarta.jms.Message;
import jakarta.jms.MessageListener;
import jakarta.jms.TextMessage;

public class ExampleListener implements MessageListener {

	public void onMessage(Message message) {
		if (message instanceof TextMessage textMessage) {
			try {
				System.out.println(textMessage.getText());
			}
			catch (JMSException ex) {
				throw new RuntimeException(ex);
			}
		}
		else {
			throw new IllegalArgumentException("Message must be of type TextMessage");
		}
	}
}
```

Once you have implemented your `MessageListener`, it is time to create a message listener container. The following example shows how to define and configure one of the message listener containers that ships with Spring (in this case, `DefaultMessageListenerContainer`):

```
<!-- this is the Message Driven POJO (MDP) -->
<bean id="messageListener" class="jmsexample.ExampleListener"/>

<!-- and this is the message listener container -->
<bean id="jmsContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
	<property name="connectionFactory" ref="connectionFactory"/>
	<property name="destination" ref="destination"/>
	<property name="messageListener" ref="messageListener"/>
</bean>
```

See the Spring javadoc of the various message listener containers (all of which implement MessageListenerContainer) for a full description of the features supported by each implementation.
### Using `SessionAwareMessageListener`

The `SessionAwareMessageListener` interface is a Spring-specific interface that provides a similar contract to the JMS `MessageListener` interface but also gives the message-handling method access to the JMS `Session` from which the `Message` was received. The following listing shows the definition of the `SessionAwareMessageListener` interface:

```
package org.springframework.jms.listener;

public interface SessionAwareMessageListener {

	void onMessage(Message message, Session session) throws JMSException;
}
```

You can choose to have your MDPs implement this interface (in preference to the standard JMS `MessageListener` interface) if you want your MDPs to be able to respond to any received messages (by using the `Session` supplied in the `onMessage(Message, Session)` method). All of the message listener container implementations that ship with Spring have support for MDPs that implement either the `MessageListener` or `SessionAwareMessageListener` interface. Classes that implement the `SessionAwareMessageListener` interface come with the caveat that they are then tied to Spring through the interface. The choice of whether or not to use it is left entirely up to you as an application developer or architect.

Note that the `onMessage(..)` method of the `SessionAwareMessageListener` interface throws `JMSException`. In contrast to the standard JMS `MessageListener` interface, when using the `SessionAwareMessageListener` interface, it is the responsibility of the client code to handle any thrown exceptions.

### Using `MessageListenerAdapter`

The `MessageListenerAdapter` class is the final component in Spring's asynchronous messaging support. In a nutshell, it lets you expose almost any class as an MDP (though there are some constraints). Consider the following interface definition:

```
public interface MessageDelegate {

	void handleMessage(String message);

	void handleMessage(Map message);

	void handleMessage(byte[] message);

	void handleMessage(Serializable message);
}
```

Notice that, although the interface extends neither the `MessageListener` nor the `SessionAwareMessageListener` interface, you can still use it as an MDP by using the `MessageListenerAdapter` class. Notice also how the various message handling methods are strongly typed according to the contents of the various `Message` types that they can receive and handle.

Now consider the following implementation of the `MessageDelegate` interface:

```
public class DefaultMessageDelegate implements MessageDelegate {
	// implementation elided for clarity...
}
```

In particular, note how the preceding implementation of the `MessageDelegate` interface (the `DefaultMessageDelegate` class) has no JMS dependencies at all. It truly is a POJO that we can make into an MDP through the following configuration:

```
<!-- this is the Message Driven POJO (MDP) -->
<bean id="messageListener" class="org.springframework.jms.listener.adapter.MessageListenerAdapter">
	<constructor-arg>
		<bean class="jmsexample.DefaultMessageDelegate"/>
	</constructor-arg>
</bean>
```

The next example shows another MDP that can handle only receiving JMS `TextMessage` messages. Notice how the message handling method is actually called `receive` (the name of the message handling method in a `MessageListenerAdapter` defaults to `handleMessage`), but it is configurable (as you can see later in this section). Notice also how the `receive(..)` method is strongly typed to receive and respond only to JMS `TextMessage` messages. The following listing shows the definition of the `TextMessageDelegate` interface:

```
public interface TextMessageDelegate {

	void receive(TextMessage message);
}
```

The following listing shows a class that implements the `TextMessageDelegate` interface:

```
public class DefaultTextMessageDelegate implements TextMessageDelegate {
	// implementation elided for clarity...
}
```
The configuration of the attendant `MessageListenerAdapter` would then be as follows:

```
<bean id="messageListener" class="org.springframework.jms.listener.adapter.MessageListenerAdapter">
	<constructor-arg>
		<bean class="jmsexample.DefaultTextMessageDelegate"/>
	</constructor-arg>
	<property name="defaultListenerMethod" value="receive"/>
	<!-- we don't want automatic message context extraction -->
	<property name="messageConverter">
		<null/>
	</property>
</bean>
```

Note that, if the `messageListener` receives a JMS `Message` of a type other than `TextMessage`, an `IllegalStateException` is thrown (and subsequently swallowed).

Another of the capabilities of the `MessageListenerAdapter` class is the ability to automatically send back a response `Message` if a handler method returns a non-void value. Consider the following interface and class:

```
public interface ResponsiveTextMessageDelegate {

	// notice the return type...
	String receive(TextMessage message);
}
```

```
public class DefaultResponsiveTextMessageDelegate implements ResponsiveTextMessageDelegate {
	// implementation elided for clarity...
}
```

If you use the `DefaultResponsiveTextMessageDelegate` in conjunction with a `MessageListenerAdapter`, any non-null value that is returned from the execution of the `receive(..)` method is (in the default configuration) converted into a `TextMessage`. The resulting `TextMessage` is then sent to the `Destination` (if one exists) defined in the JMS `Reply-To` property of the original `Message` or to the default `Destination` set on the `MessageListenerAdapter` (if one has been configured). If no `Destination` is found, an `InvalidDestinationException` is thrown (note that this exception is not swallowed and propagates up the call stack).

## Processing Messages Within Transactions

Invoking a message listener within a transaction requires only reconfiguration of the listener container. You can activate local resource transactions through the `sessionTransacted` flag on the listener container definition. Each message listener invocation then operates within an active JMS transaction, with message reception rolled back in case of listener execution failure. Sending a response message (through `SessionAwareMessageListener`) is part of the same local transaction, but any other resource operations (such as database access) operate independently. This usually requires duplicate message detection in the listener implementation, to cover the case where database processing has committed but message processing failed to commit.

Consider the following bean definition:

```
<bean id="jmsContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
	<property name="connectionFactory" ref="connectionFactory"/>
	<property name="destination" ref="destination"/>
	<property name="messageListener" ref="messageListener"/>
	<property name="sessionTransacted" value="true"/>
</bean>
```

To participate in an externally managed transaction, you need to configure a transaction manager and use a listener container that supports externally managed transactions (typically, `DefaultMessageListenerContainer`).

To configure a message listener container for XA transaction participation, you want to configure a `JtaTransactionManager` (which, by default, delegates to the Jakarta EE server's transaction subsystem). Note that the underlying JMS `ConnectionFactory` needs to be XA-capable and properly registered with your JTA transaction coordinator. (Check your Jakarta EE server's configuration of JNDI resources.) This lets message reception as well as (for example) database access be part of the same transaction (with unified commit semantics, at the expense of XA transaction log overhead).
The following bean definition creates a transaction manager:

```
<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>
```

Then we need to add it to our earlier container configuration. The container takes care of the rest. The following example shows how to do so:

```
<bean id="jmsContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
	<property name="connectionFactory" ref="connectionFactory"/>
	<property name="destination" ref="destination"/>
	<property name="messageListener" ref="messageListener"/>
	<property name="transactionManager" ref="transactionManager"/> (1)
</bean>
```

1. Our transaction manager.

## Support for JCA Message Endpoints

Beginning with version 2.5, Spring also provides support for a JCA-based `MessageListener` container. The `JmsMessageEndpointManager` tries to automatically determine the `ActivationSpec` class name from the provider's `ResourceAdapter` class name. Therefore, it is typically possible to provide Spring's generic `JmsActivationSpecConfig`, as the following example shows:

```
<bean class="org.springframework.jms.listener.endpoint.JmsMessageEndpointManager">
	<property name="resourceAdapter" ref="resourceAdapter"/>
	<property name="activationSpecConfig">
		<bean class="org.springframework.jms.listener.endpoint.JmsActivationSpecConfig">
			<property name="destinationName" value="myQueue"/>
		</bean>
	</property>
	<property name="messageListener" ref="myMessageListener"/>
</bean>
```

Alternatively, you can set up a `JmsMessageEndpointManager` with a given `ActivationSpec` object. The `ActivationSpec` object may also come from a JNDI lookup (using `<jee:jndi-lookup>`). The following example shows how to do so:

```
<bean class="org.springframework.jms.listener.endpoint.JmsMessageEndpointManager">
	<property name="resourceAdapter" ref="resourceAdapter"/>
	<property name="activationSpec">
		<bean class="org.apache.activemq.ra.ActiveMQActivationSpec">
			<property name="destination" value="myQueue"/>
			<property name="destinationType" value="jakarta.jms.Queue"/>
		</bean>
	</property>
	<property name="messageListener" ref="myMessageListener"/>
</bean>
```

Using Spring's `ResourceAdapterFactoryBean`, you can configure the target `ResourceAdapter` locally, as the following example shows:

```
<bean id="resourceAdapter" class="org.springframework.jca.support.ResourceAdapterFactoryBean">
	<property name="resourceAdapter">
		<bean class="org.apache.activemq.ra.ActiveMQResourceAdapter">
			<property name="serverUrl" value="tcp://localhost:61616"/>
		</bean>
	</property>
	<property name="workManager">
		<bean class="org.springframework.jca.work.SimpleTaskWorkManager"/>
	</property>
</bean>
```

The specified `WorkManager` can also point to an environment-specific thread pool — typically through a `SimpleTaskWorkManager` instance's `asyncTaskExecutor` property. Consider defining a shared thread pool for all your `ResourceAdapter` instances if you happen to use multiple adapters.

In some environments, you can instead obtain the entire `ResourceAdapter` object from JNDI (by using `<jee:jndi-lookup>`). The Spring-based message listeners can then interact with the server-hosted `ResourceAdapter`, which also uses the server's built-in `WorkManager`.

See the javadoc for `JmsMessageEndpointManager` for more details.

Spring also provides a generic JCA message endpoint manager that is not tied to JMS: `org.springframework.jca.endpoint.GenericMessageEndpointManager`. This component allows for using any message listener type (such as a JMS `MessageListener`) and any provider-specific `ActivationSpec` object.
See your JCA provider's documentation to find out about the actual capabilities of your connector, and see the `GenericMessageEndpointManager` javadoc for the Spring-specific configuration details.

JCA-based message endpoint management is very analogous to EJB 2.1 Message-Driven Beans. It uses the same underlying resource provider contract. As with EJB 2.1 MDBs, you can use any message listener interface supported by your JCA provider in the Spring context as well. Spring nevertheless provides explicit "convenience" support for JMS, because JMS is the most common endpoint API used with the JCA endpoint management contract.

## Annotation-driven Listener Endpoints

The easiest way to receive a message asynchronously is to use the annotated listener endpoint infrastructure. In a nutshell, it lets you expose a method of a managed bean as a JMS listener endpoint. The following example shows how to use it:

```
@Component
public class MyService {

	@JmsListener(destination = "myDestination")
	public void processOrder(String data) { ... }
}
```

The idea of the preceding example is that, whenever a message is available on the `myDestination` queue, the `processOrder` method is invoked accordingly (in this case, with the content of the JMS message, similar to what the `MessageListenerAdapter` provides).

The annotated endpoint infrastructure creates a message listener container behind the scenes for each annotated method, by using a `JmsListenerContainerFactory`. Such a container is not registered against the application context but can be easily located for management purposes by using the `JmsListenerEndpointRegistry` bean.

`@JmsListener` is a repeatable annotation on Java 8, so you can associate several JMS destinations with the same method by adding additional `@JmsListener` declarations to it.

## Enable Listener Endpoint Annotations

To enable support for `@JmsListener` annotations, you can add `@EnableJms` to one of your `@Configuration` classes, as the following example shows:

```
@Configuration
@EnableJms
public class AppConfig {

	@Bean
	public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
		DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
		factory.setConnectionFactory(connectionFactory());
		factory.setDestinationResolver(destinationResolver());
		factory.setSessionTransacted(true);
		factory.setConcurrency("3-10");
		return factory;
	}
}
```

By default, the infrastructure looks for a bean named `jmsListenerContainerFactory` as the source for the factory to use to create message listener containers. In this case (and ignoring the JMS infrastructure setup), you can invoke the `processOrder` method with a core pool size of three threads and a maximum pool size of ten threads.

You can customize the listener container factory to use for each annotation, or you can configure an explicit default by implementing the `JmsListenerConfigurer` interface. The default is required only if at least one endpoint is registered without a specific container factory. See the javadoc of `JmsListenerConfigurer` for details and examples.

If you prefer XML configuration, you can use the `<jms:annotation-driven>` element, as the following example shows:

```
<jms:annotation-driven/>

<bean id="jmsListenerContainerFactory"
		class="org.springframework.jms.config.DefaultJmsListenerContainerFactory">
	<property name="connectionFactory" ref="connectionFactory"/>
	<property name="destinationResolver" ref="destinationResolver"/>
	<property name="sessionTransacted" value="true"/>
	<property name="concurrency" value="3-10"/>
</bean>
```

## Programmatic Endpoint Registration

`JmsListenerEndpoint` provides a model of a JMS endpoint and is responsible for configuring the container for that model.
The infrastructure lets you programmatically configure endpoints in addition to the ones that are detected by the `@JmsListener` annotation. The following example shows how to do so:

```
@Configuration
@EnableJms
public class AppConfig implements JmsListenerConfigurer {

	@Override
	public void configureJmsListeners(JmsListenerEndpointRegistrar registrar) {
		SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
		endpoint.setId("myJmsEndpoint");
		endpoint.setDestination("anotherQueue");
		endpoint.setMessageListener(message -> {
			// processing
		});
		registrar.registerEndpoint(endpoint);
	}
}
```

In the preceding example, we used `SimpleJmsListenerEndpoint`, which provides the actual `MessageListener` to invoke. However, you could also build your own endpoint variant to describe a custom invocation mechanism.

Note that you could skip the use of `@JmsListener` altogether and programmatically register only your endpoints through `JmsListenerConfigurer`.

## Annotated Endpoint Method Signature

So far, we have been injecting a simple `String` in our endpoint, but it can actually have a very flexible method signature. In the following example, we rewrite it to inject the `Order` with a custom header:

```
@Component
public class MyService {

	@JmsListener(destination = "myDestination")
	public void processOrder(Order order, @Header("order_type") String orderType) { ... }
}
```

The main elements you can inject in JMS listener endpoints are as follows:

* The raw `jakarta.jms.Message` or any of its subclasses (provided that it matches the incoming message type).
* The `jakarta.jms.Session` for optional access to the native JMS API (for example, for sending a custom reply).
* The `org.springframework.messaging.Message` that represents the incoming JMS message. Note that this message holds both the custom and the standard headers (as defined by `JmsHeaders`).
* `@Header`-annotated method arguments to extract a specific header value, including standard JMS headers.
* A `@Headers`-annotated argument that must also be assignable to `java.util.Map` for getting access to all headers.
* A non-annotated element that is not one of the supported types (`Message` or `Session`) is considered to be the payload. You can make that explicit by annotating the parameter with `@Payload`. You can also turn on validation by adding an extra `@Valid`.

The ability to inject Spring's `Message` abstraction is particularly useful to benefit from all the information stored in the transport-specific message without relying on transport-specific API. The following example shows how to do so:

```
@JmsListener(destination = "myDestination")
public void processOrder(Message<Order> order) { ... }
```

Handling of method arguments is provided by `DefaultMessageHandlerMethodFactory`, which you can further customize to support additional method arguments. You can customize the conversion and validation support there as well.

For instance, if we want to make sure our `Order` is valid before processing it, we can annotate the payload with `@Valid` and configure the necessary validator, as the following example shows:

```
@Configuration
@EnableJms
public class AppConfig implements JmsListenerConfigurer {

	@Override
	public void configureJmsListeners(JmsListenerEndpointRegistrar registrar) {
		registrar.setMessageHandlerMethodFactory(myHandlerMethodFactory());
	}

	@Bean
	public DefaultMessageHandlerMethodFactory myHandlerMethodFactory() {
		DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
		factory.setValidator(myValidator());
		return factory;
	}
}
```
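The listener method itself then simply declares the validated payload. A short sketch, with `Order` as the assumed domain type:

```
// The payload is validated before the method is invoked
@JmsListener(destination = "myDestination")
public void processOrder(@Valid @Payload Order order) {
	// only reached if the Order passed validation
}
```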
## Response Management

The existing support in `MessageListenerAdapter` already lets your method have a non-`void` return type. When that is the case, the result of the invocation is encapsulated in a `jakarta.jms.Message`, sent either to the destination specified in the `JMSReplyTo` header of the original message or to the default destination configured on the listener. You can now set that default destination by using the `@SendTo` annotation of the messaging abstraction.

Assuming that our `processOrder` method should now return an `OrderStatus`, we can write it to automatically send a response, as the following example shows:

```
@JmsListener(destination = "myDestination")
@SendTo("status")
public OrderStatus processOrder(Order order) {
	// order processing
	return status;
}
```

If you have several `@JmsListener`-annotated methods, you can also place the `@SendTo` annotation at the class level to share a default reply destination.

If you need to set additional headers in a transport-independent manner, you can return a `Message` instead, with a method similar to the following:

```
@JmsListener(destination = "myDestination")
@SendTo("status")
public Message<OrderStatus> processOrder(Order order) {
	// order processing
	return MessageBuilder
			.withPayload(status)
			.setHeader("code", 1234)
			.build();
}
```

If you need to compute the response destination at runtime, you can encapsulate your response in a `JmsResponse` instance that also provides the destination to use at runtime. We can rewrite the previous example as follows:

```
@JmsListener(destination = "myDestination")
public JmsResponse<Message<OrderStatus>> processOrder(Order order) {
	// order processing
	Message<OrderStatus> response = MessageBuilder
			.withPayload(status)
			.setHeader("code", 1234)
			.build();
	return JmsResponse.forQueue(response, "status");
}
```

Finally, if you need to specify some QoS values for the response, such as the priority or the time to live, you can configure the `JmsListenerContainerFactory` accordingly, as the following example shows:

```
@Configuration
@EnableJms
public class AppConfig {

	@Bean
	public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
		DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
		factory.setConnectionFactory(connectionFactory());
		QosSettings replyQosSettings = new QosSettings();
		replyQosSettings.setPriority(2);
		replyQosSettings.setTimeToLive(10000);
		factory.setReplyQosSettings(replyQosSettings);
		return factory;
	}
}
```

## JMS Namespace Support

Spring provides an XML namespace for simplifying JMS configuration. To use the JMS namespace elements, you need to reference the JMS schema (the `jms` namespace, `http://www.springframework.org/schema/jms`) in your `<beans>` declaration.

The namespace consists of three top-level elements: `<annotation-driven/>`, `<listener-container/>`, and `<jca-listener-container/>`. `<annotation-driven/>` enables the use of annotation-driven listener endpoints. `<listener-container/>` and `<jca-listener-container/>` define shared listener container configuration and can contain `<listener/>` child elements. The following example shows a basic configuration for two listeners:

```
<jms:listener-container>

	<jms:listener destination="queue.orders" ref="orderService" method="placeOrder"/>

	<jms:listener destination="queue.confirmations" ref="confirmationLogger" method="log"/>

</jms:listener-container>
```

The preceding example is equivalent to creating two distinct listener container bean definitions and two distinct `MessageListenerAdapter` bean definitions, as shown in Using `MessageListenerAdapter`. In addition to the attributes shown in the preceding example, the `listener` element can contain several optional ones (such as `id`, `subscription`, `selector`, `response-destination`, and `concurrency`).

The `<listener-container/>` element also accepts several optional attributes. This allows for customization of the various strategies (for example, `taskExecutor` and `destinationResolver`) as well as basic JMS settings and resource references. By using these attributes, you can define highly customized listener containers while still benefiting from the convenience of the namespace.
You can automatically expose such settings as a `JmsListenerContainerFactory` by specifying the `id` of the bean to expose through the `factory-id` attribute, as the following example shows:

```
<jms:listener-container connection-factory="myConnectionFactory"
		task-executor="myTaskExecutor"
		destination-resolver="myDestinationResolver"
		transaction-manager="myTransactionManager"
		concurrency="10">

	<jms:listener destination="queue.orders" ref="orderService" method="placeOrder"/>

</jms:listener-container>
```

See the class-level javadoc of `AbstractMessageListenerContainer` and its concrete subclasses for more details on the individual properties. The javadoc also provides a discussion of transaction choices and message redelivery scenarios.

Configuring a JCA-based listener container with the `jms` schema support is very similar, as the following example shows:

```
<jms:jca-listener-container resource-adapter="myResourceAdapter"
		destination-resolver="myDestinationResolver"
		transaction-manager="myTransactionManager"
		concurrency="10">

	<jms:listener destination="queue.orders" ref="myMessageListener"/>

</jms:jca-listener-container>
```

The JMX (Java Management Extensions) support in Spring provides features that let you easily and transparently integrate your Spring application into a JMX infrastructure. Specifically, Spring's JMX support provides four core features:

* The automatic registration of any Spring bean as a JMX MBean.
* A flexible mechanism for controlling the management interface of your beans.
* The declarative exposure of MBeans over remote, JSR-160 connectors.
* The simple proxying of both local and remote MBean resources.

These features are designed to work without coupling your application components to either Spring or JMX interfaces and classes. Indeed, for the most part, your application classes need not be aware of either Spring or JMX in order to take advantage of the Spring JMX features. The core class in Spring's JMX framework is the `MBeanExporter`. This class is responsible for taking your Spring beans and registering them with a JMX `MBeanServer`. For example, consider the following class:

```
public class JmxTestBean implements IJmxTestBean {

	private String name;
	private int age;

	public int getAge() {
		return this.age;
	}

	public void setAge(int age) {
		this.age = age;
	}

	public void setName(String name) {
		this.name = name;
	}

	public int add(int x, int y) {
		return x + y;
	}

	// other methods omitted
}
```

To expose the properties and methods of this bean as attributes and operations of an MBean, you can configure an instance of the `MBeanExporter` class in your configuration file and pass in the bean, as the following example shows:

```
<beans>
	<!-- this bean must not be lazily initialized if the exporting is to happen -->
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter" lazy-init="false">
		<property name="beans">
			<map>
				<entry key="bean:name=testBean1" value-ref="testBean"/>
			</map>
		</property>
	</bean>

	<bean id="testBean" class="org.springframework.jmx.JmxTestBean">
		<property name="name" value="TEST"/>
		<property name="age" value="100"/>
	</bean>
</beans>
```

The pertinent bean definition from the preceding configuration snippet is the `exporter` bean. The `beans` property tells the `MBeanExporter` exactly which of your beans must be exported to the JMX `MBeanServer`. In the default configuration, the key of each entry in the `beans` `Map` is used as the `ObjectName` for the bean referenced by the corresponding entry value. You can change this behavior, as described in Controlling `ObjectName` Instances for Your Beans. With this configuration, the `testBean` bean is exposed as an MBean under the `ObjectName` `bean:name=testBean1`.
By default, all `public` properties of the bean are exposed as attributes and all `public` methods (except those inherited from the `Object` class) are exposed as operations. `MBeanExporter` is a `Lifecycle` bean (see Startup and Shutdown Callbacks). By default, MBeans are exported as late as possible during the application lifecycle. You can configure the phase at which the export happens or disable automatic registration by setting the `autoStartup` flag.

## Creating an `MBeanServer`

The configuration shown in the preceding section assumes that the application is running in an environment that has one (and only one) `MBeanServer` already running. In this case, Spring tries to locate the running `MBeanServer` and register your beans with that server (if any). This behavior is useful when your application runs inside a container (such as Tomcat or IBM WebSphere) that has its own `MBeanServer`. However, this approach is of no use in a standalone environment or when running inside a container that does not provide an `MBeanServer`. To address this, you can create an `MBeanServer` instance declaratively by adding an instance of the `org.springframework.jmx.support.MBeanServerFactoryBean` class to your configuration. You can also ensure that a specific `MBeanServer` is used by setting the value of the `MBeanExporter` instance's `server` property to the `MBeanServer` value returned by an `MBeanServerFactoryBean`, as the following example shows:

```
<beans>

	<bean id="mbeanServer" class="org.springframework.jmx.support.MBeanServerFactoryBean"/>

	<!--
	this bean needs to be eagerly pre-instantiated in order for the exporting to occur;
	this means that it must not be marked as lazily initialized
	-->
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
		<property name="beans">
			<map>
				<entry key="bean:name=testBean1" value-ref="testBean"/>
			</map>
		</property>
		<property name="server" ref="mbeanServer"/>
	</bean>

</beans>
```

In the preceding example, an instance of `MBeanServer` is created by the `MBeanServerFactoryBean` and is supplied to the `MBeanExporter` through the `server` property. When you supply your own `MBeanServer` instance, the `MBeanExporter` does not try to locate a running `MBeanServer` and uses the supplied `MBeanServer` instance. For this to work correctly, you must have a JMX implementation on your classpath.

## Reusing an Existing `MBeanServer`

If no server is specified, the `MBeanExporter` tries to automatically detect a running `MBeanServer`. This works in most environments, where only one `MBeanServer` instance is used. However, when multiple instances exist, the exporter might pick the wrong server. In such cases, you should use the `MBeanServer` `agentId` to indicate which instance is to be used, as the following example shows:

```
<beans>
	<bean id="mbeanServer" class="org.springframework.jmx.support.MBeanServerFactoryBean">
		<!-- indicate to first look for a server -->
		<property name="locateExistingServerIfPossible" value="true"/>
		<!-- search for the MBeanServer instance with the given agentId -->
		<property name="agentId" value="MBeanServer_instance_agentId"/>
	</bean>

	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
		<property name="server" ref="mbeanServer"/>
		...
	</bean>
</beans>
```

For platforms or cases where the existing `MBeanServer` has a dynamic (or unknown) `agentId` that is retrieved through lookup methods, you should use the `factory-method` attribute, as the following example shows:

```
<beans>
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
		<property name="server">
			<!-- Custom MBeanServerLocator -->
			<bean class="platform.package.MBeanServerLocator" factory-method="locateMBeanServer"/>
		</property>
	</bean>

	<!-- other beans here -->

</beans>
```

## Lazily Initialized MBeans

If you configure a bean with an `MBeanExporter` that is also configured for lazy initialization, the `MBeanExporter` does not break this contract and avoids instantiating the bean. Instead, it registers a proxy with the `MBeanServer` and defers obtaining the bean from the container until the first invocation on the proxy occurs. This also affects `FactoryBean` resolution, where the `MBeanExporter` regularly introspects the produced object, effectively triggering `FactoryBean.getObject()`. To avoid this, mark the corresponding bean definition as lazy-init.

## Automatic Registration of MBeans

Any beans that are exported through the `MBeanExporter` and are already valid MBeans are registered as-is with the `MBeanServer` without further intervention from Spring. You can cause MBeans to be automatically detected by the `MBeanExporter` by setting the `autodetect` property to `true`, as the following example shows:

```
<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
	<property name="autodetect" value="true"/>
</bean>

<bean name="spring:mbean=true" class="org.springframework.jmx.export.TestDynamicMBean"/>
```

In the preceding example, the bean called `spring:mbean=true` is already a valid JMX MBean and is automatically registered by Spring. By default, a bean that is autodetected for JMX registration has its bean name used as the `ObjectName`. You can override this behavior, as detailed in Controlling `ObjectName` Instances for Your Beans.

## Controlling the Registration Behavior

Consider the scenario where a Spring `MBeanExporter` attempts to register an `MBean` with an `MBeanServer` by using the `ObjectName` `bean:name=testBean1`. If an `MBean` instance has already been registered under that same `ObjectName`, the default behavior is to fail (and throw an `InstanceAlreadyExistsException`). You can control exactly what happens when an `MBean` is registered with an `MBeanServer`. Spring's JMX support allows for three different registration behaviors to control what happens when the registration process finds that an `MBean` has already been registered under the same `ObjectName`. The following table summarizes these registration behaviors:

| Registration behavior | Explanation |
| --- | --- |
| `FAIL_ON_EXISTING` | This is the default behavior. If an `MBean` instance has already been registered under the same `ObjectName`, the `MBean` that is being registered is not registered, and an `InstanceAlreadyExistsException` is thrown. |
| `IGNORE_EXISTING` | If an `MBean` instance has already been registered under the same `ObjectName`, the `MBean` that is being registered is not registered, but no exception is thrown. |
| `REPLACE_EXISTING` | If an `MBean` instance has already been registered under the same `ObjectName`, the existing `MBean` is unregistered, and the new `MBean` is registered in its place. |

The values in the preceding table are defined as enums on the `RegistrationPolicy` class. If you want to change the default registration behavior, you need to set the value of the `registrationPolicy` property on your `MBeanExporter` definition to one of those values.
The following example shows how to change from the default registration behavior to the `REPLACE_EXISTING` behavior:

```
<beans>
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
		<property name="beans">
			<map>
				<entry key="bean:name=testBean1" value-ref="testBean"/>
			</map>
		</property>
		<property name="registrationPolicy" value="REPLACE_EXISTING"/>
	</bean>
</beans>
```

In the example in the preceding section, you had little control over the management interface of your bean. All of the `public` properties and methods of each exported bean were exposed as JMX attributes and operations, respectively. To exercise finer-grained control over exactly which properties and methods of your exported beans are actually exposed as JMX attributes and operations, Spring JMX provides a comprehensive and extensible mechanism for controlling the management interfaces of your beans.

## The `MBeanInfoAssembler` Interface

Behind the scenes, the `MBeanExporter` delegates to an implementation of the `org.springframework.jmx.export.assembler.MBeanInfoAssembler` interface, which is responsible for defining the management interface of each bean that is exposed. The default implementation, `org.springframework.jmx.export.assembler.SimpleReflectiveMBeanInfoAssembler`, defines a management interface that exposes all public properties and methods (as you saw in the examples in the preceding sections). Spring provides two additional implementations of the `MBeanInfoAssembler` interface that let you control the generated management interface by using either source-level metadata or any arbitrary interface.

## Using Source-level Metadata: Java Annotations

By using the `MetadataMBeanInfoAssembler`, you can define the management interfaces for your beans by using source-level metadata. The reading of metadata is encapsulated by the `org.springframework.jmx.export.metadata.JmxAttributeSource` interface. Spring JMX provides a default implementation that uses Java annotations, namely `org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource`. You must configure the `MetadataMBeanInfoAssembler` with an implementation instance of the `JmxAttributeSource` interface for it to function correctly (there is no default). To mark a bean for export to JMX, you should annotate the bean class with the `ManagedResource` annotation. You must mark each method you wish to expose as an operation with the `ManagedOperation` annotation and mark each property you wish to expose with the `ManagedAttribute` annotation. When marking properties, you can omit either the annotation of the getter or the setter to create a write-only or read-only attribute, respectively.
The following example shows the annotated version of the `JmxTestBean` class that we used in Creating an MBeanServer:

```
import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedOperationParameter;
import org.springframework.jmx.export.annotation.ManagedOperationParameters;
import org.springframework.jmx.export.annotation.ManagedResource;

@ManagedResource(
		objectName="bean:name=testBean4",
		description="My Managed Bean",
		log=true,
		logFile="jmx.log",
		currencyTimeLimit=15,
		persistPolicy="OnUpdate",
		persistPeriod=200,
		persistLocation="foo",
		persistName="bar")
public class AnnotationTestBean implements IJmxTestBean {

	private String name;
	private int age;

	@ManagedAttribute(description="The Age Attribute", currencyTimeLimit=15)
	public int getAge() {
		return this.age;
	}

	public void setAge(int age) {
		this.age = age;
	}

	@ManagedAttribute(description="The Name Attribute",
			currencyTimeLimit=20,
			defaultValue="bar",
			persistPolicy="OnUpdate")
	public void setName(String name) {
		this.name = name;
	}

	@ManagedAttribute(defaultValue="foo", persistPeriod=300)
	public String getName() {
		return this.name;
	}

	@ManagedOperation(description="Add two numbers")
	@ManagedOperationParameters({
		@ManagedOperationParameter(name = "x", description = "The first number"),
		@ManagedOperationParameter(name = "y", description = "The second number")})
	public int add(int x, int y) {
		return x + y;
	}

	public void dontExposeMe() {
		throw new RuntimeException();
	}
}
```

In the preceding example, you can see that the `AnnotationTestBean` class is marked with the `ManagedResource` annotation and that this `ManagedResource` annotation is configured with a set of properties. These properties can be used to configure various aspects of the MBean that is generated by the `MBeanExporter` and are explained in greater detail later in Source-level Metadata Types. Both the `age` and `name` properties are annotated with the `ManagedAttribute` annotation, but, in the case of the `age` property, only the getter is marked. This causes both of these properties to be included in the management interface as attributes, but the `age` attribute is read-only. Finally, the `add(int, int)` method is marked with the `ManagedOperation` attribute, whereas the `dontExposeMe()` method is not. This causes the management interface to contain only one operation (`add(int, int)`) when you use the `MetadataMBeanInfoAssembler`. The following configuration shows how you can configure the `MBeanExporter` to use the `MetadataMBeanInfoAssembler`:

```
<beans>
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
		<property name="assembler" ref="assembler"/>
		<property name="namingStrategy" ref="namingStrategy"/>
		<property name="autodetect" value="true"/>
	</bean>

	<bean id="jmxAttributeSource" class="org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource"/>

	<!-- will create management interface using annotation metadata -->
	<bean id="assembler" class="org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler">
		<property name="attributeSource" ref="jmxAttributeSource"/>
	</bean>

	<!-- will pick up the ObjectName from the annotation -->
	<bean id="namingStrategy" class="org.springframework.jmx.export.naming.MetadataNamingStrategy">
		<property name="attributeSource" ref="jmxAttributeSource"/>
	</bean>

	<bean id="testBean" class="org.springframework.jmx.AnnotationTestBean">
		<property name="name" value="TEST"/>
		<property name="age" value="100"/>
	</bean>
</beans>
```

In the preceding example, a `MetadataMBeanInfoAssembler` bean has been configured with an instance of the `AnnotationJmxAttributeSource` class and passed to the `MBeanExporter` through the `assembler` property. This is all that is required to take advantage of metadata-driven management interfaces for your Spring-exposed MBeans.
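If you prefer Java configuration over the preceding XML, the same wiring can be expressed with `@Bean` methods. The following is a minimal sketch under that assumption (the configuration class name is illustrative; the Spring class names and constructors match those used in the XML above):

```
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jmx.export.MBeanExporter;
import org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource;
import org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler;
import org.springframework.jmx.export.naming.MetadataNamingStrategy;

@Configuration
public class JmxConfiguration {

	@Bean
	public AnnotationJmxAttributeSource jmxAttributeSource() {
		return new AnnotationJmxAttributeSource();
	}

	@Bean
	public MBeanExporter exporter() {
		MBeanExporter exporter = new MBeanExporter();
		// same assembler and naming strategy as in the XML configuration
		exporter.setAssembler(new MetadataMBeanInfoAssembler(jmxAttributeSource()));
		exporter.setNamingStrategy(new MetadataNamingStrategy(jmxAttributeSource()));
		exporter.setAutodetect(true);
		return exporter;
	}
}
```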
## Source-level Metadata Types

The following table describes the source-level metadata types that are available for use in Spring JMX:

| Purpose | Annotation | Annotation Type |
| --- | --- | --- |
| Mark all instances of a class as JMX managed resources. | `@ManagedResource` | Class |
| Mark a method as a JMX operation. | `@ManagedOperation` | Method |
| Mark a getter or setter as one half of a JMX attribute. | `@ManagedAttribute` | Method (only getters and setters) |
| Define descriptions for operation parameters. | `@ManagedOperationParameter` and `@ManagedOperationParameters` | Method |

The following table describes the configuration parameters that are available for use on these source-level metadata types:

| Parameter | Description | Applies to |
| --- | --- | --- |
| `objectName` | Used by `MetadataNamingStrategy` to determine the `ObjectName` of a managed resource. | `ManagedResource` |
| `description` | Sets the friendly description of the resource, attribute, or operation. | `ManagedResource`, `ManagedAttribute`, `ManagedOperation`, and `ManagedOperationParameter` |
| `currencyTimeLimit` | Sets the value of the `currencyTimeLimit` descriptor field. | `ManagedResource` and `ManagedAttribute` |
| `defaultValue` | Sets the value of the `defaultValue` descriptor field. | `ManagedAttribute` |
| `log` | Sets the value of the `log` descriptor field. | `ManagedResource` |
| `logFile` | Sets the value of the `logFile` descriptor field. | `ManagedResource` |
| `persistPolicy` | Sets the value of the `persistPolicy` descriptor field. | `ManagedResource` |
| `persistPeriod` | Sets the value of the `persistPeriod` descriptor field. | `ManagedResource` |
| `persistLocation` | Sets the value of the `persistLocation` descriptor field. | `ManagedResource` |
| `persistName` | Sets the value of the `persistName` descriptor field. | `ManagedResource` |

## The `AutodetectCapableMBeanInfoAssembler` Interface

To simplify configuration even further, Spring includes the `AutodetectCapableMBeanInfoAssembler` interface, which extends the `MBeanInfoAssembler` interface to add support for autodetection of MBean resources. If you configure the `MBeanExporter` with an instance of `AutodetectCapableMBeanInfoAssembler`, it is allowed to "vote" on the inclusion of beans for exposure to JMX. The only implementation of the `AutodetectCapableMBeanInfoAssembler` interface is the `MetadataMBeanInfoAssembler`, which votes to include any bean that is marked with the `ManagedResource` attribute. The default approach in this case is to use the bean name as the `ObjectName`, which results in a configuration similar to the following:

```
<beans>
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
		<!-- notice how no 'beans' are explicitly configured here -->
		<property name="autodetect" value="true"/>
		<property name="assembler" ref="assembler"/>
	</bean>

	<bean id="testBean" class="org.springframework.jmx.JmxTestBean">
		<property name="name" value="TEST"/>
		<property name="age" value="100"/>
	</bean>

	<bean id="assembler" class="org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler">
		<property name="attributeSource">
			<bean class="org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource"/>
		</property>
	</bean>
</beans>
```

Notice that, in the preceding configuration, no beans are passed to the `MBeanExporter`. However, the `JmxTestBean` is still registered, since it is marked with the `ManagedResource` attribute and the `MetadataMBeanInfoAssembler` detects this and votes to include it. The only problem with this approach is that the name of the `JmxTestBean` now has business meaning. You can address this issue by changing the default behavior for `ObjectName` creation as defined in Controlling `ObjectName` Instances for Your Beans.

## Defining Management Interfaces by Using Java Interfaces

In addition to the `MetadataMBeanInfoAssembler`, Spring also includes the `InterfaceBasedMBeanInfoAssembler`, which lets you constrain the methods and properties that are exposed based on the set of methods defined in a collection of interfaces. Although the standard mechanism for exposing MBeans is to use interfaces and a simple naming scheme, the `InterfaceBasedMBeanInfoAssembler` extends this functionality by removing the need for naming conventions, letting you use more than one interface, and removing the need for your beans to implement the MBean interfaces. Consider the following interface, which is used to define a management interface for the `JmxTestBean` class that we showed earlier:

```
public interface IJmxTestBean {

	public int add(int x, int y);

	public long myOperation();

	public int getAge();

	public void setAge(int age);

	public void setName(String name);

	public String getName();
}
```

This interface defines the methods and properties that are exposed as operations and attributes on the JMX MBean.
The following code shows how to configure Spring JMX to use this interface as the definition for the management interface:

```
<beans>
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
		<property name="beans">
			<map>
				<entry key="bean:name=testBean5" value-ref="testBean"/>
			</map>
		</property>
		<property name="assembler">
			<bean class="org.springframework.jmx.export.assembler.InterfaceBasedMBeanInfoAssembler">
				<property name="managedInterfaces">
					<value>org.springframework.jmx.IJmxTestBean</value>
				</property>
			</bean>
		</property>
	</bean>
</beans>
```

In the preceding example, the `InterfaceBasedMBeanInfoAssembler` is configured to use the `IJmxTestBean` interface when constructing the management interface for any bean. It is important to understand that beans processed by the `InterfaceBasedMBeanInfoAssembler` are not required to implement the interface used to generate the JMX management interface. In the preceding case, the `IJmxTestBean` interface is used to construct all management interfaces for all beans. In many cases, this is not the desired behavior, and you may want to use different interfaces for different beans. In this case, you can pass a `Properties` instance through the `interfaceMappings` property, where the key of each entry is the bean name and the value of each entry is a comma-separated list of interface names to use for that bean. If no management interface is specified through either the `managedInterfaces` or `interfaceMappings` properties, the `InterfaceBasedMBeanInfoAssembler` reflects on the bean and uses all of the interfaces implemented by that bean to create the management interface.

## Using `MethodNameBasedMBeanInfoAssembler`

The `MethodNameBasedMBeanInfoAssembler` lets you specify a list of method names that are exposed to JMX as attributes and operations. The following code shows a sample configuration:

```
<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
	<property name="beans">
		<map>
			<entry key="bean:name=testBean5" value-ref="testBean"/>
		</map>
	</property>
	<property name="assembler">
		<bean class="org.springframework.jmx.export.assembler.MethodNameBasedMBeanInfoAssembler">
			<property name="managedMethods">
				<value>add,myOperation,getName,setName,getAge</value>
			</property>
		</bean>
	</property>
</bean>
```

In the preceding example, you can see that the `add` and `myOperation` methods are exposed as JMX operations, and `getName()`, `setName(String)`, and `getAge()` are exposed as the appropriate half of a JMX attribute. In the preceding code, the method mappings apply to all beans that are exposed to JMX. To control method exposure on a bean-by-bean basis, you can use the `methodMappings` property of `MethodNameBasedMBeanInfoAssembler` to map bean names to lists of method names.

# Controlling `ObjectName` Instances for Your Beans

Behind the scenes, the `MBeanExporter` delegates to an implementation of the `ObjectNamingStrategy` interface to obtain an `ObjectName` instance for each of the beans it registers. The default implementation, `KeyNamingStrategy`, uses the key of the `beans` `Map` as the `ObjectName`. In addition, the `KeyNamingStrategy` can map the key of the `beans` `Map` to an entry in a `Properties` file (or files) to resolve the `ObjectName`. In addition to the `KeyNamingStrategy`, Spring provides two additional `ObjectNamingStrategy` implementations: the `IdentityNamingStrategy` (which builds an `ObjectName` based on the JVM identity of the bean) and the `MetadataNamingStrategy` (which uses source-level metadata to obtain the `ObjectName`).
## Reading `ObjectName` Instances from Properties

You can configure your own `KeyNamingStrategy` instance and configure it to read `ObjectName` instances from a `Properties` instance rather than use a bean key. The `KeyNamingStrategy` tries to locate an entry in the `Properties` with a key that corresponds to the bean key. If no entry is found or if the `Properties` instance is `null`, the bean key itself is used. The following code shows a sample configuration for the `KeyNamingStrategy`:

```
<bean id="namingStrategy" class="org.springframework.jmx.export.naming.KeyNamingStrategy">
	<property name="mappings">
		<props>
			<prop key="testBean">bean:name=testBean1</prop>
		</props>
	</property>
	<property name="mappingLocations">
		<value>names1.properties,names2.properties</value>
	</property>
</bean>
```

The preceding example configures an instance of `KeyNamingStrategy` with a `Properties` instance that is merged from the `Properties` instance defined by the `mappings` property and the properties files located in the paths defined by the `mappingLocations` property. In this configuration, the `testBean` bean is given an `ObjectName` of `bean:name=testBean1`, since this is the entry in the `Properties` instance that has a key corresponding to the bean key. If no entry in the `Properties` instance can be found, the bean key name is used as the `ObjectName`.

## Using `MetadataNamingStrategy`

The `MetadataNamingStrategy` uses the `objectName` property of the `ManagedResource` attribute on each bean to create the `ObjectName`. The following code shows the configuration for the `MetadataNamingStrategy`:

```
<bean id="namingStrategy" class="org.springframework.jmx.export.naming.MetadataNamingStrategy">
	<property name="attributeSource" ref="attributeSource"/>
</bean>

<bean id="attributeSource" class="org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource"/>
```

If no `objectName` has been provided for the `ManagedResource` attribute, an `ObjectName` is created with the following format: `[fully-qualified-package-name]:type=[short-classname],name=[bean-name]`. For example, the generated `ObjectName` for the following bean would be `com.example:type=MyClass,name=myBean`:

```
<bean id="myBean" class="com.example.MyClass"/>
```

## Configuring Annotation-based MBean Export

If you prefer to use the annotation-based approach to define your management interfaces, a convenience subclass of `MBeanExporter` is available: `AnnotationMBeanExporter`. When defining an instance of this subclass, you no longer need the `namingStrategy`, `assembler`, and `attributeSource` configuration, since it always uses standard Java annotation-based metadata (autodetection is always enabled as well). In fact, rather than defining an `MBeanExporter` bean, an even simpler syntax is supported: the `@EnableMBeanExport` annotation on a `@Configuration` class. If you prefer XML-based configuration, the `<context:mbean-export/>` element serves the same purpose. If necessary, you can provide a reference to a particular MBean `server`, and the `defaultDomain` attribute (a property of `AnnotationMBeanExporter`) accepts an alternate value for the generated MBean `ObjectName` domains.
This is used in place of the fully qualified package name, as described in the previous section on `MetadataNamingStrategy`, as the following example shows:

```
@EnableMBeanExport(server="myMBeanServer", defaultDomain="myDomain")
@Configuration
public class ContextConfiguration {
}
```

The following example shows the XML equivalent of the preceding annotation-based example:

```
<context:mbean-export server="myMBeanServer" default-domain="myDomain"/>
```

Do not use interface-based AOP proxies in combination with autodetection of JMX annotations in your bean classes. Interface-based proxies "hide" the target class, which also hides the JMX-managed resource annotations. Hence, you should use target-class proxies in that case (through setting the 'proxy-target-class' flag on your AOP configuration). Otherwise, your JMX beans might be silently ignored at startup.

For remote access, the Spring JMX module offers two `FactoryBean` implementations inside the `org.springframework.jmx.support` package for creating both server- and client-side connectors.

## Server-side Connectors

To have Spring JMX create, start, and expose a JSR-160 `JMXConnectorServer`, you can use the following configuration:

```
<bean id="serverConnector" class="org.springframework.jmx.support.ConnectorServerFactoryBean"/>
```

By default, the `ConnectorServerFactoryBean` creates a `JMXConnectorServer` bound to `service:jmx:jmxmp://localhost:9875`. The `serverConnector` bean thus exposes the local `MBeanServer` to clients through the JMXMP protocol on localhost, port 9875. Note that the JMXMP protocol is marked as optional by the JSR-160 specification. Currently, the main open-source JMX implementation, MX4J, and the one provided with the JDK do not support JMXMP. To specify another URL and register the `JMXConnectorServer` itself with the `MBeanServer`, you can use the `serviceUrl` and `ObjectName` properties, respectively. If the `ObjectName` property is set, Spring automatically registers your connector with the `MBeanServer` under that `ObjectName`. The following example shows the full set of parameters that you can pass to the `ConnectorServerFactoryBean` when creating a `JMXConnector`:

```
<bean id="serverConnector" class="org.springframework.jmx.support.ConnectorServerFactoryBean">
	<property name="objectName" value="connector:name=iiop"/>
	<property name="serviceUrl"
			value="service:jmx:iiop://localhost/jndi/iiop://localhost:900/myconnector"/>
	<property name="threaded" value="true"/>
	<property name="daemon" value="true"/>
	<property name="environment">
		<map>
			<entry key="someKey" value="someValue"/>
		</map>
	</property>
</bean>
```

Note that, when you use an RMI-based connector, you need the lookup service (`tnameserv` or `rmiregistry`) to be started in order for the name registration to complete.

## Client-side Connectors

To create an `MBeanServerConnection` to a remote JSR-160-enabled `MBeanServer`, you can use the `MBeanServerConnectionFactoryBean`, as the following example shows:

```
<bean id="clientConnector" class="org.springframework.jmx.support.MBeanServerConnectionFactoryBean">
	<property name="serviceUrl" value="service:jmx:rmi://localhost/jndi/rmi://localhost:1099/jmxrmi"/>
</bean>
```

## JMX over Hessian or SOAP

JSR-160 permits extensions to the way in which communication is done between the client and the server. The examples shown in the preceding sections use the mandatory RMI-based implementation required by the JSR-160 specification (IIOP and JRMP) and the (optional) JMXMP. By using other providers or JMX implementations (such as MX4J), you can take advantage of protocols such as SOAP or Hessian over simple HTTP or SSL, among others.
See the official MX4J documentation for more information.

Spring JMX lets you create proxies that re-route calls to MBeans that are registered in a local or remote `MBeanServer`. These proxies provide you with a standard Java interface, through which you can interact with your MBeans. You can configure such a proxy for an MBean running in a local `MBeanServer` by using the `MBeanProxyFactoryBean`: the proxy is created for the MBean registered under a given `ObjectName` (for example, `bean:name=testBean`). The set of interfaces that the proxy implements is controlled by the `proxyInterfaces` property, and the rules for mapping methods and properties on these interfaces to operations and attributes on the MBean are the same rules used by the `InterfaceBasedMBeanInfoAssembler`.

The `MBeanProxyFactoryBean` can create a proxy to any MBean that is accessible through an `MBeanServerConnection`. By default, the local `MBeanServer` is located and used, but you can override this and provide an `MBeanServerConnection` that points to a remote `MBeanServer` to cater for proxies that point to remote MBeans:

```
<bean id="clientConnector" class="org.springframework.jmx.support.MBeanServerConnectionFactoryBean">
	<property name="serviceUrl" value="service:jmx:rmi://remotehost:9875"/>
</bean>
```

In this case, the `MBeanServerConnectionFactoryBean` creates an `MBeanServerConnection` that points to a remote machine. This is then passed to the `MBeanProxyFactoryBean` through the `server` property. The proxy that is created forwards all invocations to the `MBeanServer` through this `MBeanServerConnection`.

Spring's JMX offering includes comprehensive support for JMX notifications.

## Registering Listeners for Notifications

Spring's JMX support makes it easy to register any number of `NotificationListeners` with any number of MBeans (this includes MBeans exported by Spring's `MBeanExporter` and MBeans registered through some other mechanism). For example, consider the scenario where one would like to be informed (through a `Notification`) each and every time an attribute of a target MBean changes. The following example writes notifications to the console:

```
package com.example;

import javax.management.AttributeChangeNotification;
import javax.management.Notification;
import javax.management.NotificationFilter;
import javax.management.NotificationListener;

public class ConsoleLoggingNotificationListener implements NotificationListener, NotificationFilter {

	public void handleNotification(Notification notification, Object handback) {
		System.out.println(notification);
		System.out.println(handback);
	}

	public boolean isNotificationEnabled(Notification notification) {
		return AttributeChangeNotification.class.isAssignableFrom(notification.getClass());
	}
}
```

The following example adds `ConsoleLoggingNotificationListener` (defined in the preceding example) to `notificationListenerMappings`:

```
<beans>
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
		<property name="beans">
			<map>
				<entry key="bean:name=testBean1" value-ref="testBean"/>
			</map>
		</property>
		<property name="notificationListenerMappings">
			<map>
				<entry key="bean:name=testBean1">
					<bean class="com.example.ConsoleLoggingNotificationListener"/>
				</entry>
			</map>
		</property>
	</bean>
</beans>
```

With the preceding configuration in place, every time a JMX `Notification` is broadcast from the target MBean (`bean:name=testBean1`), the bean that was registered as a listener through the `notificationListenerMappings` property is notified. The bean can then take whatever action it deems appropriate in response to the `Notification`.
You can also use straight bean names as the link between exported beans and listeners, as the following example shows:

```
<beans>
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
		<property name="beans">
			<map>
				<entry key="bean:name=testBean1" value-ref="testBean"/>
			</map>
		</property>
		<property name="notificationListenerMappings">
			<map>
				<entry key="testBean">
					<bean class="com.example.ConsoleLoggingNotificationListener"/>
				</entry>
			</map>
		</property>
	</bean>
</beans>
```

If you want to register a single `NotificationListener` instance for all of the beans that the enclosing `MBeanExporter` exports, you can use the special wildcard (`*`) as the key for an entry in the `notificationListenerMappings` property map, as the following example shows:

```
<property name="notificationListenerMappings">
	<map>
		<entry key="*">
			<bean class="com.example.ConsoleLoggingNotificationListener"/>
		</entry>
	</map>
</property>
```

If you need to do the inverse (that is, register a number of distinct listeners against an MBean), you must instead use the `notificationListeners` list property (in preference to the `notificationListenerMappings` property). This time, instead of configuring a `NotificationListener` for a single MBean, we configure `NotificationListenerBean` instances. A `NotificationListenerBean` encapsulates a `NotificationListener` and the `ObjectName` (or `ObjectNames`) that it is to be registered against in an `MBeanServer`. The `NotificationListenerBean` also encapsulates a number of other properties, such as a `NotificationFilter` and an arbitrary handback object that can be used in advanced JMX notification scenarios. The configuration when using `NotificationListenerBean` instances is not wildly different from what was presented previously, as the following example shows:

```
<beans>
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
		<property name="beans">
			<map>
				<entry key="bean:name=testBean1" value-ref="testBean"/>
			</map>
		</property>
		<property name="notificationListeners">
			<list>
				<bean class="org.springframework.jmx.export.NotificationListenerBean">
					<constructor-arg>
						<bean class="com.example.ConsoleLoggingNotificationListener"/>
					</constructor-arg>
					<property name="mappedObjectNames">
						<list>
							<value>bean:name=testBean1</value>
						</list>
					</property>
				</bean>
			</list>
		</property>
	</bean>
</beans>
```

The preceding example is equivalent to the first notification example. Assume, then, that we want to be given a handback object every time a `Notification` is raised and that we also want to filter out extraneous `Notifications` by supplying a `NotificationFilter`.
The following example accomplishes these goals:

```
<beans>
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
		<property name="beans">
			<map>
				<entry key="bean:name=testBean1" value-ref="testBean1"/>
				<entry key="bean:name=testBean2" value-ref="testBean2"/>
			</map>
		</property>
		<property name="notificationListeners">
			<list>
				<bean class="org.springframework.jmx.export.NotificationListenerBean">
					<constructor-arg ref="customerNotificationListener"/>
					<property name="mappedObjectNames">
						<list>
							<!-- handles notifications from two distinct MBeans -->
							<value>bean:name=testBean1</value>
							<value>bean:name=testBean2</value>
						</list>
					</property>
					<property name="handback">
						<bean class="java.lang.String">
							<constructor-arg value="This could be anything..."/>
						</bean>
					</property>
					<property name="notificationFilter" ref="customerNotificationListener"/>
				</bean>
			</list>
		</property>
	</bean>

	<!-- implements both the NotificationListener and NotificationFilter interfaces -->
	<bean id="customerNotificationListener" class="com.example.ConsoleLoggingNotificationListener"/>

	<bean id="testBean1" class="org.springframework.jmx.JmxTestBean">
		<property name="name" value="TEST"/>
		<property name="age" value="100"/>
	</bean>

	<bean id="testBean2" class="org.springframework.jmx.JmxTestBean">
		<property name="name" value="ANOTHER TEST"/>
		<property name="age" value="200"/>
	</bean>
</beans>
```

(For a full discussion of what a handback object is and, indeed, what a `NotificationFilter` is, see the section of the JMX specification (1.2) entitled 'The JMX Notification Model'.)

## Publishing Notifications

Spring provides support not only for registering to receive `Notifications` but also for publishing `Notifications`. Note that this section is really only relevant to Spring-managed beans that have been exposed as MBeans through an `MBeanExporter`.

The key interface in Spring's JMX notification publication support is the `NotificationPublisher` interface (defined in the `org.springframework.jmx.export.notification` package). Any bean that is going to be exported as an MBean through an `MBeanExporter` instance can implement the related `NotificationPublisherAware` interface to gain access to a `NotificationPublisher` instance. The `NotificationPublisherAware` interface supplies an instance of a `NotificationPublisher` to the implementing bean through a simple setter method, which the bean can then use to publish `Notifications`. As stated in the javadoc of the `NotificationPublisher` interface, managed beans that publish events through the `NotificationPublisher` mechanism are not responsible for the state management of notification listeners. Spring's JMX support takes care of handling all the JMX infrastructure issues. All you need to do, as an application developer, is implement the `NotificationPublisherAware` interface and start publishing events by using the supplied `NotificationPublisher` instance. Note that the `NotificationPublisher` is set after the managed bean has been registered with an `MBeanServer`. Using a `NotificationPublisher` instance is quite straightforward. You create a JMX `Notification` instance (or an instance of an appropriate `Notification` subclass), populate the notification with the data pertinent to the event that is to be published, and invoke the `sendNotification(Notification)` method on the `NotificationPublisher` instance, passing in the `Notification`.
In the following example, exported instances of the `JmxTestBean` publish a `NotificationEvent` every time the `add(int, int)` operation is invoked:

```
import org.springframework.jmx.export.notification.NotificationPublisherAware;
import org.springframework.jmx.export.notification.NotificationPublisher;
import javax.management.Notification;

public class JmxTestBean implements IJmxTestBean, NotificationPublisherAware {

	private NotificationPublisher publisher;

	// other getters and setters omitted for clarity

	public int add(int x, int y) {
		int answer = x + y;
		this.publisher.sendNotification(new Notification("add", this, 0));
		return answer;
	}

	public void setNotificationPublisher(NotificationPublisher notificationPublisher) {
		this.publisher = notificationPublisher;
	}
}
```

The `NotificationPublisherAware` interface and the machinery to get it all working is one of the nicer features of Spring's JMX support. It does, however, come with the price tag of coupling your classes to both Spring and JMX. As always, the advice here is to be pragmatic. If you need the functionality offered by the `NotificationPublisher` and you can accept the coupling to both Spring and JMX, then do so.

This section contains links to further resources about JMX:

* The JMX homepage at Oracle.
* The JMX specification (JSR-000003).
* The JMX Remote API specification (JSR-000160).
* The MX4J homepage. (MX4J is an open-source implementation of various JMX specs.)

# Email

This section describes how to send email with the Spring Framework. The Spring Framework provides a helpful utility library for sending email that shields you from the specifics of the underlying mailing system and is responsible for low-level resource handling on behalf of the client. The `org.springframework.mail` package is the root-level package for the Spring Framework's email support. The central interface for sending emails is the `MailSender` interface. A simple value object that encapsulates the properties of a simple mail, such as `from` and `to` (plus many others), is the `SimpleMailMessage` class. This package also contains a hierarchy of checked exceptions that provide a higher level of abstraction over the lower-level mail system exceptions, with the root exception being `MailException`. See the javadoc for more information on the rich mail exception hierarchy.

The `org.springframework.mail.javamail.JavaMailSender` interface adds specialized JavaMail features, such as MIME message support, to the `MailSender` interface (from which it inherits). `JavaMailSender` also provides a callback interface called `org.springframework.mail.javamail.MimeMessagePreparator` for preparing a `MimeMessage`.

## Usage

Assume that we have a business interface called `OrderManager`, as the following example shows:

```
public interface OrderManager {

	void placeOrder(Order order);
}
```

Further assume that we have a requirement stating that an email message with an order number needs to be generated and sent to a customer who placed the relevant order.

### Basic `MailSender` and `SimpleMailMessage` Usage

The following example shows how to use `MailSender` and `SimpleMailMessage` to send an email when someone places an order:

```
import org.springframework.mail.MailException;
import org.springframework.mail.MailSender;
import org.springframework.mail.SimpleMailMessage;

public class SimpleOrderManager implements OrderManager {

	private MailSender mailSender;
	private SimpleMailMessage templateMessage;

	public void setMailSender(MailSender mailSender) {
		this.mailSender = mailSender;
	}

	public void setTemplateMessage(SimpleMailMessage templateMessage) {
		this.templateMessage = templateMessage;
	}

	public void placeOrder(Order order) {

		// Do the business calculations...
		// Call the collaborators to persist the order...

		// Create a thread-safe "copy" of the template message and customize it
		SimpleMailMessage msg = new SimpleMailMessage(this.templateMessage);
		msg.setTo(order.getCustomer().getEmailAddress());
		msg.setText(
				"Dear " + order.getCustomer().getFirstName()
				+ order.getCustomer().getLastName()
				+ ", thank you for placing your order. Your order number is "
				+ order.getOrderNumber());
		try {
			this.mailSender.send(msg);
		}
		catch (MailException ex) {
			// simply log it and go on...
			System.err.println(ex.getMessage());
		}
	}
}
```

The following example shows the bean definitions for the preceding code:

```
<bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
	<property name="host" value="mail.mycompany.example"/>
</bean>

<!-- this is a template message that we can pre-load with default state -->
<bean id="templateMessage" class="org.springframework.mail.SimpleMailMessage">
	<property name="from" value="[email protected]"/>
	<property name="subject" value="Your order"/>
</bean>

<bean id="orderManager" class="com.mycompany.businessapp.support.SimpleOrderManager">
	<property name="mailSender" ref="mailSender"/>
	<property name="templateMessage" ref="templateMessage"/>
</bean>
```

### Using the `JavaMailSender` and the `MimeMessagePreparator`

This section describes another implementation of `OrderManager` that uses the `MimeMessagePreparator` callback interface. In the following example, the `mailSender` property is of type `JavaMailSender` so that we are able to use the JavaMail `MimeMessage` class:

```
import jakarta.mail.Message;
import jakarta.mail.MessagingException;
import jakarta.mail.internet.InternetAddress;
import jakarta.mail.internet.MimeMessage;

import org.springframework.mail.MailException;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.mail.javamail.MimeMessagePreparator;

public class SimpleOrderManager implements OrderManager {

	private JavaMailSender mailSender;

	public void setMailSender(JavaMailSender mailSender) {
		this.mailSender = mailSender;
	}

	public void placeOrder(final Order order) {

		// Do the business calculations...

		// Call the collaborators to persist the order...

		MimeMessagePreparator preparator = new MimeMessagePreparator() {

			public void prepare(MimeMessage mimeMessage) throws Exception {

				mimeMessage.setRecipient(Message.RecipientType.TO,
						new InternetAddress(order.getCustomer().getEmailAddress()));
				mimeMessage.setFrom(new InternetAddress("[email protected]"));
				mimeMessage.setText("Dear " + order.getCustomer().getFirstName() + " "
						+ order.getCustomer().getLastName()
						+ ", thanks for your order. "
						+ "Your order number is " + order.getOrderNumber() + ".");
			}
		};

		try {
			this.mailSender.send(preparator);
		}
		catch (MailException ex) {
			// simply log it and go on...
			System.err.println(ex.getMessage());
		}
	}
}
```

The mail code is a crosscutting concern and could well be a candidate for refactoring into a custom Spring AOP aspect, which could then be run at appropriate join points on the `OrderManager` target.

The Spring Framework's mail support ships with the standard JavaMail implementation. See the relevant javadoc for more information.

## Using the JavaMail `MimeMessageHelper`

A class that comes in pretty handy when dealing with JavaMail messages is `org.springframework.mail.javamail.MimeMessageHelper`, which shields you from having to use the verbose JavaMail API.
Using the `MimeMessageHelper`, it is pretty easy to create a `MimeMessage`, as the following example shows:

```
// of course you would use DI in any real-world cases
JavaMailSenderImpl sender = new JavaMailSenderImpl();
sender.setHost("mail.host.com");

MimeMessage message = sender.createMimeMessage();
MimeMessageHelper helper = new MimeMessageHelper(message);
helper.setTo("[email protected]");
helper.setText("Thank you for ordering!");

sender.send(message);
```

### Sending Attachments and Inline Resources

Multipart email messages allow for both attachments and inline resources. Examples of inline resources include an image or a stylesheet that you want to use in your message but that you do not want displayed as an attachment.

# Attachments

The following example shows you how to use the `MimeMessageHelper` to send an email with a single JPEG image attachment:

```
// use the true flag to indicate you need a multipart message
MimeMessageHelper helper = new MimeMessageHelper(message, true);
helper.setTo("[email protected]");
helper.setText("Check out this image!");

// let's attach the infamous windows Sample file (this time copied to c:/)
FileSystemResource file = new FileSystemResource(new File("c:/Sample.jpg"));
helper.addAttachment("CoolImage.jpg", file);

sender.send(message);
```

# Inline Resources

The following example shows you how to use the `MimeMessageHelper` to send an email with an inline image:

```
MimeMessageHelper helper = new MimeMessageHelper(message, true);
helper.setTo("[email protected]");

// use the true flag to indicate the text included is HTML
helper.setText("<html><body><img src='cid:identifier1234'></body></html>", true);

// let's include the infamous windows Sample file (this time copied to c:/)
FileSystemResource res = new FileSystemResource(new File("c:/Sample.jpg"));
helper.addInline("identifier1234", res);

sender.send(message);
```

Inline resources are added to the `MimeMessage` by using the specified `Content-ID` (`identifier1234` in the preceding example). The order in which you add the text and the resource is very important: be sure to add the text first and then the resources.

### Creating Email Content by Using a Templating Library

The code in the examples shown in the previous sections explicitly created the content of the email message, by using method calls such as `message.setText(..)`. This is fine for simple cases, and it is okay in the context of the aforementioned examples, where the intent was to show you the very basics of the API. In your typical enterprise application, though, developers often do not create the content of email messages by using the previously shown approach, for a number of reasons:

* Creating HTML-based email content in Java code is tedious and error prone.
* There is no clear separation between display logic and business logic.
* Changing the display structure of the email content requires writing Java code, recompiling, redeploying, and so on.

Typically, the approach taken to address these issues is to use a template library (such as FreeMarker) to define the display structure of email content. This leaves your code tasked only with creating the data that is to be rendered in the email template and sending the email. It is definitely a best practice when the content of your email messages becomes even moderately complex, and, with the Spring Framework's support classes for FreeMarker, it becomes quite easy to do.

The Spring Framework provides abstractions for the asynchronous execution and scheduling of tasks with the `TaskExecutor` and `TaskScheduler` interfaces, respectively. Spring also features implementations of those interfaces that support thread pools or delegation to CommonJ within an application server environment. Ultimately, the use of these implementations behind the common interfaces abstracts away the differences between Java SE and Jakarta EE environments. Spring also features integration classes to support scheduling with the Quartz Scheduler.

## The `TaskExecutor` Abstraction

Executors are the JDK name for the concept of thread pools.
The "executor" naming is due to the fact that there is no guarantee that the underlying implementation is actually a pool. An executor may be single-threaded or even synchronous. Spring's abstraction hides implementation details between the Java SE and Jakarta EE environments. Spring's `TaskExecutor` interface is identical to the `java.util.concurrent.Executor` interface. In fact, originally, its primary reason for existence was to abstract away the need for Java 5 when using thread pools. The interface has a single method (`execute(Runnable task)`) that accepts a task for execution based on the semantics and configuration of the thread pool. The `TaskExecutor` was originally created to give other Spring components an abstraction for thread pooling where needed. Components such as the `ApplicationEventMulticaster`, JMS's `AbstractMessageListenerContainer`, and Quartz integration all use the `TaskExecutor` abstraction to pool threads. However, if your beans need thread pooling behavior, you can also use this abstraction for your own needs.

## `TaskExecutor` Types

Spring includes a number of pre-built implementations of `TaskExecutor`. In all likelihood, you should never need to implement your own. The variants that Spring provides are as follows:

* `SyncTaskExecutor`: This implementation does not run invocations asynchronously. Instead, each invocation takes place in the calling thread. It is primarily used in situations where multi-threading is not necessary, such as in simple test cases.
* `SimpleAsyncTaskExecutor`: This implementation does not reuse any threads. Rather, it starts up a new thread for each invocation. However, it does support a concurrency limit that blocks any invocations that are over the limit until a slot has been freed up. If you are looking for true pooling, see `ThreadPoolTaskExecutor`, later in this list.
* `ConcurrentTaskExecutor`: This implementation is an adapter for a `java.util.concurrent.Executor` instance. There is an alternative (`ThreadPoolTaskExecutor`) that exposes the `Executor` configuration parameters as bean properties. There is rarely a need to use `ConcurrentTaskExecutor` directly. However, if the `ThreadPoolTaskExecutor` is not flexible enough for your needs, `ConcurrentTaskExecutor` is an alternative.
* `ThreadPoolTaskExecutor`: This implementation is most commonly used. It exposes bean properties for configuring a `java.util.concurrent.ThreadPoolExecutor` and wraps it in a `TaskExecutor`. If you need to adapt to a different kind of `java.util.concurrent.Executor`, we recommend that you use a `ConcurrentTaskExecutor` instead.
* `DefaultManagedTaskExecutor`: This implementation uses a JNDI-obtained `ManagedExecutorService` in a JSR-236 compatible runtime environment (such as a Jakarta EE application server), replacing a CommonJ WorkManager for that purpose.

## Using a `TaskExecutor`

Spring's `TaskExecutor` implementations are commonly used with dependency injection. In the following example, we define a bean that uses the `ThreadPoolTaskExecutor` to asynchronously print out a set of messages:

```
import org.springframework.core.task.TaskExecutor;

public class TaskExecutorExample {

	private class MessagePrinterTask implements Runnable {

		private String message;

		public MessagePrinterTask(String message) {
			this.message = message;
		}

		public void run() {
			System.out.println(message);
		}
	}

	private TaskExecutor taskExecutor;

	public TaskExecutorExample(TaskExecutor taskExecutor) {
		this.taskExecutor = taskExecutor;
	}

	public void printMessages() {
		for (int i = 0; i < 25; i++) {
			taskExecutor.execute(new MessagePrinterTask("Message" + i));
		}
	}
}
```

As you can see, rather than retrieving a thread from the pool and executing it yourself, you add your `Runnable` to the queue. Then the `TaskExecutor` uses its internal rules to decide when the task gets run.
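As a quick way to exercise this class outside a container (a sketch, not part of the reference example; the demo class name is made up), you can hand it any of the pre-built implementations, such as the test-friendly `SyncTaskExecutor`:

```
import org.springframework.core.task.SyncTaskExecutor;

public class TaskExecutorExampleDemo {

	public static void main(String[] args) {
		// SyncTaskExecutor runs each task in the calling thread, which keeps the demo deterministic
		TaskExecutorExample example = new TaskExecutorExample(new SyncTaskExecutor());
		example.printMessages();
	}
}
```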
To configure the rules that the `TaskExecutor` uses, we expose simple bean properties:

```
<bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
	<property name="corePoolSize" value="5"/>
	<property name="maxPoolSize" value="10"/>
	<property name="queueCapacity" value="25"/>
</bean>

<bean id="taskExecutorExample" class="TaskExecutorExample">
	<constructor-arg ref="taskExecutor"/>
</bean>
```

## The `TaskScheduler` Abstraction

In addition to the `TaskExecutor` abstraction, Spring has a `TaskScheduler` SPI with a variety of methods for scheduling tasks to run at some point in the future. The following listing shows the `TaskScheduler` interface definition:

```
public interface TaskScheduler {

	ScheduledFuture<?> schedule(Runnable task, Trigger trigger);

	ScheduledFuture<?> schedule(Runnable task, Instant startTime);

	ScheduledFuture<?> scheduleAtFixedRate(Runnable task, Instant startTime, Duration period);

	ScheduledFuture<?> scheduleAtFixedRate(Runnable task, Duration period);

	ScheduledFuture<?> scheduleWithFixedDelay(Runnable task, Instant startTime, Duration delay);

	ScheduledFuture<?> scheduleWithFixedDelay(Runnable task, Duration delay);
}
```

The simplest method is the one named `schedule` that takes only a `Runnable` and an `Instant`. That causes the task to run once after the specified time. All of the other methods are capable of scheduling tasks to run repeatedly. The fixed-rate and fixed-delay methods are for simple, periodic execution, but the method that accepts a `Trigger` is much more flexible.

## The `Trigger` Interface

The `Trigger` interface is essentially inspired by JSR-236. The basic idea of the `Trigger` is that execution times may be determined based on past execution outcomes or even arbitrary conditions. If these determinations take into account the outcome of the preceding execution, that information is available within a `TriggerContext`. The `Trigger` interface itself is quite simple, as the following listing shows:

```
public interface Trigger {

	Instant nextExecution(TriggerContext triggerContext);
}
```

The `TriggerContext` is the most important part. It encapsulates all of the relevant data and is open for extension in the future, if necessary. The `TriggerContext` is an interface (a `SimpleTriggerContext` implementation is used by default). The following listing shows the methods available to `Trigger` implementations through the `TriggerContext`:

```
public interface TriggerContext {

	Instant lastScheduledExecution();

	Instant lastActualExecution();

	Instant lastCompletion();
}
```

## `Trigger` Implementations

Spring provides two implementations of the `Trigger` interface. The most interesting one is the `CronTrigger`. It enables the scheduling of tasks based on cron expressions. For example, the following task is scheduled to run 15 minutes past each hour but only during the 9-to-5 "business hours" on weekdays:

```
scheduler.schedule(task, new CronTrigger("0 15 9-17 * * MON-FRI"));
```

The other implementation is a `PeriodicTrigger` that accepts a fixed period, an optional initial delay value, and a boolean to indicate whether the period should be interpreted as a fixed-rate or a fixed-delay. Since the `TaskScheduler` interface already defines methods for scheduling tasks at a fixed rate or with a fixed delay, those methods should be used directly whenever possible. The value of the `PeriodicTrigger` implementation is that you can use it within components that rely on the `Trigger` abstraction.
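For instance, a minimal sketch of constructing and using a `PeriodicTrigger` directly (assuming the `Duration`-based constructor and setters available in recent Spring versions; the scheduler setup and class name here are illustrative):

```
import java.time.Duration;

import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;
import org.springframework.scheduling.support.PeriodicTrigger;

public class PeriodicTriggerSketch {

	public static void main(String[] args) {
		// fire every five seconds at a fixed rate, after a one-second initial delay
		PeriodicTrigger trigger = new PeriodicTrigger(Duration.ofSeconds(5));
		trigger.setFixedRate(true);
		trigger.setInitialDelay(Duration.ofSeconds(1));

		ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
		scheduler.initialize();
		scheduler.schedule(() -> System.out.println("tick"), trigger);
	}
}
```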
For example, it may be convenient to allow periodic triggers, cron-based triggers, and even custom trigger implementations to be used interchangeably. Such a component could take advantage of dependency injection so that you can configure such `Triggers` externally and, therefore, easily modify or extend them.

## `TaskScheduler` Implementations

As with Spring's `TaskExecutor` abstraction, the primary benefit of the `TaskScheduler` arrangement is that an application's scheduling needs are decoupled from the deployment environment. This abstraction level is particularly relevant when deploying to an application server environment where threads should not be created directly by the application itself. For such scenarios, Spring provides a `DefaultManagedTaskScheduler` that delegates to a JSR-236 `ManagedScheduledExecutorService` in a Jakarta EE environment. Whenever external thread management is not a requirement, a simpler alternative is a local `ScheduledExecutorService` setup within the application, which can be adapted through Spring's `ConcurrentTaskScheduler`. As a convenience, Spring also provides a `ThreadPoolTaskScheduler`, which internally delegates to a `ScheduledThreadPoolExecutor` to provide common bean-style configuration along the lines of `ThreadPoolTaskExecutor`. These variants work perfectly fine for locally embedded thread pool setups in lenient application server environments as well, particularly on Tomcat and Jetty.

## Annotation Support for Scheduling and Asynchronous Execution

Spring provides annotation support for both task scheduling and asynchronous method execution.

### Enable Scheduling Annotations

To enable support for `@Scheduled` and `@Async` annotations, you can add `@EnableScheduling` and `@EnableAsync` to one of your `@Configuration` classes, as the following example shows:

```
@Configuration
@EnableAsync
@EnableScheduling
public class AppConfig {
}
```

You can pick and choose the relevant annotations for your application. For example, if you need only support for `@Scheduled`, you can omit `@EnableAsync`. For more fine-grained control, you can additionally implement the `SchedulingConfigurer` interface, the `AsyncConfigurer` interface, or both. See the `SchedulingConfigurer` and `AsyncConfigurer` javadoc for full details.

If you prefer XML configuration, you can use the `<task:annotation-driven>` element:

```
<task:annotation-driven executor="myExecutor" scheduler="myScheduler"/>
<task:executor id="myExecutor" pool-size="5"/>
<task:scheduler id="myScheduler" pool-size="10"/>
```

Note that, with the preceding XML, an executor reference is provided for handling those tasks that correspond to methods with the `@Async` annotation, and the scheduler reference is provided for managing those methods annotated with `@Scheduled`.

### The `@Scheduled` Annotation

You can add the `@Scheduled` annotation to a method, along with trigger metadata. For example, a method can be invoked every five seconds (5000 milliseconds) with a fixed delay, meaning that the period is measured from the completion time of each preceding invocation. If you need a fixed-rate execution, you can use the `fixedRate` attribute within the annotation; see the sketch after this paragraph for both variants.
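As a minimal sketch of the two variants just described (the method names are illustrative):

```
@Scheduled(fixedDelay = 5000)
public void doSomethingWithFixedDelay() {
	// runs five seconds after each preceding invocation completes
}

@Scheduled(fixedRate = 5000)
public void doSomethingAtFixedRate() {
	// runs every five seconds, measured between successive start times
}
```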
For fixed-delay and fixed-rate tasks, you can specify an initial delay by indicating the amount of time to wait before the first execution of the method, as the following `fixedRate` example shows:

```
@Scheduled(initialDelay = 1000, fixedRate = 5000)
public void doSomething() {
    // something that should run periodically
}
```

If simple periodic scheduling is not expressive enough, you can provide a cron expression. The following example runs every five seconds, but only on weekdays:

```
@Scheduled(cron="*/5 * * * * MON-FRI")
public void doSomething() {
    // something that should run on weekdays only
}
```

Notice that the methods to be scheduled must have void return types and must not accept any arguments. If the method needs to interact with other objects from the application context, those would typically have been provided through dependency injection.

`@Scheduled` can be used as a repeatable annotation. If several scheduled declarations are found on the same method, each of them is processed independently, with a separate trigger firing for each of them. As a consequence, such co-located schedules may overlap and execute multiple times in parallel or in immediate succession. Make sure that your specified cron expressions (and other triggers) do not accidentally overlap.

The `@Async` Annotation

You can provide the `@Async` annotation on a method so that invocation of that method occurs asynchronously. In other words, the caller returns immediately upon invocation, while the actual execution of the method occurs in a task that has been submitted to a Spring `TaskExecutor`. In the simplest case, you can apply the annotation to a method that returns `void`, as the following example shows:

```
@Async
void doSomething() {
    // this will be run asynchronously
}
```

Unlike the methods annotated with the `@Scheduled` annotation, these methods can expect arguments, because they are invoked in the “normal” way by callers at runtime rather than from a scheduled task being managed by the container. For example, the following code is a legitimate application of the `@Async` annotation:

```
@Async
void doSomething(String s) {
    // this will be run asynchronously
}
```

Even methods that return a value can be invoked asynchronously. However, such methods are required to have a `Future`-typed return value. This still provides the benefit of asynchronous execution so that the caller can perform other tasks prior to calling `get()` on that `Future`. The following example shows how to use `@Async` on a method that returns a value:

```
@Async
Future<String> returnSomething(int i) {
    // this will be run asynchronously
}
```

`@Async` methods may not only declare a regular `java.util.concurrent.Future` return type but also Spring’s `org.springframework.util.concurrent.ListenableFuture` or, as of Spring 4.2, JDK 8’s `java.util.concurrent.CompletableFuture`, for richer interaction with the asynchronous task and for immediate composition with further processing steps.

You cannot use `@Async` in conjunction with lifecycle callbacks such as `@PostConstruct`. To asynchronously initialize Spring beans, you currently have to use a separate initializing Spring bean that then invokes the `@Async` annotated method on the target, as the following example shows:

```
public class SampleBeanImpl implements SampleBean {

    @Async
    void doSomething() {
        // ...
    }
}
```
```
public class SampleBeanInitializer {

    private final SampleBean bean;

    public SampleBeanInitializer(SampleBean bean) {
        this.bean = bean;
    }

    @PostConstruct
    public void initialize() {
        bean.doSomething();
    }
}
```

There is no direct XML equivalent for `@Async`, since such methods should be designed for asynchronous execution in the first place, rather than being externally re-declared to be asynchronous.

### Executor Qualification with `@Async`

By default, when specifying `@Async` on a method, the executor that is used is the one configured when enabling async support, that is, the `<task:annotation-driven>` element if you are using XML or your `AsyncConfigurer` implementation, if any. However, you can use the `value` attribute of the `@Async` annotation when you need to indicate that an executor other than the default should be used when executing a given method. The following example shows how to do so:

```
@Async("otherExecutor")
void doSomething(String s) {
    // this will be run asynchronously by "otherExecutor"
}
```

In this case, `"otherExecutor"` can be the name of any `Executor` bean in the Spring container, or it may be the name of a qualifier associated with any `Executor` (for example, as specified with the `<qualifier>` element or Spring’s `@Qualifier` annotation).

### Exception Management with `@Async`

When an `@Async` method has a `Future`-typed return value, it is easy to manage an exception that was thrown during the method execution, as this exception is thrown when calling `get` on the `Future` result. With a `void` return type, however, the exception is uncaught and cannot be transmitted. You can provide an `AsyncUncaughtExceptionHandler` to handle such exceptions. The following example shows how to do so:

```
public class MyAsyncUncaughtExceptionHandler implements AsyncUncaughtExceptionHandler {

    @Override
    public void handleUncaughtException(Throwable ex, Method method, Object... params) {
        // handle exception
    }
}
```

By default, the exception is merely logged. You can define a custom `AsyncUncaughtExceptionHandler` by using `AsyncConfigurer` or the `<task:annotation-driven/>` XML element.

The `task` Namespace

As of version 3.0, Spring includes an XML namespace for configuring `TaskExecutor` and `TaskScheduler` instances. It also provides a convenient way to configure tasks to be scheduled with a trigger.

### The 'scheduler' Element

The following element creates a `ThreadPoolTaskScheduler` instance with the specified thread pool size:

```
<task:scheduler id="scheduler" pool-size="10"/>
```

The value provided for the `id` attribute is used as the prefix for thread names within the pool. The `scheduler` element is relatively straightforward. If you do not provide a `pool-size` attribute, the default thread pool has only a single thread. There are no other configuration options for the scheduler.

### The 'executor' Element

The following element creates a `ThreadPoolTaskExecutor` instance:

```
<task:executor id="executor" pool-size="10"/>
```

As with the scheduler shown in the previous section, the value provided for the `id` attribute is used as the prefix for thread names within the pool. As far as the pool size is concerned, the `executor` element supports more configuration options than the `scheduler` element. For one thing, the thread pool for a `ThreadPoolTaskExecutor` is itself more configurable. Rather than only a single size, an executor’s thread pool can have different values for the core and the max size. If you provide a single value, the executor has a fixed-size thread pool (the core and max sizes are the same). However, the `executor` element’s `pool-size` attribute also accepts a range in the form of `min-max`.
The following example sets a minimum value of `5` and a maximum value of `25`:

```
<task:executor id="executorWithPoolSizeRange" pool-size="5-25" queue-capacity="100"/>
```

In the preceding configuration, a `queue-capacity` value has also been provided. The configuration of the thread pool should also be considered in light of the executor’s queue capacity. For the full description of the relationship between pool size and queue capacity, see the documentation for `ThreadPoolExecutor`. The main idea is that, when a task is submitted, the executor first tries to use a free thread if the number of active threads is currently less than the core size. If the core size has been reached, the task is added to the queue, as long as its capacity has not yet been reached. Only then, if the queue’s capacity has been reached, does the executor create a new thread beyond the core size. If the max size has also been reached, then the executor rejects the task.

By default, the queue is unbounded, but this is rarely the desired configuration, because it can lead to `OutOfMemoryErrors` if enough tasks are added to that queue while all pool threads are busy. Furthermore, if the queue is unbounded, the max size has no effect at all. Since the executor always tries the queue before creating a new thread beyond the core size, a queue must have a finite capacity for the thread pool to grow beyond the core size (this is why a fixed-size pool is the only sensible case when using an unbounded queue).

Consider the case, as mentioned above, when a task is rejected. By default, when a task is rejected, a thread pool executor throws a `TaskRejectedException`. However, the rejection policy is actually configurable. The exception is thrown when using the default rejection policy, which is the `AbortPolicy` implementation. For applications where some tasks can be skipped under heavy load, you can instead configure either `DiscardPolicy` or `DiscardOldestPolicy`. Another option that works well for applications that need to throttle the submitted tasks under heavy load is the `CallerRunsPolicy`. Instead of throwing an exception or discarding tasks, that policy forces the thread that is calling the submit method to run the task itself. The idea is that such a caller is busy while running that task and not able to submit other tasks immediately. Therefore, it provides a simple way to throttle the incoming load while maintaining the limits of the thread pool and queue. Typically, this allows the executor to “catch up” on the tasks it is handling and thereby frees up some capacity on the queue, in the pool, or both. You can choose any of these options from an enumeration of values available for the `rejection-policy` attribute on the `executor` element.

The following example shows an `executor` element with a number of attributes to specify various behaviors:

```
<task:executor
        id="executorWithCallerRunsPolicy"
        pool-size="5-25"
        queue-capacity="100"
        rejection-policy="CALLER_RUNS"/>
```

Finally, the `keep-alive` setting determines the time limit (in seconds) for which threads may remain idle before being stopped. If there are more than the core number of threads currently in the pool, after waiting this amount of time without processing a task, excess threads get stopped. A time value of zero causes excess threads to stop immediately after executing a task, if no follow-up work remains in the task queue.
The following example sets the `keep-alive` value to two minutes:

```
<task:executor id="executorWithKeepAlive" pool-size="5-25" keep-alive="120"/>
```

### The 'scheduled-tasks' Element

The most powerful feature of Spring’s task namespace is the support for configuring tasks to be scheduled within a Spring Application Context. This follows an approach similar to other “method-invokers” in Spring, such as that provided by the JMS namespace for configuring message-driven POJOs. Basically, a `ref` attribute can point to any Spring-managed object, and the `method` attribute provides the name of a method to be invoked on that object. The following listing shows a simple example:

```
<task:scheduled-tasks scheduler="myScheduler">
    <task:scheduled ref="beanA" method="methodA" fixed-delay="5000"/>
</task:scheduled-tasks>
```

The scheduler is referenced by the outer element, and each individual task includes the configuration of its trigger metadata. In the preceding example, that metadata defines a periodic trigger with a fixed delay indicating the number of milliseconds to wait after each task execution has completed. Another option is `fixed-rate`, indicating how often the method should be run regardless of how long any previous execution takes. Additionally, for both `fixed-delay` and `fixed-rate` tasks, you can specify an `initial-delay` parameter, indicating the number of milliseconds to wait before the first execution of the method. For more control, you can instead provide a `cron` attribute with a cron expression. The following example shows these other options:

```
<task:scheduled-tasks scheduler="myScheduler">
    <task:scheduled ref="beanA" method="methodA" fixed-delay="5000" initial-delay="1000"/>
    <task:scheduled ref="beanB" method="methodB" fixed-rate="5000"/>
    <task:scheduled ref="beanC" method="methodC" cron="*/5 * * * * MON-FRI"/>
</task:scheduled-tasks>
```

## Cron Expressions

All Spring cron expressions have to conform to the same format, whether you are using them in `@Scheduled` annotations, `task:scheduled-tasks` elements, or someplace else. A well-formed cron expression, such as `* * * * * *`, consists of six space-separated time and date fields, each with its own range of valid values:

```
┌───────────── second (0-59)
│ ┌───────────── minute (0 - 59)
│ │ ┌───────────── hour (0 - 23)
│ │ │ ┌───────────── day of the month (1 - 31)
│ │ │ │ ┌───────────── month (1 - 12) (or JAN-DEC)
│ │ │ │ │ ┌───────────── day of the week (0 - 7)
│ │ │ │ │ │              (0 or 7 is Sunday, or MON-SUN)
│ │ │ │ │ │
* * * * * *
```

The following rules apply:

* A field may be an asterisk (`*`), which always stands for “first-last”. For the day-of-the-month or day-of-the-week fields, a question mark (`?`) may be used instead of an asterisk.
* Commas (`,`) are used to separate items of a list.
* Two numbers separated with a hyphen (`-`) express a range of numbers. The specified range is inclusive.
* Following a range (or `*`) with `/` specifies the interval of the number’s value through the range.
* English names can also be used for the month and day-of-week fields. Use the first three letters of the particular day or month (case does not matter).
* The day-of-month and day-of-week fields can contain an `L` character, which has a different meaning in each. In the day-of-month field, `L` stands for the last day of the month. If followed by a negative offset (that is, `L-n`), it means the `n`th-to-last day of the month.
* In the day-of-week field, `L` stands for the last day of the week.
  If prefixed by a number or three-letter name (`dL` or `DDDL`), it means the last day of week `d` (or `DDD`) in the month.
* The day-of-month field can be `nW`, which stands for the nearest weekday to day of the month `n`. If `n` falls on Saturday, this yields the Friday before it. If `n` falls on Sunday, this yields the Monday after, which also happens if `n` is `1` and falls on a Saturday (that is: `1W` stands for the first weekday of the month).
* If the day-of-month field is `LW`, it means the last weekday of the month.
* The day-of-week field can be `d#n` (or `DDD#n`), which stands for the `n`th day of week `d` (or `DDD`) in the month.

Here are some examples:

| Cron Expression | Meaning |
| --- | --- |
| `0 0 * * * *` | Top of every hour of every day |
| `*/10 * * * * *` | Every ten seconds |
| `0 0 8-10 * * *` | 8, 9 and 10 o'clock of every day |
| `0 0 6,19 * * *` | 6:00 AM and 7:00 PM every day |
| `0 0/30 8-10 * * *` | 8:00, 8:30, 9:00, 9:30, 10:00 and 10:30 every day |
| `0 0 9-17 * * MON-FRI` | On the hour, nine-to-five on weekdays |
| `0 0 0 25 DEC ?` | Every Christmas Day at midnight |
| `0 0 0 L * *` | Last day of the month at midnight |
| `0 0 0 L-3 * *` | Third-to-last day of the month at midnight |
| `0 0 0 * * 5L` | Last Friday of the month at midnight |
| `0 0 0 1W * *` | First weekday of the month at midnight |
| `0 0 0 ? * 5#2` | The second Friday in the month at midnight |

Expressions such as `0 0 * * * *` are hard for humans to parse and are, therefore, hard to fix in case of bugs. To improve readability, Spring supports the following macros, which represent commonly used sequences. You can use these macros instead of the six-field value, thus:

```
@Scheduled(cron = "@hourly")
```

| Macro | Meaning |
| --- | --- |
| `@yearly` (or `@annually`) | Once a year (`0 0 0 1 1 *`) |
| `@monthly` | Once a month (`0 0 0 1 * *`) |
| `@weekly` | Once a week (`0 0 0 * * 0`) |
| `@daily` (or `@midnight`) | Once a day (`0 0 0 * * *`) |
| `@hourly` | Once an hour (`0 0 * * * *`) |

## Using the Quartz Scheduler

Quartz uses `Trigger`, `Job`, and `JobDetail` objects to realize scheduling of all kinds of jobs. For the basic concepts behind Quartz, see the Quartz web site. For convenience purposes, Spring offers a couple of classes that simplify using Quartz within Spring-based applications.

Using `JobDetailFactoryBean`

Quartz `JobDetail` objects contain all the information needed to run a job. Spring provides a `JobDetailFactoryBean`, which provides bean-style properties for XML configuration purposes. Consider the following example:

```
<bean name="exampleJob" class="org.springframework.scheduling.quartz.JobDetailFactoryBean">
    <property name="jobClass" value="example.ExampleJob"/>
    <property name="jobDataAsMap">
        <map>
            <entry key="timeout" value="5"/>
        </map>
    </property>
</bean>
```

The job detail configuration has all the information it needs to run the job (`ExampleJob`). The timeout is specified in the job data map. The job data map is available through the `JobExecutionContext` (passed to you at execution time), but the `JobDetail` also gets its properties from the job data mapped to properties of the job instance. So, in the following example, the `ExampleJob` contains a bean property named `timeout`, and the `JobDetail` has it applied automatically:

```
public class ExampleJob extends QuartzJobBean {

    private int timeout;

    /**
     * Setter called after the ExampleJob is instantiated
     * with the value from the JobDetailFactoryBean.
     */
    public void setTimeout(int timeout) {
        this.timeout = timeout;
    }

    protected void executeInternal(JobExecutionContext ctx) throws JobExecutionException {
        // do the actual work
    }
}
```

All additional properties from the job data map are available to you as well.

Often you merely need to invoke a method on a specific object.
By using the `MethodInvokingJobDetailFactoryBean`, you can do exactly this, as the following example shows:

```
<bean id="jobDetail" class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
    <property name="targetObject" ref="exampleBusinessObject"/>
    <property name="targetMethod" value="doIt"/>
</bean>
```

The preceding example results in the `doIt` method being called on the `exampleBusinessObject` bean, as the following example shows:

```
public class ExampleBusinessObject {

    // properties and collaborators

    public void doIt() {
        // do the actual work
    }
}
```

```
<bean id="exampleBusinessObject" class="examples.ExampleBusinessObject"/>
```

By using the `MethodInvokingJobDetailFactoryBean`, you need not create one-line jobs that merely invoke a method. You need only create the actual business object and wire up the detail object.

By default, Quartz Jobs are stateless, resulting in the possibility of jobs interfering with each other. If you specify two triggers for the same `JobDetail`, it is possible that the second one starts before the first job has finished. If `JobDetail` classes implement the `Stateful` interface, this does not happen: the second job does not start before the first one has finished.

To make jobs resulting from the `MethodInvokingJobDetailFactoryBean` non-concurrent, set the `concurrent` flag to `false`, as the following example shows:

```
<bean id="jobDetail" class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
    <property name="targetObject" ref="exampleBusinessObject"/>
    <property name="targetMethod" value="doIt"/>
    <property name="concurrent" value="false"/>
</bean>
```

By default, jobs run in a concurrent fashion.

### Wiring up Jobs by Using Triggers and `SchedulerFactoryBean`

We have created job details and jobs. We have also reviewed the convenience bean that lets you invoke a method on a specific object. Of course, we still need to schedule the jobs themselves. This is done by using triggers and a `SchedulerFactoryBean`. Several triggers are available within Quartz, and Spring offers two Quartz `FactoryBean` implementations with convenient defaults: `SimpleTriggerFactoryBean` and `CronTriggerFactoryBean`.

Triggers need to be scheduled. Spring offers a `SchedulerFactoryBean` that exposes triggers to be set as properties. `SchedulerFactoryBean` schedules the actual jobs with those triggers.

The following listing uses both a `SimpleTriggerFactoryBean` and a `CronTriggerFactoryBean`:

```
<bean id="simpleTrigger" class="org.springframework.scheduling.quartz.SimpleTriggerFactoryBean">
    <!-- see the example of method invoking job above -->
    <property name="jobDetail" ref="jobDetail"/>
    <!-- 10 seconds -->
    <property name="startDelay" value="10000"/>
    <!-- repeat every 50 seconds -->
    <property name="repeatInterval" value="50000"/>
</bean>

<bean id="cronTrigger" class="org.springframework.scheduling.quartz.CronTriggerFactoryBean">
    <property name="jobDetail" ref="exampleJob"/>
    <!-- run every morning at 6 AM -->
    <property name="cronExpression" value="0 0 6 * * ?"/>
</bean>
```

The preceding example sets up two triggers, one running every 50 seconds with a starting delay of 10 seconds and one running every morning at 6 AM. To finalize everything, we need to set up the `SchedulerFactoryBean`, as the following example shows:

```
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
    <property name="triggers">
        <list>
            <ref bean="cronTrigger"/>
            <ref bean="simpleTrigger"/>
        </list>
    </property>
</bean>
```

More properties are available for the `SchedulerFactoryBean`, such as the calendars used by the job details, properties to customize Quartz with, and a Spring-provided JDBC DataSource. See the `SchedulerFactoryBean` javadoc for more information.
`SchedulerFactoryBean` also recognizes a `quartz.properties` file in the classpath, based on Quartz property keys, as with regular Quartz configuration. Note that many `SchedulerFactoryBean` settings interact with common Quartz settings in the properties file; it is therefore not recommended to specify values at both levels. For example, do not set an "org.quartz.jobStore.class" property if you mean to rely on a Spring-provided DataSource, or specify an `org.springframework.scheduling.quartz.LocalDataSourceJobStore` variant, which is a full-fledged replacement for the standard `org.quartz.impl.jdbcjobstore.JobStoreTX`.

## Cache Abstraction

Since version 3.1, the Spring Framework provides support for transparently adding caching to an existing Spring application. Similar to the transaction support, the caching abstraction allows consistent use of various caching solutions with minimal impact on the code. In Spring Framework 4.1, the cache abstraction was significantly extended with support for JSR-107 annotations and more customization options.

At its core, the cache abstraction applies caching to Java methods, thus reducing the number of executions based on the information available in the cache. That is, each time a targeted method is invoked, the abstraction applies a caching behavior that checks whether the method has already been invoked for the given arguments. If it has been invoked, the cached result is returned without having to invoke the actual method. If the method has not been invoked, then it is invoked, and the result is cached and returned to the user so that, the next time the method is invoked, the cached result is returned. This way, expensive methods (whether CPU- or IO-bound) can be invoked only once for a given set of parameters, and the result can be reused without having to actually invoke the method again. The caching logic is applied transparently, without any interference to the invoker.

This approach works only for methods that are guaranteed to return the same output (result) for a given input (or arguments), no matter how many times they are invoked.

The caching abstraction provides other cache-related operations, such as the ability to update the content of the cache or to remove one or all entries. These are useful if the cache deals with data that can change during the course of the application.

As with other services in the Spring Framework, the caching service is an abstraction (not a cache implementation) and requires the use of actual storage to store the cache data — that is, the abstraction frees you from having to write the caching logic but does not provide the actual data store. This abstraction is materialized by the `org.springframework.cache.Cache` and `org.springframework.cache.CacheManager` interfaces.

Spring provides a few implementations of that abstraction: JDK `java.util.concurrent.ConcurrentMap`-based caches, Gemfire cache, Caffeine, and JSR-107 compliant caches (such as Ehcache 3.x). See Plugging-in Different Back-end Caches for more information on plugging in other cache stores and providers.

The caching abstraction has no special handling for multi-threaded and multi-process environments, as such features are handled by the cache implementation.

If you have a multi-process environment (that is, an application deployed on several nodes), you need to configure your cache provider accordingly. Depending on your use cases, a copy of the same data on several nodes can be enough.
However, if you change the data during the course of the application, you may need to enable other propagation mechanisms.

Caching a particular item is a direct equivalent of the typical get-if-not-found-then-proceed-and-put-eventually code blocks found with programmatic cache interaction. No locks are applied, and several threads may try to load the same item concurrently. The same applies to eviction. If several threads are trying to update or evict data concurrently, you may use stale data. Certain cache providers offer advanced features in that area. See the documentation of your cache provider for more details.

To use the cache abstraction, you need to take care of two aspects:

* Caching declaration: Identify the methods that need to be cached and their policies.
* Cache configuration: The backing cache where the data is stored and from which it is read.

For caching declaration, Spring’s caching abstraction provides a set of Java annotations:

* `@Cacheable`: Triggers cache population.
* `@CacheEvict`: Triggers cache eviction.
* `@CachePut`: Updates the cache without interfering with the method execution.
* `@Caching`: Regroups multiple cache operations to be applied on a method.
* `@CacheConfig`: Shares some common cache-related settings at class level.

The `@Cacheable` Annotation

As the name implies, you can use `@Cacheable` to demarcate methods that are cacheable — that is, methods for which the result is stored in the cache so that, on subsequent invocations (with the same arguments), the value in the cache is returned without having to actually invoke the method. In its simplest form, the annotation declaration requires the name of the cache associated with the annotated method, as the following example shows:

```
@Cacheable("books")
public Book findBook(ISBN isbn) {...}
```

In the preceding snippet, the `findBook` method is associated with the cache named `books`. Each time the method is called, the cache is checked to see whether the invocation has already been run and does not have to be repeated.

While in most cases only one cache is declared, the annotation lets multiple names be specified so that more than one cache is used. In this case, each of the caches is checked before invoking the method — if at least one cache is hit, the associated value is returned. All the other caches that do not contain the value are also updated, even though the cached method was not actually invoked.

The following example uses `@Cacheable` on the `findBook` method with multiple caches:

```
@Cacheable({"books", "isbns"})
public Book findBook(ISBN isbn) {...}
```

### Default Key Generation

Since caches are essentially key-value stores, each invocation of a cached method needs to be translated into a suitable key for cache access. The caching abstraction uses a simple `KeyGenerator` based on the following algorithm:

* If no parameters are given, return `SimpleKey.EMPTY`.
* If only one parameter is given, return that instance.
* If more than one parameter is given, return a `SimpleKey` that contains all parameters.

This approach works well for most use cases, as long as parameters have natural keys and implement valid `hashCode()` and `equals()` methods. If that is not the case, you need to change the strategy: to provide a different default key generator, you need to implement the `org.springframework.cache.interceptor.KeyGenerator` interface, as sketched after this paragraph.
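A minimal sketch of such a custom `KeyGenerator` follows; the method-name-prefixed key format here is an illustrative assumption, not Spring's default behavior:

```
import java.lang.reflect.Method;

import org.springframework.cache.interceptor.KeyGenerator;
import org.springframework.cache.interceptor.SimpleKeyGenerator;

// prefixes the default parameter-based key with the method name,
// so that different methods invoked with equal arguments get distinct keys
public class MethodAwareKeyGenerator implements KeyGenerator {

    @Override
    public Object generate(Object target, Method method, Object... params) {
        return method.getName() + ":" + SimpleKeyGenerator.generateKey(params);
    }
}
```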
### Custom Key Generation Declaration

Since caching is generic, the target methods are quite likely to have various signatures that cannot be readily mapped on top of the cache structure. This tends to become obvious when the target method has multiple arguments, out of which only some are suitable for caching (while the rest are used only by the method logic). Consider a method such as `findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)` annotated with `@Cacheable("books")`: at first glance, while the two `boolean` arguments influence the way the book is found, they are of no use for the cache. Furthermore, what if only one of the two is important while the other is not?

For such cases, the `@Cacheable` annotation lets you specify how the key is generated through its `key` attribute. You can use SpEL to pick the arguments of interest (or their nested properties), perform operations, or even invoke arbitrary methods without having to write any code or implement any interface. This is the recommended approach over the default generator, since methods tend to be quite different in signatures as the code base grows. While the default strategy might work for some methods, it rarely works for all methods. Typical SpEL declarations select a certain argument (`key="#isbn"`), one of its properties (`key="#isbn.rawNumber"`), or even an arbitrary static method (`key="T(someType).hash(#isbn)"`). If you are not familiar with SpEL, do yourself a favor and read Spring Expression Language.

If the algorithm responsible for generating the key is too specific or if it needs to be shared, you can define a custom `keyGenerator` on the operation. To do so, specify the name of the `KeyGenerator` bean implementation to use (for example, `@Cacheable(cacheNames="books", keyGenerator="myKeyGenerator")`).

### Default Cache Resolution

The caching abstraction uses a simple `CacheResolver` that retrieves the caches defined at the operation level by using the configured `CacheManager`. To provide a different default cache resolver, you need to implement the `org.springframework.cache.interceptor.CacheResolver` interface.

### Custom Cache Resolution

The default cache resolution fits well for applications that work with a single `CacheManager` and have no complex cache resolution requirements. For applications that work with several cache managers, you can set the `cacheManager` to use for each operation, as the following example shows:

```
@Cacheable(cacheNames="books", cacheManager="anotherCacheManager")
public Book findBook(ISBN isbn) {...}
```

You can also replace the `CacheResolver` entirely, in a fashion similar to that of replacing key generation. The resolution is requested for every cache operation, letting the implementation actually resolve the caches to use based on runtime arguments. The following example shows how to specify a `CacheResolver`:

```
@Cacheable(cacheResolver="runtimeCacheResolver")
public Book findBook(ISBN isbn) {...}
```

### Synchronized Caching

In a multi-threaded environment, certain operations might be concurrently invoked for the same argument (typically on startup). By default, the cache abstraction does not lock anything, and the same value may be computed several times, defeating the purpose of caching. For those particular cases, you can use the `sync` attribute to instruct the underlying cache provider to lock the cache entry while the value is being computed.
As a result, only one thread is busy computing the value, while the others are blocked until the entry is updated in the cache. The following example shows how to use the `sync` attribute:

```
@Cacheable(cacheNames="foos", sync=true)
public Foo executeExpensiveOperation(String id) {...}
```

This is an optional feature, and your favorite cache library may not support it. All `CacheManager` implementations provided by the core framework support it. See the documentation of your cache provider for more details.

### Conditional Caching

Sometimes, a method might not be suitable for caching all the time (for example, it might depend on the given arguments). The cache annotations support such use cases through the `condition` parameter, which takes a SpEL expression that is evaluated to either `true` or `false`. If `true`, the method is cached. If not, it behaves as if the method is not cached (that is, the method is invoked every time, no matter what values are in the cache or what arguments are used). For example, the following declaration caches the result only if the argument `name` has a length shorter than 32: `@Cacheable(cacheNames="book", condition="#name.length() < 32")`.

In addition to the `condition` parameter, you can use the `unless` parameter to veto the adding of a value to the cache. Unlike `condition`, `unless` expressions are evaluated after the method has been invoked. To expand on the previous example, perhaps we only want to cache paperback books, which we can do by adding `unless="#result.hardback"` to the declaration.

The cache abstraction supports `java.util.Optional` return types. If an `Optional` value is present, it will be stored in the associated cache. If an `Optional` value is not present, `null` will be stored in the associated cache. `#result` always refers to the business entity and never a supported wrapper, so the previous example can be rewritten as follows:

```
@Cacheable(cacheNames="book", condition="#name.length() < 32", unless="#result?.hardback")
public Optional<Book> findBook(String name)
```

Note that `#result` still refers to `Book` and not `Optional<Book>`. Since it might be `null`, we use SpEL’s safe navigation operator.

### Available Caching SpEL Evaluation Context

Each SpEL expression evaluates against a dedicated context. In addition to the built-in parameters, the framework provides dedicated caching-related metadata, such as the argument names, which you can use for key and conditional computations. Method arguments are accessible by name (such as `#name`) or by index (`#a0` or `#p0`); the root object exposes metadata such as `#root.methodName`, `#root.method`, `#root.target`, `#root.targetClass`, `#root.args`, and `#root.caches`; and `#result` exposes the result of the method call (available in `unless` expressions and `@CachePut` key computations).

The `@CachePut` Annotation

When the cache needs to be updated without interfering with the method execution, you can use the `@CachePut` annotation. That is, the method is always invoked and its result is placed into the cache (according to the `@CachePut` options). It supports the same options as `@Cacheable` and should be used for cache population rather than method flow optimization. The following example uses the `@CachePut` annotation:

```
@CachePut(cacheNames="book", key="#isbn")
public Book updateBook(ISBN isbn, BookDescriptor descriptor)
```

Using `@CachePut` and `@Cacheable` annotations on the same method is generally strongly discouraged, because they have different behaviors: the latter causes the method invocation to be skipped by using the cache, while the former forces the invocation in order to run a cache update.

The `@CacheEvict` Annotation

The cache abstraction allows not just population of a cache store but also eviction. This process is useful for removing stale or unused data from the cache. As opposed to `@Cacheable`, `@CacheEvict` demarcates methods that perform cache eviction (that is, methods that act as triggers for removing data from the cache).
Similarly to its sibling, `@CacheEvict` requires specifying one or more caches that are affected by the action, allows a custom cache and key resolution or a condition to be specified, and features an extra parameter (`allEntries`) that indicates whether a cache-wide eviction needs to be performed rather than just an entry eviction (based on the key). The following example evicts all entries from the `books` cache:

```
@CacheEvict(cacheNames="books", allEntries=true)
public void loadBooks(InputStream batch)
```

This option comes in handy when an entire cache region needs to be cleared out. Rather than evicting each entry (which would take a long time, since it is inefficient), all the entries are removed in one operation, as the preceding example shows. Note that the framework ignores any key specified in this scenario, as it does not apply (the entire cache is evicted, not only one entry).

You can also indicate whether the eviction should occur after (the default) or before the method is invoked by using the `beforeInvocation` attribute. The former provides the same semantics as the rest of the annotations: once the method completes successfully, an action (in this case, eviction) on the cache is run. If the method does not run (as it might be cached) or an exception is thrown, the eviction does not occur. The latter (`beforeInvocation=true`) causes the eviction to always occur before the method is invoked. This is useful in cases where the eviction does not need to be tied to the method outcome.

Note that `void` methods can be used with `@CacheEvict`: as the methods act as a trigger, the return values are ignored (they do not interact with the cache). This is not the case with `@Cacheable`, which adds data to the cache or updates data in the cache and, thus, requires a result.

The `@Caching` Annotation

Sometimes, multiple annotations of the same type (such as `@CacheEvict` or `@CachePut`) need to be specified — for example, because the condition or the key expression is different between different caches. `@Caching` lets multiple nested `@Cacheable`, `@CachePut`, and `@CacheEvict` annotations be used on the same method. The following example uses two `@CacheEvict` annotations:

```
@Caching(evict = { @CacheEvict("primary"), @CacheEvict(cacheNames="secondary", key="#p0") })
public Book importBooks(String deposit, Date date)
```

The `@CacheConfig` Annotation

So far, we have seen that caching operations offer many customization options and that you can set these options for each operation. However, some of the customization options can be tedious to configure if they apply to all operations of the class. For instance, specifying the name of the cache to use for every cache operation of the class can be replaced by a single class-level definition. This is where `@CacheConfig` comes into play. The following example uses `@CacheConfig` to set the name of the cache:

```
@CacheConfig("books")
public class BookRepositoryImpl implements BookRepository {

    @Cacheable
    public Book findBook(ISBN isbn) {...}
}
```

`@CacheConfig` is a class-level annotation that allows sharing the cache names, the custom `KeyGenerator`, the custom `CacheManager`, and the custom `CacheResolver`. Placing this annotation on the class does not turn on any caching operation. An operation-level customization always overrides a customization set on `@CacheConfig`. Therefore, this gives three levels of customizations for each cache operation:
* Globally configured, for example through `CachingConfigurer` (see the next section).
* At the class level, using `@CacheConfig`.
* At the operation level.

Provider-specific settings are typically available on the `CacheManager` bean instead.

## Enabling Caching Annotations

It is important to note that declaring the cache annotations does not automatically trigger their actions. Like many things in Spring, the feature has to be declaratively enabled (which means that, if you ever suspect caching is to blame, you can disable it by removing only one configuration line rather than all the annotations in your code).

To enable caching annotations, add the `@EnableCaching` annotation to one of your `@Configuration` classes:

```
@Configuration
@EnableCaching
public class AppConfig {

    @Bean
    CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager();
        cacheManager.setCacheSpecification(...);
        return cacheManager;
    }
}
```

Alternatively, for XML configuration, you can use the `<cache:annotation-driven/>` element:

```
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:cache="http://www.springframework.org/schema/cache"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/cache https://www.springframework.org/schema/cache/spring-cache.xsd">

    <cache:annotation-driven/>

    <bean id="cacheManager" class="org.springframework.cache.caffeine.CaffeineCacheManager">
        <property name="cacheSpecification" value="..."/>
    </bean>
</beans>
```

Both the `<cache:annotation-driven/>` element and the `@EnableCaching` annotation let you specify various options that influence the way the caching behavior is added to the application through AOP. The configuration is intentionally similar to that of `@Transactional`.

The default advice mode for processing caching annotations is `proxy`, which allows for interception of calls through the proxy only; local calls within the same class cannot get intercepted that way. For more detail about advanced customizations (using Java configuration), see the `CachingConfigurer` javadoc.

`<cache:annotation-driven/>` looks for `@Cacheable`/`@CachePut`/`@CacheEvict`/`@Caching` only on beans in the same application context in which it is defined. This means that, if you put `<cache:annotation-driven/>` in a `WebApplicationContext` for a `DispatcherServlet`, it checks for beans only in your controllers, not your services. See the MVC section for more information.

Spring recommends that you only annotate concrete classes (and methods of concrete classes) with the `@Cache*` annotations, as opposed to annotating interfaces. You can place a `@Cache*` annotation on an interface (or an interface method), but this works only as you would expect it to if you use interface-based proxies.

## Using Custom Annotations

The caching abstraction lets you use your own annotations to identify which method triggers cache population or eviction. This is quite handy as a template mechanism, as it eliminates the need to duplicate cache annotation declarations, which is especially useful if the key or condition is specified or if foreign imports (`org.springframework`) are not allowed in your code base. Similarly to the rest of the stereotype annotations, you can use `@Cacheable`, `@CachePut`, `@CacheEvict`, and `@CacheConfig` as meta-annotations (that is, annotations that can annotate other annotations). In the following example, we replace a common `@Cacheable` declaration with our own custom annotation:

```
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD})
@Cacheable(cacheNames="books", key="#isbn")
public @interface SlowService {
}
```

In the preceding example, we have defined our own `SlowService` annotation, which itself is annotated with `@Cacheable`.
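In use, the custom annotation replaces the equivalent `@Cacheable` declaration. The following before-and-after sketch follows the `findBook` signature from the earlier examples:

```
// instead of:
@Cacheable(cacheNames="books", key="#isbn")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)

// we can now write:
@SlowService
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)
```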
Even though `@SlowService` is not a Spring annotation, the container automatically picks up its declaration at runtime and understands its meaning. Note that, as mentioned earlier, annotation-driven behavior needs to be enabled.

## JCache (JSR-107) Annotations

Since version 4.1, Spring’s caching abstraction fully supports the JCache standard (JSR-107) annotations: `@CacheResult`, `@CachePut`, `@CacheRemove`, and `@CacheRemoveAll`, as well as the `@CacheDefaults`, `@CacheKey`, and `@CacheValue` companions. You can use these annotations even without migrating your cache store to JSR-107. The internal implementation uses Spring’s caching abstraction and provides default `CacheResolver` and `KeyGenerator` implementations that are compliant with the specification. In other words, if you are already using Spring’s caching abstraction, you can switch to these standard annotations without changing your cache storage (or configuration, for that matter).

## Feature Summary

For those who are familiar with Spring’s caching annotations, the following table describes the main differences between the Spring annotations and their JSR-107 counterparts:

| Spring | JSR-107 | Remark |
| --- | --- | --- |
| `@Cacheable` | `@CacheResult` | Fairly similar. `@CacheResult` can cache specific exceptions and force the execution of the method, regardless of the content of the cache. |
| `@CachePut` | `@CachePut` | While Spring updates the cache with the result of the method invocation, JCache requires it to be passed as an argument that is annotated with `@CacheValue`. |
| `@CacheEvict` | `@CacheRemove` | Fairly similar. `@CacheRemove` supports conditional eviction when the method invocation results in an exception. |
| `@CacheEvict(allEntries=true)` | `@CacheRemoveAll` | See `@CacheRemove`. |
| `@CacheConfig` | `@CacheDefaults` | Lets you configure the same concepts, in a similar fashion. |

JCache has the notion of `javax.cache.annotation.CacheResolver`, which is identical to Spring’s `CacheResolver` interface, except that JCache supports only a single cache. By default, a simple implementation retrieves the cache to use based on the name declared on the annotation. It should be noted that, if no cache name is specified on the annotation, a default is automatically generated. See the javadoc of `@CacheResult#cacheName()` for more information.

`CacheResolver` instances are retrieved by a `CacheResolverFactory`. It is possible to customize the factory for each cache operation, as the following example shows:

```
@CacheResult(cacheNames="books", cacheResolverFactory=MyCacheResolverFactory.class)
public Book findBook(ISBN isbn)
```

For all referenced classes, Spring tries to locate a bean with the given type. If more than one match exists, a new instance is created and can use the regular bean lifecycle callbacks, such as dependency injection.

Keys are generated by a `javax.cache.annotation.CacheKeyGenerator` that serves the same purpose as Spring’s `KeyGenerator`. By default, all method arguments are taken into account, unless at least one parameter is annotated with `@CacheKey`. This is similar to Spring’s custom key generation declaration. For instance, the following are identical operations, one using Spring’s abstraction and the other using JCache:

```
@Cacheable(cacheNames="books", key="#isbn")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)

@CacheResult(cacheName="books")
public Book findBook(@CacheKey ISBN isbn, boolean checkWarehouse, boolean includeUsed)
```

You can also specify the `CacheKeyGenerator` on the operation, similar to how you can specify the `CacheResolverFactory`.

JCache can manage exceptions thrown by annotated methods. This can prevent an update of the cache, but it can also cache the exception as an indicator of the failure instead of calling the method again. Assume that `InvalidIsbnNotFoundException` is thrown if the structure of the ISBN is invalid. This is a permanent failure (no book could ever be retrieved with such a parameter).
The following caches the exception so that further calls with the same, invalid ISBN throw the cached exception directly, instead of invoking the method again:

```
@CacheResult(cacheName="books", exceptionCacheName="failures",
        cachedExceptions = InvalidIsbnNotFoundException.class)
public Book findBook(ISBN isbn)
```

## Enabling JSR-107 Support

You do not need to do anything specific to enable the JSR-107 support alongside Spring’s declarative annotation support. Both `@EnableCaching` and the `<cache:annotation-driven/>` XML element automatically enable the JCache support if both the JSR-107 API and the `spring-context-support` module are present in the classpath.

Depending on your use case, the choice is basically yours. You can even mix and match services by using the JSR-107 API on some and using Spring’s own annotations on others. However, if these services impact the same caches, you should use a consistent and identical key generation implementation.

## Declarative XML-based Caching

If annotations are not an option (perhaps due to having no access to the sources or no external code), you can use XML for declarative caching. So, instead of annotating the methods for caching, you can specify the target method and the caching directives externally (similar to the declarative transaction management advice). The example from the previous section can be translated into the following example:

```
<!-- the service we want to make cacheable -->
<bean id="bookService" class="x.y.service.DefaultBookService"/>

<!-- cache definitions -->
<cache:advice id="cacheAdvice" cache-manager="cacheManager">
    <cache:caching cache="books">
        <cache:cacheable method="findBook" key="#isbn"/>
        <cache:cache-evict method="loadBooks" all-entries="true"/>
    </cache:caching>
</cache:advice>

<!-- apply the cacheable behavior to all BookService interfaces -->
<aop:config>
    <aop:advisor advice-ref="cacheAdvice" pointcut="execution(* x.y.BookService.*(..))"/>
</aop:config>

<!-- cache manager definition omitted -->
```

In the preceding configuration, the `bookService` is made cacheable. The caching semantics to apply are encapsulated in the `cache:advice` definition, which causes the `findBook` method to be used for putting data into the cache and the `loadBooks` method for evicting data. Both definitions work against the `books` cache.

The `aop:config` definition applies the cache advice to the appropriate points in the program by using the AspectJ pointcut expression (more information is available in Aspect Oriented Programming with Spring). In the preceding example, all methods from the `BookService` are considered, and the cache advice is applied to them.

The declarative XML caching supports all of the annotation-based model, so moving between the two should be fairly easy. Furthermore, both can be used inside the same application. The XML-based approach does not touch the target code. However, it is inherently more verbose. When dealing with classes that have overloaded methods that are targeted for caching, identifying the proper methods does take extra effort, since the `method` argument is not a good discriminator. In these cases, you can use the AspectJ pointcut to cherry-pick the target methods and apply the appropriate caching functionality. However, through XML, it is easier to apply package-, group-, or interface-wide caching (again, due to the AspectJ pointcut) and to create template-like definitions (as we did in the preceding example by defining the target cache through the `cache` attribute of the `cache:caching` definition).
## Configuring the Cache Storage

The cache abstraction provides several storage integration options. To use them, you need to declare an appropriate `CacheManager` (an entity that controls and manages `Cache` instances and that can be used to retrieve these for storage).

## JDK `ConcurrentMap`-based Cache

The JDK-based `Cache` implementation resides in the `org.springframework.cache.concurrent` package. It lets you use `ConcurrentHashMap` as a backing `Cache` store. The following example shows how to configure two caches:

```
<!-- simple cache manager -->
<bean id="cacheManager" class="org.springframework.cache.support.SimpleCacheManager">
    <property name="caches">
        <set>
            <bean class="org.springframework.cache.concurrent.ConcurrentMapCacheFactoryBean" p:name="default"/>
            <bean class="org.springframework.cache.concurrent.ConcurrentMapCacheFactoryBean" p:name="books"/>
        </set>
    </property>
</bean>
```

The preceding snippet uses the `SimpleCacheManager` to create a `CacheManager` for the two nested `ConcurrentMapCache` instances named `default` and `books`. Note that the names are configured directly for each cache.

As the cache is created by the application, it is bound to its lifecycle, making it suitable for basic use cases, tests, or simple applications. The cache scales well and is very fast, but it does not provide any management, persistence capabilities, or eviction contracts.

## Ehcache-based Cache

Ehcache 3.x is fully JSR-107 compliant, and no dedicated support is required for it. See JSR-107 Cache for details.

## Caffeine Cache

Caffeine is a Java 8 rewrite of Guava’s cache. Its implementation is located in the `org.springframework.cache.caffeine` package and provides access to several features of Caffeine. The following example configures a `CacheManager` that creates the cache on demand:

```
<bean id="cacheManager" class="org.springframework.cache.caffeine.CaffeineCacheManager"/>
```

You can also provide the caches to use explicitly. In that case, only those are made available by the manager. The following example shows how to do so:

```
<bean id="cacheManager" class="org.springframework.cache.caffeine.CaffeineCacheManager">
    <property name="cacheNames">
        <set>
            <value>default</value>
            <value>books</value>
        </set>
    </property>
</bean>
```

The Caffeine `CacheManager` also supports custom `Caffeine` and `CacheLoader` instances. See the Caffeine documentation for more information about those.

## GemFire-based Cache

GemFire is a memory-oriented, disk-backed, elastically scalable, continuously available, active (with built-in pattern-based subscription notifications), globally replicated database that provides fully-featured edge caching. For further information on how to use GemFire as a `CacheManager` (and more), see the Spring Data GemFire reference documentation.

## JSR-107 Cache

Spring’s caching abstraction can also use JSR-107-compliant caches. The JCache implementation is located in the `org.springframework.cache.jcache` package. Again, to use it, you need to declare the appropriate `CacheManager`. The following example shows how to do so:

```
<bean id="cacheManager"
        class="org.springframework.cache.jcache.JCacheCacheManager"
        p:cache-manager-ref="jCacheManager"/>

<!-- JSR-107 cache manager setup -->
<bean id="jCacheManager" .../>
```

## Dealing with Caches without a Backing Store

Sometimes, when switching environments or doing testing, you might have cache declarations without having an actual backing cache configured.
As this is an invalid configuration, an exception is thrown at runtime, since the caching infrastructure is unable to find a suitable store. In situations like this, rather than removing the cache declarations (which can prove tedious), you can wire in a simple dummy cache that performs no caching — that is, it forces the cached methods to be invoked every time. The following example shows how to do so:

```
<bean id="cacheManager" class="org.springframework.cache.support.CompositeCacheManager">
    <property name="cacheManagers">
        <list>
            <ref bean="jdkCache"/>
            <ref bean="gemfireCache"/>
        </list>
    </property>
    <property name="fallbackToNoOpCache" value="true"/>
</bean>
```

The `CompositeCacheManager` in the preceding example chains multiple `CacheManager` instances and, through the `fallbackToNoOpCache` flag, adds a no-op cache for all the definitions not handled by the configured cache managers. That is, every cache definition not found in either `jdkCache` or `gemfireCache` (configured earlier in the example) is handled by the no-op cache, which does not store any information, causing the target method to be invoked every time.

## Plugging-in Different Back-end Caches

Clearly, there are plenty of caching products out there that you can use as a backing store. For those that do not support JSR-107, you need to provide a `CacheManager` and a `Cache` implementation. This may sound harder than it is, since, in practice, the classes tend to be simple adapters that map the caching abstraction framework on top of the storage API, as the Caffeine classes do. Most `CacheManager` classes can use the classes in the `org.springframework.cache.support` package (such as `AbstractCacheManager`, which takes care of the boilerplate code, leaving only the actual mapping to be completed).

## How Can I Set the TTL/TTI/Eviction Policy/XXX Feature?

Directly through your cache provider. The cache abstraction is an abstraction, not a cache implementation. The solution you use might support various data policies and different topologies that other solutions do not support (for example, the JDK `ConcurrentHashMap` — exposing that in the cache abstraction would be useless because there would be no backing support). Such functionality should be controlled directly through the backing cache (when configuring it) or through its native API.

## Observability Support

Micrometer defines an Observation concept that enables both Metrics and Traces in applications. Metrics support offers a way to create timers, gauges, or counters for collecting statistics about the runtime behavior of your application. Metrics can help you to track error rates, usage patterns, performance, and more. Traces provide a holistic view of an entire system, crossing application boundaries; you can zoom in on particular user requests and follow their entire completion across applications.

Spring Framework instruments various parts of its own codebase to publish observations if an `ObservationRegistry` is configured. You can learn more about configuring the observability infrastructure in Spring Boot.

## List of produced Observations

Spring Framework instruments various features for observability. As outlined at the beginning of this section, observations can generate timer Metrics and/or Traces, depending on the configuration.

| Observation name | Description |
| --- | --- |
| `"http.server.requests"` | HTTP server request handling (Servlet and Reactive applications) |
| `"http.client.requests"` | HTTP client exchanges (`RestTemplate` and `WebClient`) |

Observations use Micrometer’s official naming convention, but Metrics names are automatically converted to the format preferred by the monitoring system backend (Prometheus, Atlas, Graphite, InfluxDB…).
## Micrometer Observation concepts

If you are not familiar with Micrometer Observation, here is a quick summary of the concepts you should know about:

* `Observation` is the actual recording of something happening in your application. This is processed by `ObservationHandler` implementations to produce metrics or traces.
* Each observation has a corresponding `ObservationContext` implementation; this type holds all the relevant information for extracting metadata for it. In the case of an HTTP server observation, the context implementation could hold the HTTP request, the HTTP response, any exception thrown during processing, and so forth.
* Each `Observation` holds `KeyValues` metadata. In the case of an HTTP server observation, this could be the HTTP request method, the HTTP response status, and so forth. This metadata is contributed by `ObservationConvention` implementations, which should declare the type of `ObservationContext` they support.
* `KeyValues` are said to be "low cardinality" if there is a low, bounded number of possible values for the `KeyValue` tuple (HTTP method is a good example). Low cardinality values are contributed to metrics only. Conversely, "high cardinality" values are unbounded (for example, HTTP request URIs) and are only contributed to traces.
* An `ObservationDocumentation` documents all observations in a particular domain, listing the expected key names and their meaning.

## Configuring Observations

Global configuration options are available at the `ObservationRegistry#observationConfig()` level. Each instrumented component provides two extension points:

* setting the `ObservationRegistry`; if not set, observations are not recorded and are no-ops
* providing a custom `ObservationConvention` to change the default observation name and extracted `KeyValues`

### Using custom Observation conventions

Let’s take the example of the Spring MVC "http.server.requests" metrics instrumentation with the `ServerHttpObservationFilter`. This observation uses a `ServerRequestObservationConvention`, with a `DefaultServerRequestObservationConvention` as its default implementation; custom conventions can be configured on the Servlet filter.
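For instance, registering the filter in Java configuration might look like the following (a minimal sketch; the `ObservabilityConfig` class name is illustrative, and a custom convention could be passed as a second constructor argument):

```
import io.micrometer.observation.ObservationRegistry;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.filter.ServerHttpObservationFilter;

@Configuration
public class ObservabilityConfig {

    // registers the observation filter for all incoming HTTP requests
    @Bean
    public ServerHttpObservationFilter serverHttpObservationFilter(ObservationRegistry registry) {
        return new ServerHttpObservationFilter(registry);
    }
}
```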
If you would like to customize the metadata produced with the observation, you can extend the `DefaultServerRequestObservationConvention` for your requirements:

```
import io.micrometer.common.KeyValue;
import io.micrometer.common.KeyValues;

import org.springframework.http.server.observation.DefaultServerRequestObservationConvention;
import org.springframework.http.server.observation.ServerRequestObservationContext;

public class ExtendedServerRequestObservationConvention extends DefaultServerRequestObservationConvention {

    @Override
    public KeyValues getLowCardinalityKeyValues(ServerRequestObservationContext context) {
        // here, we just want to have an additional KeyValue to the observation, keeping the default values
        return super.getLowCardinalityKeyValues(context).and(custom(context));
    }

    private KeyValue custom(ServerRequestObservationContext context) {
        return KeyValue.of("custom.method", context.getCarrier().getMethod());
    }
}
```

If you want full control, you can implement the entire convention contract for the observation you’re interested in:

```
import io.micrometer.common.KeyValue;
import io.micrometer.common.KeyValues;

import org.springframework.http.server.observation.ServerHttpObservationDocumentation;
import org.springframework.http.server.observation.ServerRequestObservationContext;
import org.springframework.http.server.observation.ServerRequestObservationConvention;

public class CustomServerRequestObservationConvention implements ServerRequestObservationConvention {

    @Override
    public String getName() {
        // will be used as the metric name
        return "http.server.requests";
    }

    @Override
    public String getContextualName(ServerRequestObservationContext context) {
        // will be used for the trace name
        return "http " + context.getCarrier().getMethod().toLowerCase();
    }

    @Override
    public KeyValues getLowCardinalityKeyValues(ServerRequestObservationContext context) {
        return KeyValues.of(method(context), status(context), exception(context));
    }

    @Override
    public KeyValues getHighCardinalityKeyValues(ServerRequestObservationContext context) {
        return KeyValues.of(httpUrl(context));
    }

    private KeyValue method(ServerRequestObservationContext context) {
        // You should reuse as much as possible the corresponding ObservationDocumentation for key names
        return KeyValue.of(ServerHttpObservationDocumentation.LowCardinalityKeyNames.METHOD, context.getCarrier().getMethod());
    }

    private KeyValue status(ServerRequestObservationContext context) {
        return KeyValue.of(ServerHttpObservationDocumentation.LowCardinalityKeyNames.STATUS, String.valueOf(context.getResponse().getStatus()));
    }

    private KeyValue exception(ServerRequestObservationContext context) {
        String exception = (context.getError() != null) ? context.getError().getClass().getSimpleName() : KeyValue.NONE_VALUE;
        return KeyValue.of(ServerHttpObservationDocumentation.LowCardinalityKeyNames.EXCEPTION, exception);
    }

    private KeyValue httpUrl(ServerRequestObservationContext context) {
        return KeyValue.of(ServerHttpObservationDocumentation.HighCardinalityKeyNames.HTTP_URL, context.getCarrier().getRequestURI());
    }
}
```

You can also achieve similar goals by using a custom `ObservationFilter`, adding or removing key values for an observation. Filters do not replace the default convention and are used as a post-processing component.
```
import io.micrometer.common.KeyValue;
import io.micrometer.observation.Observation;
import io.micrometer.observation.ObservationFilter;
import org.springframework.http.server.observation.ServerRequestObservationContext;

public class ServerRequestObservationFilter implements ObservationFilter {

	@Override
	public Observation.Context map(Observation.Context context) {
		if (context instanceof ServerRequestObservationContext serverContext) {
			context.setName("custom.observation.name");
			context.addLowCardinalityKeyValue(KeyValue.of("project", "spring"));
			String customAttribute = (String) serverContext.getCarrier().getAttribute("customAttribute");
			context.addLowCardinalityKeyValue(KeyValue.of("custom.attribute", customAttribute));
		}
		return context;
	}
}
```

You can configure `ObservationFilter` instances on the `ObservationRegistry`.

## HTTP Server instrumentation

HTTP server exchange observations are created with the name "http.server.requests" for Servlet and Reactive applications.

### Servlet applications

Applications need to configure the `org.springframework.web.filter.ServerHttpObservationFilter` Servlet filter in their application. It uses the `org.springframework.http.server.observation.DefaultServerRequestObservationConvention` by default. If an exception is handled by the application (for example, with an `@ExceptionHandler` method), it should be recorded on the observation context:

```
import jakarta.servlet.http.HttpServletRequest;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.filter.ServerHttpObservationFilter;

@ExceptionHandler(MissingUserException.class)
ResponseEntity<Void> handleMissingUser(HttpServletRequest request, MissingUserException exception) {
	// We want to record this exception with the observation
	ServerHttpObservationFilter.findObservationContext(request)
			.ifPresent(context -> context.setError(exception));
	return ResponseEntity.notFound().build();
}
```

Because the instrumentation is done at the Servlet Filter level, the observation scope only covers the filters ordered after this one as well as the handling of the request. Typically, Servlet container error handling is performed at a lower level and won’t have any active observation or span. For this use case, a container-specific implementation is required, such as a Tomcat `Valve`; this is outside the scope of this project.

### Reactive applications

Applications need to configure the `org.springframework.web.filter.reactive.ServerHttpObservationFilter` reactive `WebFilter` in their application. It uses the `org.springframework.http.server.reactive.observation.DefaultServerRequestObservationConvention` by default. Exceptions handled by the application should be recorded in the same fashion:

```
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.filter.reactive.ServerHttpObservationFilter;
import org.springframework.web.server.ServerWebExchange;

@ExceptionHandler(MissingUserException.class)
ResponseEntity<Void> handleMissingUser(ServerWebExchange exchange, MissingUserException exception) {
	// We want to record this exception with the observation
	ServerHttpObservationFilter.findObservationContext(exchange)
			.ifPresent(context -> context.setError(exception));
	return ResponseEntity.notFound().build();
}
```

## HTTP Client Instrumentation

HTTP client exchange observations are created with the name "http.client.requests" for blocking and reactive clients. Unlike their server counterparts, the instrumentation is implemented directly in the client, so the only required step is to configure an `ObservationRegistry` on the client, as the sketch below illustrates.
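As an illustrative, hedged sketch (the helper function is made up), configuring the registry on both client types might look like this in Kotlin, assuming `RestTemplate#setObservationRegistry` and `WebClient.Builder#observationRegistry` are available in your Spring Framework version:

```
import io.micrometer.observation.ObservationRegistry
import org.springframework.web.client.RestTemplate
import org.springframework.web.reactive.function.client.WebClient

fun instrumentedClients(registry: ObservationRegistry): Pair<RestTemplate, WebClient> {
	// Blocking client: the registry is set directly on the template
	val restTemplate = RestTemplate().apply { observationRegistry = registry }

	// Reactive client: the registry is set on the builder
	val webClient = WebClient.builder()
			.observationRegistry(registry)
			.build()

	return restTemplate to webClient
}
```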
### RestTemplate

Applications must configure an `ObservationRegistry` on the `RestTemplate` instance. Instrumentation uses the `org.springframework.http.client.observation.ClientRequestObservationConvention` by default.

### WebClient

Applications must configure an `ObservationRegistry` on the `WebClient` builder. Instrumentation uses the `org.springframework.web.reactive.function.client.ClientRequestObservationConvention` by default.

This part of the appendix lists XML schemas related to integration technologies.

## `jee` Schema

The `jee` elements deal with issues related to Jakarta EE (Enterprise Edition) configuration, such as looking up a JNDI object and defining EJB references.

To use the elements in the `jee` schema, you need to have the appropriate preamble at the top of your Spring XML configuration file: a `<beans>` root element that declares the `jee` namespace and its schema location so that the elements in the `jee` namespace are available to you. The pattern matches the `cache` schema preamble shown later in this appendix, with `jee` substituted for `cache`.

### `<jee:jndi-lookup/>` (Simple)

The following example shows how to use JNDI to look up a data source without `jee`:

```
<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
	<property name="jndiName" value="jdbc/MyDataSource"/>
</bean>
<bean id="userDao" class="com.foo.JdbcUserDao">
	<!-- Spring will do the cast automatically (as usual) -->
	<property name="dataSource" ref="dataSource"/>
</bean>
```

The following example shows the same lookup with `jee`:

```
<jee:jndi-lookup id="dataSource" jndi-name="jdbc/MyDataSource"/>

<bean id="userDao" class="com.foo.JdbcUserDao">
	<!-- Spring will do the cast automatically (as usual) -->
	<property name="dataSource" ref="dataSource"/>
</bean>
```

### `<jee:jndi-lookup/>` (with Single JNDI Environment Setting)

The following example shows how to use JNDI to look up an environment variable with `jee`:

```
<jee:jndi-lookup id="simple" jndi-name="jdbc/MyDataSource">
	<jee:environment>ping=pong</jee:environment>
</jee:jndi-lookup>
```

### `<jee:jndi-lookup/>` (with Multiple JNDI Environment Settings)

The following example shows how to use JNDI to look up multiple environment variables with `jee`:

```
<jee:jndi-lookup id="simple" jndi-name="jdbc/MyDataSource">
	<!-- newline-separated, key-value pairs for the environment (standard Properties format) -->
	<jee:environment>
		sing=song
		ping=pong
	</jee:environment>
</jee:jndi-lookup>
```

### `<jee:jndi-lookup/>` (Complex)

The following example shows how to configure a more complex JNDI lookup without `jee`:

```
<bean id="simple" class="org.springframework.jndi.JndiObjectFactoryBean">
	<property name="jndiName" value="jdbc/MyDataSource"/>
	<property name="cache" value="true"/>
	<property name="resourceRef" value="true"/>
	<property name="lookupOnStartup" value="false"/>
	<property name="expectedType" value="com.myapp.DefaultThing"/>
	<property name="proxyInterface" value="com.myapp.Thing"/>
</bean>
```

The following example shows the same lookup with `jee`:

```
<jee:jndi-lookup id="simple"
		jndi-name="jdbc/MyDataSource"
		cache="true"
		resource-ref="true"
		lookup-on-startup="false"
		expected-type="com.myapp.DefaultThing"
		proxy-interface="com.myapp.Thing"/>
```

### `<jee:local-slsb/>` (Simple)

The `<jee:local-slsb/>` element configures a reference to a local EJB Stateless Session Bean.

The following example shows how to configure a reference to a local EJB Stateless Session Bean without `jee`:

```
<bean id="simple" class="org.springframework.ejb.access.LocalStatelessSessionProxyFactoryBean">
	<property name="jndiName" value="ejb/RentalServiceBean"/>
	<property name="businessInterface" value="com.foo.service.RentalService"/>
</bean>
```

The following example shows the same reference with `jee`:

```
<jee:local-slsb id="simpleSlsb" jndi-name="ejb/RentalServiceBean"
		business-interface="com.foo.service.RentalService"/>
```

### `<jee:local-slsb/>` (Complex)

The `<jee:local-slsb/>` element configures a reference to a local EJB Stateless Session Bean.
The following example shows how to configure a reference to a local EJB Stateless Session Bean and a number of properties without `jee`:

```
<bean id="complexLocalEjb"
		class="org.springframework.ejb.access.LocalStatelessSessionProxyFactoryBean">
	<property name="jndiName" value="ejb/RentalServiceBean"/>
	<property name="businessInterface" value="com.foo.service.RentalService"/>
	<property name="cacheHome" value="true"/>
	<property name="lookupHomeOnStartup" value="true"/>
	<property name="resourceRef" value="true"/>
</bean>
```

The following example shows how to configure the same reference with `jee`:

```
<jee:local-slsb id="complexLocalEjb"
		jndi-name="ejb/RentalServiceBean"
		business-interface="com.foo.service.RentalService"
		cache-home="true"
		lookup-home-on-startup="true"
		resource-ref="true"/>
```

### `<jee:remote-slsb/>`

The `<jee:remote-slsb/>` element configures a reference to a remote EJB Stateless Session Bean.

The following example shows how to configure a reference to a remote EJB Stateless Session Bean without `jee`:

```
<bean id="complexRemoteEjb"
		class="org.springframework.ejb.access.SimpleRemoteStatelessSessionProxyFactoryBean">
	<property name="jndiName" value="ejb/MyRemoteBean"/>
	<property name="businessInterface" value="com.foo.service.RentalService"/>
	<property name="cacheHome" value="true"/>
	<property name="lookupHomeOnStartup" value="true"/>
	<property name="resourceRef" value="true"/>
	<property name="homeInterface" value="com.foo.service.RentalService"/>
	<property name="refreshHomeOnConnectFailure" value="true"/>
</bean>
```

The following example shows the same reference with `jee`:

```
<jee:remote-slsb id="complexRemoteEjb"
		jndi-name="ejb/MyRemoteBean"
		business-interface="com.foo.service.RentalService"
		cache-home="true"
		lookup-home-on-startup="true"
		resource-ref="true"
		home-interface="com.foo.service.RentalService"
		refresh-home-on-connect-failure="true"/>
```

## `jms` Schema

The `jms` elements deal with configuring JMS-related beans, such as Spring’s Message Listener Containers. These elements are detailed in the section of the JMS chapter entitled JMS Namespace Support. See that chapter for full details on this support and the `jms` elements themselves.

In the interest of completeness, to use the elements in the `jms` schema, you need to have the corresponding preamble at the top of your Spring XML configuration file, declaring the `jms` namespace and schema location in the same way as the `cache` schema preamble shown later in this appendix.

### `<context:mbean-export/>`

This element is detailed in Configuring Annotation-based MBean Export.

## `cache` Schema

You can use the `cache` elements to enable support for Spring’s `@Cacheable`, `@CacheEvict`, `@CachePut`, and `@Caching` annotations. It also supports declarative XML-based caching. See Enabling Caching Annotations and Declarative XML-based Caching for details.

To use the elements in the `cache` schema, you need to have the following preamble at the top of your Spring XML configuration file.
The text in the following snippet references the correct schema so that the elements in the `cache` namespace are available to you:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:cache="http://www.springframework.org/schema/cache"
	xsi:schemaLocation="
		http://www.springframework.org/schema/beans https://www.springframework.org/schema/beans/spring-beans.xsd
		http://www.springframework.org/schema/cache https://www.springframework.org/schema/cache/spring-cache.xsd">

	<!-- bean definitions here -->

</beans>
```

Kotlin is a statically typed language that targets the JVM (and other platforms) and allows writing concise and elegant code while providing very good interoperability with existing libraries written in Java.

The Spring Framework provides first-class support for Kotlin and lets developers write Kotlin applications almost as if the Spring Framework was a native Kotlin framework. Most of the code samples of the reference documentation are provided in Kotlin in addition to Java.

The easiest way to build a Spring application with Kotlin is to leverage Spring Boot and its dedicated Kotlin support. This comprehensive tutorial will teach you how to build Spring Boot applications with Kotlin using start.spring.io. Feel free to join the #spring channel of Kotlin Slack or ask a question with `spring` and `kotlin` as tags on Stackoverflow if you need support.

Spring Framework supports Kotlin 1.3+ and requires `kotlin-stdlib` (or one of its variants, such as `kotlin-stdlib-jdk8`) and `kotlin-reflect` to be present on the classpath. They are provided by default if you bootstrap a Kotlin project on start.spring.io. Kotlin inline classes are not yet supported.

The Jackson Kotlin module is required for serializing or deserializing JSON data for Kotlin classes with Jackson, so make sure to add the `com.fasterxml.jackson.module:jackson-module-kotlin` dependency to your project if you use it.

Kotlin extensions provide the ability to extend existing classes with additional functionality. The Spring Framework Kotlin APIs use these extensions to add new Kotlin-specific conveniences to existing Spring APIs. The Spring Framework KDoc API lists and documents all available Kotlin extensions and DSLs.

Keep in mind that Kotlin extensions need to be imported to be used. This means, for example, that the `GenericApplicationContext.registerBean` Kotlin extension is available only if the corresponding function is imported. That said, similar to static imports, an IDE should automatically suggest the import in most cases.

For example, Kotlin reified type parameters provide a workaround for JVM generics type erasure, and the Spring Framework provides some extensions to take advantage of this feature. This allows for a better Kotlin API for `RestTemplate`, for the new `WebClient` from Spring WebFlux, and for various other APIs. Other libraries, such as Reactor and Spring Data, also provide Kotlin extensions for their APIs, thus giving a better Kotlin development experience overall.

To retrieve a list of `User` objects in Java, you would normally write the following:

```
Flux<User> users = client.get().retrieve().bodyToFlux(User.class);
```

With Kotlin and the Spring Framework extensions, you can instead write the following:

```
val users = client.get().retrieve().bodyToFlux<User>()
// or (both are equivalent)
val users : Flux<User> = client.get().retrieve().bodyToFlux()
```

As in Java, `users` in Kotlin is strongly typed, but Kotlin’s clever type inference allows for shorter syntax.

One of Kotlin’s key features is null-safety, which cleanly deals with `null` values at compile time rather than bumping into the famous `NullPointerException` at runtime.
This makes applications safer through nullability declarations and expressing “value or no value” semantics without paying the cost of wrappers, such as `Optional`. (Kotlin allows using functional constructs with nullable values. See this comprehensive guide to Kotlin null-safety.)

Although Java does not let you express null-safety in its type-system, the Spring Framework provides null-safety for the whole Spring Framework API via tooling-friendly annotations declared in the `org.springframework.lang` package. By default, types from Java APIs used in Kotlin are recognized as platform types, for which null-checks are relaxed. Kotlin support for JSR-305 annotations and Spring nullability annotations provide null-safety for the whole Spring Framework API to Kotlin developers, with the advantage of dealing with `null`-related issues at compile time. Libraries such as Reactor or Spring Data provide null-safe APIs to leverage this feature.

You can configure JSR-305 checks by adding the `-Xjsr305` compiler flag with the following options: `-Xjsr305={strict|warn|ignore}`. For Kotlin versions 1.1+, the default behavior is the same as `-Xjsr305=warn`. The `strict` value is required to have Spring Framework API null-safety taken into account in Kotlin types inferred from the Spring API, but it should be used with the knowledge that Spring API nullability declarations could evolve even between minor releases and that more checks may be added in the future.

Nullability of generic type arguments, varargs, and array elements is not supported yet, but should be in an upcoming release. See this discussion for up-to-date information.

The Spring Framework supports various Kotlin constructs, such as instantiating Kotlin classes through primary constructors, data binding of immutable classes, and function optional parameters with default values. Kotlin parameter names are recognized through a dedicated `KotlinReflectionParameterNameDiscoverer`, which allows finding interface method parameter names without requiring the Java 8 `-parameters` compiler flag to be enabled during compilation. (For completeness, we nevertheless recommend running the Kotlin compiler with its `-java-parameters` flag for standard Java parameter exposure.)

You can declare configuration classes as top level or nested but not inner, since the latter requires a reference to the outer class.

The Spring Framework also takes advantage of Kotlin null-safety to determine if an HTTP parameter is required without having to explicitly define the `required` attribute. That means `@RequestParam name: String?` is treated as not required and, conversely, `@RequestParam name: String` is treated as being required. This feature is also supported on the Spring Messaging `@Header` annotation.

In a similar fashion, Spring bean injection with `@Autowired`, `@Bean`, or `@Inject` uses this information to determine if a bean is required or not. For example, `@Autowired lateinit var thing: Thing` implies that a bean of type `Thing` must be registered in the application context, while `@Autowired lateinit var thing: Thing?` does not raise an error if such a bean does not exist. Following the same principle, `@Bean fun play(toy: Toy, car: Car?) = Baz(toy, car)` implies that a bean of type `Toy` must be registered in the application context, while a bean of type `Car` may or may not exist. The same behavior applies to autowired constructor parameters.
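To make these conventions concrete, here is a small illustrative Kotlin sketch (the controller, the service, and the `OrderRepository` and `Auditor` types are hypothetical):

```
import org.springframework.stereotype.Component
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestParam
import org.springframework.web.bind.annotation.RestController

interface OrderRepository
interface Auditor

@RestController
class GreetingController {

	// "name" is required because its type is non-nullable;
	// "nickname" is optional because its type is nullable
	@GetMapping("/greet")
	fun greet(@RequestParam name: String, @RequestParam nickname: String?) =
			"Hello ${nickname ?: name}"
}

@Component
class OrderService(
		private val repository: OrderRepository, // a matching bean must exist
		private val auditor: Auditor? // injected only if such a bean exists
)
```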
If you use bean validation on classes with properties or primary constructor parameters, you may need to use annotation use-site targets, such as `@field:NotNull` or `@get:Size(min = 1, max = 15)`.

Spring Framework supports registering beans in a functional way by using lambdas as an alternative to XML or Java configuration (`@Configuration` and `@Bean`). In a nutshell, it lets you register beans with a lambda that acts as a `FactoryBean`. This mechanism is very efficient, as it does not require any reflection or CGLIB proxies.

In Java, you can, for example, write the following:

```
class Foo {}

class Bar {
	private final Foo foo;
	public Bar(Foo foo) {
		this.foo = foo;
	}
}

GenericApplicationContext context = new GenericApplicationContext();
context.registerBean(Foo.class);
context.registerBean(Bar.class, () -> new Bar(context.getBean(Foo.class)));
```

In Kotlin, with reified type parameters and Kotlin extensions, you can instead write the following:

```
class Foo

class Bar(private val foo: Foo)

val context = GenericApplicationContext().apply {
	registerBean<Foo>()
	registerBean { Bar(it.getBean()) }
}
```

When the class `Bar` has a single constructor, you can even just specify the bean class; the constructor parameters will be autowired by type:

```
val context = GenericApplicationContext().apply {
	registerBean<Foo>()
	registerBean<Bar>()
}
```

In order to allow a more declarative approach and cleaner syntax, Spring Framework provides a Kotlin bean definition DSL. It declares an `ApplicationContextInitializer` through a clean declarative API, which lets you deal with profiles and the `Environment` for customizing how beans are registered. In the following example, notice that:

* Type inference usually allows avoiding the specification of the type for bean references like `ref("bazBean")`
* It is possible to use Kotlin top level functions to declare beans using callable references like `bean(::myRouter)` in this example
* When specifying `bean<Bar>()` or `bean(::myRouter)`, parameters are autowired by type
* The `FooBar` bean will be registered only if the `foobar` profile is active

```
class Foo
class Bar(private val foo: Foo)
class Baz(var message: String = "")
class FooBar(private val baz: Baz)

val myBeans = beans {
	bean<Foo>()
	bean<Bar>()
	bean("bazBean") {
		Baz().apply {
			message = "Hello world"
		}
	}
	profile("foobar") {
		bean { FooBar(ref("bazBean")) }
	}
	bean(::myRouter)
}

fun myRouter(foo: Foo, bar: Bar, baz: Baz) = router {
	// ...
}
```

You can then use this `beans()` function to register beans on the application context, as the following example shows:

```
val context = GenericApplicationContext().apply {
	myBeans.initialize(this)
	refresh()
}
```

Spring Boot is based on JavaConfig and does not yet provide specific support for functional bean definition, but you can experimentally use functional bean definitions through Spring Boot’s `ApplicationContextInitializer` support.

## Router DSL

Spring Framework comes with a Kotlin router DSL available in 3 flavors:

* WebMvc.fn DSL with router { }
* WebFlux.fn Reactive DSL with router { }
* WebFlux.fn Coroutines DSL with coRouter { }

These DSLs let you write clean and idiomatic Kotlin code to build a `RouterFunction` instance, as the following example shows:

```
@Configuration
class RouterRouterConfiguration {

	@Bean
	fun mainRouter(userHandler: UserHandler) = router {
		accept(TEXT_HTML).nest {
			GET("/") { ok().render("index") }
			GET("/sse") { ok().render("sse") }
			GET("/users", userHandler::findAllView)
		}
		"/api".nest {
			accept(APPLICATION_JSON).nest {
				GET("/users", userHandler::findAll)
			}
			accept(TEXT_EVENT_STREAM).nest {
				GET("/users", userHandler::stream)
			}
		}
		resources("/**", ClassPathResource("static/"))
	}
}
```

See the MiXiT project for a concrete example.

## MockMvc DSL

A Kotlin DSL is provided via `MockMvc` Kotlin extensions in order to provide a more idiomatic Kotlin API and to allow better discoverability (no usage of static methods).

```
val mockMvc: MockMvc = ...
mockMvc.get("/person/{name}", "Lee") {
	secure = true
	accept = APPLICATION_JSON
	headers {
		contentLanguage = Locale.FRANCE
	}
	principal = Principal { "foo" }
}.andExpect {
	status { isOk }
	content { contentType(APPLICATION_JSON) }
	jsonPath("$.name") { value("Lee") }
	content { json("""{"someBoolean": false}""", false) }
}.andDo {
	print()
}
```

## Kotlin Script Templates

Spring Framework provides a `ScriptTemplateView` which supports JSR-223 to render templates by using script engines. By adding the `kotlin-scripting-jsr223` dependency, it is possible to use this feature to render Kotlin-based templates with the kotlinx.html DSL or Kotlin multiline interpolated `String` values.

`build.gradle.kts`

```
dependencies {
	runtimeOnly("org.jetbrains.kotlin:kotlin-scripting-jsr223:${kotlinVersion}")
}
```

Configuration is usually done with `ScriptTemplateConfigurer` and `ScriptTemplateViewResolver` beans.

`KotlinScriptConfiguration.kt`

```
@Configuration
class KotlinScriptConfiguration {

	@Bean
	fun kotlinScriptConfigurer() = ScriptTemplateConfigurer().apply {
		engineName = "kotlin"
		setScripts("scripts/render.kts")
		renderFunction = "render"
		isSharedEngine = false
	}

	@Bean
	fun kotlinScriptViewResolver() = ScriptTemplateViewResolver().apply {
		setPrefix("templates/")
		setSuffix(".kts")
	}
}
```

See the kotlin-script-templating example project for more details.

## Kotlin multiplatform serialization

As of Spring Framework 5.3, Kotlin multiplatform serialization is supported in Spring MVC, Spring WebFlux, and Spring Messaging (RSocket). The built-in support currently targets the CBOR, JSON, and ProtoBuf formats. To enable it, follow these instructions to add the related dependency and plugin.

With Spring MVC and WebFlux, both Kotlin serialization and Jackson will be configured by default if they are in the classpath, since Kotlin serialization is designed to serialize only Kotlin classes annotated with `@Serializable`.
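For illustration (the `User` type is a made-up example, and the kotlinx-serialization dependency and compiler plugin are assumed to be configured), a class opted into Kotlin serialization looks like this:

```
import kotlinx.serialization.Serializable

// Only classes carrying this annotation are handled by Kotlin serialization
@Serializable
data class User(val name: String, val age: Int)
```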
With Spring Messaging (RSocket), make sure that neither Jackson, Gson, nor JSON-B is in the classpath if you want automatic configuration; if Jackson is needed, configure `KotlinSerializationJsonMessageConverter` manually.

Kotlin Coroutines are Kotlin lightweight threads that allow writing non-blocking code in an imperative way. On the language side, suspending functions provide an abstraction for asynchronous operations, while on the library side kotlinx.coroutines provides functions like `async { }` and types like `Flow`.

Spring Framework provides support for Coroutines on the following scope:

* `Deferred` and `Flow` return values support in Spring MVC and WebFlux annotated `@Controller`
* Suspending function support in Spring MVC and WebFlux annotated `@Controller`
* WebFlux.fn coRouter { } DSL
* Suspending function and `Flow` support in RSocket `@MessageMapping` annotated methods
* Extensions for `RSocketRequester`

Coroutines support is enabled when the `kotlinx-coroutines-core` and `kotlinx-coroutines-reactor` dependencies are in the classpath:

`build.gradle.kts`

```
dependencies {
	implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:${coroutinesVersion}")
	implementation("org.jetbrains.kotlinx:kotlinx-coroutines-reactor:${coroutinesVersion}")
}
```

Versions `1.4.0` and above are supported.

## How Reactive translates to Coroutines?

For return values, the translation from Reactive to Coroutines APIs is the following:

* `fun handler(): Mono<Void>` becomes `suspend fun handler()`
* `fun handler(): Mono<T>` becomes `suspend fun handler(): T` or `suspend fun handler(): T?` depending on whether the `Mono` can be empty or not (with the advantage of being more statically typed)
* `fun handler(): Flux<T>` becomes `fun handler(): Flow<T>`

For input parameters:

* If laziness is not needed, `fun handler(value: T)`, since a suspending function can be invoked to get the value parameter.
* If laziness is needed, a suspending supplier parameter such as `fun handler(supplier: suspend () -> T)` can be used.

`Flow` is the `Flux` equivalent in the Coroutines world, suitable for hot or cold streams and finite or infinite streams, with the following main differences:

* `Flow` is push-based, while `Flux` is a push-pull hybrid
* Backpressure is implemented via suspending functions
* `Flow` has only a single suspending `collect` method, and operators are implemented as extensions
* Operators are easy to implement thanks to Coroutines
* Extensions allow adding custom operators to `Flow`
* Collect operations are suspending functions
* The `map` operator supports asynchronous operations (no need for `flatMap`), since it takes a suspending function parameter

Read this blog post about Going Reactive with Spring, Coroutines and Kotlin Flow for more details, including how to run code concurrently with Coroutines.

## Controllers

Here is an example of a Coroutines `@RestController`.
```
@RestController
class CoroutinesRestController(client: WebClient, banner: Banner) {

	@GetMapping("/suspend")
	suspend fun suspendingEndpoint(): Banner {
		delay(10)
		return banner
	}

	@GetMapping("/flow")
	fun flowEndpoint() = flow {
		delay(10)
		emit(banner)
		delay(10)
		emit(banner)
	}

	@GetMapping("/deferred")
	fun deferredEndpoint() = GlobalScope.async {
		delay(10)
		banner
	}

	@GetMapping("/sequential")
	suspend fun sequential(): List<Banner> {
		val banner1 = client
				.get()
				.uri("/suspend")
				.accept(MediaType.APPLICATION_JSON)
				.awaitExchange()
				.awaitBody<Banner>()
		val banner2 = client
				.get()
				.uri("/suspend")
				.accept(MediaType.APPLICATION_JSON)
				.awaitExchange()
				.awaitBody<Banner>()
		return listOf(banner1, banner2)
	}

	@GetMapping("/parallel")
	suspend fun parallel(): List<Banner> = coroutineScope {
		val deferredBanner1: Deferred<Banner> = async {
			client
					.get()
					.uri("/suspend")
					.accept(MediaType.APPLICATION_JSON)
					.awaitExchange()
					.awaitBody<Banner>()
		}
		val deferredBanner2: Deferred<Banner> = async {
			client
					.get()
					.uri("/suspend")
					.accept(MediaType.APPLICATION_JSON)
					.awaitExchange()
					.awaitBody<Banner>()
		}
		listOf(deferredBanner1.await(), deferredBanner2.await())
	}

	@GetMapping("/error")
	suspend fun error() {
		throw IllegalStateException()
	}

	@GetMapping("/cancel")
	suspend fun cancel() {
		throw CancellationException()
	}
}
```

View rendering with a `@Controller` is also supported.

```
@Controller
class CoroutinesViewController(banner: Banner) {

	@GetMapping("/")
	suspend fun render(model: Model): String {
		delay(10)
		model["banner"] = banner
		return "index"
	}
}
```

## WebFlux.fn

Here is an example of a Coroutines router defined via the coRouter { } DSL and related handlers.

```
@Configuration
class RouterConfiguration {

	@Bean
	fun mainRouter(userHandler: UserHandler) = coRouter {
		GET("/", userHandler::listView)
		GET("/api/user", userHandler::listApi)
	}
}
```

```
class UserHandler(builder: WebClient.Builder) {

	private val client = builder.baseUrl("...").build()

	suspend fun listView(request: ServerRequest): ServerResponse =
			ServerResponse.ok().renderAndAwait("users",
					mapOf("users" to client.get().uri("...").awaitExchange().awaitBody<User>()))

	suspend fun listApi(request: ServerRequest): ServerResponse =
			ServerResponse.ok().contentType(MediaType.APPLICATION_JSON).bodyAndAwait(
					client.get().uri("...").awaitExchange().awaitBody<User>())
}
```

## Transactions

Transactions on Coroutines are supported via the programmatic variant of the Reactive transaction management provided as of Spring Framework 5.2.

For suspending functions, a `TransactionalOperator.executeAndAwait` extension is provided:

```
import org.springframework.transaction.reactive.executeAndAwait

class PersonRepository(private val operator: TransactionalOperator) {

	suspend fun initDatabase() = operator.executeAndAwait {
		insertPerson1()
		insertPerson2()
	}

	private suspend fun insertPerson1() {
		// INSERT SQL statement
	}

	private suspend fun insertPerson2() {
		// INSERT SQL statement
	}
}
```

For Kotlin `Flow`, a `Flow<T>.transactional` extension is provided:

```
import org.springframework.transaction.reactive.transactional

class PersonRepository(private val operator: TransactionalOperator) {

	fun updatePeople() = findPeople().map(::updatePerson).transactional(operator)

	private fun findPeople(): Flow<Person> {
		// SELECT SQL statement
	}

	private suspend fun updatePerson(person: Person): Person {
		// UPDATE SQL statement
	}
}
```

This section provides some specific hints and recommendations worth knowing when developing Spring projects in Kotlin.

## Final by Default

By default, all classes in Kotlin are `final`. The `open` modifier on a class is the opposite of Java’s `final`: it allows others to inherit from this class. This also applies to member functions, in that they need to be marked as `open` to be overridden.
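As a quick, Spring-independent illustration of the language rule itself:

```
// Inheritance and overriding must be opted into explicitly in Kotlin
open class Base {                 // "open" allows subclassing
	open fun greet() = "hi"       // "open" allows overriding
	fun id() = 42                 // final by default: cannot be overridden
}

class Derived : Base() {
	override fun greet() = "hello"
}
```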
While Kotlin’s JVM-friendly design is generally frictionless with Spring, this specific Kotlin feature can prevent the application from starting if it is not taken into consideration. This is because Spring beans (such as `@Configuration` annotated classes, which by default need to be extended at runtime for technical reasons) are normally proxied by CGLIB. The workaround is to add an `open` keyword on each class and member function of Spring beans that are proxied by CGLIB, which can quickly become painful and is against the Kotlin principle of keeping code concise and predictable. It is also possible to avoid CGLIB proxies for configuration classes by using `@Configuration(proxyBeanMethods = false)`.

Fortunately, Kotlin provides a `kotlin-spring` plugin (a preconfigured version of the `kotlin-allopen` plugin) that automatically opens classes and their member functions for types that are annotated or meta-annotated with one of the following annotations:

* `@Component`
* `@Async`
* `@Transactional`
* `@Cacheable`

Meta-annotation support means that types annotated with `@Configuration`, `@Controller`, `@RestController`, `@Service`, or `@Repository` are automatically opened since these annotations are meta-annotated with `@Component`.

start.spring.io enables the `kotlin-spring` plugin by default. So, in practice, you can write your Kotlin beans without any additional `open` keyword, as in Java. For the same reason, the Kotlin code samples in the Spring Framework documentation do not explicitly specify `open` on classes and their member functions.

## Using Immutable Class Instances for Persistence

In Kotlin, it is convenient and considered to be a best practice to declare read-only properties within the primary constructor, as in `class Person(val name: String, val age: Int)`.

You can optionally add the `data` keyword to make the compiler automatically derive the following members from all properties declared in the primary constructor:

* `equals()` and `hashCode()`
* `toString()` of the form `"User(name=John, age=42)"`
* `componentN()` functions that correspond to the properties in their order of declaration
* `copy()` function

As the following example shows, this allows for easy changes to individual properties, even if `Person` properties are read-only:

```
data class Person(val name: String, val age: Int)

val jack = Person(name = "Jack", age = 1)
val olderJack = jack.copy(age = 2)
```

Common persistence technologies (such as JPA) require a default constructor, preventing this kind of design. Fortunately, there is a workaround for this “default constructor hell”, since Kotlin provides a `kotlin-jpa` plugin that generates a synthetic no-arg constructor for classes annotated with JPA annotations. If you need to leverage this kind of mechanism for other persistence technologies, you can configure the `kotlin-noarg` plugin. As of the Kay release train, Spring Data supports Kotlin immutable class instances and does not require the `kotlin-noarg` plugin if the module uses Spring Data object mappings (such as MongoDB, Redis, Cassandra, and others).

Our recommendation is to try to favor constructor injection with `val` read-only (and non-nullable when possible) properties, as the following example shows:

```
@Component
class YourBean(
	private val mongoTemplate: MongoTemplate,
	private val solrClient: SolrClient
)
```

Classes with a single constructor have their parameters automatically autowired.
That’s why there is no need for an explicit `@Autowired constructor` in the example shown above.

If you really need to use field injection, you can use the `lateinit var` construct, as the following example shows:

```
@Component
class YourBean {

	@Autowired
	lateinit var mongoTemplate: MongoTemplate

	@Autowired
	lateinit var solrClient: SolrClient
}
```

## Injecting Configuration Properties

In Java, you can inject configuration properties by using annotations (such as `@Value("${property}")`). However, in Kotlin, `$` is a reserved character that is used for string interpolation. Therefore, if you wish to use the `@Value` annotation in Kotlin, you need to escape the `$` character by writing `@Value("\${property}")`. If you use Spring Boot, you should probably use `@ConfigurationProperties` instead of `@Value` annotations.

As an alternative, you can customize the property placeholder prefix by declaring the following configuration beans:

```
@Bean
fun propertyConfigurer() = PropertySourcesPlaceholderConfigurer().apply {
	setPlaceholderPrefix("%{")
}
```

You can customize existing code (such as Spring Boot actuators or `@LocalServerPort`) that uses the `${…}` syntax, with configuration beans, as the following example shows:

```
@Bean
fun kotlinPropertyConfigurer() = PropertySourcesPlaceholderConfigurer().apply {
	setPlaceholderPrefix("%{")
	setIgnoreUnresolvablePlaceholders(true)
}

@Bean
fun defaultPropertyConfigurer() = PropertySourcesPlaceholderConfigurer()
```

## Checked Exceptions

Java and Kotlin exception handling are pretty close, with the main difference being that Kotlin treats all exceptions as unchecked exceptions. However, when using proxied objects (for example, classes or methods annotated with `@Transactional`), checked exceptions thrown are wrapped by default in an `UndeclaredThrowableException`. To get the original exception thrown, as in Java, methods should be annotated with `@Throws` to specify explicitly the checked exceptions thrown (for example, `@Throws(IOException::class)`).

## Annotation Array Attributes

Kotlin annotations are mostly similar to Java annotations, but array attributes (which are extensively used in Spring) behave differently. As explained in the Kotlin documentation, you can omit the `value` attribute name, unlike other attributes, and specify it as a `vararg` parameter.

To understand what that means, consider `@RequestMapping` (which is one of the most widely used Spring annotations) as an example. This Java annotation is declared as follows:

```
public @interface RequestMapping {

	@AliasFor("path")
	String[] value() default {};

	@AliasFor("value")
	String[] path() default {};

	RequestMethod[] method() default {};

	// ...
}
```

The typical use case for `@RequestMapping` is to map a handler method to a specific path and method. In Java, you can specify a single value for the annotation array attribute, and it is automatically converted to an array. That is why one can write `@RequestMapping(value = "/toys", method = RequestMethod.GET)` or `@RequestMapping(path = "/toys", method = RequestMethod.GET)`.

However, in Kotlin, you must write `@RequestMapping("/toys", method = [RequestMethod.GET])` or `@RequestMapping(path = ["/toys"], method = [RequestMethod.GET])` (square brackets need to be specified with named array attributes). An alternative for this specific `method` attribute (the most common one) is to use a shortcut annotation, such as `@GetMapping`, `@PostMapping`, and others, as the sketch below illustrates.
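For illustration, here is a hypothetical controller showing both forms side by side:

```
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RequestMethod
import org.springframework.web.bind.annotation.RestController

@RestController
class ToyController {

	// Explicit array attribute: square brackets are required in Kotlin
	@RequestMapping("/toys", method = [RequestMethod.GET])
	fun toys() = listOf("ball", "kite")

	// Shortcut annotation: no array attribute needed
	@GetMapping("/toys/latest")
	fun latestToy() = "ball"
}
```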
## Declaration-site variance

Dealing with generic types in Spring applications written in Kotlin may require, for some use cases, understanding Kotlin declaration-site variance, which allows defining the variance when declaring a type. This is not possible in Java, which supports only use-site variance.

For example, declaring `List<Foo>` in Kotlin is conceptually equivalent to `java.util.List<? extends Foo>`, because `kotlin.collections.List` is declared as `interface List<out E> : kotlin.collections.Collection<E>`.

This needs to be taken into account by using the `out` Kotlin keyword on generic types when using Java classes, for example when writing an `org.springframework.core.convert.converter.Converter` from a Kotlin type to a Java type:

```
class ListOfFooConverter : Converter<List<Foo>, CustomJavaList<out Foo>> {
	// ...
}
```

When converting any kind of objects, star projection with `*` can be used instead of `out Any`:

```
class ListOfAnyConverter : Converter<List<*>, CustomJavaList<*>> {
	// ...
}
```

Spring Framework does not yet leverage declaration-site variance type information for injecting beans; subscribe to spring-framework#22313 to track related progress.

## Testing

This section addresses testing with the combination of Kotlin and Spring Framework. The recommended testing framework is JUnit 5 along with MockK for mocking. If you are using Spring Boot, see this related documentation.

### Constructor injection

As described in the dedicated section, JUnit 5 allows constructor injection of beans, which is pretty useful with Kotlin in order to use `val` instead of `lateinit var`. You can use `@TestConstructor(autowireMode = AutowireMode.ALL)` to enable autowiring for all parameters.

```
@SpringJUnitConfig(TestConfig::class)
@TestConstructor(autowireMode = AutowireMode.ALL)
class OrderServiceIntegrationTests(val orderService: OrderService,
		val customerService: CustomerService) {

	// tests that use the injected OrderService and CustomerService
}
```

### `PER_CLASS` Lifecycle

Kotlin lets you specify meaningful test function names between backticks. As of JUnit 5, Kotlin test classes can use the `@TestInstance(TestInstance.Lifecycle.PER_CLASS)` annotation to enable single instantiation of test classes, which allows the use of `@BeforeAll` and `@AfterAll` annotations on non-static methods, which is a good fit for Kotlin.

You can also change the default behavior to `PER_CLASS` thanks to a `junit-platform.properties` file with a `junit.jupiter.testinstance.lifecycle.default = per_class` property.

The following example demonstrates `@BeforeAll` and `@AfterAll` annotations on non-static methods:

```
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class IntegrationTests {

	val application = Application(8181)
	val client = WebClient.create("http://localhost:8181")

	@BeforeAll
	fun beforeAll() {
		application.start()
	}

	@Test
	fun `Find all users on HTML page`() {
		client.get().uri("/users")
				.accept(TEXT_HTML)
				.retrieve()
				.bodyToMono<String>()
				.test()
				.expectNextMatches { it.contains("Foo") }
				.verifyComplete()
	}

	@AfterAll
	fun afterAll() {
		application.stop()
	}
}
```

### Specification-like Tests

You can create specification-like tests with JUnit 5 and Kotlin.
The following example shows how to do so:

```
class SpecificationLikeTests {

	@Nested
	@DisplayName("a calculator")
	inner class Calculator {

		val calculator = SampleCalculator()

		@Test
		fun `should return the result of adding the first number to the second number`() {
			val sum = calculator.sum(2, 4)
			assertEquals(6, sum)
		}

		@Test
		fun `should return the result of subtracting the second number from the first number`() {
			val subtract = calculator.subtract(4, 2)
			assertEquals(2, subtract)
		}
	}
}
```

### `WebTestClient` Type Inference Issue in Kotlin

Due to a type inference issue, you must use the Kotlin `expectBody` extension (such as `.expectBody<String>().isEqualTo("toys")`), since it provides a workaround for the Kotlin issue with the Java API. See also the related SPR-16057 issue.

The easiest way to learn how to build a Spring application with Kotlin is to follow the dedicated tutorial.

### `start.spring.io`

The easiest way to start a new Spring Framework project in Kotlin is to create a new Spring Boot 2 project on start.spring.io.

## Choosing the Web Flavor

Spring Framework now comes with two different web stacks: Spring MVC and Spring WebFlux. Spring WebFlux is recommended if you want to create applications that will deal with latency, long-lived connections, or streaming scenarios, or if you want to use the web functional Kotlin DSL. For other use cases, especially if you are using blocking technologies such as JPA, Spring MVC and its annotation-based programming model is the recommended choice.

We recommend the following resources for people learning how to build applications with Kotlin and the Spring Framework:

* Kotlin Slack (with a dedicated #spring channel)

The following Github projects offer examples that you can learn from and possibly even extend:

* spring-boot-kotlin-demo: Regular Spring Boot and Spring Data JPA project
* mixit: Spring Boot 2, WebFlux, and Reactive Spring Data MongoDB
* spring-kotlin-functional: Standalone WebFlux and functional bean definition DSL
* spring-kotlin-fullstack: WebFlux Kotlin fullstack example with Kotlin2js for frontend instead of JavaScript or TypeScript
* spring-petclinic-kotlin: Kotlin version of the Spring PetClinic Sample Application
* spring-kotlin-deepdive: A step-by-step migration guide for Boot 1.0 and Java to Boot 2.0 and Kotlin
* spring-cloud-gcp-kotlin-app-sample: Spring Boot with Google Cloud Platform Integrations

## Issues

The following list categorizes the pending issues related to Spring and Kotlin support:

* Spring Framework: Kotlin properties do not override Java-style getters and setters

Groovy is a powerful, optionally typed, and dynamic language, with static-typing and static compilation capabilities. It offers a concise syntax and integrates smoothly with any existing Java application.

The Spring Framework provides a dedicated `ApplicationContext` that supports a Groovy-based Bean Definition DSL. For more details, see The Groovy Bean Definition DSL. Further support for Groovy, including beans written in Groovy, refreshable script beans, and more, is available in Dynamic Language Support.

Spring provides comprehensive support for using classes and objects that have been defined by using a dynamic language (such as Groovy) with Spring. This support lets you write any number of classes in a supported dynamic language and have the Spring container transparently instantiate, configure, and dependency inject the resulting objects. Spring’s scripting support primarily targets Groovy and BeanShell.
Beyond those specifically supported languages, the JSR-223 scripting mechanism is supported for integration with any JSR-223 capable language provider (as of Spring 4.2), e.g. JRuby. You can find fully working examples of where this dynamic language support can be immediately useful in Scenarios.

## A First Example

The bulk of this chapter is concerned with describing the dynamic language support in detail. Before diving into all of the ins and outs of the dynamic language support, we look at a quick example of a bean defined in a dynamic language. The dynamic language for this first bean is Groovy. (The basis of this example was taken from the Spring test suite. If you want to see equivalent examples in any of the other supported languages, take a look at the source code).

The first ingredient is the `Messenger` interface, which the Groovy bean is going to implement. Note that this interface is defined in plain Java, lives in the `org.springframework.scripting` package, and declares a single method, `String getMessage()`. Dependent objects that are injected with a reference to the `Messenger` do not know that the underlying implementation is a Groovy script.

The following example defines a class that has a dependency on the `Messenger` interface:

```
public class DefaultBookingService implements BookingService {

	private Messenger messenger;

	public void setMessenger(Messenger messenger) {
		this.messenger = messenger;
	}

	public void processBooking() {
		// use the injected Messenger object...
	}
}
```

The following example implements the `Messenger` interface in Groovy:

```
// Import the Messenger interface (written in Java) that is to be implemented
import org.springframework.scripting.Messenger

// Define the implementation in Groovy in file 'Messenger.groovy'
class GroovyMessenger implements Messenger {

	String message
}
```

Finally, the following example shows the bean definitions that effect the injection of the Groovy-defined `Messenger` implementation into an instance of the `DefaultBookingService` class:

```
<!-- this is the bean definition for the Groovy-backed Messenger implementation -->
<lang:groovy id="messenger" script-source="classpath:Messenger.groovy">
	<lang:property name="message" value="I Can Do The Frug" />
</lang:groovy>

<!-- an otherwise normal bean that will be injected by the Groovy-backed Messenger -->
<bean id="bookingService" class="x.y.DefaultBookingService">
	<property name="messenger" ref="messenger" />
</bean>
```

The `bookingService` bean (a `DefaultBookingService`) can now use its private `messenger` member variable as normal, because the `Messenger` instance that was injected into it is a `Messenger` instance. There is nothing special going on here — just plain Java and plain Groovy.

Hopefully, the preceding XML snippet is self-explanatory, but do not worry unduly if it is not. Keep reading for the in-depth detail on the whys and wherefores of the preceding configuration.

## Defining Beans that Are Backed by Dynamic Languages

This section describes exactly how you define Spring-managed beans in any of the supported dynamic languages. Note that this chapter does not attempt to explain the syntax and idioms of the supported dynamic languages. For example, if you want to use Groovy to write certain of the classes in your application, we assume that you already know Groovy. If you need further details about the dynamic languages themselves, see Further Resources at the end of this chapter.

### Common Concepts

The steps involved in using dynamic-language-backed beans are as follows:

* Write the test for the dynamic language source code (naturally).
* Then write the dynamic language source code itself.
* Define your dynamic-language-backed beans by using the appropriate `<lang:language/>` element in the XML configuration (you can define such beans programmatically by using the Spring API, although you will have to consult the source code for directions on how to do this, as this chapter does not cover this type of advanced configuration). Note that this is an iterative step. You need at least one bean definition for each dynamic language source file (although multiple bean definitions can reference the same source file).

The first two steps (testing and writing your dynamic language source files) are beyond the scope of this chapter. See the language specification and reference manual for your chosen dynamic language and crack on with developing your dynamic language source files. You first want to read the rest of this chapter, though, as Spring’s dynamic language support does make some (small) assumptions about the contents of your dynamic language source files.

# The <lang:language/> element

The final step in the list in the preceding section involves defining dynamic-language-backed bean definitions, one for each bean that you want to configure (this is no different from normal JavaBean configuration). However, instead of specifying the fully qualified class name of the class that is to be instantiated and configured by the container, you can use the `<lang:language/>` element to define the dynamic language-backed bean.

Each of the supported languages has a corresponding `<lang:language/>` element:

* `<lang:groovy/>` (Groovy)
* `<lang:bsh/>` (BeanShell)
* `<lang:std/>` (JSR-223, e.g. with JRuby)

The exact attributes and child elements that are available for configuration depend on exactly which language the bean has been defined in (the language-specific sections later in this chapter detail this).

# Refreshable Beans

One of the (and perhaps the single) most compelling value adds of the dynamic language support in Spring is the “refreshable bean” feature. A refreshable bean is a dynamic-language-backed bean. With a small amount of configuration, a dynamic-language-backed bean can monitor changes in its underlying source file resource and then reload itself when the dynamic language source file is changed (for example, when you edit and save changes to the file on the file system).

This lets you deploy any number of dynamic language source files as part of an application, configure the Spring container to create beans backed by dynamic language source files (using the mechanisms described in this chapter), and (later, as requirements change or some other external factor comes into play) edit a dynamic language source file and have any changes you make be reflected in the bean that is backed by the changed dynamic language source file. There is no need to shut down a running application (or redeploy in the case of a web application). The dynamic-language-backed bean so amended picks up the new state and logic from the changed dynamic language source file.

This feature is off by default.

Now we can take a look at an example to see how easy it is to start using refreshable beans. To turn on the refreshable beans feature, you have to specify exactly one additional attribute on the `<lang:language/>` element of your bean definition.
So, if we stick with the example from earlier in this chapter, the following example shows what we would change in the Spring XML configuration to effect refreshable beans:

```
<!-- this bean is now 'refreshable' due to the presence of the 'refresh-check-delay' attribute -->
<!-- the attribute switches refreshing on, with 5 seconds between checks -->
<lang:groovy id="messenger"
		refresh-check-delay="5000"
		script-source="classpath:Messenger.groovy">
	<lang:property name="message" value="I Can Do The Frug" />
</lang:groovy>
```

That really is all you have to do. The `refresh-check-delay` attribute defined on the `messenger` bean definition is the number of milliseconds after which the bean is refreshed with any changes made to the underlying dynamic language source file. You can turn off the refresh behavior by assigning a negative value to the `refresh-check-delay` attribute. Remember that, by default, the refresh behavior is disabled. If you do not want the refresh behavior, do not define the attribute.

If we then run the following application, we can exercise the refreshable feature. (Please excuse the “jumping-through-hoops-to-pause-the-execution” shenanigans in this next slice of code.) The `System.in.read()` call is only there so that the execution of the program pauses while you (the developer in this scenario) go off and edit the underlying dynamic language source file so that the refresh triggers on the dynamic-language-backed bean when the program resumes execution.

The following listing shows this sample application:

```
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.scripting.Messenger;

public final class Boot {

	public static void main(final String[] args) throws Exception {
		ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");
		Messenger messenger = (Messenger) ctx.getBean("messenger");
		System.out.println(messenger.getMessage());
		// pause execution while I go off and make changes to the source file...
		System.in.read();
		System.out.println(messenger.getMessage());
	}
}
```

Assume then, for the purposes of this example, that all calls to the `getMessage()` method of `Messenger` implementations have to be changed such that the message is surrounded by quotation marks. The following listing shows the changes that you (the developer) should make to the `Messenger.groovy` source file when the execution of the program is paused:

```
package org.springframework.scripting

class GroovyMessenger implements Messenger {

	private String message = "Bingo"

	public String getMessage() {
		// change the implementation to surround the message in quotes
		return "'" + this.message + "'"
	}

	public void setMessage(String message) {
		this.message = message
	}
}
```

When the program runs, the output before the input pause will be `I Can Do The Frug`. After the change to the source file is made and saved and the program resumes execution, the result of calling the `getMessage()` method on the dynamic-language-backed `Messenger` implementation is `'I Can Do The Frug'` (notice the inclusion of the additional quotation marks).

Changes to a script do not trigger a refresh if the changes occur within the window of the `refresh-check-delay` value. Changes to the script are not actually picked up until a method is called on the dynamic-language-backed bean. It is only when a method is called on a dynamic-language-backed bean that it checks to see if its underlying script source has changed. Any exceptions that relate to refreshing the script (such as encountering a compilation error or finding that the script file has been deleted) result in a fatal exception being propagated to the calling code.
The refreshable bean behavior described earlier does not apply to dynamic language source files defined with the `<lang:inline-script/>` element notation (see Inline Dynamic Language Source Files). Additionally, it applies only to beans where changes to the underlying source file can actually be detected (for example, by code that checks the last modified date of a dynamic language source file that exists on the file system).

# Inline Dynamic Language Source Files

The dynamic language support can also cater to dynamic language source files that are embedded directly in Spring bean definitions. More specifically, the `<lang:inline-script/>` element lets you define dynamic language source immediately inside a Spring configuration file. An example might clarify how the inline script feature works:

```
<lang:groovy id="messenger">
	<lang:inline-script>

		package org.springframework.scripting.groovy

		class GroovyMessenger implements Messenger {
			String message
		}

	</lang:inline-script>
	<lang:property name="message" value="I Can Do The Frug" />
</lang:groovy>
```

If we put to one side the issues surrounding whether it is good practice to define dynamic language source inside a Spring configuration file, the `<lang:inline-script/>` element can be useful in some scenarios. For instance, we might want to quickly add a Spring `Validator` implementation to a Spring MVC `Controller`. This is but a moment’s work using inline source. (See Scripted Validators for such an example.)

# Understanding Constructor Injection in the Context of Dynamic-language-backed Beans

There is one very important thing to be aware of with regard to Spring’s dynamic language support. Namely, you can not (currently) supply constructor arguments to dynamic-language-backed beans (and, hence, constructor-injection is not available for dynamic-language-backed beans). In the interests of making this special handling of constructors and properties 100% clear, the following mixture of code and configuration does not work:

```
// from the file 'Messenger.groovy'
class GroovyMessenger implements Messenger {

	GroovyMessenger() {}

	// this constructor is not available for Constructor Injection
	GroovyMessenger(String message) {
		this.message = message;
	}

	String message

	String anotherMessage
}
```

```
<lang:groovy id="badMessenger" script-source="classpath:Messenger.groovy">
	<!-- this next constructor argument will not be injected into the GroovyMessenger -->
	<!-- in fact, this isn't even allowed according to the schema -->
	<constructor-arg value="This will not work" />

	<!-- only property values are injected into the dynamic-language-backed object -->
	<lang:property name="anotherMessage" value="Passed straight through to the dynamic-language-backed object" />
</lang:groovy>
```

In practice this limitation is not as significant as it first appears, since setter injection is the injection style favored by the overwhelming majority of developers (we leave the discussion as to whether that is a good thing to another day).

### Groovy Beans

This section describes how to use beans defined in Groovy in Spring.

The Groovy homepage includes the following description: “Groovy is an agile dynamic language for the Java 2 Platform that has many of the features that people like so much in languages like Python, Ruby and Smalltalk, making them available to Java developers using a Java-like syntax.”

If you have read this chapter straight from the top, you have already seen an example of a Groovy-dynamic-language-backed bean.
Now consider another example (again using an example from the Spring test suite):

```
public interface Calculator {

	int add(int x, int y);
}
```

The following example implements the `Calculator` interface in Groovy:

```
// from the file 'calculator.groovy'
class GroovyCalculator implements Calculator {

	int add(int x, int y) {
		x + y
	}
}
```

The following bean definition uses the calculator defined in Groovy:

```
<!-- from the file 'beans.xml' -->
<beans>
	<lang:groovy id="calculator" script-source="classpath:calculator.groovy"/>
</beans>
```

Finally, the following small application exercises the preceding configuration:

```
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {

	public static void main(String[] args) {
		ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");
		Calculator calc = ctx.getBean("calculator", Calculator.class);
		System.out.println(calc.add(2, 8));
	}
}
```

The resulting output from running the above program is (unsurprisingly) `10`. (For more interesting examples, see the dynamic language showcase project for a more complex example, or see the examples in Scenarios later in this chapter).

You must not define more than one class per Groovy source file. While this is perfectly legal in Groovy, it is (arguably) a bad practice. In the interests of a consistent approach, you should (in the opinion of the Spring team) respect the standard Java conventions of one (public) class per source file.

# Customizing Groovy Objects by Using a Callback

The `GroovyObjectCustomizer` interface is a callback that lets you hook additional creation logic into the process of creating a Groovy-backed bean. For example, implementations of this interface could invoke any required initialization methods, set some default property values, or specify a custom `MetaClass`. The following listing shows the interface definition:

```
public interface GroovyObjectCustomizer {

	void customize(GroovyObject goo);
}
```

The Spring Framework instantiates an instance of your Groovy-backed bean and then passes the created `GroovyObject` to the specified `GroovyObjectCustomizer` (if one has been defined). You can do whatever you like with the supplied `GroovyObject` reference. We expect that most people want to set a custom `MetaClass` with this callback, and the following example shows how to do so:

```
public final class SimpleMethodTracingCustomizer implements GroovyObjectCustomizer {

	public void customize(GroovyObject goo) {
		DelegatingMetaClass metaClass = new DelegatingMetaClass(goo.getMetaClass()) {

			public Object invokeMethod(Object object, String methodName, Object[] arguments) {
				System.out.println("Invoking '" + methodName + "'.");
				return super.invokeMethod(object, methodName, arguments);
			}
		};
		metaClass.initialize();
		goo.setMetaClass(metaClass);
	}
}
```

A full discussion of meta-programming in Groovy is beyond the scope of the Spring reference manual. See the relevant section of the Groovy reference manual or do a search online. Plenty of articles address this topic. Actually, making use of a `GroovyObjectCustomizer` is easy if you use the Spring namespace support, as the following example shows:

```
<!-- define the GroovyObjectCustomizer just like any other bean -->
<bean id="tracingCustomizer" class="example.SimpleMethodTracingCustomizer"/>

<!-- ...
and plug it into the desired Groovy bean via the 'customizer-ref' attribute -->
<lang:groovy id="calculator"
        script-source="classpath:org/springframework/scripting/groovy/Calculator.groovy"
        customizer-ref="tracingCustomizer"/>
```

If you do not use the Spring namespace support, you can still use the `GroovyObjectCustomizer` functionality, as the following example shows:

```
<bean id="calculator" class="org.springframework.scripting.groovy.GroovyScriptFactory">
    <constructor-arg value="classpath:org/springframework/scripting/groovy/Calculator.groovy"/>
    <!-- define the GroovyObjectCustomizer (as an inner bean) -->
    <constructor-arg>
        <bean id="tracingCustomizer" class="example.SimpleMethodTracingCustomizer"/>
    </constructor-arg>
</bean>

<bean class="org.springframework.scripting.support.ScriptFactoryPostProcessor"/>
```

You may also specify a Groovy `CompilationCustomizer` (such as an `ImportCustomizer`) or even a complete Groovy `CompilerConfiguration` object in the same place as Spring's `GroovyObjectCustomizer`.

### BeanShell Beans

This section describes how to use BeanShell beans in Spring. The BeanShell homepage includes the following description:

> BeanShell is a small, free, embeddable Java source interpreter with dynamic language features, written in Java. BeanShell dynamically runs standard Java syntax and extends it with common scripting conveniences such as loose types, commands, and method closures like those in Perl and JavaScript.

In contrast to Groovy, BeanShell-backed bean definitions require some (small) additional configuration. The implementation of the BeanShell dynamic language support in Spring is interesting, because Spring creates a JDK dynamic proxy that implements all of the interfaces that are specified in the `script-interfaces` attribute value of the `<lang:bsh>` element (this is why you must supply at least one interface in the value of the attribute and, consequently, program to interfaces when you use BeanShell-backed beans). This means that every method call on a BeanShell-backed object goes through the JDK dynamic proxy invocation mechanism. Now we can show a fully working example of using a BeanShell-based bean that implements the `Messenger` interface that was defined earlier in this chapter. We again show the definition of the `Messenger` interface:

```
package org.springframework.scripting;

public interface Messenger {

    String getMessage();
}
```

The following example shows the BeanShell “implementation” (we use the term loosely here) of the `Messenger` interface:

```
String message;

String getMessage() {
    return message;
}

void setMessage(String aMessage) {
    message = aMessage;
}
```

The following example shows the Spring XML that defines an “instance” of the above “class” (again, we use these terms very loosely here):

```
<lang:bsh id="messageService" script-source="classpath:BshMessenger.bsh"
        script-interfaces="org.springframework.scripting.Messenger">
    <lang:property name="message" value="Hello World!" />
</lang:bsh>
```

See Scenarios for some scenarios where you might want to use BeanShell-based beans.

## Scenarios

The possible scenarios where defining Spring managed beans in a scripting language would be beneficial are many and varied. This section describes two possible use cases for the dynamic language support in Spring.

### Scripted Spring MVC Controllers

One group of classes that can benefit from using dynamic-language-backed beans is that of Spring MVC controllers. In pure Spring MVC applications, the navigational flow through a web application is, to a large extent, determined by code encapsulated within your Spring MVC controllers.
As the navigational flow and other presentation layer logic of a web application needs to be updated to respond to support issues or changing business requirements, it may well be easier to effect any such required changes by editing one or more dynamic language source files and seeing those changes immediately reflected in the state of a running application. Remember that, in the lightweight architectural model espoused by projects such as Spring, you typically aim to have a really thin presentation layer, with all the meaty business logic of an application contained in the domain and service layer classes. Developing Spring MVC controllers as dynamic-language-backed beans lets you change presentation layer logic by editing and saving text files. Any changes to such dynamic language source files are (depending on the configuration) automatically reflected in the beans that are backed by dynamic language source files. To effect this automatic “pickup” of any changes to dynamic-language-backed beans, you have to enable the “refreshable beans” functionality. See Refreshable Beans for a full treatment of this feature.

The following example shows an `org.springframework.web.servlet.mvc.Controller` implemented by using the Groovy dynamic language:

```
package org.springframework.showcase.fortune.web

import org.springframework.showcase.fortune.service.FortuneService
import org.springframework.showcase.fortune.domain.Fortune
import org.springframework.web.servlet.ModelAndView
import org.springframework.web.servlet.mvc.Controller

import jakarta.servlet.http.HttpServletRequest
import jakarta.servlet.http.HttpServletResponse

// from the file '/WEB-INF/groovy/FortuneController.groovy'
class FortuneController implements Controller {

    @Property FortuneService fortuneService

    ModelAndView handleRequest(HttpServletRequest request,
            HttpServletResponse httpServletResponse) {
        return new ModelAndView("tell", "fortune", this.fortuneService.tellFortune())
    }
}
```

The following bean definition exposes this controller as a refreshable bean:

```
<lang:groovy id="fortune" refresh-check-delay="3000"
        script-source="/WEB-INF/groovy/FortuneController.groovy">
    <lang:property name="fortuneService" ref="fortuneService"/>
</lang:groovy>
```

### Scripted Validators

Another area of application development with Spring that may benefit from the flexibility afforded by dynamic-language-backed beans is that of validation. It can be easier to express complex validation logic by using a loosely typed dynamic language (that may also have support for inline regular expressions) than with regular Java. Again, developing validators as dynamic-language-backed beans lets you change validation logic by editing and saving a simple text file. Any such changes are (depending on the configuration) automatically reflected in the execution of a running application and do not require the restart of an application. To effect the automatic “pickup” of any changes to dynamic-language-backed beans, you have to enable the 'refreshable beans' feature. See Refreshable Beans for a full and detailed treatment of this feature.
The following example shows a Spring `org.springframework.validation.Validator` implemented by using the Groovy dynamic language (see Validation using Spring’s Validator interface for a discussion of the `Validator` interface):

```
import org.springframework.validation.Validator
import org.springframework.validation.Errors
import org.springframework.beans.TestBean

class TestBeanValidator implements Validator {

    boolean supports(Class clazz) {
        return TestBean.class.isAssignableFrom(clazz)
    }

    void validate(Object bean, Errors errors) {
        if (bean.name?.trim()?.size() > 0) {
            return
        }
        errors.reject("whitespace", "Cannot be composed wholly of whitespace.")
    }
}
```

## Additional Details

This last section contains some additional details related to the dynamic language support.

### AOP — Advising Scripted Beans

You can use the Spring AOP framework to advise scripted beans. The Spring AOP framework is actually unaware that a bean being advised might be a scripted bean, so all of the AOP use cases and functionality that you use (or aim to use) work with scripted beans. When you advise scripted beans, you cannot use class-based proxies. You must use interface-based proxies. You are not limited to advising scripted beans. You can also write aspects themselves in a supported dynamic language and use such beans to advise other Spring beans. This really would be an advanced use of the dynamic language support, though.

### Scoping

In case it is not immediately obvious, scripted beans can be scoped in the same way as any other bean. The `scope` attribute on the various `<lang:language/>` elements lets you control the scope of the underlying scripted bean, as it does with a regular bean. (The default scope is singleton, as it is with “regular” beans.) The following example uses the `scope` attribute to define a Groovy bean scoped as a prototype:

```
<lang:groovy id="messenger" script-source="classpath:Messenger.groovy" scope="prototype">
    <lang:property name="message" value="I Can Do The RoboCop" />
</lang:groovy>
```

See Bean Scopes in The IoC Container for a full discussion of the scoping support in the Spring Framework.

## The `lang` XML schema

The `lang` elements in Spring XML configuration deal with exposing objects that have been written in a dynamic language (such as Groovy or BeanShell) as beans in the Spring container. These elements (and the dynamic language support) are comprehensively covered in Dynamic Language Support. See that section for full details on this support and the `lang` elements. To use the elements in the `lang` schema, you need to have the following preamble at the top of your Spring XML configuration file. The text in the following snippet references the correct schema so that the tags in the `lang` namespace are available to you:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:lang="http://www.springframework.org/schema/lang"
        xsi:schemaLocation="
            http://www.springframework.org/schema/beans https://www.springframework.org/schema/beans/spring-beans.xsd
            http://www.springframework.org/schema/lang https://www.springframework.org/schema/lang/spring-lang.xsd">

    <!-- bean definitions here -->

</beans>
```

This part of the reference documentation covers topics that apply to multiple modules within the core Spring Framework.

## Spring Properties

`SpringProperties` is a static holder for properties that control certain low-level aspects of the Spring Framework. Users can configure these properties via JVM system properties or programmatically via the `SpringProperties.setProperty(String key, String value)` method. The latter may be necessary if the deployment environment disallows custom JVM system properties. As an alternative, these properties may be configured in a `spring.properties` file in the root of the classpath (for example, deployed within the application's JAR file). The following table lists all currently supported Spring properties.
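A minimal sketch of the programmatic route described above. The property name `spring.beaninfo.ignore` is one documented Spring property, used here purely as an illustration:

```
import org.springframework.core.SpringProperties;

public class SpringPropertyBootstrap {

    public static void main(String[] args) {
        // Equivalent to launching the JVM with -Dspring.beaninfo.ignore=true;
        // useful when the deployment environment disallows custom JVM system properties.
        SpringProperties.setProperty("spring.beaninfo.ignore", "true");
    }
}
```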
ggmotif
cran
R
Package ‘ggmotif’ October 13, 2022

Type Package
Title Extract and Visualize Motif Information from MEME Software
Version 0.2.1
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Extract and visualize motif information from the XML file produced by the MEME software. In biology, a motif is a nucleotide or amino acid sequence pattern that is widespread and usually assumed to be related to specific biological functions. Many software tools can discover motif sequences in a set of nucleotide or amino acid sequences; MEME is among the most widely used. It is difficult for biologists to extract and visualize the location of a motif on sequences from the results produced by MEME.
License Artistic-2.0
Encoding UTF-8
Depends R (>= 3.5.0)
Imports tidyverse, dplyr, XML, magrittr, ggplot2, stringr, ggtree, ape, patchwork, ggseqlogo, memes, universalmotif, treeio, cowplot, ggsci, data.table
RoxygenNote 7.2.1
Suggests knitr, rmarkdown
VignetteBuilder knitr
NeedsCompilation no
Repository CRAN
Date/Publication 2022-08-11 10:30:05 UTC

R topics documented:
getMotifFromMEME
motifLocation

getMotifFromMEME    Extract and Visualize Motif Information from MEME Software

Description
getMotifFromMEME extracts motif information from the MEME software results.

Arguments
data    A txt file from the MEME software.
format  The result format from MEME, txt or xml.

Value
Returns a data frame.

Author(s)
<NAME> <<EMAIL>>

Examples
filepath <- system.file("examples", "meme.txt", package = "ggmotif")
motif.info <- getMotifFromMEME(data = filepath, format = "txt")

filepath <- system.file("examples", "meme.xml", package = "ggmotif")
motif.info <- getMotifFromMEME(data = filepath, format = "xml")

motifLocation    Extract and Visualize Motif Information from MEME Software

Description
motifLocation visualizes the locations of motifs on a specific set of sequences.

Arguments
data       A data frame from the getMotifFromMEME function.
tree.path  A file path to the corresponding phylogenetic tree. The IDs of the phylogenetic tree must be the same as the IDs of the sequences used to identify motifs with MEME.

Value
Returns a plot.

Author(s)
<NAME> <<EMAIL>>

Examples
# without a phylogenetic tree
filepath <- system.file("examples", "meme.xml", package = "ggmotif")
motif_extract <- getMotifFromMEME(data = filepath, format = "xml")
motif_plot <- motifLocation(data = motif_extract)

# with a phylogenetic tree
filepath <- system.file("examples", "meme.xml", package = "ggmotif")
treepath <- system.file("examples", "ara.nwk", package = "ggmotif")
motif_extract <- getMotifFromMEME(data = filepath, format = "xml")
motif_plot <- motifLocation(data = motif_extract, tree.path = treepath)
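For orientation, the following is a minimal end-to-end sketch assembled from the examples above. It assumes the bundled example files and that the plot object returned by motifLocation can be written out with ggplot2::ggsave (ggplot2 is among the package imports); adjust to your own MEME output as needed:

library(ggmotif)
library(ggplot2)

# Locate the example MEME output and the matching phylogenetic tree shipped with the package
xml_path  <- system.file("examples", "meme.xml", package = "ggmotif")
tree_path <- system.file("examples", "ara.nwk", package = "ggmotif")

# Extract motif positions, draw them alongside the phylogeny, and save the figure
motifs <- getMotifFromMEME(data = xml_path, format = "xml")
p      <- motifLocation(data = motifs, tree.path = tree_path)
ggsave("motif_location.pdf", plot = p, width = 10, height = 8)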
raqs
cran
R
Package ‘raqs’ September 24, 2023

Title Interface to the US EPA Air Quality System (AQS) API
Version 1.0.1
Description Offers functions for fetching JSON data from the US EPA Air Quality System (AQS) API with options to comply with the API rate limits. See <https://aqs.epa.gov/aqsweb/documents/data_api.html> for details of the AQS API.
License MIT + file LICENSE
Encoding UTF-8
RoxygenNote 7.2.3
Imports cli, httr2
Depends R (>= 4.0)
URL https://github.com/HimesGroup/raqs
BugReports https://github.com/HimesGroup/raqs/issues
Suggests data.table, tibble
NeedsCompilation no
Author <NAME> [aut, cre], <NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-09-24 02:50:02 UTC

R topics documented:
raqs-package, aqs_annualdata, aqs_dailydata, aqs_list, aqs_metadata, aqs_monitors, aqs_qaannualperformanceevaluations, aqs_qablanks, aqs_qacollocatedassessments, aqs_qaflowrateaudits, aqs_qaflowrateverifications, aqs_qaonepointqcrawdata, aqs_qapepaudits, aqs_quarterlydata, aqs_sampledata, aqs_signup, aqs_transactionsqaannualperformanceevaluations, aqs_transactionssample, raqs_options, set_aqs_user

raqs-package    raqs: Interface to the US EPA Air Quality System (AQS) API

Description
Offers functions for fetching JSON data from the US EPA Air Quality System (AQS) API with options to comply with the API rate limits. See https://aqs.epa.gov/aqsweb/documents/data_api.html for details of the AQS API.

Details
The ’raqs’ package provides an R interface to the US EPA AQS API, which publishes data in JSON format. To use this package, you first need to register for the AQS API and get your API key. Please check aqs_signup and set_aqs_user to set up your API credentials in R. All main functions for fetching data from the AQS API are named with the following scheme: aqs_{service}

• aqs_metadata returns information about the API.
• aqs_list returns variable values you may need to create other service requests.
• aqs_monitors returns operational information about the monitors used to collect data.
• aqs_sampledata returns sample data, the finest grain data reported to EPA.
• aqs_dailydata returns data summarized at the daily level.
• aqs_quarterlydata returns data summarized at the calendar quarter level.
• aqs_annualdata returns data summarized at the yearly level.
• aqs_qaannualperformanceevaluations returns pairs of data (known and measured values) at several concentration levels for gaseous criteria pollutants.
• aqs_qablanks returns concentrations from blank samples.
• aqs_qacollocatedassessments returns pairs of PM samples collected at the same time and place by different samplers.
• aqs_qaflowrateverifications returns flow rate checks performed by monitoring agencies.
• aqs_qaflowrateaudits returns flow rate audit data.
• aqs_qaonepointqcrawdata returns measured versus actual concentration of one point QC checks.
• aqs_qapepaudits returns data related to PM2.5 monitoring system audits.
• aqs_transactionssample returns sample data in the transaction format for AQS.
• aqs_transactionsqaannualperformanceevaluations returns pairs of QA data at several concentration levels in the transaction format for AQS.

Each main function has a set of underlying functions that are responsible for sending requests to specific endpoints (service/filter) and are named with the following scheme: {service}_{filter}. Please refer to the manual to see how the aforementioned functions work.

Author(s)
Maintainer: <NAME> <<EMAIL>>
Authors:
• <NAME>

See Also
Useful links:
• https://github.com/HimesGroup/raqs
• Report bugs at https://github.com/HimesGroup/raqs/issues

aqs_annualdata    AQS API Annual Summary Data service

Description
A collection of functions to fetch data summarized at the yearly level. Note that only the year portions of the bdate and edate are used and only whole years of data are returned.

Usage
aqs_annualdata(
  aqs_filter = c("bySite", "byCounty", "byState", "byBox", "byCBSA"),
  aqs_variables = NULL,
  header = FALSE,
  ...
)

annualdata_bysite(
  param, bdate, edate, state, county, site,
  email = get_aqs_email(), key = get_aqs_key(),
  duration = NULL, cbdate = NULL, cedate = NULL,
  header = FALSE, ...
)

annualdata_bycounty(
  param, bdate, edate, state, county,
  email = get_aqs_email(), key = get_aqs_key(),
  duration = NULL, cbdate = NULL, cedate = NULL,
  header = FALSE, ...
)

annualdata_bystate(
  param, bdate, edate, state,
  email = get_aqs_email(), key = get_aqs_key(),
  duration = NULL, cbdate = NULL, cedate = NULL,
  header = FALSE, ...
)

annualdata_bybox(
  param, bdate, edate, minlat, maxlat, minlon, maxlon,
  email = get_aqs_email(), key = get_aqs_key(),
  duration = NULL, cbdate = NULL, cedate = NULL,
  header = FALSE, ...
)

annualdata_bycbsa(
  param, bdate, edate, cbsa,
  email = get_aqs_email(), key = get_aqs_key(),
  duration = NULL, cbdate = NULL, cedate = NULL,
  header = FALSE, ...
)

Arguments
aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request. A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only the year portion is used.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only the year portion is used. If the end date is not in the same year as the begin date, the function will automatically split the date range into multiple chunks by year and send requests sequentially.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014").
A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
duration  (optional) A string specifying the 1-character AQS sample duration code. A list of the duration codes can be obtained via list_durations. Only data reported at this sample duration will be returned.
cbdate  (optional) A string specifying the change begin date in YYYYMMDD format to subset data based on "date of last change" in database. Only data that changed on or after this date will be returned.
cedate  (optional) A string specifying the change end date in YYYYMMDD format to subset data based on "date of last change" in database. Only data that changed on or before this date will be returned.
minlat  A string or numeric value specifying the minimum latitude of a geographic box. Decimal latitude with north being positive.
maxlat  A string or numeric value specifying the maximum latitude of a geographic box. Decimal latitude with north being positive.
minlon  A string or numeric value specifying the minimum longitude of a geographic box. Decimal longitude with east being positive.
maxlon  A string or numeric value specifying the maximum longitude of a geographic box. Decimal longitude with east being positive.
cbsa  A string specifying the AQS CBSA code. A list of the CBSA codes can be obtained via list_cbsas.

Details
aqs_annualdata sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• annualdata_bysite returns annual summary param data for site in county, within state, based on the year portions of bdate and edate.
• annualdata_bycounty returns annual summary param data for county in state based on the year portions of bdate and edate.
• annualdata_bystate returns annual summary param data for state based on the year portions of bdate and edate.
• annualdata_bybox returns annual summary param data for a user-provided latitude/longitude bounding box (minlat, maxlat, minlon, maxlon) based on the year portions of bdate and edate.
• annualdata_bycbsa returns annual summary param data for a user-provided CBSA based on the year portions of bdate and edate.

Value
A data.frame containing parsed data or a named list containing header and data.

Examples
## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## FRM/FEM PM2.5 data for Wake County, NC for 2016
## Only the year portions of bdate and edate are used
aqs_variables <- list(
  param = c("88101", "88502"), bdate = "20160101", edate = "20160228",
  state = "37", county = "183"
)
aqs_annualdata(aqs_filter = "byCounty", aqs_variables = aqs_variables)

## Equivalent to above; used integers instead of strings
annualdata_bycounty(
  param = c(88101, 88502), bdate = "20160101", edate = "20160228",
  state = 37, county = 183
)

## End(Not run)

aqs_dailydata    AQS API Daily Summary Data service

Description
A collection of functions to fetch data summarized at the daily level. Please use a narrow range of dates to adhere to the API’s limit imposed on request size.

Usage
aqs_dailydata(
  aqs_filter = c("bySite", "byCounty", "byState", "byBox", "byCBSA"),
  aqs_variables = NULL,
  header = FALSE,
  ...
)

dailydata_bysite(
  param, bdate, edate, state, county, site,
  email = get_aqs_email(), key = get_aqs_key(),
  duration = NULL, cbdate = NULL, cedate = NULL,
  header = FALSE, ...
)

dailydata_bycounty(
  param, bdate, edate, state, county,
  email = get_aqs_email(), key = get_aqs_key(),
  duration = NULL, cbdate = NULL, cedate = NULL,
  header = FALSE, ...
)

dailydata_bystate(
  param, bdate, edate, state,
  email = get_aqs_email(), key = get_aqs_key(),
  duration = NULL, cbdate = NULL, cedate = NULL,
  header = FALSE, ...
)

dailydata_bybox(
  param, bdate, edate, minlat, maxlat, minlon, maxlon,
  email = get_aqs_email(), key = get_aqs_key(),
  duration = NULL, cbdate = NULL, cedate = NULL,
  header = FALSE, ...
)

dailydata_bycbsa(
  param, bdate, edate, cbsa,
  email = get_aqs_email(), key = get_aqs_key(),
  duration = NULL, cbdate = NULL, cedate = NULL,
  header = FALSE, ...
)

Arguments
aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request. A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only data on or after this date will be returned.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only data on or before this date will be returned. If the end date is not in the same year as the begin date, the function will automatically split the date range into multiple chunks by year and send requests sequentially.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014"). A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
duration  (optional) A string specifying the 1-character AQS sample duration code. A list of the duration codes can be obtained via list_durations. Only data reported at this sample duration will be returned.
cbdate  (optional) A string specifying the change begin date in YYYYMMDD format to subset data based on "date of last change" in database. Only data that changed on or after this date will be returned.
cedate  (optional) A string specifying the change end date in YYYYMMDD format to subset data based on "date of last change" in database.
Only data that changed on or before this date will be returned.
minlat  A string or numeric value specifying the minimum latitude of a geographic box. Decimal latitude with north being positive.
maxlat  A string or numeric value specifying the maximum latitude of a geographic box. Decimal latitude with north being positive.
minlon  A string or numeric value specifying the minimum longitude of a geographic box. Decimal longitude with east being positive.
maxlon  A string or numeric value specifying the maximum longitude of a geographic box. Decimal longitude with east being positive.
cbsa  A string specifying the AQS CBSA code. A list of the CBSA codes can be obtained via list_cbsas.

Details
aqs_dailydata sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• dailydata_bysite returns daily summary param data for site in county, within state, between bdate and edate.
• dailydata_bycounty returns daily summary param data for county in state between bdate and edate.
• dailydata_bystate returns daily summary param data for state between bdate and edate.
• dailydata_bybox returns daily summary param data for a user-provided latitude/longitude bounding box (minlat, maxlat, minlon, maxlon) between bdate and edate.
• dailydata_bycbsa returns daily summary param data for a user-provided CBSA between bdate and edate.

Value
A data.frame containing parsed data or a named list containing header and data.

Examples
## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## FRM/FEM PM2.5 data for Wake County, NC between Jan and Feb 2016
aqs_variables <- list(
  param = "88101", bdate = "20160101", edate = "20160228",
  state = "37", county = "183"
)
aqs_dailydata(aqs_filter = "byCounty", aqs_variables = aqs_variables)

## Equivalent to above; used integers instead of strings
dailydata_bycounty(
  param = 88101, bdate = "20160101", edate = "20160228",
  state = 37, county = 183
)

## End(Not run)

aqs_list    AQS API List service

Description
A collection of functions to fetch variable values you need to create other service requests. All outputs are a value and the definition of that value.

Usage
aqs_list(
  aqs_filter = c("states", "countiesByState", "sitesByCounty", "cbsas", "classes",
    "parametersByClass", "pqaos", "mas", "durations"),
  aqs_variables = NULL,
  header = FALSE,
  ...
)

list_states(email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)

list_countiesbystate(
  state,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

list_sitesbycounty(
  state, county,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

list_cbsas(email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)

list_classes(email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)

list_parametersbyclass(
  pc,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

list_pqaos(email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)

list_mas(email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)

list_durations(email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)

Arguments
aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header.
Default is FALSE to return data only.
...  Reserved for future use.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
pc  A string specifying the AQS parameter class name. A list of the class names can be obtained via list_classes.

Details
aqs_list sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• list_states returns a list of the states and their FIPS codes.
• list_countiesbystate returns a list of all counties within a user-provided state.
• list_sitesbycounty returns a list of all sites within a user-provided county.
• list_cbsas returns a list of the 5-digit Core Based Statistical Area (CBSA) codes.
• list_classes returns a list of parameter class codes.
• list_parametersbyclass returns all parameters in a user-provided parameter class.
• list_pqaos returns a list of AQS Primary Quality Assurance Organization (PQAO) codes.
• list_mas returns a list of AQS Monitoring Agency (MA) codes.
• list_durations returns a list of the 1-character AQS sample duration codes.

Value
A data.frame containing parsed data or a named list containing header and data.

Examples
## Not run:
## Set your API Key first using set_aqs_user to run the following codes
aqs_list(aqs_filter = "states")
list_states() # equivalent to above

aqs_list("countiesByState", aqs_variables = list(state = "01"))
list_countiesbystate(state = "01")

aqs_list("sitesByCounty", aqs_variables = list(state = "37", county = "183"))
list_sitesbycounty(state = "37", county = "183")

aqs_list("cbsas")
list_cbsas()

aqs_list("classes")
list_classes()

aqs_list("parametersByClass", list(pc = "CRITERIA")) # Criteria pollutants
list_parametersbyclass(pc = "CRITERIA")

aqs_list("pqaos")
list_pqaos()

aqs_list("mas")
list_mas()

aqs_list("durations")
list_durations()

## End(Not run)

aqs_metadata    AQS API Meta Data service

Description
A collection of functions to fetch information about the AQS API. The main purpose of this service is to let you know the system is up before you run a long job.

Usage
aqs_metadata(
  aqs_filter = c("isAvailable", "revisionHistory", "fieldsByService", "issues"),
  aqs_variables = NULL,
  header = FALSE,
  ...
)

metadata_isavailable(...)

metadata_revisionhistory(
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

metadata_fieldsbyservice(
  service,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

metadata_issues(
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

Arguments
aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header.
Default is FALSE to return data only.
...  Reserved for future use.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
service  A string specifying one of the services available (e.g., sampleData).

Details
aqs_metadata sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• metadata_isavailable checks if the API is up and running.
• metadata_revisionhistory returns a complete list of revisions to the API in reverse chronological order.
• metadata_fieldsbyservice returns a list and definitions of fields in a user-provided service.
• metadata_issues returns a list of any known issues with system functionality or the data.

Value
A data.frame containing parsed data or a named list containing header and data.

Examples
## Not run:
## Set your API Key first using set_aqs_user to run the following codes
aqs_metadata(aqs_filter = "isAvailable")
metadata_isavailable() # equivalent to above

aqs_metadata("revisionHistory")
metadata_revisionhistory()

aqs_metadata("fieldsByService", aqs_variables = list(service = "annualData"))
metadata_fieldsbyservice(service = "annualData")

aqs_metadata("issues")
metadata_issues()

## End(Not run)

aqs_monitors    AQS API Monitors service

Description
A collection of functions to fetch operational information about the samplers (monitors) used to collect data, including identifying information, operational dates, operating organizations, and so on.

Usage
aqs_monitors(
  aqs_filter = c("bySite", "byCounty", "byState", "byBox", "byCBSA"),
  aqs_variables = NULL,
  header = FALSE,
  ...
)

monitors_bysite(
  param, bdate, edate, state, county, site,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

monitors_bycounty(
  param, bdate, edate, state, county,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

monitors_bystate(
  param, bdate, edate, state,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

monitors_bybox(
  param, bdate, edate, minlat, maxlat, minlon, maxlon,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

monitors_bycbsa(
  param, bdate, edate, cbsa,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

Arguments
aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request. A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only data on or after this date will be returned.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only data on or before this date will be returned.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01").
A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014"). A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
minlat  A string or numeric value specifying the minimum latitude of a geographic box. Decimal latitude with north being positive.
maxlat  A string or numeric value specifying the maximum latitude of a geographic box. Decimal latitude with north being positive.
minlon  A string or numeric value specifying the minimum longitude of a geographic box. Decimal longitude with east being positive.
maxlon  A string or numeric value specifying the maximum longitude of a geographic box. Decimal longitude with east being positive.
cbsa  A string specifying the AQS CBSA code. A list of the CBSA codes can be obtained via list_cbsas.

Details
aqs_monitors sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• monitors_bysite returns param monitors that were operating at site in county, within state, between bdate and edate.
• monitors_bycounty returns param monitors that were operating in county within state between bdate and edate.
• monitors_bystate returns param monitors that were operating in state between bdate and edate.
• monitors_bybox returns param monitors that were operating at a user-provided latitude/longitude bounding box (minlat, maxlat, minlon, maxlon) between bdate and edate.
• monitors_bycbsa returns param monitors that were operating at a user-provided CBSA between bdate and edate.

Value
A data.frame containing parsed data or a named list containing header and data.

Examples
## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## SO2 monitors in Hawaii that were operating on May 01, 2015
aqs_variables <- list(
  param = "42401", bdate = "20150501", edate = "20150502", state = "15"
)
aqs_monitors(aqs_filter = "byState", aqs_variables = aqs_variables)

## Equivalent to above; used integers instead of strings
monitors_bystate(
  param = 42401, bdate = "20150501", edate = "20150502", state = 15
)

## End(Not run)

aqs_qaannualperformanceevaluations    AQS API QA Annual Performance Evaluations service

Description
A collection of functions to fetch pairs of data (known and measured values) at several concentration levels for gaseous criteria pollutants.

Usage
aqs_qaannualperformanceevaluations(
  aqs_filter = c("bySite", "byCounty", "byState", "byPQAO", "byMA"),
  aqs_variables = NULL,
  header = FALSE,
  ...
)

qaannualperformanceevaluations_bysite(
  param, bdate, edate, state, county, site,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qaannualperformanceevaluations_bycounty(
  param, bdate, edate, state, county,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qaannualperformanceevaluations_bystate(
  param, bdate, edate, state,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qaannualperformanceevaluations_bypqao(
  param, bdate, edate, pqao,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qaannualperformanceevaluations_byma(
  param, bdate, edate, agency,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

Arguments
aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request. A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only data on or after this date will be returned.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only data on or before this date will be returned. If the end date is not in the same year as the begin date, the function will automatically split the date range into multiple chunks by year and send requests sequentially.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014"). A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
pqao  A string specifying the AQS Primary Quality Assurance Organization (PQAO) code. A list of the PQAO codes can be obtained via list_pqaos.
agency  A string specifying the AQS Monitoring Agency (MA) code. A list of the MA codes can be obtained via list_mas. Here, we named this input as agency instead of "ma" because agency is actually used in the API endpoint URL.

Details
aqs_qaannualperformanceevaluations sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• qaannualperformanceevaluations_bysite returns annual performance evaluation data for param at site in county, within state, between bdate and edate.
• qaannualperformanceevaluations_bycounty returns annual performance evaluation data for param in county within state between bdate and edate.
• qaannualperformanceevaluations_bystate returns annual performance evaluation data for param in state between bdate and edate.
• qaannualperformanceevaluations_bypqao returns annual performance evaluation data for param in pqao between bdate and edate.
• qaannualperformanceevaluations_byma returns annual performance evaluation data for param in agency (monitoring agency) between bdate and edate.

Value
A data.frame containing parsed data or a named list containing header and data.

Examples
## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## Annual performance evaluation data for ozone in Alabama during 2017
aqs_variables <- list(
  param = "44201", bdate = "20170101", edate = "20171231", state = "01"
)
aqs_qaannualperformanceevaluations(
  aqs_filter = "byState", aqs_variables = aqs_variables
)

## Equivalent to above; used integers instead of strings
qaannualperformanceevaluations_bystate(
  param = 44201, bdate = "20170101", edate = "20171231", state = 1
)

## End(Not run)

aqs_qablanks    AQS API QA Blanks Data service

Description
A collection of functions to fetch concentrations from blank samples.

Usage
aqs_qablanks(
  aqs_filter = c("bySite", "byCounty", "byState", "byPQAO", "byMA"),
  aqs_variables = NULL,
  header = FALSE,
  ...
)

qablanks_bysite(
  param, bdate, edate, state, county, site,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qablanks_bycounty(
  param, bdate, edate, state, county,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qablanks_bystate(
  param, bdate, edate, state,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qablanks_bypqao(
  param, bdate, edate, pqao,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qablanks_byma(
  param, bdate, edate, agency,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

Arguments
aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request. A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only the year portion is used.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only the year portion is used. If the end date is not in the same year as the begin date, the function will automatically split the date range into multiple chunks by year and send requests sequentially.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014").
A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
pqao  A string specifying the AQS Primary Quality Assurance Organization (PQAO) code. A list of the PQAO codes can be obtained via list_pqaos.
agency  A string specifying the AQS Monitoring Agency (MA) code. A list of the MA codes can be obtained via list_mas. Here, we named this input as agency instead of "ma" because agency is actually used in the API endpoint URL.

Details
aqs_qablanks sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• qablanks_bysite returns param blank data for site in county, within state, between bdate and edate.
• qablanks_bycounty returns param blank data for county in state between bdate and edate.
• qablanks_bystate returns param blank data for state between bdate and edate.
• qablanks_bypqao returns param blank data for pqao between bdate and edate.
• qablanks_byma returns param blank data for agency (monitoring agency) between bdate and edate.

Value
A data.frame containing parsed data or a named list containing header and data.

Examples
## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## PM2.5 blank data for Alabama for January 2018
aqs_variables <- list(
  param = "88101", bdate = "20180101", edate = "20180131", state = "01"
)
aqs_qablanks(aqs_filter = "byState", aqs_variables = aqs_variables)

## Equivalent to above; used integers instead of strings
qablanks_bystate(
  param = 88101, bdate = "20180101", edate = "20180131", state = 1
)

## End(Not run)

aqs_qacollocatedassessments    AQS API QA Collocated Assessments service

Description
A collection of functions to fetch pairs of PM samples collected at the same time and place by different samplers.

Usage
aqs_qacollocatedassessments(
  aqs_filter = c("bySite", "byCounty", "byState", "byPQAO", "byMA"),
  aqs_variables = NULL,
  header = FALSE,
  ...
)

qacollocatedassessments_bysite(
  param, bdate, edate, state, county, site,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qacollocatedassessments_bycounty(
  param, bdate, edate, state, county,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qacollocatedassessments_bystate(
  param, bdate, edate, state,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qacollocatedassessments_bypqao(
  param, bdate, edate, pqao,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qacollocatedassessments_byma(
  param, bdate, edate, agency,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

Arguments
aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request.
A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only data on or after this date will be returned.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only data on or before this date will be returned. If the end date is not in the same year as the begin date, the function will automatically split the date range into multiple chunks by year and send requests sequentially.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014"). A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
pqao  A string specifying the AQS Primary Quality Assurance Organization (PQAO) code. A list of the PQAO codes can be obtained via list_pqaos.
agency  A string specifying the AQS Monitoring Agency (MA) code. A list of the MA codes can be obtained via list_mas. Here, we named this input as agency instead of "ma" because agency is actually used in the API endpoint URL.

Details
aqs_qacollocatedassessments sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• qacollocatedassessments_bysite returns collocated assessment data for param at site in county, within state, between bdate and edate.
• qacollocatedassessments_bycounty returns collocated assessment data for param in county within state between bdate and edate.
• qacollocatedassessments_bystate returns collocated assessment data for param in state between bdate and edate.
• qacollocatedassessments_bypqao returns collocated assessment data for param in pqao between bdate and edate.
• qacollocatedassessments_byma returns collocated assessment data for param in agency (monitoring agency) between bdate and edate.

Value
A data.frame containing parsed data or a named list containing header and data.

Examples
## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## Collocated assessment data for FRM PM2.5 in Alabama for January 2013
aqs_variables <- list(
  param = "88101", bdate = "20130101", edate = "20130131", state = "01"
)
aqs_qacollocatedassessments(
  aqs_filter = "byState", aqs_variables = aqs_variables
)

## Equivalent to above; used integers instead of strings
qacollocatedassessments_bystate(
  param = 88101, bdate = "20130101", edate = "20130131", state = 1
)

## End(Not run)

aqs_qaflowrateaudits    AQS API QA Flow Rate Audits service

Description
A collection of functions to fetch flow rate audit data.

Usage
aqs_qaflowrateaudits(
  aqs_filter = c("bySite", "byCounty", "byState", "byPQAO", "byMA"),
  aqs_variables = NULL,
  header = FALSE,
  ...
)

qaflowrateaudits_bysite(
  param, bdate, edate, state, county, site,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qaflowrateaudits_bycounty(
  param, bdate, edate, state, county,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qaflowrateaudits_bystate(
  param, bdate, edate, state,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qaflowrateaudits_bypqao(
  param, bdate, edate, pqao,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

qaflowrateaudits_byma(
  param, bdate, edate, agency,
  email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...
)

Arguments
aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request. A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only data on or after this date will be returned.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only data on or before this date will be returned. If the end date is not in the same year as the begin date, the function will automatically split the date range into multiple chunks by year and send requests sequentially.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014"). A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
pqao  A string specifying the AQS Primary Quality Assurance Organization (PQAO) code. A list of the PQAO codes can be obtained via list_pqaos.
agency  A string specifying the AQS Monitoring Agency (MA) code. A list of the MA codes can be obtained via list_mas. Here, we named this input as agency instead of "ma" because agency is actually used in the API endpoint URL.

Details
aqs_qaflowrateaudits sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• qaflowrateaudits_bysite returns Flow Rate Audit data for param at site in county, within state, between bdate and edate.
• qaflowrateaudits_bycounty returns Flow Rate Audit data for param in county within state between bdate and edate.
• qaflowrateaudits_bystate returns Flow Rate Audit data for param in state between bdate and edate.
• qaflowrateaudits_bypqao returns Flow Rate Audit data for param in pqao between bdate and edate.
• qaflowrateaudits_byma returns Flow Rate Audit data for param in agency (monitoring agency) between bdate and edate.

Value

A data.frame containing parsed data or a named list containing header and data.

Examples

## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## Flow rate audit data for Alabama during January 2018
aqs_variables <- list(
  param = "88101", bdate = "20180101", edate = "20180131", state = "01"
)
aqs_qaflowrateaudits(
  aqs_filter = "byState",
  aqs_variables = aqs_variables
)

## Equivalent to above; used integers instead of strings
qaflowrateaudits_bystate(
  param = 88101, bdate = "20180101", edate = "20180131", state = 1
)

## End(Not run)
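For site-level requests, the bySite filter needs the full state/county/site triple. A hedged sketch; the county and site codes below are illustrative placeholders (they echo the coercion examples in the Arguments section), so look up real ones with list_countiesbystate and list_sitesbycounty:

## Not run:
qaflowrateaudits_bysite(
  param = "88101",
  bdate = "20180101",
  edate = "20180131",
  state = "01",
  county = "089",
  site = "0014"
)
## End(Not run)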
aqs_qaflowrateverifications  AQS API QA Flow Rate Verifications service

Description

A collection of functions to fetch flow rate checks performed by monitoring agencies.

Usage

aqs_qaflowrateverifications(
  aqs_filter = c("bySite", "byCounty", "byState", "byPQAO", "byMA"),
  aqs_variables = NULL, header = FALSE, ...
)
qaflowrateverifications_bysite(param, bdate, edate, state, county, site, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
qaflowrateverifications_bycounty(param, bdate, edate, state, county, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
qaflowrateverifications_bystate(param, bdate, edate, state, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
qaflowrateverifications_bypqao(param, bdate, edate, pqao, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
qaflowrateverifications_byma(param, bdate, edate, agency, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)

Arguments

aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request. A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only data on or after this date will be returned.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only data on or before this date will be returned. If the end date is not in the same year as the begin date, the function will automatically split the date range into multiple chunks by year and send requests sequentially.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014"). A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
pqao  A string specifying the AQS Primary Quality Assurance Organization (PQAO) code. A list of the PQAO codes can be obtained via list_pqaos.
agency  A string specifying the AQS Monitoring Agency (MA) code. A list of the MA codes can be obtained via list_mas. Here, we named this input agency instead of "ma" because agency is actually used in the API endpoint URL.

Details

aqs_qaflowrateverifications sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• qaflowrateverifications_bysite returns Flow Rate Verification data for param at site in county, within state, between bdate and edate.
• qaflowrateverifications_bycounty returns Flow Rate Verification data for param in county within state between bdate and edate.
• qaflowrateverifications_bystate returns Flow Rate Verification data for param in state between bdate and edate.
• qaflowrateverifications_bypqao returns Flow Rate Verification data for param in pqao between bdate and edate.
• qaflowrateverifications_byma returns Flow Rate Verification data for param in agency (monitoring agency) between bdate and edate.

Value

A data.frame containing parsed data or a named list containing header and data.

Examples

## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## Flow Rate Verification data for Alabama during January 2018
aqs_variables <- list(
  param = "88101", bdate = "20180101", edate = "20180131", state = "01"
)
aqs_qaflowrateverifications(
  aqs_filter = "byState",
  aqs_variables = aqs_variables
)

## Equivalent to above; used integers instead of strings
qaflowrateverifications_bystate(
  param = 88101, bdate = "20180101", edate = "20180131", state = 1
)

## End(Not run)
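The byPQAO filter works the same way but selects by quality assurance organization. A sketch under the assumption that "0013" is a valid PQAO code; query list_pqaos to find the code you actually need:

## Not run:
qaflowrateverifications_bypqao(
  param = "88101",
  bdate = "20180101",
  edate = "20180131",
  pqao = "0013"
)
## End(Not run)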
aqs_qaonepointqcrawdata  AQS API QA One Point QC Raw Data service

Description

A collection of functions to fetch measured versus actual concentration of 1 point QC checks.

Usage

aqs_qaonepointqcrawdata(
  aqs_filter = c("bySite", "byCounty", "byState", "byPQAO", "byMA"),
  aqs_variables = NULL, header = FALSE, ...
)
qaonepointqcrawdata_bysite(param, bdate, edate, state, county, site, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
qaonepointqcrawdata_bycounty(param, bdate, edate, state, county, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
qaonepointqcrawdata_bystate(param, bdate, edate, state, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
qaonepointqcrawdata_bypqao(param, bdate, edate, pqao, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
qaonepointqcrawdata_byma(param, bdate, edate, agency, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)

Arguments

aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request. A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only data on or after this date will be returned.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only data on or before this date will be returned. If the end date is not in the same year as the begin date, the function will automatically split the date range into multiple chunks by year and send requests sequentially.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014"). A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
pqao  A string specifying the AQS Primary Quality Assurance Organization (PQAO) code. A list of the PQAO codes can be obtained via list_pqaos.
agency  A string specifying the AQS Monitoring Agency (MA) code. A list of the MA codes can be obtained via list_mas. Here, we named this input agency instead of "ma" because agency is actually used in the API endpoint URL.

Details

aqs_qaonepointqcrawdata sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• qaonepointqcrawdata_bysite returns One Point QC data for param at site in county, within state, between bdate and edate.
• qaonepointqcrawdata_bycounty returns One Point QC data for param in county within state between bdate and edate.
• qaonepointqcrawdata_bystate returns One Point QC data for param in state between bdate and edate.
• qaonepointqcrawdata_bypqao returns One Point QC data for param in pqao between bdate and edate.
• qaonepointqcrawdata_byma returns One Point QC data for param in agency (monitoring agency) between bdate and edate.

Value

A data.frame containing parsed data or a named list containing header and data.
Examples

## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## One Point QC data for ozone in Massachusetts for January 2018
aqs_variables <- list(
  param = "44201", bdate = "20180101", edate = "20180131", state = "25"
)
aqs_qaonepointqcrawdata(
  aqs_filter = "byState",
  aqs_variables = aqs_variables
)

## Equivalent to above; used integers instead of strings
qaonepointqcrawdata_bystate(
  param = 44201, bdate = "20180101", edate = "20180131", state = 25
)

## End(Not run)
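As noted in the edate description, a request spanning more than one calendar year is split into per-year chunks and sent sequentially. A minimal sketch; behind the scenes this issues one request for 2017 and one for 2018:

## Not run:
qaonepointqcrawdata_bystate(
  param = "44201",
  bdate = "20170601",
  edate = "20180531",
  state = "25"
)
## End(Not run)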
aqs_qapepaudits  AQS API QA PEP Audits service

Description

A collection of functions to fetch data related to PM2.5 monitoring system audits.

Usage

aqs_qapepaudits(
  aqs_filter = c("bySite", "byCounty", "byState", "byPQAO", "byMA"),
  aqs_variables = NULL, header = FALSE, ...
)
qapepaudits_bysite(param, bdate, edate, state, county, site, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
qapepaudits_bycounty(param, bdate, edate, state, county, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
qapepaudits_bystate(param, bdate, edate, state, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
qapepaudits_bypqao(param, bdate, edate, pqao, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
qapepaudits_byma(param, bdate, edate, agency, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)

Arguments

aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request. A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only data on or after this date will be returned.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only data on or before this date will be returned. If the end date is not in the same year as the begin date, the function will automatically split the date range into multiple chunks by year and send requests sequentially.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014"). A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
pqao  A string specifying the AQS Primary Quality Assurance Organization (PQAO) code. A list of the PQAO codes can be obtained via list_pqaos.
agency  A string specifying the AQS Monitoring Agency (MA) code. A list of the MA codes can be obtained via list_mas. Here, we named this input agency instead of "ma" because agency is actually used in the API endpoint URL.

Details

aqs_qapepaudits sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• qapepaudits_bysite returns PEP Audit data for param at site in county, within state, between bdate and edate.
• qapepaudits_bycounty returns PEP Audit data for param in county within state between bdate and edate.
• qapepaudits_bystate returns PEP Audit data for param in state between bdate and edate.
• qapepaudits_bypqao returns PEP Audit data for param in pqao between bdate and edate.
• qapepaudits_byma returns PEP Audit data for param in agency (monitoring agency) between bdate and edate.

Value

A data.frame containing parsed data or a named list containing header and data.

Examples

## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## PEP Audit data for FRM PM2.5 in Alabama for 2017
aqs_variables <- list(
  param = "88101", bdate = "20170101", edate = "20171231", state = "01"
)
aqs_qapepaudits(
  aqs_filter = "byState",
  aqs_variables = aqs_variables
)

## Equivalent to above; used integers instead of strings
qapepaudits_bystate(
  param = 88101, bdate = "20170101", edate = "20171231", state = 1
)

## End(Not run)
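The byMA filter selects by monitoring agency; note the argument is named agency because that name appears in the API endpoint URL. The MA code below is a placeholder, so query list_mas for real codes:

## Not run:
qapepaudits_byma(
  param = "88101",
  bdate = "20170101",
  edate = "20171231",
  agency = "0013"
)
## End(Not run)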
aqs_quarterlydata  AQS API Quarterly Summary Data service

Description

A collection of functions to fetch data summarized at the calendar quarter level. Data is labeled with quarter number (Q1 = Jan - Mar, Q2 = Apr - Jun, Q3 = Jul - Sep, Q4 = Oct - Dec). Note that only the year portion of the bdate and edate are used and all 4 quarters in the year are returned. In addition, duration is not allowed on the API unlike aqs_sampledata, aqs_dailydata, and aqs_annualdata.

Usage

aqs_quarterlydata(
  aqs_filter = c("bySite", "byCounty", "byState", "byBox", "byCBSA"),
  aqs_variables = NULL, header = FALSE, ...
)
quarterlydata_bysite(param, bdate, edate, state, county, site, email = get_aqs_email(), key = get_aqs_key(), cbdate = NULL, cedate = NULL, header = FALSE, ...)
quarterlydata_bycounty(param, bdate, edate, state, county, email = get_aqs_email(), key = get_aqs_key(), cbdate = NULL, cedate = NULL, header = FALSE, ...)
quarterlydata_bystate(param, bdate, edate, state, email = get_aqs_email(), key = get_aqs_key(), cbdate = NULL, cedate = NULL, header = FALSE, ...)
quarterlydata_bybox(param, bdate, edate, minlat, maxlat, minlon, maxlon, email = get_aqs_email(), key = get_aqs_key(), cbdate = NULL, cedate = NULL, header = FALSE, ...)
quarterlydata_bycbsa(param, bdate, edate, cbsa, email = get_aqs_email(), key = get_aqs_key(), cbdate = NULL, cedate = NULL, header = FALSE, ...)

Arguments

aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request. A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only the year portion is used.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only the year portion is used. If the end date is not in the same year as the begin date, the function will automatically split the date range into multiple chunks by year and send requests sequentially.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014"). A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
cbdate  (optional) A string specifying the change begin date in YYYYMMDD format to subset data based on "date of last change" in database. Only data that changed on or after this date will be returned.
cedate  (optional) A string specifying the change end date in YYYYMMDD format to subset data based on "date of last change" in database. Only data that changed on or before this date will be returned.
minlat  A string or numeric value specifying the minimum latitude of a geographic box. Decimal latitude with north being positive.
maxlat  A string or numeric value specifying the maximum latitude of a geographic box. Decimal latitude with north being positive.
minlon  A string or numeric value specifying the minimum longitude of a geographic box. Decimal longitude with east being positive.
maxlon  A string or numeric value specifying the maximum longitude of a geographic box. Decimal longitude with east being positive.
cbsa  A string specifying the AQS CBSA code. A list of the CBSA codes can be obtained via list_cbsas.

Details

aqs_quarterlydata sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• quarterlydata_bysite returns quarterly summary param data for site in county, within state, based on the year portions of bdate and edate.
• quarterlydata_bycounty returns quarterly summary param data for county in state based on the year portions of bdate and edate.
• quarterlydata_bystate returns quarterly summary param data for state based on the year portions of bdate and edate.
• quarterlydata_bybox returns quarterly summary param data for a user-provided latitude/longitude bounding box (minlat, maxlat, minlon, maxlon) based on the year portions of bdate and edate.
• quarterlydata_bycbsa returns quarterly summary param data for a user-provided CBSA based on the year portions of bdate and edate.
Value

A data.frame containing parsed data or a named list containing header and data.

Examples

## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## FRM/FEM PM2.5 data for Wake County, NC for 2016
## Only the year portions of bdate and edate are used
aqs_variables <- list(
  param = c("88101", "88502"),
  bdate = "20160101", edate = "20160228",
  state = "37", county = "183"
)
aqs_quarterlydata(aqs_filter = "byCounty", aqs_variables = aqs_variables)

## Equivalent to above; used integers instead of strings
quarterlydata_bycounty(
  param = c(88101, 88502),
  bdate = "20160101", edate = "20160228",
  state = 37, county = 183
)

## End(Not run)
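The byBox filter takes a latitude/longitude bounding box instead of FIPS codes. A sketch with illustrative coordinates only (north and east are positive, per the Arguments section):

## Not run:
quarterlydata_bybox(
  param = "88101",
  bdate = "20160101",
  edate = "20161231",
  minlat = 33.3, maxlat = 33.6,
  minlon = -87.0, maxlon = -86.7
)
## End(Not run)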
aqs_sampledata  AQS API Sample Data service

Description

A collection of functions to fetch sample data - the finest grain data reported to EPA. Please use a narrow range of dates to adhere to the API’s limit imposed on request size.

Usage

aqs_sampledata(
  aqs_filter = c("bySite", "byCounty", "byState", "byBox", "byCBSA"),
  aqs_variables = NULL, header = FALSE, ...
)
sampledata_bysite(param, bdate, edate, state, county, site, email = get_aqs_email(), key = get_aqs_key(), duration = NULL, cbdate = NULL, cedate = NULL, header = FALSE, ...)
sampledata_bycounty(param, bdate, edate, state, county, email = get_aqs_email(), key = get_aqs_key(), duration = NULL, cbdate = NULL, cedate = NULL, header = FALSE, ...)
sampledata_bystate(param, bdate, edate, state, email = get_aqs_email(), key = get_aqs_key(), duration = NULL, cbdate = NULL, cedate = NULL, header = FALSE, ...)
sampledata_bybox(param, bdate, edate, minlat, maxlat, minlon, maxlon, email = get_aqs_email(), key = get_aqs_key(), duration = NULL, cbdate = NULL, cedate = NULL, header = FALSE, ...)
sampledata_bycbsa(param, bdate, edate, cbsa, email = get_aqs_email(), key = get_aqs_key(), duration = NULL, cbdate = NULL, cedate = NULL, header = FALSE, ...)

Arguments

aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request. A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only data on or after this date will be returned.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only data on or before this date will be returned. If the end date is not in the same year as the begin date, the function will automatically split the date range into multiple chunks by year and send requests sequentially.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014"). A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
duration  (optional) A string specifying the 1-character AQS sample duration code. A list of the duration codes can be obtained via list_durations. Only data reported at this sample duration will be returned.
cbdate  (optional) A string specifying the change begin date in YYYYMMDD format to subset data based on "date of last change" in database. Only data that changed on or after this date will be returned.
cedate  (optional) A string specifying the change end date in YYYYMMDD format to subset data based on "date of last change" in database. Only data that changed on or before this date will be returned.
minlat  A string or numeric value specifying the minimum latitude of a geographic box. Decimal latitude with north being positive.
maxlat  A string or numeric value specifying the maximum latitude of a geographic box. Decimal latitude with north being positive.
minlon  A string or numeric value specifying the minimum longitude of a geographic box. Decimal longitude with east being positive.
maxlon  A string or numeric value specifying the maximum longitude of a geographic box. Decimal longitude with east being positive.
cbsa  A string specifying the AQS CBSA code. A list of the CBSA codes can be obtained via list_cbsas.

Details

aqs_sampledata sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• sampledata_bysite returns all param samples for site in county, within state, between bdate and edate.
• sampledata_bycounty returns all param samples for county in state between bdate and edate.
• sampledata_bystate returns all param samples for state between bdate and edate.
• sampledata_bybox returns all param samples for a user-provided latitude/longitude bounding box (minlat, maxlat, minlon, maxlon) between bdate and edate.
• sampledata_bycbsa returns all param samples for a user-provided CBSA between bdate and edate.

Value

A data.frame containing parsed data or a named list containing header and data.

Examples

## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## FRM/FEM PM2.5 data for Wake County, NC between Jan and Feb 2016
aqs_variables <- list(
  param = "88101", bdate = "20160101", edate = "20160228",
  state = "37", county = "183"
)
aqs_sampledata(aqs_filter = "byCounty", aqs_variables = aqs_variables)

## Equivalent to above; used integers instead of strings
sampledata_bycounty(
  param = 88101, bdate = "20160101", edate = "20160228",
  state = 37, county = 183
)

## End(Not run)
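The optional duration argument restricts results to a single sample duration. A hedged sketch; the code "1" below is assumed to be a valid 1-character duration code, so check list_durations before relying on it:

## Not run:
sampledata_bycounty(
  param = "88101",
  bdate = "20160101",
  edate = "20160228",
  state = "37",
  county = "183",
  duration = "1"
)
## End(Not run)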
aqs_signup  Create an account for the AQS API

Description

This function helps you create an account or reset a password. Once you execute this function, a verification email will be sent to the email account specified. If the request is made with an email that is already registered, a new key will be issued for that account and emailed to the listed address.

Usage

aqs_signup(email)

Arguments

email  A string specifying an email account to register as a user

Value

No return value, called to sign up for the AQS API

See Also

See set_aqs_user to set your credentials to send a request to the AQS API.

Examples

## Not run:
## Please use your email address to create an account
aqs_signup(email = "<EMAIL>")

## End(Not run)

aqs_transactionsqaannualperformanceevaluations  AQS API QA Annual Performance Evaluations Transaction service

Description

A collection of functions to fetch pairs of QA data at several concentration levels in the submission (transaction) format for AQS.

Usage

aqs_transactionsqaannualperformanceevaluations(
  aqs_filter = c("bySite", "byCounty", "byState", "byPQAO", "byMA"),
  aqs_variables = NULL, header = FALSE, ...
)
transactionsqaannualperformanceevaluations_bysite(param, bdate, edate, state, county, site, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
transactionsqaannualperformanceevaluations_bycounty(param, bdate, edate, state, county, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
transactionsqaannualperformanceevaluations_bystate(param, bdate, edate, state, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
transactionsqaannualperformanceevaluations_bypqao(param, bdate, edate, pqao, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
transactionsqaannualperformanceevaluations_byma(param, bdate, edate, agency, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)

Arguments

aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request. A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only data on or after this date will be returned.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only data on or before this date will be returned. If the end date is not in the same year as the begin date, the function will automatically split the date range into multiple chunks by year and send requests sequentially.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014"). A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
pqao  A string specifying the AQS Primary Quality Assurance Organization (PQAO) code. A list of the PQAO codes can be obtained via list_pqaos.
agency  A string specifying the AQS Monitoring Agency (MA) code. A list of the MA codes can be obtained via list_mas. Here, we named this input agency instead of "ma" because agency is actually used in the API endpoint URL.

Details

aqs_transactionsqaannualperformanceevaluations sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• transactionsqaannualperformanceevaluations_bysite returns annual performance evaluation data for param at site in county, within state, between bdate and edate in the transaction format.
• transactionsqaannualperformanceevaluations_bycounty returns annual performance evaluation data for param in county within state between bdate and edate in the transaction format.
• transactionsqaannualperformanceevaluations_bystate returns annual performance evaluation data for param in state between bdate and edate in the transaction format.
• transactionsqaannualperformanceevaluations_bypqao returns annual performance evaluation data for param in pqao between bdate and edate in the transaction format.
• transactionsqaannualperformanceevaluations_byma returns annual performance evaluation data for param in agency (monitoring agency) between bdate and edate in the transaction format.

Value

A data.frame containing parsed data or a named list containing header and data.

Examples

## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## Annual performance evaluation data for ozone in Alabama during 2017
aqs_variables <- list(
  param = "44201", bdate = "20170101", edate = "20171231", state = "01"
)
aqs_transactionsqaannualperformanceevaluations(
  aqs_filter = "byState",
  aqs_variables = aqs_variables
)

## Equivalent to above; used integers instead of strings
transactionsqaannualperformanceevaluations_bystate(
  param = 44201, bdate = "20170101", edate = "20171231", state = 1
)

## End(Not run)

aqs_transactionssample  AQS API Sample Data Transaction service

Description

A collection of functions to fetch data in the submission (transaction) format for AQS.

Usage

aqs_transactionssample(
  aqs_filter = c("bySite", "byCounty", "byState", "byMA"),
  aqs_variables = NULL, header = FALSE, ...
)
transactionssample_bysite(param, bdate, edate, state, county, site, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
transactionssample_bycounty(param, bdate, edate, state, county, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
transactionssample_bystate(param, bdate, edate, state, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)
transactionssample_byma(param, bdate, edate, agency, email = get_aqs_email(), key = get_aqs_key(), header = FALSE, ...)

Arguments

aqs_filter  A string specifying one of the service filters. NOT case-sensitive.
aqs_variables  A named list of variables to fetch data (e.g., state). Only necessary variables are passed to a specific endpoint (service/filter) to make a valid request.
header  A logical specifying whether the function returns additional information from the API header. Default is FALSE to return data only.
...  Reserved for future use.
param  A string or vector of strings specifying the 5-digit AQS parameter code for data selection. An integer will be coerced to a string. A maximum of 5 parameter codes may be listed in a single request. A list of the parameter codes can be obtained via list_parametersbyclass.
bdate  A string specifying the begin date of data selection in YYYYMMDD format. Only data on or after this date will be returned.
edate  A string specifying the end date of data selection in YYYYMMDD format. Only data on or before this date will be returned. If the end date is not in the same year as the begin date, the function will automatically split the date range into multiple chunks by year and send requests sequentially.
state  A string specifying the 2-digit state FIPS code. An integer will be coerced to a string with a leading zero if necessary (e.g., 1 -> "01"). A list of the state codes can be obtained via list_states.
county  A string specifying the 3-digit county FIPS code. An integer will be coerced to a string with leading zeros if necessary (e.g., 89 -> "089"). A list of the county codes within each state can be obtained via list_countiesbystate.
site  A string specifying the 4-digit AQS site number within the county. An integer will be coerced to a string with leading zeros if necessary (e.g., 14 -> "0014"). A list of the site codes within each county can be obtained via list_sitesbycounty.
email  A string specifying the email address of the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
key  A string specifying the key matching the email address for the requester. If you set your email and key with set_aqs_user, you don’t have to specify this.
agency  A string specifying the AQS Monitoring Agency (MA) code. A list of the MA codes can be obtained via list_mas. Here, we named this input agency instead of "ma" because agency is actually used in the API endpoint URL.

Details

aqs_transactionssample sends a request to the AQS API based on a user-provided filter using the following underlying functions:
• transactionssample_bysite returns all param data for site in county, within state, between bdate and edate in the transaction format.
• transactionssample_bycounty returns all param data for county in state between bdate and edate in the transaction format.
• transactionssample_bystate returns all param data in state between bdate and edate in the transaction format.
• transactionssample_byma returns all param data in agency (monitoring agency) between bdate and edate in the transaction format.

Value

A data.frame containing parsed data or a named list containing header and data.

Examples

## Not run:
## Set your API Key first using set_aqs_user to run the following codes

## Example from the AQS website
## all benzene samples from North Carolina collected on May 15th, 1995
aqs_variables <- list(
  param = "45201", bdate = "19950515", edate = "19950515", state = "37"
)
aqs_transactionssample(
  aqs_filter = "byState",
  aqs_variables = aqs_variables
)

## Equivalent to above; used integers instead of strings
transactionssample_bystate(
  param = 45201, bdate = "19950515", edate = "19950515", state = 37
)

## End(Not run)
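Because aqs_filter is not case-sensitive (see the Arguments section), the following two calls send the same request:

## Not run:
aqs_transactionssample(aqs_filter = "byState", aqs_variables = aqs_variables)
aqs_transactionssample(aqs_filter = "bystate", aqs_variables = aqs_variables)
## End(Not run)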
raqs_options  Package options

Description

The following package options can be set via options and queried via getOption.

Options to handle the AQS API rate limits

The AQS API recommends not to make more than 10 requests per minute and to pause 5 seconds between requests.
• raqs.req_per_min controls the maximum number of API requests per minute. Default is 10.
• raqs.delay_between_req controls a delay between API requests sent via a function when your bdate and edate inputs span multiple years. A value will be rounded to the nearest integer. Default is 5 seconds.
• raqs.delay_fun_exit controls a delay before a function execution ends. A value will be rounded to the nearest integer. Default is zero if R is being used interactively. Otherwise, it is 5 seconds. This option only applies to functions that send API requests.

Option to handle the type of data object to return

By default, the parsed data will be returned as a data.frame object, but this can be adjusted for users’ preferences.
• raqs.return_type controls the type of data object to return. Default is "data.frame" but it can also be set to "tibble" or "data.table".

Examples

## Change for the duration of the session
op <- options(raqs.req_per_min = 5)

## Change back to the original value
options(op)

set_aqs_user  Set your AQS API credentials

Description

Set your registered email and key as environment variables for the current session. Please sign up first using aqs_signup if you haven’t set up an account on the AQS API. If you want to set your email and key permanently, please add the following lines in your .Renviron file:
• AQS_EMAIL = YOUR REGISTERED EMAIL
• AQS_KEY = YOUR API KEY

Usage

set_aqs_user(email, key)
get_aqs_user()
get_aqs_email()
get_aqs_key()

Arguments

email  A string specifying your registered email address
key  A string specifying your API key

Details

set_aqs_user sets your API credentials for the current session. get_aqs_user, get_aqs_email, and get_aqs_key are helper functions to display saved user values.

Value

No return value, called to set environment variables

See Also

See aqs_signup to create an account for the AQS API

Examples

## Please use your registered email and key
set_aqs_user(email = "<EMAIL>", key = "your_api_key")

## Show your API credentials
get_aqs_user()   # return list(email, key)
get_aqs_email()  # return email
get_aqs_key()    # return key
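Returning to the package options above, a short sketch of raqs.return_type; setting it to "tibble" or "data.table" presumably requires the corresponding package to be installed:

## Change the return type for the session
op <- options(raqs.return_type = "data.table")
getOption("raqs.return_type")

## Restore the previous value
options(op)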
Package ‘collapsibleTree’  October 12, 2022

Type  Package
Title  Interactive Collapsible Tree Diagrams using 'D3.js'
Version  0.1.7
Maintainer  <NAME> <<EMAIL>>
Description  Interactive Reingold-Tilford tree diagrams created using 'D3.js', where every node can be expanded and collapsed by clicking on it. Tooltips and color gradients can be mapped to nodes using a numeric column in the source data frame. See 'collapsibleTree' website for more information and examples.
License  GPL (>= 3)
URL  https://github.com/AdeelK93/collapsibleTree, https://AdeelK93.github.io/collapsibleTree/
BugReports  https://github.com/AdeelK93/collapsibleTree/issues
Encoding  UTF-8
Depends  R (>= 3.0.0)
Imports  htmlwidgets, data.tree, stats, methods
Enhances  knitr, shiny
LazyData  true
RoxygenNote  6.1.0
Suggests  colorspace, RColorBrewer, dplyr, testthat, tibble
NeedsCompilation  no
Author  <NAME> [aut, cre], <NAME> [ctb], <NAME> [ctb, cph] (D3.js library, http://d3js.org)
Repository  CRAN
Date/Publication  2018-08-22 16:10:03 UTC

R topics documented: collapsibleTree, collapsibleTree-shiny, collapsibleTreeNetwork, collapsibleTreeSummary

collapsibleTree  Create Interactive Collapsible Tree Diagrams

Description

Interactive Reingold-Tilford tree diagram created using D3.js, where every node can be expanded and collapsed by clicking on it.

Usage

collapsibleTree(df, ..., inputId = NULL, attribute = "leafCount",
  aggFun = sum, fill = "lightsteelblue", linkLength = NULL, fontSize = 10,
  tooltip = FALSE, tooltipHtml = NULL, nodeSize = NULL, collapsed = TRUE,
  zoomable = TRUE, width = NULL, height = NULL)

## S3 method for class 'data.frame'
collapsibleTree(df, hierarchy, root = deparse(substitute(df)),
  inputId = NULL, attribute = "leafCount", aggFun = sum,
  fill = "lightsteelblue", fillByLevel = TRUE, linkLength = NULL,
  fontSize = 10, tooltip = FALSE, nodeSize = NULL, collapsed = TRUE,
  zoomable = TRUE, width = NULL, height = NULL, ...)

## S3 method for class 'Node'
collapsibleTree(df, hierarchy_attribute = "level", root = df$name,
  inputId = NULL, attribute = "leafCount", aggFun = sum,
  fill = "lightsteelblue", linkLength = NULL, fontSize = 10,
  tooltip = FALSE, tooltipHtml = NULL, nodeSize = NULL, collapsed = TRUE,
  zoomable = TRUE, width = NULL, height = NULL, ...)

Arguments

df  a data.frame from which to construct a nested list (where every row is a leaf) or a preconstructed data.tree
...  other arguments to pass onto S3 methods that implement this generic function - collapsibleTree.data.frame, collapsibleTree.Node
inputId  the input slot that will be used to access the selected node (for Shiny). Will return a named list of the most recently clicked node, along with all of its parents.
attribute  numeric column not listed in hierarchy that will be used for tooltips, if applicable. Defaults to ’leafCount’, which is the cumulative count of a node’s children
aggFun  aggregation function applied to the attribute column to determine values of parent nodes. Defaults to sum, but mean also makes sense.
fill  either a single color or a mapping of colors:
• For data.frame input, a vector of colors the same length as the number of nodes. By default, vector should be ordered by level, such that the root color is described first, then all the children’s colors, and then all the grandchildren’s colors
• For data.tree input, a tree attribute containing the color for each node
linkLength  length of the horizontal links that connect nodes in pixels. (optional, defaults to automatic sizing) Applicable only for data.frame input.
fontSize  font size of the label text in pixels
tooltip  tooltip shows the node’s label and attribute value.
tooltipHtml  column name (possibly containing html) to override default tooltip contents, allowing for more advanced customization. Applicable only for data.tree input.
nodeSize  numeric column that will be used to determine relative node size. Default is to have a constant node size throughout. ’leafCount’ can also be used here (cumulative count of a node’s children), or ’count’ (count of node’s immediate children).
collapsed  the tree’s children will start collapsed by default
zoomable  pan and zoom by dragging and scrolling
width  width in pixels (optional, defaults to automatic sizing)
height  height in pixels (optional, defaults to automatic sizing)
hierarchy  a character vector of column names that define the order and hierarchy of the tree network. Applicable only for data.frame input.
root  label for the root node
fillByLevel  which order to assign fill values to nodes. TRUE: Filling by level; will assign fill values to nodes vertically. FALSE: Filling by order; will assign fill values to nodes horizontally.
hierarchy_attribute  name of the data.tree attribute that contains hierarchy information of the tree network. Applicable only for data.tree input.

Source

<NAME>: http://christophergandrud.github.io/networkD3/.
d3noob: https://bl.ocks.org/d3noob/43a860bc0024792f8803bba8ca0d5ecd.

Examples

collapsibleTree(warpbreaks, c("wool", "tension", "breaks"))

# Data from US Forest Service DataMart
species <- read.csv(system.file("extdata/species.csv", package = "collapsibleTree"))
collapsibleTree(df = species, c("REGION", "CLASS", "NAME"), fill = "green")

# Visualizing the order in which the node colors are filled
library(RColorBrewer)
collapsibleTree(
  warpbreaks, c("wool", "tension"),
  fill = brewer.pal(9, "RdBu"),
  fillByLevel = TRUE
)
collapsibleTree(
  warpbreaks, c("wool", "tension"),
  fill = brewer.pal(9, "RdBu"),
  fillByLevel = FALSE
)

# Tooltip can be mapped to an attribute, or default to leafCount
collapsibleTree(
  warpbreaks, c("wool", "tension", "breaks"),
  tooltip = TRUE,
  attribute = "breaks"
)

# Node size can be mapped to any numeric column, or to leafCount
collapsibleTree(warpbreaks, c("wool", "tension", "breaks"), nodeSize = "breaks")

# collapsibleTree.Node example
data(acme, package = "data.tree")
acme$Do(function(node) node$cost <- data.tree::Aggregate(node, attribute = "cost", aggFun = sum))
collapsibleTree(acme, nodeSize = "cost", attribute = "cost", tooltip = TRUE)

# Emulating collapsibleTree.data.frame using collapsibleTree.Node
species <- read.csv(system.file("extdata/species.csv", package = "collapsibleTree"))
hierarchy <- c("REGION", "CLASS", "NAME")
species$pathString <- paste(
  "species",
  apply(species[, hierarchy], 1, paste, collapse = "//"),
  sep = "//"
)
df <- data.tree::as.Node(species, pathDelimiter = "//")
collapsibleTree(df)
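A small sketch of the collapsed and zoomable switches documented above: start the tree fully expanded and disable pan/zoom.

collapsibleTree(
  warpbreaks, c("wool", "tension"),
  collapsed = FALSE,
  zoomable = FALSE
)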
collapsibleTree-shiny  Shiny bindings for collapsibleTree

Description

Output and render functions for using collapsibleTree within Shiny applications and interactive Rmd documents.

Usage

collapsibleTreeOutput(outputId, width = "100%", height = "400px")
renderCollapsibleTree(expr, env = parent.frame(), quoted = FALSE)

Arguments

outputId  output variable to read from
width, height  Must be a valid CSS unit (like '100%', '400px', 'auto') or a number, which will be coerced to a string and have 'px' appended.
expr  An expression that generates a collapsibleTree
env  The environment in which to evaluate expr.
quoted  Is expr a quoted expression (with quote())? This is useful if you want to save an expression in a variable.

Examples

if (interactive()) {
  # Shiny Interaction
  shiny::runApp(system.file("examples/02shiny", package = "collapsibleTree"))

  # Interactive Gradient Mapping
  shiny::runApp(system.file("examples/03shiny", package = "collapsibleTree"))
}

collapsibleTreeNetwork  Create Network Interactive Collapsible Tree Diagrams

Description

Interactive Reingold-Tilford tree diagram created using D3.js, where every node can be expanded and collapsed by clicking on it. This function serves as a convenience wrapper for network style data frames containing the node’s parent in the first column, the node itself in the second column, and additional attributes in the rest of the columns. The root node is denoted by having an NA for a parent. There must be exactly 1 root.

Usage

collapsibleTreeNetwork(df, inputId = NULL, attribute = "leafCount",
  aggFun = sum, fill = "lightsteelblue", linkLength = NULL, fontSize = 10,
  tooltip = TRUE, tooltipHtml = NULL, nodeSize = NULL, collapsed = TRUE,
  zoomable = TRUE, width = NULL, height = NULL)

Arguments

df  a network data frame (where every row is a node) from which to construct a nested list
• First column must be the parent (NA for root node)
• Second column must be the child
• Additional columns are passed on as attributes for other parameters
• There must be exactly 1 root node
inputId  the input slot that will be used to access the selected node (for Shiny). Will return a named list of the most recently clicked node, along with all of its parents. (For collapsibleTreeNetwork the names of the list are tree depth)
attribute  numeric column not listed in hierarchy that will be used as weighting to define the color gradient across nodes. Defaults to ’leafCount’, which colors nodes by the cumulative count of its children
aggFun  aggregation function applied to the attribute column to determine values of parent nodes. Defaults to sum, but mean also makes sense.
fill  either a single color or a column name with the color for each node
linkLength  length of the horizontal links that connect nodes in pixels. (optional, defaults to automatic sizing)
fontSize  font size of the label text in pixels
tooltip  tooltip shows the node’s label and attribute value.
tooltipHtml  column name (possibly containing html) to override default tooltip contents, allowing for more advanced customization
nodeSize  numeric column that will be used to determine relative node size. Default is to have a constant node size throughout. ’leafCount’ can also be used here (cumulative count of a node’s children), or ’count’ (count of node’s immediate children).
collapsed  the tree’s children will start collapsed by default
zoomable  pan and zoom by dragging and scrolling
width  width in pixels (optional, defaults to automatic sizing)
height  height in pixels (optional, defaults to automatic sizing)

Source

<NAME>: http://christophergandrud.github.io/networkD3/.
d3noob: https://bl.ocks.org/d3noob/43a860bc0024792f8803bba8ca0d5ecd.
See Also

FromDataFrameNetwork for underlying function that constructs trees from the network data frame

Examples

# Create a simple org chart
org <- data.frame(
  Manager = c(
    NA, "Ana", "Ana", "Bill", "Bill", "Bill", "Claudette", "Claudette", "Danny",
    "Fred", "Fred", "Grace", "Larry", "Larry", "Nicholas", "Nicholas"
  ),
  Employee = c(
    "Ana", "Bill", "Larry", "Claudette", "Danny", "Erika", "Fred", "Grace",
    "Henri", "Ida", "Joaquin", "Kate", "Mindy", "Nicholas", "Odette", "Peter"
  ),
  Title = c(
    "President", "VP Operations", "VP Finance", "Director", "Director",
    "Scientist", "Manager", "Manager", "Jr Scientist", "Operator", "Operator",
    "Associate", "Analyst", "Director", "Accountant", "Accountant"
  )
)
collapsibleTreeNetwork(org, attribute = "Title")

# Add in colors and sizes
# Color must be a factor for the levels() recoloring trick below to work
# (character columns are no longer converted to factors by default in R >= 4.0)
org$Color <- factor(org$Title)
levels(org$Color) <- colorspace::rainbow_hcl(11)
collapsibleTreeNetwork(
  org,
  attribute = "Title",
  fill = "Color",
  nodeSize = "leafCount",
  collapsed = FALSE
)

# Use unsplash api to add in random photos to tooltip
org$tooltip <- paste0(
  org$Employee,
  "<br>Title: ", org$Title,
  "<br><img src='https://source.unsplash.com/collection/385548/150x100'>"
)
collapsibleTreeNetwork(
  org,
  attribute = "Title",
  fill = "Color",
  nodeSize = "leafCount",
  tooltipHtml = "tooltip",
  collapsed = FALSE
)

collapsibleTreeSummary  Create Summary Interactive Collapsible Tree Diagrams

Description

Interactive Reingold-Tilford tree diagram created using D3.js, where every node can be expanded and collapsed by clicking on it. This function serves as a convenience wrapper to add color gradients to nodes either by counting that node’s children (default) or specifying another numeric column in the input data frame.

Usage

collapsibleTreeSummary(df, hierarchy, root = deparse(substitute(df)),
  inputId = NULL, attribute = "leafCount", fillFun = colorspace::heat_hcl,
  maxPercent = 25, percentOfParent = FALSE, linkLength = NULL,
  fontSize = 10, tooltip = TRUE, nodeSize = NULL, collapsed = TRUE,
  zoomable = TRUE, width = NULL, height = NULL, ...)

Arguments

df  a data frame (where every row is a leaf) from which to construct a nested list
hierarchy  a character vector of column names that define the order and hierarchy of the tree network
root  label for the root node
inputId  the input slot that will be used to access the selected node (for Shiny). Will return a named list of the most recently clicked node, along with all of its parents.
attribute  numeric column not listed in hierarchy that will be used as weighting to define the color gradient across nodes. Defaults to ’leafCount’, which colors nodes by the cumulative count of its children
fillFun  function that takes its first argument and returns a vector of colors of that length. rainbow_hcl is a good example.
maxPercent  highest weighting percent to use in color scale mapping. All numbers above this value will be treated as the same maximum value for the sake of coloring in the nodes (but not the ordering of nodes). Setting this value too high will make it difficult to tell the difference between nodes with many children.
percentOfParent  toggle attribute tooltip to be percent of parent rather than the actual value of attribute
linkLength  length of the horizontal links that connect nodes in pixels. (optional, defaults to automatic sizing)
fontSize  font size of the label text in pixels
tooltip  tooltip shows the node’s label and attribute value.
nodeSize  numeric column that will be used to determine relative node size. Default is to have a constant node size throughout.
’leafCount’ can also be used here (cumulative count of a node’s children), or ’count’ (count of node’s immediate children).
collapsed  the tree’s children will start collapsed by default
zoomable  pan and zoom by dragging and scrolling
width  width in pixels (optional, defaults to automatic sizing)
height  height in pixels (optional, defaults to automatic sizing)
...  other arguments passed on to fillFun, such as declaring a palette for brewer.pal

Source

<NAME>: http://christophergandrud.github.io/networkD3/.
d3noob: https://bl.ocks.org/d3noob/43a860bc0024792f8803bba8ca0d5ecd.

Examples

# Color in by number of children
collapsibleTreeSummary(warpbreaks, c("wool", "tension", "breaks"), maxPercent = 50)

# Color in by the value of breaks and use the terrain_hcl gradient
collapsibleTreeSummary(
  warpbreaks, c("wool", "tension", "breaks"),
  attribute = "breaks",
  fillFun = colorspace::terrain_hcl,
  maxPercent = 50
)
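A sketch of percentOfParent, which switches the tooltip to each node's share of its parent rather than the raw attribute value:

collapsibleTreeSummary(
  warpbreaks, c("wool", "tension", "breaks"),
  attribute = "breaks",
  percentOfParent = TRUE,
  maxPercent = 50
)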
Crate rust_sync_force
===

Crate for interacting with the Salesforce API.

This crate includes the tools for connecting to Salesforce and manipulating Salesforce objects.

Example
---

The following example will connect to Salesforce and insert an Account object:

```
use rust_sync_force::{Client, Error};
use serde::Deserialize;
use std::collections::HashMap;
use std::env;

// Structs like these can be used to deserialize Account records
// returned by queries
#[derive(Deserialize, Debug)]
#[serde(rename_all = "PascalCase")]
struct Account {
    #[serde(rename = "attributes")]
    attributes: Attribute,
    id: String,
    name: String,
}

#[derive(Deserialize, Debug)]
struct Attribute {
    url: String,
    #[serde(rename = "type")]
    sobject_type: String,
}

fn main() -> Result<(), Error> {
    let client_id = env::var("SFDC_CLIENT_ID").unwrap();
    let client_secret = env::var("SFDC_CLIENT_SECRET").unwrap();
    let username = env::var("SFDC_USERNAME").unwrap();
    let password = env::var("SFDC_PASSWORD").unwrap();

    let mut client = Client::new(Some(client_id), Some(client_secret));
    client.login_with_credential(username, password)?;

    let mut params = HashMap::new();
    params.insert("Name", "hello rust");

    let res = client.insert("Account", params)?;
    println!("{:?}", res);

    Ok(())
}
```

This example will listen to any change made on any Account record through the Bayeux protocol.

```
use rust_sync_force::stream::{CometdClient, StreamResponse};
use rust_sync_force::{Client, Error};
use serde::Deserialize;
use std::{collections::HashMap, env};

#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
pub struct SFChangeEventHeader {
    pub commitNumber: usize,
    pub commitUser: String,
    pub sequenceNumber: usize,
    pub entityName: String,
    pub changeType: String,
    pub commitTimestamp: usize,
    pub recordIds: Vec<String>,
}

#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
pub struct SFPayload {
    pub LastModifiedDate: String,
    pub ChangeEventHeader: SFChangeEventHeader,
}

pub fn listen_sf(mut client: CometdClient) {
    println!("Listen SF loop started");

    loop {
        let responses = client.connect();

        match responses {
            Ok(responses) => {
                for response in responses {
                    if let StreamResponse::Delivery(resp) = response {
                        match serde_json::from_value::<SFPayload>(resp.data.payload.clone()) {
                            Ok(data) => {
                                println!("Data: {:#?}", data);
Here you should have your patterns matching your own objects } Err(err) => { println!( "SF delivery data could not be parsed: {:?}\nData:{:?}", err, resp ) } } } } } Err(err) => println!("{}", err.to_string()), } } } fn main() -> Result<(), Error> { let client_id = env::var("SFDC_CLIENT_ID").unwrap(); let client_secret = env::var("SFDC_CLIENT_SECRET").unwrap(); let username = env::var("SFDC_USERNAME").unwrap(); let password = env::var("SFDC_PASSWORD").unwrap(); let mut client = Client::new(Some(client_id), Some(client_secret)); client.login_with_credential(username, password)?; let mut stream_client = rust_sync_force::stream::CometdClient::new( client, HashMap::from([("/data/AccountChangeEvent".to_string(), -1)]), ); stream_client.init().expect("Could not init cometd client"); println!("Cometd client successfully initialized"); listen_sf(stream_client); Ok(()) } ``` Modules --- * client * errors * response * stream * utils Type Definitions --- * Client * Error Crate rust_sync_force === Crate for interacting with the Salesforce API This crate includes the tools connecting to Salesforce and manipulating Salesforce objects Example --- The following example will connect to Salesforce and insert an Account object ``` use rust_sync_force::{Client, Error}; use serde::Deserialize; use std::collections::HashMap; use std::env; #[derive(Deserialize, Debug)] #[serde(rename_all = "PascalCase")] struct Account { #[serde(rename = "attributes")] attributes: Attribute, id: String, name: String, } #[derive(Deserialize, Debug)] struct Attribute { url: String, #[serde(rename = "type")] sobject_type: String, } fn main() -> Result<(), Error> { let client_id = env::var("SFDC_CLIENT_ID").unwrap(); let client_secret = env::var("SFDC_CLIENT_SECRET").unwrap(); let username = env::var("SFDC_USERNAME").unwrap(); let password = env::var("SFDC_PASSWORD").unwrap(); let mut client = Client::new(Some(client_id), Some(client_secret)); client.login_with_credential(username, password)?; let mut params = HashMap::new(); params.insert("Name", "hello rust"); let res = client.insert("Account", params)?; println!("{:?}", res); Ok(()) } ``` This example will listen to any change made on any Account records through the Bayeux protocol. ``` use rust_sync_force::stream::{CometdClient, StreamResponse}; use rust_sync_force::{Client, Error}; use serde::Deserialize; use std::{collections::HashMap, env}; #[derive(Debug, Deserialize)] #[allow(non_snake_case)] pub struct SFChangeEventHeader { pub commitNumber: usize, pub commitUser: String, pub sequenceNumber: usize, pub entityName: String, pub changeType: String, pub commitTimestamp: usize, pub recordIds: Vec<String>, } #[derive(Debug, Deserialize)] #[allow(non_snake_case)] pub struct SFPayload { pub LastModifiedDate: String, pub ChangeEventHeader: SFChangeEventHeader, } pub fn listen_sf(mut client: CometdClient) { println!("Listen SF loop started"); loop { let responses = client.connect(); match responses { Ok(responses) => { for response in responses { if let StreamResponse::Delivery(resp) = response { match serde_json::from_value::<SFMetadata>(resp.data.payload.clone()) { Ok(data) => { println!("Data: {:#?}", data); //! 
Here you should have your patterns matching your own objects } Err(err) => { println!( "SF delivery data could not be parsed: {:?}\nData:{:?}", err, resp ) } } } } } Err(err) => println!("{}", err.to_string()), } } } fn main() -> Result<(), Error> { let client_id = env::var("SFDC_CLIENT_ID").unwrap(); let client_secret = env::var("SFDC_CLIENT_SECRET").unwrap(); let username = env::var("SFDC_USERNAME").unwrap(); let password = env::var("SFDC_PASSWORD").unwrap(); let mut client = Client::new(Some(client_id), Some(client_secret)); client.login_with_credential(username, password)?; let mut stream_client = rust_sync_force::stream::CometdClient::new( client, HashMap::from([("/data/AccountChangeEvent".to_string(), -1)]), ); stream_client.init().expect("Could not init cometd client"); println!("Cometd client successfully initialized"); listen_sf(stream_client); Ok(()) } ``` Modules --- * client * errors * response * stream * utils Type Definitions --- * Client * Error
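Both examples above bootstrap the client the same way. As a small refactoring sketch (a hypothetical helper, not part of the crate, reusing only the `Client::new` and `login_with_credential` calls shown above), the credential handling can be factored out:

```
use rust_sync_force::{Client, Error};
use std::env;

// Hypothetical helper: build a logged-in client from the same
// SFDC_* environment variables used in the examples above.
fn client_from_env() -> Result<Client, Error> {
    let client_id = env::var("SFDC_CLIENT_ID").expect("SFDC_CLIENT_ID not set");
    let client_secret = env::var("SFDC_CLIENT_SECRET").expect("SFDC_CLIENT_SECRET not set");
    let username = env::var("SFDC_USERNAME").expect("SFDC_USERNAME not set");
    let password = env::var("SFDC_PASSWORD").expect("SFDC_PASSWORD not set");

    let mut client = Client::new(Some(client_id), Some(client_secret));
    client.login_with_credential(username, password)?;
    Ok(client)
}
```

`client_from_env()?` can then replace the four `env::var` lines in either example.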
coffeescript
devdocs
JavaScript
CoffeeScript ============ **CoffeeScript is a little language that compiles into JavaScript.** Underneath that awkward Java-esque patina, JavaScript has always had a gorgeous heart. CoffeeScript is an attempt to expose the good parts of JavaScript in a simple way. The golden rule of CoffeeScript is: *“It’s just JavaScript.”* The code compiles one-to-one into the equivalent JS, and there is no interpretation at runtime. You can use any existing JavaScript library seamlessly from CoffeeScript (and vice-versa). The compiled output is readable, pretty-printed, and tends to run as fast or faster than the equivalent handwritten JavaScript. **Latest Version:** [2.7.0](https://github.com/jashkenas/coffeescript/tarball/2.7.0) ``` # Install locally for a project: npm install --save-dev coffeescript # Install globally to execute .coffee files anywhere: npm install --global coffeescript ``` Overview -------- *The CoffeeScript source is shown first, with its compiled JavaScript output below it.* ``` # Assignment: number = 42 opposite = true # Conditions: number = -42 if opposite # Functions: square = (x) -> x * x # Arrays: list = [1, 2, 3, 4, 5] # Objects: math = root: Math.sqrt square: square cube: (x) -> x * square x # Splats: race = (winner, runners...) -> print winner, runners # Existence: alert "I knew it!" if elvis? # Array comprehensions: cubes = (math.cube num for num in list) ``` ``` // Assignment: var cubes, list, math, num, number, opposite, race, square; number = 42; opposite = true; if (opposite) { // Conditions: number = -42; } // Functions: square = function(x) { return x * x; }; // Arrays: list = [1, 2, 3, 4, 5]; // Objects: math = { root: Math.sqrt, square: square, cube: function(x) { return x * square(x); } }; // Splats: race = function(winner, ...runners) { return print(winner, runners); }; if (typeof elvis !== "undefined" && elvis !== null) { // Existence: alert("I knew it!"); } // Array comprehensions: cubes = (function() { var i, len, results; results = []; for (i = 0, len = list.length; i < len; i++) { num = list[i]; results.push(math.cube(num)); } return results; })(); ``` CoffeeScript 2 -------------- ### What’s New In CoffeeScript 2? The biggest change in CoffeeScript 2 is that now the CoffeeScript compiler produces modern JavaScript syntax (ES6, or ES2015 and later). A CoffeeScript `=>` becomes a JS `=>`, a CoffeeScript `class` becomes a JS `class` and so on. Major new features in CoffeeScript 2 include [async functions](#async-functions) and [JSX](#jsx). You can read more in the [announcement](https://coffeescript.org/announcing-coffeescript-2/). There are very few [breaking changes from CoffeeScript 1.x to 2](#breaking-changes); we hope the upgrade process is smooth for most projects. ### Compatibility Most modern JavaScript features that CoffeeScript supports can run natively in Node 7.6+, meaning that Node can run CoffeeScript’s output without any further processing required. Here are some notable exceptions: * [JSX](#jsx) always requires transpilation. * [Splats, a.k.a. object rest/spread syntax, for objects](https://coffeescript.org/#splats) are supported by Node 8.6+. * The [regular expression `s` (dotall) flag](https://github.com/tc39/proposal-regexp-dotall-flag) is supported by Node 9+. * [Async generator functions](https://github.com/tc39/proposal-async-iteration) are supported by Node 10+. * [Modules](#modules) are supported by Node 12+ with `"type": "module"` in your project’s `package.json`.
This list may be incomplete, and excludes versions of Node that support newer features behind flags; please refer to [node.green](http://node.green/) for full details. You can [run the tests in your browser](https://coffeescript.org/test.html) to see what your browser supports. It is your responsibility to ensure that your runtime supports the modern features you use, or that you [transpile](#transpilation) your code. When in doubt, transpile. For compatibility with other JavaScript frameworks and tools, see [Integrations](#integrations). Installation ------------ The command-line version of `coffee` is available as a [Node.js](https://nodejs.org/) utility, requiring Node 6 or later. The [core compiler](https://coffeescript.org/v2/browser-compiler-modern/coffeescript.js), however, does not depend on Node, and can be run in any JavaScript environment, or in the browser (see [Try CoffeeScript](#try)). To install, first make sure you have a working copy of the latest stable version of [Node.js](https://nodejs.org/). You can then install CoffeeScript globally with [npm](https://www.npmjs.com/): ``` npm install --global coffeescript ``` This will make the `coffee` and `cake` commands available globally. If you are using CoffeeScript in a project, you should install it locally for that project so that the version of CoffeeScript is tracked as one of your project’s dependencies. Within that project’s folder: ``` npm install --save-dev coffeescript ``` The `coffee` and `cake` commands will first look in the current folder to see if CoffeeScript is installed locally, and use that version if so. This allows different versions of CoffeeScript to be installed globally and locally. If you plan to use the `--transpile` option (see [Transpilation](#transpilation)) you will need to also install `@babel/core` either globally or locally, depending on whether you are running a globally or locally installed version of CoffeeScript. Usage ----- ### Command Line Once installed, you should have access to the `coffee` command, which can execute scripts, compile `.coffee` files into `.js`, and provide an interactive REPL. The `coffee` command takes the following options: | Option | Description | | --- | --- | | `-c, --compile` | Compile a `.coffee` script into a `.js` JavaScript file of the same name. | | `-t, --transpile` | Pipe the CoffeeScript compiler’s output through Babel before saving or running the generated JavaScript. Requires `@babel/core` to be installed, and options to pass to Babel in a `.babelrc` file or a `package.json` with a `babel` key in the path of the file or folder to be compiled. See [Transpilation](#transpilation). | | `-m, --map` | Generate source maps alongside the compiled JavaScript files. Adds `sourceMappingURL` directives to the JavaScript as well. | | `-M, --inline-map` | Just like `--map`, but include the source map directly in the compiled JavaScript files, rather than in a separate file. | | `-i, --interactive` | Launch an interactive CoffeeScript session to try short snippets. Identical to calling `coffee` with no arguments. | | `-o, --output [DIR]` | Write out all compiled JavaScript files into the specified directory. Use in conjunction with `--compile` or `--watch`. | | `-w, --watch` | Watch files for changes, rerunning the specified command when any file is updated. | | `-p, --print` | Instead of writing out the JavaScript as a file, print it directly to **stdout**. | | `-s, --stdio` | Pipe in CoffeeScript to STDIN and get back JavaScript over STDOUT.
Good for use with processes written in other languages. An example: `cat src/cake.coffee | coffee -sc` | | `-l, --literate` | Parses the code as Literate CoffeeScript. You only need to specify this when passing in code directly over **stdio**, or using some sort of extension-less file name. | | `-e, --eval` | Compile and print a little snippet of CoffeeScript directly from the command line. For example: `coffee -e "console.log num for num in [10..1]"` | | `-r, --require [MODULE]` | `require()` the given module before starting the REPL or evaluating the code given with the `--eval` flag. | | `-b, --bare` | Compile the JavaScript without the [top-level function safety wrapper](#lexical-scope). | | `--no-header` | Suppress the “Generated by CoffeeScript” header. | | `--nodejs` | The `node` executable has some useful options you can set, such as `--debug`, `--debug-brk`, `--max-stack-size`, and `--expose-gc`. Use this flag to forward options directly to Node.js. To pass multiple flags, use `--nodejs` multiple times. | | `--ast` | Generate an abstract syntax tree of nodes of the CoffeeScript. Used for integrating with JavaScript build tools. | | `--tokens` | Instead of parsing the CoffeeScript, just lex it, and print out the token stream. Used for debugging the compiler. | | `-n, --nodes` | Instead of compiling the CoffeeScript, just lex and parse it, and print out the parse tree. Used for debugging the compiler. | #### Examples: * Compile a directory tree of `.coffee` files in `src` into a parallel tree of `.js` files in `lib`: `coffee --compile --output lib/ src/` * Watch a file for changes, and recompile it every time the file is saved: `coffee --watch --compile experimental.coffee` * Concatenate a list of files into a single script: `coffee --join project.js --compile src/*.coffee` * Print out the compiled JS from a one-liner: `coffee -bpe "alert i for i in [0..10]"` * All together now, watch and recompile an entire project as you work on it: `coffee -o lib/ -cw src/` * Start the CoffeeScript REPL (`Ctrl-D` to exit, `Ctrl-V` for multi-line): `coffee` To use `--transpile`, see [Transpilation](#transpilation). ### Node.js If you’d like to use Node.js’ CommonJS to `require` CoffeeScript files, e.g. `require './app.coffee'`, you must first “register” CoffeeScript as an extension: ``` require 'coffeescript/register' App = require './app' # The .coffee extension is optional ``` If you want to use the compiler’s API, for example to make an app that compiles strings of CoffeeScript on the fly, you can `require` the full module: ``` CoffeeScript = require 'coffeescript' eval CoffeeScript.compile 'console.log "Mmmmm, I could really go for some #{Math.pi}"' ``` The `compile` method has the signature `compile(code, options)` where `code` is a string of CoffeeScript code, and the optional `options` is an object with some or all of the following properties: * `options.sourceMap`, boolean: if true, a source map will be generated; and instead of returning a string, `compile` will return an object of the form `{js, v3SourceMap, sourceMap}`. * `options.inlineMap`, boolean: if true, output the source map as a base64-encoded string in a comment at the bottom. * `options.filename`, string: the filename to use for the source map. It can include a path (relative or absolute). * `options.bare`, boolean: if true, output without the [top-level function safety wrapper](#lexical-scope). * `options.header`, boolean: if true, output the `Generated by CoffeeScript` header.
* `options.transpile`, **object**: if set, this must be an object with the [options to pass to Babel](https://babeljs.io/docs/usage/api/#options). See [Transpilation](#transpilation). * `options.ast`, boolean: if true, return an abstract syntax tree of the input CoffeeScript source code. ### Transpilation CoffeeScript 2 generates JavaScript that uses the latest, modern syntax. The runtime or browsers where you want your code to run [might not support all of that syntax](#compatibility). In that case, we want to convert modern JavaScript into older JavaScript that will run in older versions of Node or older browsers; for example, `{ a } = obj` into `a = obj.a`. This is done via transpilers like [Babel](https://babeljs.io/), [Bublé](https://buble.surge.sh/) or [Traceur Compiler](https://github.com/google/traceur-compiler). See [Build Tools](#build-tools). #### Quickstart From the root of your project: ``` npm install --save-dev @babel/core @babel/preset-env echo '{ "presets": ["@babel/env"] }' > .babelrc coffee --compile --transpile --inline-map some-file.coffee ``` #### Transpiling with the CoffeeScript compiler To make things easy, CoffeeScript has built-in support for the popular [Babel](https://babeljs.io/) transpiler. You can use it via the `--transpile` command-line option or the `transpile` Node API option. To use either, `@babel/core` must be installed in your project: ``` npm install --save-dev @babel/core ``` Or if you’re running the `coffee` command outside of a project folder, using a globally-installed `coffeescript` module, `@babel/core` needs to be installed globally: ``` npm install --global @babel/core ``` By default, Babel doesn’t do anything—it doesn’t make assumptions about what you want to transpile to. You need to provide it with a configuration so that it knows what to do. One way to do this is by creating a [`.babelrc` file](https://babeljs.io/docs/usage/babelrc/) in the folder containing the files you’re compiling, or in any parent folder up the path above those files. (Babel supports [other ways](https://babeljs.io/docs/usage/babelrc/), too.) A minimal `.babelrc` file would be just `{ "presets": ["@babel/env"] }`. This implies that you have installed [`@babel/preset-env`](https://babeljs.io/docs/plugins/preset-env/): ``` npm install --save-dev @babel/preset-env # Or --global for non-project-based usage ``` See [Babel’s website to learn about presets and plugins](https://babeljs.io/docs/plugins/) and the multitude of options you have. Another preset you might need is [`@babel/plugin-transform-react-jsx`](https://babeljs.io/docs/en/babel-plugin-transform-react-jsx/) if you’re using JSX with React (JSX can also be used with other frameworks). Once you have `@babel/core` and `@babel/preset-env` (or other presets or plugins) installed, and a `.babelrc` file (or other equivalent) in place, you can use `coffee --transpile` to pipe CoffeeScript’s output through Babel using the options you’ve saved. If you’re using CoffeeScript via the [Node API](https://coffeescript.org/nodejs_usage), where you call `CoffeeScript.compile` with a string to be compiled and an `options` object, the `transpile` key of the `options` object should be the Babel options: ``` CoffeeScript.compile(code, {transpile: {presets: ['@babel/env']}}) ``` You can also transpile CoffeeScript’s output without using the `transpile` option, for example as part of a build chain. This lets you use transpilers other than Babel, and it gives you greater control over the process. 
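For example, here is a minimal sketch of such a manual two-step chain, assuming `coffeescript`, `@babel/core`, and `@babel/preset-env` are installed locally (the snippet itself is illustrative, not from the official docs):

```
CoffeeScript = require 'coffeescript'
babel = require '@babel/core'

# Compile CoffeeScript to modern JavaScript first...
modernJs = CoffeeScript.compile 'square = (x) -> x * x', bare: yes

# ...then transpile that output with Babel as a separate step.
{code} = babel.transformSync modernJs, presets: ['@babel/env']
console.log code
```

This is equivalent in spirit to `coffee --compile --transpile`, but the Babel step stays under your control.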
There are many great task runners for setting up JavaScript build chains, such as [Gulp](http://gulpjs.com/), [Webpack](https://webpack.github.io/), [Grunt](https://gruntjs.com/) and [Broccoli](http://broccolijs.com/). #### Polyfills Note that transpiling doesn’t automatically supply [polyfills](https://developer.mozilla.org/en-US/docs/Glossary/Polyfill) for your code. CoffeeScript itself will output [`Array.indexOf`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/indexOf) if you use the `in` operator, or destructuring or spread/rest syntax; and [`Function.bind`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/bind) if you use a bound (`=>`) method in a class. Both are supported in Internet Explorer 9+ and all more recent browsers, but you will need to supply polyfills if you need to support Internet Explorer 8 or below and are using features that would cause these methods to be output. You’ll also need to supply polyfills if your own code uses these methods or another method added in recent versions of JavaScript. One polyfill option is [`@babel/polyfill`](https://babeljs.io/docs/en/babel-polyfill/), though there are many [other](https://hackernoon.com/polyfills-everything-you-ever-wanted-to-know-or-maybe-a-bit-less-7c8de164e423) [strategies](https://philipwalton.com/articles/loading-polyfills-only-when-needed/). Language Reference ------------------ *This reference is structured so that it can be read from top to bottom, if you like. Later sections use ideas and syntax previously introduced. Familiarity with JavaScript is assumed. In all of the following examples, the source CoffeeScript is shown first, and the direct compilation into JavaScript follows it.* First, the basics: CoffeeScript uses significant whitespace to delimit blocks of code. You don’t need to use semicolons `;` to terminate expressions, ending the line will do just as well (although semicolons can still be used to fit multiple expressions onto a single line). Instead of using curly braces `{ }` to surround blocks of code in [functions](#literals), [if-statements](#conditionals), [switch](#switch), and [try/catch](#try-catch), use indentation. You don’t need to use parentheses to invoke a function if you’re passing arguments. The implicit call wraps forward to the end of the line or block expression. `console.log sys.inspect object` → `console.log(sys.inspect(object));` Functions --------- Functions are defined by an optional list of parameters in parentheses, an arrow, and the function body. The empty function looks like this: `->` ``` square = (x) -> x * x cube = (x) -> square(x) * x ``` ``` var cube, square; square = function(x) { return x * x; }; cube = function(x) { return square(x) * x; }; ``` Functions may also have default values for arguments, which will be used if the incoming argument is missing (`undefined`). ``` fill = (container, liquid = "coffee") -> "Filling the #{container} with #{liquid}..." ``` ``` var fill; fill = function(container, liquid = "coffee") { return `Filling the ${container} with ${liquid}...`; }; ``` Strings ------- Like JavaScript and many other languages, CoffeeScript supports strings as delimited by the `"` or `'` characters.
CoffeeScript also supports string interpolation within `"`-quoted strings, using `#{ … }`. Single-quoted strings are literal. You may even use interpolation in object keys. ``` author = "Wittgenstein" quote = "A picture is a fact. -- #{ author }" sentence = "#{ 22 / 7 } is a decent approximation of π" ``` ``` var author, quote, sentence; author = "Wittgenstein"; quote = `A picture is a fact. -- ${author}`; sentence = `${22 / 7} is a decent approximation of π`; ``` Multiline strings are allowed in CoffeeScript. Lines are joined by a single space unless they end with a backslash. Indentation is ignored. ``` mobyDick = "Call me Ishmael. Some years ago -- never mind how long precisely -- having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world..." ``` ``` var mobyDick; mobyDick = "Call me Ishmael. Some years ago -- never mind how long precisely -- having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world..."; ``` Block strings, delimited by `"""` or `'''`, can be used to hold formatted or indentation-sensitive text (or, if you just don’t feel like escaping quotes and apostrophes). The indentation level that begins the block is maintained throughout, so you can keep it all aligned with the body of your code. ``` html = """ <strong> cup of coffeescript </strong> """ ``` ``` var html; html = `<strong> cup of coffeescript </strong>`; ``` Double-quoted block strings, like other double-quoted strings, allow interpolation. Objects and Arrays ------------------ The CoffeeScript literals for objects and arrays look very similar to their JavaScript cousins. When each property is listed on its own line, the commas are optional. Objects may be created using indentation instead of explicit braces, similar to [YAML](http://yaml.org). ``` song = ["do", "re", "mi", "fa", "so"] singers = {Jagger: "Rock", Elvis: "Roll"} bitlist = [ 1, 0, 1 0, 0, 1 1, 1, 0 ] kids = brother: name: "Max" age: 11 sister: name: "Ida" age: 9 ``` ``` var bitlist, kids, singers, song; song = ["do", "re", "mi", "fa", "so"]; singers = { Jagger: "Rock", Elvis: "Roll" }; bitlist = [1, 0, 1, 0, 0, 1, 1, 1, 0]; kids = { brother: { name: "Max", age: 11 }, sister: { name: "Ida", age: 9 } }; ``` CoffeeScript has a shortcut for creating objects when you want the key to be set with a variable of the same name. Note that the `{` and `}` are required for this shorthand. ``` name = "Michelangelo" mask = "orange" weapon = "nunchuks" turtle = {name, mask, weapon} output = "#{turtle.name} wears an #{turtle.mask} mask. Watch out for his #{turtle.weapon}!" ``` ``` var mask, name, output, turtle, weapon; name = "Michelangelo"; mask = "orange"; weapon = "nunchuks"; turtle = {name, mask, weapon}; output = `${turtle.name} wears an ${turtle.mask} mask. Watch out for his ${turtle.weapon}!`; ``` Comments -------- In CoffeeScript, comments are denoted by the `#` character to the end of a line, or from `###` to the next appearance of `###`. Comments are ignored by the compiler, though the compiler makes its best effort at reinserting your comments into the output JavaScript after compilation. ``` ### Fortune Cookie Reader v1.0 Released under the MIT License ### sayFortune = (fortune) -> console.log fortune # in bed! 
``` ``` /* Fortune Cookie Reader v1.0 Released under the MIT License */ var sayFortune; sayFortune = function(fortune) { return console.log(fortune); // in bed! }; ``` Inline `###` comments make [type annotations](#type-annotations) possible. Lexical Scoping and Variable Safety ----------------------------------- The CoffeeScript compiler takes care to make sure that all of your variables are properly declared within lexical scope — you never need to write `var` yourself. ``` outer = 1 changeNumbers = -> inner = -1 outer = 10 inner = changeNumbers() ``` ``` var changeNumbers, inner, outer; outer = 1; changeNumbers = function() { var inner; inner = -1; return outer = 10; }; inner = changeNumbers(); ``` Notice how all of the variable declarations have been pushed up to the top of the closest scope, the first time they appear. `outer` is not redeclared within the inner function, because it’s already in scope; `inner` within the function, on the other hand, should not be able to change the value of the external variable of the same name, and therefore has a declaration of its own. Because you don’t have direct access to the `var` keyword, it’s impossible to shadow an outer variable on purpose; you may only refer to it. So be careful that you’re not reusing the name of an external variable accidentally, if you’re writing a deeply nested function. Although suppressed within this documentation for clarity, all CoffeeScript output (except in files with `import` or `export` statements) is wrapped in an anonymous function: `(function(){ … })();`. This safety wrapper, combined with the automatic generation of the `var` keyword, makes it exceedingly difficult to pollute the global namespace by accident. (The safety wrapper can be disabled with the [`bare` option](#usage), and is unnecessary and automatically disabled when using modules.) If you’d like to create top-level variables for other scripts to use, attach them as properties on `window`; attach them as properties on the `exports` object in CommonJS; or use an [`export` statement](#modules). If you’re targeting both CommonJS and the browser, the [existential operator](#existential-operator) (covered below) gives you a reliable way to figure out where to add them: `exports ? this`. Since CoffeeScript takes care of all variable declaration, it is not possible to declare variables with ES2015’s `let` or `const`. [This is intentional](#unsupported-let-const); we feel that the simplicity gained by not having to think about variable declaration outweighs the benefit of having three separate ways to declare variables. If, Else, Unless, and Conditional Assignment -------------------------------------------- `if`/`else` statements can be written without the use of parentheses and curly brackets. As with functions and other block expressions, multi-line conditionals are delimited by indentation. There’s also a handy postfix form, with the `if` or `unless` at the end. CoffeeScript can compile `if` statements into JavaScript expressions, using the ternary operator when possible, and closure wrapping otherwise. There is no explicit ternary statement in CoffeeScript — you simply use a regular `if` statement on a single line. ``` mood = greatlyImproved if singing if happy and knowsIt clapsHands() chaChaCha() else showIt() date = if friday then sue else jill ``` ``` var date, mood; if (singing) { mood = greatlyImproved; } if (happy && knowsIt) { clapsHands(); chaChaCha(); } else { showIt(); } date = friday ?
sue : jill; ``` Splats, or Rest Parameters/Spread Syntax ---------------------------------------- The JavaScript `arguments` object is a useful way to work with functions that accept variable numbers of arguments. CoffeeScript provides splats `...`, both for function definition as well as invocation, making variable numbers of arguments a little bit more palatable. ES2015 adopted this feature as their [rest parameters](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/rest_parameters). ``` gold = silver = rest = "unknown" awardMedals = (first, second, others...) -> gold = first silver = second rest = others contenders = [ "<NAME>" "<NAME>" "<NAME>" "<NAME>" "<NAME>" "<NAME>" "<NAME>" "<NAME>" "<NAME>" "<NAME>" ] awardMedals contenders... alert """ Gold: #{gold} Silver: #{silver} The Field: #{rest.join ', '} """ ``` ``` var awardMedals, contenders, gold, rest, silver; gold = silver = rest = "unknown"; awardMedals = function(first, second, ...others) { gold = first; silver = second; return rest = others; }; contenders = ["<NAME>", "<NAME>", "<NAME>", "<NAME>", "<NAME>", "<NAME>", "<NAME>", "<NAME>", "<NAME>", "<NAME>"]; awardMedals(...contenders); alert(`Gold: ${gold} Silver: ${silver} The Field: ${rest.join(', ')}`); ``` Splats also let us elide array elements… ``` popular = ['pepperoni', 'sausage', 'cheese'] unwanted = ['anchovies', 'olives'] all = [popular..., unwanted..., 'mushrooms'] ``` ``` var all, popular, unwanted; popular = ['pepperoni', 'sausage', 'cheese']; unwanted = ['anchovies', 'olives']; all = [...popular, ...unwanted, 'mushrooms']; ``` …and object properties. ``` user = name: '<NAME>' occupation: 'theoretical physicist' currentUser = { user..., status: 'Uncertain' } ``` ``` var currentUser, user; user = { name: '<NAME>', occupation: 'theoretical physicist' }; currentUser = { ...user, status: 'Uncertain' }; ``` In ECMAScript this is called [spread syntax](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_operator), and has been supported for arrays since ES2015 and objects since ES2018. Loops and Comprehensions ------------------------ Most of the loops you’ll write in CoffeeScript will be **comprehensions** over arrays, objects, and ranges. Comprehensions replace (and compile into) `for` loops, with optional guard clauses and the value of the current array index. Unlike for loops, array comprehensions are expressions, and can be returned and assigned. ``` # Eat lunch. eat = (food) -> "#{food} eaten." eat food for food in ['toast', 'cheese', 'wine'] # Fine five course dining. courses = ['greens', 'caviar', 'truffles', 'roast', 'cake'] menu = (i, dish) -> "Menu Item #{i}: #{dish}" menu i + 1, dish for dish, i in courses # Health conscious meal. foods = ['broccoli', 'spinach', 'chocolate'] eat food for food in foods when food isnt 'chocolate' ``` ``` // Eat lunch. var courses, dish, eat, food, foods, i, j, k, l, len, len1, len2, menu, ref; eat = function(food) { return `${food} eaten.`; }; ref = ['toast', 'cheese', 'wine']; for (j = 0, len = ref.length; j < len; j++) { food = ref[j]; eat(food); } // Fine five course dining. courses = ['greens', 'caviar', 'truffles', 'roast', 'cake']; menu = function(i, dish) { return `Menu Item ${i}: ${dish}`; }; for (i = k = 0, len1 = courses.length; k < len1; i = ++k) { dish = courses[i]; menu(i + 1, dish); } // Health conscious meal. 
foods = ['broccoli', 'spinach', 'chocolate']; for (l = 0, len2 = foods.length; l < len2; l++) { food = foods[l]; if (food !== 'chocolate') { eat(food); } } ``` Comprehensions should be able to handle most places where you otherwise would use a loop, `each`/`forEach`, `map`, or `select`/`filter`, for example: `shortNames = (name for name in list when name.length < 5)` If you know the start and end of your loop, or would like to step through in fixed-size increments, you can use a range to specify the start and end of your comprehension. ``` countdown = (num for num in [10..1]) ``` ``` var countdown, num; countdown = (function() { var i, results; results = []; for (num = i = 10; i >= 1; num = --i) { results.push(num); } return results; })(); ``` Note how because we are assigning the value of the comprehensions to a variable in the example above, CoffeeScript is collecting the result of each iteration into an array. Sometimes functions end with loops that are intended to run only for their side-effects. Be careful that you’re not accidentally returning the results of the comprehension in these cases, by adding a meaningful return value — like `true` — or `null`, to the bottom of your function. To step through a range comprehension in fixed-size chunks, use `by`, for example: `evens = (x for x in [0..10] by 2)` If you don’t need the current iteration value you may omit it: `browser.closeCurrentTab() for [0...count]` Comprehensions can also be used to iterate over the keys and values in an object. Use `of` to signal comprehension over the properties of an object instead of the values in an array. ``` yearsOld = max: 10, ida: 9, tim: 11 ages = for child, age of yearsOld "#{child} is #{age}" ``` ``` var age, ages, child, yearsOld; yearsOld = { max: 10, ida: 9, tim: 11 }; ages = (function() { var results; results = []; for (child in yearsOld) { age = yearsOld[child]; results.push(`${child} is ${age}`); } return results; })(); ``` If you would like to iterate over just the keys that are defined on the object itself, by adding a `hasOwnProperty` check to avoid properties that may be inherited from the prototype, use `for own key, value of object`. To iterate a generator function, use `from`. See [Generator Functions](#generator-iteration). The only low-level loop that CoffeeScript provides is the `while` loop. The main difference from JavaScript is that the `while` loop can be used as an expression, returning an array containing the result of each iteration through the loop. ``` # Econ 101 if this.studyingEconomics buy() while supply > demand sell() until supply > demand # Nursery Rhyme num = 6 lyrics = while num -= 1 "#{num} little monkeys, jumping on the bed. One fell out and bumped his head." ``` ``` // Econ 101 var lyrics, num; if (this.studyingEconomics) { while (supply > demand) { buy(); } while (!(supply > demand)) { sell(); } } // Nursery Rhyme num = 6; lyrics = (function() { var results; results = []; while (num -= 1) { results.push(`${num} little monkeys, jumping on the bed. One fell out and bumped his head.`); } return results; })(); ``` For readability, the `until` keyword is equivalent to `while not`, and the `loop` keyword is equivalent to `while true`. When using a JavaScript loop to generate functions, it’s common to insert a closure wrapper in order to ensure that loop variables are closed over, and all the generated functions don’t just share the final values. CoffeeScript provides the `do` keyword, which immediately invokes a passed function, forwarding any arguments. 
``` for filename in list do (filename) -> if filename not in ['.DS_Store', 'Thumbs.db', 'ehthumbs.db'] fs.readFile filename, (err, contents) -> compile filename, contents.toString() ``` ``` var filename, i, len; for (i = 0, len = list.length; i < len; i++) { filename = list[i]; (function(filename) { if (filename !== '.DS_Store' && filename !== 'Thumbs.db' && filename !== 'ehthumbs.db') { return fs.readFile(filename, function(err, contents) { return compile(filename, contents.toString()); }); } })(filename); } ``` Array Slicing and Splicing with Ranges -------------------------------------- Ranges can also be used to extract slices of arrays. With two dots (`3..6`), the range is inclusive (`3, 4, 5, 6`); with three dots (`3...6`), the range excludes the end (`3, 4, 5`). Slice indices have useful defaults. An omitted first index defaults to zero and an omitted second index defaults to the size of the array. ``` numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9] start = numbers[0..2] middle = numbers[3...-2] end = numbers[-2..] copy = numbers[..] ``` ``` var copy, end, middle, numbers, start; numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9]; start = numbers.slice(0, 3); middle = numbers.slice(3, -2); end = numbers.slice(-2); copy = numbers.slice(0); ``` The same syntax can be used with assignment to replace a segment of an array with new values, splicing it. ``` numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] numbers[3..6] = [-3, -4, -5, -6] ``` ``` var numbers, ref, splice = [].splice; numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]; splice.apply(numbers, [3, 4].concat(ref = [-3, -4, -5, -6])), ref; ``` Note that JavaScript strings are immutable, and can’t be spliced. Everything is an Expression (at least, as much as possible) ----------------------------------------------------------- You might have noticed how even though we don’t add return statements to CoffeeScript functions, they nonetheless return their final value. The CoffeeScript compiler tries to make sure that all statements in the language can be used as expressions. Watch how the `return` gets pushed down into each possible branch of execution in the function below. ``` grade = (student) -> if student.excellentWork "A+" else if student.okayStuff if student.triedHard then "B" else "B-" else "C" eldest = if 24 > 21 then "Liz" else "Ike" ``` ``` var eldest, grade; grade = function(student) { if (student.excellentWork) { return "A+"; } else if (student.okayStuff) { if (student.triedHard) { return "B"; } else { return "B-"; } } else { return "C"; } }; eldest = 24 > 21 ? "Liz" : "Ike"; ``` Even though functions will always return their final value, it’s both possible and encouraged to return early from a function body by writing out the explicit return (`return value`), when you know that you’re done. Because variable declarations occur at the top of scope, assignment can be used within expressions, even for variables that haven’t been seen before: ``` six = (one = 1) + (two = 2) + (three = 3) ``` ``` var one, six, three, two; six = (one = 1) + (two = 2) + (three = 3); ``` Things that would otherwise be statements in JavaScript, when used as part of an expression in CoffeeScript, are converted into expressions by wrapping them in a closure. This lets you do useful things, like assign the result of a comprehension to a variable: ``` # The first ten global properties. globals = (name for name of window)[0...10] ``` ``` // The first ten global properties.
var globals, name; globals = ((function() { var results; results = []; for (name in window) { results.push(name); } return results; })()).slice(0, 10); ``` As well as silly things, like passing a `try`/`catch` statement directly into a function call: ``` alert( try nonexistent / undefined catch error "And the error is ... #{error}" ) ``` ``` var error; alert((function() { try { return nonexistent / void 0; } catch (error1) { error = error1; return `And the error is ... ${error}`; } })()); ``` There are a handful of statements in JavaScript that can’t be meaningfully converted into expressions, namely `break`, `continue`, and `return`. If you make use of them within a block of code, CoffeeScript won’t try to perform the conversion. Operators and Aliases --------------------- Because the `==` operator frequently causes undesirable coercion, is intransitive, and has a different meaning than in other languages, CoffeeScript compiles `==` into `===`, and `!=` into `!==`. In addition, `is` compiles into `===`, and `isnt` into `!==`. You can use `not` as an alias for `!`. For logic, `and` compiles to `&&`, and `or` into `||`. Instead of a newline or semicolon, `then` can be used to separate conditions from expressions, in `while`, `if`/`else`, and `switch`/`when` statements. As in [YAML](http://yaml.org/), `on` and `yes` are the same as boolean `true`, while `off` and `no` are boolean `false`. `unless` can be used as the inverse of `if`. As a shortcut for `this.property`, you can use `@property`. You can use `in` to test for array presence, and `of` to test for JavaScript object-key presence. In a `for` loop, `from` compiles to the [ES2015 `of`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...of). (Yes, it’s unfortunate; the CoffeeScript `of` predates the ES2015 `of`.) To simplify math expressions, `**` can be used for exponentiation and `//` performs floor division. 
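For instance (an illustrative snippet, not part of the original reference):

```
# ** compiles to JS exponentiation; // compiles to floored division
cube = 3 ** 3     # 27
halved = 7 // 2   # 3, i.e. Math.floor(7 / 2)
```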
`%` works just like in JavaScript, while `%%` provides [“dividend dependent modulo”](https://en.wikipedia.org/wiki/Modulo_operation): ``` -7 % 5 == -2 # The remainder of 7 / 5 -7 %% 5 == 3 # n %% 5 is always between 0 and 4 tabs.selectTabAtIndex((tabs.currentIndex - count) %% tabs.length) ``` ``` var modulo = function(a, b) { return (+a % (b = +b) + b) % b; }; -7 % 5 === -2; // The remainder of 7 / 5 modulo(-7, 5) === 3; // n %% 5 is always between 0 and 4 tabs.selectTabAtIndex(modulo(tabs.currentIndex - count, tabs.length)); ``` All together now: | CoffeeScript | JavaScript | | --- | --- | | `is` | `===` | | `isnt` | `!==` | | `not` | `!` | | `and` | `&&` | | `or` | `||` | | `true`, `yes`, `on` | `true` | | `false`, `no`, `off` | `false` | | `@`, `this` | `this` | | `a in b` | `[].indexOf.call(b, a) >= 0` | | `a of b` | `a in b` | | `for a from b` | `for (a of b)` | | `a ** b` | `a ** b` | | `a // b` | `Math.floor(a / b)` | | `a %% b` | `(a % b + b) % b` | ``` launch() if ignition is on volume = 10 if band isnt SpinalTap letTheWildRumpusBegin() unless answer is no if car.speed < limit then accelerate() winner = yes if pick in [47, 92, 13] print inspect "My name is #{@name}" ``` ``` var volume, winner; if (ignition === true) { launch(); } if (band !== SpinalTap) { volume = 10; } if (answer !== false) { letTheWildRumpusBegin(); } if (car.speed < limit) { accelerate(); } if (pick === 47 || pick === 92 || pick === 13) { winner = true; } print(inspect(`My name is ${this.name}`)); ``` The Existential Operator ------------------------ It’s a little difficult to check for the existence of a variable in JavaScript. `if (variable) …` comes close, but fails for zero, the empty string, and false (to name just the most common cases). CoffeeScript’s existential operator `?` returns true unless a variable is `null` or `undefined` or undeclared, which makes it analogous to Ruby’s `nil?`. It can also be used for safer conditional assignment than the JavaScript pattern `a = a || value` provides, for cases where you may be handling numbers or strings. ``` solipsism = true if mind? and not world? speed = 0 speed ?= 15 footprints = yeti ? "bear" ``` ``` var footprints, solipsism, speed; if ((typeof mind !== "undefined" && mind !== null) && (typeof world === "undefined" || world === null)) { solipsism = true; } speed = 0; if (speed == null) { speed = 15; } footprints = typeof yeti !== "undefined" && yeti !== null ? yeti : "bear"; ``` Note that if the compiler knows that `a` is in scope and therefore declared, `a?` compiles to `a != null`, *not* `a !== null`. The `!=` makes a loose comparison to `null`, which does double duty also comparing against `undefined`. The reverse also holds for `not a?` or `unless a?`. ``` major = 'Computer Science' unless major? signUpForClass 'Introduction to Wines' ``` ``` var major; major = 'Computer Science'; if (major == null) { signUpForClass('Introduction to Wines'); } ``` If a variable might be undeclared, the compiler does a thorough check. This is what JavaScript coders *should* be typing when they want to check if a mystery variable exists. ``` if window? environment = 'browser (probably)' ``` ``` var environment; if (typeof window !== "undefined" && window !== null) { environment = 'browser (probably)'; } ``` The accessor variant of the existential operator `?.` can be used to soak up null references in a chain of properties. Use it instead of the dot accessor `.` in cases where the base value may be `null` or `undefined`. 
If all of the properties exist then you’ll get the expected result; if the chain is broken, `undefined` is returned instead of the `TypeError` that would be raised otherwise. ``` zip = lottery.drawWinner?().address?.zipcode ``` ``` var ref, zip; zip = typeof lottery.drawWinner === "function" ? (ref = lottery.drawWinner().address) != null ? ref.zipcode : void 0 : void 0; ``` For completeness: | Example | Definition | | --- | --- | | `a?` | tests that `a` is in scope and `a != null` | | `a ? b` | returns `a` if `a` is in scope and `a != null`; otherwise, `b` | | `a?.b` or `a?['b']` | returns `a.b` if `a` is in scope and `a != null`; otherwise, `undefined` | | `a?(b, c)` or `a? b, c` | returns the result of calling `a` (with arguments `b` and `c`) if `a` is in scope and callable; otherwise, `undefined` | | `a ?= b` | assigns the value of `b` to `a` if `a` is not in scope or if `a == null`; produces the new value of `a` | Chaining Function Calls ----------------------- Leading `.` closes all open calls, allowing for simpler chaining syntax. ``` $ 'body' .click (e) -> $ '.box' .fadeIn 'fast' .addClass 'show' .css 'background', 'white' ``` ``` $('body').click(function(e) { return $('.box').fadeIn('fast').addClass('show'); }).css('background', 'white'); ``` Destructuring Assignment ------------------------ Just like JavaScript (since ES2015), CoffeeScript has destructuring assignment syntax. When you assign an array or object literal to a value, CoffeeScript breaks up and matches both sides against each other, assigning the values on the right to the variables on the left. In the simplest case, it can be used for parallel assignment: ``` theBait = 1000 theSwitch = 0 [theBait, theSwitch] = [theSwitch, theBait] ``` ``` var theBait, theSwitch; theBait = 1000; theSwitch = 0; [theBait, theSwitch] = [theSwitch, theBait]; ``` But it’s also helpful for dealing with functions that return multiple values. ``` weatherReport = (location) -> # Make an Ajax request to fetch the weather... [location, 72, "Mostly Sunny"] [city, temp, forecast] = weatherReport "Berkeley, CA" ``` ``` var city, forecast, temp, weatherReport; weatherReport = function(location) { // Make an Ajax request to fetch the weather... return [location, 72, "Mostly Sunny"]; }; [city, temp, forecast] = weatherReport("Berkeley, CA"); ``` Destructuring assignment can be used with any depth of array and object nesting, to help pull out deeply nested properties. ``` futurists = sculptor: "<NAME>" painter: "<NAME>" poet: name: "<NAME>" address: [ "Via Roma 42R" "Bellagio, Italy 22021" ] {sculptor} = futurists {poet: {name, address: [street, city]}} = futurists ``` ``` var city, futurists, name, sculptor, street; futurists = { sculptor: "<NAME>", painter: "<NAME>", poet: { name: "<NAME>", address: ["Via Roma 42R", "Bellagio, Italy 22021"] } }; ({sculptor} = futurists); ({ poet: { name, address: [street, city] } } = futurists); ``` Destructuring assignment can even be combined with splats. ``` tag = "<impossible>" [open, contents..., close] = tag.split("") ``` ``` var close, contents, open, ref, tag, splice = [].splice; tag = "<impossible>"; ref = tag.split(""), [open, ...contents] = ref, [close] = splice.call(contents, -1); ``` Expansion can be used to retrieve elements from the end of an array without having to assign the rest of its values. It works in function parameter lists as well.
``` text = "Every literary critic believes he will outwit history and have the last word" [first, ..., last] = text.split " " ``` ``` var first, last, ref, text, slice = [].slice; text = "Every literary critic believes he will outwit history and have the last word"; ref = text.split(" "), [first] = ref, [last] = slice.call(ref, -1); ``` Destructuring assignment is also useful when combined with class constructors to assign properties to your instance from an options object passed to the constructor. ``` class Person constructor: (options) -> {@name, @age, @height = 'average'} = options tim = new Person name: 'Tim', age: 4 ``` ``` var Person, tim; Person = class Person { constructor(options) { ({name: this.name, age: this.age, height: this.height = 'average'} = options); } }; tim = new Person({ name: 'Tim', age: 4 }); ``` The above example also demonstrates that if properties are missing in the destructured object or array, you can, just like in JavaScript, provide defaults. Note though that unlike with the existential operator, the default is only applied with the value is missing or `undefined`—[passing `null` will set a value of `null`](#breaking-changes-default-values), not the default. Bound (Fat Arrow) Functions --------------------------- In JavaScript, the `this` keyword is dynamically scoped to mean the object that the current function is attached to. If you pass a function as a callback or attach it to a different object, the original value of `this` will be lost. If you’re not familiar with this behavior, [this Digital Web article](https://web.archive.org/web/20150316122013/http://www.digital-web.com/articles/scope_in_javascript) gives a good overview of the quirks. The fat arrow `=>` can be used to both define a function, and to bind it to the current value of `this`, right on the spot. This is helpful when using callback-based libraries like Prototype or jQuery, for creating iterator functions to pass to `each`, or event-handler functions to use with `on`. Functions created with the fat arrow are able to access properties of the `this` where they’re defined. ``` Account = (customer, cart) -> @customer = customer @cart = cart $('.shopping_cart').on 'click', (event) => @customer.purchase @cart ``` ``` var Account; Account = function(customer, cart) { this.customer = customer; this.cart = cart; return $('.shopping_cart').on('click', (event) => { return this.customer.purchase(this.cart); }); }; ``` If we had used `->` in the callback above, `@customer` would have referred to the undefined “customer” property of the DOM element, and trying to call `purchase()` on it would have raised an exception. The fat arrow was one of the most popular features of CoffeeScript, and ES2015 [adopted it](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions); so CoffeeScript 2 compiles `=>` to ES `=>`. Generator Functions ------------------- CoffeeScript supports ES2015 [generator functions](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/function*) through the `yield` keyword. There’s no `function*(){}` nonsense — a generator in CoffeeScript is simply a function that yields. 
``` perfectSquares = -> num = 0 loop num += 1 yield num * num return window.ps or= perfectSquares() ``` ``` var perfectSquares; perfectSquares = function*() { var num; num = 0; while (true) { num += 1; yield num * num; } }; window.ps || (window.ps = perfectSquares()); ``` `yield*` is called `yield from`, and `yield return` may be used if you need to force a generator that doesn’t yield. You can iterate over a generator function using `for…from`. ``` fibonacci = -> [previous, current] = [1, 1] loop [previous, current] = [current, previous + current] yield current return getFibonacciNumbers = (length) -> results = [1] for n from fibonacci() results.push n break if results.length is length results ``` ``` var fibonacci, getFibonacciNumbers; fibonacci = function*() { var current, previous; [previous, current] = [1, 1]; while (true) { [previous, current] = [current, previous + current]; yield current; } }; getFibonacciNumbers = function(length) { var n, results; results = [1]; for (n of fibonacci()) { results.push(n); if (results.length === length) { break; } } return results; }; ``` Async Functions --------------- ES2017’s [async functions](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) are supported through the `await` keyword. Like with generators, there’s no need for an `async` keyword; an async function in CoffeeScript is simply a function that awaits. Similar to how `yield return` forces a generator, `await return` may be used to force a function to be async. ``` # Your browser must support async/await and speech synthesis # to run this example. sleep = (ms) -> new Promise (resolve) -> window.setTimeout resolve, ms say = (text) -> window.speechSynthesis.cancel() window.speechSynthesis.speak new SpeechSynthesisUtterance text countdown = (seconds) -> for i in [seconds..1] say i await sleep 1000 # wait one second say "Blastoff!" countdown 3 ``` ``` // Your browser must support async/await and speech synthesis // to run this example. var countdown, say, sleep; sleep = function(ms) { return new Promise(function(resolve) { return window.setTimeout(resolve, ms); }); }; say = function(text) { window.speechSynthesis.cancel(); return window.speechSynthesis.speak(new SpeechSynthesisUtterance(text)); }; countdown = async function(seconds) { var i, j, ref; for (i = j = ref = seconds; (ref <= 1 ? j <= 1 : j >= 1); i = ref <= 1 ? ++j : --j) { say(i); await sleep(1000); // wait one second } return say("Blastoff!"); }; countdown(3); ``` Classes ------- CoffeeScript 1 provided the `class` and `extends` keywords as syntactic sugar for working with prototypal functions. With ES2015, JavaScript has adopted those keywords; so CoffeeScript 2 compiles its `class` and `extends` keywords to ES2015 classes. ``` class Animal constructor: (@name) -> move: (meters) -> alert @name + " moved #{meters}m." class Snake extends Animal move: -> alert "Slithering..." super 5 class Horse extends Animal move: -> alert "Galloping..."
super 45 sam = new Snake "<NAME>" tom = new Horse "<NAME>" sam.move() tom.move() ``` ``` var Animal, Horse, Snake, sam, tom; Animal = class Animal { constructor(name) { this.name = name; } move(meters) { return alert(this.name + ` moved ${meters}m.`); } }; Snake = class Snake extends Animal { move() { alert("Slithering..."); return super.move(5); } }; Horse = class Horse extends Animal { move() { alert("Galloping..."); return super.move(45); } }; sam = new Snake("<NAME>"); tom = new Horse("<NAME>"); sam.move(); tom.move(); ``` Static methods can be defined using `@` before the method name: ``` class Teenager @say: (speech) -> words = speech.split ' ' fillers = ['uh', 'um', 'like', 'actually', 'so', 'maybe'] output = [] for word, index in words output.push word output.push fillers[Math.floor(Math.random() * fillers.length)] unless index is words.length - 1 output.join ', ' ``` ``` var Teenager; Teenager = class Teenager { static say(speech) { var fillers, i, index, len, output, word, words; words = speech.split(' '); fillers = ['uh', 'um', 'like', 'actually', 'so', 'maybe']; output = []; for (index = i = 0, len = words.length; i < len; index = ++i) { word = words[index]; output.push(word); if (index !== words.length - 1) { output.push(fillers[Math.floor(Math.random() * fillers.length)]); } } return output.join(', '); } }; ``` Finally, class definitions are blocks of executable code, which make for interesting metaprogramming possibilities. In the context of a class definition, `this` is the class object itself; therefore, you can assign static properties by using `@property: value`. Prototypal Inheritance ---------------------- In addition to supporting ES2015 classes, CoffeeScript provides a shortcut for working with prototypes. The `::` operator gives you quick access to an object’s prototype: ``` String::dasherize = -> this.replace /_/g, "-" ``` ``` String.prototype.dasherize = function() { return this.replace(/_/g, "-"); }; ``` Switch/When/Else ---------------- `switch` statements in JavaScript are a bit awkward. You need to remember to `break` at the end of every `case` statement to avoid accidentally falling through to the default case. CoffeeScript prevents accidental fall-through, and can convert the `switch` into a returnable, assignable expression. The format is: `switch` condition, `when` clauses, `else` the default case. As in Ruby, `switch` statements in CoffeeScript can take multiple values for each `when` clause. If any of the values match, the clause runs. ``` switch day when "Mon" then go work when "Tue" then go relax when "Thu" then go iceFishing when "Fri", "Sat" if day is bingoDay go bingo go dancing when "Sun" then go church else go work ``` ``` switch (day) { case "Mon": go(work); break; case "Tue": go(relax); break; case "Thu": go(iceFishing); break; case "Fri": case "Sat": if (day === bingoDay) { go(bingo); go(dancing); } break; case "Sun": go(church); break; default: go(work); } ``` `switch` statements can also be used without a control expression, turning them into a cleaner alternative to `if`/`else` chains.
``` score = 76 grade = switch when score < 60 then 'F' when score < 70 then 'D' when score < 80 then 'C' when score < 90 then 'B' else 'A' # grade == 'C' ``` ``` var grade, score; score = 76; grade = (function() { switch (false) { case !(score < 60): return 'F'; case !(score < 70): return 'D'; case !(score < 80): return 'C'; case !(score < 90): return 'B'; default: return 'A'; } })(); // grade == 'C' ``` Try/Catch/Finally ----------------- `try` expressions have the same semantics as `try` statements in JavaScript, though in CoffeeScript, you may omit *both* the catch and finally parts. The catch part may also omit the error parameter if it is not needed. ``` try allHellBreaksLoose() catsAndDogsLivingTogether() catch error print error finally cleanUp() ``` ``` var error; try { allHellBreaksLoose(); catsAndDogsLivingTogether(); } catch (error1) { error = error1; print(error); } finally { cleanUp(); } ``` Chained Comparisons ------------------- CoffeeScript borrows [chained comparisons](https://docs.python.org/3/reference/expressions.html#not-in) from Python — making it easy to test if a value falls within a certain range. ``` cholesterol = 127 healthy = 200 > cholesterol > 60 ``` ``` var cholesterol, healthy; cholesterol = 127; healthy = (200 > cholesterol && cholesterol > 60); ``` Block Regular Expressions ------------------------- Similar to block strings and comments, CoffeeScript supports block regexes — extended regular expressions that ignore internal whitespace and can contain comments and interpolation. Modeled after Perl’s `/x` modifier, CoffeeScript’s block regexes are delimited by `///` and go a long way towards making complex regular expressions readable. To quote from the CoffeeScript source: ``` NUMBER = /// ^ 0b[01]+ | # binary ^ 0o[0-7]+ | # octal ^ 0x[\da-f]+ | # hex ^ \d*\.?\d+ (?:e[+-]?\d+)? # decimal ///i ``` ``` var NUMBER; NUMBER = /^0b[01]+|^0o[0-7]+|^0x[\da-f]+|^\d*\.?\d+(?:e[+-]?\d+)?/i; // binary // octal // hex // decimal ``` Tagged Template Literals ------------------------ CoffeeScript supports [ES2015 tagged template literals](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Template_literals#Tagged_template_literals), which enable customized string interpolation. If you immediately prefix a string with a function name (no space between the two), CoffeeScript will output this “function plus string” combination as an ES2015 tagged template literal, which will [behave accordingly](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Template_literals#Tagged_template_literals): the function is called, with the parameters being the input text and expression parts that make up the interpolated string. The function can then assemble these parts into an output string, providing custom string interpolation. ``` upperCaseExpr = (textParts, expressions...) -> textParts.reduce (text, textPart, i) -> text + expressions[i - 1].toUpperCase() + textPart greet = (name, adjective) -> upperCaseExpr""" Hi #{name}. You look #{adjective}! """ ``` ``` var greet, upperCaseExpr; upperCaseExpr = function(textParts, ...expressions) { return textParts.reduce(function(text, textPart, i) { return text + expressions[i - 1].toUpperCase() + textPart; }); }; greet = function(name, adjective) { return upperCaseExpr`Hi ${name}. 
You look ${adjective}!`; }; ``` Modules ------- [ES2015 modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules) are supported in CoffeeScript, with very similar `import` and `export` syntax: ``` import './local-file.js' # Must be the filename of the generated file import 'package' import _ from 'underscore' import * as underscore from 'underscore' import { now } from 'underscore' import { now as currentTimestamp } from 'underscore' import { first, last } from 'underscore' import utilityBelt, { each } from 'underscore' import dates from './calendar.json' assert { type: 'json' } export default Math export square = (x) -> x * x export class Mathematics least: (x, y) -> if x < y then x else y export { sqrt } export { sqrt as squareRoot } export { Mathematics as default, sqrt as squareRoot } export * from 'underscore' export { max, min } from 'underscore' export { version } from './package.json' assert { type: 'json' } ``` ``` import './local-file.js'; import 'package'; import _ from 'underscore'; import * as underscore from 'underscore'; import { now } from 'underscore'; import { now as currentTimestamp } from 'underscore'; import { first, last } from 'underscore'; import utilityBelt, { each } from 'underscore'; import dates from './calendar.json' assert { type: 'json' }; export default Math; export var square = function(x) { return x * x; }; export var Mathematics = class Mathematics { least(x, y) { if (x < y) { return x; } else { return y; } } }; export { sqrt }; export { sqrt as squareRoot }; export { Mathematics as default, sqrt as squareRoot }; export * from 'underscore'; export { max, min } from 'underscore'; export { version } from './package.json' assert { type: 'json' }; ``` [Dynamic import](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import#Dynamic_Imports) is also supported, with mandatory parentheses: ``` # Your browser must support dynamic import to run this example. do -> { run } = await import('./browser-compiler-modern/coffeescript.js') run ''' if 5 < new Date().getHours() < 9 alert 'Time to make the coffee!' else alert 'Time to get some work done.' ''' ``` ``` // Your browser must support dynamic import to run this example. (async function() { var run; ({run} = (await import('./browser-compiler-modern/coffeescript.js'))); return run(`if 5 < new Date().getHours() < 9 alert 'Time to make the coffee!' else alert 'Time to get some work done.'`); })(); ``` Note that the CoffeeScript compiler **does not resolve modules**; writing an `import` or `export` statement in CoffeeScript will produce an `import` or `export` statement in the resulting output. Such statements can be run by all modern browsers (when the script is referenced via `<script type="module">`) and [by Node.js](https://nodejs.org/api/esm.html#esm_enabling) when the output `.js` files are in a folder where the nearest parent `package.json` file contains `"type": "module"`. Because the runtime is evaluating the generated output, the `import` statements must reference the output files; so if `file.coffee` is output as `file.js`, it needs to be referenced as `file.js` in the `import` statement, with the `.js` extension included. Also, any file with an `import` or `export` statement will be output without a [top-level function safety wrapper](#lexical-scope); in other words, importing or exporting modules will automatically trigger [bare](#usage) mode for that file. This is because per the ES2015 spec, `import` or `export` statements must occur at the topmost scope. 
Embedded JavaScript ------------------- Hopefully, you’ll never need to use it, but if you ever need to intersperse snippets of JavaScript within your CoffeeScript, you can use backticks to pass it straight through. ``` hi = `function() { return [document.title, "Hello JavaScript"].join(": "); }` ``` ``` var hi; hi = function() { return [document.title, "Hello JavaScript"].join(": "); }; ``` Escape backticks with backslashes: `\`​` becomes ``​`. Escape backslashes before backticks with more backslashes: `\\\`​` becomes `\`​`. ``` markdown = `function () { return \`In Markdown, write code like \\\`this\\\`\`; }` ``` ``` var markdown; markdown = function () { return `In Markdown, write code like \`this\``; }; ``` You can also embed blocks of JavaScript using triple backticks. That’s easier than escaping backticks, if you need them inside your JavaScript block. ``` ``` function time() { return `The time is ${new Date().toLocaleTimeString()}`; } ``` ``` ``` function time() { return `The time is ${new Date().toLocaleTimeString()}`; } ; ``` JSX --- [JSX](https://facebook.github.io/react/docs/introducing-jsx.html) is JavaScript containing interspersed XML elements. While conceived for [React](https://facebook.github.io/react/), it is not specific to any particular library or framework. CoffeeScript supports interspersed XML elements, without the need for separate plugins or special settings. The XML elements will be compiled as such, outputting JSX that could be parsed like any normal JSX file, for example by [Babel with the React JSX transform](https://babeljs.io/docs/plugins/transform-react-jsx/). CoffeeScript does *not* output `React.createElement` calls or any code specific to React or any other framework. It is up to you to attach another step in your build chain to convert this JSX to whatever function calls you wish the XML elements to compile to. Just like in JSX and HTML, denote XML tags using `<` and `>`. You can interpolate CoffeeScript code inside a tag using `{` and `}`. To avoid compiler errors, when using `<` and `>` to mean “less than” or “greater than,” you should wrap the operators in spaces to distinguish them from XML tags. So `i < len`, not `i<len`. The compiler tries to be forgiving when it can be sure what you intend, but always putting spaces around the “less than” and “greater than” operators will remove ambiguity. ``` renderStarRating = ({ rating, maxStars }) -> <aside title={"Rating: #{rating} of #{maxStars} stars"}> {for wholeStar in [0...Math.floor(rating)] <Star className="wholeStar" key={wholeStar} />} {if rating % 1 isnt 0 <Star className="halfStar" />} {for emptyStar in [Math.ceil(rating)...maxStars] <Star className="emptyStar" key={emptyStar} />} </aside>``` ``` var renderStarRating; renderStarRating = function({rating, maxStars}) { var emptyStar, wholeStar; return <aside title={`Rating: ${rating} of ${maxStars} stars`}> {(function() { var i, ref, results; results = []; for (wholeStar = i = 0, ref = Math.floor(rating); (0 <= ref ? i < ref : i > ref); wholeStar = 0 <= ref ? ++i : --i) { results.push(<Star className="wholeStar" key={wholeStar} />); } return results; })()} {rating % 1 !== 0 ? <Star className="halfStar" /> : void 0} {(function() { var i, ref, ref1, results; results = []; for (emptyStar = i = ref = Math.ceil(rating), ref1 = maxStars; (ref <= ref1 ? i < ref1 : i > ref1); emptyStar = ref <= ref1 ?
++i : --i) { results.push(<Star className="emptyStar" key={emptyStar} />); } return results; })()} </aside>; }; ``` Older plugins or forks of CoffeeScript supported JSX syntax and referred to it as CSX or CJSX. They also often used a `.cjsx` file extension, but this is no longer necessary; regular `.coffee` will do. Type Annotations ---------------- Static type checking can be achieved in CoffeeScript by using [Flow](https://flow.org/)’s [Comment Types syntax](https://flow.org/en/docs/types/comments/): ``` # @flow ###:: type Obj = { num: number, }; ### fn = (str ###: string ###, obj ###: Obj ###) ###: string ### -> str + obj.num ``` ``` // @flow /*:: type Obj = { num: number, }; */ var fn; fn = function(str/*: string */, obj/*: Obj */)/*: string */ { return str + obj.num; }; ``` CoffeeScript does not do any type checking itself; the JavaScript output you see above needs to get passed to Flow for it to validate your code. We expect most people will use a [build tool](#es2015plus-output) for this, but here’s how to do it the simplest way possible using the [CoffeeScript](#cli) and [Flow](https://flow.org/en/docs/usage/) command-line tools, assuming you’ve already [installed Flow](https://flow.org/en/docs/install/) and the [latest CoffeeScript](#installation) in your project folder: ``` coffee --bare --no-header --compile app.coffee && npm run flow ``` `--bare` and `--no-header` are important because Flow requires the first line of the file to be the comment `// @flow`. If you configure your build chain to compile CoffeeScript and pass the result to Flow in-memory, you can get better performance than this example; and a proper build tool should be able to watch your CoffeeScript files and recompile and type-check them for you on save. If you know of another way to achieve static type checking with CoffeeScript, please [create an issue](https://github.com/jashkenas/coffeescript/issues/new) and let us know. Literate CoffeeScript --------------------- Besides being used as an ordinary programming language, CoffeeScript may also be written in “literate” mode. If you name your file with a `.litcoffee` extension, you can write it as a Markdown document — a document that also happens to be executable CoffeeScript code. The compiler will treat any indented blocks (Markdown’s way of indicating source code) as executable code, and ignore the rest as comments. Code blocks must also be separated from comments by at least one blank line. Just for kicks, a little bit of the compiler is currently implemented in this fashion: See it [as a document](https://gist.github.com/jashkenas/3fc3c1a8b1009c00d9df), [raw](https://raw.githubusercontent.com/jashkenas/coffeescript/master/src/scope.litcoffee), and [properly highlighted in a text editor](https://cl.ly/LxEu). A few caveats: * Code blocks need to maintain consistent indentation relative to each other. When the compiler parses your Literate CoffeeScript file, it first discards all the non-code block lines and then parses the remainder as a regular CoffeeScript file. Therefore the code blocks need to be written as if the comment lines don’t exist, with consistent indentation (including whether they are indented with tabs or spaces). * Along those lines, code blocks within list items or blockquotes are not treated as executable code. Since list items and blockquotes imply their own indentation, it would be ambiguous how to treat indentation between successive code blocks when some are within these other blocks and some are not. 
* List items can be at most only one paragraph long. The second paragraph of a list item would be indented after a blank line, and therefore indistinguishable from a code block. Source Maps ----------- CoffeeScript includes support for generating source maps, a way to tell your JavaScript engine what part of your CoffeeScript program matches up with the code being evaluated. Browsers that support it can automatically use source maps to show your original source code in the debugger. To generate source maps alongside your JavaScript files, pass the `--map` or `-m` flag to the compiler. For a full introduction to source maps, how they work, and how to hook them up in your browser, read the [HTML5 Tutorial](https://www.html5rocks.com/en/tutorials/developertools/sourcemaps/). Cake, and Cakefiles ------------------- CoffeeScript includes a (very) simple build system similar to [Make](https://www.gnu.org/software/make/) and [Rake](http://rake.rubyforge.org/). Naturally, it’s called Cake, and is used for the tasks that build and test the CoffeeScript language itself. Tasks are defined in a file named `Cakefile`, and can be invoked by running `cake [task]` from within the directory. To print a list of all the tasks and options, just type `cake`. Task definitions are written in CoffeeScript, so you can put arbitrary code in your Cakefile. Define a task with a name, a long description, and the function to invoke when the task is run. If your task takes a command-line option, you can define the option with short and long flags, and it will be made available in the `options` object. Here’s a task that uses the Node.js API to rebuild CoffeeScript’s parser: ``` fs = require 'fs' option '-o', '--output [DIR]', 'directory for compiled code' task 'build:parser', 'rebuild the Jison parser', (options) -> require 'jison' code = require('./lib/grammar').parser.generate() dir = options.output or 'lib' fs.writeFile "#{dir}/parser.js", code ``` ``` var fs; fs = require('fs'); option('-o', '--output [DIR]', 'directory for compiled code'); task('build:parser', 'rebuild the Jison parser', function(options) { var code, dir; require('jison'); code = require('./lib/grammar').parser.generate(); dir = options.output || 'lib'; return fs.writeFile(`${dir}/parser.js`, code); }); ``` If you need to invoke one task before another — for example, running `build` before `test`, you can use the `invoke` function: `invoke 'build'`. Cake tasks are a minimal way to expose your CoffeeScript functions to the command line, so [don’t expect any fanciness built-in](https://coffeescript.org/v2/annotated-source/cake.html). If you need dependencies, or async callbacks, it’s best to put them in your code itself — not the cake task. `"text/coffeescript"` Script Tags ---------------------------------- While it’s not recommended for serious use, CoffeeScripts may be included directly within the browser using `<script type="text/coffeescript">` tags. The source includes a compressed and minified version of the compiler ([Download current version here, 77k when gzipped](https://coffeescript.org/v2/browser-compiler-legacy/coffeescript.js)) as `docs/v2/browser-compiler-legacy/coffeescript.js`. Include this file on a page with inline CoffeeScript tags, and it will compile and evaluate them in order. The usual caveats about CoffeeScript apply — your inline scripts will run within a closure wrapper, so if you want to expose global variables or functions, attach them to the `window` object. 
Integrations ------------ CoffeeScript is part of the vast JavaScript ecosystem, and many libraries help integrate CoffeeScript with JavaScript. Major projects, especially projects updated to work with CoffeeScript 2, are listed here; more can be found in the [wiki pages](https://github.com/jashkenas/coffeescript/wiki). If there’s a project that you feel should be added to this section, please open an issue or [pull request](https://github.com/jashkenas/coffeescript/wiki/%5BHowTo%5D-Update-the-docs). Projects are listed in alphabetical order by category. ### Build Tools * [Browserify](http://browserify.org) with [coffeeify](https://github.com/jnordberg/coffeeify) * [Grunt](https://gruntjs.com) with [grunt-contrib-coffee](https://github.com/gruntjs/grunt-contrib-coffee) * [Gulp](https://gulpjs.com) with [gulp-coffee](https://github.com/gulp-community/gulp-coffee) * [Parcel](https://parceljs.org) with [transformer-coffeescript](https://github.com/parcel-bundler/parcel/tree/v2/packages/transformers/coffeescript) * [Rollup](https://rollupjs.org) with [rollup-plugin-coffee-script](https://github.com/lautis/rollup-plugin-coffee-script) * [Webpack](https://webpack.js.org) with [coffee-loader](https://github.com/webpack-contrib/coffee-loader) ### Code Editors * [Atom](https://atom.io) [packages](https://atom.io/packages/search?q=coffeescript) * [Sublime Text](https://sublimetext.com) [packages](https://packagecontrol.io/search/coffeescript) * [Visual Studio Code](https://code.visualstudio.com) [extensions](https://marketplace.visualstudio.com/search?target=VSCode&term=coffeescript) ### Frameworks * [Ember](https://emberjs.com) with [ember-cli-coffeescript](https://github.com/kimroen/ember-cli-coffeescript) * [Meteor](https://meteor.com) with [coffeescript-compiler](https://atmospherejs.com/meteor/coffeescript-compiler) ### Linters and Formatting * [CoffeeLint](https://coffeelint.github.io/) * [ESLint](https://eslint.org) with [eslint-plugin-coffee](https://github.com/helixbass/eslint-plugin-coffee) * [Prettier](https://prettier.io) with [prettier-plugin-coffeescript](https://github.com/helixbass/prettier-plugin-coffeescript) ### Testing * [Jest](https://jestjs.io) with [jest-preset-coffeescript](https://github.com/danielbayley/jest-preset-coffeescript) Unsupported ECMAScript Features ------------------------------- There are a few ECMAScript features that CoffeeScript intentionally doesn’t support. ### `let` and `const`: block-scoped and reassignment-protected variables When CoffeeScript was designed, `var` was [intentionally omitted](https://github.com/jashkenas/coffeescript/issues/238#issuecomment-153502). This was to spare developers the mental housekeeping of needing to worry about variable *declaration* (`var foo`) as opposed to variable *assignment* (`foo = 1`). The CoffeeScript compiler automatically takes care of declaration for you, by generating `var` statements at the top of every function scope. This makes it impossible to accidentally declare a global variable. `let` and `const` add a useful ability to JavaScript in that you can use them to declare variables within a *block* scope, for example within an `if` statement body or a `for` loop body, whereas `var` always declares variables in the scope of an entire function. When CoffeeScript 2 was designed, there was much discussion of whether this functionality was useful enough to outweigh the simplicity offered by never needing to consider variable declaration in CoffeeScript. 
In the end, it was decided that the simplicity was more valued. In CoffeeScript there remains only one type of variable. Keep in mind that `const` only protects you from *reassigning* a variable; it doesn’t prevent the variable’s value from changing, the way constants usually do in other languages: ``` const obj = {foo: 'bar'}; obj.foo = 'baz'; // Allowed! obj = {}; // Throws error ``` ### Named functions and function declarations Newcomers to CoffeeScript often wonder how to generate the JavaScript `function foo() {}`, as opposed to the `foo = function() {}` that CoffeeScript produces. The first form is a [function declaration](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/function), and the second is a [function expression](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/function). As stated above, in CoffeeScript [everything is an expression](#expressions), so naturally we favor the expression form. Supporting only one variant helps avoid confusing bugs that can arise from the [subtle differences between the two forms](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/function#Function_declaration_hoisting). Technically, `foo = function() {}` is creating an anonymous function that gets assigned to a variable named `foo`. Some very early versions of CoffeeScript named this function, e.g. `foo = function foo() {}`, but this was dropped because of compatibility issues with Internet Explorer. For a while this annoyed people, as these functions would be unnamed in stack traces; but modern JavaScript runtimes [infer the names of such anonymous functions](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/name) from the names of the variables to which they’re assigned. Given that this is the case, it’s simplest to just preserve the current behavior. ### `get` and `set` keyword shorthand syntax `get` and `set`, as keywords preceding functions or class methods, are intentionally unimplemented in CoffeeScript. This is to avoid grammatical ambiguity, since in CoffeeScript such a construct looks identical to a function call (e.g. `get(function foo() {})`); and because there is an [alternate syntax](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/defineProperty) that is slightly more verbose but just as effective: ``` screen = width: 1200 ratio: 16/9 Object.defineProperty screen, 'height', get: -> this.width / this.ratio set: (val) -> this.width = val * this.ratio ``` ``` var screen; screen = { width: 1200, ratio: 16 / 9 }; Object.defineProperty(screen, 'height', { get: function() { return this.width / this.ratio; }, set: function(val) { return this.width = val * this.ratio; } }); ``` Breaking Changes From CoffeeScript 1.x to 2 ------------------------------------------- CoffeeScript 2 aims to output as much idiomatic ES2015+ syntax as possible with as few breaking changes from CoffeeScript 1.x as possible. Some breaking changes, unfortunately, were unavoidable. ### Bound (fat arrow) functions In CoffeeScript 1.x, `=>` compiled to a regular `function` but with references to `this`/`@` rewritten to use the outer scope’s `this`, or with the inner function bound to the outer scope via `.bind` (hence the name “bound function”). In CoffeeScript 2, `=>` compiles to [ES2015’s `=>`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions), which behaves slightly differently. 
The largest difference is that in ES2015, `=>` functions lack an `arguments` object: ``` outer = -> inner = => Array.from arguments inner() outer(1, 2) # Returns '' in CoffeeScript 1.x, '1, 2' in CoffeeScript 2 ``` ``` var outer; outer = function() { var inner; inner = () => { return Array.from(arguments); }; return inner(); }; outer(1, 2); // Returns '' in CoffeeScript 1.x, '1, 2' in CoffeeScript 2 ``` ### Default values for function parameters and destructured elements Per the [ES2015 spec regarding function default parameters](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Default_parameters) and [destructuring default values](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment#Default_values), default values are only applied when a value is missing or `undefined`. In CoffeeScript 1.x, the default value would be applied in those cases but also if the value was `null`. ``` f = (a = 1) -> a f(null) # Returns 1 in CoffeeScript 1.x, null in CoffeeScript 2 ``` ``` var f; f = function(a = 1) { return a; }; f(null); // Returns 1 in CoffeeScript 1.x, null in CoffeeScript 2 ``` ``` {a = 1} = {a: null} a # Equals 1 in CoffeeScript 1.x, null in CoffeeScript 2 ``` ``` var a; ({a = 1} = { a: null }); a; // Equals 1 in CoffeeScript 1.x, null in CoffeeScript 2 ``` ### Bound generator functions Bound generator functions, a.k.a. generator arrow functions, [aren’t allowed in ECMAScript](https://stackoverflow.com/questions/27661306/can-i-use-es6s-arrow-function-syntax-with-generators-arrow-notation). You can write `function*` or `=>`, but not both. Therefore, CoffeeScript code like this: ``` f = => yield this # Throws a compiler error ``` Needs to be rewritten the old-fashioned way: ``` self = this f = -> yield self ``` ``` var f, self; self = this; f = function*() { return (yield self); }; ``` ### Classes are compiled to ES2015 classes ES2015 classes and their methods have some restrictions beyond those on regular functions. Class constructors can’t be invoked without `new`: ``` (class)() # Throws a TypeError at runtime ``` ES2015 classes don’t allow bound (fat arrow) methods. The CoffeeScript compiler goes through some contortions to preserve support for them, but one thing that can’t be accommodated is calling a bound method before it is bound: ``` class Base constructor: -> @onClick() # This works clickHandler = @onClick clickHandler() # This throws a runtime error class Component extends Base onClick: => console.log 'Clicked!', @ ``` Class methods can’t be used with `new` (uncommon): ``` class Namespace @Klass = -> new Namespace.Klass # Throws a TypeError at runtime ``` Due to the hoisting required to compile to ES2015 classes, dynamic keys in class methods can’t use values from the executable class body unless the methods are assigned in prototype style. ``` class A name = 'method' "#{name}": -> # This method will be named 'undefined' @::[name] = -> # This will work; assigns to `A.prototype.method` ``` ### `super` and `this` In the constructor of a derived class (a class that `extends` another class), `this` cannot be used before calling `super`: ``` class B extends A constructor: -> this # Throws a compiler error ``` This also means you cannot pass a reference to `this` as an argument to `super` in the constructor of a derived class: ``` class B extends A constructor: (@arg) -> super @arg # Throws a compiler error ``` This is a limitation of ES2015 classes. 
As a workaround, assign to `this` after the `super` call: ``` class B extends A constructor: (arg) -> super arg @arg = arg ``` ``` var B; B = class B extends A { constructor(arg) { super(arg); this.arg = arg; } }; ``` ### `super` and `extends` Due to a syntax clash with `super` with accessors, “bare” `super` (the keyword `super` without parentheses) no longer compiles to a super call forwarding all arguments. ``` class B extends A foo: -> super # Throws a compiler error ``` Arguments can be forwarded explicitly using splats: ``` class B extends A foo: -> super arguments... ``` ``` var B; B = class B extends A { foo() { return super.foo(...arguments); } }; ``` Or if you know that the parent function doesn’t require arguments, just call `super()`: ``` class B extends A foo: -> super() ``` ``` var B; B = class B extends A { foo() { return super.foo(); } }; ``` CoffeeScript 1.x allowed the `extends` keyword to set up prototypal inheritance between functions, and `super` could be used in manually prototype-assigned functions: ``` A = -> B = -> B extends A B.prototype.foo = -> super arguments... # Last two lines each throw compiler errors in CoffeeScript 2 ``` Due to the switch to ES2015 `extends` and `super`, using these keywords for prototypal functions is no longer supported. The above case could be refactored to: ``` # Helper functions hasProp = {}.hasOwnProperty extend = (child, parent) -> ctor = -> @constructor = child return for key of parent if hasProp.call(parent, key) child[key] = parent[key] ctor.prototype = parent.prototype child.prototype = new ctor child A = -> B = -> extend B, A B.prototype.foo = -> A::foo.apply this, arguments ``` ``` // Helper functions var A, B, extend, hasProp; hasProp = {}.hasOwnProperty; extend = function(child, parent) { var ctor, key; ctor = function() { this.constructor = child; }; for (key in parent) { if (hasProp.call(parent, key)) { child[key] = parent[key]; } } ctor.prototype = parent.prototype; child.prototype = new ctor(); return child; }; A = function() {}; B = function() {}; extend(B, A); B.prototype.foo = function() { return A.prototype.foo.apply(this, arguments); }; ``` or ``` class A class B extends A foo: -> super arguments... ``` ``` var A, B; A = class A {}; B = class B extends A { foo() { return super.foo(...arguments); } }; ``` ### JSX and the `<` and `>` operators With the addition of [JSX](#jsx), the `<` and `>` characters serve as both the “less than” and “greater than” operators and as the delimiters for XML tags, like `<div>`. For best results, in general you should always wrap the operators in spaces to distinguish them from XML tags: `i < len`, not `i<len`. The compiler tries to be forgiving when it can be sure what you intend, but always putting spaces around the “less than” and “greater than” operators will remove ambiguity. ### Literate CoffeeScript parsing CoffeeScript 2’s parsing of Literate CoffeeScript has been refactored to now be more careful about not treating indented lists as code blocks; but this means that all code blocks (unless they are to be interpreted as comments) must be separated by at least one blank line from lists. Code blocks should also now maintain a consistent indentation level—so an indentation of one tab (or whatever you consider to be a tab stop, like 2 spaces or 4 spaces) should be treated as your code’s “left margin,” with all code in the file relative to that column.
Code blocks that you want to be part of the commentary, and not executed, must have at least one line (ideally the first line of the block) completely unindented. ### Argument parsing and shebang (`#!`) lines In CoffeeScript 1.x, `--` was required after the path and filename of the script to be run, but before any arguments passed to that script. This convention is now deprecated. So instead of: ``` coffee [options] path/to/script.coffee -- [args] ``` Now you would just type: ``` coffee [options] path/to/script.coffee [args] ``` The deprecated version will still work, but it will print a warning before running the script. On non-Windows platforms, a `.coffee` file can be made executable by adding a shebang (`#!`) line at the top of the file and marking the file as executable. For example: ``` #!/usr/bin/env coffee x = 2 + 2 console.log x ``` If this were saved as `executable.coffee`, it could be made executable and run: ``` ▶ chmod +x ./executable.coffee ▶ ./executable.coffee 4 ``` In CoffeeScript 1.x, this used to fail when trying to pass arguments to the script. Some users on OS X worked around the problem by using `#!/usr/bin/env coffee --` as the first line of the file. That didn’t work on Linux, however, which cannot parse shebang lines with more than a single argument. While such scripts will still run on OS X, CoffeeScript will now display a warning before compiling or evaluating files that begin with a too-long shebang line. Now that CoffeeScript 2 supports passing arguments without needing `--`, we recommend simply changing the shebang lines in such scripts to just `#!/usr/bin/env coffee`.
## Table of Contents * The Big Book of PowerShell Gotchas * Format right * Where is the ____ Command? I’ve Installed the Latest Version of PowerShell and Can’t Find it! * PowerShell.exe isn’t PowerShell * Accumulating Output in a Function * ForEach vs ForEach vs ForEach * Tab Completion * -Contains isn’t -Like * You Can’t Have What You Don’t Have * -Filter Values Diversity * Not Everything Produces Output * One HTML Page at a Time, Please * [Bloody] {Awful} (Punctuation) * Don’t+Concatenate+Strings * $ isn’t Part of the Variable Name * Use the Pipeline, not an Array * Backtick, Grave Accent, Escape * A Crowd isn’t an Individual * These aren’t Your Father’s Commands * Properties vs. Values * Remote Variables * New-Object PSObject vs. PSCustomObject * Running Something as the “Currently Logged-in User” * Commands that Need a User Profile May Fail When Run Remotely * Writing to SQL Server * Getting Folder Sizes ## The Big Book of PowerShell Gotchas by (mostly) <NAME> PowerShell is full of “gotchas” - little things that just get in your way and are hard to figure out on your own. This short book is intended to help you figure them out and avoid them. This guide is released under the Creative Commons Attribution-NoDerivs 3.0 Unported License. The authors encourage you to redistribute this file as widely as possible, but ask that you do not modify the document. Getting the Code The EnhancedHTML2 module mentioned in this book can be found in the PowerShell Gallery at https://www.powershellgallery.com/packages/EnhancedHTML2/. That page includes download instructions. PowerShellGet is required, and can be obtained from PowerShellGallery.com. Was this book helpful? The author(s) kindly ask(s) that you make a tax-deductible (in the US; check your laws if you live elsewhere) donation of any amount to The DevOps Collective to support their ongoing work. Check for Updates! Our ebooks are often updated with new and corrected content. We make them available in two ways: * Our main, authoritative GitHub organization, with a repo for each book. Visit https://github.com/devops-collective-inc/ * On LeanPub, where you can read them online, download as PDF, EPUB, or MOBI (login required), and “purchase” the books to make a donation to DevOps Collective. You can also choose to be notified of updates. Visit https://leanpub.com/u/devopscollective LeanPub can also notify you when we push updates. Our main GitHub repo is authoritative; repositories on other sites are usually just mirrors used for the publishing process. LeanPub always contains the most recent “public release” of any book. ## Format right Everyone runs into this one. Here’s how it goes: you start by writing a truly awesome command that produces a nicely formatted table. And you think, “wow, that’d go great in an HTML file” - so you pipe the formatted output onward, and get gibberish. Wait… what?!?!? This happens all the time. If you want an easy way to remember what not to do, it’s this: Never pipe a Format command to anything else. That isn’t the whole truth, and we’ll get to the whole truth in a sec, but if you just want a quick answer, that’s it. In the community, we call it the “Format Right” rule, because you have to move your Format command to the right-most end of the command line. That is, the Format command comes last, and nothing else comes after it. The reason is that the Format commands all produce special internal formatting codes, that are really just intended to create an on-screen display. Piping those codes to anything else - ConvertTo-HTML, Export-CSV, whatever - just gets you gibberish output.
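For example - a sketch of the trap, with an illustrative output path:

```
# Looks great on screen...
Get-Service | Format-Table Name, Status

# ...but piping the *formatted* output onward produces gibberish, because
# Format-Table emitted formatting instructions, not service objects
Get-Service | Format-Table Name, Status | ConvertTo-Html | Out-File C:\services.html

# Format right: keep the Format command last, or leave it out entirely
Get-Service | Select-Object Name, Status | ConvertTo-Html | Out-File C:\services.html
```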
In fact, there are actually a few commands that can come after a Format command in the pipeline: * Out-Default. This is technically always at the end of the pipeline, although it’s invisible. It redirects to Out-Host. * Out-Host also understands the output of Format commands, because Out-Host is how those formatting codes get on the screen in the first place. * Out-Printer understands the formatting codes too, and constructs a printed page that would look exactly like the normal on-screen output. * Out-File, like Out-Printer, redirects the on-screen output, but this time to a text file on disk. * Out-String consumes the formatting codes and just outputs a plain string containing the text that would otherwise have appeared on-screen. Apart from those exceptions - and of them, you’ll mainly only ever use Out-File - you can’t pipe the output of a Format command to much else and get anything that looks useful. ## Where is the ____ Command? I’ve Installed the Latest Version of PowerShell and Can’t Find it! One tricky thing is understanding that there are a certain number of commands that come with PowerShell, while *most* commands do not. Every new version of PowerShell includes at least a few new commands. For example, Start-Job appeared for the first time in PowerShell v2, while Invoke-AsWorkflow was introduced in PowerShell v3. What confuses people is that a new version of PowerShell also tends to correspond with a new version of the Windows operating system - and the OS itself comes with hundreds of commands. For example, you may have used Get-SmbShare for the first time in Windows Server 2012, which included PowerShell v3. But Get-SmbShare is part of the operating system, not part of PowerShell. That is, you won’t have Get-SmbShare on every system that has PowerShell v3 or later, because the command isn’t a “feature of PowerShell,” it’s a “feature of Windows.” So where do you get commands? Usually, with whatever product those commands are a part of. Want the Exchange Server commands? Install the Exchange Server admin tools. Want the Windows Server 2012 commands? Install the Remote Server Administration Tools (RSAT), which contain the server admin tools. ## PowerShell.exe isn’t PowerShell It’s important to understand that Windows PowerShell is actually an untouchable, behind-the-scenes engine. You as a mere human being cannot easily interact directly with PowerShell. Instead, you need a host application. A host embeds the engine internally, and then gives you a way to interact with it. For example, PowerShell.exe is a host application. It is built around the same Windows console host (ConHost.exe) as the old Cmd.exe command-line shell, but it embeds the PowerShell engine. You type commands, and the host hands those to the engine for execution. The host is also responsible for displaying any results - in this case, on-screen. Why is this distinction important? Because different hosts can behave in different ways. For example, the PowerShell ISE behaves a bit differently than the console host, and both of them behave very differently from Active Directory Administration Center - another PowerShell host. ## Accumulating Output in a Function This is a bit of an “advanced” gotcha, but it’s one that many experienced developers will run into.
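Here’s a very trimmed-down example, just to make the point. It isn’t functional as written - `Get-Something` is a fictional command - and the streaming version is sketched alongside for contrast:

```
# The gotcha: accumulate everything, then output it all at the end
function Get-Info {
    param([string[]]$ComputerName)
    $output = @()
    foreach ($computer in $ComputerName) {
        # Nothing leaves this function until the loop completely finishes
        $output += Get-Something -ComputerName $computer
    }
    $output
}

# The fix: emit each object to the pipeline as soon as it's ready
function Get-Info {
    param([string[]]$ComputerName)
    foreach ($computer in $ComputerName) {
        Get-Something -ComputerName $computer
    }
}
```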
The problem here is that the first function can generate multiple output objects, and the programmer is accumulating those into the $output variable. That means this function won’t output anything until it’s completely finished running. That isn’t how PowerShell commands (and functions are commands) are usually meant to work. PowerShell commands should usually output each object to the pipeline, one at a time, as those objects are ready. That allows the pipeline to accumulate the output, and to immediately pass it along to whatever is next in the pipeline. That’s how PowerShell commands are intended to work. Now, there are always exceptions. Sort-Object, for example, has to accumulate its output, because it can’t actually sort anything until it has all of them. So it’s called a *blocking command*, because it “blocks” the pipeline from doing anything else until it produces its output. But that’s an exception. It’s usually easy to fix this, by simply outputting to the pipeline instead of accumulating, as the second function above does. ## ForEach vs ForEach vs ForEach PowerShell’s three lookalike friends can be confusing, especially for newcomers. Basically, you’ve got two entities: * The ForEach-Object cmdlet, which has an alias ForEach (it also has the alias %). This is meant to operate in the pipeline, and it uses a -Process parameter that accepts a scriptblock. * The ForEach scripting construct. This has a specific syntax, is not intended to be used in the pipeline, and does not have an alias. The big difference is that, in the pipeline, ForEach-Object *processes one object at a time*. That means it can be slower, since that scriptblock must be interpreted on each iteration. It also tends to use less memory, since objects streaming down the pipeline one at a time don’t all have to be bunched up in a variable first. The scripting construct tends to be faster, but it often has more memory overhead, because you have to give it the entire collection of objects at once, instead of streaming objects into it one at a time. Both use vaguely similar syntax, but there are differences. It’s important to understand that they are not the same, and that they execute differently. It’s confusing because “ForEach” is both an alias and a scripting construct; the shell determines which you’re using by looking at the context in which you’re using it. ## Tab Completion It’s sad and amazing how few people rely on tab completion, both in the PowerShell ISE and in the console window. * When you tab complete, you’ll never spell commands or parameter names wrong * For many parameter values that are static lists, or easily-queried lists, tab completion (especially in v3 and later) can fill in legal parameter values for you * Tab completion makes long cmdlet names a lot easier to type, without the need for difficult-to-remember aliases. Get into the habit of using tab completion all the time, and you’re guaranteed to make fewer mistakes. ## -Contains isn’t -Like Oh, if I had a nickel for every time I’ve seen someone try this. I get how it happens. The -contains operator seems like it should be checking to see if a process’ name contains the letters “notepad.” But that isn’t what it does.
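For instance - a sketch of the mistake and the fix, chasing every Notepad process:

```
# The mistake: -contains checks whether a collection contains an exact
# item; it is not a substring or wildcard string match
Get-Process | Where-Object { $_.Name -contains "notepad" } | Stop-Process

# The fix: -like really does do wildcard string comparison
Get-Process | Where-Object { $_.Name -like "*notepad*" } | Stop-Process
```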
The correct approach is to use the -like operator - the second command above - which in fact does do a wildcard string comparison. I’ll let pass the thought that the really correct answer is to just run Stop-Process -name *notepad*, because I was aiming for a simple example here. But… don’t overthink things. Sometimes a script and a ForEach loop isn’t the best approach. So anyway, what does -contains (and its friend, -notcontains) actually do? They’re similar to the -in and -notin operators introduced in PowerShell v3, and those operators cause more than a bit of confusion, too. What they do is check to see if a collection of objects contains a given single object. For example, put a single process object into a variable, then ask whether the collection returned by Get-Process contains it. That experiment is probably the best way to see how it works. The trick is that, when you use a complex object instead of a simple value, -contains and -in look at every property of the object to make a match. If you think about something like a process, they’re always changing. From moment to moment, a process’ CPU and memory, for example, are different. Say I start Notepad, put its process object into $single_proc, and verify it’s there. But when I then run Get-Process and check to see if its collection contains my Notepad, I get False. That’s because the object in $single_proc is out of date. Notepad is running, but it now looks different, so -contains can’t find the match. The -in and -contains operators are best with simple values, or with objects that don’t have constantly-changing property values. But they’re not wildcard string matching operators. Use -like (or -notlike) for that. ## You Can’t Have What You Don’t Have Can you see what’s wrong with an approach like `Get-Service | Select-Object DisplayName,Name | Where Status -eq 'Running' | Sort Status`? I mean, I’m pretty sure I have some running services, which is what this was supposed to display - yet nothing comes out. If you don’t see the answer right away - or frankly, even if you do - this is a good time to talk about how to troubleshoot long command lines. Start, as I always say, by backing off a step. Delete the last command, and see if that does anything different. In this case, I removed the Sort-Object (Sort) command, and nothing different happened. So that wasn’t causing the problem. Next, I removed the Where-Object (Where, using v3 short syntax) command, and ah-ha! I got output. So something broke with Where-Object. Let’s take what did work and pipe it to Get-Member, to see what’s in the pipeline after Select-Object runs. OK, I have an object that has a DisplayName property and a Name property. And my Where-Object command was checking the Status property. Do you see a Status property? No, you do not. My error is that I removed the Status property when I didn’t include it in the property list of Select-Object. So Where-Object had nothing to work with, so it returned nothing. (Yeah, it’d be cooler if it threw an error - “Hey, you said to filter on the Status property, and there ain’t one!” - but that isn’t how it works.) Moral of the story: Pay attention to what’s in the pipeline. You can’t work with something you don’t have, and you might have taken it away yourself. You won’t always get a helpful error message, so sometimes you’ll need to dig in and figure it out another way - such as backing off a step. ## -Filter Values Diversity Here’s one of the toughest things to get used to in PowerShell: the -Filter parameter means something different almost everywhere it appears. Consider three commands, each using a -Filter parameter. Every one of those filters is different. * With Get-ChildItem, -Filter accepts file system wildcards like *.
* With Get-WmiObject, -Filter requires a string, and uses programming-style operators (like = for equality). * With Get-ADUser, -Filter wants a script block, and accepts PowerShell-style comparison operators (like -eq for equality). Here’s how I think of it: When you use a -Filter parameter, PowerShell isn’t processing the filtering. Instead, the filtration criteria are handed down to the underlying technology, like the file system, or WMI, or Active Directory. That technology gets to decide what kind of filter criteria it will accept. PowerShell is just the middleman. So you have to carefully read the help, and maybe look for examples, to understand how the underlying technology needs you to specify its filter. Yeah, it’d be nice if PowerShell just translated for you (that’s actually what Get-ADUser does - the command translates your filter into an LDAP filter under the hood). But, usually, it doesn’t. ## Not Everything Produces Output I see this one a lot in classes: someone runs something like `Get-Process | Export-Csv procs.csv`, and if you expected anything on the screen in terms of output, you’d be disappointed. The trick here is to keep track of what each command produces as output, and right there is a possible point of confusion. In PowerShell’s world, output is what would show up on the screen if you ran the command and didn’t pipe it to anything else. Yes, Export-CSV does do something - it creates a file on disk - but in PowerShell’s world that file isn’t output. What Export-CSV does not do is produce any output - that is, something which would show up on the screen. Run that command and look: see? Nothing. Since there’s nothing on the screen, there’s nothing in the pipeline. You can’t pipe Export-CSV to another command, because there’s nothing to pipe. Some commands will include a -PassThru parameter. When they have one, and when you use it, they’ll do whatever they normally do but also pass their input objects through to the pipeline, so that you can then pipe them on to something else. Export-CSV isn’t one of those commands, though - it never produces output, so it will never make sense to pipe it to something else. ## One HTML Page at a Time, Please This drives me batty:
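A sketch of the sort of command pair that does it (file name illustrative):

```
# Each ConvertTo-Html call emits a complete HTML document, so appending
# a second one produces two <HTML> pages in a single file
Get-Process | ConvertTo-Html | Out-File C:\report.html
Get-Service | ConvertTo-Html | Out-File C:\report.html -Append
```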
(Parentheses) are used to enclose expressions, such as the ForEach() construct’s expression, and in certain cases to contain declarative syntax. You see that in the Param() block, and in the [Parameter()] attribute. [Square brackets] are used around some attributes, like [CmdletBinding()], and around data types like [string], and to indicate arrays - as in [string[]]. They pop up a few other places, too. {Curly brackets} nearly always contain executable code, as in the Try{} block, the BEGIN{} block, and the function itself. It’s also used to express hash table literals (like @{}). If your keyboard had a few dozen more buttons, PowerShell probably wouldn’t have had to have all these overlapping uses of punctuation. But it does. At this point, they’re pretty much just part of the shell’s “cost of entry,” and you’ll have to get used to them. ## Don’t+Concatenate+Strings I really dislike string concatenation. It’s like forcing someone to cuddle with someone they don’t even know. Rude. And completely unnecessary, when you use double quotes. Same end effect. In double quotes, PowerShell will look for the $ character. When it finds it: * If the next character is a { then PowerShell will take everything to the matching } as a variable name, and replace the whole thing with that variable’s contents. For example, putting ${my variable} inside double quotes will replace that with the contents of ${my variable}. * If the next character is a ( then PowerShell will take everything to the matching ) and execute it as code. So, I executed $wmi.serialnumber to access the serialnumber property of whatever object was in the $wmi variable. * Otherwise, PowerShell will take every character that is legal for a variable name, up until the first illegal variable name character, and replace it with that variable. That’s how $computer works in my example. The space after r isn’t legal for a variable name, so PowerShell knows the variable name stops at r. There’s a sub-gotcha here: This won’t work as expected. In most cases, $wmi will be replaced by an object type name, and .serialnumber will still be in there. That’s because . isn’t a legal variable name character, so PowerShell stops looking at the variable with the letter i. It replaces $wmi with its contents. You see, in the previous example, I’d put $($wmi.serialnumber), which is a subexpression, and which works. The parentheses make their contents execute as code. ## $ isn’t Part of the Variable Name Big gotcha. Can you predict what happened? You see, the $ is not part of the variable’s name. If you have a variable named example, that’s like having a box with “example” written on the side. Referring to example means you’re talking about the box itself. Referring to $example means you’re messing with the contents of the box. So in my example, I used $example=5 to put 5 into the box. I then created a new variable. The new variable’s name was $example - that isn’t naming it “example,” it’s naming it the contents of the “example” box, which is 5. So I create a variable named 5, that contains 6, which you can see by referring to $5. Tricky, right? Comes up all the time: In that example, I used the -ErrorVariable parameter to specify a variable in which I would store any error that would occur. Problem is, I used $x. I should have used x by itself: That will store any error in a variable named x, which I can later access by using $x to get its contents - meaning, whatever error was stored in there. 
## Use the Pipeline, not an Array A very common mistake made by traditional programmers who come to PowerShell - which is not a programming language - is the same accumulate-and-return pattern sketched back in “Accumulating Output in a Function”: the person creates an empty array in $output, and as they run through their computer list and query WMI, they’re adding new output objects to the array. Finally, at the end, they output the array to the pipeline. Poor practice. You see, this forces PowerShell to wait while this entire command completes. Any subsequent commands in the pipeline will sit there twiddling their thumbs. A better approach? Use the pipeline. Its whole purpose is to accumulate output for you - there’s no need to accumulate it yourself in an array. Output each object as it’s produced, and subsequent commands will receive output as it’s being created, letting several commands run more or less simultaneously in the pipeline. ## Backtick, Grave Accent, Escape You’ll see folks end a line with a ` character and continue the command on the next line. That isn’t a dead pixel on your monitor or a stray piece of toner on the page, it’s the grave accent mark or backtick. ` is PowerShell’s escape character. Used this way, it’s “escaping” the invisible carriage return at the end of the line, removing its special purpose as a logical line-end, and simply making it a literal carriage return. I don’t like the backtick used this way. First, it’s hard to see. Second, if you get any extra whitespace after it, it’ll no longer escape the carriage return, and your script will break. The ISE even hints at the problem: add a space after a backtick, and a parameter name such as -ComputerName on the following line renders in the wrong color for a parameter name, because the line continuation is broken. IMPOSSIBLE to track these down. And the backtick is unnecessary as a line continuation character. Let me explain why: PowerShell already allows you to hit Enter in certain situations. You just have to learn what those situations are, and learn to take advantage of them. I totally understand the desire to have neatly-formatted code - I preach about that all the time, myself - but you don’t have to rely on a little three-pixel character to get nicely formatted code. You just have to be clever. To begin, I can put my Get-WmiObject parameters in a hash table, so I can format them all nice and pretty. Each line ends on a semicolon, and PowerShell lets me line-break after each semicolon. Even if I get an extra space or tab after the semicolon, it’ll work fine. I then splat those parameters to the Get-WmiObject command. After Get-WmiObject, I have a pipe character - and you can legally line-break after that, too. On Select-Object, I’m breaking after a comma as well. So I end up with formatting that looks at least as good, if not better, because it doesn’t have that little ` floating all over the place. ## A Crowd isn’t an Individual A very common newcomer mistake is writing a function that takes a [string[]]$computername parameter and then uses it as if it held a single value - querying WMI once into $bios and $os, for example. But $computername might contain multiple computer names (that’s what [string[]] means), meaning $bios and $os will contain multiple items too. You’ll often have to enumerate those to get this working right. Folks will run into this even in simple situations. For example, put your processes into a variable with $procs = Get-Process, then try `"$procs.name"` inside double quotes. PowerShell v2 won’t react so nicely; in v3, the variable inside double quotes is $procs, and since that variable contains multiple objects, PowerShell implicitly enumerates them and looks for a Name property. You’ll notice “.name” from the original string appended to the end - PowerShell didn’t do anything with that.
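You’d probably want to enumerate them - a minimal sketch:

```
$procs = Get-Process
foreach ($proc in $procs) {
    # One object at a time, so .Name (inside a subexpression) behaves
    Write-Output "Process name: $($proc.Name)"
}
```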
## These aren't Your Father's Commands

Always keep in mind that while PowerShell has things called Dir and Cd, they aren't the old MS-DOS commands. They're simply aliases, or nicknames, to PowerShell commands. That means they have different syntax. You can run help dir (or ask for help on any other alias) to see the actual command name, and its proper syntax.

## Properties vs. Values

Suppose you grab computer accounts with Get-ADComputer, pipe them to Select-Object -Property Name, and then try to feed the result to the -ComputerName parameter of Get-CimInstance. Know why that won't work? It's because the result of Get-ADComputer is an object, which has properties. You probably knew that. But the result of Select-Object is also an object that has properties. Specifically, in this case, it's a "Selected" ADComputer object, having a single property: Name. Look at the help for Get-CimInstance. The -ComputerName parameter accepts objects of the type String. It says so, right in the help! But a Selected ADComputer object isn't the same thing as a String. The Name property you selected contains strings, but it isn't a string itself. This is a huge distinction, and one that trips people up all the time. Think of a property as a box. That box can contain things, but it's a thing in and of itself, also. In this case, the box is called Name, and it contains strings. But you can't shove that whole box into something that was just expecting strings. "Hey, I wanted a string, not a box!" Think about a fax machine. Do you remember those? They accept pages, and transmit those pages. Now suppose you have an envelope full of pages. You can't just shove the envelope into the fax machine and expect good results. In that analogy, the envelope is a property, and the pages inside it are values. To get the pages into the fax machine, you have to take them out of the envelope first. What you want to do in this case is get the strings out of the box, and Select-Object offers a way of doing that: use -ExpandProperty Name instead of -Property Name. See the difference? -ExpandProperty gets just the contents of the specified property, rather than returning an object that only has that property. Want a simple way to test this in the shell? Compare what Get-Service | Select-Object -Property Name produces with what Get-Service | Select-Object -ExpandProperty Name produces - and pipe each to Get-Member to see the type of object you're really getting.

## Remote Variables

When using PowerShell remoting, you need to remember that you're dealing with two or more computers that don't share information between them. For example, a Copy-Item command that uses two local variables - say, $f1 and $f2 holding a source and a destination path - will run fine on your local computer. However, if you try to run just that Copy-Item command on a remote computer via Invoke-Command, it will fail. The problem is that $f1 and $f2 are defined on your computer, but not on the remote computer. The script block passed by Invoke-Command isn't evaluated on your computer, it's simply passed as-is. There are two possible fixes. The first is to simply include the variable definitions in the script block. Another technique, available in PowerShell v3 and later, is to use the $using variable designator. PowerShell pre-scans the script block for these, and will pass along your local variable values to the remote computer(s). The special $using: syntax is what makes that version of the command work. Both fixes are sketched below.
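A sketch of the problem and both fixes; the computer name SERVER1 and the file paths are hypothetical:

```powershell
$f1 = 'C:\temp\source.txt'
$f2 = 'C:\temp\dest.txt'
Copy-Item $f1 $f2    # fine locally

# Fails: $f1 and $f2 aren't defined on SERVER1
Invoke-Command -ComputerName SERVER1 -ScriptBlock { Copy-Item $f1 $f2 }

# Fix 1: define the variables inside the script block itself
Invoke-Command -ComputerName SERVER1 -ScriptBlock {
    $f1 = 'C:\temp\source.txt'
    $f2 = 'C:\temp\dest.txt'
    Copy-Item $f1 $f2
}

# Fix 2 (v3 and later): $using: carries your local values to the remote machine
Invoke-Command -ComputerName SERVER1 -ScriptBlock { Copy-Item $using:f1 $using:f2 }
```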
## New-Object PSObject vs. PSCustomObject

There's often some confusion regarding the differences between using New-Object PSObject and PSCustomObject, as well as how the two work. Either approach can be used to take a set of values from a collection of PowerShell objects and collate them into a single output. Likewise, both avenues will output the data as NoteProperties on the System.Management.Automation.PSCustomObject object type. So what's the big deal between them? For starters, the New-Object cmdlet was introduced in PowerShell v1.0 and has gone through a number of changes, while the use of the PSCustomObject class came later, in v3.0. For systems using PowerShell v2.0 or earlier, New-Object must be used. The key difference between the 2.0 version and the 1.0 version, from an administrative point of view, is that 2.0 allows the use of hash tables. For example:

### New-Object PSObject in v1.0

With the New-Object method in PowerShell v1.0, you have to declare the object type you want to create and add members to the collection in individual commands - one Add-Member call per property. This changed, however, in v2.0 with the ability to use hash tables:

### New-Object in PS 2.0

This saved a lot of overhead in typing and provided a cleaner looking script. However, both methods have the same problem in that the output is not necessarily in the same order as you have it listed, so if you're looking for a particular format, it may not work. PSCustomObject fixed this when it was introduced in v3.0, along with providing more streamlining in your scripts.

### PSCustomObject in PowerShell v3.0

With PSCustomObject, your output will always match what you have defined in your hash table; all three styles are sketched below. Another advantage of using PSCustomObject is that it has been noted to enumerate the data faster than its New-Object counterpart. The only thing to keep in mind with PSCustomObject is that it will not work with systems running PSv2.0 or earlier.
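The original listings and their output aren't reproduced in this text, so here's a reconstruction of the three styles; the property names and values are invented for illustration:

```powershell
# v1.0 style: declare the object, then one Add-Member call per property
$object = New-Object -TypeName PSObject
$object | Add-Member -MemberType NoteProperty -Name ComputerName -Value 'PC01'
$object | Add-Member -MemberType NoteProperty -Name Model -Value 'Latitude E7450'

# v2.0 style: a -Property hash table (property order is not guaranteed)
$object = New-Object -TypeName PSObject -Property @{
    ComputerName = 'PC01'
    Model        = 'Latitude E7450'
}

# v3.0 style: [PSCustomObject] keeps the properties in the order you wrote them
$object = [PSCustomObject]@{
    ComputerName = 'PC01'
    Model        = 'Latitude E7450'
}
```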
## Running Something as the "Currently Logged-in User"

A common PowerShell request is to be able to remotely kick off some code that runs under the account of the user that's currently logged on to the remote machine, or the user who most often uses the remote machine. This is really difficult, and usually impractical. First, understand that Windows is inherently a multi-user operating system. It doesn't have a concept for "the currently logged-on user" because there might be many logged-on users. Even though client versions of Windows don't technically permit multiple interactive logons, the base operating system acts as if it can. Second, as a multi-user OS, Windows' job is to maintain a strict firewall around each user's process space. You don't want one user jumping into another's space, because that would be a huge risk to security and stability. So you can't easily log in as one user and run something that another user can "see." For example, a common version of this request is for an admin to remotely make Notepad pop up in front of users, so they can remotely convey some important message. Sadly, Notepad is not a good instant messaging app, and Windows doesn't make this easy. And, if you think about it, what would malware be able to do if this were possible? It'd be horrible! With very few, difficult exceptions, you can't really run something "as another user on a remote machine." One exception is if you know the remote user's user name and password. If you do, you can establish a Remoting session to the computer using their credentials, and potentially have applications run in that user's process space. But you can see how impractical that is in most situations.

## Commands that Need a User Profile May Fail When Run Remotely

Many commands act against the currently logged-on user's profile. Those commands can sometimes fail when you run them over a Remoting connection, such as by using Invoke-Command or Enter-PSSession. For example, many installers default to creating per-user icons, and those can fail when run remotely - even when run in a "silent install" mode. The problem is that, when you connect to a remote computer, you aren't spinning up a complete user environment. You're technically not "logging on" to the machine in the usual sense. You're authenticating, yes, but in much the same way that you'd authenticate to a shared folder. Your remote connection doesn't have a complete user profile, and so anything that's expecting one can get errors and fail (even if they don't show those errors). There's no easy fix for this, unfortunately.

## Writing to SQL Server

Saving data to SQL Server - versus Excel or some other contraption - is easy. Assume that you have SQL Server Express installed locally. You've created in it a database called MYDB, and in that a table called MYTABLE. The table has ColumnA and ColumnB, which are both string (VARCHAR) fields. And the database file is in c:\myfiles\mydb.mdf. This is all easy to set up in a GUI if you download the SQL Server Express "with tools" edition. And it's free! The pattern is to open a connection to the database, wrap an INSERT statement in a command object, and call its ExecuteNonQuery() method; you can insert lots of values by just looping through the few lines that define the SQL statement and execute it. It's just as easy to run UPDATE or DELETE queries in exactly the same way. SELECT queries use ExecuteReader() instead of ExecuteNonQuery(), and return a SqlDataReader object that you can use to read column data or advance to the next row.

## Getting Folder Sizes

Folks often ask how to use PowerShell to get the size of a folder, such as a user home folder. Problem is, *folders don't have a size*. Windows literally doesn't track size for folder objects. A folder's "size" is merely the sum of its files' sizes. Which means you have to add them up. Bottom line, you need to get all the files, and add up their Length properties - as in the sketch below.
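A minimal sketch, assuming a hypothetical home-folder path; Measure-Object does the adding-up:

```powershell
# Sum the Length of every file beneath the folder (the path is a stand-in)
$measure = Get-ChildItem -Path 'C:\Users\SomeUser' -Recurse -File -ErrorAction SilentlyContinue |
    Measure-Object -Property Length -Sum
'{0:N2} MB across {1} files' -f ($measure.Sum / 1MB), $measure.Count
```

Note the line break after the pipe character - one of the legal break points discussed earlier, so no backtick is needed.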
vanquish
cran
R
Package 'vanquish' October 12, 2022 Title Variant Quality Investigation Helper Version 1.0.0 Description Imports Variant Calling Format (VCF) files into R. It can detect whether a sample contains contaminant from the same species. In the first stage of the approach, a change-point detection method is used to identify copy number variations for filtering. Next, features are extracted from the data for a support vector machine model. For log-likelihood calculation, the deviation parameter is estimated by the maximum likelihood method. Using a radial basis function kernel support vector machine, the contamination of a sample can be detected. Depends R (>= 3.4.0) Imports changepoint, e1071, ggplot2, stats, VGAM License GPL-2 Encoding UTF-8 LazyData true RoxygenNote 6.0.1 NeedsCompilation no Author <NAME> [aut, cre] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2018-09-05 14:50:04 UTC

R topics documented: config_df, defcon, generate_feature, getAlt2, getAnnoRate, getAvgLL, getLowDepth, getRatio, getSkewness, getSNVRate, getVar, locateFile, negll, readGATK, readStrelka, readVarDict, readVarPROWL, read_vcf, rho_est, rmChangePoint, rmCNVinVCF, summary_vcf, svm_class_model, svm_regression_model, train_ct, update_vcf, vcf_example

config_df Default parameters of config. Description A dataframe containing default parameters. Usage config_df Format A data frame with 12 variables: threshold Threshold for allele frequency skew Skewness for allele frequency lower Lower bound for allele frequency region upper Upper bound for allele frequency region ldpthred Threshold to determine low depth hom_mle Hom MLE of p in Beta-Binomial model het_mle Het MLE of p in Beta-Binomial model Hom_thred Threshold between hom and high High_thred Threshold between high and het Het_thred Threshold between het and low hom_rho Hom MLE of rho in Beta-Binomial model het_rho Het MLE of rho in Beta-Binomial model Source Created by <NAME>

defcon DEtection of Frequency CONtamination Description Detects whether a sample is contaminated by another sample of the same species. The input file should be in vcf format. Usage defcon(file, rmCNV = FALSE, cnvobj = NULL, config = NULL, class_model = NULL, regression_model = NULL) Arguments file VCF input object rmCNV Remove CNV regions, default is FALSE cnvobj CNV object, default is NULL config config information of parameters; a default set is generated as part of the model and is included in the model object class_model An SVM classification model regression_model An SVM regression model Value A list containing (1) stat: a data frame with all statistics for contamination estimation; (2) result: contamination estimation (Class = 0, pure; Class = 1, contaminated) Examples data(vcf_example) result <- defcon(file = vcf_example)

generate_feature Feature Generation for Contamination Detection Model Description Generates features from each pair of input VCF objects for training the contamination detection model.
Usage generate_feature(file, hom_p = 0.999, het_p = 0.5, hom_rho = 0.005, het_rho = 0.1, mixture, homcut = 0.99, highcut = 0.7, hetcut = 0.3) Arguments file VCF input object hom_p The initial value for p in Homozygous Beta-Binomial model, default is 0.999 het_p The initial value for p in Heterozygous Beta-Binomial model, default is 0.5 hom_rho The initial value for rho in Homozygous Beta-Binomial model, default is 0.005 het_rho The initial value for rho in Heterozygous Beta-Binomial model, default is 0.1 mixture A vector of whether the sample is contaminated: 0 for pure; 1 for contaminated homcut Cutoff allele frequency value between hom and high, default is 0.99 highcut Cutoff allele frequency value between high and het, default is 0.7 hetcut Cutoff allele frequency value between het and low, default is 0.3 Value A data frame with all features for training the contamination detection model

getAlt2 Second alternative allele percentage Description Second alternative allele percentage Usage getAlt2(f) Arguments f Input raw file Value Percent of the second alternative allele

getAnnoRate Annotation rate Description Annotation rate Usage getAnnoRate(f) Arguments f Input raw file Value Percentage of annotation locus

getAvgLL Calculate average log-likelihood Description Calculate average log-likelihood Usage getAvgLL(df, hom_mle, het_mle, hom_rho, het_rho) Arguments df Input modified file hom_mle Hom MLE of p in Beta-Binomial model, default is 0.9981416 from NA12878_1_L5 het_mle Het MLE of p in Beta-Binomial model, default is 0.4737897 from NA12878_1_L5 hom_rho Hom MLE of rho in Beta-Binomial model, default is 0.04570275 from NA12878_1_L5 het_rho Het MLE of rho in Beta-Binomial model, default is 0.02224098 from NA12878_1_L5 Value meanLL

getLowDepth Low depth percentage Description Low depth percentage Usage getLowDepth(f, ldpthred) Arguments f Input raw file ldpthred Threshold to determine low depth, default is 20 Value Percentage of low depth

getRatio Get the ratio of allele frequencies within a region Description Get the ratio of allele frequencies within a region Usage getRatio(subdf, lower, upper) Arguments subdf Dataframe with calculated statistics lower Lower bound for allele frequency region upper Upper bound for allele frequency region Value Ratio of allele frequencies within a region

getSkewness Get absolute value of skewness Description Get absolute value of skewness Usage getSkewness(subdf) Arguments subdf Input dataframe Value Absolute value of skewness

getSNVRate SNV percentage Description SNV percentage Usage getSNVRate(df) Arguments df Input raw file Value Percentage of SNV

getVar Calculate zygosity variable Description Calculate zygosity variable Usage getVar(df, state, hom_mle, het_mle) Arguments df Input modified file state Zygosity state hom_mle MLE in hom model het_mle MLE in het model Value Zygosity variable

locateFile Check input filename Description Check input filename Usage locateFile(fn, extension) Arguments fn Exact full file name of input file, including directory extension Expected input file extension: vcf & txt Value Valid directory

negll Negative Log Likelihood Description Calculates negative log likelihood for the beta binomial distribution.
Usage negll(x, size, prob, rho) Arguments x Depth of alternative allele size Total depth prob Theoretical probability, for heterozygous is 0.5, for homozygous is 0.999 rho Rho parameter of Beta-Binomial distribution of alternative allele

readGATK Read in input vcf data in GATK format for Contamination detection Description Read in input vcf data in GATK format for Contamination detection Usage readGATK(dr, dbOnly, depCut, thred, content, extnum, keepall) Arguments dr A valid input object dbOnly Use dbSNP as filter, default is FALSE, passed from read_vcf depCut Use a threshold for min depth, default is FALSE thred Threshold for min depth, default is 20 content Column names in VCF files extnum The column number or numbers to be extracted from vcf, default is 10; 0 for not extracting any columns keepall Keep unextracted columns in output, default is TRUE, passed from read_vcf Value Dataframe from VCF file

readStrelka Read in input vcf data in strelka2 format for Contamination detection Description Read in input vcf data in strelka2 format for Contamination detection Usage readStrelka(dr, dbOnly, depCut, thred, content, extnum, keepall) Arguments dr A valid input object dbOnly Use dbSNP as filter, default is FALSE, passed from read_vcf depCut Use a threshold for min depth, default is FALSE thred Threshold for min depth, default is 20 content Column names in VCF files extnum The column number or numbers to be extracted from vcf, default is 10; 0 for not extracting any columns keepall Keep unextracted columns in output, default is TRUE, passed from read_vcf Value Dataframe from VCF file

readVarDict Read in input vcf data in VarDict format for Contamination detection Description Read in input vcf data in VarDict format for Contamination detection Usage readVarDict(dr, dbOnly, depCut, thred, content, extnum, keepall) Arguments dr A valid input object dbOnly Use dbSNP as filter, default is FALSE, passed from read_vcf depCut Use a threshold for min depth, default is FALSE thred Threshold for min depth, default is 20 content Column names in VCF files extnum The column number to be extracted from vcf, default is 10; 0 for not extracting any column keepall Keep unextracted columns in output, default is TRUE, passed from read_vcf Value Dataframe from VCF file

readVarPROWL Read in input vcf data in VarPROWL format Description Read in input vcf data in VarPROWL format Usage readVarPROWL(dr, dbOnly, depCut, thred, content, extnum, keepall) Arguments dr A valid input object dbOnly Use dbSNP as filter, default is FALSE, passed from read_vcf depCut Use a threshold for min depth, default is FALSE thred Threshold for min depth, default is 20 content Column names in VCF files extnum The column number or numbers to be extracted from vcf, default is 10; 0 for not extracting any columns keepall Keep unextracted columns in output, default is TRUE, passed from read_vcf Value vcf Dataframe from VCF file

read_vcf VCF Data Input Description Reads a vcf or vcf.gz file and creates a list containing Content, Meta, VCF and file_sample_name Usage read_vcf(fn, vcffor, dbOnly = FALSE, depCut = FALSE, thred = 20, metaline = 200, extnum = 10, keepall = TRUE, filter = FALSE) Arguments fn Input vcf file name vcffor Input vcf data format: 1) GATK; 2) VarPROWL; 3) VarDict; 4) strelka2 dbOnly Use dbSNP as filter, default is FALSE depCut Use a threshold for min depth, default is FALSE thred Threshold for min depth, default is 20 metaline Number of head lines to read in (better to be large enough), the lines will be checked if
they contain meta information, default is 200 extnum The column number to be extracted from vcf, default is 10; 0 for not extracting any column; extnum should be between 10 and the total column number keepall Keep unextracted columns in output, default is TRUE filter Whether to select "PASS" variants for analyses if they contain unfiltered variants, default is FALSE Value A list containing (1) Content: a vector showing what is contained; (2) Meta: a data frame containing meta-information of the file; (3) VCF: a data frame, the main part of the VCF file; (4) file_sample_name: the file name and sample name; in cases where multiple samples exist in one file, file and sample names might be different Examples file.name <- system.file("extdata", "example.vcf.gz", package = "vanquish") example <- read_vcf(fn=file.name, vcffor="VarPROWL")

rho_est Estimate Rho for Alternative Allele Frequency Description Estimates the Rho parameter in the beta binomial distribution for alternative allele frequency Usage rho_est(vl) Arguments vl A list of vcf objects from the read_vcf function. Value A list containing (1) het_rho: Rho parameter of heterozygous locations; (2) hom_rho: Rho parameter of homozygous locations Examples data("vcf_example") vcf_list <- list() vcf_list[[1]] <- vcf_example$VCF res <- rho_est(vl = vcf_list) res$het_rho[[1]]$par res$hom_rho[[1]]$par

rmChangePoint Remove CNV regions within VCF files by change point method Description Remove CNV regions within VCF files by change point method Usage rmChangePoint(vcf, threshold, skew, lower, upper) Arguments vcf Input VCF files threshold Threshold for allele frequency skew Skewness for allele frequency lower Lower bound for allele frequency region upper Upper bound for allele frequency region Value VCF object without change point regions

rmCNVinVCF Remove CNV regions within VCF files given cnv file Description Remove CNV regions within VCF files given a cnv file Usage rmCNVinVCF(vcf, cnvobj) Arguments vcf Input VCF files cnvobj cnv object Value VCF object without change point regions

summary_vcf VCF Data Summary Description Summarizes allele frequency information in scatter and density plots Usage summary_vcf(vcf, ZG = NULL, CHR = NULL) Arguments vcf VCF object from the read_vcf function ZG zygosity: (1) null, for both het and hom, default; (2) het; (3) hom CHR chromosome number: (1) null, all chromosomes, default; (2) any specific number Value A list containing (1) scatter: allele frequency scatter plot; (2) density: allele frequency density plot Examples data("vcf_example") tmp <- summary_vcf(vcf = vcf_example, ZG = 'het', CHR = c(1,2)) plot(tmp$scatter) plot(tmp$density)

svm_class_model Default svm classification model. Description An svm object containing the default svm classification model. Usage svm_class_model Format An svm object. Source Created by Tao Jiang

svm_regression_model Default svm regression model. Description An svm object containing the default svm regression model. Usage svm_regression_model Format An svm object. Source Created by Tao Jiang

train_ct Train Contamination Detection Model Description Trains two SVM models (classification and regression) to detect whether a sample is contaminated by another sample of the same species.
Usage train_ct(feature) Arguments feature Feature list objects from generate_feature() Value A list containing two trained svm models: regression & classification

update_vcf Remove CNV regions within VCF files Description Remove CNV regions within VCF files Usage update_vcf(rmCNV = FALSE, vcf, cnvobj = NULL, threshold = 0.1, skew = 0.5, lower = 0.45, upper = 0.55) Arguments rmCNV Remove CNV regions, default is FALSE vcf Input VCF files cnvobj cnv object, default is NULL threshold Threshold for allele frequency, default is 0.1 skew Skewness for allele frequency, default is 0.5 lower Lower bound for allele frequency region, default is 0.45 upper Upper bound for allele frequency region, default is 0.55 Value VCF file without CNV regions

vcf_example VCF example file. Description An example containing a list of 4 data frames. Usage vcf_example Format A list of 4 data frames. Source Created by <NAME>
npcp
cran
R
Package 'npcp' February 9, 2023 Type Package Title Some Nonparametric CUSUM Tests for Change-Point Detection in Possibly Multivariate Observations Version 0.2-5 Date 2023-02-09 Maintainer <NAME> <<EMAIL>> Depends R (>= 3.5.0) Imports stats, sandwich Suggests copula Description Provides nonparametric CUSUM tests for detecting changes in possibly serially dependent univariate or low-dimensional multivariate observations. Retrospective tests sensitive to changes in the expectation, the variance, the covariance, the autocovariance, the distribution function, Spearman's rho, Kendall's tau, Gini's mean difference, and the copula are provided, as well as a test for detecting changes in the distribution of independent block maxima (with environmental studies in mind). The package also contains a test sensitive to changes in the autocopula and a combined test of stationarity sensitive to changes in the distribution function and the autocopula. The latest additions are an open-end sequential test based on the retrospective CUSUM statistic that can be used for monitoring changes in the mean of possibly serially dependent univariate observations, as well as closed-end and open-end sequential tests based on empirical distribution functions that can be used for monitoring changes in the contemporary distribution of possibly serially dependent univariate or low-dimensional multivariate observations. License GPL (>= 3) | file LICENCE LazyLoad yes Encoding UTF-8 NeedsCompilation yes Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-2903-1543>), <NAME> [ctb] (<https://orcid.org/0000-0001-9634-4796>) Repository CRAN Date/Publication 2023-02-09 16:20:02 UTC

R topics documented: bOptEmpProc, cpAutocop, cpBlockMax, cpCopula, cpDist, cpRho, cpU, quantiles, selectPoints, seqClosedEndCpDist, seqOpenEndCpDist, seqOpenEndCpMean, stDistAutocop

bOptEmpProc Bandwidth Parameter Estimation Description In the context of the standard CUSUM test based on the sample mean or in a particular empirical process setting, the following functions estimate the bandwidth parameter controlling the serial dependence when generating dependent multiplier sequences using the 'moving average approach'; see Section 5 of the third reference. The function bOpt() is called in the functions cpMean(), cpVar(), cpGini(), cpAutocov(), cpCov(), cpTau() and detOpenEndCpMean() when b is set to NULL. The function bOptEmpProc() is called in the functions cpDist(), cpCopula(), cpAutocop(), stDistAutocop() and simClosedEndCpDist() when b is set to NULL. Usage bOpt(influ, weights = c("parzen", "bartlett")) bOptEmpProc(x, m=5, weights = c("parzen", "bartlett"), L.method=c("max","median","mean","min")) Arguments influ a numeric containing the relevant influence coefficients, which, in the case of the standard CUSUM test based on the sample mean, are simply the available observations; see also the last reference. x a data matrix whose rows are continuous observations. weights a string specifying the kernel for creating the weights used in the generation of dependent multiplier sequences within the 'moving average approach'; see Section 5 of the third reference. m a strictly positive integer specifying the number of points of the uniform grid on $(0,1)^d$ (where d is ncol(x)) involved in the estimation of the bandwidth parameter; see Section 5 of the third reference. The number of points of the grid is given by m^ncol(x) so that m needs to be decreased as d increases.
L.method a string specifying how the parameter L involved in the estimation of the bandwidth parameter is computed; see Section 5 of the third reference. Details The implemented approach results from an adaptation of the procedure described in the first two references (see also the references therein). The use of these functions in a context different from that considered in the third or fourth reference may not be meaningful. Acknowledgment: Part of the code of the function results from an adaptation of R code of C. Parmeter and <NAME>, itself an adaptation of Matlab code by <NAME>. Value A strictly positive integer. References <NAME> and <NAME> (2004), Automatic block-length selection for the dependent bootstrap, Econometric Reviews 23(1), pages 53-70. <NAME>, <NAME> and <NAME> (2009), Correction: Automatic block-length selection for the dependent bootstrap, Econometric Reviews 28(4), pages 372-375. <NAME> and <NAME> (2016), A dependent multiplier bootstrap for the sequential empirical copula process under strong mixing, Bernoulli 22:2, pages 927-968, https://arxiv.org/abs/1306.3930. <NAME> and <NAME> (2016), Dependent multiplier bootstraps for non-degenerate U-statistics under mixing conditions with applications, Journal of Statistical Planning and Inference 170, pages 83-105, https://arxiv.org/abs/1412.5875. See Also cpDist(), cpCopula(), cpAutocop(), stDistAutocop(), cpMean(), cpVar(), cpGini(), cpAutocov(), cpCov(), cpTau(), seqOpenEndCpMean and seqClosedEndCpDist.

cpAutocop Test for Change-Point Detection in Univariate Observations Sensitive to Changes in the Autocopula Description Nonparametric test for change-point detection particularly sensitive to changes in the autocopula of univariate continuous observations. Approximate p-values for the test statistic are obtained by means of a multiplier approach. Details can be found in the first reference. Usage cpAutocop(x, lag = 1, b = NULL, bivariate = FALSE, weights = c("parzen", "bartlett"), m = 5, N = 1000, init.seq = NULL, include.replicates = FALSE) Arguments x a one-column matrix containing continuous observations. lag an integer specifying at which lag to consider the autocopula; the autocopula is a (lag+1)-dimensional copula. b strictly positive integer specifying the value of the bandwidth parameter determining the serial dependence when generating dependent multiplier sequences using the 'moving average approach'; see Section 5 of the second reference. If set to NULL, b will be estimated using the function bOptEmpProc(); see the first reference. bivariate a logical specifying whether the test should focus only on the bivariate margin of the (lag+1)-dimensional autocopula obtained from the first and the last dimension. weights a string specifying the kernel for creating the weights used in the generation of dependent multiplier sequences within the 'moving average approach'; see Section 5 of the second reference. m a strictly positive integer specifying the number of points of the uniform grid on (0, 1) involved in the estimation of the bandwidth parameter; see Section 5 of the second reference. N number of multiplier replications. init.seq a sequence of independent standard normal variates of length N * (nrow(x) - lag + 2 * (b - 1)) used to generate dependent multiplier sequences. include.replicates a logical specifying whether the object of class htest returned by the function (see below) will include the multiplier replicates.
Details The approximate p-value is computed as $\left(0.5 + \sum_{i=1}^{N} \mathbf{1}\{S_i \ge S\}\right)/(N+1)$, where $S$ and $S_i$ denote the test statistic and a multiplier replication, respectively. This ensures that the approximate p-value is a number strictly between 0 and 1, which is sometimes necessary for further treatments. Value An object of class htest which is a list, some of the components of which are statistic value of the test statistic. p.value corresponding approximate p-value. cvm the values of the length(x)-lag-1 intermediate Cramér-von Mises change-point statistics; the test statistic is defined as the maximum of those. b the value of parameter b. Note This is a test for a continuous univariate time series. References <NAME>, <NAME> and <NAME> (2019), Combining cumulative sum change-point detection tests for assessing the stationarity of univariate time series, Journal of Time Series Analysis 40, pages 124-150, https://arxiv.org/abs/1709.02673. <NAME> and <NAME> (2016), A dependent multiplier bootstrap for the sequential empirical copula process under strong mixing, Bernoulli 22:2, pages 927-968, https://arxiv.org/abs/1306.3930. See Also cpAutocov() for a related test based on the autocovariance. Examples ## AR1 example n <- 200 k <- n/2 ## the true change-point x <- matrix(c(arima.sim(list(ar = -0.5), n = k), arima.sim(list(ar = 0.5), n = n - k))) cp <- cpAutocop(x) cp ## Estimated change-point which(cp$cvm == max(cp$cvm)) ## AR2 example n <- 200 k <- n/2 ## the true change-point x <- matrix(c(arima.sim(list(ar = c(0,-0.5)), n = k), arima.sim(list(ar = c(0,0.5)), n = n - k))) cpAutocop(x) cpAutocop(x, lag = 2) cpAutocop(x, lag = 2, bivariate = TRUE)

cpBlockMax Nonparametric Tests for Change-Point Detection in the Distribution of Independent Block Maxima Description Nonparametric tests for change-point detection in the distribution of independent block maxima based either on the probability weighted moment method (see the second reference) or on the generalized probability weighted moment method (see the first reference) for estimating the parameters of the generalized extreme value (GEV) distribution. It is assumed that the block maxima are independent and that their unknown distribution functions (d.f.s) are continuous, but not necessarily that they are GEV distributed. Three statistics are computed. Under the assumption that the block maxima are GEV distributed, these are statistics particularly sensitive to changes in the location, scale and shape parameters of the GEV. Details can be found in the third reference. Usage cpBlockMax(x, method = c("pwm", "gpwm"), r = 10) Arguments x a numeric vector representing independent block maxima whose unknown d.f.s are assumed continuous. method a string specifying how statistics will be defined; can be either "pwm" (the probability weighted moment method) or "gpwm" (the generalized probability weighted moment method). The method "pwm" is suggested for climate block maxima that are typically not too heavy tailed, more precisely, whose distributions are in the maximum domains of attraction of GEV distributions with shape parameters smaller than a half. The method "gpwm" should be preferred otherwise. r strictly positive integer specifying the set of breakpoints that will be tested; more precisely, starting from the initial sample of block maxima, the tests compare subsamples formed by the k first maxima and n-k last maxima for k in the set {r,...,n-r}, where n is the sample size.
Details Approximate p-values are computed from the estimated asymptotic null distributions, which involve the Kolmogorov distribution. The latter is dealt with reusing code from the ks.test() function; credit to RCore. Value An object of class htest which is a list, some of the components of which are statistic value of the three test statistics. pvalues corresponding approximate p-values. stats.loc the values of the n - (2 * r - 1) intermediate change-point statistics sensitive to changes in the location; the first test statistic is defined as the maximum of those. stats.scale the values of the n - (2 * r - 1) intermediate change-point statistics sensitive to changes in the scale; the second test statistic is defined as the maximum of those. stats.shape the values of the n - (2 * r - 1) intermediate change-point statistics sensitive to changes in the shape; the third test statistic is defined as the maximum of those. Note The tests were derived under the assumption of block maxima with continuous d.f., which implies that ties occur with probability zero. A way to deal with ties based on randomization is proposed in the third reference. References <NAME>, <NAME>, <NAME> and <NAME> (2008), Improving probability-weighted moment methods for the generalized extreme-value distribution, REVSTAT 6, pages 33-50. <NAME>, <NAME> and <NAME> (1985), Estimation of the generalized extreme-value distribution by the method of probability-weighted moments, Technometrics 27, pages 251-261. <NAME> and <NAME> (2017), Nonparametric tests for change-point detection in the distribution of block maxima based on probability weighted moments, Extremes 20:2, pages 417-450, https://arxiv.org/abs/1507.06121. See Also cpDist() for a related test based on the empirical d.f. Examples ## Not run: require(evd) n <- 100 k <- 50 ## the true change-point ## Change in the shape parameter of a GEV x <- rgev(k,loc=0,scale=1,shape=-0.8) y <- rgev(k,loc=0,scale=1,shape=0.4) cp <- cpBlockMax(c(x,y)) cp ## Estimated change-point which(cp$stats.shape == max(cp$stats.shape)) ## Change in the scale parameter of a GEV x <- rgev(k,loc=0,scale=0.5,shape=0) y <- rgev(k,loc=0,scale=1,shape=0) cp <- cpBlockMax(c(x,y)) cp ## Estimated change-point which(cp$stats.scale == max(cp$stats.scale)) ## Change in the location parameter of a GEV x <- rgev(k,loc=0,scale=1,shape=0) y <- rgev(k,loc=0.5,scale=1,shape=0) cp <- cpBlockMax(c(x,y)) cp ## Estimated change-point which(cp$stats.loc == max(cp$stats.loc)) ## End(Not run)

cpCopula Test for Change-Point Detection in Multivariate Observations Sensitive to Changes in the Copula Description Nonparametric test for change-point detection particularly sensitive to changes in the copula of multivariate continuous observations. The observations can be serially independent or dependent (strongly mixing). Approximate p-values for the test statistic are obtained by means of a multiplier approach. Details can be found in the first reference. Usage cpCopula(x, method = c("seq", "nonseq"), b = NULL, weights = c("parzen", "bartlett"), m = 5, L.method=c("max","median","mean","min"), N = 1000, init.seq = NULL, include.replicates = FALSE) Arguments x a data matrix whose rows are multivariate continuous observations. method a string specifying the simulation method for generating multiplier replicates of the test statistic; can be either "seq" (the 'check' approach in the first reference) or "nonseq" (the 'hat' approach in the first reference).
The 'check' approach appears to lead to better behaved tests in the case of samples of moderate size. The 'hat' approach is substantially faster. b strictly positive integer specifying the value of the bandwidth parameter determining the serial dependence when generating dependent multiplier sequences using the 'moving average approach'; see Section 5 of the second reference. The value 1 will create i.i.d. multiplier sequences suitable for serially independent observations. If set to NULL, b will be estimated from x using the function bOptEmpProc(); see the procedure described in Section 5 of the second reference. weights a string specifying the kernel for creating the weights used in the generation of dependent multiplier sequences within the 'moving average approach'; see Section 5 of the second reference. m a strictly positive integer specifying the number of points of the uniform grid on $(0,1)^d$ (where d is ncol(x)) involved in the estimation of the bandwidth parameter; see Section 5 of the third reference. The number of points of the grid is given by m^ncol(x) so that m needs to be decreased as d increases. L.method a string specifying how the parameter L involved in the estimation of the bandwidth parameter is computed; see Section 5 of the second reference. N number of multiplier replications. init.seq a sequence of independent standard normal variates of length N * (nrow(x) + 2 * (b - 1)) used to generate dependent multiplier sequences. include.replicates a logical specifying whether the object of class htest returned by the function (see below) will include the multiplier replicates. Details The approximate p-value is computed as $\left(0.5 + \sum_{i=1}^{N} \mathbf{1}\{S_i \ge S\}\right)/(N+1)$, where $S$ and $S_i$ denote the test statistic and a multiplier replication, respectively. This ensures that the approximate p-value is a number strictly between 0 and 1, which is sometimes necessary for further treatments. Value An object of class htest which is a list, some of the components of which are statistic value of the test statistic. p.value corresponding approximate p-value. cvm the values of the nrow(x)-1 intermediate Cramér-von Mises change-point statistics; the test statistic is defined as the maximum of those. b the value of parameter b. Note These tests were derived under the assumption of continuous margins. References <NAME>, <NAME>, <NAME> and <NAME> (2014), Detecting changes in cross-sectional dependence in multivariate time series, Journal of Multivariate Analysis 132, pages 111-128, https://arxiv.org/abs/1206.2557. <NAME> and <NAME> (2016), A dependent multiplier bootstrap for the sequential empirical copula process under strong mixing, Bernoulli 22:2, pages 927-968, https://arxiv.org/abs/1306.3930. See Also cpRho() for a related test based on Spearman's rho, cpTau() for a related test based on Kendall's tau, cpDist() for a related test based on the multivariate empirical d.f., bOptEmpProc() for the function used to estimate b from x if b = NULL. Examples ## Not run: require(copula) n <- 100 k <- 50 ## the true change-point u <- rCopula(k, gumbelCopula(1.5)) v <- rCopula(n - k, gumbelCopula(3)) x <- rbind(u,v) cp <- cpCopula(x, b = 1) cp ## Estimated change-point which(cp$cvm == max(cp$cvm)) ## End(Not run)

cpDist Test for Change-Point Detection in Possibly Multivariate Observations Sensitive to Changes in the Distribution Function Description Nonparametric test for change-point detection based on the (multivariate) empirical distribution function.
The observations can be continuous univariate or multivariate, and serially independent or dependent (strongly mixing). Approximate p-values for the test statistics are obtained by means of a multiplier approach. The first reference treats the serially independent case while details about the serially dependent case can be found in the second and third references. Usage cpDist(x, statistic = c("cvmmax", "cvmmean", "ksmax", "ksmean"), method = c("nonseq", "seq"), b = NULL, gamma = 0, delta = 1e-4, weights = c("parzen", "bartlett"), m = 5, L.method=c("max","median","mean","min"), N = 1000, init.seq = NULL, include.replicates = FALSE) Arguments x a data matrix whose rows are continuous observations. statistic a string specifying the statistic whose value and p-value will be displayed; can be either "cvmmax" or "cvmmean" (the maximum or average of the nrow(x)-1 intermediate Cramér-von Mises statistics), or "ksmax" or "ksmean" (the maximum or average of the nrow(x)-1 intermediate Kolmogorov-Smirnov statistics); see Section 3 in the first reference. The four statistics and the corresponding p-values are computed at each execution. method a string specifying the simulation method for generating multiplier replicates of the test statistic; can be either "nonseq" (the 'check' approach in the first reference) or "seq" (the 'hat' approach in the first reference). The 'check' approach appears to lead to better behaved tests and is recommended. b strictly positive integer specifying the value of the bandwidth parameter determining the serial dependence when generating dependent multiplier sequences using the 'moving average approach'; see Section 5 of the second reference. The value 1 will create i.i.d. multiplier sequences suitable for serially independent observations. If set to NULL, b will be estimated from x using the function bOptEmpProc(); see the procedure described in Section 5 of the second reference. gamma parameter between 0 and 0.5 appearing in the definition of the weight function used in the detector function. delta parameter between 0 and 1 appearing in the definition of the weight function used in the detector function. weights a string specifying the kernel for creating the weights used in the generation of dependent multiplier sequences within the 'moving average approach'; see Section 5 of the second reference. m a strictly positive integer specifying the number of points of the uniform grid on $(0,1)^d$ (where d is ncol(x)) involved in the estimation of the bandwidth parameter; see Section 5 of the third reference. The number of points of the grid is given by m^ncol(x) so that m needs to be decreased as d increases. L.method a string specifying how the parameter L involved in the estimation of the bandwidth parameter is computed; see Section 5 of the second reference. N number of multiplier replications. init.seq a sequence of independent standard normal variates of length N * (nrow(x) + 2 * (b - 1)) used to generate dependent multiplier sequences. include.replicates a logical specifying whether the object of class htest returned by the function (see below) will include the multiplier replicates. Details The approximate p-value is computed as $\left(0.5 + \sum_{i=1}^{N} \mathbf{1}\{S_i \ge S\}\right)/(N+1)$, where $S$ and $S_i$ denote the test statistic and a multiplier replication, respectively. This ensures that the approximate p-value is a number strictly between 0 and 1, which is sometimes necessary for further treatments.
Value An object of class htest which is a list, some of the components of which are statistic value of the test statistic. p.value corresponding approximate p-value. cvm the values of the nrow(x)-1 intermediate Cramér-von Mises change-point statistics. ks the values of the nrow(x)-1 intermediate Kolmogorov-Smirnov change-point statistics. all.statistics the values of all four test statistics. all.p.values the corresponding p-values. b the value of parameter b. Note Note that when the observations are continuous univariate and serially independent, independent realizations of the test statistics under the null hypothesis of no change in the distribution can be obtained by simulation; see Section 4 in the first reference. References <NAME>, <NAME> and <NAME> (2013), Nonparametric tests for change-point detection à la Gombay and Horváth, Journal of Multivariate Analysis 115, pages 16-32. <NAME> and <NAME> (2016), A dependent multiplier bootstrap for the sequential empirical copula process under strong mixing, Bernoulli 22:2, pages 927-968, https://arxiv.org/abs/1306.3930. <NAME>, <NAME> and <NAME> (2019), Combining cumulative sum change-point detection tests for assessing the stationarity of univariate time series, Journal of Time Series Analysis 40, pages 124-150, https://arxiv.org/abs/1709.02673. See Also cpCopula() for a related test based on the empirical copula, cpRho() for a related test based on Spearman's rho, cpTau() for a related test based on Kendall's tau, bOptEmpProc() for the function used to estimate b from x if b = NULL, seqClosedEndCpDist for the corresponding sequential test. Examples ## A univariate example n <- 100 k <- 50 ## the true change-point y <- rnorm(k) z <- rexp(n-k) x <- matrix(c(y,z)) cp <- cpDist(x, b = 1) cp ## All statistics cp$all.statistics ## Corresponding p.values cp$all.p.values ## Estimated change-point which(cp$cvm == max(cp$cvm)) which(cp$ks == max(cp$ks)) ## A very artificial trivariate example ## with a break in the first margin n <- 100 k <- 50 ## the true change-point y <- rnorm(k) z <- rnorm(n-k, mean = 2) x <- cbind(c(y,z),matrix(rnorm(2*n), n, 2)) cp <- cpDist(x, b = 1) cp ## All statistics cp$all.statistics ## Corresponding p.values cp$all.p.values ## Estimated change-point which(cp$cvm == max(cp$cvm)) which(cp$ks == max(cp$ks))

cpRho Test for Change-Point Detection Based on Spearman's Rho Description Nonparametric test for change-point detection particularly sensitive to changes in Spearman's rho in multivariate time series. The observations can be serially independent or dependent (strongly mixing). Approximate p-values for the test statistic are obtained by means of a multiplier approach or by estimating the asymptotic null distribution. Details can be found in the first reference. Usage cpRho(x, method = c("mult", "asym.var"), statistic = c("pairwise", "global"), b = NULL, weights = c("parzen", "bartlett"), N = 1000, init.seq = NULL, include.replicates = FALSE) Arguments x a data matrix whose rows are multivariate continuous observations. method a string specifying the method for computing the approximate p-value for the test statistic; can be either "mult" (the multiplier approach 'tilde' in the first reference) or "asym.var" (the approach based on the estimation of the asymptotic null distribution of the test statistic described in the first reference). The 'mult' approach appears to lead to better behaved tests.
statistic a string specifying the test statistic; can be either "pairwise" (the statistic $S_{n,3}$ in the first reference) or "global" (the statistic $S_{n,1}$ in the first reference). b strictly positive integer specifying the value of the bandwidth parameter determining the serial dependence when generating dependent multiplier sequences using the 'moving average approach'; see Section 5 of the second reference. The value 1 will create i.i.d. multiplier sequences suitable for serially independent observations. If set to NULL, b will be estimated from x using the procedure described in the first reference. weights a string specifying the kernel for creating the weights used in the generation of dependent multiplier sequences within the 'moving average approach'; see Section 5 of the second reference. N number of multiplier replications. init.seq a sequence of independent standard normal variates of length N * (nrow(x) + 2 * (b - 1)) used to generate dependent multiplier sequences. include.replicates a logical specifying whether the object of class htest returned by the function (see below) will include the multiplier replicates, if generated. Details When method == "mult", the approximate p-value is computed as $\left(0.5 + \sum_{i=1}^{N} \mathbf{1}\{S_i \ge S\}\right)/(N+1)$, where $S$ and $S_i$ denote the test statistic and a multiplier replication, respectively. This ensures that the approximate p-value is a number strictly between 0 and 1, which is sometimes necessary for further treatments. When method == "asym.var", the approximate p-value is computed from the estimated asymptotic null distribution, which involves the Kolmogorov distribution. The latter is dealt with reusing code from the ks.test() function; credit to RCore. Value An object of class htest which is a list, some of the components of which are statistic value of the test statistic. p.value corresponding approximate p-value. rho the values of the nrow(x)-1 intermediate change-point statistics; the test statistic is defined as the maximum of those. b the value of parameter b. Note These tests were derived under the assumption of continuous margins. References <NAME>, <NAME> and <NAME> (2016), Testing the constancy of Spearman's rho in multivariate time series, Annals of the Institute of Statistical Mathematics 68:5, pages 929-954, https://arxiv.org/abs/1407.1624. <NAME> and <NAME> (2016), A dependent multiplier bootstrap for the sequential empirical copula process under strong mixing, Bernoulli 22:2, pages 927-968, https://arxiv.org/abs/1306.3930. See Also cpTau() for a related test based on Kendall's tau, cpDist() for a related test based on the multivariate empirical d.f., cpCopula() for a related test based on the empirical copula. Examples ## Not run: require(copula) n <- 100 k <- 50 ## the true change-point u <- rCopula(k,gumbelCopula(1.5)) v <- rCopula(n-k,gumbelCopula(3)) x <- rbind(u,v) cp <- cpRho(x, b = 1) cp ## Estimated change-point which(cp$rho == max(cp$rho)) ## End(Not run)

cpU Some CUSUM Tests for Change-Point Detection Based on U-statistics Description Nonparametric CUSUM tests for change-point detection particularly sensitive to changes in certain quantities that can be estimated using one-sample U-statistics of order one or two. So far, the quantities under consideration are the expectation (thus corresponding to the standard CUSUM test based on the sample mean), the variance, Gini's mean difference, the autocovariance at a specified lag, the covariance for bivariate data and Kendall's tau for multivariate data.
The observations can be serially independent or dependent (strongly mixing). Approximate p-values for the test statistic are obtained by means of a multiplier approach or by estimating the asymptotic null distribution. Details can be found in the first reference. Usage cpMean(x, method = c("nonseq", "seq", "asym.var"), b = NULL, weights = c("parzen", "bartlett"), N = 1000, init.seq = NULL, include.replicates = FALSE) cpVar(x, method = c("nonseq", "seq", "asym.var"), b = NULL, weights = c("parzen", "bartlett"), N = 1000, init.seq = NULL, include.replicates = FALSE) cpGini(x, method = c("nonseq", "seq", "asym.var"), b = NULL, weights = c("parzen", "bartlett"), N = 1000, init.seq = NULL, include.replicates = FALSE) cpAutocov(x, lag = 1, method = c("nonseq", "seq", "asym.var"), b = NULL, weights = c("parzen", "bartlett"), N = 1000, init.seq = NULL, include.replicates = FALSE) cpCov(x, method = c("nonseq", "seq", "asym.var"), b = NULL, weights = c("parzen", "bartlett"), N = 1000, init.seq = NULL, include.replicates = FALSE) cpTau(x, method = c("seq", "nonseq", "asym.var"), b = NULL, weights = c("parzen", "bartlett"), N = 1000, init.seq = NULL, include.replicates = FALSE) Arguments x a numeric vector or a data matrix containing continuous observations. lag an integer specifying at which lag to consider the autocovariance. method a string specifying the method for computing the approximate p-value for the test statistic; can be either "seq" (the 'check' approach in the first reference), "nonseq" (the 'hat' approach in the first reference), or "asym.var" (the approach based on the estimation of the asymptotic null distribution of the test statistic described in the first reference). The 'seq' approach appears overall to lead to better behaved tests for cpTau(). More experiments are necessary for the other functions. b strictly positive integer specifying the value of the bandwidth parameter determining the serial dependence when generating dependent multiplier sequences using the 'moving average approach'; see Section 5 of the second reference. The value 1 will create i.i.d. multiplier sequences suitable for serially independent observations. If set to NULL, b will be estimated from x using the procedure described in the first reference. weights a string specifying the kernel for creating the weights used in the generation of dependent multiplier sequences within the 'moving average approach'; see Section 5 of the second reference. N number of multiplier replications. init.seq a sequence of independent standard normal variates of length N * (nrow(x) + 2 * (b - 1)) used to generate dependent multiplier sequences. include.replicates a logical specifying whether the object of class htest returned by the function (see below) will include the multiplier replicates, if generated. Details When method is either "seq" or "nonseq", the approximate p-value is computed as $\left(0.5 + \sum_{i=1}^{N} \mathbf{1}\{S_i \ge S\}\right)/(N+1)$, where $S$ and $S_i$ denote the test statistic and a multiplier replication, respectively. This ensures that the approximate p-value is a number strictly between 0 and 1, which is sometimes necessary for further treatments. When method = "asym.var", the approximate p-value is computed from the estimated asymptotic null distribution, which involves the Kolmogorov distribution. The latter is dealt with reusing code from the ks.test() function; credit to RCore. Value An object of class htest which is a list, some of the components of which are statistic value of the test statistic.
p.value corresponding approximate p-value. u the values of the nrow(x)-3 intermediate change-point statistics; the test statistic is defined as the maximum of those. b the value of parameter b. References <NAME> and <NAME> (2016), Dependent multiplier bootstraps for non-degenerate U-statistics under mixing conditions with applications, Journal of Statistical Planning and Inference 170, pages 83-105, https://arxiv.org/abs/1412.5875. <NAME> and <NAME> (2016), A dependent multiplier bootstrap for the sequential empirical copula process under strong mixing, Bernoulli 22:2, pages 927-968, https://arxiv.org/abs/1306.3930. See Also cpDist() for a related test based on the multivariate empirical d.f., cpCopula() for a related test based on the empirical copula, cpAutocop() for a related test based on the empirical autocopula, cpRho() for a related test based on Spearman's rho, bOpt() for the function used to estimate b from x if b = NULL and seqOpenEndCpMean for related sequential tests that can be used for online monitoring. Examples ## The standard CUSUM test based on the sample mean cp <- cpMean(c(rnorm(50), rnorm(50, mean=1)), b=1) cp ## Estimated change-point which(cp$statistics == cp$statistic) ## Testing for changes in the autocovariance n <- 200 k <- n/2 ## the true change-point x <- c(arima.sim(list(ar = -0.5), n = k), arima.sim(list(ar = 0.5), n = n - k)) cp <- cpAutocov(x) cp ## Estimated change-point which(cp$u == cp$statistic) ## Another example x <- c(arima.sim(list(ar = c(0,-0.5)), n = k), arima.sim(list(ar = c(0,0.5)), n = n - k)) cpAutocov(x) cp <- cpAutocov(x, lag = 2) cp ## Estimated change-point which(cp$u == cp$statistic) ## Not run: ## Testing for changes in Kendall's tau require(copula) n <- 100 k <- 50 ## the true change-point u <- rCopula(k,gumbelCopula(1.5)) v <- rCopula(n-k,gumbelCopula(3)) x <- rbind(u,v) cp <- cpTau(x) cp ## Estimated change-point which(cp$u == cp$statistic) ## Testing for changes in the covariance cp <- cpCov(x) cp ## Estimated change-point which(cp$u == cp$statistic) ## End(Not run)

quantiles Estimated Quantiles for the Open-end Nonparametric Sequential Change-Point Detection Tests Description Estimated quantiles for the open-end nonparametric sequential change-point detection tests described in seqOpenEndCpMean and seqOpenEndCpDist. More details can be found in the references below. Usage data("quantiles") Format list of 6 elements. The first 5 are arrays containing the estimated 90%, 95% and 99% quantiles necessary for carrying out the sequential tests described in seqOpenEndCpMean. The last element is a list containing the estimated 90%, 95% and 99% quantiles as well as other estimated parameters necessary for carrying out the sequential test described in seqOpenEndCpDist. References <NAME>, <NAME> and <NAME> (2021), A new approach for open-end sequential change point monitoring, Journal of Time Series Analysis 42:1, pages 63-84, https://arxiv.org/abs/1906.03225. <NAME> and <NAME> (2021), Open-end nonparametric sequential change-point detection based on the retrospective CUSUM statistic, Electronic Journal of Statistics 15:1, pages 2288-2335, doi:10.1214/21-EJS1840. <NAME>, <NAME>, <NAME> and <NAME> (2004), Monitoring changes in linear models, Journal of Statistical Planning and Inference 126, pages 225-251. <NAME>, <NAME> and <NAME> (2022), Multi-purpose open-end monitoring procedures for multivariate observations based on the empirical distribution function, 45 pages, https://arxiv.org/abs/2201.10311.
Examples data("quantiles") str(quantiles) selectPoints A point selection procedure for multivariate data Description Returns a matrix of ‘representative’ points. Usage selectPoints(x, r, kappa = 1.5, plot = FALSE) Arguments x a numeric matrix with d columns whose rows represent multivariate observa- tions. r integer specifying the size of an initial uniformly-spaced grid ‘on the probability scale’; an upper bound for the number of selected points is r^d. kappa numeric constant required to be strictly greater than one involved in the point selection procedure. plot logical used only if d = 2 specifying whether a plot should be produced. Details The selection procedure is described in detail in Section 3.2 of the reference below. Set plot = TRUE for visual feedback and information on the minimum number of ‘surrounding’ observations required for a grid point to be selected. The initial grid ‘on the probability scale’ is in blue, while the points selected by the procedure are in red. Value a matrix with d columns whose rows are the selected points. References <NAME>, <NAME>, and <NAME>, Multi-purpose open-end monitoring procedures for multivariate observations based on the empirical distribution function, 45 pages, https://arxiv. org/abs/2201.10311. See Also selectPoints() is used in detOpenEndCpDist(). Examples ## Generate data set.seed(123) x1 <- rnorm(1000, 0, 1) x2 <- rnorm(1000, 0.7 * x1, sqrt((1 - 0.7^2))) x <- cbind(x1, x2) ## Point selection selectPoints(x, r = 3, kappa = 1.5, plot = TRUE) selectPoints(x, r = 3, kappa = 4, plot = TRUE) selectPoints(x, r = 5, kappa = 1.5, plot = TRUE) selectPoints(x, r = 5, kappa = 4, plot = TRUE) seqClosedEndCpDist Closed-end Sequential Test for Change-Point Detection in Possibly Multivariate Time Series Sensitive to Changes in the Contemporary Distribution Function Description Closed-end nonparametric sequential test for change-point detection based on the (multivariate) empirical distribution function. The observations can be continuous univariate or multivariate, and serially independent or dependent (strongly mixing). To carry out the test, four steps are required. The first step consists of simulating under the null many trajectories of the detector function. The second step consists of estimating a piecewise constant threshold function from these trajectories. The third step consists of computing the detector function from the data to be monitored. The fourth and last step consists of comparing the detector function with the estimated threshold function. Each of these steps corresponds to one of the functions in the usage section below. The current implementation is preliminary and not optimized for real-time monitoring (but could still be used for that). If the observations to be monitored are univariate and can be assumed serially independent, the simulation of the trajectories of the detector functions can be carried out using Monte Carlo simulation. In all other cases, the test relies on a dependent multiplier bootstrap. Details can be found in the second reference. 
Usage

simClosedEndCpDist(x.learn = NULL, m = NULL, n, gamma = 0.25, delta = 1e-4,
                   B = 1000, method = c("sim", "mult"), b = NULL,
                   weights = c("parzen", "bartlett"), g = 5,
                   L.method = c("max", "median", "mean", "min"))
threshClosedEndCpDist(sims, p = 1, alpha = 0.05, type = 7)
detClosedEndCpDist(x.learn, x, gamma = 0.25, delta = 1e-4)
monClosedEndCpDist(det, thresh, statistic = c("mac", "mmc", "mmk", "mk", "mc"),
                   plot = TRUE)

Arguments

x.learn: a data matrix whose rows are continuous observations, representing the learning sample.
m: a strictly positive integer specifying the size of the learning sample if x.learn is not specified; the latter implies that the observations are univariate and assumed to be independent; if m is not specified, it is taken equal to nrow(x.learn).
n: a strictly positive integer specifying the monitoring horizon; the monitoring period is m+1, ..., n.
gamma: a real parameter between 0 and 0.5 appearing in the definition of the weight function used in the detector function.
delta: a real parameter between 0 and 1 appearing in the definition of the weight function used in the detector function.
B: the number of trajectories of the detector function to simulate under the null.
method: a string specifying the trajectory simulation method; can be either "sim" (Monte Carlo simulation, only in the univariate case under the assumption of serial independence) or "mult" (the dependent multiplier bootstrap).
b: strictly positive integer specifying the value of the bandwidth parameter determining the serial dependence when generating dependent multiplier sequences using the 'moving average approach'; see Section 5 of the first reference. The value 1 will create i.i.d. multiplier sequences suitable for serially independent observations. If set to NULL, b will be estimated from x.learn using the function bOptEmpProc(); see the procedure described in Section 5 of the first reference.
weights: a string specifying the kernel for creating the weights used in the generation of dependent multiplier sequences within the 'moving average approach'; see Section 5 of the first reference.
g: a strictly positive integer specifying the number of points of the uniform grid on (0, 1)^d (where d is ncol(x)) involved in the estimation of the bandwidth parameter; see Section 5 of the first reference. The number of points of the grid is given by g^ncol(x), so that g needs to be decreased as d increases.
L.method: a string specifying how the parameter L involved in the estimation of the bandwidth parameter is computed; see Section 5 of the first reference.
sims: an object of class sims.cpDist containing simulated trajectories of the detector function under the null.
p: a strictly positive integer specifying the number of steps of the piecewise constant threshold function; p should not be taken too large (say, smaller than 4) if method = "mult".
alpha: the value of the desired significance level for the sequential test.
type: an integer between 1 and 9 selecting one of the nine quantile algorithms detailed in the help of the function quantile().
x: a data matrix whose rows are continuous observations corresponding to the new observations to be monitored for a change in contemporary distribution.
det: an object of class det.cpDist representing a detector function computed using detClosedEndCpDist().
thresh: an object of class thresh.cpDist representing a threshold function estimated using threshClosedEndCpDist().
statistic: a string specifying the statistic/detector to be used for the monitoring; can be either "mac", "mmc", "mmk", "mc" or "mk"; the last letter specifies whether it is a Cramér-von Mises-like statistic (letter "c") or a Kolmogorov-Smirnov-like statistic (letter "k"); the letters before specify the type of aggregation steps used to compute the detectors ("m" for maximum, "a" for average); "mac" corresponds to the detector T_{m,q} in the second reference, "mmc" to the detector S_{m,q}, "mmk" to the detector R_{m,q}, "mc" to the detector Q_m and "mk" to the detector P_m.
plot: logical indicating whether the monitoring should be plotted.

Details

The testing procedure is described in detail in the second reference.

Value

All functions return lists whose components have explicit names. The function monClosedEndCpDist() in particular returns a list whose components are
alarm: a logical indicating whether the detector function has exceeded the threshold function.
time.alarm: an integer corresponding to the time at which the detector function has exceeded the threshold function, or NA.
times.max: a vector of times at which the successive detectors "mmc" (if statistic = "mac" or statistic = "mmc") or "mmk" (if statistic = "mmk") have reached their maximum; a vector of NAs if statistic = "mc" or statistic = "mk"; this sequence of times can be used to estimate the time of change from the time of alarm.
time.change: an integer giving the estimated time of change if alarm is TRUE; the latter is simply the value in times.max which corresponds to time.alarm.

Note

This is a test for continuous (multivariate) time series.

References

<NAME> and <NAME> (2016), A dependent multiplier bootstrap for the sequential empirical copula process under strong mixing, Bernoulli 22:2, pages 927-968, https://arxiv.org/abs/1306.3930.
<NAME> and <NAME> (2021), Nonparametric sequential change-point detection for multivariate time series based on empirical distribution functions, Electronic Journal of Statistics 15(1), pages 773-829, doi:10.1214/21-EJS1798.

See Also

cpDist() for the corresponding a posteriori (offline) test.

Examples

## Not run:
## Example of monitoring for the period m+1, ..., n
m <- 100 # size of the learning sample
n <- 150 # monitoring horizon

## The learning sample
set.seed(123)
x.learn <- matrix(rnorm(m))

## New observations with a large change in mean
## to simulate monitoring for the period m+1, ..., n
k <- 125 ## the true change-point
x <- matrix(c(rnorm(k-m), rnorm(n-k, mean = 2)))

## Step 1: Simulation of B trajectories of the detector functions under the null
B <- 1e4

## Under the assumption of serial independence
## (no need to specify the learning sample)
traj.sim <- simClosedEndCpDist(m = m, n = n, B = B, method = "sim")

## Without the assumption of serial independence
## (the learning sample is compulsory; the larger it is, the better;
## the monitoring horizon n should not be too large)
traj.mult <- simClosedEndCpDist(x.learn = x.learn, n = n, B = B, method = "mult")

## Step 2: Compute threshold functions with p steps
p <- 2
tf.sim <- threshClosedEndCpDist(traj.sim, p = p)   # p can be taken large if B is very large
tf.mult <- threshClosedEndCpDist(traj.mult, p = p) # p should not be taken too large
                                                   # unless both m and B are very large

## Step 3: Compute the detectors for the monitoring period m+1, ..., n
det <- detClosedEndCpDist(x.learn = x.learn, x = x)

## Step 4: Monitoring
## Simulate the monitoring with the first threshold function
monClosedEndCpDist(det, tf.sim)
## Simulate the monitoring with the second threshold function
monClosedEndCpDist(det, tf.mult)
## Simulate the monitoring with the first threshold function
## and another detector function
monClosedEndCpDist(det, tf.sim, statistic = "mmk")

## Alternative steps 3 and 4:
## Compute the detectors for the monitoring period m+1, ..., m+20 only
det <- detClosedEndCpDist(x.learn = x.learn, x = x[1:20, , drop = FALSE])
## Simulate the monitoring with the first threshold function
monClosedEndCpDist(det, tf.sim)
## Simulate the monitoring with the second threshold function
monClosedEndCpDist(det, tf.mult)
## End(Not run)

seqOpenEndCpDist: Open-end Nonparametric Sequential Change-Point Detection Test for (Possibly) Multivariate Time Series Sensitive to Changes in the Distribution Function

Description

Open-end nonparametric sequential test for change-point detection based on a retrospective CUSUM statistic constructed from differences of empirical distribution functions. The observations can be univariate or multivariate (low-dimensional), and serially dependent. To carry out the test, two steps are required. The first step consists of computing a detector function. The second step consists of comparing the detector function to a suitable constant threshold function. Each of these steps corresponds to one of the functions in the usage section below. The current implementation is preliminary and not optimized for real-time monitoring (but could still be used for that). Details can be found in the first reference.

Usage

detOpenEndCpDist(x.learn, x, pts = NULL, r = NULL, sigma = NULL, kappa = 1.5, ...)
monOpenEndCpDist(det, alpha = 0.05, plot = TRUE)

Arguments

x.learn: a numeric matrix representing the learning sample.
x: a numeric matrix representing the observations collected after the beginning of the monitoring.
pts: a numeric matrix whose rows represent the evaluation points; if not provided by the user, chosen automatically from the learning sample using parameter r.
r: integer greater than or equal to 2 representing the number of evaluation points per dimension to be chosen from the learning sample; used only if pts = NULL.
sigma: a numeric matrix representing the covariance matrix to be used; if NULL, estimated by sandwich::lrvar().
kappa: constant involved in the point selection procedure; used only in the multivariate case; should be larger than 1.
...: optional arguments passed to sandwich::lrvar().
det: an object of class det.OpenEndCpDist representing a detector function computed using detOpenEndCpDist().
alpha: the value of the desired significance level for the sequential test.
plot: logical indicating whether the monitoring should be plotted.

Details

The testing procedure is described in detail in the first reference.

Value

Both functions return lists whose components have explicit names. The function monOpenEndCpDist() in particular returns a list whose components are
alarm: a logical indicating whether the detector function has exceeded the threshold function.
time.alarm: an integer corresponding to the time at which the detector function has exceeded the threshold function, or NA.
times.max: a vector of times at which the successive detectors have reached their maximum; this sequence of times can be used to estimate the time of change from the time of alarm.
time.change: an integer giving the estimated time of change if alarm is TRUE; the latter is simply the value in times.max which corresponds to time.alarm.
statistic: the value of statistic in the call of the function.
eta: the value of eta in the call of the function.
p: number of evaluation points of the empirical distribution functions.
pts: evaluation points of the empirical distribution functions.
alpha: the value of alpha in the call of the function.
sigma: the value of sigma in the call of the function.
detector: the successive values of the detector.
threshold: the value of the constant threshold for the detector.

References

<NAME>, <NAME> and <NAME> (2022), Multi-purpose open-end monitoring procedures for multivariate observations based on the empirical distribution function, 45 pages, https://arxiv.org/abs/2201.10311.
<NAME> and <NAME> (2021), Open-end nonparametric sequential change-point detection based on the retrospective CUSUM statistic, Electronic Journal of Statistics 15:1, pages 2288-2335, doi:10.1214/21-EJS1840.

See Also

detOpenEndCpMean() for the corresponding test sensitive to changes in the mean, selectPoints() for the underlying point selection procedure used in the multivariate case, and lrvar() for information on the estimation of the underlying long-run covariance matrix.

Examples

## Not run:
## Example of open-end monitoring
m <- 800    # size of the learning sample
nm <- 5000  # number of collected observations after the start
n <- nm + m # total number of observations
set.seed(456)

## Univariate, no change in distribution
r <- 5 # number of evaluation points
x <- rnorm(n)
## Step 1: Compute the detector
det <- detOpenEndCpDist(x.learn = matrix(x[1:m]), x = matrix(x[(m + 1):n]), r = r)
## Step 2: Monitoring
mon <- monOpenEndCpDist(det = det, alpha = 0.05, plot = TRUE)

## Univariate, change in distribution
k <- 2000 # m + k + 1 is the time of change
x[(m + k + 1):n] <- rt(nm - k, df = 3)
det <- detOpenEndCpDist(x.learn = matrix(x[1:m]), x = matrix(x[(m + 1):n]), r = r)
mon <- monOpenEndCpDist(det = det, alpha = 0.05, plot = TRUE)

## Bivariate, no change
d <- 2
r <- 4 # number of evaluation points per dimension
x <- matrix(rnorm(n * d), nrow = n, ncol = d)
det <- detOpenEndCpDist(x.learn = x[1:m, ], x = x[(m + 1):n, ], r = r)
mon <- monOpenEndCpDist(det = det, alpha = 0.05, plot = TRUE)

## Bivariate, change in the mean of the first margin
x[(m + k + 1):n, 1] <- x[(m + k + 1):n, 1] + 0.3
det <- detOpenEndCpDist(x.learn = x[1:m, ], x = x[(m + 1):n, ], r = r)
mon <- monOpenEndCpDist(det = det, alpha = 0.05, plot = TRUE)

## Bivariate, change in the dependence structure
x1 <- rnorm(n)
x2 <- c(rnorm(m + k, 0.2 * x1[1:(m + k)], sqrt((1 - 0.2^2))),
        rnorm(nm - k, 0.7 * x1[(m + k + 1):n], sqrt((1 - 0.7^2))))
x <- cbind(x1, x2)
det <- detOpenEndCpDist(x.learn = x[1:m, ], x = x[(m + 1):n, ], r = r)
mon <- monOpenEndCpDist(det = det, alpha = 0.05, plot = TRUE)
## End(Not run)

seqOpenEndCpMean: Open-end Nonparametric Sequential Change-Point Detection Test for Univariate Time Series Sensitive to Changes in the Mean

Description

Open-end nonparametric sequential test for change-point detection based on the retrospective CUSUM statistic. The observations need to be univariate but can be serially dependent. To carry out the test, two steps are required. The first step consists of computing a detector function. The second step consists of comparing the detector function to a suitable constant threshold function.
Each of these steps corresponds to one of the functions in the usage section below. The current implementation is preliminary and not optimized for real-time monitoring (but could still be used for that). Details can be found in the third reference.

Usage

detOpenEndCpMean(x.learn, x, sigma = NULL, b = NULL,
                 weights = c("parzen", "bartlett"))
monOpenEndCpMean(det, statistic = c("t", "s", "r", "e", "cs"), eta = 0.001,
                 gamma = 0.45, alpha = 0.05, sigma = NULL, plot = TRUE)

Arguments

x.learn: a numeric vector representing the learning sample.
x: a numeric vector representing the observations collected after the beginning of the monitoring for a change in mean.
sigma: an estimate of the long-run variance of the time series of which x.learn is a stretch. If set to NULL, sigma will be estimated using an approach similar to those described in the fourth reference.
b: strictly positive integer specifying the value of the bandwidth for the estimation of the long-run variance if sigma is not provided. If set to NULL, b will be estimated from x.learn using the function bOpt().
weights: a string specifying the kernel for creating the weights used for the estimation of the long-run variance if sigma is not provided; see Section 5 of the first reference.
det: an object of class det.cpMean representing a detector function computed using detOpenEndCpMean().
statistic: a string specifying the statistic/detector to be used for the monitoring; can be either "t", "s", "r", "e" or "cs"; "t" corresponds to the detector T_m in the third reference, "s" to the detector S_m, "r" to the detector R_m, "e" to the detector E_m, and "cs" to the so-called ordinary CUSUM detector denoted by Q_m in the third reference. Note that the detector E_m was proposed in the second reference.
eta: a real parameter whose role is described in detail in the third reference.
gamma: a real parameter that can improve the power of the sequential test at the beginning of the monitoring; possible values are 0, 0.1, 0.25, 0.45, 0.65 and 0.85, but not for all statistics; see the third reference.
alpha: the value of the desired significance level for the sequential test.
plot: logical indicating whether the monitoring should be plotted.

Details

The testing procedure is described in detail in the third reference. An alternative way of estimating the long-run variance is to use the function lrvar() of the package sandwich and to pass it through the argument sigma.

Value

Both functions return lists whose components have explicit names. The function monOpenEndCpMean() in particular returns a list whose components are
alarm: a logical indicating whether the detector function has exceeded the threshold function.
time.alarm: an integer corresponding to the time at which the detector function has exceeded the threshold function, or NA.
times.max: a vector of times at which the successive detectors "r" (if statistic = "r", statistic = "s" or statistic = "t") or "e" (if statistic = "e") have reached their maximum; a vector of NAs if statistic = "cs"; this sequence of times can be used to estimate the time of change from the time of alarm.
time.change: an integer giving the estimated time of change if alarm is TRUE; the latter is simply the value in times.max which corresponds to time.alarm.
statistic: the value of statistic in the call of the function.
eta: the value of eta in the call of the function.
gamma: the value of gamma in the call of the function.
alpha: the value of alpha in the call of the function.
sigma: the value of sigma in the call of the function.
detector: the successive values of the chosen detector.
threshold: the value of the constant threshold for the chosen detector.

References

<NAME> and <NAME> (2016), A dependent multiplier bootstrap for the sequential empirical copula process under strong mixing, Bernoulli 22:2, pages 927-968, https://arxiv.org/abs/1306.3930.
<NAME>, <NAME> and <NAME> (2021), A new approach for open-end sequential change point monitoring, Journal of Time Series Analysis 42:1, pages 63-84, https://arxiv.org/abs/1906.03225.
<NAME> and <NAME> (2021), Open-end nonparametric sequential change-point detection based on the retrospective CUSUM statistic, Electronic Journal of Statistics 15:1, pages 2288-2335, doi:10.1214/21-EJS1840.
<NAME> and <NAME> (2004), Automatic block-length selection for the dependent bootstrap, Econometric Reviews 23(1), pages 53-70.

See Also

cpMean() for the corresponding a posteriori (offline) test and detOpenEndCpDist() for the corresponding test for changes in the distribution function.

Examples

## Not run:
## Example of open-end monitoring
m <- 100 # size of the learning sample

## The learning sample
set.seed(123)
x.learn <- rnorm(m)

## New observations with a change in mean
## to simulate monitoring for the period m+1, ..., n
n <- 5000
k <- 2500 ## the true change-point
x <- c(rnorm(k-m), rnorm(n-k, mean = 0.2))

## Step 1: Compute the detector
det <- detOpenEndCpMean(x.learn = x.learn, x = x)

## Step 2: Monitoring with the default detector
m1 <- monOpenEndCpMean(det)
str(m1)

## Monitoring with another detector
m2 <- monOpenEndCpMean(det, statistic = "s", gamma = 0.85)
str(m2)
## End(Not run)

stDistAutocop: Combined Test of Stationarity for Univariate Continuous Time Series Sensitive to Changes in the Distribution Function and the Autocopula

Description

A nonparametric test of stationarity for univariate continuous time series resulting from a combination à la Fisher of the change-point test sensitive to changes in the distribution function implemented in cpDist() and the change-point test sensitive to changes in the autocopula implemented in cpAutocop(). Approximate p-values are obtained by combining two multiplier resampling schemes. Details can be found in the first reference.

Usage

stDistAutocop(x, lag = 1, b = NULL, pairwise = FALSE,
              weights = c("parzen", "bartlett"), m = 5, N = 1000)

Arguments

x: a one-column matrix containing continuous observations.
lag: an integer specifying at which lag to consider the autocopula; the autocopula is a (lag+1)-dimensional copula.
b: strictly positive integer specifying the value of the bandwidth parameter determining the serial dependence when generating dependent multiplier sequences using the 'moving average approach'; see Section 5 of the second reference. If set to NULL, b will be estimated using the function bOptEmpProc(); see the first reference.
pairwise: a logical specifying whether the test should focus only on the bivariate margins of the (lag+1)-dimensional autocopula.
weights: a string specifying the kernel for creating the weights used in the generation of dependent multiplier sequences within the 'moving average approach'; see Section 5 of the second reference.
m: a strictly positive integer specifying the number of points of the uniform grid on (0, 1) involved in the estimation of the bandwidth parameter; see Section 5 of the second reference.
N: number of multiplier replications.

Details

The testing procedure is described in detail in the second section of the first reference.
Value

An object of class htest which is a list, some of the components of which are
statistic: value of the test statistic.
p.value: corresponding approximate p-value obtained à la Fisher.
component.p.values: p-values of the component tests arising in the combination.
b: the value of parameter b.

Note

This is a test for continuous univariate time series.

References

<NAME>, <NAME> and <NAME> (2019), Combining cumulative sum change-point detection tests for assessing the stationarity of univariate time series, Journal of Time Series Analysis 40, pages 124-150, https://arxiv.org/abs/1709.02673.
<NAME> and <NAME> (2016), A dependent multiplier bootstrap for the sequential empirical copula process under strong mixing, Bernoulli 22:2, pages 927-968, https://arxiv.org/abs/1306.3930.

See Also

cpDist() and cpAutocop() for the component tests.

Examples

## AR1 example
n <- 200
k <- n/2 ## the true change-point
x <- matrix(c(arima.sim(list(ar = -0.1), n = k),
              arima.sim(list(ar = 0.5), n = n - k)))
stDistAutocop(x)

## AR2 example
n <- 200
k <- n/2 ## the true change-point
x <- matrix(c(arima.sim(list(ar = c(0,-0.1)), n = k),
              arima.sim(list(ar = c(0,0.5)), n = n - k)))
## Not run:
stDistAutocop(x)
stDistAutocop(x, lag = 2)
## End(Not run)
stDistAutocop(x, lag = 2, pairwise = TRUE)
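For readers unfamiliar with the combination à la Fisher mentioned above, the classical recipe combines k independent p-values through the statistic -2 * sum(log(p_i)), which follows a chi-squared distribution with 2k degrees of freedom under the null. The sketch below is plain R illustrating that idea for k = 2; it is not the package's internal procedure, which accounts for the dependence between the two component tests via multiplier resampling:

## Classical Fisher combination of two p-values (illustrative sketch only;
## the chi-squared approximation is exact only for independent component tests)
fisherCombination <- function(p1, p2) {
  stat <- -2 * (log(p1) + log(p2))                 # Fisher's combined statistic
  pval <- pchisq(stat, df = 4, lower.tail = FALSE) # chi-squared with 2k = 4 df
  c(statistic = stat, p.value = pval)
}
fisherCombination(0.04, 0.20)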
Package ‘nhanesA’    July 16, 2023

Version 0.7.4
Date 2023-07-16
Title NHANES Data Retrieval
BugReports https://github.com/cjendres1/nhanes/issues
Depends R (>= 3.0.0)
Imports stringr, foreign, rvest, magrittr, xml2, plyr
Description Utility to retrieve data from the National Health and Nutrition Examination Survey (NHANES) website <https://www.cdc.gov/nchs/nhanes/index.htm>.
License GPL (>= 2)
Encoding UTF-8
URL https://cran.r-project.org/package=nhanesA
Suggests knitr, rmarkdown
VignetteBuilder knitr
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-07-16 16:00:03 UTC

R topics documented: browseNHANES, nhanes, nhanesAttr, nhanesCodebook, nhanesDXA, nhanesSearch, nhanesSearchTableNames, nhanesSearchVarName, nhanesTables, nhanesTableVars, nhanesTranslate.

browseNHANES: Open a browser to NHANES.

Description

The browser may be directed to a specific year, survey, or table.

Usage

browseNHANES(year = NULL, data_group = NULL, nh_table = NULL)

Arguments

year: The year in yyyy format where 1999 <= yyyy.
data_group: The type of survey (DEMOGRAPHICS, DIETARY, EXAMINATION, LABORATORY, QUESTIONNAIRE). Abbreviated terms may also be used: (DEMO, DIET, EXAM, LAB, Q).
nh_table: The name of an NHANES table.

Details

browseNHANES will open a web browser to the specified NHANES site.

Value

No return value.

Examples

browseNHANES()                   # Defaults to the main data sets page
browseNHANES(2005)               # The main page for the specified survey year
browseNHANES(2009, 'EXAM')       # Page for the specified year and survey group
browseNHANES(nh_table = 'VIX_D') # Page for a specific table
browseNHANES(nh_table = 'DXA')   # DXA main page

nhanes: Download an NHANES table and return as a data frame.

Description

Use to download NHANES data tables that are in SAS format.

Usage

nhanes(nh_table, includelabels = FALSE)

Arguments

nh_table: The name of the specific table to retrieve.
includelabels: If TRUE, then include SAS labels as variable attribute (default = FALSE).

Details

Downloads a table from the NHANES website as is, i.e. in its entirety with no modification or cleansing. NHANES tables are stored in SAS '.XPT' format but are imported as a data frame. Function nhanes cannot be used to import limited access data.

Value

The table is returned as a data frame.

Examples

nhanes('BPX_E')
nhanes('FOLATE_F', includelabels = TRUE)

nhanesAttr: Returns the attributes of an NHANES data table.

Description

Returns attributes such as number of rows, columns, and memory size, but does not return the table itself.

Usage

nhanesAttr(nh_table)

Arguments

nh_table: The name of the specific table to retrieve.

Details

nhanesAttr allows one to check the size and other characteristics of a data table before importing into R. To retrieve these characteristics, the specified table is downloaded, characteristics are determined, then the table is deleted.
Value

The following attributes are returned as a list:
nrow = number of rows
ncol = number of columns
names = name of each column
unique = true if all SEQN values are unique
na = number of 'NA' cells in the table
size = total size of table in bytes
types = data types of each column

Examples

nhanesAttr('BPX_E')
nhanesAttr('FOLATE_F')

nhanesCodebook: Display codebook for selected variable.

Description

Returns full NHANES codebook including Variable Name, SAS Label, English Text, Target, and Value distribution.

Usage

nhanesCodebook(nh_table, colname, dxa = FALSE)

Arguments

nh_table: The name of the NHANES table that contains the desired variable.
colname: The name of the table column (variable).
dxa: If TRUE then the 2005-2006 DXA codebook will be used (default=FALSE).

Details

Each NHANES variable has a codebook that provides a basic description as well as the distribution or range of values. This function returns the full codebook information for the selected variable.

Value

The codebook is returned as a list object. Returns NULL upon error.

Examples

nhanesCodebook('AUX_D', 'AUQ020D')
nhanesCodebook('BPX_J', 'BPACSZ')

nhanesDXA: Import Dual Energy X-ray Absorptiometry (DXA) data.

Description

DXA data were acquired from 1999-2006.

Usage

nhanesDXA(year, suppl = FALSE, destfile = NULL)

Arguments

year: The year of the data to import, where 1999 <= year <= 2006.
suppl: If TRUE then retrieve the supplemental data (default=FALSE).
destfile: The name of a destination file. If NULL then the data are imported into the R environment but no file is created.

Details

Provide destfile in order to write the data to file. If destfile is not provided then the data will be imported into the R environment.

Value

By default the table is returned as a data frame. When downloading to file, the return argument is the integer code from download.file, where 0 means success and non-zero indicates failure to download.

Examples

dxa_b <- nhanesDXA(2001)
dxa_c_s <- nhanesDXA(2003, suppl=TRUE)
## Not run: nhanesDXA(1999, destfile="dxx.xpt")

nhanesSearch: Perform a search over the comprehensive NHANES variable list.

Description

The descriptions in the master variable list will be filtered by the provided search terms to retrieve a list of relevant variables. The search can be restricted to specific survey years by specifying ystart and/or ystop.

Usage

nhanesSearch(
  search_terms = NULL,
  exclude_terms = NULL,
  data_group = NULL,
  ignore.case = FALSE,
  ystart = NULL,
  ystop = NULL,
  includerdc = FALSE,
  nchar = 128,
  namesonly = FALSE
)

Arguments

search_terms: List of terms or keywords.
exclude_terms: List of exclusive terms or keywords.
data_group: Which data groups (e.g. DIET, EXAM, LAB) to search. Default is to search all groups.
ignore.case: Ignore case if TRUE (default=FALSE).
ystart: Four digit year of first survey included in search, where ystart >= 1999.
ystop: Four digit year of final survey included in search, where ystop >= ystart.
includerdc: If TRUE then RDC only tables are included in list (default=FALSE).
nchar: Truncates the variable description to a max length of nchar.
namesonly: If TRUE then only the table names are returned (default=FALSE).

Details

nhanesSearch is useful to obtain a comprehensive list of relevant tables. Search terms will be matched against the variable descriptions in the NHANES Comprehensive Variable Lists. Matching variables must have at least one of the search_terms and not have any exclude_terms. The search may be restricted to specific surveys using ystart and ystop.
If no arguments are given, then nhanesSearch returns the complete variable list.

Value

Returns a data frame that describes variables that matched the search terms. If namesonly=TRUE, then a character vector of table names that contain matched variables is returned.

Examples

nhanesSearch("bladder", ystart=2001, ystop=2008, nchar=50)
nhanesSearch("urin", exclude_terms="During", ystart=2009)
nhanesSearch(c("urine", "urinary"), ignore.case=TRUE, ystop=2006, namesonly=TRUE)

nhanesSearchTableNames: Search for matching table names

Description

Returns a list of table names that match a specified pattern.

Usage

nhanesSearchTableNames(
  pattern = NULL,
  ystart = NULL,
  ystop = NULL,
  includerdc = FALSE,
  includewithdrawn = FALSE,
  nchar = 128,
  details = FALSE
)

Arguments

pattern: Pattern of table names to match.
ystart: Four digit year of first survey included in search, where ystart >= 1999.
ystop: Four digit year of final survey included in search, where ystop >= ystart.
includerdc: If TRUE then RDC only tables are included (default=FALSE).
includewithdrawn: If TRUE then withdrawn tables are included (default=FALSE).
nchar: Truncates the variable description to a max length of nchar.
details: If TRUE then complete table information from the comprehensive data list is returned (default=FALSE).

Details

Searches the Doc File field in the NHANES Comprehensive Data List (see https://wwwn.cdc.gov/nchs/nhanes/search/DataPa) for tables that match a given name pattern. Only a single pattern may be entered.

Value

Returns a character vector of table names that match the given pattern. If details=TRUE, then a data frame of table attributes is returned. NULL is returned when an HTML read error is encountered.

Examples

nhanesSearchTableNames('BMX')
nhanesSearchTableNames('HPVS', includerdc=TRUE, details=TRUE)

nhanesSearchVarName: Search for tables that contain a specified variable.

Description

Returns a list of table names that contain the variable.

Usage

nhanesSearchVarName(
  varname = NULL,
  ystart = NULL,
  ystop = NULL,
  includerdc = FALSE,
  nchar = 128,
  namesonly = TRUE
)

Arguments

varname: Name of variable to match.
ystart: Four digit year of first survey included in search, where ystart >= 1999.
ystop: Four digit year of final survey included in search, where ystop >= ystart.
includerdc: If TRUE then RDC only tables are included in list (default=FALSE).
nchar: Truncates the variable description to a max length of nchar.
namesonly: If TRUE then only the table names are returned (default=TRUE).

Details

The NHANES Comprehensive Variable List is scanned to find all data tables that contain the given variable name. Only a single variable name may be entered, and only exact matches will be found.

Value

By default, a character vector of table names that include the specified variable is returned. If namesonly=FALSE, then a data frame of table attributes is returned.

Examples

nhanesSearchVarName('BMXLEG')
nhanesSearchVarName('BMXHEAD', ystart=2003)

nhanesTables: Returns a list of table names for the specified survey group.

Description

Enables quick display of all available tables in the survey group.

Usage

nhanesTables(
  data_group,
  year,
  nchar = 128,
  details = FALSE,
  namesonly = FALSE,
  includerdc = FALSE
)

Arguments

data_group: The type of survey (DEMOGRAPHICS, DIETARY, EXAMINATION, LABORATORY, QUESTIONNAIRE). Abbreviated terms may also be used: (DEMO, DIET, EXAM, LAB, Q).
year: The year in yyyy format where 1999 <= yyyy.
nchar: Truncates the table description to a max length of nchar.
details: If TRUE then a more detailed description of the tables is returned (default=FALSE).
namesonly: If TRUE then only the table names are returned (default=FALSE).
includerdc: If TRUE then RDC only tables are included in list (default=FALSE).

Details

Function nhanesTables retrieves a list of tables and a description of their contents from the NHANES website. This provides a convenient way to browse the available tables. NULL is returned when an HTML read error is encountered.

Value

Returns a data frame that contains table attributes. If namesonly=TRUE, then a character vector of table names is returned.

Examples

nhanesTables('EXAM', 2007)
nhanesTables('LAB', 2009, details=TRUE, includerdc=TRUE)
nhanesTables('Q', 2005, namesonly=TRUE)
nhanesTables('DIET', 'P')
nhanesTables('EXAM', 'Y')

nhanesTableVars: Displays a list of variables in the specified NHANES table.

Description

Enables quick display of table variables and their definitions.

Usage

nhanesTableVars(
  data_group,
  nh_table,
  details = FALSE,
  nchar = 128,
  namesonly = FALSE
)

Arguments

data_group: The type of survey (DEMOGRAPHICS, DIETARY, EXAMINATION, LABORATORY, QUESTIONNAIRE). Abbreviated terms may also be used: (DEMO, DIET, EXAM, LAB, Q).
nh_table: The name of the specific table to retrieve.
details: If TRUE then all columns in the variable description are returned (default=FALSE).
nchar: The number of characters in the Variable Description to print. The default length is 128, which is set to enhance readability because variable descriptions can be very long.
namesonly: If TRUE then only the variable names are returned (default=FALSE).

Details

NHANES tables may contain more than 100 variables. Function nhanesTableVars provides a concise display of variables for a specified table, which helps to ascertain quickly if the table is of interest. NULL is returned when an HTML read error is encountered.

Value

Returns a data frame that describes variable attributes for the specified table. If namesonly=TRUE, then a character vector of the variable names is returned.

Examples

nhanesTableVars('LAB', 'CBC_E')
nhanesTableVars('EXAM', 'OHX_E', details=TRUE, nchar=50)
nhanesTableVars('DEMO', 'DEMO_F', namesonly = TRUE)

nhanesTranslate: Display code translation information.

Description

Returns code translations for categorical variables, which appear in most NHANES tables.

Usage

nhanesTranslate(
  nh_table,
  colnames = NULL,
  data = NULL,
  nchar = 32,
  mincategories = 2,
  details = FALSE,
  dxa = FALSE
)

Arguments

nh_table: The name of the NHANES table to retrieve.
colnames: The names of the columns to translate.
data: If a data frame is passed, then code translation will be applied directly to the data frame. In that case the return argument is the code-translated data frame.
nchar: Applies only when data is defined. Code translations can be very long. Truncate the length by setting nchar (default = 32).
mincategories: The minimum number of categories needed for code translations to be applied to the data (default=2).
details: If TRUE then all available table translation information is displayed (default=FALSE).
dxa: If TRUE then the 2005-2006 DXA translation table will be used (default=FALSE).

Details

Most NHANES data tables have encoded values. E.g. 1 = 'Male', 2 = 'Female'. Thus it is often helpful to view the code translations and perhaps insert the translated values in a data frame. Only a single table may be specified, but multiple variables within that table can be selected. Code translations are retrieved for each variable.
Value

The code translation table (or translated data frame when data is defined). Returns NULL upon error.

Examples

nhanesTranslate('DEMO_B', c('DMDBORN','DMDCITZN'))
nhanesTranslate('BPX_F', 'BPACSZ', details=TRUE)
nhanesTranslate('BPX_F', 'BPACSZ', data=nhanes('BPX_F'))
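Taken together, these functions support a simple search, browse, download, and translate workflow. The sketch below strings together calls that all appear in the examples above; the table DEMO_B and its variables DMDBORN and DMDCITZN are reused from the nhanesTranslate example, and any other table could be substituted:

library(nhanesA)

## 1. List the demographics tables available for the 2001-2002 cycle
nhanesTables('DEMO', 2001, namesonly = TRUE)

## 2. Inspect the variables of a candidate table
nhanesTableVars('DEMO', 'DEMO_B', namesonly = TRUE)

## 3. Download the table and apply code translations directly to it
demo_b <- nhanes('DEMO_B')
demo_b <- nhanesTranslate('DEMO_B', c('DMDBORN', 'DMDCITZN'), data = demo_b)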
Package ‘phia’    October 14, 2022

Version 0.2-1
Date 2015-11-07
Title Post-Hoc Interaction Analysis
Description Analysis of terms in linear, generalized and mixed linear models, on the basis of multiple comparisons of factor contrasts. Specially suited for the analysis of interaction terms.
Depends car, graphics, stats
Suggests nlme, lme4
Imports Matrix, grDevices, methods, utils
License GPL (>= 3)
URL https://github.com/heliosdrm/phia
LazyData yes
NeedsCompilation no
Author <NAME> [aut, cre], <NAME> [ctb], R Core Team [ctb]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2015-11-07 14:53:33

R topics documented: phia-package, Boik, contrastCoefficients, interactionMeans, Keselman, Rosnow, testFactors, testInteractions.

phia-package: Post-Hoc Interaction Analysis

Description

Analysis of the expected values and other terms in linear, generalized, and mixed linear models, on the basis of multiple comparisons of factor contrasts. Specially suited for the analysis of interaction effects.

Details

Package: phia
Type: Package
Version: 0.2-1
Date: 2015-11-07
License: GPL (>= 3)

This package contains functions that may be used for the post-hoc analysis of any term of linear models (univariate or multivariate), generalized and mixed linear models. The function testFactors provides a flexible user interface for defining combinations of factor levels and covariates, to evaluate and test the model, using the function linearHypothesis from package car. testInteractions uses this function for multiple comparisons of simple effects, interaction residuals, interaction contrasts, or user-defined contrasts. interactionMeans may be used to explore the 'cell means' of factorial designs, and plot main effects or first-order interactions.

Author(s)

<NAME>, with code contributions from <NAME> (package car) and the R Core Team (stats package).
Maintainer: <NAME> <<EMAIL>>

Boik: Contrived Data of Treatments for Hemophobia

Description

Data set based on the hypothetical example used by R.J. Boik (1979) to explain the analysis of interaction contrasts. It represents the electrodermal response of 72 students complaining of hemophobia, treated with different fear reduction therapies and doses of antianxiety medication, in a balanced factorial design. The observed values of the dependent variable (not given in the original article) are contrived so that the results of all the tests are coherent with the examples.

Usage

Boik

Format

A data frame with 72 observations and three columns:
therapy: Fear reduction therapy. Factor with levels control, T1, T2.
medication: Dose of antianxiety medication. Ordered factor with levels placebo, D1, D2, D3.
edr: Electrodermal response (in arbitrary units).

Note

The anova table in Boik's article (p. 1085) has a misprint in the MS value for 'Treatment B' (medication): it should be 790.32, instead of 970.32.

Source

Boik, R. J. (1979). 'Interactions, Partial Interactions, and Interaction Contrasts in the Analysis of Variance', Psychological Bulletin, 86(5), 1084-1089.

contrastCoefficients: Calculate Coefficient Matrices of Factor Contrasts

Description

Take symbolic formulas of contrasts across the levels of one or more factors, and return a list of matrices with the corresponding linear combination coefficients, in a suitable form to use in testFactors, or as custom contrasts in testInteractions.

Usage

contrastCoefficients(..., contrast.definitions, data=parent.frame(), normalize=FALSE)

Arguments

..., contrast.definitions: definitions of the contrasts.
data: list, data frame or environment that contains the factors symbolically represented in the contrast definitions.
normalize: logical value: should the coefficients be normalized to unit norm?

Details

In the context of this function, a "contrast" means a linear combination of factor levels (regardless of the dummy coefficients that are used to code those levels in fitted models). For a factor f with three levels f1, f2, f3, this could be a single level (e.g. f ~ f1), a "pairwise" contrast (e.g. f ~ f1 - f2), an average (e.g. f ~ (f1 + f2 + f3) / 3), or any other linear combination. Such arithmetic operations are usually applied to the values of a dependent variable, conditioned to the value of the represented factor, but those symbolic representations, as if the levels of the factors were themselves combined, are useful abstractions that may come in handy to define hypotheses about the effects of the factor.

This function takes one or more formulas of that type, and returns matrices of the linear coefficients that define such contrasts. For the previous examples, it would return the column matrices list(f=matrix(c(f1=1, f2=0, f3=0))), list(f=matrix(c(f1=1, f2=-1, f3=0))), and list(f=matrix(c(f1=0.333, f2=0.333, f3=0.333))), respectively. The factors must be defined in the data frame, list, or environment given in data, with the name provided in the right hand side of the formulas. By default this is the parent.frame, which in normal interactive use is the global environment.

The contrast matrices are returned in a named list, where the names are the represented factors, as required by the arguments levels and custom of the functions testFactors and testInteractions, respectively. When more than one formula is given for the same factor, all the corresponding columns of coefficients are bound into a single matrix.

Other ways of representing contrasts, allowed by testFactors and testInteractions, can also be mixed with the formulas as input arguments. Such alternative contrast definitions are just appended without changes in the returned list, after the matrices created from the formulas. (Notice that if other coefficient vectors or matrices are explicitly entered, these will not be combined with the ones calculated from formulas.)

In case of having the contrast definitions stored in a list, that list can be entered as the argument contrast.definitions. This option is recommended if the contrast definitions are mixed with named elements which could create a conflict with data, normalize, or contrast.definitions itself (although this is unlikely to happen, specially if all the definitions are given as formulas). In any event, only one alternative of entering the definitions is allowed. If contrast.definitions is given, all the other definitions are ignored with a warning.

Value

A named list, where each element has the name of a factor, and contains a numeric matrix with the contrast coefficients. Each row of such matrices is associated to a level of the factor, in the order given by levels.

Author(s)

<NAME>, <<EMAIL>>

See Also

testFactors, interactionMeans.
Examples

# Calculate the coefficients of example(testInteractions)
# cntrl.vs.T1 <- list(therapy = c(1, -1, 0))
contrastCoefficients(therapy ~ control - T1, data = Boik)
# cntrl.vs.T2 <- list(therapy = c(1, 0, -1))
contrastCoefficients(therapy ~ control - T2, data = Boik)
# plcb.vs.doses <- list(medication = c(1, -1/3, -1/3, -1/3))
contrastCoefficients(medication ~ placebo - (D1+D2+D3)/3, data = Boik)
# Combine cntrl.vs.T1 and plcb.vs.doses
contrastCoefficients(
  therapy ~ control - T1,
  medication ~ placebo - (D1+D2+D3)/3, data = Boik)
# Put various contrasts of the same factor in a matrix, and normalize them
contrastCoefficients(
  therapy ~ control - T1,
  therapy ~ control - T2,
  medication ~ placebo - (D1+D2+D3)/3, data = Boik, normalize=TRUE)

interactionMeans: Calculate and Plot Adjusted Means for Interactions

Description

Creates a data frame with the adjusted means of a fitted model or the slopes associated to its covariates, plus the standard error of those values, for all the interactions of given factors, including intra-subjects factors in multivariate linear models with intra-subjects designs. These interactions may be plotted by pairs of factors.

Usage

interactionMeans(model, factors=names(xlevels), slope=NULL, ...)

## S3 method for class 'interactionMeans'
plot(x, atx=attr(x,"factors"), traces=atx, multiple=TRUE, y.equal=FALSE,
     legend=TRUE, legend.margin=0.2, cex.legend=1, abbrev.levels=FALSE,
     type="b", pch=0:6, errorbar, ...)

Arguments

model: fitted model. Currently supported classes include "lm", "glm", "mlm", "lme", and "mer" or "merMod" (excluding models fitted by nlmer).
factors: character vector with the names of interacting factors. All the factors of the model are used by default; use NULL for calculating the overall mean.
slope: character vector with the names of the interacting covariates associated to the slope that will be represented; if it is NULL (the default value), the function will calculate adjusted mean values.
x: an object created by the function interactionMeans.
atx: character vector with the names of factors that will be represented in the horizontal axis of the plots. All the factors represented in x are used by default.
traces: character vector with the names of the factors that will be represented as different traces in a plot (the same as atx by default).
multiple: logical indicating if the plots shall be combined in multi-panel figures.
y.equal: logical indicating if all the plots should have the same range in the y-axis (all plots would be expanded to the maximum range).
legend: logical indicating if legends should be drawn in case of plots with multiple traces.
legend.margin: fraction of the combined plotting area that shall be reserved for drawing the legends, if they are required.
cex.legend: character expansion for the legend, relative to the value of par("cex").
abbrev.levels: a logical indicating whether the factor levels are to be abbreviated in the plot, or an integer that specifies the minimum length of such abbreviations. See abbreviate.
type, pch: type of plot and point characters, as used by matplot.
errorbar: definition of the error bars in the plots (see Details).
...: further arguments passed to testFactors or matplot.

Details

This function calculates the adjusted values of the model and their standard errors for interactions between factors, at fixed values of covariates, if they exist. The main or crossed effect of covariates is represented by their "slope", i.e. the variation rate of the response with respect to the product of the specified covariates.
The default value of the covariates (and of the offset, if any) is their average in the model data frame, and it can be changed by the arguments covariates or offset, passed down to testFactors. Note that in generalized linear models, standard errors and slopes are referred to the link function, not to the mean (see testFactors for details, and how to force the calculation of the link function instead of the response for adjusted means).

In multivariate linear models, the adjusted means or slopes are calculated separately for each column by default, but it is possible to define an intra-subjects design of factors across columns, and put all columns in one. This may be defined by the argument idata passed down to testFactors (see Anova or linearHypothesis in package car for further details). If such transformation is done, it is also possible to include the factors of the intra-subjects design in factors, for calculating their main effects or interactions.

The generic plot function creates matrices of interaction plots, with the main effects of each factor represented in the diagonal, and the interactions between each pair of factors in the rest of panels. For multivariate models without intra-subjects design, a new device for each variable will be created. By default it also prints error bars around the means, plus/minus their standard errors. The size of the error bars can be adjusted by the argument errorbar. Currently supported definitions are strings with the pattern ciXX, where XX is a number between 01 and 99, standing for the XX% confidence interval.

The adjusted means and error bars of generalized models (fitted with glm or glmer) are plotted on the scale of the link function, although the y-axis is labelled on the scale of the response (unless the link function had been forced in the calculation of the means).

If the interactions involve many factors, it may be convenient to plot each panel in a different device (with multiple=FALSE), or select a subset of factors to be plotted with the arguments atx and traces. A panel will be plotted for each pair of factors defined by crossing these arguments; if the crossed factors are the same one, that panel will show its main effect.

Value

interactionMeans returns an object of class "interactionMeans", that contains a data frame with the factor interactions and the means or slopes of the model adjusted to them, and some attributes used for plotting.

Note

The purpose of the plot method is similar to the function interaction.plot, but it uses the lower-level function matplot, so the aspect of the plots is different.

Author(s)

<NAME>, <<EMAIL>>

See Also

testFactors, testInteractions.
Examples

# Interaction between two factors
# See ?Adler for a description of the data set
mod.adler <- lm(rating ~ instruction * expectation, data=Adler)
(means.adler <- interactionMeans(mod.adler))
plot(means.adler, abbrev.levels=TRUE)

# Effect of factors on the slopes of the model
# See ?SLID for a description of the data set
SLID$logwages <- log2(SLID$wages)
mod.slid <- lm(logwages ~ education + age * (sex * language), data=SLID)
(slopes.slid <- interactionMeans(mod.slid, slope="age"))
plot(slopes.slid)

# Include intra-subjects factors
# See ?OBrienKaiser for a description of the data set
mod.ok <- lm(cbind(pre.1, pre.2, pre.3, pre.4, pre.5,
                   post.1, post.2, post.3, post.4, post.5,
                   fup.1, fup.2, fup.3, fup.4, fup.5) ~ treatment*gender,
             data=OBrienKaiser)
# Intra-subjects data:
phase <- factor(rep(c("pretest", "posttest", "followup"), each=5))
hour <- ordered(rep(1:5, 3))
idata <- data.frame(phase, hour)
# Calculate all interactions, but plot only the interactions between
# hour (represented in the x-axis) and the other factors (in traces)
means.ok <- interactionMeans(mod.ok, idata=idata)
plot(means.ok, atx="hour", traces=c("gender","treatment","phase"))

Keselman: Repeated-Measures Psychophysiological Experiment

Description

This data set represents the outcome of a hypothetical experiment, that recorded psychophysiological measures of subjects with different susceptibility to stressors. Each subject performed a task at four different levels of challenge. Data artificially generated from a multivariate lognormal distribution, with unequal variances and covariance structure related to group sample sizes.

Usage

Keselman1
Keselman2

Format

Keselman1 is a data frame of 156 rows and 4 columns, with the measures of 39 subjects in a balanced design. Keselman2 is an unbalanced subset of Keselman1, with 120 rows and 4 columns, corresponding to the measures of 30 subjects. Both data frames contain the following variables:
subject: Integer identity of the subject.
group: Classification of the subjects according to their susceptibility to stressors. Factor with levels G1, G2, and G3.
challenge: Level of challenge of the task. Ordered factor with levels M1, M2, M3, M4.
measure: Psychophysiological measure.

Source

<NAME>. (1998). 'Testing treatment effects in repeated measures designs: an update for psychophysiological researchers', Psychophysiology, 35(4), 470-478.

Rosnow: Rosnow's and Rosenthal's Baseball Performance Data

Description

Contrived data set of a hypothetical study about the efficacy of a new training method for improving the performance of baseball players. 2x2 factorial design with two factors: training method and expertise of the player.

Usage

Rosnow

Format

A data frame with 36 observations and three columns:
method: Training method. Factor with levels control and Ralphing.
experience: Experience of the player in championship competition. Factor with levels no, yes.
hits: Performance scores, measured as total number of hits in an experimentally devised game.

Source

<NAME>., & <NAME>. (1989). 'Definition and Interpretation of Interaction Effects', Psychological Bulletin, 105(1), 143-146.

testFactors: Evaluate and Test Combinations of Factor Levels

Description

Calculates and tests the adjusted mean value of the response and other terms of a fitted model, for specific linear combinations of factor levels and specific values of the covariates.
This function is specially indicated for post-hoc analyses of models with factors, to test pairwise comparisons of factor levels, simple main effects, and interaction contrasts. In multivariate models, it is possible to define and test contrasts of intra-subjects factors, as in a repeated-measures analysis.

Usage

## Default S3 method:
testFactors(model, levels, covariates, offset, terms.formula=~1,
            inherit.contrasts=FALSE, default.contrasts=c("contr.sum","contr.poly"),
            lht=TRUE, ...)
## S3 method for class 'lm'
testFactors(model, ...)
## S3 method for class 'glm'
testFactors(model, ..., link=FALSE)
## S3 method for class 'mlm'
testFactors(model, levels, covariates, offset, terms.formula=~1,
            inherit.contrasts=FALSE, default.contrasts=c("contr.sum","contr.poly"),
            idata, icontrasts=default.contrasts, lht=TRUE, ...)
## S3 method for class 'lme'
testFactors(model, ...)
## S3 method for class 'mer'
testFactors(model, ..., link=FALSE)
## S3 method for class 'merMod'
testFactors(model, ..., link=FALSE)
## S3 method for class 'testFactors'
summary(object, predictors=TRUE, matrices=TRUE, covmat=FALSE, ...)

Arguments

model: fitted model. Currently supported classes include "lm", "glm", "mlm", "lme", and "mer" or "merMod" (excluding models fitted by nlmer).
levels: list that defines the factor levels that will be contrasted; see the Details for more information.
covariates: optional numeric vector with specific values of the covariates (see Details).
offset: optional numeric value with a specific value of the offset (if any).
terms.formula: formula that defines the terms of the model that will be calculated and tested. The default value is ~1, and stands for the adjusted mean value. See the Details for more information.
inherit.contrasts: logical value: should the default contrasts of model factors be inherited from the model data frame?
default.contrasts: names of contrast-generating functions to be applied by default to factors and ordered factors, respectively, if inherit.contrasts is FALSE (the default); the contrasts must produce an intra-subjects model matrix in which different terms are orthogonal. The default is c("contr.sum", "contr.poly").
lht: logical indicating if the adjusted values are tested (via linearHypothesis).
link: for models fitted with glm or glmer, logical indicating if the adjusted mean values should represent the link function (FALSE by default, i.e. represent the adjusted means of the response variable).
idata: an optional data frame giving a factor or factors defining the intra-subjects model for multivariate repeated-measures data, as defined in Anova or linearHypothesis.
icontrasts: names of contrast-generating functions to be applied in the within-subject "data". The default is the same as default.contrasts.
object: object returned by testFactors.
predictors: logical value: should summary return the values of the predictors used in the calculations?
matrices: logical value: should summary return the matrices used for testing by linearHypothesis?
covmat: logical value: should summary return the covariance matrix of the adjusted values?
...: other arguments passed down to linearHypothesis.

Details

The only mandatory argument is model, which may include any number of factor or numeric predictors, and one offset.
The simplest usage of this method, where no other argument is defined, calculates the adjusted mean of the model response variable, pooling over all the levels of factor predictors, and setting the numeric predictors (covariates and offset, if any) to their average values in the model data frame.
The calculations will be done for the linear combinations of factor levels defined by levels. This argument must be a list, with one element for each factor of the model that has to be manipulated (including factors of the intra-subjects design, if suitable). The factors that are not represented in this list will be pooled over, and elements that do not correspond to any factor of the model will be ignored with a warning. levels may be a named list, where the name of each element identifies the represented factor, and its contents may be one of the following:
1. A character string of length 1 or 2, with the name of one or two factor levels. In the former case, the calculations will be restricted to this level of the factor, as in a simple main effects analysis; in the latter, a pairwise contrast will be calculated between both factor levels.
2. A numeric vector without names, as long as the number of levels in the factor. This will create a linear combination of factor levels, with the elements of the vector as coefficients. For instance, if the factor f has three levels, an element f=c(0.5, 0.5, 0) will average the first two levels, and f=c(0.5, 0.5, -1) will contrast the average of the first two levels against the third one.
3. A numeric vector with names equal to some or all the levels of the factor. This is a simplification of the previous option, where some levels can be omitted, and the coefficient of each level is determined by the names of the vector, which do not have to follow a specific order. Omitted levels will automatically be set to zero.
4. A numeric matrix, as an extension of the two previous options for calculating several combinations at a time. Combinations are defined in columns, so if the matrix does not have row names, the number of rows must be equal to the number of levels in the factor; if the matrix does have row names, they must coincide with the levels of the factor.
Alternatively, levels may be a single formula or an unnamed list of formulas, of the type factorname ~ K1*level1 + K2*level2 ... (see contrastCoefficients for further details). Both types of lists (named lists of string or numeric vectors and matrices, and unnamed lists of formulas) may be mixed.
The argument covariates may be used for setting specific values of the model numeric predictors (covariates). It must be a vector of numeric values. If the elements of this vector have names, their values will be assigned to the covariates that match them; covariates of the model with names not represented in this vector will be set to their default value (the average in the model data frame), and elements with names that do not match covariates will be ignored. On the other hand, if covariates has no names and its length is equal to the number of covariates of the model, the values will be assigned to those covariates in the same order as they occur in the model. If it has a different length, the vector will be trimmed or recycled as needed to fit the number of covariates in the model; this feature may be used, for instance, to set all covariates to the same value, e.g. covariates = 0.
The argument offset can likewise be used to define a specific value for the offset of the model.
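As an illustrative sketch (not part of the original manual; it reuses prestige.mod, the model fitted in the Examples of this entry), the different shapes of levels and a named covariates vector could be used like this:
# Sketch: equivalent ways of specifying "levels" in testFactors(),
# assuming prestige.mod from the Examples section below.
# Two level names: pairwise contrast of "wc" vs. "bc"
testFactors(prestige.mod, levels=list(type=c("wc", "bc")))
# Named coefficients: average of "bc" and "wc" against "prof"
# (omitted levels would default to zero)
testFactors(prestige.mod, levels=list(type=c(bc=0.5, wc=0.5, prof=-1)))
# Formula notation, equivalent to the previous call
testFactors(prestige.mod, levels=list(type ~ (bc + wc)/2 - prof))
# Fix the covariate "education" at 10 years instead of its sample mean
testFactors(prestige.mod, levels=list(type=c("wc", "bc")), covariates=c(education=10))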
To analyse terms other than the adjusted mean value of the response, use the argument terms.formula. For instance, if the model has the covariates var1, var2, ..., the slopes of the response with respect to them may be added to the analysis by defining terms.formula as ~var1 + var2 .... This formula may be used more generally, for analysing interactions, omitting the mean response, adding the main effects of factors, etc. A different analysis is done for each term of this formula, which must also be contained in the formula of model. For instance, if terms.formula is equal to ~ var1*var2, the function will analyse the adjusted intercept, plus the terms var1, var2, and var1:var2. The intercept stands for the mean value of the response, and terms formed by one or more covariates stand for the slope of the response with respect to the product of those covariates. If any of the variables in the term is a factor, the function analyses a full set of contrasts for that factor of the remaining part of the term; for instance, if var1 were a factor, the term var1 would stand for the contrasts of the intercept, and var1:var2 would stand for the contrasts of the slope var2, across the levels of var1. The set of contrasts used in the analysis is normally defined by the argument default.contrasts: by default, if the factor is ordered it will be a set of “polynomial contrasts”, and otherwise “sum contrasts”; however, if inherit.contrasts is TRUE the contrasts will directly be copied from the ones used to define the model. Factors that have explicit contrasts defined in the model data frame will use those contrasts, regardless of the values defined for default.contrasts and inherit.contrasts. The analysis assumes that the contrasts are orthogonal to the intercept, which is the usual case if the default arguments are provided, and a warning will be issued if non-orthogonal contrasts are used; take special care not to use “treatment contrasts” if inherit.contrasts is set to TRUE or default.contrasts is changed.
In generalized linear models, the adjusted means represent the expected values of the response by default, but the expected value of the link function may be shown by setting the argument link=TRUE. On the other hand, slope values and standard errors always refer to the link function.
For multivariate models, the arguments idata and icontrasts may be used to define an intra-subjects model for multivariate repeated-measures data, as described for Anova or linearHypothesis in package car. Note, however, that the combinations of intra-subjects factor levels are defined in levels, and other arguments defined in those functions like idesign, imatrix or iterms will have no effect in testFactors.
The significance of adjusted values is tested by a call to linearHypothesis for each term, unless lht is set to FALSE. Extra arguments may be passed down to that function, for instance to specify the test statistic that will be evaluated.

Value
An object of class "testFactors", that contains the adjusted values and their standard errors for each term, and the output of the test, plus other variables used in the calculations. The summary method for this object will display those variables, unless they are omitted by setting the optional arguments predictors, matrices or covmat to FALSE.
The argument predictors refers to the coefficients of specified combinations of factor levels, the values of covariates, and the contrast matrices used for terms that include factors; matrices refers to the “linear hypothesis matrix” used by linearHypothesis, and in multivariate linear models, to the “response transformation matrix” as well, if it exists; covmat refers to the variance-covariance matrix of the adjusted values.
Moreover, summary groups the results of the tests for all terms in one table. By default this table shows the test statistics, their degrees of freedom, and the p-values. If the model is of class "lm", it also shows the sums of squares; and if it is of class "mlm", only the first type of test statistic returned by linearHypothesis (by default “Pillai”) is shown. This variable layout of the ANOVA table is controlled by additional classes assigned to the object (either "testFactors.lm" or "testFactors.mlm", as suitable).

Note
The tests of mixed models are done under the assumption that the estimation of the random part of the model is exact.

Author(s)
<NAME>, <<EMAIL>>

See Also
linearHypothesis in package car. interactionMeans and testInteractions as useful wrappers of testFactors.

Examples
# Example with factors and covariates
# Analyse prestige of Canadian occupations depending on
# education, income and type of occupation
# See ?Prestige for a description of the data set
prestige.mod <- lm(prestige ~ (education+log2(income))*type, data=Prestige)
# Pairwise comparisons for factor "type", to see how it influences
# the mean value of prestige and interacts with log.income
# 1: "white collar" vs "blue collar"
wc.vs.bc <- list(type=c("wc", "bc"))
testFactors(prestige.mod, wc.vs.bc, terms.formula=~log2(income))
# 2: "professional" vs. "blue collar"
prof.vs.bc <- list(type=c("prof", "bc"))
testFactors(prestige.mod, prof.vs.bc, terms.formula=~log2(income))
# 3: "professional" vs. "white collar"
prof.vs.wc <- list(type=c("prof", "wc"))
testFactors(prestige.mod, prof.vs.wc, terms.formula=~log2(income))
# Interaction contrasts in a repeated-measures experiment
# See ?OBrienKaiser for a description of the data set
mod.ok <- lm(cbind(pre.1, pre.2, pre.3, pre.4, pre.5, post.1, post.2, post.3, post.4, post.5, fup.1, fup.2, fup.3, fup.4, fup.5) ~ treatment*gender, data=OBrienKaiser)
# intra-subjects data:
phase <- factor(rep(c("pretest", "posttest", "followup"), each=5))
hour <- ordered(rep(1:5, 3))
idata <- data.frame(phase, hour)
Anova(mod.ok, idata=idata, idesign=~phase*hour)
# Contrasts across "phase", for different contrasts of "treatment"
# Using different definitions of the argument "levels"
# 1: Treatment "A" vs. treatment "B".
A.vs.B <- list(treatment=c("A", "B"))
# The following are equivalent:
# A.vs.B <- list(treatment=c(A=1, B=-1, control=0))
# A.vs.B <- list(treatment=c(A=1, B=-1))
# A.vs.B <- list(treatment ~ A - B)
# A.vs.B <- treatment ~ A - B
testFactors(mod.ok, A.vs.B, idata=idata, terms.formula=~0+phase)
# 2: Controls vs. treatments
control.vs.AB <- list(treatment=c(A=0.5, B=0.5, control=-1))
# The following is equivalent:
# control.vs.AB <- treatment ~ (A+B)/2 - control
testFactors(mod.ok, control.vs.AB, idata=idata, terms.formula=~0+phase)
# Shortcut to get only the adjusted values and simplified ANOVA tables
contr <- list(A.vs.B=A.vs.B, control.vs.AB=control.vs.AB)
anovaTables <- function(contrast) summary(testFactors(mod.ok, contrast, idata=idata, terms.formula=~0+phase), predictors=FALSE, matrices=FALSE)
lapply(contr, anovaTables)

testInteractions Test Contrasts of Factor Interactions

Description
Calculates and tests different types of contrasts for factor interactions, in linear, generalized and mixed linear models: simple main effects, interaction contrasts, residual effects, and others.

Usage
testInteractions(model, pairwise=NULL, fixed=NULL, residual=NULL, across=NULL, custom=NULL, slope=NULL, adjustment=NULL, label.factors=FALSE, abbrev.levels=FALSE, ...)

Arguments
model fitted model. Currently supported classes include "lm", "glm", "mlm", "lme", and "mer" or "merMod" (excluding models fitted by nlmer).
pairwise character vector with the names of factors represented by pairwise contrasts.
fixed character vector with the names of factors represented by fixed levels.
residual character vector with the names of factors represented by residual effects.
across character vector with the names of factors represented by a full set of independent contrasts.
custom list with custom contrasts for other factors. See the Details for more information.
slope character vector with the names of the covariates associated with the slope that will be tested; if it is NULL (the default value), the function will test the adjusted mean values.
adjustment adjustment method for p-values, as defined in p.adjust.
label.factors if TRUE, the rownames for each row in the resulting table include the name(s) of the factor(s) involved, followed by the level values. Otherwise, the rownames include only the levels of the factor(s), with multiple factors separated by ‘:’.
abbrev.levels either a logical or an integer, specifying whether the level values of the factors in the term are to be abbreviated in constructing the rownames. An integer specifies the minimum length of the abbreviation for each factor in the term.
... further arguments passed down to testFactors.

Details
Each factor of the model can at most be contained in one of the arguments pairwise, fixed, residual, across, or custom; redundant assignment of factors is not allowed. If none of these arguments is defined, the default behavior is as if pairwise contained all the factors of the model.
The result will show a set of tests on the model adjusted mean, at different combinations of factor levels. If there are covariates defined in slope, the test will apply to the slope for the interaction of such covariates. Each row will contain a different combination of factor levels or contrasts, depending on the argument wherein the factor has been defined:
• The factors contained in pairwise will appear as pairwise contrasts between levels.
• The factors contained in fixed will appear as one of their possible levels.
• The factors contained in residual will appear as residual effects of their levels, after removing effects of higher order.
• The factors contained in across will appear as a full set of contrasts. By default they will be orthogonal contrasts, unless overridden by the contrasts of the model data frame or by the arguments passed down to testFactors.
See the documentation of that function for further details.
Omitted factors will be averaged across all their levels. Thus, to test the overall adjusted means or slopes, use pairwise=NULL (or do the same with any of the arguments of the previous list).
Other combinations of factor levels can be defined by custom. This argument should be a list of numeric matrices or vectors, named as the model factors. Each matrix must have as many rows as the number of levels of the corresponding factor, so that each column represents a linear combination of such levels that will be tested, crossed with the combinations of the other factors. Vectors will be treated as column matrices.
In multivariate linear models it is possible to define an intra-subjects design, with the argument idata passed down to testFactors (see Anova or linearHypothesis in package car for further details). The factors defined by that argument can be included as any other factor of the model.

Value
An anova table with one row for each different combination of levels and contrasts defined in pairwise, fixed, across, and custom. The rownames represent the specific levels or contrasts used for the different factors, separated by ‘:’. These names can be tweaked by the arguments label.factors and abbrev.levels, as done by termMeans in package heplots.

Note
The tests of mixed models are done under the assumption that the estimation of the random part of the model is exact.

Author(s)
<NAME>, <<EMAIL>>

See Also
testFactors, interactionMeans. Use contrastCoefficients as a facility to create matrices of custom contrasts.

Examples
# Tests of the interactions described in Boik (1979)
# See ?Boik for a description of the data set
mod.boik <- lm(edr ~ therapy * medication, data=Boik)
Anova(mod.boik)
cntrl.vs.T1 <- list(therapy = c(1, -1, 0))
cntrl.vs.T2 <- list(therapy = c(1, 0, -1))
plcb.vs.doses <- list(medication = c(1, -1/3, -1/3, -1/3))
testInteractions(mod.boik, pairwise="therapy", adjustment="none")
testInteractions(mod.boik, custom=plcb.vs.doses, adjustment="none")
testInteractions(mod.boik, custom=cntrl.vs.T1, across="medication", adjustment="none")
testInteractions(mod.boik, custom=c(cntrl.vs.T1, plcb.vs.doses), adjustment="none")
testInteractions(mod.boik, custom=cntrl.vs.T2, across="medication", adjustment="none")
testInteractions(mod.boik, custom=plcb.vs.doses, across="therapy", adjustment="none")
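The fixed argument does not appear in the examples above; as a hedged sketch (not in the original manual, output untested), a simple-main-effects analysis of the same Boik model could be requested like this, where adjustment="holm" is an arbitrary choice among the p.adjust methods:
# Sketch: simple main effects in the Boik model above.
# Pairwise contrasts of "therapy", tested separately at each fixed
# level of "medication", with Holm-adjusted p-values:
testInteractions(mod.boik, pairwise="therapy", fixed="medication", adjustment="holm")
# Adjusted mean at each level of "therapy", averaging over "medication":
testInteractions(mod.boik, fixed="therapy")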
github.com/grandcat/zeroconf
go
Go
README [¶](#section-readme)
---
### ZeroConf: Service Discovery with mDNS
ZeroConf is a pure Golang library that employs Multicast DNS-SD for
* browsing and resolving services in your network
* registering own services in the local network.

It basically implements aspects of the standards [RFC 6762](https://tools.ietf.org/html/rfc6762) (mDNS) and [RFC 6763](https://tools.ietf.org/html/rfc6763) (DNS-SD). Though it does not support all requirements yet, the aim is to provide a compliant solution in the long term with the community. By now, it should be compatible with [Avahi](http://avahi.org/) (tested) and Apple's Bonjour (untested).
Target environments: private LAN/Wifi, small or isolated networks.
[![GoDoc](https://godoc.org/github.com/grandcat/zeroconf?status.svg)](https://godoc.org/github.com/grandcat/zeroconf) [![Go Report Card](https://goreportcard.com/badge/github.com/grandcat/zeroconf)](https://goreportcard.com/report/github.com/grandcat/zeroconf) [![Build Status](https://travis-ci.com/grandcat/zeroconf.svg?branch=master)](https://travis-ci.com/grandcat/zeroconf)
#### Install
Nothing is as easy as that:
```
$ go get -u github.com/grandcat/zeroconf
```
This package requires **Go 1.7** (context in std lib) or later.
#### Browse for services in your local network
```
// Discover all services on the network (e.g. _workstation._tcp)
resolver, err := zeroconf.NewResolver(nil)
if err != nil {
    log.Fatalln("Failed to initialize resolver:", err.Error())
}

entries := make(chan *zeroconf.ServiceEntry)
go func(results <-chan *zeroconf.ServiceEntry) {
    for entry := range results {
        log.Println(entry)
    }
    log.Println("No more entries.")
}(entries)

ctx, cancel := context.WithTimeout(context.Background(), time.Second*15)
defer cancel()
err = resolver.Browse(ctx, "_workstation._tcp", "local.", entries)
if err != nil {
    log.Fatalln("Failed to browse:", err.Error())
}

<-ctx.Done()
```
See <https://github.com/grandcat/zeroconf/blob/master/examples/resolv/client.go>.
#### Lookup a specific service instance
```
// Sketch filling the upstream placeholder ("Example filled soon."):
// Lookup mirrors Browse, but filters on a concrete instance name
// (see Resolver.Lookup in the API documentation below).
resolver, err := zeroconf.NewResolver(nil)
if err != nil {
    log.Fatalln("Failed to initialize resolver:", err.Error())
}

entries := make(chan *zeroconf.ServiceEntry)
go func(results <-chan *zeroconf.ServiceEntry) {
    for entry := range results {
        log.Println(entry)
    }
}(entries)

ctx, cancel := context.WithTimeout(context.Background(), time.Second*15)
defer cancel()
err = resolver.Lookup(ctx, "GoZeroconf", "_workstation._tcp", "local.", entries)
if err != nil {
    log.Fatalln("Failed to lookup:", err.Error())
}

<-ctx.Done()
```
#### Register a service
```
server, err := zeroconf.Register("GoZeroconf", "_workstation._tcp", "local.", 42424, []string{"txtv=0", "lo=1", "la=2"}, nil)
if err != nil {
    panic(err)
}
defer server.Shutdown() // Clean exit.

sig := make(chan os.Signal, 1)
signal.Notify(sig, os.Interrupt, syscall.SIGTERM)
select {
case <-sig:
    // Exit by user
case <-time.After(time.Second * 120):
    // Exit by timeout
}

log.Println("Shutting down.")
```
See <https://github.com/grandcat/zeroconf/blob/master/examples/register/server.go>.
#### Features and ToDo's
This list gives a quick impression of the state of this library. See what needs to be done and submit a pull request :)
* Browse / Lookup / Register services
* Multiple IPv6 / IPv4 addresses support
* Send multiple probes (exp. back-off) if no service answers (*)
* Timestamp entries for TTL checks
* Compare new multicasts with already received services

*Notes:*
(*) The denoted functionalities might not be 100% standards-conformant, but this should not be a deal breaker. Some test scenarios demonstrated that the overall robustness and performance increase when applying the suggested improvements.
#### Credits
Great thanks to [hashicorp](https://github.com/hashicorp/mdns) and to [oleksandr](https://github.com/oleksandr/bonjour) and all contributing authors for the code this project builds upon. Large parts of the code are still the same.
However, there are several reasons why I decided to create a fork of the original project: The previous project seems to be unmaintained. There are several useful pull requests waiting; I merged most of them into this project. Still, the implementation has some bugs and lacks some features, which makes it quite unreliable in real LAN environments when running continuously. Last but not least, the aim for this project is to build a solution that targets standards conformance in the long term, with the support of the community. Even so, resiliency remains a top goal.
Documentation [¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package zeroconf is a pure Golang library that employs Multicast DNS-SD for browsing and resolving services in your network and registering own services in the local network. It basically implements aspects of the standards [RFC 6762](https://rfc-editor.org/rfc/rfc6762.html) (mDNS) and [RFC 6763](https://rfc-editor.org/rfc/rfc6763.html) (DNS-SD). Though it does not support all requirements yet, the aim is to provide a compliant solution in the long term with the community. By now, it should be compatible with [Avahi](<http://avahi.org/>) (tested) and Apple's Bonjour (untested). It should work in most office, home and private environments.
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [type ClientOption](#ClientOption)
* + [func SelectIPTraffic(t IPType) ClientOption](#SelectIPTraffic) + [func SelectIfaces(ifaces []net.Interface) ClientOption](#SelectIfaces)
* [type IPType](#IPType)
* [type LookupParams](#LookupParams)
* + [func NewLookupParams(instance, service, domain string, entries chan<- *ServiceEntry) *LookupParams](#NewLookupParams)
* [type Resolver](#Resolver)
* + [func NewResolver(options ...ClientOption) (*Resolver, error)](#NewResolver)
* + [func (r *Resolver) Browse(ctx context.Context, service, domain string, entries chan<- *ServiceEntry) error](#Resolver.Browse) + [func (r *Resolver) Lookup(ctx context.Context, instance, service, domain string, ...) error](#Resolver.Lookup)
* [type Server](#Server)
* + [func Register(instance, service, domain string, port int, text []string, ...) (*Server, error)](#Register) + [func RegisterProxy(instance, service, domain string, port int, host string, ips []string, ...) (*Server, error)](#RegisterProxy)
* + [func (s *Server) SetText(text []string)](#Server.SetText) + [func (s *Server) Shutdown()](#Server.Shutdown) + [func (s *Server) TTL(ttl uint32)](#Server.TTL)
* [type ServiceEntry](#ServiceEntry)
* + [func NewServiceEntry(instance, service, domain string) *ServiceEntry](#NewServiceEntry)
* [type ServiceRecord](#ServiceRecord)
* + [func NewServiceRecord(instance, service, domain string) *ServiceRecord](#NewServiceRecord)
* + [func (s *ServiceRecord) ServiceInstanceName() string](#ServiceRecord.ServiceInstanceName) + [func (s *ServiceRecord) ServiceName() string](#ServiceRecord.ServiceName) + [func (s *ServiceRecord) ServiceTypeName() string](#ServiceRecord.ServiceTypeName)
### Constants [¶](#pkg-constants)
```
const (
    IPv4        = 0x01
    IPv6        = 0x02
    IPv4AndIPv6 = ([IPv4](#IPv4) | [IPv6](#IPv6)) //< Default option.
)
```
Options for IPType.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
#### type [ClientOption](https://github.com/grandcat/zeroconf/blob/v1.0.0/client.go#L37) [¶](#ClientOption)
```
type ClientOption func(*clientOpts)
```
ClientOption fills the option struct to configure interfaces, etc.
#### func [SelectIPTraffic](https://github.com/grandcat/zeroconf/blob/v1.0.0/client.go#L44) [¶](#SelectIPTraffic)
```
func SelectIPTraffic(t [IPType](#IPType)) [ClientOption](#ClientOption)
```
SelectIPTraffic selects the type of IP packets (IPv4, IPv6, or both) this instance listens for. This does not guarantee that only mDNS entries of this specific type pass. E.g. typical mDNS packets distributed via IPv4 may contain both DNS A and AAAA entries.
#### func [SelectIfaces](https://github.com/grandcat/zeroconf/blob/v1.0.0/client.go#L51) [¶](#SelectIfaces)
```
func SelectIfaces(ifaces [][net](/net).[Interface](/net#Interface)) [ClientOption](#ClientOption)
```
SelectIfaces selects the interfaces to query for mDNS records.
#### type [IPType](https://github.com/grandcat/zeroconf/blob/v1.0.0/client.go#L22) [¶](#IPType)
```
type IPType [uint8](/builtin#uint8)
```
IPType specifies the IP traffic the client listens for. This does not guarantee that only mDNS entries of this specific type pass. E.g. typical mDNS packets distributed via IPv4 often contain both DNS A and AAAA entries.
#### type [LookupParams](https://github.com/grandcat/zeroconf/blob/v1.0.0/service.go#L63) [¶](#LookupParams)
```
type LookupParams struct {
    [ServiceRecord](#ServiceRecord)
    Entries chan<- *[ServiceEntry](#ServiceEntry) // Entries Channel
    // contains filtered or unexported fields
}
```
LookupParams contains configurable properties to create a service discovery request.
#### func [NewLookupParams](https://github.com/grandcat/zeroconf/blob/v1.0.0/service.go#L72) [¶](#NewLookupParams)
```
func NewLookupParams(instance, service, domain [string](/builtin#string), entries chan<- *[ServiceEntry](#ServiceEntry)) *[LookupParams](#LookupParams)
```
NewLookupParams constructs a LookupParams.
#### type [Resolver](https://github.com/grandcat/zeroconf/blob/v1.0.0/client.go#L58) [¶](#Resolver)
```
type Resolver struct {
    // contains filtered or unexported fields
}
```
Resolver acts as an entry point for service lookups and for browsing the DNS-SD.
#### func [NewResolver](https://github.com/grandcat/zeroconf/blob/v1.0.0/client.go#L64) [¶](#NewResolver)
```
func NewResolver(options ...[ClientOption](#ClientOption)) (*[Resolver](#Resolver), [error](/builtin#error))
```
NewResolver creates a new resolver and joins the UDP multicast groups to listen for mDNS messages.
#### func (*Resolver) [Browse](https://github.com/grandcat/zeroconf/blob/v1.0.0/client.go#L85) [¶](#Resolver.Browse)
```
func (r *[Resolver](#Resolver)) Browse(ctx [context](/context).[Context](/context#Context), service, domain [string](/builtin#string), entries chan<- *[ServiceEntry](#ServiceEntry)) [error](/builtin#error)
```
Browse for all services of a given type in a given domain.
#### func (*Resolver) [Lookup](https://github.com/grandcat/zeroconf/blob/v1.0.0/client.go#L111) [¶](#Resolver.Lookup)
```
func (r *[Resolver](#Resolver)) Lookup(ctx [context](/context).[Context](/context#Context), instance, service, domain [string](/builtin#string), entries chan<- *[ServiceEntry](#ServiceEntry)) [error](/builtin#error)
```
Lookup a specific service by its name and type in a given domain.
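As an illustrative aside (not part of the generated API listing), the client options documented above compose when constructing a resolver; a minimal sketch:
```
// Sketch: restrict a resolver to IPv4 traffic on an explicit interface set.
ifaces, err := net.Interfaces()
if err != nil {
    log.Fatalln(err)
}
resolver, err := zeroconf.NewResolver(
    zeroconf.SelectIPTraffic(zeroconf.IPv4),
    zeroconf.SelectIfaces(ifaces),
)
if err != nil {
    log.Fatalln("Failed to initialize resolver:", err.Error())
}
_ = resolver // use as in the Browse/Lookup examples above
```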
#### type [Server](https://github.com/grandcat/zeroconf/blob/v1.0.0/server.go#L146) [¶](#Server)
```
type Server struct {
    // contains filtered or unexported fields
}
```
Server structure encapsulates both IPv4/IPv6 UDP connections.
#### func [Register](https://github.com/grandcat/zeroconf/blob/v1.0.0/server.go#L28) [¶](#Register)
```
func Register(instance, service, domain [string](/builtin#string), port [int](/builtin#int), text [][string](/builtin#string), ifaces [][net](/net).[Interface](/net#Interface)) (*[Server](#Server), [error](/builtin#error))
```
Register a service by given arguments. This call will take the system's hostname and look up the IP by that hostname.
#### func [RegisterProxy](https://github.com/grandcat/zeroconf/blob/v1.0.0/server.go#L86) [¶](#RegisterProxy)
```
func RegisterProxy(instance, service, domain [string](/builtin#string), port [int](/builtin#int), host [string](/builtin#string), ips [][string](/builtin#string), text [][string](/builtin#string), ifaces [][net](/net).[Interface](/net#Interface)) (*[Server](#Server), [error](/builtin#error))
```
RegisterProxy registers a service proxy. This call will skip the hostname/IP lookup and will use the provided values.
#### func (*Server) [SetText](https://github.com/grandcat/zeroconf/blob/v1.0.0/server.go#L201) [¶](#Server.SetText)
```
func (s *[Server](#Server)) SetText(text [][string](/builtin#string))
```
SetText updates and announces the TXT records.
#### func (*Server) [Shutdown](https://github.com/grandcat/zeroconf/blob/v1.0.0/server.go#L196) [¶](#Server.Shutdown)
```
func (s *[Server](#Server)) Shutdown()
```
Shutdown closes all UDP connections and unregisters the service.
#### func (*Server) [TTL](https://github.com/grandcat/zeroconf/blob/v1.0.0/server.go#L207) [¶](#Server.TTL)
```
func (s *[Server](#Server)) TTL(ttl [uint32](/builtin#uint32))
```
TTL sets the TTL for DNS replies.
#### type [ServiceEntry](https://github.com/grandcat/zeroconf/blob/v1.0.0/service.go#L94) [¶](#ServiceEntry)
```
type ServiceEntry struct {
    [ServiceRecord](#ServiceRecord)
    HostName [string](/builtin#string) `json:"hostname"` // Host machine DNS name
    Port [int](/builtin#int) `json:"port"` // Service Port
    Text [][string](/builtin#string) `json:"text"` // Service info served as a TXT record
    TTL [uint32](/builtin#uint32) `json:"ttl"` // TTL of the service record
    AddrIPv4 [][net](/net).[IP](/net#IP) `json:"-"` // Host machine IPv4 address
    AddrIPv6 [][net](/net).[IP](/net#IP) `json:"-"` // Host machine IPv6 address
}
```
ServiceEntry represents a browse/lookup result for client API. It is also used to configure service registration (server API), which is used to answer multicast queries.
#### func [NewServiceEntry](https://github.com/grandcat/zeroconf/blob/v1.0.0/service.go#L105) [¶](#NewServiceEntry)
```
func NewServiceEntry(instance, service, domain [string](/builtin#string)) *[ServiceEntry](#ServiceEntry)
```
NewServiceEntry constructs a ServiceEntry.
#### type [ServiceRecord](https://github.com/grandcat/zeroconf/blob/v1.0.0/service.go#L10) [¶](#ServiceRecord)
```
type ServiceRecord struct {
    Instance [string](/builtin#string) `json:"name"` // Instance name (e.g. "My web page")
    Service [string](/builtin#string) `json:"type"` // Service name (e.g. _http._tcp.)
    Domain [string](/builtin#string) `json:"domain"` // If blank, assumes "local"
    // contains filtered or unexported fields
}
```
ServiceRecord contains the basic description of a service, comprising the instance name, service type and domain.
#### func [NewServiceRecord](https://github.com/grandcat/zeroconf/blob/v1.0.0/service.go#L39) [¶](#NewServiceRecord)
```
func NewServiceRecord(instance, service, domain [string](/builtin#string)) *[ServiceRecord](#ServiceRecord)
```
NewServiceRecord constructs a ServiceRecord.
#### func (*ServiceRecord) [ServiceInstanceName](https://github.com/grandcat/zeroconf/blob/v1.0.0/service.go#L29) [¶](#ServiceRecord.ServiceInstanceName)
```
func (s *[ServiceRecord](#ServiceRecord)) ServiceInstanceName() [string](/builtin#string)
```
ServiceInstanceName returns a complete service instance name (e.g. MyDemo\ Service._foobar._tcp.local.), which is composed of the service instance name, the service name and a domain.
#### func (*ServiceRecord) [ServiceName](https://github.com/grandcat/zeroconf/blob/v1.0.0/service.go#L23) [¶](#ServiceRecord.ServiceName)
```
func (s *[ServiceRecord](#ServiceRecord)) ServiceName() [string](/builtin#string)
```
ServiceName returns a complete service name (e.g. _foobar._tcp.local.), which is composed of a service name (also referred to as the service type) and a domain.
#### func (*ServiceRecord) [ServiceTypeName](https://github.com/grandcat/zeroconf/blob/v1.0.0/service.go#L34) [¶](#ServiceRecord.ServiceTypeName)
```
func (s *[ServiceRecord](#ServiceRecord)) ServiceTypeName() [string](/builtin#string)
```
ServiceTypeName returns the complete identifier for a DNS-SD query.
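The listing above documents RegisterProxy without a usage example; here is a minimal hedged sketch based only on the documented signature (host name, IP, and TXT values are placeholders):
```
// Sketch: announce a service on behalf of another host, skipping the
// local hostname/IP lookup. All concrete values below are placeholders.
server, err := zeroconf.RegisterProxy(
    "GoZeroconf",             // service instance name
    "_workstation._tcp",      // service type
    "local.",                 // domain
    42424,                    // port
    "proxyhost",              // host name to announce
    []string{"192.168.1.42"}, // IPs to announce for that host
    []string{"txtv=0"},       // TXT records
    nil,                      // nil: use all suitable network interfaces
)
if err != nil {
    panic(err)
}
defer server.Shutdown()
```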
django-tagging
readthedoc
Python
Django Tagging 0.4.4 documentation [Django Tagging](index.html#document-index)
---
Django Tagging[¶](#django-tagging)
===
A generic tagging application for [Django](http://www.djangoproject.com) projects, which allows association of a number of tags with any Django model instance and makes retrieval of tags simple.
* [Installation](#installation)
  + [Installing an official release](#installing-an-official-release)
    - [Source distribution](#source-distribution)
  + [Installing the development version](#installing-the-development-version)
  + [Using Django Tagging in your applications](#using-django-tagging-in-your-applications)
* [Settings](#settings)
  + [FORCE_LOWERCASE_TAGS](#force-lowercase-tags)
  + [MAX_TAG_LENGTH](#max-tag-length)
* [Registering your models](#registering-your-models)
  + [The `register` function](#the-register-function)
  + [`TagDescriptor`](#tagdescriptor)
  + [`ModelTagManager`](#modeltagmanager)
  + [`ModelTaggedItemManager`](#modeltaggeditemmanager)
* [Tags](#tags)
  + [API reference](#api-reference)
    - [Fields](#fields)
    - [Manager functions](#manager-functions)
  + [Basic usage](#basic-usage)
    - [Tagging objects and retrieving an object’s tags](#tagging-objects-and-retrieving-an-object-s-tags)
    - [Retrieving tags used by a particular model](#retrieving-tags-used-by-a-particular-model)
  + [Tag input](#tag-input)
* [Tagged items](#tagged-items)
  + [API reference](#id1)
    - [Fields](#id2)
    - [Manager functions](#id3)
  + [Basic usage](#id4)
    - [Retrieving tagged objects](#retrieving-tagged-objects)
    - [Restricting objects returned](#restricting-objects-returned)
* [Utilities](#utilities)
  + [`parse_tag_input(input)`](#parse-tag-input-input)
  + [`edit_string_for_tags(tags)`](#edit-string-for-tags-tags)
  + [`get_tag_list(tags)`](#get-tag-list-tags)
  + [`calculate_cloud(tags, steps=4, distribution=tagging.utils.LOGARITHMIC)`](#calculate-cloud-tags-steps-4-distribution-tagging-utils-logarithmic)
* [Model Fields](#model-fields)
  + [Field types](#field-types)
    - [`TagField`](#tagfield)
* [Form fields](#form-fields)
  + [Field types](#id5)
    - [`TagField`](#id6)
* [Generic views](#generic-views)
  + [`tagging.views.TaggedObjectList`](#tagging-views-taggedobjectlist)
    - [Example usage](#example-usage)
* [Template tags](#template-tags)
  + [Tag reference](#tag-reference)
    - [tags_for_model](#tags-for-model)
    - [tag_cloud_for_model](#tag-cloud-for-model)
    - [tags_for_object](#tags-for-object)
    - [tagged_objects](#tagged-objects)

[Installation](#id7)[¶](#installation)
---
### [Installing an official release](#id8)[¶](#installing-an-official-release)
Official releases are made available from <https://pypi.python.org/pypi/django-tagging/>
#### [Source distribution](#id9)[¶](#source-distribution)
Download a distribution file and unpack it. Inside is a script named `setup.py`. Enter this command:
```
$ python setup.py install
```
...and the package will install automatically. Or, more easily, with **pip**:
```
$ pip install django-tagging
```
### [Installing the development version](#id10)[¶](#installing-the-development-version)
Alternatively, if you'd like to update Django Tagging occasionally to pick up the latest bug fixes and enhancements before they make it into an official release, clone the git repository instead.
The following command will clone the development branch to the `django-tagging` directory:
```
git clone <EMAIL>:Fantomas42/django-tagging.git
```
Add the resulting folder to your [PYTHONPATH](http://www.python.org/doc/2.5.2/tut/node8.html#SECTION008120000000000000000) or symlink ([junction](http://www.microsoft.com/technet/sysinternals/FileAndDisk/Junction.mspx), if you're on Windows) the `tagging` directory inside it into a directory which is on your PYTHONPATH, such as your Python installation's `site-packages` directory.
You can verify that the application is available on your PYTHONPATH by opening a Python interpreter and entering the following commands:
```
>>> import tagging
>>> tagging.__version__
0.4.dev0
```
When you want to update your copy of the Django Tagging source code, run the command `git pull` from within the `django-tagging` directory.
Caution
The development version may contain bugs which are not present in the release version and introduce backwards-incompatible changes. If you're tracking git, keep an eye on the [CHANGELOG](https://github.com/Fantomas42/django-tagging/blob/develop/CHANGELOG.txt) before you update your copy of the source code.
### [Using Django Tagging in your applications](#id11)[¶](#using-django-tagging-in-your-applications)
Once you've installed Django Tagging and want to use it in your Django applications, do the following:
> 1. Put `'tagging'` in your `INSTALLED_APPS` setting.
> 2. Run the command `manage.py migrate`.
The `migrate` command creates the necessary database tables and creates permission objects for all installed apps that need them.
That's it!

[Settings](#id12)[¶](#settings)
---
Some of the application's behaviour can be configured by adding the appropriate settings to your project's settings file. The following settings are available:
### [FORCE_LOWERCASE_TAGS](#id13)[¶](#force-lowercase-tags)
Default: `False`
A boolean that turns on/off forcing of all tag names to lowercase before they are saved to the database.
### [MAX_TAG_LENGTH](#id14)[¶](#max-tag-length)
Default: `50`
An integer which specifies the maximum length which any tag is allowed to have. This is used for validation in the `django.contrib.admin` application and in any forms automatically generated using `ModelForm`.

[Registering your models](#id15)[¶](#registering-your-models)
---
Your Django models can be registered with the tagging application to access some additional tagging-related features.
Note
You don't *have* to register your models in order to use them with the tagging application - many of the features added by registration are just convenience wrappers around the tagging API provided by the `Tag` and `TaggedItem` models and their managers, as documented further below.
### [The `register` function](#id16)[¶](#the-register-function)
To register a model, import the `tagging.registry` module and call its `register` function, like so:
```
from django.db import models
from tagging.registry import register

class Widget(models.Model):
    name = models.CharField(max_length=50)

register(Widget)
```
The following argument is required:
`model`
The model class to be registered. An exception will be raised if you attempt to register the same class more than once.
The following arguments are optional, with some recommended defaults - take care to specify different attribute names if the defaults clash with your model class' definition:
`tag_descriptor_attr`
The name of an attribute in the model class which will hold a tag descriptor for the model.
Default: `'tags'`. See [TagDescriptor](#tagdescriptor) below for details about the use of this descriptor.
`tagged_item_manager_attr`
The name of an attribute in the model class which will hold a custom manager for accessing tagged items for the model. Default: `'tagged'`. See [ModelTaggedItemManager](#modeltaggeditemmanager) below for details about the use of this manager.
### [`TagDescriptor`](#id17)[¶](#tagdescriptor)
When accessed through the model class itself, this descriptor will return a `ModelTagManager` for the model. See [ModelTagManager](#modeltagmanager) below for more details about its use.
When accessed through a model instance, this descriptor provides a handy means of retrieving, updating and deleting the instance's tags. For example:
```
>>> widget = Widget.objects.create(name='Testing descriptor')
>>> widget.tags
[]
>>> widget.tags = 'toast, melted cheese, butter'
>>> widget.tags
[<Tag: butter>, <Tag: melted cheese>, <Tag: toast>]
>>> del widget.tags
>>> widget.tags
[]
```
### [`ModelTagManager`](#id18)[¶](#modeltagmanager)
A manager for retrieving tags used by a particular model. Defines the following methods:
* `get_queryset()` – as this method is redefined, any `QuerySets` created by this manager will be initially restricted to contain the distinct tags used by all the model's instances.
* `cloud(*args, **kwargs)` – creates a list of tags used by the model's instances, with `count` and `font_size` attributes set for use in displaying a tag cloud. See the documentation on `Tag`'s manager's [cloud_for_model method](#cloud-for-model-method) for information on additional arguments which can be given.
* `related(self, tags, *args, **kwargs)` – creates a list of tags used by the model's instances, which are also used by all instances which have the given `tags`. See the documentation on `Tag`'s manager's [related_for_model method](#related-for-model-method) for information on additional arguments which can be given.
* `usage(self, *args, **kwargs)` – creates a list of tags used by the model's instances, with optional usage counts, restriction based on usage counts and restriction of the model instances from which usage and counts are determined. See the documentation on `Tag`'s manager's [usage_for_model method](#usage-for-model-method) for information on additional arguments which can be given.
Example usage:
```
# Create a ``QuerySet`` of tags used by Widget instances
Widget.tags.all()

# Retrieve a list of tags used by Widget instances with usage counts
Widget.tags.usage(counts=True)

# Retrieve tags used by instances of Widget which are also tagged with
# 'cheese' and 'toast'
Widget.tags.related(['cheese', 'toast'], counts=True, min_count=3)
```
### [`ModelTaggedItemManager`](#id19)[¶](#modeltaggeditemmanager)
A manager for retrieving model instances for a particular model, based on their tags.
* `related_to(obj, queryset=None, num=None)` – creates a list of model instances which are related to `obj`, based on its tags. If a `queryset` argument is provided, it will be used to restrict the resulting list of model instances. If `num` is given, a maximum of `num` instances will be returned.
* `with_all(tags, queryset=None)` – creates a `QuerySet` containing model instances which are tagged with *all* the given tags. If a `queryset` argument is provided, it will be used as the basis for the resulting `QuerySet`.
* `with_any(tags, queryset=None)` – creates a `QuerySet` containing model instances which are tagged with *any* of the given tags.
If a `queryset` argument is provided, it will be used as the basis for the resulting `QuerySet`.

[Tags](#id20)[¶](#tags)
---
Tags are represented by the `Tag` model, which lives in the `tagging.models` module.
### [API reference](#id21)[¶](#api-reference)
#### [Fields](#id22)[¶](#fields)
`Tag` objects have the following fields:
* `name` – The name of the tag. This is a unique value.
#### [Manager functions](#id23)[¶](#manager-functions)
The `Tag` model has a custom manager which has the following helper methods:
* `update_tags(obj, tag_names)` – updates tags associated with an object. `tag_names` is a string containing tag names with which `obj` should be tagged. If `tag_names` is `None` or `''`, the object's tags will be cleared.
* `add_tag(obj, tag_name)` – associates a tag with an object. `tag_name` is a string containing a tag name with which `obj` should be tagged.
* `get_for_object(obj)` – returns a `QuerySet` containing all `Tag` objects associated with `obj`.
* `usage_for_model(model, counts=False, min_count=None, filters=None)` – returns a list of `Tag` objects associated with instances of `model`. If `counts` is `True`, a `count` attribute will be added to each tag, indicating how many times it has been associated with instances of `model`. If `min_count` is given, only tags which have a `count` greater than or equal to `min_count` will be returned. Passing a value for `min_count` implies `counts=True`. To limit the tags (and counts, if specified) returned to those used by a subset of the model's instances, pass a dictionary of field lookups to be applied to `model` as the `filters` argument.
* `related_for_model(tags, Model, counts=False, min_count=None)` – returns a list of tags related to a given list of tags - that is, other tags used by items which have all the given tags. If `counts` is `True`, a `count` attribute will be added to each tag, indicating the number of items which have it in addition to the given list of tags. If `min_count` is given, only tags which have a `count` greater than or equal to `min_count` will be returned. Passing a value for `min_count` implies `counts=True`.
* `cloud_for_model(Model, steps=4, distribution=LOGARITHMIC, filters=None, min_count=None)` – returns a list of the distinct `Tag` objects associated with instances of `Model`, each having a `count` attribute as above and an additional `font_size` attribute, for use in creation of a tag cloud (a type of weighted list). `steps` defines the number of font sizes available - `font_size` may be an integer between `1` and `steps`, inclusive. `distribution` defines the type of font size distribution algorithm which will be used - logarithmic or linear. It must be either `tagging.utils.LOGARITHMIC` or `tagging.utils.LINEAR`. To limit the tags displayed in the cloud to those associated with a subset of the Model's instances, pass a dictionary of field lookups to be applied to the given Model as the `filters` argument. To limit the tags displayed in the cloud to those with a `count` greater than or equal to `min_count`, pass a value for the `min_count` argument.
* `usage_for_queryset(queryset, counts=False, min_count=None)` – obtains a list of tags associated with instances of a model contained in the given queryset. If `counts` is True, a `count` attribute will be added to each tag, indicating how many times it has been used against the Model class in question. If `min_count` is given, only tags which have a `count` greater than or equal to `min_count` will be returned.
Passing a value for `min_count` implies `counts=True`.
### [Basic usage](#id24)[¶](#basic-usage)
#### [Tagging objects and retrieving an object's tags](#id25)[¶](#tagging-objects-and-retrieving-an-object-s-tags)
Objects may be tagged using the `update_tags` helper function:
```
>>> from shop.apps.products.models import Widget
>>> from tagging.models import Tag
>>> widget = Widget.objects.get(pk=1)
>>> Tag.objects.update_tags(widget, 'house thing')
```
Retrieve tags for an object using the `get_for_object` helper function:
```
>>> Tag.objects.get_for_object(widget)
[<Tag: house>, <Tag: thing>]
```
Tags are created, associated and unassociated accordingly when you use `update_tags` and `add_tag`:
```
>>> Tag.objects.update_tags(widget, 'house monkey')
>>> Tag.objects.get_for_object(widget)
[<Tag: house>, <Tag: monkey>]
>>> Tag.objects.add_tag(widget, 'tiles')
>>> Tag.objects.get_for_object(widget)
[<Tag: house>, <Tag: monkey>, <Tag: tiles>]
```
Clear an object's tags by passing `None` or `''` to `update_tags`:
```
>>> Tag.objects.update_tags(widget, None)
>>> Tag.objects.get_for_object(widget)
[]
```
#### [Retrieving tags used by a particular model](#id26)[¶](#retrieving-tags-used-by-a-particular-model)
To retrieve all tags used for a particular model, use the `usage_for_model` helper function:
```
>>> widget1 = Widget.objects.get(pk=1)
>>> Tag.objects.update_tags(widget1, 'house thing')
>>> widget2 = Widget.objects.get(pk=2)
>>> Tag.objects.update_tags(widget2, 'cheese toast house')
>>> Tag.objects.usage_for_model(Widget)
[<Tag: cheese>, <Tag: house>, <Tag: thing>, <Tag: toast>]
```
To get a count of how many times each tag was used for a particular model, pass in `True` for the `counts` argument:
```
>>> tags = Tag.objects.usage_for_model(Widget, counts=True)
>>> [(tag.name, tag.count) for tag in tags]
[('cheese', 1), ('house', 2), ('thing', 1), ('toast', 1)]
```
To get counts and limit the tags returned to those with counts above a certain size, pass in a `min_count` argument:
```
>>> tags = Tag.objects.usage_for_model(Widget, min_count=2)
>>> [(tag.name, tag.count) for tag in tags]
[('house', 2)]
```
You can also specify a dictionary of [field lookups](http://docs.djangoproject.com/en/dev/topics/db/queries/#field-lookups) to be used to restrict the tags and counts returned based on a subset of the model's instances. For example, the following would retrieve all tags used on Widgets created by a user named Alan which have a size greater than 99:
```
>>> Tag.objects.usage_for_model(Widget, filters=dict(size__gt=99, user__username='Alan'))
```
The `usage_for_queryset` method allows you to pass a pre-filtered queryset to be used when determining tag usage:
```
>>> Tag.objects.usage_for_queryset(Widget.objects.filter(size__gt=99, user__username='Alan'))
```
### [Tag input](#id27)[¶](#tag-input)
Tag input from users is treated as follows:
* If the input doesn't contain any commas or double quotes, it is simply treated as a space-delimited list of tag names.
* If the input does contain either of these characters, we parse the input like so:
  + Groups of characters which appear between double quotes take precedence as multi-word tags (so double quoted tag names may contain commas). An unclosed double quote will be ignored.
  + For the remaining input, if there are any unquoted commas in the input, the remainder will be treated as comma-delimited. Otherwise, it will be treated as space-delimited.
Examples:

| Tag input string | Resulting tags | Notes |
| --- | --- | --- |
| apple ball cat | [`apple`], [`ball`], [`cat`] | No commas, so space delimited |
| apple, ball cat | [`apple`], [`ball cat`] | Comma present, so comma delimited |
| “apple, ball” cat dog | [`apple, ball`], [`cat`], [`dog`] | All commas are quoted, so space delimited |
| “apple, ball”, cat dog | [`apple, ball`], [`cat dog`] | Contains an unquoted comma, so comma delimited |
| apple “ball cat” dog | [`apple`], [`ball cat`], [`dog`] | No commas, so space delimited |
| “apple” “ball dog | [`apple`], [`ball`], [`dog`] | Unclosed double quote is ignored |

[Tagged items](#id28)[¶](#tagged-items)
---
The relationship between a `Tag` and an object is represented by the `TaggedItem` model, which lives in the `tagging.models` module.
### [API reference](#id29)[¶](#id1)
#### [Fields](#id30)[¶](#id2)
`TaggedItem` objects have the following fields:
* `tag` – The `Tag` an object is associated with.
* `content_type` – The `ContentType` of the associated model instance.
* `object_id` – The id of the associated object.
* `object` – The associated object itself, accessible via the Generic Relations API.
#### [Manager functions](#id31)[¶](#id3)
The `TaggedItem` model has a custom manager which has the following helper methods, which accept either a `QuerySet` or a `Model` class as one of their arguments. To restrict the objects which are returned, pass in a filtered `QuerySet` for this argument:
* `get_by_model(queryset_or_model, tag)` – creates a `QuerySet` containing instances of the specified model which are tagged with the given tag or tags.
* `get_intersection_by_model(queryset_or_model, tags)` – creates a `QuerySet` containing instances of the specified model which are tagged with every tag in a list of tags. `get_by_model` will call this function behind the scenes when you pass it a list, so you can use `get_by_model` instead of calling this method directly.
* `get_union_by_model(queryset_or_model, tags)` – creates a `QuerySet` containing instances of the specified model which are tagged with any tag in a list of tags.
* `get_related(obj, queryset_or_model, num=None)` – returns a list of instances of the specified model which share tags with the model instance `obj`, ordered by the number of shared tags in descending order. If `num` is given, a maximum of `num` instances will be returned.
### [Basic usage](#id32)[¶](#id4)
#### [Retrieving tagged objects](#id33)[¶](#retrieving-tagged-objects)
Objects may be retrieved based on their tags using the `get_by_model` manager method:
```
>>> from shop.apps.products.models import Widget
>>> from tagging.models import Tag
>>> house_tag = Tag.objects.get(name='house')
>>> TaggedItem.objects.get_by_model(Widget, house_tag)
[<Widget: pk=1>, <Widget: pk=2>]
```
Passing a list of tags to `get_by_model` returns an intersection of objects which have those tags, i.e. tag1 AND tag2 ...
AND tagN:
```
>>> thing_tag = Tag.objects.get(name='thing')
>>> TaggedItem.objects.get_by_model(Widget, [house_tag, thing_tag])
[<Widget: pk=1>]
```
Functions which take tags are flexible when it comes to tag input:
```
>>> TaggedItem.objects.get_by_model(Widget, Tag.objects.filter(name__in=['house', 'thing']))
[<Widget: pk=1>]
>>> TaggedItem.objects.get_by_model(Widget, 'house thing')
[<Widget: pk=1>]
>>> TaggedItem.objects.get_by_model(Widget, ['house', 'thing'])
[<Widget: pk=1>]
```
#### [Restricting objects returned](#id34)[¶](#restricting-objects-returned)
Pass in a `QuerySet` to restrict the objects returned:
```
# Retrieve all Widgets which have a price less than 50, tagged with 'house'
TaggedItem.objects.get_by_model(Widget.objects.filter(price__lt=50), 'house')

# Retrieve all Widgets which have a name starting with 'a', tagged with any
# of 'house', 'garden' or 'water'.
TaggedItem.objects.get_union_by_model(Widget.objects.filter(name__startswith='a'), ['house', 'garden', 'water'])
```
[Utilities](#id35)[¶](#utilities)
---
Tag-related utility functions are defined in the `tagging.utils` module:
### [`parse_tag_input(input)`](#id36)[¶](#parse-tag-input-input)
Parses tag input, with multiple word input being activated and delineated by commas and double quotes. Quotes take precedence, so they may contain commas. Returns a sorted list of unique tag names. See [tag input](#tag-input) for more details.
### [`edit_string_for_tags(tags)`](#id37)[¶](#edit-string-for-tags-tags)
Given a list of `Tag` instances, creates a string representation of the list suitable for editing by the user, such that submitting the given string representation back without changing it will give the same list of tags. Tag names which contain commas will be double quoted. If any tag name which isn't being quoted contains whitespace, the resulting string of tag names will be comma-delimited, otherwise it will be space-delimited.
### [`get_tag_list(tags)`](#id38)[¶](#get-tag-list-tags)
Utility function for accepting tag input in a flexible manner. If a `Tag` object is given, it will be returned in a list as its single occupant. If given, the tag names in the following will be used to create a `Tag` `QuerySet`:
> * A string, which may contain multiple tag names.
> * A list or tuple of strings corresponding to tag names.
> * A list or tuple of integers corresponding to tag ids.
If given, the following will be returned as-is:
> * A list or tuple of `Tag` objects.
> * A `Tag` `QuerySet`.
### [`calculate_cloud(tags, steps=4, distribution=tagging.utils.LOGARITHMIC)`](#id39)[¶](#calculate-cloud-tags-steps-4-distribution-tagging-utils-logarithmic)
Adds a `font_size` attribute to each tag according to the frequency of its use, as indicated by its `count` attribute. `steps` defines the range of font sizes - `font_size` will be an integer between 1 and `steps` (inclusive). `distribution` defines the type of font size distribution algorithm which will be used - logarithmic or linear. It must be one of `tagging.utils.LOGARITHMIC` or `tagging.utils.LINEAR`.

[Model Fields](#id40)[¶](#model-fields)
---
The `tagging.fields` module contains fields which make it easy to integrate tagging into your models and into the `django.contrib.admin` application.
### [Field types](#id41)[¶](#field-types)
#### [`TagField`](#id42)[¶](#tagfield)
A `CharField` that actually works as a relationship to tags "under the hood". Using this example model:
```
class Link(models.Model):
    ...
    tags = TagField()
```
Setting tags:
```
>>> l = Link.objects.get(...)
>>> l.tags = 'tag1 tag2 tag3'
```
Getting tags for an instance:
```
>>> l.tags
'tag1 tag2 tag3'
```
Getting tags for a model - i.e. all tags used by all instances of the model:
```
>>> Link.tags
'tag1 tag2 tag3 tag4 tag5'
```
This field will also validate that it has been given a valid list of tag names, separated by a single comma, a single space or a comma followed by a space.

[Form fields](#id43)[¶](#form-fields)
---
The `tagging.forms` module contains a `Field` for use with Django's [forms library](http://docs.djangoproject.com/en/dev/topics/forms/) which takes care of validating tag name input when used in your forms.
### [Field types](#id44)[¶](#id5)
#### [`TagField`](#id45)[¶](#id6)
A form `Field` which is displayed as a single-line text input, which validates that the input it receives is a valid list of tag names.
When you generate a form for one of your models automatically, using the `ModelForm` class, any `tagging.fields.TagField` fields in your model will automatically be represented by a `tagging.forms.TagField` in the generated form.

[Generic views](#id46)[¶](#generic-views)
---
The `tagging.views` module contains views to handle simple cases of common display logic related to tagging.
### [`tagging.views.TaggedObjectList`](#id47)[¶](#tagging-views-taggedobjectlist)
**Description:**
A view that displays a list of objects for a given model which have a given tag. This is a thin wrapper around the `django.views.generic.list.ListView` view, which takes a model and a tag as its arguments (in addition to the other optional arguments supported by `ListView`), building the appropriate `QuerySet` for you instead of expecting one to be passed in.
**Required arguments:**
> * `tag`: The tag which objects of the given model must have in order to be listed.
**Optional arguments:**
Please refer to the [ListView documentation](https://docs.djangoproject.com/en/1.8/ref/class-based-views/generic-display/#listview) for additional optional arguments which may be given.
> * `related_tags`: If `True`, a `related_tags` context variable will also contain tags related to the given tag for the given model.
> * `related_tag_counts`: If `True` and `related_tags` is `True`, each related tag will have a `count` attribute indicating the number of items which have it in addition to the given tag.
**Template context:**
Please refer to the [ListView documentation](https://docs.djangoproject.com/en/1.8/ref/class-based-views/generic-display/#listview) for additional template context variables which may be provided.
> * `tag`: The `Tag` instance for the given tag.
#### [Example usage](#id48)[¶](#example-usage)

The following sample URLconf demonstrates using this generic view to list items of a particular model class which have a given tag:

```
from django.conf.urls.defaults import *

from tagging.views import TaggedObjectList

from shop.apps.products.models import Widget

urlpatterns = patterns('',
    url(r'^widgets/tag/(?P<tag>[^/]+(?u))/$',
        TaggedObjectList.as_view(model=Widget, paginate_by=10, allow_empty=True),
        name='widget_tag_detail'),
)
```

The following sample view demonstrates wrapping this generic view to perform filtering of the objects which are listed:

```
from myapp.models import People

from tagging.views import TaggedObjectList

class TaggedPeopleFilteredList(TaggedObjectList):
    # Tag filtering is applied on top of this restricted queryset.
    queryset = People.objects.filter(country__code='uk')
```

[Template tags](#id49)[¶](#template-tags)
---

The `tagging.templatetags.tagging_tags` module defines a number of template tags which may be used to work with tags.

### [Tag reference](#id50)[¶](#tag-reference)

#### [tags_for_model](#id51)[¶](#tags-for-model)

Retrieves a list of `Tag` objects associated with a given model and stores them in a context variable.

Usage:

```
{% tags_for_model [model] as [varname] %}
```

The model is specified in `[appname].[modelname]` format.

Extended usage:

```
{% tags_for_model [model] as [varname] with counts %}
```

If specified - by providing the extra `with counts` arguments - adds a `count` attribute to each tag containing the number of instances of the given model which have been tagged with it.

Examples:

```
{% tags_for_model products.Widget as widget_tags %}
{% tags_for_model products.Widget as widget_tags with counts %}
```

#### [tag_cloud_for_model](#id52)[¶](#tag-cloud-for-model)

Retrieves a list of `Tag` objects for a given model, with tag cloud attributes set, and stores them in a context variable.

Usage:

```
{% tag_cloud_for_model [model] as [varname] %}
```

The model is specified in `[appname].[modelname]` format.

Extended usage:

```
{% tag_cloud_for_model [model] as [varname] with [options] %}
```

Extra options can be provided after an optional `with` argument, with each option being specified in `[name]=[value]` format. Valid extra options are:

> `steps`
> Integer. Defines the range of font sizes.
>
> `min_count`
> Integer. Defines the minimum number of times a tag must have been used to appear in the cloud.
>
> `distribution`
> One of `linear` or `log`. Defines the font-size distribution algorithm to use when generating the tag cloud.

Examples:

```
{% tag_cloud_for_model products.Widget as widget_tags %}
{% tag_cloud_for_model products.Widget as widget_tags with steps=9 min_count=3 distribution=log %}
```

#### [tags_for_object](#id53)[¶](#tags-for-object)

Retrieves a list of `Tag` objects associated with an object and stores them in a context variable.

Usage:

```
{% tags_for_object [object] as [varname] %}
```

Example:

```
{% tags_for_object foo_object as tag_list %}
```

#### [tagged_objects](#id54)[¶](#tagged-objects)

Retrieves a list of instances of a given model which are tagged with a given `Tag` and stores them in a context variable.

Usage:

```
{% tagged_objects [tag] in [model] as [varname] %}
```

The model is specified in `[appname].[modelname]` format.

The tag must be an instance of a `Tag`, not the name of a tag.

Example:

```
{% tagged_objects comedy_tag in tv.Show as comedies %}
```
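The template tags above cover most display cases, but the same data can also be assembled in a plain view using the utilities documented earlier. A minimal sketch, assuming the `Widget` model from the examples above, a hypothetical `widget_cloud.html` template, and that `Tag.objects` provides the `usage_for_model()` manager method with `counts=True` (part of django-tagging's `Tag` manager API):

```
from django.shortcuts import render

from tagging.models import Tag
from tagging.utils import calculate_cloud, LOGARITHMIC

from shop.apps.products.models import Widget

def widget_tag_cloud(request):
    # Tags annotated with a ``count`` attribute for this model...
    tags = Tag.objects.usage_for_model(Widget, counts=True)
    # ...get a ``font_size`` between 1 and 6 added by calculate_cloud().
    cloud = calculate_cloud(tags, steps=6, distribution=LOGARITHMIC)
    return render(request, 'products/widget_cloud.html', {'tags': cloud})
```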
bigace
readthedoc
Markdown
Welcome to the documentation for the BIGACE CMS! This documentation looks funky, as it is an auto-import from a dokuwiki installation, using the DokuWiki-to-Markdown-Converter. It was imported to keep the history of Bigace alive, without the need of hosting the old infrastructure.

* Install BIGACE - Step-by-step to your new Bigace driven website
* Upgrade your System - Step-by-step to the next Bigace version
* Plugins & Extensions - A list of existing add-ons, plugins, templates and modules
* User Manual - Using BIGACE and its Administration, editing content, resizing images ...
* Developer - Extending Bigace with your code. Write plugins, design layouts
* Administrator - Administrate and configure your Bigace installation

# Development

Quick links:

* Templates - create your own page design
* Smarty Tags - for template customization & dynamic data
* Smarty Introduction - basic usage and links
* PHP API - use the core API to extend BIGACE

Tutorials:

* Portlets - creation and customization of Portlets
* Smarty - using templates in external applications
* Search - integrate search in your design
* Plugins - how to build installable Plugins
* Modules - a short introduction

Core API:

* Access database - use the BIGACE db framework
* Update container - package format to install BIGACE extensions
* Classes - can easily be used in your code
* Core Libraries - important BIGACE core functions
* Translations - enable your extension for multiple languages

More stuff:

* Editor Framework - integrate a new editor
* Admin Themes - customize the administration look & feel
* Zend Framework - extend your applications with this rich API

Code Snippets:

* Fetch a Category Tree and loop over its entries
* BIGACE AJAX - fetch item information without reloading pages

# Administration & Configuration

* URL Rewriting - SEO friendly (beauty) URLs
* LightHTTPD - for those using this light-weight webserver
* IIS - using BIGACE on Windows servers
* User attributes - configure the available user attributes
* Languages - create new languages
* Updates - administrate and install updates
* Community configuration - understand and change the Community configuration
* FCKeditor - internals, configuration and updating
* Captcha - changing the used implementation
* SSL - for authentication and administration
* .htaccess - protect your system and increase your data security
* Config files - the most important configurations at a glance
* Database tuning - Expert level: increase the speed of item calls
* Move to a new Server - transferring BIGACE to a new host
* UTF8 - UTF8 and migration infos (for upgrading to 2.5)
* FileLogger - write log entries to a file instead of the database

Core Development:

* Building BIGACE - the ANT scripts
* bigace:subversion - fetch the latest snapshot
* bigace:todo - ideas for BIGACE, the Website, Documentation ...

Admin Themes customize the look and feel of the Administration the way you like it. It is possible to change images, stylesheets and even the templates. Here is a HOWTO for creating your own Administration Themes.

* Here is a list of all available Administration Themes.

# Administration

# Admin Handbook

Please choose any article of interest.
# User

* Configuration of available User attributes

# Languages

# Updates

* Creating a new Update/Extension
* Available Extensions
* Administration of Updates

# Search

* Integration of Search into your Website design

# Editors

* Configuration and updating of FCKeditor

# Security

* .htaccess security for your files
* Password salting
* Using SSL for authentication and administration

# Mixed up

* The most important configuration files at a glance
* BIGACE URL Rewriting features
* Database tuning can increase the speed of item calls

## ANT Build File

BIGACE uses an ANT Build File to automate the several build tasks. It is only available in Subversion and does not come with the released files.

## Build Properties

The ANT Build File only works when the Build Properties File is customized to fit your system. You have to adjust paths and files to point to your local sources!

## 3rd Party Software

BIGACE uses a lot of other Open Source components to fulfil several tasks. These packages are all available for free and need to be placed on the building computer. You also have to correct the paths within the Build Properties to point to your disk.

* AdoDB - http://adodb.sourceforge.net/
* Smarty - http://www.smarty.net/
* FCKeditor - http://www.fckeditor.net
* b2evo Captcha - http://b2evo-captcha.sourceforge.net/
* TinyMCE (for TinyMCE Extension) - http://tinymce.moxiecode.com/

The used version of each package can be found at the BIGACE Credit Screen (in your BIGACE Administration).

## Documentation (deprecated!)

The old documentation of BIGACE (up to version 2.0) is written in Docbook. In this way it was possible to generate many different documentation formats (HTML, chunked HTML, PDF). But times change and the documentation is now written with BIGACE (see Forum discussion).

There are some ANT XSLT tasks available to generate the documentation, but you might also generate it with Java command line tools or any other XSLT engine... For documentation about these tasks, have a look at Google and at "DocBook: The Definitive Guide".

Software required for generating the documentation:

```
* docbook-xsl-1.69.1 (XSL Stylesheets)
* docbook-xml-4.2 (DTD)
* saxon.jar (XML/XSL Engine - Saxon 6.5.5)
* resolver.jar (from Apache: xml-commons-resolver-1.1 - needed for Ant XSL Task to find catalog files!)
* Java 1.5 (couldn't make it run with 1.4)
```

If the Build File solution with catalog files does not work, you have to adjust the file locations of the XSL Stylesheets (to point to your drive). Adjust the following files:

```
* /DOCU/MANUAL/developer/manual.xml
* /DOCU/MANUAL/manual/manual.xml
* /DOCU/MANUAL/xsl/style*.xsl
```

If you do not want the XSL engine to load the DTD each time from the Internet, you also have to adjust the Build Properties File to point to the Docbook Catalog XML File.

Some links that might be useful:

* [http://sagehill.net/docbookxsl/index.html](http://sagehill.net/docbookxsl/index.html)
* [http://docbook.sourceforge.net/](http://docbook.sourceforge.net/)
* [http://richard.cyganiak.de/2003/xml/oeb_docbook/index.html](http://richard.cyganiak.de/2003/xml/oeb_docbook/index.html)
* [http://www.goshaky.com/docbook-tutorial/](http://www.goshaky.com/docbook-tutorial/)

The default captcha solution in BIGACE is B2evo. To change the Captcha implementation, you have to configure the class at Administration -> System -> Configurations - Package "system" - Key "captcha".
## B2evo Captcha (default)

Configuration:

```
classes.util.captcha.B2EvoCaptcha
```

Smarty is one of the most popular template engines in the PHP world. In Bigace 2.x it was the default template engine, while in Bigace 3 it is available as an extension. Here are some articles related to Smarty, which were originally written for Bigace 2.x:

* Smarty Templates - create a design, Step-by-Step
* Smarty Tags - for template customization & dynamic data
* TAGs in page content - for these special situations...
* Smarty external - using templates in your applications
* "Previous page" & "Next page" - create these links using Smarty TAGs

## Using web resources

To load resources like images and CSS files in your template, you need the {directory} TAG:

```
:::html
<link rel="stylesheet" type="text/css" href="{directory}styles.css" />
<script type="text/javascript" src="{directory}script.js"></script>
```

The {directory} TAG returns the path to your community's web directory. If you created your template using an extension, you might be using a subfolder. Make sure to append it in this case:

```
:::html
<link rel="stylesheet" type="text/css" href="{directory}mytheme/styles.css" />
<script type="text/javascript" src="{directory}mytheme/script.js"></script>
```

This will expand to:

```
:::html
<link rel="stylesheet" type="text/css" href="http://yourdomain/public/cid1/mytheme/styles.css" />
<script type="text/javascript" src="http://yourdomain/public/cid1/mytheme/script.js"></script>
```

## Inline styles and scripts

Smarty uses the {} brackets as identifiers for template tags. You will notice that using inline CSS or JavaScript (for example a `<style>` block containing `body { ... }` rules, or a `<script>` block defining and calling `function sayHello(name) { ... }`) will result in something like ...

```
:::html
<style type="text/css">
* body
</style>
<script type="text/javascript">
function sayHello(name)
sayHello("World");
</script>
```

... which is definitely not what you wanted. Smarty tries to interpret everything between an opening { and a closing } bracket as a tag, strips it out, and dumps an error in the Bigace log if it couldn't handle it. The solution is to tell the template engine that it shouldn't parse parts of the template by wrapping them in the {literal} tag: Smarty then knows that it should leave everything between {literal} and {/literal} untouched.

Please read about the delimiters and the literal tag in the official documentation.

## Template Skeleton

When you start to create a new template, you might want to consider using this HTML skeleton, which includes best practices.

```
:::html
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
{metatags item=$MENU}
</head>
<body>
<div class="header">
<h1>{sitename}</h1>
</div>
<div class="content">
<h2>{$MENU->getName()}</h2>
{modul menu=$MENU}
</div>
<div class="footer">
&copy; Copyright {$smarty.now|date_format:"%Y"} {configuration package="community" key="copyright.holder" default="BIGACE CMS"}. All rights reserved.
<br />
Powered by {bigace_version link=true full=true}.
{if $USER->isAnonymous()}
| <a rel="nofollow" target="_self" href="{link_login id=$MENU->getID()}">Login</a>
{else}
<br/>
<a target="_blank" href="{link_admin id=$MENU->getID()}">Administration</a> |
{permission_editcontent assign="allowEditor"}
{if $allowEditor}
<a href="{link_editor id=$MENU->getID() language=$MENU->getLanguageID()}" target="_blank">Editor</a> |
{/if}
<a target="_self" href="{link_logout id=$MENU->getID()}">Logout</a>
{/if}
</div>
</body>
</html>
```
Since Bigace 3.0 the Zend Framework is the core of Bigace, where the MVC components are its heart. We use the Zend Framework in version 1.10.x at the time of this article.

You don't have to do anything to use the classes, the environment is properly set up for you. Just call the classes as expected:

```
:::php
$config = new Zend_Config();
```

To use the Zend Framework in Bigace 2.x you have to adjust the ZF include path.

## See also

* http://framework.zend.com/apidoc/core/ - the Zend API
* http://framework.zend.com/manual/ - the official manual about all ZF components
* http://framework.zend.com/manual/en/learning.html - the official QuickStart
* http://www.survivethedeepend.com/zendframeworkbook/ - a good tutorial in English
* http://www.zf-tutorials.de/ - a German tutorial to start with the ZF (a bit outdated, but still great!)
* http://www.zendcasts.com - screencasts about the ZF

# Moved: How to write new Designs

This page has moved to bigace:developer:smarty:tutorial.

The directory ViewHelper does something ...

* $key - (optional, string)

$foo = $this->key();

The dojo ViewHelper was implemented to make sure that we use a pre-configured system. It extends the Zend implementation only insofar as the path information is corrected and the Bigace-specific locale and theme are used. Read more about the Dojo Javascript Framework.

If you want to use the dojo ViewHelper, refer to the Zend documentation and API (Package: Zend_Dojo_View_Helper). This example shows how to activate Dojo in your layout.

$this->dojo();

The sitename ViewHelper does something ... Further info at the ViewHelper sitename() PHPDoc.

* $default - (optional, string) The $default defines ...

$foo = $this->sitename();

This module renders a list of image thumbnails, linked to one category. On click, the image is shown full size, as an overlay. It uses the Slimbox jQuery Plugin to display full size versions, and also works for users with deactivated Javascript!

A lot of configurations are available; for a full list see below (and the screenshot).

Get all downloads from:

## Features

* Show all images linked to a selectable category.
* Can be used on multiple pages, each page has its own set of configurations. Create multiple pages with different galleries (categories).
* Configurable thumbnail height.
* Configurable display order.
* Configurable display of name and description.

## Permission

To configure a gallery, your user needs to be in a usergroup which owns the permission "fotogallery_admin".

* Attention: In older versions (until 1.7) the permission was not even assigned to Administrators. You need to do that manually before you will see the "Settings" link.
## Documentation

* Create a category (choose a descriptive name like "Gallery birthday images")
* Assign this category to all images that shall be displayed
* Assign the module to the page that will display the gallery
* Open the page and open the module settings there
* Choose the category and save
* Reload the page - the images should show up

## Configurations

* Image category - the category the images are linked to
* Show Name - show the name in the listing
* Show description in list - display the description in the initial listing
* Show description in PopUp - display the description in the Lightbox
* Sort by position - order the images by their natural position
* Sort by name - sort the images by their names
* Sort in reverse order - reverse the order the images are displayed in
* Thumbnail height (px) - height of the thumbnails in the initial listing

## Screenshot

(screenshot_fotogallery.jpg)

# Auto Jobs

Auto Jobs are also known as Cron Jobs. These jobs run in a continuous time cycle.

# Community

# Community Configuration

As Bigace is a multi-site CMS, it can handle independent websites in one installation, where a Community is one of these independent websites. You get a list of all communities that your installation handles at the community admin panel.

For each Community you see:

* the **domain** the community is mapped to
* all **alias domains** for this community
* the [maintenance](maintenance) state
* whether it is the system **default community** or not

# What are Communities?

Communities are client websites. With one BIGACE installation you can handle as many websites as you want. Websites here do NOT mean pages, but complete self-sufficient systems. Each community has its own data pool, which is separated in database and filesystem. It is not possible for an author using the Editor and/or Administration to fetch data from another community than his own. Other names for Community are Consumer (till version 2.0) or Mandant in German.

# Community INI File

Community settings are stored in the file /system/config/consumer.ini. If you want to change a community's main domain (which is currently not possible with the web interface), you might edit this file manually and change it there.

# Alias Domain

A community can be mapped to unlimited domains. At installation time, you enter the initial domain, but as time passes, you might need to add a second or third domain alias.

Now, let's make a short example: your Community "Intranet" can be accessed at "intranet.example.com" (http://intranet.example.com/). If you now want to make it accessible at the URL "http://cms.example.com/", you have to add an Alias Domain "cms.example.com".

Note: Please make sure to enter neither the protocol nor the directory when creating a new alias domain! Only enter the complete hostname (without protocol and path).

## Default Community

If your installation is reachable via domains other than those bound to a community, Bigace tries to load a default community. A default community therefore serves to present your visitors with useful content despite the "wrong" URL, instead of showing them an error message.

Let's assume you have two communities, each with one alias:

* intranet.example.com
  * cms.example.com
* internet.example.com
  * www.example.com
If you now want all domains that are not registered (e.g. example.com) to be directed to your Internet community "www.example.com", you only have to make "internet.example.com" the default community with one click.

## consumer.ini

This is a simple example configuration and shows all possible configurations (please be careful when you edit this file!):

```
[www.example.de]
id = 1
language = "de"

[www.example.com]
id = 1
language = "en"

[www.example.org]
id = 0

[*]
id = 0
```

If any domain that is not mentioned here maps to your BIGACE installation, the default community [*] with the ID 0 will be used. The language flag will overwrite the community/default.language setting, which can be applied in the administration panel. Therefore www.example.de uses German (de), www.example.com uses English (en) and www.example.org uses the dynamic setting from community/default.language.

As you can see, www.example.de and www.example.com map to the same Community (ID 1). With this in mind, you could build a multi-language site which shows different contents on different domains.

The next example is a bit more complex and requires the ability to create symlinks on your server. Let's say you want to integrate BIGACE into your current website, but only within a special folder structure; then you could create a folder structure like this:

```
/var/www/
|
+ bigace/ (your bigace installation)
|
+ html/ (your website structure)
  |
  + cms/ (the directory holding the symlinks)
    |
    + sports/ (symlink to /var/www/bigace/)
    |
    + technic/ (symlink to /var/www/bigace/)
```

Now you are able to create one (plus one alias) or two communities within your existing website! This could be the consumer.ini:

```
[www.example.de/cms/sports/]
id = 1
path = "cms/sports/"

[www.example.de/cms/technic/]
id = 2
path = "cms/technic/"

[*]
id = 1
```

Using such a configuration, you have to make sure that your config.system.php has the following setting:

```
:::php
define ('BIGACE_DIR_PATH', '');
```

Not clear? Questions? Please ask in the forum, the configuration capabilities are quite powerful.

# Bigace Error codes

This page lists known errors which can occur during usage:

Error Codes: 16777217, 16777218, 16777219, 16777221, 33554432, 67108866, 67108867, 67108868, 83886081, 83886082, 83886083, 100663298

In Bigace v3 the MySQL search was replaced with Lucene.

## Indexing

Currently Bigace indexes the following data automatically when adding or updating/saving:

* User
* Pages (metadata and content)
* Image metadata (like name, description ...)
* File metadata (like name, description ...)
* Docx (MS Word 2010)
* Pptx (MS Powerpoint 2010)

## Re-indexing

Re-indexing is the process of reloading all content into the search index. This process is, depending on your amount of data in Bigace, pretty time- and memory-consuming. You should not use this function unless you have pretty good reasons for it. Valid reasons could be:

* a crashed index
* a missing index after a Bigace migration
* ...

The left side of the Menu Administration is the Menu Tree, where you choose the pages to administrate. If you click a menu entry, its administration will open and the toolbar buttons at the top will be activated. From there, you can open all tools and the Editor to edit the page's content.

## Page permission

Read more about Object access rights.

Administrate your profile (and, if allowed, all other users' profiles) here.
Users have the following required attributes:

* Language
* State (Active/Inactive)
* Password

They also have the following meta data:

* Membership in Usergroups
* freely configurable attributes (some are pre-configured, like the email address)

* Default: ''TRUE''

The minimum word length that is accepted in a search. This parameter is directly connected to the MySQL setting ''ft_min_word_len'', which is 4 characters in default installations. This setting is for example used in validation processes or during content storage.

* Type: Integer
* Default: ''4''

* Type: String
* Type: String

Whether page URLs should be lowercased. To prevent problems with lower- and uppercase characters and duplicate URLs, consider turning it on (using ''TRUE''). This setting only applies to new page URLs, so you might leave it as is if you have a running system with a lot of existing (and search-engine indexed) pages.

* Default: ''FALSE''

If a page is requested in a language where no content exists, Bigace can load the content from the default language. This can be useful if you create pages and fill them with content soon afterwards. But consider turning this off if you have many empty pages, otherwise you might run into serious SEO (duplicate content) problems.

* Type: Boolean
* Default: ''TRUE''

* Default: ''default''

* Default: ''MAINTENANCE''

# singlereview.group.id

Package: .

The Group-ID of the Usergroup that is able to review a Single-review workflow.

Default: ''Editor''

See also Configuring Bigace

# DEPRECATED, SEE LINKS

How-to: Find, upload and install Web CMS extensions

The {explode} modifier splits a value at a defined delimiter. Use this modifier with e.g. the {assign} TAG.

* var - (required, string)
* value - (required, string)

## Syntax

```
:::html
{assign var="DEFINED_VAR" value="DELIMITER"|explode:VAR_TO_EXPLODE}
```

```
:::html
{assign var="stuff" value=";"|explode:$MENU->getCatchwords()}
<ul>
<li>Item 1: {$stuff[0]}</li>
<li>Item 2: {$stuff[1]}</li>
<li>Item 3: {$stuff[2]}</li>
</ul>
```

The {addthis_widget_drop} TAG shows the AddThis widget on the page where you put it. The AddThis widget shows bookmarking links to the most important social bookmarking sites. See a demo at: http://www.addthis.com/ -> Since BIGACE 2.5 (probably).

Please note: If you want to use statistics you need to put in your AddThis username, which has nothing to do with your BIGACE username! You must register at http://www.addthis.com/register first. To watch your statistics you need to log in at the AddThis website as well!

* username - (optional, string) A username for statistic purposes
* link - (optional, string) The URL to be bookmarked (if passed, 'item' will be ignored)
* item - (optional, Item) The item which should be bookmarked (will be ignored if 'link' is set)
* title - (optional, string) Name for the bookmark (if not set, the URL will be used)

This example shows how to add a default AddThis widget with activated statistics.

```
:::html
<html>
<head>
<title>AddThis for {$MENU->getName()}</title>
</head>
<body>
{addthis_widget_drop username="dummy" item=$MENU}
</body>
</html>
```

The {areatree} TAG prints a navigation for the current area of the webpage (area = first level below the top level) -> Since BIGACE 2.5.
* language - (optional, String) the language to get the tree for, default is the current language from the session (the user's language)
* css - (optional, String) the CSS class for the rendered link (`<a>`) node, default is empty
* hidden - (optional, boolean) show also hidden pages in the page tree, default is false
* prefix - (optional, String) the HTML to be prepended to every item link in the areatree (will be printed before the `<a href="...">itemname</a>` code of each item), default is empty
* suffix - (optional, String) the HTML to be appended to every item link in the areatree (will be printed after the `<a href="...">itemname</a>` code of each item), default is empty
* selected - (optional, String) the CSS class for all items on the path to root, default is empty
* start - (optional, String) the initial value that will be prepended to the output of this tag, default is empty
* maxdepth - (optional, int) the maximum level depth to crawl into the page tree. No pages with a higher level value than the specified one will be displayed in the areatree, default is 999 (all levels)
* beforeLevelDown - (optional, String) the text/HTML that should be placed before each step down a level in the generated source code, default is empty
* beforeLevelUp - (optional, String) the text/HTML that should be placed before each step up a level in the generated source code, default is empty
* assign - (optional, String) if set, the output of this tag will be assigned to the variable instead of being printed to the output stream
* folded - (optional, boolean) indicates if only the subtree elements on the path to root (and their siblings) should be shown, or all items in the subtree, default is false (= show all items in the subtree)
* debug - (optional, boolean) if set to true, debug output will be printed out while rendering the areatree, default is false

This example shows how to create a simple multilevel list representing the current area subtree, as you would use it for building a CSS styled navigation:

```
:::html
<html>
...
<body>
<ul class="subnav">
{areatree folded='true' prefix=<li> suffix=</li> css="subnav" selected='subnavactive' beforeLevelDown=<li><ul> beforeLevelUp=</ul></li>}
</ul>
</body>
</html>
```

assuming this page structure:

```
TOP LEVEL
|_ area 1 (current area)
|  |
|  |_ page1 name
|  |  |
|  |  |_ page2 name
|  |  |  |
|  |  |  |_ page3 name (lets assume this is the current page the user is on)
|  |  |  |
|  |  |  |_ page4 name
|  |  |
|  |  |_ page5 name
|  |     |
|  |     |_ page5b name (this page will not be displayed, since the folded parameter is set to true and the parent is not on path to root)
|  |     |
|  |     |_ page5c name (this page will not be displayed, since the folded parameter is set to true and the parent is not on path to root)
|  |
|  |_ page6 name
|     |
|     |_ page6b name (this page will not be displayed, since the folded parameter is set to true and the parent is not on path to root)
|
|_ area 2 (will not be displayed, since it is not the current area)
|...
```

this is what the resulting HTML would look like:
```
:::html
<html>
...
<body>
<ul class="subnav">
<li>
<a href="http://www.yourdomain.com/page1.html" class="subnavactive subnavactive_areatree_level_0">page1 name</a>
</li>
<li>
<ul>
<li>
<a href="http://www.yourdomain.com/page2.html" class="subnavactive subnavactive_areatree_level_1">page2 name</a>
</li>
<li>
<ul>
<li>
<a href="http://www.yourdomain.com/page3.html" class="subnavactive subnavactive_areatree_level_2">page3 name</a>
</li>
<li>
<a href="http://www.yourdomain.com/page4.html" class="subnav subnav_areatree_level_2">page4 name</a>
</li>
</ul>
</li>
<li>
<a href="http://www.yourdomain.com/page5.html" class="subnav subnav_areatree_level_1">page5 name</a>
</li>
</ul>
</li>
<li>
<a href="http://www.yourdomain.com/page6.html" class="subnav subnav_areatree_level_0">page6 name</a>
</li>
</ul>
</body>
</html>
```

Use the "css" and "selected" attributes to specify the CSS classes to be applied to the links. The areatree tag will also add an additional class to each link, which is a level (depth) specific class, allowing you to style every level's items in a different way. E.g. if you use "subnav" as the value for the "css" attribute, then every link in level 0 gets the following class attribute values:

```
<a href="..." class="subnav subnav_areatree_level_0"> ... </a>
```

The schema for the created class attribute value is always `[css class]_areatree_level_[level of the linked item]`. The same applies for the "selected" attribute. For a better understanding, see also the resulting HTML of the example given above.

The {bigace_copyright} TAG echoes a `<div> ... </div>` with the proper BIGACE copyright information, the name of the author, the BIGACE version and the URL of the homepage. This TAG has no attributes.

```
:::html
<html>
<head>
<title>Example</title>
</head>
<body>
Some content...
{bigace_copyright}
</body>
</html>
```

Customize the look by adding CSS for the following generated HTML:

```
:::html
<div class="CopyrightFooter" align="center"><span class="copyright">...</span></div>
```

For example:

```
:::css
.CopyrightFooter {
  border-top: 1px solid #cccccc;
  background-color: #eeeeee;
  padding-top: 4px;
  padding-bottom: 4px;
}
span.copyright, span.copyright a {
  color: #666666;
  font-size: 12px;
}
```

The {bigace_version} TAG returns a version string. Depending on the parameters, it returns the proper software name and a link to the BIGACE homepage. This TAG has only optional attributes:

* full (optional, boolean) The version string will start with the correct software name.
* build (optional, boolean) The version string will include the internally used Build ID.
* link (optional, boolean) The version string will be returned as an HTML link (`<a href=...>version string</a>`)

```
:::html
<html>
<head>
<title>Example</title>
</head>
<body>
This site is running: {bigace_version full="true"}.<br/>
Click the following link to visit the homepage of: {bigace_version full="true" link="true"}.<br/>
The internal version of your BIGACE installation is: {bigace_version full="true" build="true"}.<br/>
</body>
</html>
```

The {comment_counter} TAG counts the amount of comments for an item. If you do not pass the assign attribute, the amount will be returned directly, so you can put the TAG where you want the counter to be displayed.

* assign - (optional, string) The template variable name the value will be assigned to. If this is not set, the value will be returned.
* item - (required, item) The item to count the comments for. If this is not passed, you must at least pass the id and language attributes.
* id - (optional, int) The Item ID to count the comments for.
* language - (optional, string) The Item language to count the comments for.
* itemtype - (optional, int) The Itemtype to count the comments for. Default is "1" for menus.

You can use it, for example, to fetch the comments for the current page, or to assign the value to a variable to use it later. The following example shows how to fetch the comments for the English image with the ID 10.

```
:::html
<html>
<head>
<title>Comment counter example</title>
</head>
<body>
The image has <b>{comment_counter itemtype="4" language="en" id="10"}</b> comments.
</body>
</html>
```

The {link_admin} TAG returns the absolute URL to the Administration. -> Since BIGACE 2.1.

* id - (optional, int) The ID of the admin menu/panel to load.
* language - (optional, int) The language in which the administration framework will be loaded.

## Description

To access the administration, you don't need to set any of these values. But if you pass an ID which does not exist, you get a "File not found" screen! If you want to access the administration framework, skip the ID attribute! You don't want to listen? ;-) Then pass "index" or "-1" as the ID.

This displays a link to the Administration, depending on the user's login state.

```
:::html
<html>
<head>
<title>Administration link</title>
</head>
<body>
{if $USER->isAnonymous()}
<a target="_self" href="{link_login}">Login</a>
{else}
<a target="_blank" href="{link_admin}">Administration</a> &middot;
<a target="_self" href="{link_logout}">Logout</a>
{/if}
</body>
</html>
```

For a full "admin links" example, see bigace:smarty_tags:permission_editcontent.

The {link_search} TAG creates the URL to the search result page and will most likely be used in a quick-search form. See the "Customizing Search" article as well for more information about the request parameters.

* id - (optional, int) The menu id that will be used for rendering the search.
* language - (optional, string) The language locale to display the search result page.

This example shows a minimal quick search form:

```
:::html
<html>
<head>
<title>Search BIGACE pages</title>
</head>
<body>
<form action="{link_search id=$MENU->getID()}" method="post">
<input type="text" name="search" value="" />
<input type="hidden" name="language" value="{$MENU->getLanguageID()}" />
<input type="submit" value="Search" id="searchbutton" name="searchbutton" />
</form>
</body>
</html>
```

The {load_item_childs} TAG fetches the child items of a specified menu.

* assign - (required, int) The name of the template variable the array will be assigned to.
* id - (optional, int) The ID to load the child Items from. Default is the current Menu ID.
* language - (optional, string) The language to load the Items with. Default is the current menu language.
* counter - (optional, string) The amount of items that will be returned. This value will be assigned to the named Smarty variable.

This example shows a preview of the current menu's children, with `<h1>Menu Name</h1>` and the description as a teaser, followed by a link to open the full article. If no description is saved, the default value "No teaser available..." will be shown instead.
```
:::html
<html>
<head>
<title>{$MENU->getName()}</title>
</head>
<body>
{load_item_childs id=$MENU->getID() assign="preview"}
{foreach from=$preview item="currentPage"}
<h1>{$currentPage->getName()}</h1>
<p>{$currentPage->getDescription()|default:"No teaser available..."}
<a href="{link item=$currentPage}">Read more</a></p>
{/foreach}
</body>
</html>
```

## Example 2

The next - more complex - example shows how to display a 2-3 level menu tree, starting with all menus below the HOME page. If the current page is a 2nd or 3rd level menu, we display the 3rd level as well.

```
:::html
<html>
<head>
<title>{$MENU->getName()}</title>
</head>
<body>
{load_item_childs id="-1" assign="level1"}
<ul>
{foreach from=$level1 item="currentPage"}
<li><a href="{link item=$currentPage}">{$currentPage->getName()}</a>
{load_item_childs id=$currentPage->getID() assign="level2"}
{if count($level2) > 0}
<ul>
{foreach from=$level2 item="currentPage2"}
<li><a href="{link item=$currentPage2}">{$currentPage2->getName()}</a>
{if $MENU->getID() == $currentPage2->getID() || $MENU->getParentID() == $currentPage2->getID()}
{load_item_childs id=$currentPage2->getID() assign="level3"}
{if count($level3) > 0}
<ul>
{foreach from=$level3 item="currentPage3"}
<li><a href="{link item=$currentPage3}">{$currentPage3->getName()}</a></li>
{/foreach}
</ul>
{/if}
{/if}
</li>
{/foreach}
</ul>
{/if}
</li>
{/foreach}
</ul>
</body>
</html>
```

## Example 3

Display a simple one-level navigation with CSS classes, to replace the text with images. NOTE: This example uses the page field "catchwords". You have to fill in the CSS class name for each page!

```
:::html
<html>
<head>
<title>{$MENU->getName()}</title>
</head>
<body>
{load_item_childs id=-1 assign="topNavi"}
<ul class="topNavi">
{foreach from=$topNavi item="currentPage"}
<li class="{$currentPage->getCatchwords()}"><a href="{link item=$currentPage}">{$currentPage->getName()}</a></li>
{/foreach}
</ul>
</body>
</html>
```

Now let's say we have three pages below TopLevel:

1. Profile (catchwords: about)
2. Contact (catchwords: contact)
3. Impress (catchwords: impress)

Then you have to add some CSS, like this:

```
:::css
ul.topNavi { list-style-type: none; }
ul.topNavi a { text-indent: -9999px; }
ul.topNavi .about { background: url(profile.jpg) 0 0 no-repeat; }
ul.topNavi .contact { background: url(contact.jpg) 0 0 no-repeat; }
ul.topNavi .impress { background: url(impress.jpg) 0 0 no-repeat; }
```

As you can see, we move the text inside each link away. Doing this, we only see the image, but search engines still see the name of the page (which is pretty important for your ranking in search results!).

The {load_translation} TAG loads a file with translations into the translation namespace.

* name - (required, string) The name of the language file to load.
* locale - (optional, string) The language (locale) of the file to load. Default is the environment language.
* directory - (optional, string) The directory to look up the translation files in. Default locations will be searched afterwards automatically.

This example shows how to load the "editor" translation, which is located at /system/languages/en/editor.properties.

```
:::html
<html>
<head>
<title>Load a translation file and display a translated string.</title>
</head>
<body>
{load_translation name="editor"}
Translation of the key "content_page" is: <b>{translate key="content_page"}</b>.
</body>
</html>
```
The {modul} TAG loads (and executes) a module by name, or, if no name is given, the current menu's module.

* menu - (optional, Item) The menu to load the configured module for. In most templates that will be $MENU.
* name - (optional, string) The name of the module to load. If not set, we look up the current menu and load its configured module.
* language - (optional, string) The language to load the module's translations with. If not set, uses the language from the current request.

This example shows how to load the menu's module.

```
:::html
<html>
<head>
<title>Module of {$MENU->getName()}</title>
</head>
<body>
{modul}
OR
{modul name=$MENU->getModulID()}
</body>
</html>
```

This example shows how to load a special module.

```
:::html
<html>
<head>
<title>Loading the module :contactMail:</title>
</head>
<body>
{modul name="contactMail"}
</body>
</html>
```

The {news_item} TAG loads one single News item. -> Since BIGACE 2.4 and installed News Extension.

* id - (required, int) The ID of the News item to load.
* assign - (required, string) The name of the template variable the News item will be assigned to.

This example shows how to load a News item and its linked image. The News item is represented by the current page (but you could easily replace $MENU->getID() with any News ID you want to).

```
:::html
<html>
<head>
<title>I am a News</title>
</head>
<body>
{news_item id=$MENU->getID() assign="NEWS"}
{load_item itemtype=4 id=$NEWS->getImageID() assign="newsImage"}
<h1>{$NEWS->getTitle()}</h1>
<p><a href="{link item=$newsImage}" title="{$newsImage->getName()}"><img src="{link item=$newsImage}" style="float:left"></a>
<b>{$NEWS->getTeaser()}</b></p>
<p>{$NEWS->getContent()}</p>
</body>
</html>
```

The {""|smileys} modifier parses a string to replace all textual smileys with icons. This modifier is part of the Smileys Plugin.

* textual - (optional, boolean) Whether the string is parsed for extended smileys like :band: and :lol:, or only for the well known smileys like ;). Default is false, only standard smileys will be replaced.

This example shows how to replace the smileys in a string.

```
:::html
<html>
<head>
<title>A simple Smiley string</title>
</head>
<body>
{"Some Smileys: :) :lol: :hbd: :-D"|smileys:true}
</body>
</html>
```

This example shows how to parse the 5 latest comments {comments_latest}:

```
:::html
<html>
<head>
<title>Simple smileys in comments</title>
</head>
<body>
{comments_latest end="5" assign="myComments"}
{foreach from=$myComments item="comment"}
<div class="comment">
<a name="comment{$comment.id}"></a>
<div class="commentMeta"><span class="commentName">{$comment.name}</span>
{if $comment.homepage != ''}(<a{if $comment.anonymous} rel="nofollow"{/if} href="{$comment.homepage}" target="_blank">Web</a>){/if}
Written at {$comment.timestamp|date_format:"%m.%d.%y, %H:%M"}:
</div>
<p>{$comment.comment|nl2br|smileys}<hr /></p>
</div>
{/foreach}
</body>
</html>
```

The {thumbnail} TAG creates a URL to a dynamically created image thumbnail. -> Since BIGACE 2.4 RC2.

* id - (required, int) The ID of the Image to create the thumbnail for.
* language - (optional, string) The language of the Image to load.
* If neither "height" nor "width" is set, the normal image will be displayed. If only one of them is set, the other one is calculated automatically (which is the preferred way: set only one of "width" OR "height").
This example displays a thumbnail for the Image with the ID 5 and a new width of 100 pixels. The height will be calculated automatically.

```
:::html
<html>
<head>
<title>Image thumbnail</title>
</head>
<body>
<img src="{thumbnail id=5 width=100}" alt="Thumbnail">
</body>
</html>
```

The {user} TAG fetches a User object or returns a username, depending on the used attributes. -> Since BIGACE 2.4.

* id - (required, int) The User ID to look up. One of the "name" or "id" attributes is required.
* name - (required, string) The username to look up. One of the "name" or "id" attributes is required.
* assign - (optional, string) The name of the template variable the user object will be assigned to.

This example shows how to display the author of a menu.

```
:::html
<html>
<head>
<title>{$MENU->getName()}</title>
</head>
<body>
<h1>{$MENU->getName()}</h1>
<div id="content">
{content}
</div>
<p style="font-size: x-small;">Written at {$MENU->getCreateDate()|date_format:"%B %d, %Y"} by {user id=$MENU->getCreateByID()}</p>
</body>
</html>
```

The next example shows how to display (some) information about a user.

```
:::html
<html>
<head>
<title>User Details</title>
</head>
<body>
{user name="admin" assign="principal"}
<h1>Details for: {$principal->getName()}</h1>
<p>ID: {$principal->getID()}</p>
<p>Email: {$principal->getEmail()}</p>
<p>Language: {$principal->getLanguageID()}</p>
<p>Activated: {if $principal->isActive()}Yes{else}No{/if}</p>
</body>
</html>
```

The Upgrade to 2.4 is only supported from the latest available stable release: 2.3.

* Read the article "bigace:upgrade"
* Download and extract bigace_2.4.zip and bigace_update_2.3-2.4.zip
* Execute the Upgrade
* Perform the next Upgrade tasks

DON'T FORGET TO MAKE A FULL BACKUP: FILESYSTEM & DATABASE

## .htaccess

If you are using the Rewrite features of BIGACE, you have to use the latest .htaccess files, otherwise you will only receive 404 errors. SKIP THIS STEP IF YOU DO NOT USE THE REWRITE FEATURE.

Therefore, copy the file /misc/install/access/root.htaccess and rename it to .htaccess. Replace the existing .htaccess file in your BIGACE working/root directory. Please read this Forum thread (German) and the .htaccess Administration Guide for further information.

## Search Template

You have to configure your Search Template's header and footer after the Upgrade if you do NOT use the default Template (BLIX). If you used the Search as a popup, you might use the default Includes "APPLICATIONS-HEADER" and "APPLICATIONS-FOOTER"; please read bigace:administration:search for further information.

You also have to fix your search form, because the new search uses different parameter names!

# Upgrading BIGACE from Version 2.4 to 2.5

The Upgrade to 2.5 is supported from the latest available stable release 2.4 and from 2.5.x. Please read about the changed requirements with 2.5.

You can watch a screencast to see how the upgrade process works in real life. I recorded that video for all those who prefer watching this information instead of only reading it. Click to watch the tutorial screencast.

## IMPORTANT

If you use the Comments extension, you need to install the latest version first (at least 1.1).

# 1. Create backup
Backup your filesystem and database before proceeding with the upgrade.

# 2. Download upgrade package

* Download and extract bigace_update_2.4-2.5.zip or bigace_update_2.4-2.5.tar.bz2
* Upload install.php and bigace_install_2.5_RC2.zip to your webhost

# 3. Execute install.php and upgrade.php

* Open install.php in your browser
* Open upgrade.php and follow the steps (wait for every task to complete)

# 4. Edit /system/config/config.system.php

This is the new default config file. Please copy it into yours and apply the values. Remove all keys from your config that are not shown here!

```
<?php
// ------------------------------ [CORE SETTINGS] ----------------------------
define ('BIGACE_TIMEZONE', 'Europe/Berlin');  // Timezone of your server, see http://de3.php.net/timezones
define ('BIGACE_URL_REWRITE', '');            // true/false = De-/activates rewritten/friendly URLs
define ('BIGACE_DIR_PATH', '');               // BIGACE root directory, relative from DocumentRoot

// ------------------------------ [DATABASE CONNECTION] ----------------------------
$_BIGACE['db']['type'] = 'mysql';
$_BIGACE['db']['character-set'] = 'utf8';
$_BIGACE['db']['host'] = 'localhost';  // database server name, ip or localhost
$_BIGACE['db']['name'] = '';           // the db name
$_BIGACE['db']['user'] = 'root';       // db user name
$_BIGACE['db']['pass'] = '';           // password for the above user
$_BIGACE['db']['prefix'] = 'cms_';     // prefix for the table names (can be empty!)
```

Make sure there is no closing ?> and the file starts with <?php - there is no whitespace allowed at the beginning.

# 5. Migrate to UTF-8

* Read the article bigace:administration:utf8
* Read the instructions at the conversion helper page
* Execute the conversion by using the utf8.php script (included in your upgrade package)

# 6. Configurations

You have to set these values manually in Administration at System/Configurations for EVERY community!

* community / contact.email
* community / default.language
* community / sitename
* editor / default.editor

# 7. Check Smarty Templates

If you used the {content} TAG before, you have to change your template code. The {content} TAG is now used to fetch "additional contents" from the page. It is NOT used for fetching the page's main content any longer. In most cases it should work if you replace {content} by {modul}. Please have a look at the Wiki pages:

# 8. The last step: Remove files from additional language packs

If you installed additional language packs like Swedish, Finnish or Turkish, there might be some files left which cause trouble in the administration (forum thread). You need to make sure that the following folders are gone:

* /system/admin/plugins/menu/menuAdmin/
* /system/admin/plugins/menu/menuTree/

Probably there are more folders left which can cause problems. If you cannot open an admin screen after upgrading ("not found"), please post the URL in our forum, we will fix the problem together!

# 9. Browser Cache

If you experience problems using the administration and/or editor, please clear your browser cache to make sure the new stylesheets are used.

# 10. Congratulations

You did it! Yes, this update was a tough one, I know. Sorry folks, but the UTF-8 conversion was a required step for future development.
simplefix
readthedoc
Ruby
SimpleFIX 1.0.17 documentation

SimpleFIX[¶](#simplefix)
===

Introduction[¶](#introduction)
---

[FIX](http://www.fixtradingcommunity.org/pg/structure/tech-specs/fix-protocol) (Financial Information eXchange) Protocol is a widely-used, text-based protocol for interaction between parties in financial trading. Banks, brokers, clearing firms, exchanges, and other general market participants use FIX protocol for all phases of electronic trading.

Typically, a FIX implementation exists as a FIX Engine: a standalone service that acts as a gateway for other applications (matching engines, trading algos, etc) and implements the FIX protocol. The most popular Open Source FIX engine is probably one of the versions of [QuickFIX](https://github.com/quickfix/quickfix).

This package provides a *simple* implementation of the FIX application-layer protocol. It does no socket handling, and does not implement FIX recovery or any message persistence. It supports the creation, encoding, and decoding of FIX messages.

Installation[¶](#installation)
---

simplefix has a few dependencies. Firstly, it is known to run on [Python](http://www.python.org/) 3.6 through to 3.11. It will not run on Python 3.5 or earlier versions, including Python 2.7.

You can install it using [pip](http://www.pip-installer.org/):

```
$ pip install simplefix
```

or using [easy_install](http://github.com/pypa/setuptools):

```
$ easy_install simplefix
```

It's usually a good idea to install simplefix into a virtualenv, to avoid issues with incompatible versions and system packaging schemes.

### Getting the code[¶](#getting-the-code)

You can also get the code from [PyPI](https://pypi.org/project/simplefix/) or [GitHub](https://github.com/da4089/simplefix). You can either clone the public repository:

```
$ git clone git://github.com/da4089/simplefix.git
```

Download the tarball:

```
$ curl -OL https://github.com/da4089/simplefix/tarball/master
```

Or, download the zipball:

```
$ curl -OL https://github.com/da4089/simplefix/zipball/master
```

Once you have a copy of the source you can install it into your site-packages easily:

```
$ python setup.py install
```

Importing[¶](#importing)
---

You can import the *simplefix* module maintaining its internal structure, or you can import some or all bindings directly.

```
import simplefix

message = simplefix.FixMessage()
```

Or

```
from simplefix import *

message = FixMessage()
```

Note that the "import *" form is explicitly supported, with the exposed namespace explicitly managed to contain only the public features of the package. All the example code in this document will use the first form, which is recommended.

Creating Messages[¶](#creating-messages)
---

To create a FIX message, first create an instance of the FixMessage class.

```
message = simplefix.FixMessage()
```

You can then add fields to the message as required. You should add the standard header tags 8, 34, 35, 49, 52, and 56 to all messages, unless you're deliberately creating a malformed message for testing or similar.

### Simple Fields[¶](#simple-fields)

For most tags, using `append_pair()` is the easiest way to add a field to the message.

```
message.append_pair(1, "MC435967")
message.append_pair(54, 1)
message.append_pair(44, 37.0582)
```

Note that any type of value can be used: it will be explicitly converted to a string before encoding the message.
With a few exceptions, the message retains the order in which fields are added. The exceptions are that fields BeginString (8), BodyLength (9), MsgType (35), and Checksum (10) are encoded in their required locations, regardless of what order they were added to the Message.

### Header Fields[¶](#header-fields)

The `FixMessage` class does not distinguish header fields from body fields, with one exception. To enable fields to be added to the FIX header after body fields have already been added, there's an optional keyword parameter to the `append_pair()` method (and other append field methods). If this `header` parameter is set to `True`, the field is inserted after any previously added header fields, starting at the beginning of the message.

This is normally used for setting things like MsgSeqNum (34) and SendingTime (52) immediately prior to encoding and sending the message.

```
message.append_pair(8, "FIX.4.4")
message.append_pair(35, 0)
message.append_pair(49, "SENDER")
message.append_pair(56, "TARGET")
message.append_pair(112, "TR0003692")
message.append_pair(34, 4684, header=True)
message.append_time(52, header=True)
```

In the example above, field 34 would be inserted at the beginning of the message. After encoding, the order of fields would be: 8, 9, 35, 34, 52, 49, 56, 112, 10.

It's not necessary, but fields 49 and 56 could also be written with `header` set `True`, in which case they'd precede 34 and 52 when encoded.

See `append_utc_timestamp()` below for details of that method.

### Pre-composed Pairs[¶](#pre-composed-pairs)

In some cases, your FIX application might have the message content as pre-composed "tag=value" strings. In this case, as an optimisation, the `append_string()` or `append_strings()` methods can be used.

```
BEGIN_STRING = "8=FIX.4.2"
STR_SEQ = ["49=SENDER", "56=TARGET"]

message.append_string(BEGIN_STRING, header=True)
message.append_strings(STR_SEQ, header=True)
```

As with `append_pair()`, note that these methods have an optional keyword parameter to ensure that their fields are inserted before body fields.

### Timestamps[¶](#timestamps)

The FIX protocol defines four time types: UTCTimestamp, UTCTimeOnly, TZTimestamp, and TZTimeOnly. Field values of these types can be added using dedicated functions, avoiding the need to translate and format time values in the application code.

```
message.append_utc_timestamp(52, precision=6, header=True)
message.append_tz_timestamp(1132, my_datetime)
message.append_utc_time_only(1495, start_time)
message.append_tz_time_only(1079, maturity_time)
```

The first parameter to these functions is the field's tag number. The second parameter is optional: if None or not supplied, it defaults to the current time, otherwise it must be a Unix epoch time (like from `time.time()`), or a `datetime` instance.

There are two keyword parameters: `precision` which can be 0 for just seconds, 3 for milliseconds, or 6 for microseconds; and `header` to insert this field in the header rather than the body.

In addition, there is a set of methods for creating correctly formatted time only values from their components:

```
message.append_utc_time_only_parts(1495, 7, 0, 0, 0, 0)
message.append_tz_time_only_parts(1079, 20, 0, 0, offset=-300)
```

As usual, the first parameter to these functions is the field's tag number. The next three parameters are the hour, minute, and seconds of the time value, followed by optional milliseconds and microseconds values.

The timezone for the TZTimeOnly field is set using an offset value, the number of minutes east of UTC. Thus CET will be offset 60 minutes, and New York offset -240 minutes (four hours west).

Finally, remember that time fields can always be set using a string value if the application already has the value in the correct format or prefers to manage the formatting itself.

### Repeating Groups[¶](#repeating-groups)

There is no specific support for creating repeating groups in FixMessages. The count field must be appended first, followed by the group's member fields. Consequently, it's not an error to append two fields with the same tag, but note that the count fields are not added automatically.
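As a minimal sketch, here is a two-element repeating group built by hand. The tag numbers are assumptions chosen for illustration: 267 (NoMDEntryTypes) is the standard count field and 269 (MDEntryType) the member field of a Market Data Request (35=V); any tags would work the same way.

```
import simplefix

message = simplefix.FixMessage()
message.append_pair(8, "FIX.4.2")
message.append_pair(35, "V")
message.append_pair(267, 2)    # group count field: appended first, by hand
message.append_pair(269, "0")  # first group member (bid)
message.append_pair(269, "1")  # second group member (offer)
```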
### Data Fields[¶](#data-fields)

There are numerous defined fields in the FIX protocol that use the *data* type. These fields consist of two parts: a length, which must come first, immediately followed by the value field, whose value may include the ASCII SOH character, the ASCII NUL character, and in fact any 8-bit byte value.

To append a data field to a message, the `append_data()` method can be used. It will correctly add both the length field and the value field.

```
message.append_data(95, 96, "RAW DATA \x00\x01 VALUE")
```

which will result in the FIX message content (where | represents the SOH):

> 95=17|96=RAW DATA \x00\x01 VALUE|

Encoding[¶](#encoding)
---

Once all fields are set, calling `encode()` will return a byte buffer containing the correctly formatted FIX message, with fields in the required order, and automatically added and set values for the BodyLength (9) and Checksum (10) fields.

```
byte_buffer = message.encode()
```

### Raw Mode[¶](#raw-mode)

Note that if you want to manually control the ordering of all fields, or the value of the BodyLength (9) or Checksum (10) fields, there's a 'raw' flag to the `encode()` method that disables the default automatic functionality.

```
byte_buffer = message.encode(True)
```

This is primarily useful for creating known-bad messages for testing purposes.

Parsing Messages[¶](#parsing-messages)
---

To parse a FIX message, first create an instance of the FixParser class.

```
parser = simplefix.FixParser()
```

To extract FIX messages from a byte buffer, such as that received from a socket, you should append it to the internal reassembly buffer using `append_buffer()`. At any time, you can call `get_message()`: if there's no complete message in the parser's internal buffer, it'll return `None`, otherwise, it'll return a `FixMessage` instance.

```
parser.append_buffer(response_from_socket)
message = parser.get_message()
```
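In practice, the append/extract cycle usually runs in a loop. A minimal sketch, assuming `sock` is an already-connected TCP socket and `handle()` is a hypothetical application callback:

```
import simplefix

parser = simplefix.FixParser()

while True:
    data = sock.recv(8192)
    if not data:
        break  # peer closed the connection

    parser.append_buffer(data)

    # Drain every complete message reassembled so far; a partial
    # message stays buffered until more bytes arrive.
    msg = parser.get_message()
    while msg is not None:
        handle(msg)  # hypothetical application callback
        msg = parser.get_message()
```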
### Parser Options[¶](#parser-options)

By default, the parser is quite forgiving, and will attempt to extract FIX messages from the supplied buffer without requiring strict adherence to the standard. In some cases, however, it can be useful to instruct the parser to be more or less strict in particular ways, depending on the protocol implementation you're dealing with. The parser's constructor accepts several keyword arguments, each controlling a specific aspect of the parser's behaviour. These options can also be configured using individual methods on the parser object.

#### Empty Values[¶](#empty-values)

The FIX standards explicitly prohibit the use of empty (zero-length) values. In practice, however, they are sometimes seen, and this option allows them to be parsed. By default, a message containing an empty value (an equals sign followed immediately by the field terminator) raises the `EmptyValueError` exception. This option prevents that exception, and returns an empty string value instead.

```
parser = simplefix.FixParser(allow_empty_values=True)
```

or

```
parser = simplefix.FixParser()
parser.set_allow_empty_values(True)
```

#### Missing BeginString[¶](#missing-beginstring)

The *BeginString(8)* tag is required by the standard to be the first field of all messages: always present, and always first. By default, the parser ensures that this is the case. This option disables that check.

```
parser = simplefix.FixParser(allow_missing_begin_string=True)
```

or

```
parser = simplefix.FixParser()
parser.set_allow_missing_begin_string(True)
```

Note: see Strip Fields Before BeginString below for restrictions on combining that option with this one.

#### Strip Fields Before BeginString[¶](#strip-fields-before-beginstring)

In some cases, message reception timestamps, inbound/outbound direction flags, or other data might be encoded as "FIX" fields prior to the *BeginString(8)*. This option instructs the parser to discard any fields found before the *BeginString(8)* when parsing.

```
parser = simplefix.FixParser(strip_fields_before_begin_string=True)
```

or

```
parser = simplefix.FixParser()
parser.set_strip_fields_before_begin_string(True)
```

Note: this option cannot be combined with `allow_missing_begin_string`, as it requires a *BeginString(8)* field to know when to stop stripping.
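These options can be combined in a single constructor call. As a minimal sketch, a parser for a feed that both contains empty values and prefixes each message with bookkeeping fields might be configured as follows. Combining these two particular options is an assumption: only the combination of stripping with `allow_missing_begin_string` is stated to be invalid above.

```
import simplefix

# Sketch: tolerate empty values, and discard any bookkeeping fields
# encoded before BeginString(8). Assumes these two options are valid
# together; only stripping plus allow_missing_begin_string is
# documented as an invalid combination.
parser = simplefix.FixParser(
    allow_empty_values=True,
    strip_fields_before_begin_string=True,
)
```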
### Parser Errors[¶](#parser-errors)

The `get_message()` method on the parser attempts to decode a FIX message from its internal reassembly buffer. It is not an error for the reassembly buffer to hold no message, or only an incomplete message, when it is called: in these cases, `get_message()` simply returns `None`. However, if the parser is unable to successfully decode a message, or if any configured validation checks fail, the parser will raise an exception to report the problem. The possible exceptions are:

*exception* `EmptyValueError`[¶](#EmptyValueError)

The parser read a field where the equals sign was followed immediately by the field terminator byte (`SOH`). This is not permitted by the FIX standard. Use the `allow_empty_values` parser option to override this prohibition.

*exception* `FieldOrderError`[¶](#FieldOrderError)

The FIX standard requires messages to contain some tags in a specific order and position. For instance, *BeginString(8)*, *BodyLength(9)*, and *MsgType(35)* must occur in that order at the start of the message. This exception indicates that a tag was seen in an unexpected order, or that a tag was not seen where it was expected.

*exception* `IncompleteTagError`[¶](#IncompleteTagError)

When the parser is configured with `stop_byte`, this exception indicates that the stop byte was read part-way through reading a tag: that is, following a field terminator (`SOH`) and one or more tag digits, but before the equals sign. This normally indicates a corrupted message.

*exception* `RawLengthNotNumberError`[¶](#RawLengthNotNumberError)

Raw data is encoded using two fields: a length field followed by the value field. This exception indicates that a field whose tag number is registered as a raw data *length* field was parsed, but that its value could not be decoded as a positive integer as expected. Usually, this means that the message being parsed uses a tag number that the FIX standard reserves for a raw data length field for some other purpose. See `simplefix.FixParser.remove_raw()` for a way to change the set of tag numbers expected to be raw data lengths and values.

*exception* `TagNotNumberError`[¶](#TagNotNumberError)

A field was parsed where the tag value, between the previous field terminator byte (`SOH`) and the equals sign, could not be converted to a positive integer. This normally results from a corrupted message: it is common during development but rare in production, and may suggest problems reassembling the byte stream at the socket layer.
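When consuming a feed of uncertain quality, it can be useful to catch these exceptions around `get_message()` so that one bad message does not abort the session. The following is a minimal sketch; it assumes the exception classes are exposed on the `simplefix` package namespace, and the `log` object is illustrative.

```
import logging
import simplefix

log = logging.getLogger("fix")

def next_message(parser):
    """Return the next complete FixMessage, or None if there is no
    complete message or the next message could not be decoded."""
    try:
        return parser.get_message()
    except (simplefix.EmptyValueError,
            simplefix.FieldOrderError,
            simplefix.TagNotNumberError) as exc:
        # A corrupted or non-conforming message: log it and move on.
        log.warning("discarding undecodable FIX message: %s", exc)
        return None
```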
Change Log[¶](#change-log)
---

### v1.0.17 (2023-09-12)[¶](#v1-0-17-2023-09-12)
* Fix checksum calculation bug introduced in v1.0.16. This will break any usage that relies on simplefix calculating the checksum value: most users will need to upgrade.

### v1.0.16 (2023-09-08)[¶](#v1-0-16-2023-09-08)
* Add missing EXECTYPE constants
* Better conversion to string (#40)
* Better installation instructions (#45)
* Add testing for large (64 bit) integer values (#52)
* Fixed handling of IntEnum tag values (#56)
* Added testing for CPython 3.11 (Released: 2022-10-24)
* Dropped testing for Python 3.6 (EOL: 2021-12-31)

### v1.0.15 (2022-02-17)[¶](#v1-0-15-2022-02-17)
* Add framework for parser options
* Add parsing error exceptions
* Support parsing of empty values (#34)
* Updated programmer's guide
* Added testing for CPython 3.10 (Released: 2021-10-04)
* Removed testing for:
  + CPython 2.6 (EOL: 2013-10-29)
  + CPython 2.7 (EOL: 2020-01-01)
  + CPython 3.3 (EOL: 2017-09-29)
  + CPython 3.4 (EOL: 2019-03-18)
  + CPython 3.5 (EOL: 2020-09-30)

### v1.0.14 (2020-04-30)[¶](#v1-0-14-2020-04-30)
* Fix typo in constant
* Add additional tags

### v1.0.13 (2020-02-19)[¶](#v1-0-13-2020-02-19)
* Allow configuration of alternative end-of-message indicators. This is useful for parsing log files or mangled FIX with a non-standard terminating tag.
* Added various tags and their values (thanks Christian Oudard).
* Added testing for CPython 3.8 (Released: 2019-10-14)
* Dropped testing for CPython 3.3 (EOL: 2017-09-29)

### v1.0.12 (2018-11-26)[¶](#v1-0-12-2018-11-26)
* Fix parser issue when parsing a message where the data field length is parsed in one call to append, but the content field is appended and parsed later (i.e. append, parse -> None, append, parse -> msg).

### v1.0.11[¶](#v1-0-11)
* *Never released*

### v1.0.10 (2018-09-28)[¶](#v1-0-10-2018-09-28)
* Fix a few issues pointed out by LGTM.
* Added testing for CPython 3.7 (Released: 2018-06-27)

### v1.0.9 (2018-02-16)[¶](#v1-0-9-2018-02-16)
* Added new remove() function to delete a field from a message.
* Added new __str__() special method, useful for showing a message in logging or debugging.
* Linked to <https://simplefix.readthedocs.io> from the README, hopefully making the detailed docs more visible.
* Added more constant values from the FIX specifications.

### v1.0.8 (2017-12-15)[¶](#v1-0-8-2017-12-15)
* Added support for Python 2.6, to support RHEL6/CentOS6 which doesn't reach EOL until November 2020.
* Added support for `in` and `not in` tests for tag numbers in messages.
* Adding a field with a value of None will now silently fail.
* Unless it's preceded by a length field, a data type value will be treated as a standard (string) value.

### v1.0.7 (2017-11-13)[¶](#v1-0-7-2017-11-13)
* Some major changes to the use of strings (vs. bytes) for Python 3.x: all received values are now exported as bytes, and input values are transformed to bytes using UTF-8 encoding for strings and ASCII encoding for everything else. If you want to use a different encoding, transform to bytes yourself first, although you should probably be using the FIX DATA type for encoded values anyway.
* Also a major expansion/rewrite of date and time value handling, adding methods covering all the FIX date/time types properly. The existing append_time method is deprecated, in favour of more specifically named methods for UTC and local timezones, and for datetime, date-only, and time-only values.

### v1.0.6 (2017-08-24)[¶](#v1-0-6-2017-08-24)
* Add support for adding "header" fields: they are inserted, starting at the beginning of the message, prior to any existing fields. This allows FIX header fields, for instance SendingTime(52), to be added after the body fields.

### v1.0.5 (2017-07-19)[¶](#v1-0-5-2017-07-19)
* Fix error in timestamp formatting
* Improved documentation

### v1.0.4 (2017-07-07)[¶](#v1-0-4-2017-07-07)
* Flag release as Production/Stable.
* Added handling of FIX 'data' type fields to the parser. Data fields can contain arbitrary data, including the SOH character, and were not previously supported.
* Added testing for CPython 3.6 (Released: 2016-12-23)

### v1.0.3 (2017-01-17)[¶](#v1-0-3-2017-01-17)
* Added ability to iterate over the fields in a message.
* More test coverage.

### v1.0.2 (2016-12-10)[¶](#v1-0-2-2016-12-10)
* Changes to raw mode, now supported only for `encode()`.
* Improved test coverage.

### v1.0.1 (2016-12-08)[¶](#v1-0-1-2016-12-08)
* Added software license.

### v1.0.0 (2016-12-07)[¶](#v1-0-0-2016-12-07)
* Initial release
multisensi
cran
R
Package ‘multisensi’

October 13, 2022

Type Package
Title Multivariate Sensitivity Analysis
Version 2.1-1
Date 2018-04-04
Author <NAME> <<EMAIL>>, <NAME> <<EMAIL>>, <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Description Functions to perform sensitivity analysis on a model with multivariate output.
License CeCILL-2
Repository CRAN
LazyLoad yes
Depends R (>= 2.8.0)
Suggests MASS
Imports stats, graphics, utils, grDevices, sensitivity, knitr
VignetteBuilder knitr
Encoding UTF-8
NeedsCompilation no
Date/Publication 2018-04-10 10:27:07 UTC

R topics documented:
multisensi-package, analysis.anoasg, analysis.sensitivity, basis.ACP, basis.bsplines, basis.mine, basis.osplines, basis.poly, biomasse, biomasseX, biomasseY, bspline, Climat, dynsi, graph.bar, graph.pc, grpe.gsi, gsi, multisensi, multivar, planfact, planfact.as, plot.dynsi, plot.gsi, predict.gsi, print.dynsi, print.gsi, quality, sesBsplinesNORM, sesBsplinesORTHONORM, simulmodel, summary.dynsi, summary.gsi, yapprox

multisensi-package    Multivariate Sensitivity Analysis

Description
Sensitivity Analysis (SA) for models with multivariate output.

Details
This package generalises sensitivity analysis to simulation models with multivariate output. It makes it easy to run a series of independent sensitivity analyses on a set of output variables and to plot the results. Alternatively, it allows sensitivity analyses to be applied to the variables resulting from the application of a multivariate method (such as PCA, splines, or polynomial regression) to the output data (Lamboni et al., 2009). The function multisensi integrates all the different methods implemented in the package. Besides, the user may consider the functions which have existed since the first version of the package:

i) the gsi function for Generalised Sensitivity Analysis (Lamboni et al., 2011; <NAME> Li, 2016), based on inertia decomposition. This method synthesizes the information that is spread between the time outputs or between the principal components, and produces a unique sensitivity index for each factor.

ii) the gsi function for componentwise sensitivity analysis, obtained by computing sensitivity indices on principal components (Campbell et al., 2006).

iii) the dynsi function for dynamic sensitivity analysis, obtained by computing sensitivity indices on each output variable.

In the first version of multisensi, sensitivity indices were based on a factorial design and a classical ANOVA decomposition. It is now possible to use other methods for the design and for the sensitivity analysis.

Simulation model management
The multisensi package works on simulation models coded either in R or in an external language (typically as an executable file). Models coded in R must be either functions or objects that have a predict method, such as lm objects. Models defined as functions will be called once with an expression of the form y <- f(X), where X is a vector containing a combination of levels of the input factors, and y is the output vector of length q, where q is the number of output variables. If the model is external to R, for instance a computational code, it must be analyzed with the decoupled approach: the methods require an input data frame (X) containing all the combinations of the input levels, and an output data frame (Y) containing the responses of the model corresponding to these combinations.
The size of X is n x p and the size of Y is n x q, where p is the number of input factors, q is the number of model outputs, and n is the number of combinations of the input levels. This approach can also be used on R models that do not fit the required specifications.

References
<NAME>., <NAME>., <NAME>., 2009. Multivariate global sensitivity analysis for dynamic crop models. Field Crops Research, volume 113, pp. 312-320.
<NAME>., <NAME>., <NAME>., 2011. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models. Reliability Engineering & System Safety, volume 96, pp. 450-459.
<NAME>., <NAME>., 2016. Discussion of paper by <NAME>, <NAME>, <NAME> "Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models", Reliab. Eng. Syst. Saf. 96 (2011) 450-459. Reliability Engineering & System Safety, volume 147, pp. 194-195.
<NAME>., <NAME>., <NAME>. eds, 2000. Sensitivity Analysis. Wiley, New York.

analysis.anoasg    Runs a series of analyses of variance

Description
The analysis.anoasg function runs a series of analyses of variance on the columns of a data.frame, using the aov function.

Usage
analysis.anoasg(Y, plan, nbcomp = 2, sigma.car = NULL,
  analysis.args = list(formula = 2, keep.outputs = FALSE))

Arguments
Y a data.frame of output variables or principal components.
plan a data.frame containing the design.
nbcomp the number of Y variables to analyse (the first nbcomp variables of Y will be analysed).
sigma.car NULL, or the sum of squares of Y. If not NULL, the Generalised Sensitivity Indices are computed (saved in the last column of the mSI/tSI/iSI output data.frames).
analysis.args a list of arguments. The formula component is an ANOVA formula like "A+B+C+A:B", OR an integer giving the maximum interaction order (1 for main effects). If it contains keep.outputs=TRUE, the outputs associated with the analysis of each variable are returned (see section Value).

Value
A list containing:
SI data.frame of sensitivity indices
mSI data.frame of first-order sensitivity indices
tSI data.frame of total sensitivity indices
iSI data.frame of interaction sensitivity indices
inertia vector of inertia explained by the variables
indic.fact 0-1 matrix indicating the factors associated with each factorial effect
Hpredict prediction of outputs
outputkept if analysis.args$keep.outputs=TRUE, list of the outputs returned by the sensitivity analysis performed on each variable
call.info list with first element analysis="anova"

See Also
aov

Examples
# Test case : the Winter Wheat Dynamic Models (WWDM)
# input factors design
data(biomasseX)
# output variables (precalculated to speed up the example)
data(biomasseY)
res <- analysis.anoasg(biomasseY, biomasseX, nbcomp = 2, sigma.car = NULL,
  analysis.args = list(formula = 2, keep.outputs = FALSE))

analysis.sensitivity    Runs a series of sensitivity analyses by a function from the sensitivity package

Description
The analysis.sensitivity function runs a series of sensitivity analyses on the columns of a data.frame, using a method implemented in the sensitivity package.

Usage
analysis.sensitivity(Y, plan, nbcomp = 2, sigma.car = NULL,
  analysis.args = list(keep.outputs = FALSE))

Arguments
Y a data.frame of output variables or principal components.
plan an object containing the design. It must be created by a function from the sensitivity package with argument model=NULL.
nbcomp the number of Y variables to analyse (the first nbcomp variables of Y will be analysed).
sigma.car NULL, or the sum of squares of Y. If not NULL, the Generalised Sensitivity Indices are computed (saved in the last column of the mSI/tSI/iSI output data.frames).
analysis.args a list of arguments. If it contains keep.outputs=TRUE, the outputs associated with the analysis of each variable are returned (see section Value).

Details
The argument plan must be an object created by a method implemented in the sensitivity package. Thus it belongs to a class such as morris or fast99. The name of the class is stored in the element call.info$fct of the output returned by analysis.sensitivity.

Value
A list containing:
SI data.frame of sensitivity indices or other importance measures returned by the function from the sensitivity package used. Sometimes empty, but kept for compatibility reasons.
mSI data.frame of first-order sensitivity indices
tSI data.frame of total sensitivity indices
iSI data.frame of interaction sensitivity indices
inertia empty (kept for compatibility reasons)
indic.fact 0-1 matrix indicating the factors associated with each factorial effect
Hpredict empty (kept for compatibility reasons)
outputkept if analysis.args$keep.outputs=TRUE, list of the outputs returned by the sensitivity analysis performed on each variable
call.info list with first element analysis="sensitivity" and second element fct storing the class name of the argument plan

Examples
# Test case : the Winter Wheat Dynamic Models (WWDM)
library(sensitivity) # to use fast99
# input factors design
data(biomasseX)
# input climate variable
data(Climat)
# example of the sensitivity::fast99 function
# design
newplan <- fast99(model = NULL, factors = names(biomasseX), n = 100,
  q = "qunif", q.arg = list(list(min = 0.9, max = 2.8),
    list(min = 0.9, max = 0.99), list(min = 0.6, max = 0.8),
    list(min = 3, max = 12), list(min = 0.0035, max = 0.01),
    list(min = 0.0011, max = 0.0025), list(min = 700, max = 1100)))
# simulations
wwdm.Y <- simulmodel(model=biomasse, plan=newplan$X, climdata=Climat)
# analysis
res <- analysis.sensitivity(data.frame(wwdm.Y), plan=newplan, nbcomp=4)

basis.ACP    A function to decompose multivariate data by principal components analysis (PCA)

Description
The basis.ACP function decomposes a multivariate data set according to principal components analysis.

Usage
basis.ACP(simuls, basis.args = list())

Arguments
simuls a data.frame of size N x T, typically a set of N simulation outputs of length T.
basis.args an empty list of arguments for the PCA decomposition.

Details
This function uses prcomp.

Value
H a data.frame of size N x T, containing the coefficients of the PCA decomposition. It is equal to the x output of function prcomp.
L a matrix of size T x T. It contains the eigenvectors of the PCA decomposition.
call.info list with the element reduction="pca"

See Also
prcomp

Examples
data(biomasseY)
res <- basis.ACP(biomasseY)

basis.bsplines    A function to decompose multivariate data on a B-spline basis

Description
The basis.bsplines function decomposes a multivariate data set on a B-spline basis defined by its knots and mdegree parameters.

Usage
basis.bsplines(simuls, basis.args = list(knots = 5, mdegree = 3))

Arguments
simuls a data.frame of size N x T, typically a set of N simulation outputs of length T.
basis.args a list of arguments for the B-spline decomposition. The knots argument is the number of knots or the vector of knot positions. The mdegree argument is the polynomial degree. For the optional x.coord argument, see the Details section.
Details
The optional x.coord element of the list in basis.args can be used to specify the support of the B-spline decomposition, if different from 1:T. It must be a vector of length T.

Value
H a data.frame of size N x d, where d is the dimension of the B-spline decomposition. It contains the coefficients of the decomposition for each row of the simuls data.frame.
L a matrix of size T x d. It contains the vectors of the B-spline basis.
call.info list with the element reduction="b-splines"

See Also
bspline, sesBsplinesNORM

Examples
data(biomasseY)
res <- basis.bsplines(biomasseY, basis.args=list(knots=7, mdegree=3))

basis.mine    A function to decompose multivariate data on a user-defined basis

Description
The basis.mine function decomposes a multivariate data set on a user-defined basis.

Usage
basis.mine(simuls, basis.args =
  list(baseL = 1*outer(sort(0:(ncol(simuls)-1)%%5), 0:4, "==")))

Arguments
simuls a data.frame of size N x T, typically a set of N simulation outputs of length T.
basis.args a list of arguments for the decomposition. The baseL argument is a matrix of size T x d containing the coordinates of the d basis vectors.

Details
The default basis.args argument generates a projection on a moving-average basis. But if this basis.args argument is not given to the multisensi function when reduction=basis.mine, the execution will be stopped.

Value
H a data.frame of size N x d, where d is the number of basis vectors. It contains the coefficients of the decomposition for each row of the simuls data.frame.
L a matrix of size T x d. It contains the vectors of the user-defined basis.
call.info list with the element reduction="matrix"

Examples
data(biomasseY)
M <- 1*outer(sort(0:(ncol(biomasseY)-1)%%5), 0:4, "==")
norm.M <- sqrt(colSums(M^2))
for (i in 1:ncol(M)){ M[,i] = M[,i]/norm.M[i] }
res <- basis.mine(biomasseY, basis.args=list(baseL=M))

basis.osplines    A function to decompose multivariate data on an orthogonal B-spline basis (O-spline)

Description
The basis.osplines function decomposes a multivariate data set on an orthogonalised B-spline (or O-spline) basis defined by its knots and mdegree parameters.

Usage
basis.osplines(simuls, basis.args = list(knots = 5, mdegree = 3))

Arguments
simuls a data.frame of size N x T, typically a set of N simulation outputs of length T.
basis.args a list of arguments for the O-spline decomposition. The knots argument is the number of knots or the vector of knot positions. The mdegree argument is the polynomial degree. For the optional x.coord argument, see the Details section.

Details
The optional x.coord element of the list in basis.args can be used to specify the support of the O-spline decomposition, if different from 1:T. It must be a vector of length T.

Value
H a data.frame of size N x d, where d is the dimension of the O-spline decomposition. It contains the coefficients of the decomposition for each row of the simuls data.frame.
L a matrix of size T x d. It contains the vectors of the O-spline basis.
call.info list with the element reduction="o-splines"

See Also
bspline, sesBsplinesORTHONORM

Examples
data(biomasseY)
res <- basis.osplines(biomasseY, basis.args=list(knots=7, mdegree=3))

basis.poly    A function to decompose multivariate data on a polynomial basis

Description
The basis.poly function decomposes a multivariate data set on a polynomial basis.

Usage
basis.poly(simuls, basis.args = list(degree = 3))

Arguments
simuls a data.frame of size N x T, typically a set of N simulation outputs of length T.
basis.args a list of arguments for the polynomial decomposition. The degree argument is the maximum degree of the polynomial basis. For the optional x.coord argument, see the Details section.

Details
This function uses poly. The optional x.coord element of the list in basis.args can be used to specify the support of the polynomial decomposition, if different from 1:T. It must be a vector of length T.

Value
H a data.frame of size N x (d+1), where d is the degree of the polynomial decomposition. It contains the coefficients of the decomposition for each row of the simuls data.frame.
L a matrix of size T x (d+1). It contains the vectors of the polynomial basis.
call.info list with the element reduction="polynomial"

See Also
poly

Examples
data(biomasseY)
res <- basis.poly(biomasseY, basis.args=list(degree=3))

biomasse    The Winter Wheat Dynamic Model

Description
The Winter Wheat Dynamic Model, a toy model to illustrate the main multisensi methods.

Usage
biomasse(input, climdata, annee = 3)

Arguments
input vector of input values.
climdata a meteorological data.frame specific to biomasse.
annee year.

Details
The Winter Wheat Dry Matter model (WWDM) is a dynamic crop model running at a daily time step (Makowski et al., 2004). It has two state variables, the above-ground winter wheat dry matter U(t), in g/m2, and the leaf area index LAI(t), with t the day number from sowing (t = 1) to harvest (t = 223). In the multisensi package implementation, the biomasse function simulates the output for only one parameter set (the first row of input if it is a matrix or a data.frame).

Value
a vector of daily dry matter increase of the Winter Wheat biomass, over 223 days

References
<NAME>., <NAME>., <NAME>., 2004. Bayesian methods for updating crop model predictions, applications for predicting biomass and grain protein content. In: Bayesian Statistics and Quality Modelling in the Agro-Food Production Chain (van Boeakel et al. eds), pp. 57-68. Kluwer, Dordrecht.
<NAME>., <NAME>., <NAME>., 2006. Uncertainty and sensitivity analysis for crop models. In: Working with Dynamic Crop Models (<NAME>., <NAME>. and Jones J. eds), pp. 55-100. Elsevier, Amsterdam.

biomasseX    A factorial input design for the main example

Description
Factorial design (resolution V) data for the 7 WWDM model input factors.

Usage
data(biomasseX)

Format
A data frame with 2187 observations on the following 7 variables.
Eb first WWDM input factor
Eimax second WWDM input factor
K third WWDM input factor
Lmax fourth WWDM input factor
A fifth WWDM input factor
B sixth WWDM input factor
TI seventh WWDM input factor

See Also
biomasse, biomasseY

Examples
data(biomasseX)
## maybe str(biomasseX) ; plot(biomasseX) ...

biomasseY    Output of the biomasse model for the plan provided in the package

Description
Simplified output of the biomasse model (one column per decade), especially generated for examples in the package help files.

Usage
data(biomasseY)

Format
A data frame with 2187 rows and 22 output variables (one per decade).
See Also
biomasse, biomasseX

Examples
data(biomasseY)
dim(biomasseY)

bspline    Function to evaluate B-spline basis functions

Description
The bspline function evaluates the ith B-spline basis function of order m at the values in x, given knot locations in k.

Usage
bspline(x = seq(0, 1, len = 101), k = knots, i = 1, m = 2)

Arguments
x vector or scalar, coordinates where to calculate the B-spline functions
k vector of knot locations
i integer, from 0 to length(knots)+1-m
m integer, degree of the B-splines

Details
B-splines are defined by the recursion

\[ b_{i,0}(x) = \begin{cases} 1 & \text{if } k_i \le x < k_{i+1},\\ 0 & \text{otherwise,} \end{cases} \]

\[ b_{i,m}(x) = \frac{x - k_i}{k_{i+m} - k_i}\, b_{i,m-1}(x) + \frac{k_{i+m+1} - x}{k_{i+m+1} - k_{i+1}}\, b_{i+1,m-1}(x). \]

Value
values in x of the ith B-spline basis function of order m

Note
This is essentially an internal function for the multisensi package.

References
<NAME>, 2006. Generalized Additive Models: An Introduction with R. Chapman and Hall/CRC.

Climat    Climate data

Description
Climate data for the WWDM model (needed by the biomasse function).

Usage
data(Climat)

Format
A data frame with 3126 observations on the following 4 variables.
ANNEE a factor with levels 1 to 14, indicating 14 different years
RG daily radiation variable
Tmin daily minimum temperature
Tmax daily maximum temperature

Source
<NAME>., <NAME>., <NAME>., 2004. Bayesian methods for updating crop model predictions, applications for predicting biomass and grain protein content. In: Bayesian Statistics and Quality Modelling in the Agro-Food Production Chain (van Boeakel et al. eds), pp. 57-68. Kluwer, Dordrecht.
<NAME>., <NAME>., <NAME>., 2006. Uncertainty and sensitivity analysis for crop models. In: Working with Dynamic Crop Models (<NAME>., <NAME>. and <NAME>. eds), pp. 55-100. Elsevier, Amsterdam.

dynsi    Dynamic Sensitivity Indices: DSI

Description
dynsi implements the Dynamic Sensitivity Indices. This method computes classical Sensitivity Indices on each output variable of a dynamic or multivariate model, using the ANOVA decomposition.

Usage
dynsi(formula, model, factors, cumul = FALSE, simulonly = FALSE,
  nb.outp = NULL, Name.File = NULL, ...)

Arguments
formula ANOVA formula like "A+B+C+A:B", OR an integer equal to the maximum interaction order in the sensitivity model.
model output data.frame, OR the name of the R function which calculates the model output. The only argument of this function must be a vector containing the values of the input factors.
factors input data.frame (the design) if model is a data.frame, OR a list of factor levels such as factor.example <- list(A=c(0,1), B=c(0,1,4)).
cumul logical value. If TRUE the sensitivity analysis will be done on the cumulative outputs.
simulonly logical value. If TRUE the program stops after calculating the design and the model outputs.
nb.outp the first nb.outp number of model outputs to be considered. If NULL all the outputs are considered.
Name.File optional name of an R script file containing the R function that calculates the simulation model, e.g. "exc.ssc".
... possible fixed parameters of the model function.

Details
If factors is a list of factors, the dynsi function generates a complete factorial design. If it is a data.frame, dynsi expects that each column is associated with an input factor.
Value
dynsi returns a list of class "dynsi" containing the following components:
X a data.frame containing the experimental design (input samples)
Y a data.frame containing the output (response)
SI a data.frame containing the Sensitivity Indices (SI) on each output variable of the model and the Generalised SI (GSI)
mSI a data.frame of first-order SI on each output variable and first-order GSI
tSI a data.frame containing the total SI on each output variable and the total GSI
iSI a data.frame of interaction SI on each output variable and interaction GSI
Att 0-1 matrix of association between input factors and factorial terms in the anovas
call.info a list containing information on the process (reduction=NULL, analysis, fct, call)
inputdesign either the input data.frame or the sensitivity object used
outputs a list of results on each output variable
...

Note
This function can now be replaced by a call to the multisensi function. It is kept for compatibility with Version 1 of the multisensi package.

References
<NAME>, <NAME> and <NAME>, 2009. Multivariate global sensitivity analysis for dynamic crop models. Field Crops Research, 113, 312-320.
<NAME>, <NAME> and <NAME>, 2000. Sensitivity Analysis. Wiley, New York.

See Also
gsi, multisensi

Examples
# Test case : the Winter Wheat Dynamic Models (WWDM)
# input factors design
data(biomasseX)
# input Climate variables
data(Climat)
# output variables (precalculated to speed up the example)
data(biomasseY)
DYNSI <- dynsi(2, biomasseY, biomasseX)
summary(DYNSI)
print(DYNSI)
plot(DYNSI, color=heat.colors)
#graph.bar(DYNSI, col=1, beside=F) # sensitivity bar plot for the first output (col=1)
#graph.bar(DYNSI, col=2, xmax=1)

graph.bar    Sensitivity index bar plot

Description
A function that plots sensitivity indices as a bar graph.

Usage
graph.bar(x, col = 1, nb.plot = 15, xmax = NULL, beside = TRUE, xlab = NULL, ...)

Arguments
x an object of class gsi or dynsi
col the column number of GSI to represent in the bar graph
nb.plot number of input factors to be considered
xmax a user-defined maximal x value (x ≤ 1) in all the bar graphs that show sensitivity indices; or NULL if the user wants to keep default values
beside if TRUE, the main and total sensitivity indices are represented by two bars; if FALSE, they are represented by the same bar
xlab a label for the x axis
... graphical parameters

graph.pc    Principal Components graph for gsi objects

Description
A function that plots the Principal Components (PCs) and the sensitivity indices on each PC.

Usage
graph.pc(x, nb.plot = 15, nb.comp = NULL, xmax = NULL, beside = TRUE,
  cor.plot = FALSE, xtick = TRUE, type = "l", ...)

Arguments
x gsi object.
nb.plot number of input factors to be considered.
nb.comp number of PCs.
xmax a user-defined maximal x value (x ≤ 1) in all the bar graphs that show sensitivity indices; or NULL if the user wants to keep default values.
beside if TRUE, the main and total sensitivity indices are represented by two bars; if FALSE, they are represented by the same bar.
cor.plot if TRUE a correlation graph is made to represent the PCs; if FALSE (default) a functional boxplot of the PCs is plotted.
xtick if TRUE, put column names of outputs (Y) as ticks for the x axis.
type what type of plot should be drawn for the correlation graph ("l" for lines).
... graphical parameters.
grpe.gsi    Group factor GSI, obsolete function

Description
An obsolete function that computed the GSI of a group of factors as one factor.

Usage
grpe.gsi(GSI, fact.interet)

Arguments
GSI a gsi or dynsi object
fact.interet input factors to be grouped

Note
This is essentially an internal function for the multisensi package.

gsi    Generalised Sensitivity Indices: GSI

Description
The gsi function implements the calculation of Generalised Sensitivity Indices. This method computes a synthetic Sensitivity Index for dynamic or multivariate models, using factorial designs and the MANOVA decomposition of inertia. It also computes the Sensitivity Indices on the principal components.

Usage
gsi(formula, model, factors, inertia = 0.95, normalized = TRUE,
  cumul = FALSE, simulonly = FALSE, Name.File = NULL, ...)

Arguments
formula ANOVA formula like "A+B+C+A:B", OR an integer equal to the maximum interaction order in the sensitivity model.
model output data.frame, OR the name of the R function which calculates the model output. The only argument of this function must be a vector containing the values of the input factors.
factors input data.frame (the design) if model is a data.frame, OR a list of factor levels such as factor.example <- list(A=c(0,1), B=c(0,1,4)).
inertia cumulated proportion of inertia (a scalar < 1) to be explained by the selected principal components, OR the number of PCs to be used (e.g. 3).
normalized logical value. TRUE (default) computes a normalized Principal Component Analysis.
cumul logical value. If TRUE the PCA will be done on the cumulative outputs.
simulonly logical value. If TRUE the program stops after calculating the design and the model outputs.
Name.File optional name of an R script file containing the R function that calculates the simulation model, e.g. "exc.ssc".
... possible fixed parameters of the model function.

Details
If factors is a list of factors, the gsi function generates a complete factorial design. If it is a data.frame, gsi expects that each column is associated with an input factor.

Value
gsi returns a list of class "gsi", containing all the input arguments detailed before, plus the following components:
X a data.frame containing the experimental design (input samples)
Y a data.frame containing the output matrix (response)
H a data.frame containing the principal components
L a data.frame whose columns contain the basis eigenvectors (the variable loadings)
lambda the variances of the principal components
inertia vector of inertia percentages per PC and global criterion
cor a data.frame of correlations between PCs and outputs
SI a data.frame containing the Sensitivity Indices (SI) on PCs and the Generalised SI (GSI)
mSI a data.frame of first-order SI on PCs and first-order GSI
tSI a data.frame containing the total SI on PCs and the total GSI
iSI a data.frame of interaction SI on PCs and interaction GSI
pred a data.frame containing the output predicted by the metamodel arising from the PCA and anova decompositions
residuals a data.frame containing the residuals between actual and predicted outputs
Rsquare vector of dynamic coefficients of determination
Att 0-1 matrix of association between input factors and factorial terms in the anovas
scale logical value, see the arguments
normalized logical value, see the arguments
cumul logical value, see the arguments
call.info a list containing information on the process (reduction, analysis, fct, call)
inputdesign either the input data.frame or the sensitivity object used
outputs a list of results on each output variable
...
Note
This function can now be replaced by a call to the multisensi function. It is kept for compatibility with Version 1 of the multisensi package.

References
<NAME>, <NAME> and <NAME>, 2009. Multivariate global sensitivity analysis for dynamic crop models. Field Crops Research, volume 113, pp. 312-320.
<NAME>, <NAME> and <NAME>, 2009. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models. Submitted to Reliability Engineering and System Safety.

See Also
dynsi, multisensi

Examples
# Test case : the Winter Wheat Dynamic Models (WWDM)
# input factors design
data(biomasseX)
# input climate variable
data(Climat)
# output variables (precalculated to speed up the example)
data(biomasseY)
GSI <- gsi(2, biomasseY, biomasseX, inertia=3, normalized=TRUE,
  cumul=FALSE, climdata=Climat)
summary(GSI)
print(GSI)
plot(x=GSI, beside=FALSE)
#plot(GSI, nb.plot=4)  # the 'nb.plot' most influential factors are represented in the plots
#plot(GSI, nb.comp=2, xmax=1)  # nb.comp = number of principal components
#plot(GSI, nb.comp=3, graph=1) # graph=1 for first figure; 2 for 2nd one and 3 for 3rd one; or 1:3 etc.
#graph.bar(GSI, col=1, beside=F) # sensitivity bar plot on the first PC
#graph.bar(GSI, col=2, xmax=1)

multisensi    A function with multiple options to perform multivariate sensitivity analysis

Description
The multisensi function can conduct the different steps of a multivariate sensitivity analysis (design, simulation, dimension reduction, analysis, plots). It includes different options for each of these steps.

Usage
multisensi(design = expand.grid, model, reduction = basis.ACP, dimension = 0.95,
  center = TRUE, scale = TRUE, analysis = analysis.anoasg, cumul = FALSE,
  simulonly = FALSE, Name.File = NULL, design.args = list(),
  basis.args = list(), analysis.args = list(), ...)

Arguments
design EITHER a function such as expand.grid to generate the design, OR a data.frame of size N x P containing N combinations of levels of the P input factors, OR a function from the sensitivity package such as fast99, OR an object generated by a function from the sensitivity package. The first and third cases require additional information to be given in the design.args argument.
model EITHER a function to run the model simulations, OR a data.frame of size N x T containing N realizations of T output variables.
reduction EITHER a function to decompose the multivariate output on a basis of smaller dimension, OR NULL. The first case requires additional information to be given in the basis.args argument. In the second case, sensitivity analyses are performed on the raw output variables.
dimension EITHER the number of variables to analyse, specified by an integer or by the minimal proportion of inertia (a scalar < 1) to keep in the output decomposition, OR a vector specifying a subset of columns in the output data.frame, OR NULL if all variables must be analysed.
center logical value. If TRUE (default value) the output variables are centred.
scale logical value. If TRUE (default value) the output variables are normalized before applying the reduction function.
analysis a function to run the sensitivity analysis. Additional information can be given in the analysis.args argument.
cumul logical value. If TRUE the output variables are replaced by their cumulative sums.
simulonly logical value. If TRUE the program stops after the model simulations.
Name.File name of the file containing the R function model.
design.args a list of arguments for the function possibly given in the design argument.
basis.args a list of arguments for the function given in the reduction argument. See the function help for more details.
analysis.args a list of arguments for the function possibly given in the analysis argument. See the function help for more details.
... optional parameters of the function possibly given in the model argument.

Value
an object of class dynsi if reduction=NULL, otherwise an object of class gsi. See the functions dynsi and gsi for more information.

See Also
dynsi, gsi

Examples
## Test case : the Winter Wheat Dynamic Models (WWDM)
# input factors design
data(biomasseX)
# input climate variable
data(Climat)
# output variables (precalculated to speed up the example)
data(biomasseY)

# dynsi process: argument reduction=NULL
resD <- multisensi(design=biomasseX, model=biomasseY, reduction=NULL,
  dimension=NULL, analysis=analysis.anoasg,
  analysis.args=list(formula=2, keep.outputs=FALSE))
summary(resD)

# gsi process
#------------
# with dimension reduction by PCA: argument reduction=basis.ACP
resG1 <- multisensi(design=biomasseX, model=biomasseY, reduction=basis.ACP,
  dimension=0.95, analysis=analysis.anoasg,
  analysis.args=list(formula=2, keep.outputs=FALSE))
summary(resG1)
plot(x=resG1, beside=FALSE)
#------------
# with dimension reduction by an o-splines basis:
# arguments reduction=basis.osplines and basis.args=list(knots=..., mdegree=...)
resG2 <- multisensi(design=biomasseX, model=biomasseY, reduction=basis.osplines,
  dimension=NULL, center=FALSE, scale=FALSE,
  basis.args=list(knots=11, mdegree=3), analysis=analysis.anoasg,
  analysis.args=list(formula=2, keep.outputs=FALSE))
summary(resG2)
#------------
library(sensitivity) # to use fast99
# with dimension reduction by an o-splines basis
# and sensitivity analysis with sensitivity::fast99
resG3 <- multisensi(design=fast99, model=biomasse, analysis=analysis.sensitivity,
  design.args=list(factors = names(biomasseX), n = 100, q = "qunif",
    q.arg = list(list(min = 0.9, max = 2.8), list(min = 0.9, max = 0.99),
      list(min = 0.6, max = 0.8), list(min = 3, max = 12),
      list(min = 0.0035, max = 0.01), list(min = 0.0011, max = 0.0025),
      list(min = 700, max = 1100))),
  climdata=Climat, reduction=basis.osplines,
  basis.args=list(knots=7, mdegree=3),
  center=FALSE, scale=FALSE, dimension=NULL)
summary(resG3)

multivar    A function to decompose the output data set and reduce its dimension

Description
The function multivar applies a multivariate method to decompose the output variables on a given basis.

Usage
multivar(simuls, dimension = NULL, reduction, centered = TRUE,
  scale = TRUE, basis.args = list())

Arguments
simuls a data.frame of size N x T, typically a set of N simulation outputs of length T.
dimension the number of variables to analyse, specified by an integer (for example 3) or by the minimal proportion of inertia (for example 0.95) to keep in the output decomposition.
reduction a function to decompose the multivariate output on a basis of smaller dimension.
centered logical value. If TRUE the output variables are centred.
scale logical value. If TRUE the output variables are normalized.
basis.args a list of arguments for the function given in the reduction argument. See the function help for more details.

Value
A list containing:
H a data.frame of size N x d, where d is the number of basis vectors. It contains the coefficients of the decomposition for each row of the simuls data.frame.
L a matrix of size T x d. It contains the vectors of the basis.
sdev standard deviations of the columns of H
nbcomp number of components kept from the decomposition
SStot total sum of squares of the simulations (after application of centered and scale)
centering either 0 or the column averages of simuls
scaling either 1 or sdY, depending on the scale argument
sdY standard deviations of the columns of simuls
cor correlation matrix (L*sdev), of size T x nbcomp
scale kept in case the scale option has been changed in the function
importance cumulated percentage of SS_H (sdev^2) with respect to SStot
call.info list with the element reduction storing the name of the argument reduction

See Also
basis.ACP, basis.bsplines, basis.poly, basis.osplines

Examples
data(biomasseY)
res <- multivar(biomasseY, dimension=0.95, reduction=basis.ACP)

planfact    Complete factorial design in lexical order

Description
Function that generates a complete factorial design in lexical order.

Usage
planfact(nb.niv, make.factor = TRUE)

Arguments
nb.niv vector containing the number of levels of each input
make.factor logical value. If TRUE the columns of the output are of class factor

Value
plan data frame of the complete factorial design

Note
This is essentially an internal function for the multisensi package.

planfact.as    Complete factorial design

Description
Computation of a complete factorial design for model input factors.

Usage
planfact.as(input)

Arguments
input list of factor levels

Value
comp2 complete factorial design of model inputs

Note
This is essentially an internal function for the multisensi package. It is almost equivalent to the function expand.grid.

plot.dynsi    Plot method for dynamic sensitivity results

Description
Plot method for dynamic sensitivity results of class dynsi.

Usage
## S3 method for class 'dynsi'
plot(x, normalized=FALSE, text.tuning = NULL, shade=FALSE, color=NULL,
  xtick=TRUE, total.plot=FALSE, gsi.plot=TRUE, ...)

Arguments
x a dynsi object.
normalized logical value; if FALSE, the SI are plotted within var(Y).
text.tuning NULL or a small integer to improve the position of input factor labels.
shade if TRUE, use different shadings to distinguish the different factorial effects in the plot (slow).
color a palette of colors to distinguish the different factorial effects in the plot (for example color=heat.colors).
xtick if TRUE, put column names of outputs (Y) as ticks for the x axis.
total.plot logical value; if TRUE a new plot is produced with the total SI.
gsi.plot logical value; if TRUE a new plot is produced for the Generalised Sensitivity Index.
... graphical parameters.

Details
For labels that would be partly positioned outside the plot frame, the text.tuning argument may allow a better positioning. If it is equal to n, say, these labels are moved n positions inside the frame, where one position corresponds to one output variable on the x axis.

See Also
dynsi, multisensi

plot.gsi    Plot method for generalised sensitivity analysis

Description
Plot method for generalised sensitivity analysis of class gsi.

Usage
## S3 method for class 'gsi'
plot(x, nb.plot = 10, nb.comp = 3, graph = 1:3, xmax=NULL, beside=TRUE,
  cor.plot=FALSE, xtick=TRUE, type="l", ...)

Arguments
x a gsi object.
nb.plot number of input factors to be considered.
nb.comp number of Principal Components to be plotted.
graph figure numbers: 1, 2, or 3. 1 plots the PCs and their sensitivity indices, 2 plots the Generalised Sensitivity Index, and 3 plots the Rsquare.
xmax a user-defined maximal x value (x ≤ 1) in all the bar graphs that show sensitivity indices; or NULL if the user wants to keep default values.
beside if TRUE, the main and total sensitivity indices are represented by two bars; if FALSE, they are represented by the same bar.
cor.plot if TRUE a correlation graph is made to represent the PCs; if FALSE (default) a functional boxplot of the PCs is plotted.
xtick if TRUE, put column names of outputs (Y) as ticks for the x axis.
type what type of plot should be drawn for the correlation graph ("l" for lines).
... graphical parameters.

See Also
gsi, multisensi, graph.bar, graph.pc

predict.gsi    A function to predict multivariate output

Description
The function predict.gsi generates predicted multivariate output for user-specified combinations of levels of the input factors.

Usage
## S3 method for class 'gsi'
predict(object, newdata, ...)

Arguments
object object of class gsi.
newdata an optional data frame in which to look for variables with which to predict. If omitted, the fitted values are used. It must contain the same factors and levels as those used to obtain the gsi object.
... other parameters.

Details
Only available if the gsi object was obtained with analysis.anoasg and analysis.args$keep.outputs=TRUE.

Value
a data.frame of predicted values for newdata

See Also
gsi, multisensi, analysis.anoasg

Examples
data(biomasseX)
data(biomasseY)
x <- multisensi(design=biomasseX, model=biomasseY, reduction=basis.ACP,
  analysis=analysis.anoasg,
  analysis.args=list(formula=2, keep.outputs=TRUE))
newdata <- as.data.frame(apply(biomasseX, 2, unique))
predict(x, newdata)

print.dynsi    Print DYNSI

Description
A function to print DYNSI results.

Usage
## S3 method for class 'dynsi'
print(x, ...)

Arguments
x a dynsi object
... print parameters

See Also
dynsi, multisensi

print.gsi    Print GSI

Description
A function to print GSI results.

Usage
## S3 method for class 'gsi'
print(x, ...)

Arguments
x a gsi object
... print parameters

See Also
gsi, multisensi

quality    Quality of an approximation

Description
Function that computes the sensitivity quality after making some assumptions about the number of PCs and the number of interactions.

Usage
quality(echsimul, echsimul.app)

Arguments
echsimul model outputs
echsimul.app predicted model outputs

Value
A list with the following components:
moy.biais mean of the residuals
residuals bias values
coef.det R-square

Note
This is essentially an internal function for the multisensi package.

sesBsplinesNORM    Normalized B-spline basis functions

Description
The sesBsplinesNORM function evaluates B-spline basis functions at some points.

Usage
sesBsplinesNORM(x = seq(0, 1, len = 101), knots = 5, m = 2)

Arguments
x vector, coordinates where to calculate the B-spline functions
knots number of knots or vector of knot locations
m integer, degree of the B-splines

Value
x as input
bsplines matrix, values in x of all B-spline basis functions of order m
knots vector of knot locations
projecteur inverse matrix of bsplines

Note
This is essentially an internal function for the multisensi package.

See Also
bspline, basis.bsplines

sesBsplinesORTHONORM    Orthogonalized B-spline basis functions

Description
The sesBsplinesORTHONORM function evaluates O-spline basis functions at some points.
Usage
sesBsplinesORTHONORM(x = seq(0, 1, len = 101), knots = 5, m = 2)

Arguments
x vector, coordinates where to calculate the B-spline functions
knots number of knots or vector of knot locations
m integer, degree of the B-splines

Value
x as input
osplines matrix, values in x of all O-spline basis functions of order m
knots vector of knot locations
projecteur inverse matrix of osplines

Note
This is essentially an internal function for the multisensi package.

See Also
bspline, basis.osplines

simulmodel    Model simulation

Description
Function that simulates the model outputs.

Usage
simulmodel(model, plan, nomFic = NULL, verbose = FALSE, ...)

Arguments
model name of the R function
plan data frame of the input design
nomFic name of the file that contains the model function
verbose verbose flag
... possible fixed parameters of the R function

Details
The model function must be an R function. Models defined as functions will be called once with an expression of the form y <- f(X), where X is a vector containing a combination of levels of the input factors, and y is the output vector of length q, where q is the number of output variables.

Value
data frame of model outputs

Note
This is essentially an internal function for the multisensi package.

summary.dynsi    dynsi summary

Description
Function to summarize the dynamic sensitivity results.

Usage
## S3 method for class 'dynsi'
summary(object, ...)

Arguments
object a dynsi object
... summary parameters

See Also
dynsi, multisensi

summary.gsi    Summary of GSI results

Description
Function to summarize the GSI results.

Usage
## S3 method for class 'gsi'
summary(object, ...)

Arguments
object a gsi object
... summary parameters

See Also
gsi, multisensi

yapprox    Prediction based on PCA and anovas (NOT ONLY)

Description
A function that predicts the model output after PCA and aov analyses.

Usage
yapprox(multivar.obj, nbcomp = 2, aov.obj)

Arguments
multivar.obj output of the multivar function
nbcomp number of columns used
aov.obj aov object

Value
model output predictions

Note
This is essentially an internal function for the multisensi package.
arrow-array
rust
Rust
Crate arrow_array
===

The central types in Apache Arrow are arrays: known-length sequences of values, all having the same type. This crate provides concrete implementations of each type, as well as an `Array` trait that can be used for type-erasure.

Building an Array
---

Most `Array` implementations can be constructed directly from iterators or `Vec`

```
Int32Array::from(vec![1, 2]);
Int32Array::from(vec![Some(1), None]);
Int32Array::from_iter([1, 2, 3, 4]);
Int32Array::from_iter([Some(1), Some(2), None, Some(4)]);

StringArray::from(vec!["foo", "bar"]);
StringArray::from(vec![Some("foo"), None]);
StringArray::from_iter([Some("foo"), None]);
StringArray::from_iter_values(["foo", "bar"]);

ListArray::from_iter_primitive::<Int32Type, _, _>([
    Some(vec![Some(1), None, Some(3)]),
    None,
    Some(vec![])
]);
```

Additionally `ArrayBuilder` implementations can be used to construct arrays with a push-based interface

```
// Create a new builder with a capacity of 100
let mut builder = Int16Array::builder(100);

// Append a single primitive value
builder.append_value(1);

// Append a null value
builder.append_null();

// Append a slice of primitive values
builder.append_slice(&[2, 3, 4]);

// Build the array
let array = builder.finish();

assert_eq!(5, array.len());
assert_eq!(2, array.value(2));
assert_eq!(&array.values()[3..5], &[3, 4])
```

Low-level API
---

Internally, arrays consist of one or more shared memory regions backed by a `Buffer`; the number and meaning of these regions depend on the array's data type, as documented in the Arrow specification. For example, the type `Int16Array` represents an array of 16-bit integers and consists of:

* An optional `NullBuffer` identifying any null values
* A contiguous `ScalarBuffer<i16>` of values

Similarly, the type `StringArray` represents an array of UTF-8 strings and consists of:

* An optional `NullBuffer` identifying any null values
* An offsets `OffsetBuffer<i32>` identifying valid UTF-8 sequences within the values buffer
* A values `Buffer` of UTF-8 encoded string data

Array constructors such as `PrimitiveArray::try_new` provide the ability to cheaply construct an array from these parts, with functions such as `PrimitiveArray::into_parts` providing the reverse operation.

```
// Create a Int32Array from Vec without copying
let array = Int32Array::new(vec![1, 2, 3].into(), None);
assert_eq!(array.values(), &[1, 2, 3]);
assert_eq!(array.null_count(), 0);

// Create a StringArray from parts
let offsets = OffsetBuffer::new(vec![0, 5, 10].into());
let array = StringArray::new(offsets, b"helloworld".into(), None);
let values: Vec<_> = array.iter().map(|x| x.unwrap()).collect();
assert_eq!(values, &["hello", "world"]);
```

As `Buffer`, and its derivatives, can be created from `Vec` without copying, this provides an efficient way not only to interoperate with other Rust code, but also to implement kernels optimised for the arrow data layout, e.g. by handling buffers instead of values.

Zero-Copy Slicing
---

Given an `Array` of arbitrary length, it is possible to create an owned slice of this data. Internally this just increments some ref-counts, and so is incredibly cheap

```
let array = Int32Array::from_iter([1, 2, 3]);

// Slice with offset 1 and length 2
let sliced = array.slice(1, 2);
assert_eq!(sliced.values(), &[2, 3]);
```

Downcasting an Array
---

Arrays are often passed around as a dynamically typed `&dyn Array` or `ArrayRef`. For example, `RecordBatch` stores columns as `ArrayRef`.
Whilst these arrays can be passed directly to the `compute`, `csv`, `json`, etc… APIs, it is often the case that you wish to interact with the concrete arrays directly. This requires downcasting to the concrete type of the array:

```
// Safely downcast an `Array` to an `Int32Array` and compute the sum
// using native i32 values
fn sum_int32(array: &dyn Array) -> i32 {
    let integers: &Int32Array = array.as_any().downcast_ref().unwrap();
    integers.iter().map(|val| val.unwrap_or_default()).sum()
}

// Safely downcasts the array to a `Float32Array` and returns a &[f32] view of the data
// Note: the values for positions corresponding to nulls will be arbitrary (but still valid f32)
fn as_f32_slice(array: &dyn Array) -> &[f32] {
    array.as_any().downcast_ref::<Float32Array>().unwrap().values()
}
```

The `cast::AsArray` extension trait can make this more ergonomic

```
fn as_f32_slice(array: &dyn Array) -> &[f32] {
    array.as_primitive::<Float32Type>().values()
}
```

Re-exports
---

* `pub use array::*;`

Modules
---

* `array`: The concrete array definitions
* `builder`: Defines push-based APIs for constructing arrays
* `cast`: Defines helper functions for downcasting `dyn Array` to concrete types
* `iterator`: Idiomatic iterators for `Array`
* `run_iterator`: Idiomatic iterator for `RunArray`
* `temporal_conversions`: Conversion methods for dates and times
* `timezone`: Timezone for timestamp arrays
* `types`: Zero-sized types used to parameterize generic array implementations

Macros
---

* `downcast_dictionary_array`: Downcast an `Array` to a `DictionaryArray` based on its `DataType`; accepts a number of subsequent patterns to match the data type
* `downcast_integer`: Given one or more expressions evaluating to an integer `DataType`, invokes the provided macro `m` with the corresponding integer `ArrowPrimitiveType`, followed by any additional arguments
* `downcast_primitive`: Given one or more expressions evaluating to a primitive `DataType`, invokes the provided macro `m` with the corresponding `ArrowPrimitiveType`, followed by any additional arguments
* `downcast_primitive_array`: Downcast an `Array` to a `PrimitiveArray` based on its `DataType`; accepts a number of subsequent patterns to match the data type
* `downcast_run_array`: Downcast an `Array` to a `RunArray` based on its `DataType`; accepts a number of subsequent patterns to match the data type
* `downcast_run_end_index`: Given one or more expressions evaluating to an integer `DataType`, invokes the provided macro `m` with the corresponding integer `RunEndIndexType`, followed by any additional arguments
* `downcast_temporal`: Given one or more expressions evaluating to a primitive `DataType`, invokes the provided macro `m` with the corresponding `ArrowPrimitiveType`, followed by any additional arguments
* `downcast_temporal_array`: Downcast an `Array` to a temporal `PrimitiveArray` based on its `DataType`; accepts a number of subsequent patterns to match the data type

Structs
---

* `RecordBatch`: A two-dimensional batch of column-oriented data with a defined schema.
* `RecordBatchIterator`: Generic implementation of `RecordBatchReader` that wraps an iterator.
* `RecordBatchOptions`: Options that control the behaviour used when creating a `RecordBatch`.
* `Scalar`: A wrapper around a single value `Array` that implements `Datum` and indicates compute kernels should treat this array as a scalar value (a single value).
Re-exports
---

* `pub use array::*;`

Modules
---

* `array`: The concrete array definitions
* `builder`: Defines push-based APIs for constructing arrays
* `cast`: Defines helper functions for downcasting `dyn Array` to concrete types
* `iterator`: Idiomatic iterators for `Array`
* `run_iterator`: Idiomatic iterator for `RunArray`
* `temporal_conversions`: Conversion methods for dates and times
* `timezone`: Timezone for timestamp arrays
* `types`: Zero-sized types used to parameterize generic array implementations

Macros
---

* `downcast_dictionary_array`: Downcast an `Array` to a `DictionaryArray` based on its `DataType`, accepts a number of subsequent patterns to match the data type
* `downcast_integer`: Given one or more expressions evaluating to an integer `DataType` invokes the provided macro `m` with the corresponding integer `ArrowPrimitiveType`, followed by any additional arguments
* `downcast_primitive`: Given one or more expressions evaluating to primitive `DataType` invokes the provided macro `m` with the corresponding `ArrowPrimitiveType`, followed by any additional arguments
* `downcast_primitive_array`: Downcast an `Array` to a `PrimitiveArray` based on its `DataType`, accepts a number of subsequent patterns to match the data type
* `downcast_run_array`: Downcast an `Array` to a `RunArray` based on its `DataType`, accepts a number of subsequent patterns to match the data type
* `downcast_run_end_index`: Given one or more expressions evaluating to an integer `DataType` invokes the provided macro `m` with the corresponding integer `RunEndIndexType`, followed by any additional arguments
* `downcast_temporal`: Given one or more expressions evaluating to primitive `DataType` invokes the provided macro `m` with the corresponding `ArrowPrimitiveType`, followed by any additional arguments
* `downcast_temporal_array`: Downcast an `Array` to a temporal `PrimitiveArray` based on its `DataType`, accepts a number of subsequent patterns to match the data type

Structs
---

* `RecordBatch`: A two-dimensional batch of column-oriented data with a defined schema
* `RecordBatchIterator`: Generic implementation of `RecordBatchReader` that wraps an iterator
* `RecordBatchOptions`: Options that control the behaviour used when creating a `RecordBatch`
* `Scalar`: A wrapper around a single value `Array` that implements `Datum` and indicates compute kernels should treat this array as a scalar value (a single value)

Traits
---

* `ArrowNativeTypeOp`: Trait for `ArrowNativeType` that adds checked and unchecked arithmetic operations, and totally ordered comparison operations
* `ArrowNumericType`: A subtype of primitive type that represents numeric values
* `Datum`: A possibly `Scalar` `Array` (see the sketch after this list)
* `RecordBatchReader`: Trait for types that can read `RecordBatch`es
* `RecordBatchWriter`: Trait for types that can write `RecordBatch`es
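To make the `Datum`/`Scalar` relationship concrete, here is a minimal hedged sketch; it assumes only what is documented on this page (`Scalar` implements `Datum`, `&dyn Array` implements `Datum`, and `PrimitiveArray::new_scalar` constructs a `Scalar`):

```
use arrow_array::{Array, Datum, Int32Array};

// A toy kernel: report whether its argument is a scalar or an array
fn describe(datum: &dyn Datum) -> String {
    let (array, is_scalar) = datum.get();
    if is_scalar {
        format!("scalar of type {:?}", array.data_type())
    } else {
        format!("array of {} values", array.len())
    }
}

let scalar = Int32Array::new_scalar(5);
assert_eq!(describe(&scalar), "scalar of type Int32");

let array = Int32Array::from(vec![1, 2, 3]);
let array: &dyn Array = &array; // `&dyn Array` implements `Datum`
assert_eq!(describe(&array), "array of 3 values");
```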
Trait arrow_array::array::Array
===

```
pub trait Array: Debug + Send + Sync {
    // Required methods
    fn as_any(&self) -> &dyn Any;
    fn to_data(&self) -> ArrayData;
    fn into_data(self) -> ArrayData;
    fn data_type(&self) -> &DataType;
    fn slice(&self, offset: usize, length: usize) -> ArrayRef;
    fn len(&self) -> usize;
    fn is_empty(&self) -> bool;
    fn offset(&self) -> usize;
    fn nulls(&self) -> Option<&NullBuffer>;
    fn get_buffer_memory_size(&self) -> usize;
    fn get_array_memory_size(&self) -> usize;

    // Provided methods
    fn logical_nulls(&self) -> Option<NullBuffer> { ... }
    fn is_null(&self, index: usize) -> bool { ... }
    fn is_valid(&self, index: usize) -> bool { ... }
    fn null_count(&self) -> usize { ... }
    fn is_nullable(&self) -> bool { ... }
}
```

An array in the arrow columnar format

Required Methods
---

#### fn as_any(&self) -> &dyn Any

Returns the array as `Any` so that it can be downcasted to a specific implementation.

##### Example:

```
let id = Int32Array::from(vec![1, 2, 3, 4, 5]);
let batch = RecordBatch::try_new(
    Arc::new(Schema::new(vec![Field::new("id", DataType::Int32, false)])),
    vec![Arc::new(id)]
).unwrap();

let int32array = batch
    .column(0)
    .as_any()
    .downcast_ref::<Int32Array>()
    .expect("Failed to downcast");
```

#### fn to_data(&self) -> ArrayData

Returns the underlying data of this array

#### fn into_data(self) -> ArrayData

Returns the underlying data of this array

Unlike `Array::to_data` this consumes self, allowing it to avoid unnecessary clones

#### fn data_type(&self) -> &DataType

Returns a reference to the `DataType` of this array.

##### Example:

```
use arrow_schema::DataType;
use arrow_array::{Array, Int32Array};

let array = Int32Array::from(vec![1, 2, 3, 4, 5]);

assert_eq!(*array.data_type(), DataType::Int32);
```

#### fn slice(&self, offset: usize, length: usize) -> ArrayRef

Returns a zero-copy slice of this array with the indicated offset and length.

##### Example:

```
use arrow_array::{Array, Int32Array};
let array = Int32Array::from(vec![1, 2, 3, 4, 5]);
// Make slice over the values [2, 3, 4]
let array_slice = array.slice(1, 3);

assert_eq!(&array_slice, &Int32Array::from(vec![2, 3, 4]));
```

#### fn len(&self) -> usize

Returns the length (i.e., number of elements) of this array.

##### Example:

```
use arrow_array::{Array, Int32Array};

let array = Int32Array::from(vec![1, 2, 3, 4, 5]);

assert_eq!(array.len(), 5);
```

#### fn is_empty(&self) -> bool

Returns whether this array is empty.

##### Example:

```
use arrow_array::{Array, Int32Array};

let array = Int32Array::from(vec![1, 2, 3, 4, 5]);

assert_eq!(array.is_empty(), false);
```

#### fn offset(&self) -> usize

Returns the offset into the underlying data used by this array(-slice). Note that the underlying data can be shared by many arrays. This defaults to `0`.
##### Example:

```
use arrow_array::{Array, BooleanArray};

let array = BooleanArray::from(vec![false, false, true, true]);
let array_slice = array.slice(1, 3);

assert_eq!(array.offset(), 0);
assert_eq!(array_slice.offset(), 1);
```

#### fn nulls(&self) -> Option<&NullBuffer>

Returns the null buffer of this array if any

Note: some arrays can encode their nullability in their children, for example, `DictionaryArray::values` or `RunArray::values`, or without a null buffer, such as `NullArray`. Use `Array::logical_nulls` to obtain a computed mask encoding this

#### fn get_buffer_memory_size(&self) -> usize

Returns the total number of bytes of memory pointed to by this array. The buffers store bytes in the Arrow memory format, and include the data as well as the validity map.

#### fn get_array_memory_size(&self) -> usize

Returns the total number of bytes of memory occupied physically by this array. This value will always be greater than returned by `get_buffer_memory_size()` and includes the overhead of the data structures that contain the pointers to the various buffers.

Provided Methods
---

#### fn logical_nulls(&self) -> Option<NullBuffer>

Returns the logical null buffer of this array if any

In most cases this will be the same as `Array::nulls`, except for:

* DictionaryArray where `DictionaryArray::values` contains nulls
* RunArray where `RunArray::values` contains nulls
* NullArray where all indices are nulls

In these cases a logical `NullBuffer` will be computed, encoding the logical nullability of these arrays, beyond what is encoded in `Array::nulls`

#### fn is_null(&self, index: usize) -> bool

Returns whether the element at `index` is null. When using this function on a slice, the index is relative to the slice.

Note: this method returns the physical nullability, i.e. that encoded in `Array::nulls`; see `Array::logical_nulls` for logical nullability

##### Example:

```
use arrow_array::{Array, Int32Array};

let array = Int32Array::from(vec![Some(1), None]);

assert_eq!(array.is_null(0), false);
assert_eq!(array.is_null(1), true);
```

#### fn is_valid(&self, index: usize) -> bool

Returns whether the element at `index` is not null. When using this function on a slice, the index is relative to the slice.

Note: this method returns the physical nullability, i.e. that encoded in `Array::nulls`; see `Array::logical_nulls` for logical nullability

##### Example:

```
use arrow_array::{Array, Int32Array};

let array = Int32Array::from(vec![Some(1), None]);

assert_eq!(array.is_valid(0), true);
assert_eq!(array.is_valid(1), false);
```

#### fn null_count(&self) -> usize

Returns the total number of physical null values in this array.

Note: this method returns the physical null count, i.e. that encoded in `Array::nulls`; see `Array::logical_nulls` for logical nullability

##### Example:

```
use arrow_array::{Array, Int32Array};

// Construct an array with values [1, NULL, NULL]
let array = Int32Array::from(vec![Some(1), None, None]);

assert_eq!(array.null_count(), 2);
```

#### fn is_nullable(&self) -> bool

Returns `false` if the array is guaranteed to not contain any logical nulls

In general this will be equivalent to `Array::null_count() != 0` but may differ in the presence of logical nullability, see `Array::logical_nulls`. Implementations will return `true` unless they can cheaply prove no logical nulls are present.

For example a `DictionaryArray` with nullable values will still return true, even if the nulls present in `DictionaryArray::values` are not referenced by any key, and therefore would not appear in `Array::logical_nulls`.
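A short hedged illustration of the slice-relative indexing mentioned above, using only methods documented on this page:

```
use arrow_array::{Array, Int32Array};

let array = Int32Array::from(vec![Some(1), None, Some(3)]);
let sliced = array.slice(1, 2);

// Indices are relative to the slice: element 0 of the slice
// is element 1 of the original array
assert!(sliced.is_null(0));
assert!(sliced.is_valid(1));
assert_eq!(sliced.null_count(), 1);
```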
Trait Implementations
---

### impl AsArray for dyn Array + '_

#### fn as_boolean_opt(&self) -> Option<&BooleanArray>

Downcast this to a `BooleanArray` returning `None` if not possible

#### fn as_primitive_opt<T: ArrowPrimitiveType>(&self) -> Option<&PrimitiveArray<T>>

Downcast this to a `PrimitiveArray` returning `None` if not possible

#### fn as_bytes_opt<T: ByteArrayType>(&self) -> Option<&GenericByteArray<T>>

Downcast this to a `GenericByteArray` returning `None` if not possible

#### fn as_struct_opt(&self) -> Option<&StructArray>

Downcast this to a `StructArray` returning `None` if not possible

#### fn as_list_opt<O: OffsetSizeTrait>(&self) -> Option<&GenericListArray<O>>

Downcast this to a `GenericListArray` returning `None` if not possible

#### fn as_fixed_size_binary_opt(&self) -> Option<&FixedSizeBinaryArray>

Downcast this to a `FixedSizeBinaryArray` returning `None` if not possible

#### fn as_fixed_size_list_opt(&self) -> Option<&FixedSizeListArray>

Downcast this to a `FixedSizeListArray` returning `None` if not possible

#### fn as_map_opt(&self) -> Option<&MapArray>

Downcast this to a `MapArray` returning `None` if not possible

#### fn as_dictionary_opt<K: ArrowDictionaryKeyType>(&self) -> Option<&DictionaryArray<K>>

Downcast this to a `DictionaryArray` returning `None` if not possible

#### fn as_any_dictionary_opt(&self) -> Option<&dyn AnyDictionaryArray>

Downcasts this to a `AnyDictionaryArray` returning `None` if not possible

#### fn as_boolean(&self) -> &BooleanArray

Downcast this to a `BooleanArray` panicking if not possible

#### fn as_primitive<T: ArrowPrimitiveType>(&self) -> &PrimitiveArray<T>

Downcast this to a `PrimitiveArray` panicking if not possible

#### fn as_bytes<T: ByteArrayType>(&self) -> &GenericByteArray<T>

Downcast this to a `GenericByteArray` panicking if not possible

#### fn as_string_opt<O: OffsetSizeTrait>(&self) -> Option<&GenericStringArray<O>>

Downcast this to a `GenericStringArray` returning `None` if not possible

#### fn as_string<O: OffsetSizeTrait>(&self) -> &GenericStringArray<O>

Downcast this to a `GenericStringArray` panicking if not possible

#### fn as_binary_opt<O: OffsetSizeTrait>(&self) -> Option<&GenericBinaryArray<O>>

Downcast this to a `GenericBinaryArray` returning `None` if not possible

#### fn as_binary<O: OffsetSizeTrait>(&self) -> &GenericBinaryArray<O>

Downcast this to a `GenericBinaryArray` panicking if not possible

#### fn as_struct(&self) -> &StructArray

Downcast this to a `StructArray` panicking if not possible

#### fn as_list<O: OffsetSizeTrait>(&self) -> &GenericListArray<O>

Downcast this to a `GenericListArray` panicking if not possible

#### fn as_fixed_size_binary(&self) -> &FixedSizeBinaryArray

Downcast this to a `FixedSizeBinaryArray` panicking if not possible

#### fn as_fixed_size_list(&self) -> &FixedSizeListArray

Downcast this to a `FixedSizeListArray` panicking if not possible

#### fn as_map(&self) -> &MapArray

Downcast this to a `MapArray` panicking if not possible

#### fn as_dictionary<K: ArrowDictionaryKeyType>(&self) -> &DictionaryArray<K>

Downcast this to a `DictionaryArray` panicking if not possible

#### fn as_any_dictionary(&self) -> &dyn AnyDictionaryArray

Downcasts this to a `AnyDictionaryArray` panicking if not possible
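As a hedged companion to the listing above, the `_opt` variants suit code that must handle an unexpected `DataType` gracefully; a minimal sketch:

```
use arrow_array::{cast::AsArray, Array, StringArray};

fn first_string(array: &dyn Array) -> Option<&str> {
    // `as_string_opt` returns `None` rather than panicking
    // if `array` is not a `GenericStringArray<i32>`
    let strings = array.as_string_opt::<i32>()?;
    strings.is_valid(0).then(|| strings.value(0))
}

let array = StringArray::from(vec!["hello", "world"]);
assert_eq!(first_string(&array), Some("hello"));
```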
### impl Datum for &dyn Array

#### fn get(&self) -> (&dyn Array, bool)

Returns the value for this `Datum` and a boolean indicating if the value is scalar

### impl Datum for dyn Array

#### fn get(&self) -> (&dyn Array, bool)

Returns the value for this `Datum` and a boolean indicating if the value is scalar

### impl<T: Array> PartialEq<T> for dyn Array + '_

#### fn eq(&self, other: &T) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool

This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl PartialEq<dyn Array + '_> for dyn Array + '_

#### fn eq(&self, other: &Self) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool

This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

Implementations on Foreign Types
---

### impl<'a, T: Array> Array for &'a T

#### fn as_any(&self) -> &dyn Any
#### fn to_data(&self) -> ArrayData
#### fn into_data(self) -> ArrayData
#### fn data_type(&self) -> &DataType
#### fn slice(&self, offset: usize, length: usize) -> ArrayRef
#### fn len(&self) -> usize
#### fn is_empty(&self) -> bool
#### fn offset(&self) -> usize
#### fn nulls(&self) -> Option<&NullBuffer>
#### fn logical_nulls(&self) -> Option<NullBuffer>
#### fn is_null(&self, index: usize) -> bool
#### fn is_valid(&self, index: usize) -> bool
#### fn null_count(&self) -> usize
#### fn is_nullable(&self) -> bool
#### fn get_buffer_memory_size(&self) -> usize
#### fn get_array_memory_size(&self) -> usize

Implementors
---

### impl Array for BooleanArray
### impl Array for FixedSizeBinaryArray
### impl Array for FixedSizeListArray
### impl Array for MapArray
### impl Array for NullArray
### impl Array for StructArray
### impl Array for UnionArray
### impl Array for ArrayRef

Ergonomics: Allow use of an ArrayRef as an `&dyn Array`

### impl<'a, K: ArrowDictionaryKeyType, V: Sync> Array for TypedDictionaryArray<'a, K, V>
### impl<'a, R: RunEndIndexType, V: Sync> Array for TypedRunArray<'a, R, V>
### impl<OffsetSize: OffsetSizeTrait> Array for GenericListArray<OffsetSize>
### impl<T: ArrowDictionaryKeyType> Array for DictionaryArray<T>
### impl<T: ArrowPrimitiveType> Array for PrimitiveArray<T>
### impl<T: ByteArrayType> Array for GenericByteArray<T>
### impl<T: RunEndIndexType> Array for RunArray<T>
Trait arrow_array::builder::ArrayBuilder
===

```
pub trait ArrayBuilder: Any + Send {
    // Required methods
    fn len(&self) -> usize;
    fn finish(&mut self) -> ArrayRef;
    fn finish_cloned(&self) -> ArrayRef;
    fn as_any(&self) -> &dyn Any;
    fn as_any_mut(&mut self) -> &mut dyn Any;
    fn into_box_any(self: Box<Self>) -> Box<dyn Any>;

    // Provided method
    fn is_empty(&self) -> bool { ... }
}
```

Trait for dealing with different array builders at runtime

Example
---

```
// Create
let mut data_builders: Vec<Box<dyn ArrayBuilder>> = vec![
    Box::new(Float64Builder::new()),
    Box::new(Int64Builder::new()),
    Box::new(StringBuilder::new()),
];

// Fill
data_builders[0]
    .as_any_mut()
    .downcast_mut::<Float64Builder>()
    .unwrap()
    .append_value(3.14);
data_builders[1]
    .as_any_mut()
    .downcast_mut::<Int64Builder>()
    .unwrap()
    .append_value(-1);
data_builders[2]
    .as_any_mut()
    .downcast_mut::<StringBuilder>()
    .unwrap()
    .append_value("🍎");

// Finish
let array_refs: Vec<ArrayRef> = data_builders
    .iter_mut()
    .map(|builder| builder.finish())
    .collect();
assert_eq!(array_refs[0].len(), 1);
assert_eq!(array_refs[1].is_null(0), false);
assert_eq!(
    array_refs[2]
        .as_any()
        .downcast_ref::<StringArray>()
        .unwrap()
        .value(0),
    "🍎"
);
```

Required Methods
---

#### fn len(&self) -> usize

Returns the number of array slots in the builder

#### fn finish(&mut self) -> ArrayRef

Builds the array

#### fn finish_cloned(&self) -> ArrayRef

Builds the array without resetting the underlying builder.

#### fn as_any(&self) -> &dyn Any

Returns the builder as a non-mutable `Any` reference. This is most useful when one wants to call non-mutable APIs on a specific builder type. In this case, one can first cast this into an `Any`, and then use `downcast_ref` to get a reference on the specific builder.

#### fn as_any_mut(&mut self) -> &mut dyn Any

Returns the builder as a mutable `Any` reference. This is most useful when one wants to call mutable APIs on a specific builder type. In this case, one can first cast this into an `Any`, and then use `downcast_mut` to get a reference on the specific builder.

#### fn into_box_any(self: Box<Self>) -> Box<dyn Any>

Returns the boxed builder as a box of `Any`.
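A hedged sketch of the difference between `finish` and `finish_cloned`, assuming `Int32Builder`'s inherent methods mirror the trait methods documented above:

```
use arrow_array::builder::Int32Builder;

let mut builder = Int32Builder::new();
builder.append_value(1);

// `finish_cloned` snapshots the contents without resetting the builder...
let snapshot = builder.finish_cloned();

builder.append_value(2);

// ...while `finish` builds the array from everything appended so far
let array = builder.finish();

assert_eq!(snapshot.len(), 1);
assert_eq!(array.len(), 2);
```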
Provided Methods
---

#### fn is_empty(&self) -> bool

Returns whether the number of array slots is zero

Implementors
---

### impl ArrayBuilder for BooleanBuilder
### impl ArrayBuilder for FixedSizeBinaryBuilder
### impl ArrayBuilder for NullBuilder
### impl ArrayBuilder for StructBuilder
### impl<K, T> ArrayBuilder for GenericByteDictionaryBuilder<K, T> where K: ArrowDictionaryKeyType, T: ByteArrayType
### impl<K, V> ArrayBuilder for PrimitiveDictionaryBuilder<K, V> where K: ArrowDictionaryKeyType, V: ArrowPrimitiveType
### impl<K: ArrayBuilder, V: ArrayBuilder> ArrayBuilder for MapBuilder<K, V>
### impl<OffsetSize: OffsetSizeTrait, T> ArrayBuilder for GenericListBuilder<OffsetSize, T> where T: 'static + ArrayBuilder
### impl<R, V> ArrayBuilder for GenericByteRunBuilder<R, V> where R: RunEndIndexType, V: ByteArrayType
### impl<R, V> ArrayBuilder for PrimitiveRunBuilder<R, V> where R: RunEndIndexType, V: ArrowPrimitiveType
### impl<T> ArrayBuilder for FixedSizeListBuilder<T> where T: 'static + ArrayBuilder
### impl<T: ArrowPrimitiveType> ArrayBuilder for PrimitiveBuilder<T>
### impl<T: ByteArrayType> ArrayBuilder for GenericByteBuilder<T>

Type Alias arrow_array::array::Int16Array
===

```
pub type Int16Array = PrimitiveArray<Int16Type>;
```

A `PrimitiveArray` of `i16`

Examples
---

Construction

```
// Create from Vec<Option<i16>>
let arr = Int16Array::from(vec![Some(1), None, Some(2)]);
// Create from Vec<i16>
let arr = Int16Array::from(vec![1, 2, 3]);
// Create from iter/collect
let arr: Int16Array = std::iter::repeat(42).take(10).collect();
```

See `PrimitiveArray` for more information and examples

Aliased Type
---

```
struct Int16Array { /* private fields */ }
```

Implementations
---

This alias shares all methods and trait implementations of `PrimitiveArray<Int16Type>`, including `From<Vec<i16>>`, `From<Vec<Option<i16>>>`, and `FromIterator`; see the `PrimitiveArray` documentation below.
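The construction examples above use values without nulls; collecting from an iterator of `Option<i16>` works the same way and produces null slots, as a brief hedged sketch:

```
use arrow_array::{Array, Int16Array};

// `None` becomes a null slot
let array: Int16Array = vec![Some(1i16), None, Some(3)].into_iter().collect();

assert_eq!(array.len(), 3);
assert_eq!(array.null_count(), 1);
assert!(array.is_null(1));
```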
Type Alias arrow_array::array::StringArray
===

```
pub type StringArray = GenericStringArray<i32>;
```

A `GenericStringArray` of `str` using `i32` offsets

Examples
---

Construction

```
// Create from Vec<Option<&str>>
let arr = StringArray::from(vec![Some("foo"), Some("bar"), None, Some("baz")]);
// Create from Vec<&str>
let arr = StringArray::from(vec!["foo", "bar", "baz"]);
// Create from iter/collect (requires Option<&str>)
let arr: StringArray = std::iter::repeat(Some("foo")).take(10).collect();
```

Construction and Access

```
let array = StringArray::from(vec![Some("foo"), None, Some("bar")]);
assert_eq!(array.value(0), "foo");
```

See `GenericByteArray` for more information and examples

Aliased Type
---

```
struct StringArray { /* private fields */ }
```

Implementations
---

### impl<T: ByteArrayType> GenericByteArray<T>

#### pub const DATA_TYPE: DataType = T::DATA_TYPE

Data type of the array.
#### pub fn new(offsets: OffsetBuffer<T::Offset>, values: Buffer, nulls: Option<NullBuffer>) -> Self

Create a new `GenericByteArray` from the provided parts, panicking on failure

##### Panics

Panics if `GenericByteArray::try_new` returns an error

#### pub fn try_new(offsets: OffsetBuffer<T::Offset>, values: Buffer, nulls: Option<NullBuffer>) -> Result<Self, ArrowError>

Create a new `GenericByteArray` from the provided parts, returning an error on failure

##### Errors

* `offsets.len() - 1 != nulls.len()`
* Any consecutive pair of `offsets` does not denote a valid slice of `values`

#### pub unsafe fn new_unchecked(offsets: OffsetBuffer<T::Offset>, values: Buffer, nulls: Option<NullBuffer>) -> Self

Create a new `GenericByteArray` from the provided parts, without validation

##### Safety

Safe if `Self::try_new` would not error

#### pub fn new_null(len: usize) -> Self

Create a new `GenericByteArray` of length `len` where all values are null

#### pub fn new_scalar(value: impl AsRef<T::Native>) -> Scalar<Self>

Create a new `Scalar` from `value`

#### pub fn from_iter_values<Ptr, I>(iter: I) -> Self where Ptr: AsRef<T::Native>, I: IntoIterator<Item = Ptr>

Creates a `GenericByteArray` based on an iterator of values without nulls

#### pub fn into_parts(self) -> (OffsetBuffer<T::Offset>, Buffer, Option<NullBuffer>)

Deconstruct this array into its constituent parts

#### pub fn value_length(&self, i: usize) -> T::Offset

Returns the length for the value at index `i`.

##### Panics

Panics if index `i` is out of bounds.

#### pub fn offsets(&self) -> &OffsetBuffer<T::Offset>

Returns a reference to the offsets of this array

Unlike `Self::value_offsets` this returns the `OffsetBuffer` allowing for zero-copy cloning

#### pub fn values(&self) -> &Buffer

Returns the values of this array

Unlike `Self::value_data` this returns the `Buffer` allowing for zero-copy cloning

#### pub fn value_data(&self) -> &[u8]

Returns the raw value data

#### pub fn is_ascii(&self) -> bool

Returns true if all data within this array is ASCII

#### pub fn value_offsets(&self) -> &[T::Offset]

Returns the offset values in the offsets buffer

#### pub unsafe fn value_unchecked(&self, i: usize) -> &T::Native

Returns the element at index `i`

##### Safety

Caller is responsible for ensuring that the index is within the bounds of the array

#### pub fn value(&self, i: usize) -> &T::Native

Returns the element at index `i`

##### Panics

Panics if index `i` is out of bounds.

#### pub fn iter(&self) -> ArrayIter<&Self>

Constructs a new iterator

#### pub fn slice(&self, offset: usize, length: usize) -> Self

Returns a zero-copy slice of this array with the indicated offset and length.

#### pub fn into_builder(self) -> Result<GenericByteBuilder<T>, Self>

Returns `GenericByteBuilder` of this byte array for mutating its values if the underlying offset and data buffers are not shared by others.

### impl<OffsetSize: OffsetSizeTrait> GenericByteArray<GenericStringType<OffsetSize>>

#### pub const fn get_data_type() -> DataType

👎 Deprecated: please use `Self::DATA_TYPE` instead

Get the data type of the array.

#### pub fn num_chars(&self, i: usize) -> usize

Returns the number of `Unicode Scalar Value` in the string at index `i`.

##### Performance

This function has `O(n)` time complexity where `n` is the string length. If you can make sure that all chars in the string are in the range `U+0000` ~ `U+007F`, please use the function `value_length` which has O(1) time complexity.
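A small hedged illustration of the `num_chars` / `value_length` distinction (Unicode scalar values versus UTF-8 bytes):

```
use arrow_array::StringArray;

let array = StringArray::from(vec!["héllo"]);

// Five Unicode scalar values...
assert_eq!(array.num_chars(0), 5);
// ...but six bytes, since 'é' is two bytes in UTF-8
assert_eq!(array.value_length(0), 6);
```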
#### pub fn take_iter<'a>(&'a self, indexes: impl Iterator<Item = Option<usize>> + 'a) -> impl Iterator<Item = Option<&str>> + 'a

Returns an iterator that returns the values of `array.value(i)` for an iterator with each element `i`

#### pub unsafe fn take_iter_unchecked<'a>(&'a self, indexes: impl Iterator<Item = Option<usize>> + 'a) -> impl Iterator<Item = Option<&str>> + 'a

Returns an iterator that returns the values of `array.value(i)` for an iterator with each element `i`

##### Safety

caller must ensure that the indexes in the iterator are less than the `array.len()`

#### pub fn try_from_binary(v: GenericBinaryArray<OffsetSize>) -> Result<Self, ArrowError>

Fallibly creates a `GenericStringArray` from a `GenericBinaryArray`, returning an error if the `GenericBinaryArray` contains invalid UTF-8 data

Trait Implementations
---

### impl<T: ByteArrayType> Array for GenericByteArray<T>

See the `Array` trait documentation above for the methods of this implementation.

### impl<OffsetSize: OffsetSizeTrait> From<GenericByteArray<GenericBinaryType<OffsetSize>>> for GenericStringArray<OffsetSize>

#### fn from(v: GenericBinaryArray<OffsetSize>) -> Self

Converts to this type from the input type.

### impl<'a, Ptr, T: ByteArrayType> FromIterator<&'a Option<Ptr>> for GenericByteArray<T> where Ptr: AsRef<T::Native> + 'a

#### fn from_iter<I: IntoIterator<Item = &'a Option<Ptr>>>(iter: I) -> Self

Creates a value from an iterator.

### impl<Ptr, T: ByteArrayType> FromIterator<Option<Ptr>> for GenericByteArray<T> where Ptr: AsRef<T::Native>

#### fn from_iter<I: IntoIterator<Item = Option<Ptr>>>(iter: I) -> Self

Creates a value from an iterator.

`GenericByteArray` also implements `Clone`, `Debug`, `From<ArrayData>`, and `PartialEq`.
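A hedged sketch of `try_from_binary`, assuming the `BinaryArray` alias for `GenericBinaryArray<i32>` and its `From<Vec<&[u8]>>` constructor:

```
use arrow_array::{BinaryArray, StringArray};

let binary = BinaryArray::from(vec![&b"hello"[..], &b"world"[..]]);

// Succeeds because both values are valid UTF-8;
// invalid bytes would yield an ArrowError instead
let strings = StringArray::try_from_binary(binary).unwrap();
assert_eq!(strings.value(1), "world");
```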
Struct arrow_array::array::PrimitiveArray
===

```
pub struct PrimitiveArray<T: ArrowPrimitiveType> { /* private fields */ }
```

An array of primitive values

Example: From a Vec
---

```
let arr: PrimitiveArray<Int32Type> = vec![1, 2, 3, 4].into();
assert_eq!(4, arr.len());
assert_eq!(0, arr.null_count());
assert_eq!(arr.values(), &[1, 2, 3, 4])
```

Example: From an optional Vec
---

```
let arr: PrimitiveArray<Int32Type> = vec![Some(1), None, Some(3), None].into();
assert_eq!(4, arr.len());
assert_eq!(2, arr.null_count());
// Note: values for null indexes are arbitrary
assert_eq!(arr.values(), &[1, 0, 3, 0])
```

Example: From an iterator of values
---

```
let arr: PrimitiveArray<Int32Type> = (0..10).map(|x| x + 1).collect();
assert_eq!(10, arr.len());
assert_eq!(0, arr.null_count());
for i in 0..10i32 {
    assert_eq!(i + 1, arr.value(i as usize));
}
```

Example: From an iterator of option
---

```
let arr: PrimitiveArray<Int32Type> = (0..10).map(|x| (x % 2 == 0).then_some(x)).collect();
assert_eq!(10, arr.len());
assert_eq!(5, arr.null_count());
// Note: values for null indexes are arbitrary
assert_eq!(arr.values(), &[0, 0, 2, 0, 4, 0, 6, 0, 8, 0])
```

Example: Using Builder
---

```
let mut builder = PrimitiveBuilder::<Int32Type>::new();
builder.append_value(1);
builder.append_null();
builder.append_value(2);
let array = builder.finish();
// Note: values for null indexes are arbitrary
assert_eq!(array.values(), &[1, 0, 2]);
assert!(array.is_null(1));
```

Implementations
---

### impl<T: ArrowPrimitiveType> PrimitiveArray<T>

#### pub fn new(values: ScalarBuffer<T::Native>, nulls: Option<NullBuffer>) -> Self

Create a new `PrimitiveArray` from the provided values and nulls

##### Panics

Panics if `Self::try_new` returns an error

##### Example

Creating a `PrimitiveArray` directly from a `ScalarBuffer` and `NullBuffer` using this constructor is the most performant approach, avoiding any additional allocations

```
// [1, 2, 3, 4]
let array = Int32Array::new(vec![1, 2, 3, 4].into(), None);

// [1, null, 3, 4]
let nulls = NullBuffer::from(vec![true, false, true, true]);
let array = Int32Array::new(vec![1, 2, 3, 4].into(), Some(nulls));
```

#### pub fn new_null(length: usize) -> Self

Create a new `PrimitiveArray` of the given length where all values are null

#### pub fn try_new(values: ScalarBuffer<T::Native>, nulls: Option<NullBuffer>) -> Result<Self, ArrowError>

Create a new `PrimitiveArray` from the provided values and nulls

##### Errors

Errors if:

* `values.len() != nulls.len()`

#### pub fn new_scalar(value: T::Native) -> Scalar<Self>

Create a new `Scalar` from `value`

#### pub fn into_parts(self) -> (DataType, ScalarBuffer<T::Native>, Option<NullBuffer>)

Deconstruct this array into its constituent parts

#### pub fn with_data_type(self, data_type: DataType) -> Self

Overrides the `DataType` of this `PrimitiveArray`

Prefer using `Self::with_timezone` or `Self::with_precision_and_scale` where the primitive type is suitably constrained, as these cannot panic

##### Panics

Panics if `!Self::is_compatible(data_type)`
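A brief hedged sketch of the `try_new` length check, assuming `NullBuffer` from the companion `arrow_buffer` crate as in the examples above:

```
use arrow_array::Int32Array;
use arrow_buffer::NullBuffer;

// Three values but only two validity bits: lengths must match
let nulls = NullBuffer::from(vec![true, false]);
let result = Int32Array::try_new(vec![1, 2, 3].into(), Some(nulls));
assert!(result.is_err());
```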
#### pub fn len(&self) -> usize

Returns the length of this array.

#### pub fn is_empty(&self) -> bool

Returns whether this array is empty.

#### pub fn values(&self) -> &ScalarBuffer<T::Native>

Returns the values of this array

#### pub fn builder(capacity: usize) -> PrimitiveBuilder<T>

Returns a new primitive array builder

#### pub fn is_compatible(data_type: &DataType) -> bool

Returns if this `PrimitiveArray` is compatible with the provided `DataType`

This is equivalent to `data_type == T::DATA_TYPE`, however ignores timestamp timezones and decimal precision and scale

#### pub unsafe fn value_unchecked(&self, i: usize) -> T::Native

Returns the primitive value at index `i`.

##### Safety

caller must ensure that the passed in offset is less than the array len()

#### pub fn value(&self, i: usize) -> T::Native

Returns the primitive value at index `i`.

##### Panics

Panics if index `i` is out of bounds

#### pub fn from_iter_values<I: IntoIterator<Item = T::Native>>(iter: I) -> Self

Creates a PrimitiveArray based on an iterator of values without nulls

#### pub fn from_value(value: T::Native, count: usize) -> Self

Creates a PrimitiveArray based on a constant value with `count` elements

#### pub fn take_iter<'a>(&'a self, indexes: impl Iterator<Item = Option<usize>> + 'a) -> impl Iterator<Item = Option<T::Native>> + 'a

Returns an iterator that returns the values of `array.value(i)` for an iterator with each element `i`

#### pub unsafe fn take_iter_unchecked<'a>(&'a self, indexes: impl Iterator<Item = Option<usize>> + 'a) -> impl Iterator<Item = Option<T::Native>> + 'a

Returns an iterator that returns the values of `array.value(i)` for an iterator with each element `i`

##### Safety

caller must ensure that the offsets in the iterator are less than the array len()

#### pub fn slice(&self, offset: usize, length: usize) -> Self

Returns a zero-copy slice of this array with the indicated offset and length.

#### pub fn reinterpret_cast<K>(&self) -> PrimitiveArray<K> where K: ArrowPrimitiveType<Native = T::Native>

Reinterprets this array's contents as a different data type without copying

This can be used to efficiently convert between primitive arrays with the same underlying representation

Note: this will not modify the underlying values, and therefore may change the semantic values of the array, e.g. 100 milliseconds in a `TimestampNanosecondArray` will become 100 seconds in a `TimestampSecondArray`. For casts that preserve the semantic value, check out the compute kernels.

```
let a = Int64Array::from_iter_values([1, 2, 3, 4]);
let b: TimestampNanosecondArray = a.reinterpret_cast();
```

#### pub fn unary<F, O>(&self, op: F) -> PrimitiveArray<O> where O: ArrowPrimitiveType, F: Fn(T::Native) -> O::Native

Applies a unary and infallible function to a primitive array. This is the fastest way to perform an operation on a primitive array when the benefits of a vectorized operation outweigh the cost of branching nulls and non-nulls.

##### Implementation

This will apply the function for all values, including those on null slots. This implies that the operation must be infallible for any value of the corresponding type or this function may panic.

##### Example

```
let array = Int32Array::from(vec![Some(5), Some(7), None]);
let c = array.unary(|x| x * 2 + 1);
assert_eq!(c, Int32Array::from(vec![Some(11), Some(15), None]));
```

#### pub fn unary_mut<F>(self, op: F) -> Result<PrimitiveArray<T>, PrimitiveArray<T>> where F: Fn(T::Native) -> T::Native

Applies a unary and infallible function to a mutable primitive array. A mutable primitive array means that the buffer is not shared with other arrays. As a result, this mutates the buffer directly without allocating a new buffer.

##### Implementation

This will apply the function for all values, including those on null slots. This implies that the operation must be infallible for any value of the corresponding type or this function may panic.

##### Example

```
let array = Int32Array::from(vec![Some(5), Some(7), None]);
let c = array.unary_mut(|x| x * 2 + 1).unwrap();
assert_eq!(c, Int32Array::from(vec![Some(11), Some(15), None]));
```
#### pub fn try_unary<F, O, E>(&self, op: F) -> Result<PrimitiveArray<O>, E> where O: ArrowPrimitiveType, F: Fn(T::Native) -> Result<O::Native, E>

Applies a unary and fallible function to all valid values in a primitive array

This is unlike `Self::unary`, which applies an infallible function to all rows regardless of validity; that approach is in many cases significantly faster and should be preferred if `op` is infallible.

Note: LLVM is currently unable to effectively vectorize fallible operations

#### pub fn try_unary_mut<F, E>(self, op: F) -> Result<Result<PrimitiveArray<T>, E>, PrimitiveArray<T>> where F: Fn(T::Native) -> Result<T::Native, E>

Applies a unary and fallible function to all valid values in a mutable primitive array. A mutable primitive array means that the buffer is not shared with other arrays. As a result, this mutates the buffer directly without allocating a new buffer.

This is unlike `Self::unary_mut`, which applies an infallible function to all rows regardless of validity; that approach is in many cases significantly faster and should be preferred if `op` is infallible.

This returns an `Err` wrapping the input array when the input array's buffer is shared with another array. If the function encounters an error while being applied to the values, this returns the actual error as an `Err` nested within an `Ok`.

Note: LLVM is currently unable to effectively vectorize fallible operations

#### pub fn unary_opt<F, O>(&self, op: F) -> PrimitiveArray<O> where O: ArrowPrimitiveType, F: Fn(T::Native) -> Option<O::Native>

Applies a unary and nullable function to all valid values in a primitive array

This is unlike `Self::unary`, which applies an infallible function to all rows regardless of validity; that approach is in many cases significantly faster and should be preferred if `op` is infallible.

Note: LLVM is currently unable to effectively vectorize fallible operations

#### pub fn into_builder(self) -> Result<PrimitiveBuilder<T>, Self>

Returns a `PrimitiveBuilder` for this primitive array, for mutating its values if the underlying data buffer is not shared by others.
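As a hedged illustration of the fallible variants above (`ArrowError::ComputeError` is used here purely as an example error type):

```
use arrow_array::Int32Array;
use arrow_schema::ArrowError;

let array = Int32Array::from(vec![Some(i32::MAX), Some(2), None]);

// `unary_opt`: values for which `op` returns `None` become nulls
let doubled: Int32Array = array.unary_opt(|x| x.checked_mul(2));
assert_eq!(doubled, Int32Array::from(vec![None, Some(4), None]));

// `try_unary`: the first failure aborts and returns the error
let result: Result<Int32Array, ArrowError> = array.try_unary(|x| {
    x.checked_mul(2)
        .ok_or_else(|| ArrowError::ComputeError("overflow".into()))
});
assert!(result.is_err());
```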
### impl<T: ArrowTemporalType> PrimitiveArray<T> where i64: From<T::Native>

#### pub fn value_as_datetime(&self, i: usize) -> Option<NaiveDateTime>

Returns value as a chrono `NaiveDateTime`, handling time resolution

If a data type cannot be converted to `NaiveDateTime`, a `None` is returned. A valid value is expected, thus the user should first check for validity.

#### pub fn value_as_datetime_with_tz(&self, i: usize, tz: Tz) -> Option<DateTime<Tz>>

Returns value as a chrono `NaiveDateTime`, handling time resolution with the provided tz

Functionally it is the same as `value_as_datetime`, however it adds the passed tz to the to-be-returned NaiveDateTime

#### pub fn value_as_date(&self, i: usize) -> Option<NaiveDate>

Returns value as a chrono `NaiveDate` by using `Self::datetime()`

If a data type cannot be converted to `NaiveDate`, a `None` is returned

#### pub fn value_as_time(&self, i: usize) -> Option<NaiveTime>

Returns a value as a chrono `NaiveTime`

`Date32` and `Date64` return UTC midnight as they do not have time resolution

#### pub fn value_as_duration(&self, i: usize) -> Option<Duration>

Returns a value as a chrono `Duration`

If a data type cannot be converted to `Duration`, a `None` is returned

### impl<'a, T: ArrowPrimitiveType> PrimitiveArray<T>

#### pub fn iter(&'a self) -> PrimitiveIter<'a, T>

Constructs a new iterator

### impl<T: ArrowPrimitiveType> PrimitiveArray<T>

#### pub unsafe fn from_trusted_len_iter<I, P>(iter: I) -> Self where P: Borrow<Option<<T as ArrowPrimitiveType>::Native>>, I: IntoIterator<Item = P>

Creates a `PrimitiveArray` from an iterator of trusted length.

##### Safety

The iterator must be `TrustedLen`, i.e. `size_hint().1` must correctly report its length.
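A short hedged sketch of the temporal accessors, assuming `TimestampSecondArray`'s `From<Vec<i64>>` constructor and chrono's default `Display` formatting:

```
use arrow_array::TimestampSecondArray;

// 100 seconds after the Unix epoch
let array = TimestampSecondArray::from(vec![100]);
let dt = array.value_as_datetime(0).unwrap();
assert_eq!(dt.to_string(), "1970-01-01 00:01:40");
```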
### impl<T: ArrowTimestampType> PrimitiveArray<T>

#### pub fn from_vec(data: Vec<i64>, timezone: Option<String>) -> Self where Self: From<Vec<i64>>

👎 Deprecated: use `with_timezone_opt` instead. Constructs a timestamp array from a vec of i64 values and an optional timezone.

#### pub fn from_opt_vec(data: Vec<Option<i64>>, timezone: Option<String>) -> Self where Self: From<Vec<Option<i64>>>

👎 Deprecated: use `with_timezone_opt` instead. Constructs a timestamp array from a vec of `Option<i64>` values and an optional timezone.

#### pub fn timezone(&self) -> Option<&str>

Returns the timezone of this array, if any.

#### pub fn with_timezone(self, timezone: impl Into<Arc<str>>) -> Self

Constructs a timestamp array with a new timezone.

#### pub fn with_timezone_utc(self) -> Self

Constructs a timestamp array with the UTC timezone.

#### pub fn with_timezone_opt<S: Into<Arc<str>>>(self, timezone: Option<S>) -> Self

Constructs a timestamp array with an optional timezone.

### impl<T: DecimalType + ArrowPrimitiveType> PrimitiveArray<T>

#### pub fn with_precision_and_scale(self, precision: u8, scale: i8) -> Result<Self, ArrowError>

Returns a decimal array with the same data as self, with the specified precision and scale. See `validate_decimal_precision_and_scale`.

#### pub fn validate_decimal_precision(&self, precision: u8) -> Result<(), ArrowError>

Validates that the values in this array can be properly interpreted with the specified precision.

#### pub fn null_if_overflow_precision(&self, precision: u8) -> Self

Validates the decimal array: any slot whose value overflows the specified precision is replaced with null.

#### pub fn value_as_string(&self, row: usize) -> String

Returns `Self::value` formatted as a string.

#### pub fn precision(&self) -> u8

Returns the decimal precision of this array.

#### pub fn scale(&self) -> i8

Returns the decimal scale of this array.
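A small sketch of the timezone and decimal helpers above (assuming only the documented API; the literal values are illustrative):

```
use arrow_array::{Decimal128Array, TimestampSecondArray};

// Attaching a timezone changes the semantics, not the stored i64 values
let ts = TimestampSecondArray::from(vec![0, 3600]).with_timezone("+00:00");
assert_eq!(ts.timezone(), Some("+00:00"));

// With scale 2, the raw value 123 is read as 1.23
let d = Decimal128Array::from(vec![123_i128, -456])
    .with_precision_and_scale(10, 2)
    .unwrap();
assert_eq!(d.value_as_string(0), "1.23");
assert_eq!(d.value_as_string(1), "-4.56");
```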
Trait Implementations
---

### impl<T: ArrowPrimitiveType> Array for PrimitiveArray<T>

#### fn as_any(&self) -> &dyn Any
Returns the array as `Any` so that it can be downcasted to a specific implementation.

#### fn to_data(&self) -> ArrayData
Returns the underlying data of this array.

#### fn into_data(self) -> ArrayData
Returns the underlying data of this array.

#### fn data_type(&self) -> &DataType
Returns a reference to the `DataType` of this array.

#### fn slice(&self, offset: usize, length: usize) -> ArrayRef
Returns a zero-copy slice of this array with the indicated offset and length.

#### fn len(&self) -> usize
Returns the length (i.e., number of elements) of this array.

#### fn is_empty(&self) -> bool
Returns whether this array is empty.

#### fn offset(&self) -> usize
Returns the offset into the underlying data used by this array(-slice). Note that the underlying data can be shared by many arrays. This defaults to `0`.

#### fn get_buffer_memory_size(&self) -> usize
Returns the total number of bytes of memory pointed to by this array. The buffers store bytes in the Arrow memory format, and include the data as well as the validity map.

#### fn get_array_memory_size(&self) -> usize
Returns the total number of bytes of memory occupied physically by this array. This value will always be greater than that returned by `get_buffer_memory_size()` and includes the overhead of the data structures that contain the pointers to the various buffers.

#### fn logical_nulls(&self) -> Option<NullBuffer>
Returns the logical null buffer of this array, if any.

#### fn is_null(&self, index: usize) -> bool
Returns whether the element at `index` is null. When using this function on a slice, the index is relative to the slice.

#### fn is_valid(&self, index: usize) -> bool
Returns whether the element at `index` is not null. When using this function on a slice, the index is relative to the slice.

#### fn null_count(&self) -> usize
Returns the total number of physical null values in this array.

#### fn is_nullable(&self) -> bool
Returns `false` if the array is guaranteed to not contain any logical nulls.

### impl<'a, T: ArrowPrimitiveType> ArrayAccessor for &'a PrimitiveArray<T>

#### type Item = T::Native
The Arrow type of the element being accessed.

#### fn value(&self, index: usize) -> Self::Item
Returns the element at index `i`.

#### unsafe fn value_unchecked(&self, index: usize) -> Self::Item
Returns the element at index `i`, without bounds checking.

### impl<T: ArrowPrimitiveType> Clone for PrimitiveArray<T>

#### fn clone(&self) -> Self
Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl<T: ArrowPrimitiveType> Debug for PrimitiveArray<T>

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl<T: ArrowPrimitiveType> From<ArrayData> for PrimitiveArray<T>

#### fn from(data: ArrayData) -> Self
Converts to this type from the input type.

### impl<T: ArrowPrimitiveType> From<PrimitiveArray<T>> for ArrayData

#### fn from(array: PrimitiveArray<T>) -> Self
Converts to this type from the input type.

### impl From<Vec<T::Native>> for PrimitiveArray<T> and impl From<Vec<Option<T::Native>>> for PrimitiveArray<T>

`fn from(data: Vec<...>) -> Self` converts the input vec into an array. Both conversions are implemented separately for every concrete primitive type: Date32Type, Date64Type, Decimal128Type, Decimal256Type, DurationSecondType, DurationMillisecondType, DurationMicrosecondType, DurationNanosecondType, Float16Type, Float32Type, Float64Type, Int8Type, Int16Type, Int32Type, Int64Type, IntervalDayTimeType, IntervalMonthDayNanoType, IntervalYearMonthType, Time32SecondType, Time32MillisecondType, Time64MicrosecondType, Time64NanosecondType, TimestampSecondType, TimestampMillisecondType, TimestampMicrosecondType, TimestampNanosecondType, UInt8Type, UInt16Type, UInt32Type, and UInt64Type.

### impl<T: ArrowPrimitiveType, Ptr: Into<NativeAdapter<T>>> FromIterator<Ptr> for PrimitiveArray<T>

#### fn from_iter<I: IntoIterator<Item = Ptr>>(iter: I) -> Self
Creates a value from an iterator.

### impl<'a, T: ArrowPrimitiveType> IntoIterator for &'a PrimitiveArray<T>
Creates an iterator from a value.

### impl<T: ArrowPrimitiveType> PartialEq for PrimitiveArray<T>

#### fn eq(&self, other: &Self) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

Auto Trait Implementations
---

### impl<T> RefUnwindSafe for PrimitiveArray<T> where <T as ArrowPrimitiveType>::Native: RefUnwindSafe
### impl<T> Send for PrimitiveArray<T>
### impl<T> Sync for PrimitiveArray<T>
### impl<T> Unpin for PrimitiveArray<T> where <T as ArrowPrimitiveType>::Native: Unpin
### impl<T> UnwindSafe for PrimitiveArray<T> where <T as ArrowPrimitiveType>::Native: UnwindSafe

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized

#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized

#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized

#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

### impl<T> Datum for T where T: Array

#### fn get(&self) -> (&dyn Array, bool)
Returns the value for this `Datum` and a boolean indicating if the value is scalar.

### impl<T> From<T> for T

#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>

#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> ToOwned for T where T: Clone

#### type Owned = T
The resulting type after obtaining ownership.

#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.

#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>

#### type Error = Infallible
The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

### impl<T> Allocation for T where T: RefUnwindSafe + Send + Sync
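To tie the `From` and `FromIterator` conversions above together, a short sketch:

```
use arrow_array::{Array, Float64Array};

// From a Vec of plain values (no nulls)
let a = Float64Array::from(vec![1.0, 2.0, 3.0]);
// From a Vec of Option values (nullable)
let b = Float64Array::from(vec![Some(1.0), None]);
// From any iterator, via FromIterator
let c: Float64Array = (0..3).map(|i| Some(i as f64)).collect();

assert_eq!(a.len(), 3);
assert_eq!(b.null_count(), 1);
assert_eq!(c.value(2), 2.0);
```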
Type Alias arrow_array::array::ArrayRef
===

```
pub type ArrayRef = Arc<dyn Array>;
```

A reference-counted reference to a generic `Array`

Aliased Type
---

```
struct ArrayRef { /* private fields */ }
```

Trait Implementations
---

### impl Array for ArrayRef

Ergonomics: allow use of an `ArrayRef` as a `&dyn Array`. Every method (`as_any`, `to_data`, `into_data`, `data_type`, `slice`, `len`, `is_empty`, `offset`, `is_null`, `is_valid`, `null_count`, `is_nullable`, `logical_nulls`, `get_buffer_memory_size`, `get_array_memory_size`) delegates to the wrapped array and behaves exactly as documented for the `Array` implementation on `PrimitiveArray` above.
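A quick sketch of what the alias buys you: an `ArrayRef` is cheap to clone and can be shared between record batches and compute kernels:

```
use std::sync::Arc;
use arrow_array::{Array, ArrayRef, Int32Array};

let col: ArrayRef = Arc::new(Int32Array::from(vec![1, 2, 3]));
let shared = col.clone(); // bumps the Arc reference count; no data copy
assert_eq!(col.len(), shared.len());
```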
### impl AsArray for ArrayRef

Provides the same ergonomic downcasting methods documented for the `AsArray` trait below (`as_boolean_opt`, `as_primitive_opt`, `as_bytes_opt`, `as_string_opt`, `as_binary_opt`, `as_struct_opt`, `as_list_opt`, `as_fixed_size_binary_opt`, `as_fixed_size_list_opt`, `as_map_opt`, `as_dictionary_opt`, `as_any_dictionary_opt`, and their panicking counterparts), applied to the wrapped array.

### impl<T, A> Deref for Arc<T, A> where A: Allocator, T: ?Sized

#### type Target = T
The resulting type after dereferencing.

#### fn deref(&self) -> &T
Dereferences the value.
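The `_opt` variants are the graceful counterpart to the panicking ones; a short sketch of the fallback pattern on an `ArrayRef`:

```
use std::sync::Arc;
use arrow_array::{cast::AsArray, types::Int32Type, ArrayRef, StringArray};

let col: ArrayRef = Arc::new(StringArray::from(vec!["x", "y"]));
// A mismatched downcast returns None instead of panicking
assert!(col.as_primitive_opt::<Int32Type>().is_none());
// The panicking variant is convenient when the type is known
assert_eq!(col.as_string::<i32>().value(0), "x");
```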
Struct arrow_array::RecordBatch
===

```
pub struct RecordBatch { /* private fields */ }
```

A two-dimensional batch of column-oriented data with a defined schema.

A `RecordBatch` is a two-dimensional dataset of a number of contiguous arrays, each the same length. A record batch has a schema which must match its arrays' datatypes. Record batches are a convenient unit of work for various serialization and computation functions, possibly incremental.

Implementations
---

### impl RecordBatch

#### pub fn try_new(schema: SchemaRef, columns: Vec<ArrayRef>) -> Result<Self, ArrowError>

Creates a `RecordBatch` from a schema and columns. Expects the following:

* the vec of columns is not empty
* the schema and column data types have equal lengths and match
* each array in columns has the same length

If the conditions are not met, an error is returned.

##### Example

```
use std::sync::Arc;
use arrow_array::{Int32Array, RecordBatch};
use arrow_schema::{DataType, Field, Schema};

let id_array = Int32Array::from(vec![1, 2, 3, 4, 5]);
let schema = Schema::new(vec![
    Field::new("id", DataType::Int32, false)
]);

let batch = RecordBatch::try_new(
    Arc::new(schema),
    vec![Arc::new(id_array)]
).unwrap();
```

#### pub fn try_new_with_options(schema: SchemaRef, columns: Vec<ArrayRef>, options: &RecordBatchOptions) -> Result<Self, ArrowError>

Creates a `RecordBatch` from a schema and columns, with additional options, such as whether to strictly validate field names. See `RecordBatch::try_new` for the expected conditions.

#### pub fn new_empty(schema: SchemaRef) -> Self

Creates a new empty `RecordBatch`.

#### pub fn with_schema(self, schema: SchemaRef) -> Result<Self, ArrowError>

Overrides the schema of this `RecordBatch`. Returns an error if `schema` is not a superset of the current schema as determined by `Schema::contains`.

#### pub fn schema(&self) -> SchemaRef

Returns the `Schema` of the record batch.

#### pub fn project(&self, indices: &[usize]) -> Result<RecordBatch, ArrowError>

Projects the schema onto the specified columns.

#### pub fn num_columns(&self) -> usize

Returns the number of columns in the record batch.

##### Example

```
let id_array = Int32Array::from(vec![1, 2, 3, 4, 5]);
let schema = Schema::new(vec![
    Field::new("id", DataType::Int32, false)
]);

let batch = RecordBatch::try_new(Arc::new(schema), vec![Arc::new(id_array)]).unwrap();
assert_eq!(batch.num_columns(), 1);
```

#### pub fn num_rows(&self) -> usize

Returns the number of rows in each column.

##### Example

```
let id_array = Int32Array::from(vec![1, 2, 3, 4, 5]);
let schema = Schema::new(vec![
    Field::new("id", DataType::Int32, false)
]);

let batch = RecordBatch::try_new(Arc::new(schema), vec![Arc::new(id_array)]).unwrap();
assert_eq!(batch.num_rows(), 5);
```

#### pub fn column(&self, index: usize) -> &ArrayRef

Get a reference to a column's array by index.

##### Panics

Panics if `index` is outside of `0..num_columns`.

#### pub fn column_by_name(&self, name: &str) -> Option<&ArrayRef>

Get a reference to a column's array by name.

#### pub fn columns(&self) -> &[ArrayRef]

Get a reference to all columns in the record batch.

#### pub fn slice(&self, offset: usize, length: usize) -> RecordBatch

Return a new `RecordBatch` where each column is sliced according to `offset` and `length`.

##### Panics

Panics if `offset + length` is greater than the column length.
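`project` and `slice` above have no examples in the listing; a minimal sketch (column names are illustrative):

```
use std::sync::Arc;
use arrow_array::{ArrayRef, Int32Array, RecordBatch, StringArray};

let id: ArrayRef = Arc::new(Int32Array::from(vec![1, 2, 3]));
let name: ArrayRef = Arc::new(StringArray::from(vec!["a", "b", "c"]));
let batch = RecordBatch::try_from_iter(vec![("id", id), ("name", name)]).unwrap();

// Keep only the "name" column (index 1)
let projected = batch.project(&[1]).unwrap();
assert_eq!(projected.num_columns(), 1);

// Zero-copy window over the last two rows
let tail = batch.slice(1, 2);
assert_eq!(tail.num_rows(), 2);
```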
#### pub fn try_from_iter<I, F>(value: I) -> Result<Self, ArrowError> where I: IntoIterator<Item = (F, ArrayRef)>, F: AsRef<str>

Create a `RecordBatch` from an iterable list of pairs of the form `(field_name, array)`, with the same requirements on fields and arrays as `RecordBatch::try_new`. This method is often used to create a single `RecordBatch` from arrays, e.g. for testing.

The resulting schema is marked as nullable for each column whose array has any nulls. To explicitly specify nullability, use `RecordBatch::try_from_iter_with_nullable`.

Example:

```
use std::sync::Arc;
use arrow_array::{ArrayRef, Int32Array, RecordBatch, StringArray};

let a: ArrayRef = Arc::new(Int32Array::from(vec![1, 2]));
let b: ArrayRef = Arc::new(StringArray::from(vec!["a", "b"]));

let record_batch = RecordBatch::try_from_iter(vec![
  ("a", a),
  ("b", b),
]);
```

#### pub fn try_from_iter_with_nullable<I, F>(value: I) -> Result<Self, ArrowError> where I: IntoIterator<Item = (F, ArrayRef, bool)>, F: AsRef<str>

Create a `RecordBatch` from an iterable list of tuples of the form `(field_name, array, nullable)`, with the same requirements on fields and arrays as `RecordBatch::try_new`. This method is often used to create a single `RecordBatch` from arrays, e.g. for testing.

Example:

```
use std::sync::Arc;
use arrow_array::{ArrayRef, Int32Array, RecordBatch, StringArray};

let a: ArrayRef = Arc::new(Int32Array::from(vec![1, 2]));
let b: ArrayRef = Arc::new(StringArray::from(vec![Some("a"), Some("b")]));

// Note: neither `a` nor `b` has any actual nulls, but we mark
// `b` as nullable
let record_batch = RecordBatch::try_from_iter_with_nullable(vec![
  ("a", a, false),
  ("b", b, true),
]);
```

#### pub fn get_array_memory_size(&self) -> usize

Returns the total number of bytes of memory occupied physically by this batch.

Trait Implementations
---

### impl Clone for RecordBatch

#### fn clone(&self) -> RecordBatch
Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl Debug for RecordBatch

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl From<&StructArray> for RecordBatch

#### fn from(struct_array: &StructArray) -> Self
Converts to this type from the input type.

### impl From<RecordBatch> for StructArray

#### fn from(value: RecordBatch) -> Self
Converts to this type from the input type.

### impl From<StructArray> for RecordBatch

#### fn from(value: StructArray) -> Self
Converts to this type from the input type.

### impl Index<&str> for RecordBatch

#### type Output = Arc<dyn Array>
The returned type after indexing.

#### fn index(&self, name: &str) -> &Self::Output
Get a reference to a column's array by name.

##### Panics

Panics if the name is not in the schema.

### impl PartialEq for RecordBatch

#### fn eq(&self, other: &RecordBatch) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl StructuralPartialEq for RecordBatch

Auto Trait Implementations
---

### impl !RefUnwindSafe for RecordBatch
### impl Send for RecordBatch
### impl Sync for RecordBatch
### impl Unpin for RecordBatch
### impl !UnwindSafe for RecordBatch

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized

#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized

#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized

#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>

#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> ToOwned for T where T: Clone

#### type Owned = T
The resulting type after obtaining ownership.

#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.

#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>

#### type Error = Infallible
The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
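The `Index` impl above panics on a missing name, while `column_by_name` returns an `Option`; a brief sketch of both:

```
use std::sync::Arc;
use arrow_array::{Array, ArrayRef, Int32Array, RecordBatch};

let id: ArrayRef = Arc::new(Int32Array::from(vec![1, 2]));
let batch = RecordBatch::try_from_iter(vec![("id", id)]).unwrap();

let col = &batch["id"]; // would panic if "id" were absent
assert_eq!(col.len(), 2);
assert!(batch.column_by_name("missing").is_none());
```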
Trait arrow_array::cast::AsArray
===

```
pub trait AsArray: Sealed {
    // Required methods
    fn as_boolean_opt(&self) -> Option<&BooleanArray>;
    fn as_primitive_opt<T: ArrowPrimitiveType>(&self) -> Option<&PrimitiveArray<T>>;
    fn as_bytes_opt<T: ByteArrayType>(&self) -> Option<&GenericByteArray<T>>;
    fn as_struct_opt(&self) -> Option<&StructArray>;
    fn as_list_opt<O: OffsetSizeTrait>(&self) -> Option<&GenericListArray<O>>;
    fn as_fixed_size_binary_opt(&self) -> Option<&FixedSizeBinaryArray>;
    fn as_fixed_size_list_opt(&self) -> Option<&FixedSizeListArray>;
    fn as_map_opt(&self) -> Option<&MapArray>;
    fn as_dictionary_opt<K: ArrowDictionaryKeyType>(&self) -> Option<&DictionaryArray<K>>;
    fn as_any_dictionary_opt(&self) -> Option<&dyn AnyDictionaryArray>;

    // Provided methods
    fn as_boolean(&self) -> &BooleanArray { ... }
    fn as_primitive<T: ArrowPrimitiveType>(&self) -> &PrimitiveArray<T> { ... }
    fn as_bytes<T: ByteArrayType>(&self) -> &GenericByteArray<T> { ... }
    fn as_string_opt<O: OffsetSizeTrait>(&self) -> Option<&GenericStringArray<O>> { ... }
    fn as_string<O: OffsetSizeTrait>(&self) -> &GenericStringArray<O> { ... }
    fn as_binary_opt<O: OffsetSizeTrait>(&self) -> Option<&GenericBinaryArray<O>> { ... }
    fn as_binary<O: OffsetSizeTrait>(&self) -> &GenericBinaryArray<O> { ... }
    fn as_struct(&self) -> &StructArray { ... }
    fn as_list<O: OffsetSizeTrait>(&self) -> &GenericListArray<O> { ... }
    fn as_fixed_size_binary(&self) -> &FixedSizeBinaryArray { ... }
    fn as_fixed_size_list(&self) -> &FixedSizeListArray { ... }
    fn as_map(&self) -> &MapArray { ... }
    fn as_dictionary<K: ArrowDictionaryKeyType>(&self) -> &DictionaryArray<K> { ... }
    fn as_any_dictionary(&self) -> &dyn AnyDictionaryArray { ... }
}
```
An extension trait for `dyn Array` that provides ergonomic downcasting

```
use std::sync::Arc;
use arrow_array::{cast::AsArray, types::Int32Type, ArrayRef, Int32Array};

let col = Arc::new(Int32Array::from(vec![1, 2, 3])) as ArrayRef;
assert_eq!(col.as_primitive::<Int32Type>().values(), &[1, 2, 3]);
```

Required Methods
---

#### fn as_boolean_opt(&self) -> Option<&BooleanArray>
Downcast this to a `BooleanArray` returning `None` if not possible

#### fn as_primitive_opt<T: ArrowPrimitiveType>(&self) -> Option<&PrimitiveArray<T>>
Downcast this to a `PrimitiveArray` returning `None` if not possible

#### fn as_bytes_opt<T: ByteArrayType>(&self) -> Option<&GenericByteArray<T>>
Downcast this to a `GenericByteArray` returning `None` if not possible

#### fn as_struct_opt(&self) -> Option<&StructArray>
Downcast this to a `StructArray` returning `None` if not possible

#### fn as_list_opt<O: OffsetSizeTrait>(&self) -> Option<&GenericListArray<O>>
Downcast this to a `GenericListArray` returning `None` if not possible

#### fn as_fixed_size_binary_opt(&self) -> Option<&FixedSizeBinaryArray>
Downcast this to a `FixedSizeBinaryArray` returning `None` if not possible

#### fn as_fixed_size_list_opt(&self) -> Option<&FixedSizeListArray>
Downcast this to a `FixedSizeListArray` returning `None` if not possible

#### fn as_map_opt(&self) -> Option<&MapArray>
Downcast this to a `MapArray` returning `None` if not possible

#### fn as_dictionary_opt<K: ArrowDictionaryKeyType>(&self) -> Option<&DictionaryArray<K>>
Downcast this to a `DictionaryArray` returning `None` if not possible

#### fn as_any_dictionary_opt(&self) -> Option<&dyn AnyDictionaryArray>
Downcast this to an `AnyDictionaryArray` returning `None` if not possible

Provided Methods
---

#### fn as_boolean(&self) -> &BooleanArray
Downcast this to a `BooleanArray` panicking if not possible

#### fn as_primitive<T: ArrowPrimitiveType>(&self) -> &PrimitiveArray<T>
Downcast this to a `PrimitiveArray` panicking if not possible

#### fn as_bytes<T: ByteArrayType>(&self) -> &GenericByteArray<T>
Downcast this to a `GenericByteArray` panicking if not possible

#### fn as_string_opt<O: OffsetSizeTrait>(&self) -> Option<&GenericStringArray<O>>
Downcast this to a `GenericStringArray` returning `None` if not possible

#### fn as_string<O: OffsetSizeTrait>(&self) -> &GenericStringArray<O>
Downcast this to a `GenericStringArray` panicking if not possible

#### fn as_binary_opt<O: OffsetSizeTrait>(&self) -> Option<&GenericBinaryArray<O>>
Downcast this to a `GenericBinaryArray` returning `None` if not possible

#### fn as_binary<O: OffsetSizeTrait>(&self) -> &GenericBinaryArray<O>
Downcast this to a `GenericBinaryArray` panicking if not possible

#### fn as_struct(&self) -> &StructArray
Downcast this to a `StructArray` panicking if not possible

#### fn as_list<O: OffsetSizeTrait>(&self) -> &GenericListArray<O>
Downcast this to a `GenericListArray` panicking if not possible

#### fn as_fixed_size_binary(&self) -> &FixedSizeBinaryArray
Downcast this to a `FixedSizeBinaryArray` panicking if not possible

#### fn as_fixed_size_list(&self) -> &FixedSizeListArray
Downcast this to a `FixedSizeListArray` panicking if not possible

#### fn as_map(&self) -> &MapArray
Downcast this to a `MapArray` panicking if not possible

#### fn as_dictionary<K: ArrowDictionaryKeyType>(&self) -> &DictionaryArray<K>
Downcast this to a `DictionaryArray` panicking if not possible

#### fn as_any_dictionary(&self) -> &dyn AnyDictionaryArray
Downcast this to an `AnyDictionaryArray` panicking if not possible

Implementors
---

### impl AsArray for ArrayRef
### impl AsArray for dyn Array + '_
Module arrow_array::array
===

The concrete array definitions

Re-exports
---

* `pub use crate::types::ArrowPrimitiveType;`

Structs
---

* `BooleanArray`: An array of boolean values
* `DictionaryArray`: An array of dictionary encoded values
* `FixedSizeBinaryArray`: An array of fixed size binary arrays
* `FixedSizeListArray`: An array of fixed length lists, similar to JSON arrays (e.g. `["A", "B"]`)
* `GenericByteArray`: An array of variable length byte arrays
* `GenericListArray`: An array of variable length lists, similar to JSON arrays (e.g. `["A", "B", "C"]`)
* `MapArray`: An array of key-value maps
* `NativeAdapter`: An optional primitive value
* `NullArray`: An array of null values
* `PrimitiveArray`: An array of primitive values
* `RunArray`: An array of run-end encoded values
* `StructArray`: An array of structs
* `TypedDictionaryArray`: A `DictionaryArray` typed on its child values array
* `TypedRunArray`: A `RunArray` typed on its child values array
* `UnionArray`: An array of values of varying types

Traits
---

* `AnyDictionaryArray`: A `DictionaryArray` with the key type erased
* `Array`: An array in the arrow columnar format
* `ArrayAccessor`: A generic trait for accessing the values of an `Array`
* `OffsetSizeTrait`: A type that can be used within a variable-size array to encode offset information

Functions
---

* `make_array`: Constructs an array using the input `data`. Returns a reference-counted `Array` instance.
* `new_empty_array`: Creates a new empty array
* `new_null_array`: Creates a new array of `data_type` of length `length` filled entirely with `NULL` values

Type Aliases
---

* `ArrayRef`: A reference-counted reference to a generic `Array`
* `BinaryArray`: A `GenericBinaryArray` of `[u8]` using `i32` offsets
* `Date32Array`: A `PrimitiveArray` of days since UNIX epoch stored as `i32`
* `Date64Array`: A `PrimitiveArray` of milliseconds since UNIX epoch stored as `i64`
* `Decimal128Array`: A `PrimitiveArray` of 128-bit fixed point decimals
* `Decimal256Array`: A `PrimitiveArray` of 256-bit fixed point decimals
* `DurationMicrosecondArray`: A `PrimitiveArray` of elapsed durations in microseconds
* `DurationMillisecondArray`: A `PrimitiveArray` of elapsed durations in milliseconds
* `DurationNanosecondArray`: A `PrimitiveArray` of elapsed durations in nanoseconds
* `DurationSecondArray`: A `PrimitiveArray` of elapsed durations in seconds
* `Float16Array`: A `PrimitiveArray` of `f16`
* `Float32Array`: A `PrimitiveArray` of `f32`
* `Float64Array`: A `PrimitiveArray` of `f64`
* `GenericBinaryArray`: A `GenericByteArray` for storing `[u8]`
* `GenericStringArray`: A `GenericByteArray` for storing `str`
* `Int8Array`: A `PrimitiveArray` of `i8`
* `Int8DictionaryArray`: A `DictionaryArray` indexed by `i8`
* `Int16Array`: A `PrimitiveArray` of `i16`
* `Int16DictionaryArray`: A `DictionaryArray` indexed by `i16`
* `Int16RunArray`: A `RunArray` with `i16` run ends
* `Int32Array`: A `PrimitiveArray` of `i32`
* `Int32DictionaryArray`: A `DictionaryArray` indexed by `i32`
* `Int32RunArray`: A `RunArray` with `i32` run ends
* `Int64Array`: A `PrimitiveArray` of `i64`
* `Int64DictionaryArray`: A `DictionaryArray` indexed by `i64`
* `Int64RunArray`: A `RunArray` with `i64` run ends
* `IntervalDayTimeArray`: A `PrimitiveArray` of "calendar" intervals in days and milliseconds
* `IntervalMonthDayNanoArray`: A `PrimitiveArray` of "calendar" intervals in months, days, and nanoseconds
* `IntervalYearMonthArray`: A `PrimitiveArray` of "calendar" intervals in months
* `LargeBinaryArray`: A `GenericBinaryArray` of `[u8]` using `i64` offsets
* `LargeListArray`: A `GenericListArray` of variable size lists, storing offsets as `i64`
* `LargeStringArray`: A `GenericStringArray` of `str` using `i64` offsets
* `ListArray`: A `GenericListArray` of variable size lists, storing offsets as `i32`
* `StringArray`: A `GenericStringArray` of `str` using `i32` offsets
* `Time32MillisecondArray`: A `PrimitiveArray` of milliseconds since midnight stored as `i32`
* `Time32SecondArray`: A `PrimitiveArray` of seconds since midnight stored as `i32`
* `Time64MicrosecondArray`: A `PrimitiveArray` of microseconds since midnight stored as `i64`
* `Time64NanosecondArray`: A `PrimitiveArray` of nanoseconds since midnight stored as `i64`
* `TimestampMicrosecondArray`: A `PrimitiveArray` of microseconds since UNIX epoch stored as `i64`
* `TimestampMillisecondArray`: A `PrimitiveArray` of milliseconds since UNIX epoch stored as `i64`
* `TimestampNanosecondArray`: A `PrimitiveArray` of nanoseconds since UNIX epoch stored as `i64`
* `TimestampSecondArray`: A `PrimitiveArray` of seconds since UNIX epoch stored as `i64`
* `UInt8Array`: A `PrimitiveArray` of `u8`
* `UInt8DictionaryArray`: A `DictionaryArray` indexed by `u8`
* `UInt16Array`: A `PrimitiveArray` of `u16`
* `UInt16DictionaryArray`: A `DictionaryArray` indexed by `u16`
* `UInt32Array`: A `PrimitiveArray` of `u32`
* `UInt32DictionaryArray`: A `DictionaryArray` indexed by `u32`
* `UInt64Array`: A `PrimitiveArray` of `u64`
* `UInt64DictionaryArray`: A `DictionaryArray` indexed by `u64`
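A brief sketch of the module-level constructors listed above:

```
use arrow_array::array::{new_empty_array, new_null_array};
use arrow_array::Array;
use arrow_schema::DataType;

let nulls = new_null_array(&DataType::Int32, 3);
assert_eq!(nulls.len(), 3);
assert_eq!(nulls.null_count(), 3);

let empty = new_empty_array(&DataType::Utf8);
assert!(empty.is_empty());
```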
Module arrow_array::builder
===

Defines push-based APIs for constructing arrays

Basic Usage
---

Builders can be used to build simple, non-nested arrays

```
use arrow_array::builder::Int32Builder;
use arrow_array::PrimitiveArray;

let mut a = Int32Builder::new();
a.append_value(1);
a.append_null();
a.append_value(2);
let a = a.finish();

assert_eq!(a, PrimitiveArray::from(vec![Some(1), None, Some(2)]));
```

```
use arrow_array::builder::StringBuilder;
use arrow_array::StringArray;

let mut a = StringBuilder::new();
a.append_value("foo");
a.append_value("bar");
a.append_null();
let a = a.finish();

assert_eq!(a, StringArray::from_iter([Some("foo"), Some("bar"), None]));
```

Nested Usage
---

Builders can also be used to build more complex nested arrays, such as lists

```
use arrow_array::builder::{Int32Builder, ListBuilder};
use arrow_array::{types::Int32Type, ListArray};

let mut a = ListBuilder::new(Int32Builder::new());
// [1, 2]
a.values().append_value(1);
a.values().append_value(2);
a.append(true);
// null
a.append(false);
// []
a.append(true);
// [3, null]
a.values().append_value(3);
a.values().append_null();
a.append(true);

// [[1, 2], null, [], [3, null]]
let a = a.finish();

assert_eq!(a, ListArray::from_iter_primitive::<Int32Type, _, _>([
    Some(vec![Some(1), Some(2)]),
    None,
    Some(vec![]),
    Some(vec![Some(3), None])]
))
```

Custom Builders
---

It is common to have a collection of statically defined Rust types that you want to convert to Arrow arrays; an example of doing so is below.
```
use std::sync::Arc;
use arrow_array::builder::{Int32Builder, ListBuilder, StringBuilder};
use arrow_array::{ArrayRef, RecordBatch, StructArray};
use arrow_schema::{DataType, Field};

/// A custom row representation
struct MyRow {
    i32: i32,
    optional_i32: Option<i32>,
    string: Option<String>,
    i32_list: Option<Vec<Option<i32>>>,
}

/// Converts `Vec<Row>` into `StructArray`
#[derive(Debug, Default)]
struct MyRowBuilder {
    i32: Int32Builder,
    string: StringBuilder,
    i32_list: ListBuilder<Int32Builder>,
}

impl MyRowBuilder {
    fn append(&mut self, row: &MyRow) {
        self.i32.append_value(row.i32);
        self.string.append_option(row.string.as_ref());
        self.i32_list.append_option(row.i32_list.as_ref().map(|x| x.iter().copied()));
    }

    /// Note: returns StructArray to allow nesting within another array if desired
    fn finish(&mut self) -> StructArray {
        let i32 = Arc::new(self.i32.finish()) as ArrayRef;
        let i32_field = Arc::new(Field::new("i32", DataType::Int32, false));

        let string = Arc::new(self.string.finish()) as ArrayRef;
        // The string column must be nullable, since `append` may push nulls
        let string_field = Arc::new(Field::new("string", DataType::Utf8, true));

        let i32_list = Arc::new(self.i32_list.finish()) as ArrayRef;
        let value_field = Arc::new(Field::new("item", DataType::Int32, true));
        let i32_list_field = Arc::new(Field::new("i32_list", DataType::List(value_field), true));

        StructArray::from(vec![
            (i32_field, i32),
            (string_field, string),
            (i32_list_field, i32_list),
        ])
    }
}

impl<'a> Extend<&'a MyRow> for MyRowBuilder {
    fn extend<T: IntoIterator<Item = &'a MyRow>>(&mut self, iter: T) {
        iter.into_iter().for_each(|row| self.append(row));
    }
}

/// Converts a slice of [`MyRow`] to a [`RecordBatch`]
fn rows_to_batch(rows: &[MyRow]) -> RecordBatch {
    let mut builder = MyRowBuilder::default();
    builder.extend(rows);
    RecordBatch::from(&builder.finish())
}
```

Note that `optional_i32` is carried on `MyRow` but not written to the batch; a complete builder would add a second `Int32Builder` for it.

Structs
---

* `BooleanBufferBuilder`: Builder for `BooleanBuffer`
* `BooleanBuilder`: Builder for `BooleanArray`
* `BufferBuilder`: Builder for creating a Buffer object.
* `FixedSizeBinaryBuilder`: Builder for `FixedSizeBinaryArray`
* `FixedSizeListBuilder`: Builder for `FixedSizeListArray`
* `GenericByteBuilder`: Builder for `GenericByteArray`
* `GenericByteDictionaryBuilder`: Builder for `DictionaryArray` of `GenericByteArray`
* `GenericByteRunBuilder`: Builder for `RunArray` of `GenericByteArray`
* `GenericListBuilder`: Builder for `GenericListArray`
* `MapBuilder`: Builder for `MapArray`
* `MapFieldNames`: The `Field` names for a `MapArray`
* `NullBuilder`: Builder for `NullArray`
* `PrimitiveBuilder`: Builder for `PrimitiveArray`
* `PrimitiveDictionaryBuilder`: Builder for `DictionaryArray` of `PrimitiveArray`
* `PrimitiveRunBuilder`: Builder for `RunArray` of `PrimitiveArray`
* `StructBuilder`: Builder for `StructArray`
* `UnionBuilder`: Builder for `UnionArray`

Traits
---

* `ArrayBuilder`: Trait for dealing with different array builders at runtime

Functions
---

* `make_builder`: Returns a builder with capacity `capacity` that corresponds to the datatype `DataType`. This function is useful for constructing arrays from arbitrary vectors with a known/expected schema.

Type Aliases
---

* `BinaryBuilder`: Builder for `BinaryArray`
* `BinaryDictionaryBuilder`: Builder for `DictionaryArray` of `BinaryArray`
* `BinaryRunBuilder`: Builder for `RunArray` of `BinaryArray`
* `Date32BufferBuilder`: Buffer builder for 32-bit date type.
* `Date32Builder`: A 32-bit date array builder.
* `Date64BufferBuilder`: Buffer builder for 64-bit date type.
* `Date64Builder`: A 64-bit date array builder.
* `Decimal128BufferBuilder`: Buffer builder for 128-bit decimal type.
* `Decimal128Builder`: A decimal 128 array builder
* `Decimal256BufferBuilder`: Buffer builder for 256-bit decimal type.
* `Decimal256Builder`: A decimal 256 array builder
* `DurationMicrosecondBufferBuilder`: Buffer builder for elapsed time of microseconds unit.
* `DurationMicrosecondBuilder`: An elapsed time in microseconds array builder.
* `DurationMillisecondBufferBuilder`: Buffer builder for elapsed time of milliseconds unit.
* `DurationMillisecondBuilder`: An elapsed time in milliseconds array builder.
* `DurationNanosecondBufferBuilder`: Buffer builder for elapsed time of nanoseconds unit.
* `DurationNanosecondBuilder`: An elapsed time in nanoseconds array builder.
* `DurationSecondBufferBuilder`: Buffer builder for elapsed time of second unit.
* `DurationSecondBuilder`: An elapsed time in seconds array builder.
* `Float16BufferBuilder`: Buffer builder for 16-bit floating point type.
* `Float16Builder`: A 16-bit floating point array builder.
* `Float32BufferBuilder`: Buffer builder for 32-bit floating point type.
* `Float32Builder`: A 32-bit floating point array builder.
* `Float64BufferBuilder`: Buffer builder for 64-bit floating point type.
* `Float64Builder`: A 64-bit floating point array builder.
* `GenericBinaryBuilder`: Array builder for `GenericBinaryArray`
* `GenericStringBuilder`: Array builder for `GenericStringArray`
* `Int8BufferBuilder`: Buffer builder for signed 8-bit integer type.
* `Int8Builder`: A signed 8-bit integer array builder.
* `Int16BufferBuilder`: Buffer builder for signed 16-bit integer type.
* `Int16Builder`: A signed 16-bit integer array builder.
* `Int32BufferBuilder`: Buffer builder for signed 32-bit integer type.
* `Int32Builder`: A signed 32-bit integer array builder.
* `Int64BufferBuilder`: Buffer builder for signed 64-bit integer type.
* `Int64Builder`: A signed 64-bit integer array builder.
* `IntervalDayTimeBufferBuilder`: Buffer builder for "calendar" interval in days and milliseconds.
* `IntervalDayTimeBuilder`: A "calendar" interval in days and milliseconds array builder.
* `IntervalMonthDayNanoBufferBuilder`: Buffer builder for "calendar" interval in months, days, and nanoseconds.
* `IntervalMonthDayNanoBuilder`: A "calendar" interval in months, days, and nanoseconds array builder.
* `IntervalYearMonthBufferBuilder`: Buffer builder for "calendar" interval in months.
* `IntervalYearMonthBuilder`: A "calendar" interval in months array builder.
* `LargeBinaryBuilder`: Builder for `LargeBinaryArray`
* `LargeBinaryDictionaryBuilder`: Builder for `DictionaryArray` of `LargeBinaryArray`
* `LargeBinaryRunBuilder`: Builder for `RunArray` of `LargeBinaryArray`
* `LargeListBuilder`: Builder for `LargeListArray`
* `LargeStringBuilder`: Builder for `LargeStringArray`
* `LargeStringDictionaryBuilder`: Builder for `DictionaryArray` of `LargeStringArray`
* `LargeStringRunBuilder`: Builder for `RunArray` of `LargeStringArray`
* `ListBuilder`: Builder for `ListArray`
* `StringBuilder`: Builder for `StringArray`
* `StringDictionaryBuilder`: Builder for `DictionaryArray` of `StringArray`
* `StringRunBuilder`: Builder for `RunArray` of `StringArray`
* `Time32MillisecondBufferBuilder`: Buffer builder for 32-bit elapsed time since midnight of millisecond unit.
* `Time32MillisecondBuilder`: A 32-bit elapsed time in milliseconds array builder.
* `Time32SecondBufferBuilder`: Buffer builder for 32-bit elapsed time since midnight of second unit.
* `Time32SecondBuilder`: A 32-bit elapsed time in seconds array builder.
* `Time64MicrosecondBufferBuilder`: Buffer builder for 64-bit elapsed time since midnight of microsecond unit.
* `Time64MicrosecondBuilder`: A 64-bit elapsed time in microseconds array builder.
* `Time64NanosecondBufferBuilder`: Buffer builder for 64-bit elapsed time since midnight of nanosecond unit.
* `Time64NanosecondBuilder`: A 64-bit elapsed time in nanoseconds array builder.
* `TimestampMicrosecondBufferBuilder`: Buffer builder for timestamp type of microsecond unit.
* `TimestampMicrosecondBuilder`: A timestamp microsecond array builder.
* `TimestampMillisecondBufferBuilder`: Buffer builder for timestamp type of millisecond unit.
* `TimestampMillisecondBuilder`: A timestamp millisecond array builder.
* `TimestampNanosecondBufferBuilder`: Buffer builder for timestamp type of nanosecond unit.
* `TimestampNanosecondBuilder`: A timestamp nanosecond array builder.
* `TimestampSecondBufferBuilder`: Buffer builder for timestamp type of second unit.
* `TimestampSecondBuilder`: A timestamp second array builder.
* `UInt8BufferBuilder`: Buffer builder for unsigned 8-bit integer type.
* `UInt8Builder`: An unsigned 8-bit integer array builder.
* `UInt16BufferBuilder`: Buffer builder for unsigned 16-bit integer type.
* `UInt16Builder`: An unsigned 16-bit integer array builder.
* `UInt32BufferBuilder`: Buffer builder for unsigned 32-bit integer type.
* `UInt32Builder`: An unsigned 32-bit integer array builder.
* `UInt64BufferBuilder`: Buffer builder for unsigned 64-bit integer type.
* `UInt64Builder`: An unsigned 64-bit integer array builder.
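`make_builder` (listed under Functions above) is the runtime-typed entry point; a minimal sketch of pairing it with a downcast:

```
use arrow_array::builder::{make_builder, ArrayBuilder, Int32Builder};
use arrow_array::Array;
use arrow_schema::DataType;

// The DataType is only known at runtime here
let mut builder = make_builder(&DataType::Int32, 16);
builder
    .as_any_mut()
    .downcast_mut::<Int32Builder>()
    .expect("Int32 data type yields an Int32Builder")
    .append_value(42);
let array = builder.finish();
assert_eq!(array.len(), 1);
```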
Module arrow_array::cast
===

Defines helper functions for downcasting `dyn Array` to concrete types

Traits
---

* `AsArray`: An extension trait for `dyn Array` that provides ergonomic downcasting

Functions
---

* `as_boolean_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `BooleanArray`, panicking on failure.
* `as_decimal_array` (deprecated): Force downcast of an `Array`, such as an `ArrayRef`, to `Decimal128Array`, panicking on failure.
* `as_dictionary_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `DictionaryArray<T>`, panicking on failure.
* `as_fixed_size_list_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `FixedSizeListArray`, panicking on failure.
* `as_generic_binary_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `GenericBinaryArray<S>`, panicking on failure.
* `as_generic_list_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `GenericListArray<T>`, panicking on failure.
* `as_large_list_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `LargeListArray`, panicking on failure.
* `as_largestring_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `LargeStringArray`, panicking on failure.
* `as_list_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `ListArray`, panicking on failure.
* `as_map_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `MapArray`, panicking on failure.
* `as_null_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `NullArray`, panicking on failure.
* `as_primitive_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `PrimitiveArray<T>`, panicking on failure.
* `as_run_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `RunArray<T>`, panicking on failure.
* `as_string_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `StringArray`, panicking on failure.
* `as_struct_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `StructArray`, panicking on failure.
* `as_union_array`: Force downcast of an `Array`, such as an `ArrayRef`, to `UnionArray`, panicking on failure.
* `downcast_array`: Downcasts a `dyn Array` to a concrete type

Module arrow_array::iterator
===

Idiomatic iterators for `Array`

Structs
---

* `ArrayIter`: An iterator that returns `Some(T)` or `None`, that can be used on any `ArrayAccessor`

Type Aliases
---

* `BooleanIter`: an iterator that returns `Some(T)` or `None`, that can be used on any `BooleanArray`
* `FixedSizeBinaryIter`: an iterator that returns `Some(T)` or `None`, that can be used on any `FixedSizeBinaryArray`
* `FixedSizeListIter`: an iterator that returns `Some(T)` or `None`, that can be used on any `FixedSizeListArray`
* `GenericBinaryIter`: an iterator that returns `Some(T)` or `None`, that can be used on any `BinaryArray`
* `GenericListArrayIter`: an iterator that returns `Some(T)` or `None`, that can be used on any `ListArray`
* `GenericStringIter`: an iterator that returns `Some(T)` or `None`, that can be used on any `Utf8Array`
* `MapArrayIter`: an iterator that returns `Some(T)` or `None`, that can be used on any `MapArray`
* `PrimitiveIter`: an iterator that returns `Some(T)` or `None`, that can be used on any `PrimitiveArray`

Module arrow_array::run_iterator
===

Idiomatic iterator for `RunArray`

Structs
---

* `RunArrayIter`: Provides an idiomatic way to iterate over the run array. It returns `Some(T)` if there is a value or `None` if the value is null.

Module arrow_array::temporal_conversions
===

Conversion methods for dates and times.

Constants
---

* `EPOCH_DAYS_FROM_CE`: Number of days between 0001-01-01 and 1970-01-01
* `MICROSECONDS`: Number of microseconds in a second
* `MICROSECONDS_IN_DAY`: Number of microseconds in a day
* `MILLISECONDS`: Number of milliseconds in a second
* `MILLISECONDS_IN_DAY`: Number of milliseconds in a day
* `NANOSECONDS`: Number of nanoseconds in a second
* `NANOSECONDS_IN_DAY`: Number of nanoseconds in a day
* `SECONDS_IN_DAY`: Number of seconds in a day

Functions
---

* `as_date`: Converts an `ArrowPrimitiveType` to `NaiveDate`
* `as_datetime`: Converts an `ArrowPrimitiveType` to `NaiveDateTime`
* `as_datetime_with_timezone`: Converts an `ArrowPrimitiveType` to `DateTime<Tz>`
* `as_duration`: Converts an `ArrowPrimitiveType` to `Duration`
* `as_time`: Converts an `ArrowPrimitiveType` to `NaiveTime`
* `date32_to_datetime`: converts an `i32` representing a `date32` to `NaiveDateTime`
* `date64_to_datetime`: converts an `i64` representing a `date64` to `NaiveDateTime`
* `duration_ms_to_duration`: converts an `i64` representing a `duration(ms)` to `Duration`
* `duration_ns_to_duration`: converts an `i64` representing a `duration(ns)` to `Duration`
* `duration_s_to_duration`: converts an `i64` representing a `duration(s)` to `Duration`
* `duration_us_to_duration`: converts an `i64` representing a `duration(us)` to `Duration`
* `time32ms_to_time`: converts an `i32` representing a `time32(ms)` to `NaiveTime`
* `time32s_to_time`: converts an `i32` representing a `time32(s)` to `NaiveTime`
* `time64ns_to_time`: converts an `i64` representing a `time64(ns)` to `NaiveTime`
* `time64us_to_time`: converts an `i64` representing a `time64(us)` to `NaiveTime`
* `time_to_time32ms`: converts `NaiveTime` to an `i32` representing a `time32(ms)`
* `time_to_time32s`: converts `NaiveTime` to an `i32` representing a `time32(s)`
* `time_to_time64ns`: converts `NaiveTime` to an `i64` representing a `time64(ns)`
* `time_to_time64us`: converts `NaiveTime` to an `i64` representing a `time64(us)`
* `timestamp_ms_to_datetime`: converts an `i64` representing a `timestamp(ms)` to `NaiveDateTime`
* `timestamp_ns_to_datetime`: converts an `i64` representing a `timestamp(ns)` to `NaiveDateTime`
* `timestamp_s_to_datetime`: converts an `i64` representing a `timestamp(s)` to `NaiveDateTime`
* `timestamp_us_to_datetime`: converts an `i64` representing a `timestamp(us)` to `NaiveDateTime`
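A small sketch of the conversion helpers above (assuming the documented signatures, which return chrono values wrapped in `Option`; the display-format assertion assumes chrono's default `NaiveDateTime` formatting):

```
use arrow_array::temporal_conversions::{timestamp_ms_to_datetime, MILLISECONDS};

// 1_000 ms after the UNIX epoch is 1970-01-01T00:00:01
let dt = timestamp_ms_to_datetime(1_000).unwrap();
assert_eq!(dt.to_string(), "1970-01-01 00:00:01");
assert_eq!(MILLISECONDS, 1_000);
```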
Module arrow_array::timezone
===
Timezone for timestamp arrays
Structs
---
* `Tz`: An Arrow `TimeZone`
* `TzOffset`: An `Offset` for `Tz`
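A brief hedged sketch of parsing a `Tz`: fixed-offset strings should work out of the box, while named zones such as "America/New_York" additionally require the crate's `chrono-tz` feature (an assumption worth verifying against your version):

```
use arrow_array::timezone::Tz;
use chrono::TimeZone;

fn main() {
    // Parse a fixed-offset timezone string into a Tz
    let tz: Tz = "+05:30".parse().unwrap();

    // Interpret the UNIX epoch in that timezone
    let dt = tz.timestamp_opt(0, 0).unwrap();
    assert_eq!(dt.naive_local().to_string(), "1970-01-01 05:30:00");
}
```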
Module arrow_array::types
===
Zero-sized types used to parameterize generic array implementations
Structs
---
* `BooleanType`: A boolean datatype
* `Date32Type`: A 32-bit date type representing the elapsed time since the UNIX epoch in days (32 bits).
* `Date64Type`: A 64-bit date type representing the elapsed time since the UNIX epoch in milliseconds (64 bits).
* `Decimal128Type`: The decimal type for a `Decimal128Array`
* `Decimal256Type`: The decimal type for a `Decimal256Array`
* `DurationMicrosecondType`: An elapsed time type in microseconds.
* `DurationMillisecondType`: An elapsed time type in milliseconds.
* `DurationNanosecondType`: An elapsed time type in nanoseconds.
* `DurationSecondType`: An elapsed time type in seconds.
* `Float16Type`: A 16-bit floating point number type.
* `Float32Type`: A 32-bit floating point number type.
* `Float64Type`: A 64-bit floating point number type.
* `GenericBinaryType`: `ByteArrayType` for binary arrays
* `GenericStringType`: `ByteArrayType` for string arrays
* `Int8Type`: A signed 8-bit integer type.
* `Int16Type`: A signed 16-bit integer type.
* `Int32Type`: A signed 32-bit integer type.
* `Int64Type`: A signed 64-bit integer type.
* `IntervalDayTimeType`: A “calendar” interval type in days and milliseconds.
* `IntervalMonthDayNanoType`: A “calendar” interval type in months, days, and nanoseconds.
* `IntervalYearMonthType`: A “calendar” interval type in months.
* `Time32MillisecondType`: A 32-bit time type representing the elapsed time since midnight in milliseconds.
* `Time32SecondType`: A 32-bit time type representing the elapsed time since midnight in seconds.
* `Time64MicrosecondType`: A 64-bit time type representing the elapsed time since midnight in microseconds.
* `Time64NanosecondType`: A 64-bit time type representing the elapsed time since midnight in nanoseconds.
* `TimestampMicrosecondType`: A timestamp microsecond type with an optional timezone.
* `TimestampMillisecondType`: A timestamp millisecond type with an optional timezone.
* `TimestampNanosecondType`: A timestamp nanosecond type with an optional timezone.
* `TimestampSecondType`: A timestamp second type with an optional timezone.
* `UInt8Type`: An unsigned 8-bit integer type.
* `UInt16Type`: An unsigned 16-bit integer type.
* `UInt32Type`: An unsigned 32-bit integer type.
* `UInt64Type`: An unsigned 64-bit integer type.
Traits
---
* `ArrowDictionaryKeyType`: A subtype of primitive type that represents legal dictionary keys. See https://arrow.apache.org/docs/format/Columnar.html
* `ArrowPrimitiveType`: Trait bridging the dynamic-typed nature of Arrow (via `DataType`) with the static-typed nature of Rust types (`ArrowNativeType`) for all types that implement `ArrowNativeType`.
* `ArrowTemporalType`: A subtype of primitive type that represents temporal values.
* `ArrowTimestampType`: A timestamp type that allows creating array builders that take a timestamp.
* `ByteArrayType`: A trait over the variable-size byte array types
* `DecimalType`: A trait over the decimal types, used by `PrimitiveArray` to provide a generic implementation across the various decimal types
* `RunEndIndexType`: A subtype of primitive type that is used as the run-ends index in `RunArray`. See https://arrow.apache.org/docs/format/Columnar.html
Functions
---
* `validate_decimal_precision_and_scale`: Validate that `precision` and `scale` are valid for `T`
Type Aliases
---
* `BinaryType`: An Arrow binary array with `i32` offsets
* `LargeBinaryType`: An Arrow binary array with `i64` offsets
* `LargeUtf8Type`: An Arrow utf8 array with `i64` offsets
* `Utf8Type`: An Arrow utf8 array with `i32` offsets
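An illustrative hedged sketch of the module's single free function; the exact signature and the documented precision range of 1 to 38 for `Decimal128` are assumptions to check against your version:

```
use arrow_array::types::{validate_decimal_precision_and_scale, Decimal128Type};

fn main() {
    // Decimal128 supports precision 1..=38; 39 should be rejected
    assert!(validate_decimal_precision_and_scale::<Decimal128Type>(38, 10).is_ok());
    assert!(validate_decimal_precision_and_scale::<Decimal128Type>(39, 10).is_err());
}
```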
Macro arrow_array::downcast_dictionary_array
===
```
macro_rules! downcast_dictionary_array {
    ($values:ident => $e:expr, $($p:pat => $fallback:expr $(,)*)*) => { ... };
    ($values:ident => $e:block $($p:pat => $fallback:expr $(,)*)*) => { ... };
}
```
Downcast an `Array` to a `DictionaryArray` based on its `DataType`; accepts a number of subsequent patterns to match the data type
```
fn print_strings(array: &dyn Array) {
    downcast_dictionary_array!(
        array => match array.values().data_type() {
            DataType::Utf8 => {
                for v in array.downcast_dict::<StringArray>().unwrap() {
                    println!("{:?}", v);
                }
            }
            t => println!("Unsupported dictionary value type {}", t),
        },
        DataType::Utf8 => {
            for v in as_string_array(array) {
                println!("{:?}", v);
            }
        }
        t => println!("Unsupported datatype {}", t)
    )
}
```
Struct arrow_array::array::DictionaryArray
===
```
pub struct DictionaryArray<K: ArrowDictionaryKeyType> { /* private fields */ }
```
An array of dictionary encoded values
This is mostly used to represent strings or a limited set of primitive types as integers, for example when doing NLP analysis or representing chromosomes by name.
A `DictionaryArray` is represented using a `keys` array and a `values` array, which may have different lengths. The `keys` array stores indexes into the `values` array, which holds the corresponding logical value, as shown here:
```
 ┌─────────┐    ┌─────────┐       ┌─────────────────┐
 │    A    │    │    0    │       │        A        │  values[keys[0]]
 ├─────────┤    ├─────────┤       ├─────────────────┤
 │    D    │    │    2    │       │        B        │  values[keys[1]]
 ├─────────┤    ├─────────┤       ├─────────────────┤
 │    B    │    │    2    │       │        B        │  values[keys[2]]
 └─────────┘    ├─────────┤       ├─────────────────┤
                │    1    │       │        D        │  values[keys[3]]
   values       ├─────────┤       ├─────────────────┤
                │    1    │       │        D        │  values[keys[4]]
                ├─────────┤       ├─────────────────┤
                │    0    │       │        A        │  values[keys[5]]
                └─────────┘       └─────────────────┘
                   keys
      DictionaryArray               Logical array
        length = 6                     Contents
```
Example: From Nullable Data
---
```
let test = vec!["a", "a", "b", "c"];
let array: DictionaryArray<Int8Type> = test
    .iter()
    .map(|&x| if x == "b" { None } else { Some(x) })
    .collect();
assert_eq!(array.keys(), &Int8Array::from(vec![Some(0), Some(0), None, Some(1)]));
```
Example: From Non-Nullable Data
---
```
let test = vec!["a", "a", "b", "c"];
let array: DictionaryArray<Int8Type> = test.into_iter().collect();
assert_eq!(array.keys(), &Int8Array::from(vec![0, 0, 1, 2]));
```
Example: From Existing Arrays
---
```
// You can form your own DictionaryArray by providing the
// values (dictionary) and keys (indexes into the dictionary):
let values = StringArray::from_iter_values(["a", "b", "c"]);
let keys = Int8Array::from_iter_values([0, 0, 1, 2]);
let array = DictionaryArray::<Int8Type>::try_new(keys, Arc::new(values)).unwrap();
let expected: DictionaryArray<Int8Type> = vec!["a", "a", "b", "c"].into_iter().collect();
assert_eq!(&array, &expected);
```
Example: Using Builder
---
```
let mut builder = StringDictionaryBuilder::<Int32Type>::new();
builder.append_value("a");
builder.append_null();
builder.append_value("a");
builder.append_value("b");
let array = builder.finish();

let values: Vec<_> = array.downcast_dict::<StringArray>().unwrap().into_iter().collect();
assert_eq!(&values, &[Some("a"), None, Some("a"), Some("b")]);
```
Implementations
---
### impl<K: ArrowDictionaryKeyType> DictionaryArray<K>
#### pub fn new(keys: PrimitiveArray<K>, values: ArrayRef) -> Self
Create a new `DictionaryArray` from the specified keys (indexes into the dictionary) and values (dictionary) arrays.
##### Panics
Panics if `Self::try_new` returns an error
#### pub fn try_new(keys: PrimitiveArray<K>, values: ArrayRef) -> Result<Self, ArrowError>
Attempt to create a new `DictionaryArray` from the specified keys (indexes into the dictionary) and values (dictionary) arrays.
##### Errors
Returns an error if any `keys[i] >= values.len() || keys[i] < 0`
#### pub unsafe fn new_unchecked(keys: PrimitiveArray<K>, values: ArrayRef) -> Self
Create a new `DictionaryArray` without performing validation
##### Safety
Safe provided `Self::try_new` would not return an error
#### pub fn into_parts(self) -> (PrimitiveArray<K>, ArrayRef)
Deconstruct this array into its constituent parts
#### pub fn keys(&self) -> &PrimitiveArray<K>
Return an array view of the keys of this dictionary as a `PrimitiveArray`.
#### pub fn lookup_key(&self, value: &str) -> Option<K::Native>
If `value` is present in `values` (aka the dictionary), returns the corresponding key (index into the `values` array). Otherwise returns `None`. Panics if `values` is not a `StringArray`.
#### pub fn values(&self) -> &ArrayRef
Returns a reference to the dictionary values array
#### pub fn value_type(&self) -> DataType
Returns a clone of the value type of this list.
#### pub fn len(&self) -> usize
The length of the dictionary is the length of the keys array.
#### pub fn is_empty(&self) -> bool
Whether this dictionary is empty
#### pub fn is_ordered(&self) -> bool
Currently exists for compatibility purposes with Arrow IPC.
#### pub fn keys_iter(&self) -> impl Iterator<Item = Option<usize>> + '_
Return an iterator over the keys (indexes into the dictionary)
#### pub fn key(&self, i: usize) -> Option<usize>
Return the value of `keys` (the dictionary key) at index `i`, cast to `usize`; `None` if the value at `i` is `NULL`.
#### pub fn slice(&self, offset: usize, length: usize) -> Self
Returns a zero-copy slice of this array with the indicated offset and length.
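A short hedged sketch (not from the upstream docs) exercising the key-inspection methods documented above:

```
use arrow_array::{types::Int8Type, DictionaryArray};

fn main() {
    let dict: DictionaryArray<Int8Type> = vec!["a", "a", "b"].into_iter().collect();

    // lookup_key searches the dictionary values (a StringArray here)
    assert_eq!(dict.lookup_key("b"), Some(1));
    assert_eq!(dict.lookup_key("z"), None);

    // keys_iter yields each key as Option<usize>
    assert_eq!(dict.keys_iter().collect::<Vec<_>>(), vec![Some(0), Some(0), Some(1)]);
}
```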
#### pub fn downcast_dict<V: 'static>(&self) -> Option<TypedDictionaryArray<'_, K, V>>
Downcast this dictionary to a `TypedDictionaryArray`
```
use arrow_array::{Array, ArrayAccessor, DictionaryArray, StringArray, types::Int32Type};

let orig = [Some("a"), Some("b"), None];
let dictionary = DictionaryArray::<Int32Type>::from_iter(orig);
let typed = dictionary.downcast_dict::<StringArray>().unwrap();
assert_eq!(typed.value(0), "a");
assert_eq!(typed.value(1), "b");
assert!(typed.is_null(2));
```
#### pub fn with_values(&self, values: ArrayRef) -> Self
Returns a new dictionary with the same keys as the current instance but with a different set of dictionary values
This can be used to perform an operation on the values of a dictionary
##### Panics
Panics if `values` has a length less than the current values
```
// Construct a Dict(Int32, Int8)
let mut builder = PrimitiveDictionaryBuilder::<Int32Type, Int8Type>::with_capacity(2, 200);
for i in 0..100 {
    builder.append(i % 2).unwrap();
}
let dictionary = builder.finish();

// Perform a widening cast of dictionary values
let typed_dictionary = dictionary.downcast_dict::<Int8Array>().unwrap();
let values: Int64Array = typed_dictionary.values().unary(|x| x as i64);

// Create a Dict(Int32, Int64)
let new = dictionary.with_values(Arc::new(values));

// Verify values are as expected
let new_typed = new.downcast_dict::<Int64Array>().unwrap();
for i in 0..100 {
    assert_eq!(new_typed.value(i), (i % 2) as i64)
}
```
#### pub fn into_primitive_dict_builder<V>(self) -> Result<PrimitiveDictionaryBuilder<K, V>, Self> where V: ArrowPrimitiveType,
Returns a `PrimitiveDictionaryBuilder` of this dictionary array, for mutating its keys and values if the underlying data buffer is not shared by others.
#### pub fn unary_mut<F, V>(self, op: F) -> Result<DictionaryArray<K>, DictionaryArray<K>> where V: ArrowPrimitiveType, F: Fn(V::Native) -> V::Native,
Applies a unary and infallible function to a mutable dictionary array. A mutable dictionary array means that the buffers are not shared with other arrays. As a result, this mutates the buffers directly without allocating new buffers.
##### Implementation
This will apply the function to all dictionary values, including those in null slots. This implies that the operation must be infallible for any value of the corresponding type, or this function may panic.
##### Example
```
let values = Int32Array::from(vec![Some(10), Some(20), None]);
let keys = Int8Array::from_iter_values([0, 0, 1, 2]);
let dictionary = DictionaryArray::<Int8Type>::try_new(keys, Arc::new(values)).unwrap();
let c = dictionary.unary_mut::<_, Int32Type>(|x| x + 1).unwrap();
let typed = c.downcast_dict::<Int32Array>().unwrap();
assert_eq!(typed.value(0), 11);
assert_eq!(typed.value(1), 11);
assert_eq!(typed.value(2), 21);
```
#### pub fn occupancy(&self) -> BooleanBuffer
Computes an occupancy mask for this dictionary's values. For each value in `Self::values`, the corresponding bit will be set in the returned mask if it is referenced by a key in this `DictionaryArray`.
Trait Implementations
---
### impl<K: ArrowDictionaryKeyType> AnyDictionaryArray for DictionaryArray<K>
#### fn keys(&self) -> &dyn Array
Returns the primitive keys of this dictionary as an `Array`
#### fn values(&self) -> &ArrayRef
Returns the values of this dictionary
#### fn normalized_keys(&self) -> Vec<usize>
Returns the keys of this dictionary as `usize`
#### fn with_values(&self, values: ArrayRef) -> ArrayRef
Create a new `DictionaryArray` replacing `values` with the new values
### impl<K: ArrowDictionaryKeyType> Array for DictionaryArray<K>
#### fn as_any(&self) -> &dyn Any
Returns the array as `Any` so that it can be downcasted to a specific implementation.
Returns the underlying data of this array#### fn into_data(self) -> ArrayData Returns the underlying data of this array Returns a reference to the `DataType` of this array. Returns a zero-copy slice of this array with the indicated offset and length. Returns the length (i.e., number of elements) of this array. Returns whether this array is empty. Returns the offset into the underlying data used by this array(-slice). Note that the underlying data can be shared by many arrays. This defaults to `0`. Returns `false` if the array is guaranteed to not contain any logical nulls Returns the total number of bytes of memory pointed to by this array. The buffers store bytes in the Arrow memory format, and include the data as well as the validity map.#### fn get_array_memory_size(&self) -> usize Returns the total number of bytes of memory occupied physically by this array. This value will always be greater than returned by `get_buffer_memory_size()` and includes the overhead of the data structures that contain the pointers to the various buffers.#### fn is_null(&self, index: usize) -> bool Returns whether the element at `index` is null. When using this function on a slice, the index is relative to the slice. Returns whether the element at `index` is not null. When using this function on a slice, the index is relative to the slice. Returns the total number of physical null values in this array. Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Formats the value using the given formatter. #### fn from(data: ArrayData) -> Self Converts to this type from the input type.### impl<T: ArrowDictionaryKeyType> From<DictionaryArray<T>> for ArrayData #### fn from(array: DictionaryArray<T>) -> Self Converts to this type from the input type.### impl<'a, T: ArrowDictionaryKeyType> FromIterator<&'a str> for DictionaryArray<TConstructs a `DictionaryArray` from an iterator of strings. #### Example: ``` use arrow_array::{DictionaryArray, PrimitiveArray, StringArray, types::Int8Type}; let test = vec!["a", "a", "b", "c"]; let array: DictionaryArray<Int8Type> = test.into_iter().collect(); assert_eq!( "DictionaryArray {keys: PrimitiveArray<Int8>\n[\n 0,\n 0,\n 1,\n 2,\n] values: StringArray\n[\n \"a\",\n \"b\",\n \"c\",\n]}\n", format!("{:?}", array) ); ``` #### fn from_iter<I: IntoIterator<Item = &'a str>>(iter: I) -> Self Creates a value from an iterator. #### Example: ``` use arrow_array::{DictionaryArray, PrimitiveArray, StringArray, types::Int8Type}; let test = vec!["a", "a", "b", "c"]; let array: DictionaryArray<Int8Type> = test .iter() .map(|&x| if x == "b" { None } else { Some(x) }) .collect(); assert_eq!( "DictionaryArray {keys: PrimitiveArray<Int8>\n[\n 0,\n 0,\n null,\n 1,\n] values: StringArray\n[\n \"a\",\n \"c\",\n]}\n", format!("{:?}", array) ); ``` #### fn from_iter<I: IntoIterator<Item = Option<&'a str>>>(iter: I) -> Self Creates a value from an iterator. This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
The default implementation is almost always sufficient, and should not be overridden without very good reason.Auto Trait Implementations --- ### impl<K> !RefUnwindSafe for DictionaryArray<K### impl<K> Send for DictionaryArray<K### impl<K> Sync for DictionaryArray<K### impl<K> Unpin for DictionaryArray<K>where <K as ArrowPrimitiveType>::Native: Unpin, ### impl<K> !UnwindSafe for DictionaryArray<KBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. T: Array, #### fn get(&self) -> (&dyn Array, bool) Returns the value for this `Datum` and a boolean indicating if the value is scalar### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Macro arrow_array::downcast_integer === ``` macro_rules! downcast_integer { ($($data_type:expr),+ => ($m:path $(, $args:tt)*), $($p:pat => $fallback:expr $(,)*)*) => { ... }; } ``` Given one or more expressions evaluating to an integer `DataType` invokes the provided macro `m` with the corresponding integer `ArrowPrimitiveType`, followed by any additional arguments ``` macro_rules! dictionary_key_size_helper { ($t:ty, $o:ty) => { std::mem::size_of::<<$t as ArrowPrimitiveType>::Native>() as $o }; } fn dictionary_key_size(t: &DataType) -> u8 { match t { DataType::Dictionary(k, _) => downcast_integer! { k.as_ref() => (dictionary_key_size_helper, u8), _ => unreachable!(), }, _ => u8::MAX, } } assert_eq!(dictionary_key_size(&DataType::Dictionary(Box::new(DataType::Int32), Box::new(DataType::Utf8))), 4); assert_eq!(dictionary_key_size(&DataType::Dictionary(Box::new(DataType::Int64), Box::new(DataType::Utf8))), 8); assert_eq!(dictionary_key_size(&DataType::Dictionary(Box::new(DataType::UInt16), Box::new(DataType::Utf8))), 2); ``` Trait arrow_array::types::ArrowPrimitiveType === ``` pub trait ArrowPrimitiveType: PrimitiveTypeSealed + 'static { type Native: ArrowNativeTypeOp; const DATA_TYPE: DataType; // Provided methods fn get_byte_width() -> usize { ... } fn default_value() -> Self::Native { ... } } ``` Trait bridging the dynamic-typed nature of Arrow (via `DataType`) with the static-typed nature of rust types (`ArrowNativeType`) for all types that implement `ArrowNativeType`. Required Associated Types --- #### type Native: ArrowNativeTypeOp Corresponding Rust native type for the primitive type. Required Associated Constants --- #### const DATA_TYPE: DataType the corresponding Arrow data type of this primitive type. 
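As a brief illustrative sketch (not part of the upstream docs) of how the two required items bridge the static and dynamic type worlds, `Native` giving the Rust representation and `DATA_TYPE` the dynamic Arrow type:

```
use arrow_array::types::{ArrowPrimitiveType, Int32Type, Int64Type};

// A generic function over any Arrow primitive type
fn describe<T: ArrowPrimitiveType>() -> String {
    format!("{:?} is {} bytes wide", T::DATA_TYPE, std::mem::size_of::<T::Native>())
}

fn main() {
    assert_eq!(describe::<Int32Type>(), "Int32 is 4 bytes wide");
    assert_eq!(describe::<Int64Type>(), "Int64 is 8 bytes wide");
}
```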
Provided Methods --- #### fn get_byte_width() -> usize Returns the byte width of this primitive type. #### fn default_value() -> Self::Native Returns a default value of this primitive type. This is useful for aggregate array ops like `sum()`, `mean()`. Implementors --- ### impl ArrowPrimitiveType for Date32Type #### type Native = i32 #### const DATA_TYPE: DataType = DataType::Date32 ### impl ArrowPrimitiveType for Date64Type #### type Native = i64 #### const DATA_TYPE: DataType = DataType::Date64 ### impl ArrowPrimitiveType for Decimal128Type #### type Native = i128 #### const DATA_TYPE: DataType = <Self as DecimalType>::DEFAULT_TYPE ### impl ArrowPrimitiveType for Decimal256Type #### type Native = i256 #### const DATA_TYPE: DataType = <Self as DecimalType>::DEFAULT_TYPE ### impl ArrowPrimitiveType for DurationMicrosecondType #### type Native = i64 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for DurationMillisecondType #### type Native = i64 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for DurationNanosecondType #### type Native = i64 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for DurationSecondType #### type Native = i64 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for Float16Type #### type Native = f16 #### const DATA_TYPE: DataType = DataType::Float16 ### impl ArrowPrimitiveType for Float32Type #### type Native = f32 #### const DATA_TYPE: DataType = DataType::Float32 ### impl ArrowPrimitiveType for Float64Type #### type Native = f64 #### const DATA_TYPE: DataType = DataType::Float64 ### impl ArrowPrimitiveType for Int8Type #### type Native = i8 #### const DATA_TYPE: DataType = DataType::Int8 ### impl ArrowPrimitiveType for Int16Type #### type Native = i16 #### const DATA_TYPE: DataType = DataType::Int16 ### impl ArrowPrimitiveType for Int32Type #### type Native = i32 #### const DATA_TYPE: DataType = DataType::Int32 ### impl ArrowPrimitiveType for Int64Type #### type Native = i64 #### const DATA_TYPE: DataType = DataType::Int64 ### impl ArrowPrimitiveType for IntervalDayTimeType #### type Native = i64 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for IntervalMonthDayNanoType #### type Native = i128 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for IntervalYearMonthType #### type Native = i32 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for Time32MillisecondType #### type Native = i32 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for Time32SecondType #### type Native = i32 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for Time64MicrosecondType #### type Native = i64 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for Time64NanosecondType #### type Native = i64 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for TimestampMicrosecondType #### type Native = i64 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for TimestampMillisecondType #### type Native = i64 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for TimestampNanosecondType #### type Native = i64 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for TimestampSecondType #### type Native = i64 #### const DATA_TYPE: DataType = _ ### impl ArrowPrimitiveType for UInt8Type #### type Native = u8 #### const DATA_TYPE: DataType = DataType::UInt8 ### impl ArrowPrimitiveType for UInt16Type #### type Native = u16 #### const DATA_TYPE: DataType = DataType::UInt16 ### impl ArrowPrimitiveType for UInt32Type #### type 
Native = u32 #### const DATA_TYPE: DataType = DataType::UInt32 ### impl ArrowPrimitiveType for UInt64Type #### type Native = u64 #### const DATA_TYPE: DataType = DataType::UInt64 Macro arrow_array::downcast_primitive === ``` macro_rules! downcast_primitive { ($($data_type:expr),+ => ($m:path $(, $args:tt)*), $($p:pat => $fallback:expr $(,)*)*) => { ... }; } ``` Given one or more expressions evaluating to primitive `DataType` invokes the provided macro `m` with the corresponding `ArrowPrimitiveType`, followed by any additional arguments ``` macro_rules! primitive_size_helper { ($t:ty, $o:ty) => { std::mem::size_of::<<$t as ArrowPrimitiveType>::Native>() as $o }; } fn primitive_size(t: &DataType) -> u8 { downcast_primitive! { t => (primitive_size_helper, u8), _ => u8::MAX } } assert_eq!(primitive_size(&DataType::Int32), 4); assert_eq!(primitive_size(&DataType::Int64), 8); assert_eq!(primitive_size(&DataType::Float16), 2); assert_eq!(primitive_size(&DataType::Decimal128(38, 10)), 16); assert_eq!(primitive_size(&DataType::Decimal256(76, 20)), 32); ``` Macro arrow_array::downcast_primitive_array === ``` macro_rules! downcast_primitive_array { ($values:ident => $e:expr, $($p:pat => $fallback:expr $(,)*)*) => { ... }; (($($values:ident),+) => $e:expr, $($p:pat => $fallback:expr $(,)*)*) => { ... }; ($($values:ident),+ => $e:block $($p:pat => $fallback:expr $(,)*)*) => { ... }; (($($values:ident),+) => $e:block $($p:pat => $fallback:expr $(,)*)*) => { ... }; } ``` Downcast an `Array` to a `PrimitiveArray` based on its `DataType` accepts a number of subsequent patterns to match the data type ``` fn print_primitive(array: &dyn Array) { downcast_primitive_array!( array => { for v in array { println!("{:?}", v); } } DataType::Utf8 => { for v in as_string_array(array) { println!("{:?}", v); } } t => println!("Unsupported datatype {}", t) ) } ``` Macro arrow_array::downcast_run_array === ``` macro_rules! downcast_run_array { ($values:ident => $e:expr, $($p:pat => $fallback:expr $(,)*)*) => { ... }; ($values:ident => $e:block $($p:pat => $fallback:expr $(,)*)*) => { ... }; } ``` Downcast an `Array` to a `RunArray` based on its `DataType`, accepts a number of subsequent patterns to match the data type ``` fn print_strings(array: &dyn Array) { downcast_run_array!( array => match array.values().data_type() { DataType::Utf8 => { for v in array.downcast::<StringArray>().unwrap() { println!("{:?}", v); } } t => println!("Unsupported run array value type {}", t), }, DataType::Utf8 => { for v in as_string_array(array) { println!("{:?}", v); } } t => println!("Unsupported datatype {}", t) ) } ``` Struct arrow_array::array::RunArray === ``` pub struct RunArray<R: RunEndIndexType> { /* private fields */ } ``` An array of run-end encoded values This encoding is variation on run-length encoding (RLE) and is good for representing data containing same values repeated consecutively. `RunArray` contains `run_ends` array and `values` array of same length. The `run_ends` array stores the indexes at which the run ends. The `values` array stores the value of each run. 
The example below illustrates how a logical array is represented in a `RunArray`:
```
 ┌─────────┐   ┌─────────┐      ┌─────────────────┐
 │    A    │   │    2    │      │        A        │
 ├─────────┤   ├─────────┤      ├─────────────────┤
 │    D    │   │    3    │      │        A        │  run length of 'A' = run_ends[0] - 0 = 2
 ├─────────┤   ├─────────┤      ├─────────────────┤
 │    B    │   │    6    │      │        D        │  run length of 'D' = run_ends[1] - run_ends[0] = 1
 └─────────┘   └─────────┘      ├─────────────────┤
   values       run_ends        │        B        │
                                ├─────────────────┤
   RunArray                     │        B        │
   length = 3                   ├─────────────────┤
                                │        B        │  run length of 'B' = run_ends[2] - run_ends[1] = 3
                                └─────────────────┘
                                  Logical array
                                     Contents
```
Implementations
---
### impl<R: RunEndIndexType> RunArray<R>
#### pub fn logical_len(run_ends: &PrimitiveArray<R>) -> usize
Calculates the logical length of the array encoded by the given run_ends array.
#### pub fn try_new(run_ends: &PrimitiveArray<R>, values: &dyn Array) -> Result<Self, ArrowError>
Attempts to create a RunArray using the given run_ends (the indexes where each run ends) and values (the value of each run). Returns an error if the given data is not compatible with the RunEndEncoded specification.
#### pub fn run_ends(&self) -> &RunEndBuffer<R::Native>
Returns a reference to the `RunEndBuffer`
#### pub fn values(&self) -> &ArrayRef
Returns a reference to the values array
Note: any slicing of this `RunArray` is not applied to the returned array and must be handled separately
#### pub fn get_start_physical_index(&self) -> usize
Returns the physical index at which the array slice starts.
#### pub fn get_end_physical_index(&self) -> usize
Returns the physical index at which the array slice ends.
#### pub fn downcast<V: 'static>(&self) -> Option<TypedRunArray<'_, R, V>>
Downcast this `RunArray` to a `TypedRunArray`
```
use arrow_array::{Array, ArrayAccessor, RunArray, StringArray, types::Int32Type};

let orig = [Some("a"), Some("b"), None];
let run_array = RunArray::<Int32Type>::from_iter(orig);
let typed = run_array.downcast::<StringArray>().unwrap();
assert_eq!(typed.value(0), "a");
assert_eq!(typed.value(1), "b");
assert!(typed.values().is_null(2));
```
#### pub fn get_physical_index(&self, logical_index: usize) -> usize
Returns the index into the physical array for the given index into the logical array. This function adjusts the input logical index based on `ArrayData::offset` and performs a binary search on the run_ends array for the input index. The result is arbitrary if `logical_index >= self.len()`.
#### pub fn get_physical_indices<I>(&self, logical_indices: &[I]) -> Result<Vec<usize>, ArrowError> where I: ArrowNativeType,
Returns the physical indices of the input logical indices. Returns an error if any of the logical indices cannot be converted to a physical index. The logical indices are sorted and iterated along with the run_ends array to find the matching physical indices. This approach was chosen over finding the physical index of each logical index via binary search with `get_physical_index`; running benchmarks on both approaches showed that the approach used here scaled well for larger inputs. See https://github.com/apache/arrow-rs/pull/3622#issuecomment-1407753727 for more details.
#### pub fn slice(&self, offset: usize, length: usize) -> Self
Returns a zero-copy slice of this array with the indicated offset and length.
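A minimal hedged construction sketch for the methods above (run ends and values chosen arbitrarily):

```
use arrow_array::{types::Int32Type, Array, Int32Array, RunArray, StringArray};

fn main() {
    // Two runs: "a" repeated twice, then "b" three times => run_ends = [2, 5]
    let run_ends = Int32Array::from(vec![2, 5]);
    let values = StringArray::from(vec!["a", "b"]);
    let array = RunArray::<Int32Type>::try_new(&run_ends, &values).unwrap();

    assert_eq!(array.len(), 5);                 // logical length
    assert_eq!(array.get_physical_index(3), 1); // logical index 3 lies in run "b"
}
```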
Trait Implementations --- ### impl<T: RunEndIndexType> Array for RunArray<T#### fn as_any(&self) -> &dyn Any Returns the array as `Any` so that it can be downcasted to a specific implementation. Returns the underlying data of this array#### fn into_data(self) -> ArrayData Returns the underlying data of this array Returns a reference to the `DataType` of this array. Returns a zero-copy slice of this array with the indicated offset and length. Returns the length (i.e., number of elements) of this array. Returns whether this array is empty. Returns the offset into the underlying data used by this array(-slice). Note that the underlying data can be shared by many arrays. This defaults to `0`. Returns `false` if the array is guaranteed to not contain any logical nulls Returns the total number of bytes of memory pointed to by this array. The buffers store bytes in the Arrow memory format, and include the data as well as the validity map.#### fn get_array_memory_size(&self) -> usize Returns the total number of bytes of memory occupied physically by this array. This value will always be greater than returned by `get_buffer_memory_size()` and includes the overhead of the data structures that contain the pointers to the various buffers.#### fn is_null(&self, index: usize) -> bool Returns whether the element at `index` is null. When using this function on a slice, the index is relative to the slice. Returns whether the element at `index` is not null. When using this function on a slice, the index is relative to the slice. Returns the total number of physical null values in this array. Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Formats the value using the given formatter. Converts to this type from the input type.### impl<R: RunEndIndexType> From<RunArray<R>> for ArrayData #### fn from(array: RunArray<R>) -> Self Converts to this type from the input type.### impl<'a, T: RunEndIndexType> FromIterator<&'a str> for RunArray<TConstructs a `RunArray` from an iterator of strings. #### Example: ``` use arrow_array::{RunArray, PrimitiveArray, StringArray, types::Int16Type}; let test = vec!["a", "a", "b", "c"]; let array: RunArray<Int16Type> = test.into_iter().collect(); assert_eq!( "RunArray {run_ends: [2, 3, 4], values: StringArray\n[\n \"a\",\n \"b\",\n \"c\",\n]}\n", format!("{:?}", array) ); ``` #### fn from_iter<I: IntoIterator<Item = &'a str>>(iter: I) -> Self Creates a value from an iterator. #### Example: ``` use arrow_array::{RunArray, PrimitiveArray, StringArray, types::Int16Type}; let test = vec!["a", "a", "b", "c", "c"]; let array: RunArray<Int16Type> = test .iter() .map(|&x| if x == "b" { None } else { Some(x) }) .collect(); assert_eq!( "RunArray {run_ends: [2, 3, 5], values: StringArray\n[\n \"a\",\n null,\n \"c\",\n]}\n", format!("{:?}", array) ); ``` #### fn from_iter<I: IntoIterator<Item = Option<&'a str>>>(iter: I) -> Self Creates a value from an iterator. Read moreAuto Trait Implementations --- ### impl<R> !RefUnwindSafe for RunArray<R### impl<R> Send for RunArray<R### impl<R> Sync for RunArray<R### impl<R> Unpin for RunArray<R>where <R as ArrowPrimitiveType>::Native: Unpin, ### impl<R> !UnwindSafe for RunArray<RBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. 
T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. T: Array, #### fn get(&self) -> (&dyn Array, bool) Returns the value for this `Datum` and a boolean indicating if the value is scalar### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Macro arrow_array::downcast_run_end_index === ``` macro_rules! downcast_run_end_index { ($($data_type:expr),+ => ($m:path $(, $args:tt)*), $($p:pat => $fallback:expr $(,)*)*) => { ... }; } ``` Given one or more expressions evaluating to an integer `DataType` invokes the provided macro `m` with the corresponding integer `RunEndIndexType`, followed by any additional arguments ``` macro_rules! run_end_size_helper { ($t:ty, $o:ty) => { std::mem::size_of::<<$t as ArrowPrimitiveType>::Native>() as $o }; } fn run_end_index_size(t: &DataType) -> u8 { match t { DataType::RunEndEncoded(k, _) => downcast_run_end_index! { k.data_type() => (run_end_size_helper, u8), _ => unreachable!(), }, _ => u8::MAX, } } assert_eq!(run_end_index_size(&DataType::RunEndEncoded(Arc::new(Field::new("a", DataType::Int32, false)), Arc::new(Field::new("b", DataType::Utf8, true)))), 4); assert_eq!(run_end_index_size(&DataType::RunEndEncoded(Arc::new(Field::new("a", DataType::Int64, false)), Arc::new(Field::new("b", DataType::Utf8, true)))), 8); assert_eq!(run_end_index_size(&DataType::RunEndEncoded(Arc::new(Field::new("a", DataType::Int16, false)), Arc::new(Field::new("b", DataType::Utf8, true)))), 2); ``` Trait arrow_array::types::RunEndIndexType === ``` pub trait RunEndIndexType: ArrowPrimitiveType { } ``` A subtype of primitive type that is used as run-ends index in `RunArray`. See https://arrow.apache.org/docs/format/Columnar.html Implementors --- ### impl RunEndIndexType for Int16Type ### impl RunEndIndexType for Int32Type ### impl RunEndIndexType for Int64Type Macro arrow_array::downcast_temporal === ``` macro_rules! downcast_temporal { ($($data_type:expr),+ => ($m:path $(, $args:tt)*), $($p:pat => $fallback:expr $(,)*)*) => { ... }; } ``` Given one or more expressions evaluating to primitive `DataType` invokes the provided macro `m` with the corresponding `ArrowPrimitiveType`, followed by any additional arguments ``` macro_rules! temporal_size_helper { ($t:ty, $o:ty) => { std::mem::size_of::<<$t as ArrowPrimitiveType>::Native>() as $o }; } fn temporal_size(t: &DataType) -> u8 { downcast_temporal! { t => (temporal_size_helper, u8), _ => u8::MAX } } assert_eq!(temporal_size(&DataType::Date32), 4); assert_eq!(temporal_size(&DataType::Date64), 8); ``` Macro arrow_array::downcast_temporal_array === ``` macro_rules! 
downcast_temporal_array { ($values:ident => $e:expr, $($p:pat => $fallback:expr $(,)*)*) => { ... }; (($($values:ident),+) => $e:expr, $($p:pat => $fallback:expr $(,)*)*) => { ... }; ($($values:ident),+ => $e:block $($p:pat => $fallback:expr $(,)*)*) => { ... }; (($($values:ident),+) => $e:block $($p:pat => $fallback:expr $(,)*)*) => { ... }; } ``` Downcast an `Array` to a temporal `PrimitiveArray` based on its `DataType` accepts a number of subsequent patterns to match the data type ``` fn print_temporal(array: &dyn Array) { downcast_temporal_array!( array => { for v in array { println!("{:?}", v); } } DataType::Utf8 => { for v in as_string_array(array) { println!("{:?}", v); } } t => println!("Unsupported datatype {}", t) ) } ``` Struct arrow_array::RecordBatchIterator === ``` pub struct RecordBatchIterator<I>where I: IntoIterator<Item = Result<RecordBatch, ArrowError>>,{ /* private fields */ } ``` Generic implementation of RecordBatchReader that wraps an iterator. Example --- ``` let a: ArrayRef = Arc::new(Int32Array::from(vec![1, 2])); let b: ArrayRef = Arc::new(StringArray::from(vec!["a", "b"])); let record_batch = RecordBatch::try_from_iter(vec![ ("a", a), ("b", b), ]).unwrap(); let batches: Vec<RecordBatch> = vec![record_batch.clone(), record_batch.clone()]; let mut reader = RecordBatchIterator::new(batches.into_iter().map(Ok), record_batch.schema()); assert_eq!(reader.schema(), record_batch.schema()); assert_eq!(reader.next().unwrap().unwrap(), record_batch); ``` Implementations --- ### impl<I> RecordBatchIterator<I>where I: IntoIterator<Item = Result<RecordBatch, ArrowError>>, #### pub fn new(iter: I, schema: SchemaRef) -> Self Create a new RecordBatchIterator. If `iter` is an infallible iterator, use `.map(Ok)`. Trait Implementations --- ### impl<I> Iterator for RecordBatchIterator<I>where I: IntoIterator<Item = Result<RecordBatch, ArrowError>>, #### type Item = <I as IntoIterator>::Item The type of the elements being iterated over.#### fn next(&mut self) -> Option<Self::ItemAdvances the iterator and returns the next value. Returns the bounds on the remaining length of the iterator. &mut self ) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where Self: Sized, 🔬This is a nightly-only experimental API. (`iter_next_chunk`)Advances the iterator and returns an array containing the next `N` values. Read more1.0.0 · source#### fn count(self) -> usizewhere Self: Sized, Consumes the iterator, counting the number of iterations and returning it. Read more1.0.0 · source#### fn last(self) -> Option<Self::Item>where Self: Sized, Consumes the iterator, returning the last element. Self: Sized, Creates an iterator starting at the same point, but stepping by the given amount at each iteration. Read more1.0.0 · source#### fn chain<U>(self, other: U) -> Chain<Self, <U as IntoIterator>::IntoIter>where Self: Sized, U: IntoIterator<Item = Self::Item>, Takes two iterators and creates a new iterator over both in sequence. Read more1.0.0 · source#### fn zip<U>(self, other: U) -> Zip<Self, <U as IntoIterator>::IntoIter>where Self: Sized, U: IntoIterator, ‘Zips up’ two iterators into a single iterator of pairs. Self: Sized, G: FnMut() -> Self::Item, 🔬This is a nightly-only experimental API. (`iter_intersperse`)Creates a new iterator which places an item generated by `separator` between adjacent items of the original iterator. 
(The remaining provided methods of `Iterator`, such as `map`, `filter`, `fold`, and `collect`, are inherited unchanged from the standard library; their documentation is omitted here.)
### impl<I> RecordBatchReader for RecordBatchIterator<I>where I: IntoIterator<Item = Result<RecordBatch, ArrowError>>,
#### fn schema(&self) -> SchemaRef
Returns the schema of this `RecordBatchReader`.
Auto Trait Implementations
---
### impl<I> RefUnwindSafe for RecordBatchIterator<I>where <I as IntoIterator>::IntoIter: RefUnwindSafe, ### impl<I> Send for RecordBatchIterator<I>where <I as IntoIterator>::IntoIter: Send, ### impl<I> Sync for RecordBatchIterator<I>where <I as IntoIterator>::IntoIter: Sync, ### impl<I> Unpin for RecordBatchIterator<I>where <I as IntoIterator>::IntoIter: Unpin, ### impl<I> UnwindSafe for RecordBatchIterator<I>where <I as IntoIterator>::IntoIter: UnwindSafe,
Blanket Implementations
---
(The standard `Any`, `Borrow`/`BorrowMut`, `From`/`Into`, `IntoIterator`, `TryFrom`/`TryInto`, and `Allocation` blanket implementations apply, as listed for `DictionaryArray` above.)
Trait arrow_array::RecordBatchReader
===
```
pub trait RecordBatchReader: Iterator<Item = Result<RecordBatch, ArrowError>> {
    // Required method
    fn schema(&self) -> SchemaRef;

    // Provided method
    fn next_batch(&mut self) -> Result<Option<RecordBatch>, ArrowError> { ... }
}
```
Trait for types that can read `RecordBatch`’s.
To create one from an iterator, see `RecordBatchIterator`.
Required Methods
---
#### fn schema(&self) -> SchemaRef
Returns the schema of this `RecordBatchReader`.
Implementations of this trait should guarantee that all `RecordBatch`’s returned by this reader have the same schema as returned from this method.
Provided Methods
---
#### fn next_batch(&mut self) -> Result<Option<RecordBatch>, ArrowError>
👎 Deprecated since 2.0.0: This method is deprecated in favour of `next` from the trait `Iterator`.
Reads the next `RecordBatch`.
Implementations on Foreign Types
---
### impl<R: RecordBatchReader + ?Sized> RecordBatchReader for Box<R>
#### fn schema(&self) -> SchemaRef
Implementors
---
### impl<I> RecordBatchReader for RecordBatchIterator<I>where I: IntoIterator<Item = Result<RecordBatch, ArrowError>>,
Struct arrow_array::RecordBatchOptions
===
```
#[non_exhaustive]
pub struct RecordBatchOptions {
    pub match_field_names: bool,
    pub row_count: Option<usize>,
}
```
Options that control the behaviour used when creating a `RecordBatch`.
Fields (Non-exhaustive)
---
Non-exhaustive structs could have additional fields added in future. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional `Struct { .. }` syntax; cannot be matched against without a wildcard `..`; and struct update syntax will not work.
`match_field_names: bool`: Match field names of structs and lists. If set to `true`, the names must match.
`row_count: Option<usize>`: Optional row count, useful for specifying a row count for a `RecordBatch` with no columns.
Implementations
---
### impl RecordBatchOptions
#### pub fn new() -> Self
Creates a new `RecordBatchOptions`
#### pub fn with_row_count(self, row_count: Option<usize>) -> Self
Sets the row_count of RecordBatchOptions and returns self
#### pub fn with_match_field_names(self, match_field_names: bool) -> Self
Sets the match_field_names of RecordBatchOptions and returns self
Trait Implementations
---
### impl Debug for RecordBatchOptions
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for RecordBatchOptions
#### fn default() -> Self
Returns the “default value” for a type.
Auto Trait Implementations
---
### impl RefUnwindSafe for RecordBatchOptions ### impl Send for RecordBatchOptions ### impl Sync for RecordBatchOptions ### impl Unpin for RecordBatchOptions ### impl UnwindSafe for RecordBatchOptions
Blanket Implementations
---
(Standard blanket implementations, as above.)
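To make the row-count option concrete, a small hedged sketch (it assumes `RecordBatch::try_new_with_options` and `Schema::empty`, which this page does not itself document):

```
use std::sync::Arc;
use arrow_array::{RecordBatch, RecordBatchOptions};
use arrow_schema::Schema;

fn main() {
    // A RecordBatch with no columns needs an explicit row count
    let options = RecordBatchOptions::new().with_row_count(Some(3));
    let batch = RecordBatch::try_new_with_options(
        Arc::new(Schema::empty()),
        vec![],
        &options,
    )
    .unwrap();
    assert_eq!(batch.num_rows(), 3);
}
```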
Struct arrow_array::Scalar
===
```
pub struct Scalar<T: Array>(/* private fields */);
```
A wrapper around a single value `Array` that implements `Datum` and indicates compute kernels should treat this array as a scalar value (a single value).
Using a `Scalar` is often much more efficient than creating an `Array` with the same (repeated) value.
See `Datum` for more information.
Example
---
```
// Create a (typed) scalar for Int32Array for the value 42
let scalar = Scalar::new(Int32Array::from(vec![42]));

// Create a scalar using PrimitiveArray::new_scalar
let scalar = Int32Array::new_scalar(42);

// Create a scalar from an ArrayRef (for dynamically typed Arrays)
let array: ArrayRef = get_array();
let scalar = Scalar::new(array);
```
Implementations
---
### impl<T: Array> Scalar<T>
#### pub fn new(array: T) -> Self
Create a new `Scalar` from an `Array`
##### Panics
Panics if `array.len() != 1`
Trait Implementations
---
### impl<T: Clone + Array> Clone for Scalar<T>
#### fn clone(&self) -> Scalar<T>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl<T: Array> Datum for Scalar<T>
#### fn get(&self) -> (&dyn Array, bool)
Returns the value for this `Datum` and a boolean indicating if the value is scalar
### impl<T: Debug + Array> Debug for Scalar<T>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl<T> RefUnwindSafe for Scalar<T>where T: RefUnwindSafe, ### impl<T> Send for Scalar<T> ### impl<T> Sync for Scalar<T> ### impl<T> Unpin for Scalar<T>where T: Unpin, ### impl<T> UnwindSafe for Scalar<T>where T: UnwindSafe,
Blanket Implementations
---
(Standard blanket implementations, as above.)
Trait arrow_array::Datum
===

```
pub trait Datum {
    // Required method
    fn get(&self) -> (&dyn Array, bool);
}
```

A possibly `Scalar` `Array`.

This allows optimised binary kernels where one or more arguments are constant:

```
fn eq_impl<T: ArrowPrimitiveType>(
    a: &PrimitiveArray<T>,
    a_scalar: bool,
    b: &PrimitiveArray<T>,
    b_scalar: bool,
) -> BooleanArray {
    let (array, scalar) = match (a_scalar, b_scalar) {
        (true, true) | (false, false) => {
            let len = a.len().min(b.len());
            let nulls = NullBuffer::union(a.nulls(), b.nulls());
            let buffer = BooleanBuffer::collect_bool(len, |idx| a.value(idx) == b.value(idx));
            return BooleanArray::new(buffer, nulls);
        }
        (true, false) => (b, (a.null_count() == 0).then(|| a.value(0))),
        (false, true) => (a, (b.null_count() == 0).then(|| b.value(0))),
    };
    match scalar {
        Some(v) => {
            let len = array.len();
            let nulls = array.nulls().cloned();
            let buffer = BooleanBuffer::collect_bool(len, |idx| array.value(idx) == v);
            BooleanArray::new(buffer, nulls)
        }
        None => BooleanArray::new_null(array.len()),
    }
}

pub fn eq(l: &dyn Datum, r: &dyn Datum) -> Result<BooleanArray, ArrowError> {
    let (l_array, l_scalar) = l.get();
    let (r_array, r_scalar) = r.get();
    downcast_primitive_array!(
        (l_array, r_array) => Ok(eq_impl(l_array, l_scalar, r_array, r_scalar)),
        (a, b) => Err(ArrowError::NotYetImplemented(format!("{a} == {b}"))),
    )
}

// Comparison of two arrays
let a = Int32Array::from(vec![1, 2, 3, 4, 5]);
let b = Int32Array::from(vec![1, 2, 4, 7, 3]);
let r = eq(&a, &b).unwrap();
let values: Vec<_> = r.values().iter().collect();
assert_eq!(values, &[true, true, false, false, false]);

// Comparison of an array and a scalar
let a = Int32Array::from(vec![1, 2, 3, 4, 5]);
let b = Int32Array::new_scalar(1);
let r = eq(&a, &b).unwrap();
let values: Vec<_> = r.values().iter().collect();
assert_eq!(values, &[true, false, false, false, false]);
```

Required Methods
---

#### fn get(&self) -> (&dyn Array, bool)

Returns the value for this `Datum` and a boolean indicating if the value is scalar.

Implementors
---

### impl Datum for &dyn Array
### impl Datum for dyn Array
### impl<T: Array> Datum for Scalar<T>
### impl<T: Array> Datum for T

Trait arrow_array::ArrowNativeTypeOp
===

```
pub trait ArrowNativeTypeOp: ArrowNativeType {
    const ZERO: Self;
    const ONE: Self;

    // Required methods
    fn add_checked(self, rhs: Self) -> Result<Self, ArrowError>;
    fn add_wrapping(self, rhs: Self) -> Self;
    fn sub_checked(self, rhs: Self) -> Result<Self, ArrowError>;
    fn sub_wrapping(self, rhs: Self) -> Self;
    fn mul_checked(self, rhs: Self) -> Result<Self, ArrowError>;
    fn mul_wrapping(self, rhs: Self) -> Self;
    fn div_checked(self, rhs: Self) -> Result<Self, ArrowError>;
    fn div_wrapping(self, rhs: Self) -> Self;
    fn mod_checked(self, rhs: Self) -> Result<Self, ArrowError>;
    fn mod_wrapping(self, rhs: Self) -> Self;
    fn neg_checked(self) -> Result<Self, ArrowError>;
    fn neg_wrapping(self) -> Self;
    fn pow_checked(self, exp: u32) -> Result<Self, ArrowError>;
    fn pow_wrapping(self, exp: u32) -> Self;
    fn is_zero(self) -> bool;
    fn compare(self, rhs: Self) -> Ordering;
    fn is_eq(self, rhs: Self) -> bool;

    // Provided methods
    fn is_ne(self, rhs: Self) -> bool { ... }
    fn is_lt(self, rhs: Self) -> bool { ... }
    fn is_le(self, rhs: Self) -> bool { ... }
    fn is_gt(self, rhs: Self) -> bool { ... }
    fn is_ge(self, rhs: Self) -> bool { ... }
}
```

Trait for `ArrowNativeType` that adds checked and unchecked arithmetic operations, and totally ordered comparison operations.

The APIs with the `_wrapping` suffix do not perform overflow-checking. For integer types they will wrap around the boundary of the type. For floating point types they will overflow to INF or -INF, preserving the expected sign value.

Note that `div_wrapping` and `mod_wrapping` will panic for integer types if `rhs` is zero, although this may be subject to change: <https://github.com/apache/arrow-rs/issues/2647>.

The APIs with the `_checked` suffix perform overflow-checking. For integer types these will return `Err` instead of wrapping. For floating point types they will overflow to INF or -INF, preserving the expected sign value.

Comparison of integer types follows the normal integer comparison rules; floating point values are compared as per IEEE 754’s totalOrder predicate, see `f32::total_cmp`.

Required Associated Constants
---

#### const ZERO: Self

The additive identity.

#### const ONE: Self

The multiplicative identity.

Required Methods
---

#### fn add_checked(self, rhs: Self) -> Result<Self, ArrowError>

Checked addition operation.

#### fn add_wrapping(self, rhs: Self) -> Self

Wrapping addition operation.

#### fn sub_checked(self, rhs: Self) -> Result<Self, ArrowError>

Checked subtraction operation.

#### fn sub_wrapping(self, rhs: Self) -> Self

Wrapping subtraction operation.

#### fn mul_checked(self, rhs: Self) -> Result<Self, ArrowError>

Checked multiplication operation.

#### fn mul_wrapping(self, rhs: Self) -> Self

Wrapping multiplication operation.

#### fn div_checked(self, rhs: Self) -> Result<Self, ArrowError>

Checked division operation.

#### fn div_wrapping(self, rhs: Self) -> Self

Wrapping division operation.

#### fn mod_checked(self, rhs: Self) -> Result<Self, ArrowError>

Checked remainder operation.

#### fn mod_wrapping(self, rhs: Self) -> Self

Wrapping remainder operation.

#### fn neg_checked(self) -> Result<Self, ArrowError>

Checked negation operation.

#### fn neg_wrapping(self) -> Self

Wrapping negation operation.

#### fn pow_checked(self, exp: u32) -> Result<Self, ArrowError>

Checked exponentiation operation.

#### fn pow_wrapping(self, exp: u32) -> Self

Wrapping exponentiation operation.

#### fn is_zero(self) -> bool

Returns true if zero, else false.

#### fn compare(self, rhs: Self) -> Ordering

Compare operation.

#### fn is_eq(self, rhs: Self) -> bool

Equality operation.

Provided Methods
---

#### fn is_ne(self, rhs: Self) -> bool

Not equal operation.

#### fn is_lt(self, rhs: Self) -> bool

Less than operation.

#### fn is_le(self, rhs: Self) -> bool

Less than or equal operation.

#### fn is_gt(self, rhs: Self) -> bool

Greater than operation.

#### fn is_ge(self, rhs: Self) -> bool

Greater than or equal operation.

Implementations on Foreign Types
---

`ArrowNativeTypeOp` is implemented for each of the native types `i8`, `i16`, `i32`, `i64`, `i128`, `i256`, `u8`, `u16`, `u32`, `u64`, `f16`, `f32` and `f64`. Every implementation provides the type’s own `ZERO` and `ONE` constants (e.g. `0i8`/`1i8`, `i256::ZERO`/`i256::ONE`, `f16::ZERO`/`f16::ONE`) together with the full set of checked, wrapping and comparison methods listed above.
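As a brief illustration (a sketch of my own, not from the crate docs) of the difference between the `_checked` and `_wrapping` families, and of the total ordering used for floats:

```
use arrow_array::ArrowNativeTypeOp;

fn main() {
    // Wrapping arithmetic silently wraps around the type's boundary...
    assert_eq!(i8::MAX.add_wrapping(1), i8::MIN);

    // ...while the checked variant reports the overflow as an ArrowError.
    assert!(i8::MAX.add_checked(1).is_err());

    // Floats compare with IEEE 754's totalOrder, so NaN is not
    // incomparable: it sorts above positive infinity.
    assert!(f64::NAN.is_gt(f64::INFINITY));
}
```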
Implementors
---

Trait arrow_array::ArrowNumericType
===

```
pub trait ArrowNumericType: ArrowPrimitiveType { }
```

A subtype of primitive type that represents numeric values.

Implementors
---

### impl ArrowNumericType for Date32Type
### impl ArrowNumericType for Date64Type
### impl ArrowNumericType for Decimal128Type
### impl ArrowNumericType for Decimal256Type
### impl ArrowNumericType for DurationMicrosecondType
### impl ArrowNumericType for DurationMillisecondType
### impl ArrowNumericType for DurationNanosecondType
### impl ArrowNumericType for DurationSecondType
### impl ArrowNumericType for Float16Type
### impl ArrowNumericType for Float32Type
### impl ArrowNumericType for Float64Type
### impl ArrowNumericType for Int8Type
### impl ArrowNumericType for Int16Type
### impl ArrowNumericType for Int32Type
### impl ArrowNumericType for Int64Type
### impl ArrowNumericType for IntervalDayTimeType
### impl ArrowNumericType for IntervalMonthDayNanoType
### impl ArrowNumericType for IntervalYearMonthType
### impl ArrowNumericType for Time32MillisecondType
### impl ArrowNumericType for Time32SecondType
### impl ArrowNumericType for Time64MicrosecondType
### impl ArrowNumericType for Time64NanosecondType
### impl ArrowNumericType for TimestampMicrosecondType
### impl ArrowNumericType for TimestampMillisecondType
### impl ArrowNumericType for TimestampNanosecondType
### impl ArrowNumericType for TimestampSecondType
### impl ArrowNumericType for UInt8Type
### impl ArrowNumericType for UInt16Type
### impl ArrowNumericType for UInt32Type
### impl ArrowNumericType for UInt64Type

Trait arrow_array::RecordBatchWriter
===

```
pub trait RecordBatchWriter {
    // Required methods
    fn write(&mut self, batch: &RecordBatch) -> Result<(), ArrowError>;
    fn close(self) -> Result<(), ArrowError>;
}
```

Trait for types that can write `RecordBatch`’s.

Required Methods
---

#### fn write(&mut self, batch: &RecordBatch) -> Result<(), ArrowError>

Write a single batch to the writer.

#### fn close(self) -> Result<(), ArrowError>

Write footer or termination data, then mark the writer as done.

Implementors
---
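To make the contract concrete, here is a minimal sketch (an illustrative toy, not an implementor from the crate) of a `RecordBatchWriter` that merely counts rows; a real implementor would persist each batch in `write` and flush any footer in `close`:

```
use std::sync::Arc;

use arrow_array::{ArrayRef, Int32Array, RecordBatch, RecordBatchWriter};
use arrow_schema::ArrowError;

/// A toy writer that just counts the rows it receives.
struct RowCounter {
    rows: usize,
}

impl RecordBatchWriter for RowCounter {
    fn write(&mut self, batch: &RecordBatch) -> Result<(), ArrowError> {
        self.rows += batch.num_rows();
        Ok(())
    }

    // `close` takes `self` by value: the writer cannot be used afterwards.
    fn close(self) -> Result<(), ArrowError> {
        println!("wrote {} rows in total", self.rows);
        Ok(())
    }
}

fn main() -> Result<(), ArrowError> {
    let col: ArrayRef = Arc::new(Int32Array::from(vec![1, 2, 3]));
    let batch = RecordBatch::try_from_iter([("a", col)])?;

    let mut writer = RowCounter { rows: 0 };
    writer.write(&batch)?;
    writer.close() // prints "wrote 3 rows in total"
}
```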
Qumin 1.1.0 documentation

Qumin: Quantitative modelling of inflection
===

Qumin (QUantitative Modelling of INflection) is a collection of scripts for the computational modelling of the inflectional morphology of languages. It was developed by me ([<NAME>](xachab.github.io)) for my PhD, which was supervised by [<NAME>](http://www.llf.cnrs.fr/fr/Gens/Bonami).

**The documentation has moved to ReadTheDocs**, at: <https://qumin.readthedocs.io/>

For more detail, you can refer to my dissertation (in French): [<NAME>. Classifications flexionnelles. Étude quantitative des structures de paradigmes. Linguistique. Université Sorbonne Paris Cité - Université Paris Diderot (Paris 7), 2018. Français.](https://tel.archives-ouvertes.fr/tel-01840448)

Quick Start
---

### Install

First, open the terminal and navigate to the folder where you want the Qumin code. Clone the repository from github:

```
git clone https://github.com/XachaB/Qumin.git
```

Make sure to have all the python dependencies installed. The dependencies are listed in environment.yml. A simple solution is to use conda and create a new environment from the environment.yml file:

```
conda env create -f environment.yml
```

There is now a new conda environment named Qumin. It needs to be activated before using any Qumin script:

```
conda activate Qumin
```

### Data

The scripts expect full paradigm data in phonemic transcription, as well as a feature key for the transcription. To provide a data sample in the correct format, Qumin includes a subset of the French [Flexique lexicon](http://www.llf.cnrs.fr/fr/flexique-fr.php), distributed under a [Creative Commons Attribution-NonCommercial-ShareAlike license](http://creativecommons.org/licenses/by-nc-sa/3.0/). For Russian nouns, see the [Inflected lexicon of Russian Nouns in IPA notation](https://zenodo.org/record/3428591).

### Scripts

#### Patterns

Alternation patterns serve as a basis for all the other scripts. The algorithm to find the patterns was presented in: <NAME>. [Un algorithme universel pour l’abstraction automatique d’alternances morphophonologiques](https://halshs.archives-ouvertes.fr/hal-01615899), 24e Conférence sur le Traitement Automatique des Langues Naturelles (TALN), Jun 2017, Orléans, France.

**Computing automatically aligned patterns** for paradigm entropy or macroclass:

```
bin/$ python3 find_patterns.py <paradigm.csv> <segments.csv>
```

**Computing automatically aligned patterns** for lattices:

```
bin/$ python3 find_patterns.py -d -o <paradigm.csv> <segments.csv>
```

#### Microclasses

To visualize the microclasses and their similarities, you can use the new script microclass_heatmap.py:

**Computing a microclass heatmap**:

```
bin/$ python3 microclass_heatmap.py <paradigm.csv> <output_path>
```

**Computing a microclass heatmap, comparing with class labels**:

```
bin/$ python3 microclass_heatmap.py -l <labels.csv> -- <paradigm.csv> <output_path>
```

The labels file is a csv file: the first column gives lexeme names, and the second column provides inflection class labels (a sample is sketched below). This makes it possible to visually compare a manual classification with pattern-based similarity. This script relies heavily on [seaborn’s clustermap](https://seaborn.pydata.org/generated/seaborn.clustermap.html) function.
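For instance, a minimal labels file could look as follows (hypothetical lexemes and class labels, shown here only to illustrate the two-column layout):

```
peler,classe1
advenir,classe3
souscrire,classe3
```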
#### Paradigm entropy

This script was used in:

* Bonami, Olivier, and <NAME>. “[Joint predictiveness in inflectional paradigms](http://www.llf.cnrs.fr/fr/node/4789).” Word Structure 9, no. 2 (2016): 156-182.

Some improvements have been implemented since then.

**Computing entropies from one cell**

```
bin/$ python3 calc_paradigm_entropy.py -n 1 -- <patterns.csv> <paradigm.csv> <segments.csv>
```

**Computing entropies from two cells** (you can specify any number of predictors, e.g. -n 1 2 3 works too)

```
bin/$ python3 calc_paradigm_entropy.py -n 2 -- <patterns.csv> <paradigm.csv> <segments.csv>
```

**Adding a file with features to help prediction** (for example gender – the features will be added to the known information when predicting)

```
bin/$ python3 calc_paradigm_entropy.py -n 2 --features <features.csv> -- <patterns.csv> <paradigm.csv> <segments.csv>
```

#### Macroclass inference

Our work on automatic inference of macroclasses was published in: Beniamine, Sacha, <NAME>, and <NAME>. “[Inferring Inflection Classes with Description Length.](http://jlm.ipipan.waw.pl/index.php/JLM/article/view/184)” Journal of Language Modelling (2018).

**Inferring macroclasses**

```
bin/$ python3 find_macroclasses.py <patterns.csv> <segments.csv>
```

#### Lattices

This script was used in:

* <NAME>. (in press) “[One lexeme, many classes: inflection class systems as lattices](https://xachab.github.io/papers/Beniamine2019.pdf)”. In: One-to-Many Relations in Morphology, Syntax and Semantics, ed. by <NAME> and <NAME>. Berlin: Language Science Press.

**Inferring a lattice of inflection classes, with html output**

```
bin/$ python3 make_lattice.py --html <patterns.csv> <segments.csv>
```

### Documentation index

#### The morphological paradigms file

This file relates phonological forms to their lexemes and paradigm cells. As an example of valid data, Qumin is shipped with a paradigm table from the French inflectional lexicon [Flexique](http://www.llf.cnrs.fr/fr/flexique-fr.php).
Here is a sample of the first 10 columns for 10 randomly picked verbs from Flexique:

> | lexeme | variants | prs.1sg | prs.2sg | prs.3sg | prs.1pl | prs.2pl | prs.3pl | ipfv.1sg | ipfv.2sg | ipfv.3sg |
> | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
> | peler | peler | pɛl | pɛl | pɛl | pəlɔ̃ | pəle | pɛl | pəlE | pəlE | pəlE |
> | soudoyer | soudoyer | sudwa | sudwa | sudwa | sudwajɔ̃ | sudwaje | sudwa | sudwajE | sudwajE | sudwajE |
> | inféoder | inféoder | ɛ̃fEɔd | ɛ̃fEɔd | ɛ̃fEɔd | ɛ̃fEOdɔ̃ | ɛ̃fEOde | ɛ̃fEɔd | ɛ̃fEOdE | ɛ̃fEOdE | ɛ̃fEOdE |
> | débiller | débiller | dEbij | dEbij | dEbij | dEbijɔ̃ | dEbije | dEbij | dEbijE | dEbijE | dEbijE |
> | désigner | désigner | dEziɲ | dEziɲ | dEziɲ | dEziɲɔ̃ | dEziɲe | dEziɲ | dEziɲE | dEziɲE | dEziɲE |
> | crachoter | crachoter | kʁaʃɔt | kʁaʃɔt | kʁaʃɔt | kʁaʃOtɔ̃ | kʁaʃOte | kʁaʃɔt | kʁaʃOtE | kʁaʃOtE | kʁaʃOtE |
> | saouler | saouler:soûler | sul | sul | sul | sulɔ̃ | sule | sul | sulE | sulE | sulE |
> | caserner | caserner | kazɛʁn | kazɛʁn | kazɛʁn | kazɛʁnɔ̃ | kazɛʁne | kazɛʁn | kazɛʁnE | kazɛʁnE | kazɛʁnE |
> | parrainer | parrainer | paʁɛn | paʁɛn | paʁɛn | paʁEnɔ̃ | paʁEne | paʁɛn | paʁEnE | paʁEnE | paʁEnE |
> | souscrire | souscrire | suskʁi | suskʁi | suskʁi | suskʁivɔ̃ | suskʁive | suskʁiv | suskʁivE | suskʁivE | suskʁivE |

Paradigm files are written in [wide format](https://en.wikipedia.org/wiki/Wide_and_narrow_data):

* Each row represents a lexeme, and each column represents a cell.
* The first column indicates a unique identifier for each lexeme. It is usually convenient to use orthographic citation forms for this purpose (e.g. the infinitive for verbs).
* In Vlexique, there is a second column with orthographic variants for lexeme names, which is called “variants”. You do not need to add a “variants” column, and if it is there, it will be ignored.
* The very first row indicates the names of the cells as column headers. Column headers shouldn’t contain the character “#”.

While Qumin assumes that inflected forms are written in some phonemic notation (we suggest staying as close to the IPA as possible), you do not need to explicitly segment them into phonemes in the paradigms file.

The file itself is a `csv`, meaning that the values are written as plain text, in utf-8 encoding, separated by commas.
This format can be read by spreadsheet programs as well as programmatically:

```
$ head -n 3 "../Data/Vlexique/vlexique-20171031.csv"
```

```
lexeme,variants,prs.1sg,prs.2sg,prs.3sg,prs.1pl,prs.2pl,prs.3pl,ipfv.1sg,ipfv.2sg,ipfv.3sg,ipfv.1pl,ipfv.2pl,ipfv.3pl,fut.1sg,fut.2sg,fut.3sg,fut.1pl,fut.2pl,fut.3pl,cond.1sg,cond.2sg,cond.3sg,cond.1pl,cond.2pl,cond.3pl,sbjv.1sg,sbjv.2sg,sbjv.3sg,sbjv.1pl,sbjv.2pl,sbjv.3pl,pst.1sg,pst.2sg,pst.3sg,pst.1pl,pst.2pl,pst.3pl,pst.sbjv.1sg,pst.sbjv.2sg,pst.sbjv.3sg,pst.sbjv.1pl,pst.sbjv.2pl,pst.sbjv.3pl,imp.2sg,imp.1pl,imp.2pl,inf,prs.ptcp,pst.ptcp.m.sg,pst.ptcp.m.pl,pst.ptcp.f.sg,pst.ptcp.f.pl
accroire,accroire,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#,akʁwaʁ,#DEF#,#DEF#,#DEF#,#DEF#,#DEF#
advenir,advenir,#DEF#,#DEF#,advjɛ̃,#DEF#,#DEF#,advjɛn,#DEF#,#DEF#,advənE,#DEF#,#DEF#,advənE,#DEF#,#DEF#,advjɛ̃dʁa,#DEF#,#DEF#,advjɛ̃dʁɔ̃,#DEF#,#DEF#,advjɛ̃dʁE,#DEF#,#DEF#,advjɛ̃dʁE,#DEF#,#DEF#,advjɛn,#DEF#,#DEF#,advjɛn,#DEF#,#DEF#,advɛ̃,#DEF#,#DEF#,advɛ̃ʁ,#DEF#,#DEF#,advɛ̃,#DEF#,#DEF#,advɛ̃s,#DEF#,#DEF#,#DEF#,advəniʁ,advənɑ̃,advəny,advəny,advəny,advəny
```

##### Overabundance

Inflectional paradigms sometimes have overabundant forms, where the same lexeme and paradigm cell can be realized in various ways, as in “dreamed” vs. “dreamt” for the English past of “to dream”. Concurrent forms can be written in the same cell, separated by “;”. Only some scripts can make use of this information; the other scripts will use the first value only.

Here is an example for English verbs:

> | lexeme | ppart | pres3s | prespart | inf | pres1s | presothers | past13 | pastnot13 |
> | --- | --- | --- | --- | --- | --- | --- | --- | --- |
> | bind | baˑɪnd;baˑʊnd | baˑɪndz | baˑɪndɪŋ | baˑɪnd | baˑɪnd | baˑɪnd | baˑɪnd;baˑʊnd | baˑɪnd;baˑʊnd |
> | wind(air) | waˑʊnd;waˑɪndɪd | waˑɪndz | waˑɪndɪŋ | waˑɪnd | waˑɪnd | waˑɪnd | waˑʊnd;waˑɪndɪd | waˑʊnd;waˑɪndɪd |
> | weave | wəˑʊvn̩;wiːvd | wiːvz | wiːvɪŋ | wiːv | wiːv | wiːv | wəˑʊv;wiːvd | wəˑʊv;wiːvd |
> | slink | slʌŋk;slæŋk;slɪŋkt | slɪŋks | slɪŋkɪŋ | slɪŋk | slɪŋk | slɪŋk | slʌŋk;slæŋk;slɪŋkt | slʌŋk;slæŋk;slɪŋkt |
> | dream | driːmd | driːmz | driːmɪŋ | driːm | driːm | driːm | driːmd;drɛmt | driːmd;drɛmt |
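Programmatic reading is straightforward; the following is a small sketch (using pandas, with a hypothetical file name, and the `past13` column name taken from the English example above) that loads a wide-format paradigm table and splits overabundant cells on “;”:

```
import pandas as pd

# Hypothetical paradigm file in the wide format described above.
paradigms = pd.read_csv("paradigms.csv", index_col="lexeme", dtype=str)

# Overabundant cells hold several realizations separated by ";";
# splitting yields a list of concurrent forms per cell.
variants = paradigms["past13"].dropna().str.split(";")
print(variants.head())
```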
Here are some examples from French verbs:

> | lexeme | prs.1sg | prs.2sg | prs.3sg | prs.1pl | prs.2pl | prs.3pl | ipfv.1sg | ipfv.2sg |
> | --- | --- | --- | --- | --- | --- | --- | --- | --- |
> | accroire | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# |
> | advenir | #DEF# | #DEF# | advjɛ̃ | #DEF# | #DEF# | advjɛn | #DEF# | #DEF# |
> | ardre | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | aʁdE | aʁdE |
> | braire | #DEF# | #DEF# | bʁE | #DEF# | #DEF# | bʁE | #DEF# | #DEF# |
> | chaloir | #DEF# | #DEF# | ʃo | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# |
> | comparoir | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# |
> | discontinuer | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# |
> | douer | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# |
> | échoir | #DEF# | #DEF# | eʃwa | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# |
> | endêver | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# | #DEF# |

#### The phonological segments file

Qumin works from the assumption that your paradigms are written in phonemic notation. The phonological segments file provides a list of phonemes and their decomposition into distinctive features. This file is first used to segment the paradigms into sequences of phonemes (rather than sequences of characters). Then, the distinctive features are used to recognize phonological similarity and natural classes when creating and handling alternation patterns.

To create a new segments file, the best approach is usually to refer to an authoritative description and adapt it to the needs of the specific dataset. In the absence of such a description, I suggest making use of [Bruce Hayes’ spreadsheet](https://linguistics.ucla.edu/people/hayes/120a/index.htm#features) as a starting point (he writes `+`, `-` and `0` for our `1`, `0` and `-1`).

##### Format

The segments file is also written in wide format, with each row describing a phoneme. The first column gives phonemes as they are written in the paradigms file. Each column represents a distinctive feature. Here is an example with just 10 rows of the segments table for French verbs:

> | Seg. | sonant | syllabique | consonantique | continu | nasal | haut | bas | arrière | arrondi | antérieur | CORONAL | voisé | rel.ret. |
> | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
> | p | 0 | 0 | 1 | 0 | 0 | 0 | | 0 | | 1 | | 0 | 0 |
> | b | 0 | 0 | 1 | 0 | 0 | 0 | | 0 | | 1 | | 1 | 0 |
> | t | 0 | 0 | 1 | 0 | 0 | 0 | | 0 | | 1 | 1 | 0 | 0 |
> | s | 0 | 0 | 1 | 1 | 0 | 0 | | 0 | | 1 | 1 | 0 | 1 |
> | i | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | | | 1 | 1 |
> | y | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | | | 1 | 1 |
> | u | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | | | 1 | 1 |
> | o | 1 | 1 | 0 | 1 | 0 | 0 | | 1 | 1 | | | 1 | 1 |
> | a | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | | | 1 | 1 |
> | ɑ̃ | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | | | 1 | 1 |

Some conventions:

* The first column must be called `Seg.`.
* The phonological symbols in the `Seg.` column cannot be one of the reserved characters: `. ^ $ * + ? { } [ ] / | ( ) < > _  ⇌ , ;`.
* If the file contains a “value” column, it will be ignored. This is used to provide a human-readable description of segments, which can be useful when preparing the data.
* In order to provide short names for the features, as in [+nas] rather than [+nasal], you can add a second level of header, also beginning with `Seg.`, which gives abbreviated names:
> | Seg. | sonant | syllabique | consonantique | continu | nasal | haut | bas | arrière | arrondi | antérieur | CORONAL | voisé | rel.ret. |
> | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
> | Seg. | son | syl | cons | cont | nas | haut | bas | arr | rond | ant | COR | vois | rel.ret. |
> | p | 0 | 0 | 1 | 0 | 0 | 0 | | 0 | | 1 | | 0 | 0 |
> | b | 0 | 0 | 1 | 0 | 0 | 0 | | 0 | | 1 | | 1 | 0 |

The file is encoded in utf-8 and can be either a csv table (preferred) or a tab-separated table (tsv).

```
$ head -n 6 "../Data/Vlexique/frenchipa.csv"
```

```
Seg.,sonant,syllabique,consonantique,continu,nasal,haut,bas,arrière,arrondi,antérieur,CORONAL,voisé,rel.ret.
Seg.,son,syl,cons,cont,nas,haut,bas,arr,rond,ant,COR,vois,rel.ret.
p,0,0,1,0,0,0,,0,,1,,0,0
b,0,0,1,0,0,0,,0,,1,,1,0
t,0,0,1,0,0,0,,0,,1,1,0,0
d,0,0,1,0,0,0,,0,,1,1,1,0
```

##### Segmentation and aliases

Since the forms in the paradigms are not segmented into phonemes, the phonological segments file is used to segment them. It is possible to specify phonemes which are more than one character long, for example when using combining characters, or for diphthongs and affricates.

Be careful to use the same notation as in your paradigms. For example, you cannot use “a” + combining tilde in one file and the precomposed “ã” in the other, as the program would not recognize them as the same thing.

You should also make certain that there is no segmentation ambiguity. If you have sequences such as “ABC” which should be segmented “AB.C” in some contexts and “A.BC” in some other contexts, you need to change the notation in the paradigms file so that it is not ambiguous, for example by writing “A͡BC” in the first case and “AB͡C” in the second case. You would then have separate rows for “A”, “A͡B”, “C” and “B͡C” in the segments file.

Internally, the program will use arbitrary aliases which are 1 character long to replace longer phonemes; this substitution will be reversed in the output. While this usually works without your intervention, you can provide your own aliases if you want to preserve some readability in debug logs. This is done by adding a column “ALIAS” right after the first column, which holds 1-char aliases. This example shows a few rows from the segments file for Navajo:

> | Seg. | ALIAS | syllabic | htone | long | consonantal | sonorant | continuant | delayed release | … |
> | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
> | ɣ | | 0 | | 0 | 1 | 0 | 1 | 1 | … |
> | k | | 0 | | 0 | 1 | 0 | 0 | 0 | … |
> | k’ | ḱ | 0 | | 0 | 1 | 0 | 0 | 0 | … |
> | k͡x | K | 0 | | 0 | 1 | 0 | 0 | 1 | … |
> | t | | 0 | | 0 | 1 | 0 | 0 | 0 | … |
> | ť | | 0 | | 0 | 1 | 0 | 0 | 0 | … |
> | t͡ɬ | L | 0 | | 0 | 1 | 0 | 0 | 1 | … |
> | t͡ɬ’ | Ľ | 0 | | 0 | 1 | 0 | 0 | 1 | … |
> | t͡ɬʰ | Ḷ | 0 | | 0 | 1 | 0 | 0 | 1 | … |
> | ʦ | | 0 | | 0 | 1 | 0 | 0 | 1 | … |
> | ʦ’ | Ś | 0 | | 0 | 1 | 0 | 0 | 1 | … |
> | ʦʰ | Ṣ | 0 | | 0 | 1 | 0 | 0 | 1 | … |
> | ʧ | H | 0 | | 0 | 1 | 0 | 0 | 1 | … |
> | ʧ’ | Ḣ | 0 | | 0 | 1 | 0 | 0 | 1 | … |
> | ʧʰ | Ḥ | 0 | | 0 | 1 | 0 | 0 | 1 | … |
> | t͡x | T | 0 | | 0 | 1 | 0 | 0 | 1 | … |
> | … | … | … | … | … | … | … | … | … | … |

If you have many multi-character phonemes, you may get the following error:

```
ValueError: ('I can not guess a good one-char alias for ã, please use an ALIAS column to provide one.', 'occurred at index 41')
```

The solution is to add an alias for this character, and maybe a few others. To find aliases which vaguely resemble the proper symbols, this [table of unicode characters organized by letter](https://www.unicode.org/charts/collation/index.html) is often useful.
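To make the segmentation step concrete, here is a minimal sketch of one way to do it (a greedy longest-match strategy; this illustrates the idea only and is not Qumin’s actual implementation, which works through the one-character aliases described above):

```
def segment(form, phonemes):
    """Greedily segment a form into known phonemes, longest match first."""
    inventory = sorted(phonemes, key=len, reverse=True)
    out = []
    while form:
        for p in inventory:
            if form.startswith(p):
                out.append(p)
                form = form[len(p):]
                break
        else:
            raise ValueError(f"Can not segment the remainder: {form!r}")
    return out

# "t͡ɬ" is declared as a single multi-character phoneme:
print(segment("at͡ɬa", ["a", "t", "t͡ɬ"]))  # ['a', 't͡ɬ', 'a']
```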
##### Shorthands

When writing phonological rules, linguists often use shorthands like “V” for the natural class of all vowels, and “C” for the natural class of all consonants. If you want, you can provide some extra rows in the table to define shorthand names for some natural classes. These names have to start and end with “#”. Here is an example for the French segments file, giving shorthands for C (consonants), V (vowels) and G (glides):

> | Seg. | sonant | syllabique | consonantique | continu | nasal | haut | bas | arrière | arrondi | antérieur | CORONAL | voisé | rel.ret. |
> | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
> | Seg. | son | syl | cons | cont | nas | haut | bas | arr | rond | ant | COR | vois | rel.ret. |
> | #C# | | 0 | 1 | | | | | | | | | | |
> | #V# | 1 | 1 | 0 | 1 | | | | | | | | 1 | 1 |
> | #G# | 1 | 0 | 0 | 1 | 0 | 1 | 0 | | | 0 | | 1 | 1 |

##### Values of distinctive features

Distinctive features are usually considered to be bivalent: they can be either positive ([+nasal]) or negative ([-nasal]). In the segments file, positive values are written with the number `1`, and negative values with the number `0`.

Some features do not apply at all to some phonemes; for example, consonants are neither [+round] nor [-round]. This can be written either by `-1`, or by leaving the cell empty. While the first is more explicit, leaving the cell empty makes the tables more readable at a glance. The same strategy is used for features which are privative, as for example [CORONAL]: there is no class of segments which are [-coronal], so we write either `1` or `-1` in the corresponding column, never `0`.

While `1`, `0` and `-1` (or nothing) are the values that make the most sense, any numeric values are technically allowed; for example [-back], [+back] and [++back] could be expressed by writing `0`, `1`, and `2` in the “back” column. I do not recommend doing this.

When writing a segments file, it is important to be careful about the naturality of natural classes, as Qumin will take them at face value. For example, using the same [±high] feature for both vowels and consonants will result in a natural class of all the [+high] segments, and one for all the [-high] segments. Sometimes, it is better to duplicate some columns to avoid generating unfounded classes.

###### Monovalent or bivalent features

[Frisch (1996)](https://www.cas.usf.edu/~frisch/publications.html) argues that monovalent features (using only `-1` and `1`) are to be preferred to bivalent features, as the latter implicitly generate natural classes for the complement features ([-coronal]), which is not always desirable. In Qumin, both monovalent and bivalent features are accepted. Internally, the program will expand all `1` and `0` values into + and - features.

As an example, take this table, which classifies the three vowels /a/, /i/ and /u/:

| Seg. | high | low | front | back | round | Non-round |
| --- | --- | --- | --- | --- | --- | --- |
| Seg. | high | low | front | back | round | Non-round |
| a | | 1 | | 1 | | 1 |
| i | 1 | | 1 | | | 1 |
| u | 1 | | | 1 | 1 | |

Internally, Qumin will construct the following table, which looks almost identical because we used monovalued features:

| Seg. | +high | +low | +front | +back | +round | +Non-round |
| --- | --- | --- | --- | --- | --- | --- |
| a | | x | | x | | x |
| i | x | | x | | | x |
| u | x | | | x | x | |
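The expansion itself is easy to emulate; here is a small sketch (my own illustration, not Qumin’s code) of how monovalent `1` values become membership in a `+feature` class, while a bivalent `0` would additionally create the corresponding `-feature` class:

```
import pandas as pd

# Monovalent features for /a/, /i/, /u/: 1 = the feature applies,
# an empty cell (here None) = it does not apply at all.
seg = pd.DataFrame(
    {"high": [None, 1, 1], "low": [1, None, None], "round": [None, None, 1]},
    index=["a", "i", "u"],
)

# Internal expansion: a `1` becomes membership in the "+feature" class;
# with bivalent features, a `0` would also fill a "-feature" column.
expanded = pd.DataFrame({f"+{feat}": seg[feat] == 1 for feat in seg.columns})
print(expanded)
```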
The expanded table then results in the following natural class hierarchy:

*(figure: natural class hierarchy of /a/, /i/ and /u/)*

To visualize natural class hierarchies declared by segment files, you can use [FeatureViz](https://gitlab.com/sbeniamine/featureviz).

The same thing can be achieved with fewer columns using binary features:

| Seg. | high | front | round |
| --- | --- | --- | --- |
| Seg. | high | front | round |
| a | 0 | 0 | 0 |
| i | 1 | 1 | 0 |
| u | 1 | 0 | 1 |

Internally, these will be expanded to:

| Seg. | +high | -high | +front | -front | +round | -round |
| --- | --- | --- | --- | --- | --- | --- |
| a | | x | | x | | x |
| i | x | | x | | | x |
| u | x | | | x | x | |

Which is the same thing as previously, with different names. The class hierarchy is also very similar:

*(figure: the corresponding natural class hierarchy)*

###### Warning, some of the segments aren’t actual leaves

The following error occurs when the table is well formed, but specifies a natural class hierarchy which is not usable by Qumin:

```
Exception: Warning, some of the segments aren't actual leaves :
    p is the same node as [p-kʷ]
    [p-kʷ] ([pĸ]) = [+cons -son -syll +lab -round -voice -cg -cont -strid -lat -del.rel -nas -long]
    kʷ (ĸ) = [+cons -son -syll +lab -round +dor +highC -lowC +back -tense -voice -cg -cont -strid -lat -del.rel -nas -long]
    k is the same node as [k-kʷ]
    [k-kʷ] ([kĸ]) = [+cons -son -syll +dor +highC -lowC +back -tense -voice -cg -cont -strid -lat -del.rel -nas -long]
    kʷ (ĸ) = [+cons -son -syll +lab -round +dor +highC -lowC +back -tense -voice -cg -cont -strid -lat -del.rel -nas -long]
```

What happened here is that the natural class [p-kʷ] has the exact same definition as just /p/. Similarly, the natural class [k-kʷ] has the same definition as /k/. The result is the following structure, in which /p/ and /k/ are superclasses of /kʷ/:

*(figure: hierarchy with /p/ and /k/ above /kʷ/)*

In this structure, it is impossible to distinguish the natural classes [p-kʷ] and [k-kʷ] from the respective phonemes /p/ and /k/. Instead, we want them to be one level lower. If we ignore the bottom node, this means that they should be leaves of the hierarchy. The solution is to ensure that both /p/ and /k/ have at least one feature divergent from /kʷ/. Usually, kʷ is marked as [+round], but in the above it is mistakenly written [-round]. Correcting this definition yields the following structure, and solves the error:

*(figure: corrected hierarchy, with /kʷ/ as a leaf)*
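The underlying check is easy to reproduce; here is a small sketch (my own illustration, not Qumin’s code) that flags any segment whose feature set is a subset of another segment’s, which is exactly the situation this error describes:

```
def non_leaves(segments):
    """Return pairs (a, b) where segment a's features are a subset of b's."""
    return [
        (a, b)
        for a in segments
        for b in segments
        if a != b and segments[a] <= segments[b]
    ]

segs = {
    "p": {"+cons", "+lab", "-voice"},
    "k": {"+cons", "+dor", "-voice"},
    "kʷ": {"+cons", "+lab", "+dor", "-voice"},  # [+round] mistakenly missing
}
print(non_leaves(segs))  # [('p', 'kʷ'), ('k', 'kʷ')]
```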
###### Neutralizations

While having a segment be higher than another in the hierarchy is forbidden, it is possible to declare two segments with the exact same features. This is useful if you want to neutralize some oppositions, and ignore some details in the data. For example, this set of French vowels displays height oppositions using the [±low] feature:

| Seg. | sonant | syllabique | consonantique | continu | nasal | haut | bas | arrière | arrondi | antérieur | coronal | voisé | rel.ret. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Seg. | son | syl | cons | cont | nas | haut | bas | arr | rond | ant | cor | vois | rel.ret. |
| e | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | -1 | -1 | 1 | 1 |
| ɛ | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | -1 | -1 | 1 | 1 |
| ø | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | -1 | -1 | 1 | 1 |
| œ | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | -1 | -1 | 1 | 1 |
| o | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | -1 | -1 | 1 | 1 |
| ɔ | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | -1 | -1 | 1 | 1 |

Leading to this complex hierarchy:

*(figure: hierarchy with the full height oppositions)*

Due to regional variations, the French Vlexique sometimes neutralizes these oppositions, and writes E, Ø and O to underspecify the value of the vowels. The solution is to neutralize the [±low] distinction entirely for these vowels, writing repeated rows for E, e, ɛ, etc.:

| Seg. | sonant | syllabique | consonantique | continu | nasal | haut | bas | arrière | arrondi | antérieur | coronal | voisé | rel.ret. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Seg. | son | syl | cons | cont | nas | haut | bas | arr | rond | ant | cor | vois | rel.ret. |
| E | 1 | 1 | 0 | 1 | 0 | 0 | -1 | 0 | 0 | -1 | -1 | 1 | 1 |
| e | 1 | 1 | 0 | 1 | 0 | 0 | -1 | 0 | 0 | -1 | -1 | 1 | 1 |
| ɛ | 1 | 1 | 0 | 1 | 0 | 0 | -1 | 0 | 0 | -1 | -1 | 1 | 1 |
| Ø | 1 | 1 | 0 | 1 | 0 | 0 | -1 | 0 | 1 | -1 | -1 | 1 | 1 |
| ø | 1 | 1 | 0 | 1 | 0 | 0 | -1 | 0 | 1 | -1 | -1 | 1 | 1 |
| œ | 1 | 1 | 0 | 1 | 0 | 0 | -1 | 0 | 1 | -1 | -1 | 1 | 1 |
| O | 1 | 1 | 0 | 1 | 0 | 0 | -1 | 1 | 1 | -1 | -1 | 1 | 1 |
| o | 1 | 1 | 0 | 1 | 0 | 0 | -1 | 1 | 1 | -1 | -1 | 1 | 1 |
| ɔ | 1 | 1 | 0 | 1 | 0 | 0 | -1 | 1 | 1 | -1 | -1 | 1 | 1 |

Internally, Qumin will replace all of these identical characters by a single unified one (the first in the file). The simplified structure becomes:

*(figure: simplified hierarchy after neutralization)*

###### Creating scales

Rather than using many-valued features, it is often preferable to use a few monovalent or bivalent features to create a scale. As an example, here is a possible (bad) implementation for tones, which uses a single feature “Tone”:

| Seg. | Tone |
| --- | --- |
| Seg. | Tone |
| ˥ | 3 |
| ˦ | 2 |
| ˧ | 1 |
| ˨ | 0 |

It results in this natural class hierarchy:

*(figure: hierarchy in which the tones share no classes)*

While such a file is allowed, it results in the tones having nothing in common. If some morpho-phonological alternation selects both high and mid tones, we will miss that generalization. To express a scale, a simple solution is to create one less feature than there are segments (here four tones lead to three scale features), then fill in the upper diagonal with `1` and the lower diagonal with `0` (or the opposite). For example:

| Seg. | scale1 | scale2 | scale3 |
| --- | --- | --- | --- |
| Seg. | scale1 | scale2 | scale3 |
| ˥ | 1 | 1 | 1 |
| ˦ | 0 | 1 | 1 |
| ˧ | 0 | 0 | 1 |
| ˨ | 0 | 0 | 0 |

It will result in the natural classes below:

*(figure: scale-shaped natural class hierarchy)*

Since this is not very readable, we can re-write the same thing in a more readable way using a combination of binary and monovalent features:

| Seg. | Top | High | Low | Bottom |
| --- | --- | --- | --- | --- |
| Seg. | Top | High | Low | Bottom |
| ˥ | 1 | 1 | | 0 |
| ˦ | 0 | 1 | | 0 |
| ˧ | 0 | | 1 | 0 |
| ˨ | 0 | | 1 | 1 |

Which leads to the same structure:

*(figure: the same scale-shaped hierarchy)*

When implementing tones, I recommend marking them all as [-segmental] to ensure that they share a common class, and writing all other features as [+segmental].

###### Diphthongs

Diphthongs are not usually decomposed using distinctive features, as they are complex sequences (see [this question on the Linguist List](https://linguistlist.org/ask-ling/message-details1.cfm?asklingid=200408211)).
However, if diphthongs alternate with simple vowels in your data, adding diphthongs to the list of phonological segments can allow Qumin to capture better generalizations. The strategy I have employed so far is the following:

* Write diphthongs in a non-ambiguous way in the data (either “aj” or “aˑi”, but not “ai” when the same sequence can sometimes be two vowels).
* Copy the features from the initial vowel.
* Add a monovalent feature [DIPHTHONG].
* Add monovalent features [DIPHTHONG_J], [DIPHTHONG_W], etc., as needed.

This is a small example for a few English diphthongs:

| Seg. | high | low | back | LABIAL | tense | diphtong j | diphtong ə | diphtong W | diphtong |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Seg. | high | low | back | LAB | tens | diph.j | diph.ə | diph.w | diph |
| a | 0 | 1 | 0 | | 1 | | | | 0 |
| aˑʊ | 0 | 1 | 1 | | 1 | | | 1 | 1 |
| aˑɪ | 0 | 1 | 1 | | 1 | 1 | | | 1 |
| ɪ | 1 | 0 | 0 | | 0 | | | | 0 |
| ɪˑə | 1 | 0 | 0 | | 0 | | 1 | | 1 |

Which leads to the following classes:

*(figure: natural classes for these diphthongs)*

###### Others

* Stress: I recommend marking it directly on vowels, and duplicating the vowel inventory to have both stressed and unstressed counterparts. A simple binary [±stress] feature is enough to distinguish them.
* Length: Similarly, I recommend marking length, when possible, on vowels, rather than duplicating them.

#### Usages

##### Usage of bin/find_patterns.py

Find pairwise alternation patterns from paradigms. This is a preliminary step necessary to obtain the patterns used as input in the three scripts below.

**Computing automatically aligned patterns** for paradigm entropy or macroclass:

```
bin/$ python3 find_patterns.py <paradigm.csv> <segments.csv>
```

**Computing automatically aligned patterns** for lattices:

```
bin/$ python3 find_patterns.py -d -o -c <paradigm.csv> <segments.csv>
```

The option -k allows one to choose the algorithm for inferring alternation patterns.

| Option | Description | Strategy |
| --- | --- | --- |
| endings | Affixes | Removes the longest common initial string for each row. |
| endingsPairs | Pairs of affixes | Endings, tabulated as pairs for all combinations of columns. |
| endingsDisc | Discontinuous endings | Removes the longest common substring, left aligned. |
| **….Alt** | **Alternations** | **Alternations have no contexts. These were used for comparing macroclass strategies on French and European Portuguese.** |
| globalAlt | Alternations | As endingsDisc, tabulated as pairs for all combinations of columns. |
| localAlt | Alternations | Inferred from local pairs of cells, left aligned. |
| **patterns…** | **Binary Patterns** | **All patterns have alternations and generalized contexts. Various alignment strategies are offered for comparison. An arbitrary number of changes is supported.** |
| patternsLevenshtein | Patterns | Aligned with simple edit distance. |
| patternsPhonsim | Patterns | Aligned with edit distances based on phonological similarity. |
| patternsSuffix | Patterns | Fixed left alignment, only interesting for suffixal languages. |
| patternsPrefix | Patterns | Fixed right alignment, only interesting for prefixal languages. |
| patternsBaseline | Patterns | Baseline alignment, following Albright & Hayes 2002: a single change, with a priority order Suffixation > Prefixation > Stem-internal alternation (ablaut/infixation). |

Most of these were implemented for comparison purposes. I recommend using the default patternsPhonsim in most cases.
To avoid relying on your phonological features file for alignment scores, use patternsLevenshtein. Only these two are full patterns, with generalization both in the context and in the alternation.

For lattices, we keep defective and overabundant entries. We do not usually keep them for other applications. The latest code for entropy can handle defective entries.

The file you should use as input for the scripts below has a name that ends in “_patterns”. The “_human_readable_patterns” file is nicer to review, but is only meant for human usage.

##### Usage of bin/calc_paradigm_entropy.py

Compute entropies of inflectional paradigms’ distributions.

**Computing entropies from one cell**

```
bin/$ python3 calc_paradigm_entropy.py -o <patterns.csv> <paradigm.csv> <segments.csv>
```

**Computing entropies from one cell, with a split dataset**

```
bin/$ python3 calc_paradigm_entropy.py -names <data1 name> <data2 name> -b <patterns1.csv> <paradigm1.csv> -o <patterns2.csv> <paradigm2.csv> <segments.csv>
```

**Computing entropies from two cells**

```
bin/$ python3 calc_paradigm_entropy.py -n 2 <patterns.csv> <paradigm.csv> <segments.csv>
```

More complete usage can be obtained by typing:

```
bin/$ python3 calc_paradigm_entropy.py --help
```

With --nPreds and N>2, the computation can get quite long on large datasets.

##### Usage of bin/find_macroclasses.py

Cluster lexemes into macroclasses according to alternation patterns.

**Inferring macroclasses**

```
bin/$ python3 find_macroclasses.py <patterns.csv> <segments.csv>
```

More complete usage can be obtained by typing:

```
bin/$ python3 find_macroclasses.py --help
```

The options “-m UPGMA”, “-m CD” and “-m TD” are experimental and will not undergo further development; use them at your own risk. The default is to use Description Length (DL) and a bottom-up algorithm (BU).

##### Usage of bin/make_lattice.py

Infer inflection classes as a lattice from alternation patterns. This will produce a context and an interactive html file.

**Inferring a lattice of inflection classes, with html output**

```
bin/$ python3 make_lattice.py --html <patterns.csv> <segments.csv>
```

More complete usage can be obtained by typing:

```
bin/$ python3 make_lattice.py --help
```

#### bin

##### clustering package

###### Submodules

###### clustering.algorithms module

Algorithms for inflection class clustering. Author: <NAME>

`clustering.algorithms.bottom_up_clustering(patterns, microclasses, Clusters, **kwargs)`

Cluster microclasses in a bottom-up, agglomerative fashion. The algorithm is the following (a toy sketch of this greedy loop is given after the parameter list):

```
Begin with one cluster per microclass.
While there is more than one cluster:
    Find the best possible merge of two clusters, among all possible pairs.
    Perform this merge.
```

Scoring, finding the best merges, and merging nodes depend on the Clusters class.

Parameters:

* **patterns** ([`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html#pandas.DataFrame)) – a dataframe of patterns.
* **microclasses** (*dict of str: list*) – mapping of microclass exemplars to microclass inventories.
* **Clusters** – a cluster class to use in clustering.
* **kwargs** – any keyword arguments to pass to Clusters.
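The promised sketch (my own toy illustration with a stand-in score function; Qumin’s real Clusters classes decide merges by description length or distance measures):

```
def bottom_up(clusters, score):
    """Greedily merge the best-scoring pair until one cluster remains."""
    tree = {frozenset([c]): c for c in clusters}
    nodes = list(tree)
    while len(nodes) > 1:
        # Find the best possible merge among all pairs.
        a, b = min(
            ((x, y) for i, x in enumerate(nodes) for y in nodes[i + 1:]),
            key=lambda pair: score(pair[0] | pair[1]),
        )
        merged = a | b
        tree[merged] = (tree[a], tree[b])  # build a binary node
        nodes.remove(a)
        nodes.remove(b)
        nodes.append(merged)
    return tree[nodes[0]]

# Toy run on three "microclasses", preferring the smallest merged cluster.
print(bottom_up(["A", "B", "C"], score=len))  # ('C', ('A', 'B'))
```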
`clustering.algorithms.choose(iterable)`

Choose a random element in an iterable of iterables.

`clustering.algorithms.hierarchical_clustering(patterns, Clusters, clustering_algorithm=<function bottom_up_clustering>, **kwargs)`

Perform hierarchical clustering on patterns according to a clustering algorithm and a measure. This function:

* finds microclasses;
* performs the clustering;
* finds the macroclasses (and exports them);
* returns the inflection class tree.

Scoring, finding the best merges, and merging nodes depend on the Clusters class.

Parameters:

* **patterns** ([`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html#pandas.DataFrame)) – a dataframe of strings representing alternation patterns.
* **Clusters** – a cluster class to use in clustering.
* **clustering_algorithm** (*func*) – a clustering algorithm.
* **kwargs** – any keyword arguments to pass to Clusters. Some keywords are mandatory: “prefix” should be the log file prefix, “patterns” should be a function for pattern finding.

`clustering.algorithms.log_classes(classes, prefix, suffix)`

`clustering.algorithms.top_down_clustering(patterns, microclasses, Clusters, **kwargs)`

Cluster microclasses in a top-down recursive fashion. The algorithm is the following:

```
Begin with one unique cluster containing all microclasses, and one empty cluster.
While we are seeing an improvement:
    Find the best possible shift of a microclass from one cluster to another.
    Perform this shift.
Build a binary node with the two clusters.
Recursively apply the same algorithm to each.
```

The algorithm stops when it reaches leaves, or when no shift improves the score. Scoring, finding the best shifts, and updating the nodes depend on the Clusters class.

Parameters:

* **patterns** ([`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html#pandas.DataFrame)) – a dataframe of patterns.
* **microclasses** (*dict of str: list*) – mapping of microclass exemplars to microclass inventories.
* **Clusters** – a cluster class to use in clustering.
* **kwargs** – any keyword arguments to pass to Clusters.

###### clustering.clusters module

Base classes to make clustering decisions and build inflection class trees. Author: <NAME>

*class* `clustering.clusters.BUComparisonClustersBuilder(*args, DecisionMaker=None, Annotator=None, **kwargs)`

Bases: `clustering.clusters._BUClustersBuilder`

Comparison between measures for hierarchical bottom-up clustering of inflection classes.

This class takes two _BUClustersBuilder classes, a DecisionMaker and an Annotator. The DecisionMaker is used to find the ordered merges. When merging, the merge is performed on both classes, and the Annotator’s values (description length or distances) are used to annotate the trees of the DecisionMaker.
###### clustering.clusters module

Base classes to make clustering decisions and build inflection class trees.

Author: <NAME>

*class* `clustering.clusters.BUComparisonClustersBuilder(*args, DecisionMaker=None, Annotator=None, **kwargs)`

Bases: `clustering.clusters._BUClustersBuilder`

Comparison between measures for hierarchical bottom-up clustering of inflection classes. This class takes two _BUClustersBuilder classes, a DecisionMaker and an Annotator. The DecisionMaker is used to find the ordered merges. When merging, the merge is performed on both classes, and the Annotator's values (description lengths or distances) are used to annotate the trees of the DecisionMaker.

Variables:

* **microclasses** (*dict of str: list*) – Inherited. Mapping of microclass exemplars to microclass inventories.
* **nodes** (*dict of frozenset: Node*) – Inherited. Maps frozensets of microclass exemplars to Nodes representing clusters.
* **preferences** (*dict*) – Inherited. Configuration parameters.
* **DecisionMaker** (`clustering.clusters._BUClustersBuilder`) – A class to use for finding the ordered merges.
* **Annotator** (`clustering.clusters._BUClustersBuilder`) – A class to use for annotating the DecisionMaker.

`find_ordered_merges()`

Find the list of all best possible merges.

`merge(a, b)`

Merge two clusters into one.

`rootnode()`

Return the root of the inflection class tree, if it exists.

###### clustering.descriptionlength module

Classes to make clustering decisions and build inflection class trees according to description length.

Author: <NAME>

*class* `clustering.descriptionlength.BUDLClustersBuilder(microclasses, paradigms, **kwargs)`

Bases: `clustering.descriptionlength._DLClustersBuilder`, `clustering.clusters._BUClustersBuilder`

Bottom-up builder for hierarchical clusters of inflection classes, with decisions based on description length. This class holds two representations of the clusters it builds. On the one hand, the class Cluster represents the information needed to compute the description length of a cluster. On the other hand, the class Node represents the inflection classes being built. A Node can have children and a parent; a Cluster can be split or merged. This class inherits attributes.
Variables:

* **patterns** (*dict of str: list*) – Inherited. Mapping of microclass exemplars to microclass inventories.
* **clusters** (*dict of frozenset: Node*) – Inherited. Maps frozensets of microclass exemplars to Nodes representing clusters.
* **preferences** (*dict*) – Inherited. Configuration parameters.
* **attr** (*str*) – Inherited. (class attribute) Always has the value "DL", as the nodes of the inflection class tree have a "DL" attribute.
* **DL** (*float*) – Inherited. A description length DL, with DL(system) = DL(M) + DL(C) + DL(P) + DL(R).
* **M** (*float*) – Inherited. DL(M), the cost in bits to express the mapping between lexemes and microclasses.
* **C** (*float*) – Inherited. DL(C), the cost in bits to express the mapping between microclasses and clusters.
* **P** (*float*) – Inherited. DL(P), the cost in bits to express the relation between clusters and patterns.
* **R** (*float*) – Inherited. DL(R), the cost in bits to disambiguate which pattern to use in each cluster for each microclass.
* (*dict of frozenset:* `Cluster`) – Inherited. Clusters, indexed by a frozenset of microclass exemplars.
* (*dict of str: Counter*) – Inherited. A dict mapping pairs of cells to a count of patterns, giving the number of clusters presenting each pattern for each pair of cells:

```
{ str: Counter({Pattern: int }) } pairs of cells -> pattern -> number of clusters with this pattern for this cell
```

Note that the Counter's length is written on a .length attribute, to avoid calling len() repeatedly. Remark that the count is not the same as in the class Cluster.

* **size** (*int*) – Inherited. The size of the whole system in microclasses.

`find_ordered_merges()`

Find the list of all best merges of two clusters. The list is a list of tuples of length 3 containing the two frozensets representing the labels of the clusters to merge and the description length of the resulting system.

`merge(a, b)`

Merge two Clusters, build a Node to represent the result, and update the DL.

Parameters:

* **a** (*str*) – the label of a cluster to merge.
* **b** (*str*) – the label of a cluster to merge.

*class* `clustering.descriptionlength.Cluster(*args)`

Bases: `object`

A single cluster in MDL clustering. A Cluster is iterable: iterating over a cluster is iterating over its patterns. Clusters can be merged or separated by adding or subtracting them.

Variables:

* **patterns** (*defaultdict*) – For each pair of cells in the paradigms under consideration, holds a counter of the number of microclasses using each pattern in this cluster for this pair of cells:

```
{ str: Counter({Pattern: int }) } pairs of cells -> pattern -> number of microclasses using this pattern for this cell
```

Note that the Counter's length is written on a .length attribute, to avoid calling len() repeatedly.

* **labels** (*set*) – the set of all exemplars representing the microclasses in this cluster.
* **size** (*int*) – The size of this cluster. Depending on external parameters, this can be the number of microclasses or the number of lexemes belonging to the cluster.
* **totalsize** (*int*) – The size of the whole system of clusters, either in number of microclasses or in number of lexemes.
* **R** – The cost in bits to disambiguate, for each pair of cells, which pattern is to be used with which microclass.
* **C** – The contribution of this cluster to the cost of mapping from microclasses to clusters.

`__init__(*args)`

Initialize a single cluster.

Parameters:

* **args** (*str*) – Names (exemplars) of each microclass belonging to the cluster.

`init_from_paradigm(class_size, paradigms, size)`

Populate fields according to a paradigm column. This assumes an initialization with only one microclass.

Parameters:

* **class_size** (*int*) – the size of the microclass.
* **paradigms** (*pandas.DataFrame*) – a dataframe of patterns.
* **size** (*int*) – total size.
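Since Clusters support addition and subtraction (see the class description above), a merge can be written arithmetically. A purely illustrative sketch with hypothetical exemplar names, assuming `+` and `-` return new Cluster objects whose labels are the union and difference of the operands':

```python
from clustering.descriptionlength import Cluster

a = Cluster("jeter")    # one-microclass cluster, exemplar "jeter" (hypothetical)
b = Cluster("amener")   # one-microclass cluster, exemplar "amener" (hypothetical)

merged = a + b          # merging combines the pattern counts and the labels
assert merged.labels == {"jeter", "amener"}

separated = merged - b  # subtracting is assumed to undo the merge
assert separated.labels == {"jeter"}
```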
*class* `clustering.descriptionlength.TDDLClustersBuilder(microclasses, paradigms, **kwargs)`

Bases: `clustering.descriptionlength._DLClustersBuilder`

Top-down builder for hierarchical clusters of inflection classes, with decisions based on description length. This class holds two representations of the clusters it builds. On the one hand, the class Cluster represents the information needed to compute the description length of a cluster. On the other hand, the class Node represents the inflection classes being built. A Node can have children and a parent; a Cluster can be split or merged. This class inherits attributes.

Variables:

* **patterns** (*dict of str: list*) – Inherited. Mapping of microclass exemplars to microclass inventories.
* **history** (*dict of frozenset: Node*) – Inherited. Maps frozensets of microclass exemplars to Nodes representing clusters.
* **preferences** (*dict*) – Inherited. Configuration parameters.
* **attr** (*str*) – Inherited. (class attribute) Always has the value "DL", as the nodes of the inflection class tree have a "DL" attribute.
* **DL** (*float*) – Inherited. A description length DL, with DL(system) = DL(M) + DL(C) + DL(P) + DL(R).
* **M** (*float*) – Inherited. DL(M), the cost in bits to express the mapping between lexemes and microclasses.
* **C** (*float*) – Inherited. DL(C), the cost in bits to express the mapping between microclasses and clusters.
* **P** (*float*) – Inherited. DL(P), the cost in bits to express the relation between clusters and patterns.
* **R** (*float*) – Inherited. DL(R), the cost in bits to disambiguate which pattern to use in each cluster for each microclass.
* (*dict of frozenset:* `Cluster`) – Inherited. Clusters, indexed by a frozenset of microclass exemplars.
* (*dict of str: Counter*) – Inherited. A dict mapping pairs of cells to a count of patterns, giving the number of clusters presenting each pattern for each pair of cells:

```
{ str: Counter({Pattern: int }) } pairs of cells -> pattern -> number of clusters with this pattern for this cell
```

Note that the Counter's length is written on a .length attribute, to avoid calling len() repeatedly. Remark that the count is not the same as in the class Cluster.

* **size** (*int*) – Inherited. The size of the whole system in microclasses.
* **minDL** (*float*) – The minimum description length encountered so far.
* (*dict of frozenset: tuple*) – dict associating partitions with (M, C, P, R, DL) tuples.
* **left** (*Cluster*) – left and right are temporary clusters used to divide a current cluster in two.
* **right** (*Cluster*) – see left.
* **to_split** (*Node*) – the node that we are currently trying to split.

`__init__(microclasses, paradigms, **kwargs)`

Constructor.

Parameters:

* **microclasses** (*dict of str: list*) – mapping of microclass exemplars to microclass inventories.
* **paradigms** (*pandas.DataFrame*) – a dataframe of patterns.
* **kwargs** – keyword arguments to be used as configuration.

`find_ordered_shifts()`

Find the list of all best shifts of a microclass between right and left. The list is a list of tuples of length 2 containing the label of a node to shift and the description length of the node being split if we perform the shift.

`initialize_clusters(paradigms)`

Initialize clusters with one cluster per microclass, plus one for the whole system.

Parameters:

* **paradigms** (*pandas.DataFrame*) – a dataframe of patterns.

`initialize_nodes()`

Initialize nodes with a single root node whose children are all the microclasses.

`initialize_subpartition(node)`

Initialize left and right as a subpartition of a node we want to split.

Parameters:

* **node** (*Node*) – The node to be split.
`shift(label)`

Shift one microclass from left to right, or vice versa.

Parameters:

* **label** (*str*) – the label of the microclass to shift.

`split_leaves()`

Split a cluster by replacing it with the two clusters left and right. Recompute the description length when left and right are separated. Build two nodes corresponding to left and right, children of to_split.

`clustering.descriptionlength.weighted_log(symbol_count, message_length)`

Compute \(-\log_{2}(\text{symbol\_count}/\text{message\_length}) \times \text{message\_length}\). This corresponds to the product inside the sum of the description length formula when probabilities are estimated from frequencies.

Parameters:

* **symbol_count** (*int*) – a count of symbols.
* **message_length** (*int*) – the size of the message.

Returns: the weighted log.

Return type: (*float*)
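A direct transcription of this formula (an illustrative sketch, not the library's implementation):

```python
import math

def weighted_log_sketch(symbol_count: int, message_length: int) -> float:
    """-log2(symbol_count / message_length) * message_length, i.e. a cost
    in bits with probabilities estimated from frequencies."""
    return -math.log2(symbol_count / message_length) * message_length

# A symbol seen 2 times in a message of length 8: -log2(2/8) * 8 = 16 bits.
assert weighted_log_sketch(2, 8) == 16.0
```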
###### clustering.distances module

Classes and functions to make clustering decisions and build inflection class trees according to distances.

> Still experimental, and unfinished.

Author: <NAME>

*class* `clustering.distances.CompressionDistClustersBuilder(*args, **kwargs)`

Bases: `clustering.distances._DistanceClustersBuilder`

Builder for bottom-up hierarchical clusters of inflection classes with compression distance. This class inherits attributes.

Variables:

* **microclasses** (*dict of str: list*) – Inherited. Mapping of microclass exemplars to microclass inventories.
* **nodes** (*dict of frozenset: Node*) – Inherited. Maps frozensets of microclass exemplars to Nodes representing clusters.
* **preferences** (*dict*) – Inherited. Configuration parameters.
* **attr** (*str*) – Inherited. Always has the value "DL", as the nodes of the inflection class tree have a "DL" attribute.
* **paradigms** (*pandas.DataFrame*) – Inherited. A dataframe of patterns.
* **distances** (*dict*) – Inherited. The distance matrix between clusters.
* **DL_dict** (*dict of frozenset: float*) – Maps each cluster to its description length.
* **min_DL** (*float*) – the lowest description length for the whole system encountered so far.

`merge(a, b)`

Merge two Clusters, build a new Node, update the distances, and track the system's DL.

Parameters:

* **a** (*frozenset*) – the label of a cluster to merge.
* **b** (*frozenset*) – the label of a cluster to merge.

`update_distances(new)`

Update for compression distances.

Parameters:

* **new** (*frozenset*) – Frozenset of microclass exemplars representing the new cluster.

`clustering.distances.DL(messages)`

Compute the description length of a list of messages encoded separately.

Parameters:

* **messages** (*list*) – List of lists of symbols. Symbols are str; they are treated as atomic.

*class* `clustering.distances.UPGMAClustersBuilder(*args, **kwargs)`

Bases: `clustering.distances._DistanceClustersBuilder`

Builder for UPGMA hierarchical clusters of inflection classes with Hamming distance.

Variables:

* **microclasses** (*dict of str: list*) – Inherited. Mapping of microclass exemplars to microclass inventories.
* **nodes** (*dict of frozenset: Node*) – Inherited. Maps frozensets of microclass exemplars to Nodes representing clusters.
* **preferences** (*dict*) – Inherited. Configuration parameters.
* **attr** (*str*) – Inherited. Always has the value "DL", as the nodes of the inflection class tree have a "DL" attribute.
* **paradigms** (*pandas.DataFrame*) – Inherited. A dataframe of patterns.
* **distances** (*dict*) – Inherited. The distance matrix between clusters.

`update_distances(new)`

UPGMA update for distances.

Parameters:

* **new** (*frozenset*) – Frozenset of microclass exemplars representing the new cluster.

`clustering.distances.compression_distance(a, b, merged)`

Compute the compression distance between description lengths.

Parameters:

* **a** (*float*) – Description length of a cluster.
* **b** (*float*) – Description length of a cluster.
* **merged** (*float*) – Description length of the cluster merging both the clusters from a and b.
`clustering.distances.compression_distance_atomic(x, y, table, microclasses, *args, **kwargs)`

Compute the compression distance between microclasses x and y from their exemplars.

Parameters:

* **x** (*str*) – A microclass exemplar.
* **y** (*str*) – A microclass exemplar.
* **table** (*pandas.DataFrame*) – a dataframe of patterns.
* **microclasses** (*dict of str: list*) – mapping of microclass exemplars to microclass inventories.

`clustering.distances.dist_matrix(table, *args, labels=None, distfun=<function hamming>, half=False, default=inf, **kwargs)`

Output a distance matrix between clusters.

Parameters:

* **table** (*pandas.DataFrame*) – a dataframe of patterns.
* **distfun** (*fun*) – distance function.
* **labels** (*iterable*) – the labels between which to compute distances. Defaults to the table's index.
* **half** (*bool*) – Whether to fill only half of the matrix.
* **default** (*float*) – Default distance.

Returns: the distance matrix.

Return type: distances (*dict*)

`clustering.distances.hamming(x, y, table, *args, **kwargs)`

Compute the Hamming distance between x and y in table.

Parameters:

* **x** (*any iterable*) – vector.
* **y** (*any iterable*) – vector.
* **table** (*pandas.DataFrame*) – a dataframe of patterns.

Returns: the Hamming distance between x and y.

Return type: (*int*)

`clustering.distances.split_description(descriptions)`

Split each description of a list on spaces to obtain symbols.

`clustering.distances.table_to_descr(table, exemplars, microclasses)`

Create a list of descriptions from a paradigmatic table.

Parameters:

* **table** (*pandas.DataFrame*) – a dataframe of patterns.
* **exemplars** (*iterable of str*) – The microclasses to include in the description.
* **microclasses** (*dict of str: list*) – mapping of microclass exemplars to microclass inventories.
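The `hamming` function above compares two rows of the pattern table; a minimal, self-contained equivalent for equal-length vectors (a sketch, with hypothetical pattern strings):

```python
def hamming_sketch(x, y):
    """Count the positions at which two equal-length vectors differ."""
    return sum(a != b for a, b in zip(x, y))

# Two rows of patterns differing for one pair of cells only:
assert hamming_sketch(["E_ ⇌ ə_E", "E_ ⇌ ə_ɔ̃"], ["E_ ⇌ ə_E", "_ ⇌ _t"]) == 1
```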
###### clustering.utils module

Utilities used in clustering.

Author: <NAME>

*class* `clustering.utils.Node(labels, children=None, **kwargs)`

Bases: `object`

Represent an inflection class tree.

Variables:

* **labels** (*list*) – labels of all the leaves under this node.
* **children** (*list*) – direct children of this node.
* **attributes** (*dict*) – attributes for this node. Currently, the following attributes are expected: size (int): size of the group represented by this node; DL (float): description length for this node; color (str): color of the splines from this node to its children, in a format usable by pyplot (currently, red ("r") is used when the node didn't decrease the description length, blue ("b") otherwise); macroclass (bool): is the node in a macroclass?; macroclass_root (bool): is the node the root of a macroclass? The attributes "_x" and "_rank" are reserved, and will be overwritten by the draw function.

`__init__(labels, children=None, **kwargs)`

Node constructor.

Parameters:

* **labels** (*iterable*) – labels of all the leaves under this node.
* **children** (*list*) – direct children of this node.
* **kwargs** – any other keyword argument will be added as a node attribute. Note that certain algorithms expect the Node to have (int) "size", (str) "color", (bool) "macroclass", or (float) "DL" attributes. The attributes "_x" and "_rank" are reserved, and will be overwritten by the draw function.

`compute_xy(tree_placement=False, pos=None)`

`draw(horizontal=False, square=False, leavesfunc=<function Node.<lambda>>, nodefunc=None, label_rotation=None, annotateOnlyMacroclasses=False, point=None, edge_attributes=None, interactive=False, lattice=False, pos=None, **kwargs)`

Draw the tree as a dendrogram-style pyplot graph.

Example:

```
square=True          square=False
│  ┌──┴──┐           │    ╱╲            horizontal=False
│  │   ┌─┴─┐         │   ╱  ╲
│  │   │   │         │  ╱   ╱╲
│  │   │   │         │ ╱   ╱  ╲
│__│___│___│         │╱___╱____╲

│─────┐              │⟍
│───┐ ├              │   ⟍              horizontal=True
│   ├─┘              │⟍  ⟋
│───┘                │⟋
│____________        │____________
```

Parameters:

* **horizontal** (*bool*) – Should the tree be drawn with the leaves on the y axis? (Defaults to False: leaves on the x axis.)
* **square** (*bool*) – Should the tree splines be squared, with 90° angles? (Defaults to False.)
* **leavesfunc** (*fun*) – A function that will be applied to leaves before writing them down. Takes a Node, returns a str.
* **nodefunc** (*fun*) – A function that will be applied to nodes to annotate them. Takes a Node, returns a str.
* **keep_above_macroclass** (*bool*) – For macroclass history trees: should the edges above macroclasses be drawn? (Defaults to True.)
* **annotateOnlyMacroclasses** – For macroclass history trees: if True and nodelabel isn't None, only the macroclass nodes are annotated.
* **point** (*fun*) – A function that maps a node to point attributes.
* **edge_attributes** (*fun*) – A function that maps a pair of nodes to edge attributes. By default, use the parent's color and a "-" linestyle for nodes, "--" for leaves.
* **interactive** (*bool*) – Whether this is destined to create an interactive plot.
* **lattice** (*bool*) – Whether this node is a lattice rather than a tree.
* **pos** (*dict*) – A dictionary from node labels to (x, y) positions, compatible with networkx layout functions. If absent, use networkx's graphviz layout.

`leaves()`

`macroclasses(parent_is_macroclass=False)`

Find all the macroclass nodes in this tree.

`to_latex(nodelabel=None, vertical=True, level_dist=50, square=True, leavesfunc=<function Node.<lambda>>, scale=1)`

Return a LaTeX string, compatible with tikz-qtree.

Parameters:

* **nodelabel** – The name of the attribute to write on the nodes.
* **vertical** – Should the tree be drawn vertically?
* **level_dist** – Distance between levels.
* **square** – Should the arcs have a squared shape?
* **leavesfunc** (*fun*) – A function that will be applied to leaves before writing them down. Takes a Node, returns a str.
* **scale** (*int*) – defaults to 1; tikzpicture scale argument.

`to_networkx()`
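To illustrate the constructor documented above, here is a small hand-built tree (an illustrative sketch: the labels and attribute values are hypothetical, and `size` and `DL` are among the attributes some algorithms expect):

```python
from clustering.utils import Node

# Two leaf microclasses, then a root node covering both.
leaf_a = Node(["a"], size=1)
leaf_b = Node(["b"], size=1)
root = Node(["a", "b"], children=[leaf_a, leaf_b], size=2, DL=3.5)

root.draw(horizontal=True, square=True)  # dendrogram-style pyplot figure
latex = root.to_latex(vertical=False)    # tikz-qtree string for papers
```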
`clustering.utils.find_microclasses(paradigms)`

Find microclasses in a paradigm (groups of lemmas with identical rows). This is useful to identify an exemplar of each inflectional microclass, and to limit further computation to the collection of these exemplars.

Parameters:

* **paradigms** (*pandas.DataFrame*) – a dataframe containing inflectional paradigms. Columns are cells, and rows are lemmas.

Returns: microclasses (dict). Its keys are exemplars; its values are lists of the names of the rows identical to the exemplar. Each exemplar represents a microclass.

```
>>> classes
{"a":["a","A","aa"], "b":["b","B","BBB"]}
```

`clustering.utils.find_min_attribute(tree, attr)`

Find the minimum value for an attribute in a tree.

Parameters:

* **tree** (*Node*) – The tree in which to find the minimum attribute.
* **attr** (*str*) – the attribute's key.
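The grouping performed by `find_microclasses` above can be pictured with plain pandas (a sketch of the idea, not the actual implementation):

```python
import pandas as pd

def find_microclasses_sketch(paradigms: pd.DataFrame) -> dict:
    """Group lemmas whose rows are identical across all columns."""
    classes = {}
    for _, group in paradigms.groupby(list(paradigms.columns)):
        members = list(group.index)
        exemplar = members[0]       # one exemplar stands for the whole group
        classes[exemplar] = members
    return classes
```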
`clustering.utils.string_to_node(string, legacy_annotation_name=None)`

Parse an inflection tree written as a string.

Example: In the label, fields are separated by "#", as such:

```
(<labels>#<size>#<DL>#<color> (... ) (... ) )
```

Returns: The root of the tree.

Return type: inflexClass.Node

###### Module contents

##### entropy package

###### Submodules

###### entropy.distribution module

author: <NAME>.

Encloses distributions of patterns over paradigms.

*class* `entropy.distribution.PatternDistribution(paradigms, patterns, pat_dic, features=None)`

Bases: `object`

Statistical distribution of patterns.

Variables:

* **paradigms** (*pandas.DataFrame*) – containing forms.
* **patterns** (*pandas.DataFrame*) – containing pairwise patterns of alternation.
* **classes** (*pandas.DataFrame*) – containing a representation of the applicable patterns from one cell to another. Indexes are lemmas.
* **entropies** (*dict of int:* *pandas.DataFrame*) – dict mapping n to a dataframe containing the entropies for the distribution \(P(c_{1}, ..., c_{n} \to c_{n+1})\).

`__init__(paradigms, patterns, pat_dic, features=None)`

Constructor for PatternDistribution.

Parameters:

* **patterns** (*pandas.DataFrame*) – patterns (columns are pairs of cells, index are lemmas).
* **logfile** (*TextIOWrapper*) – flow on which to write a log.

`add_features(series)`

`entropy_matrix(silent=False)`

Return a `pandas.DataFrame` with unary entropies, and one with counts of lexemes. The result contains the entropy \(H(c_{1} \to c_{2})\). Values are computed for all unordered combinations of \((c_{1}, c_{2})\) in the `PatternDistribution.paradigms`' columns. Indexes are the predictor cells \(c_{1}\) and columns are the predicted cells \(c_{2}\).

Example: For two cells c1, c2, the entropy of c1 → c2, noted \(H(c_{1} \to c_{2})\), is:

\[H( patterns_{c1, c2} \mid classes_{c1, c2} )\]

`n_preds_distrib_log(logfile, n, sanity_check=False)`

Print a log of the probability distribution for two predictors. Writes down the distributions

\[P( patterns_{c1, c3}, \; patterns_{c2, c3} \mid classes_{c1, c3}, \; classes_{c2, c3}, \; patterns_{c1, c2} )\]

for all unordered combinations of two column names in `PatternDistribution.paradigms`.
Parameters:

* **logfile** (*io.TextIOWrapper*) – Output flow on which to write.
* **n** (*int*) – number of predictors.
* **sanity_check** (*bool*) – Use a slower calculation to check that the results are exact.

`n_preds_entropy_matrix(n)`

Return a `pandas.DataFrame` with n-ary entropies, and one with counts of lexemes. The result contains the entropy \(H(c_{1}, ..., c_{n} \to c_{n+1})\). Values are computed for all unordered combinations of \((c_{1}, ..., c_{n+1})\) in the `PatternDistribution.paradigms`' columns. Indexes are tuples \((c_{1}, ..., c_{n})\) and columns are the predicted cells \(c_{n+1}\).

Example: For three cells c1, c2, c3 (n=2), the entropy of c1, c2 → c3, noted \(H(c_{1}, c_{2} \to c_{3})\), is:

\[H( patterns_{c1, c3}, \; patterns_{c2, c3} \mid classes_{c1, c3}, \; classes_{c2, c3}, \; patterns_{c1, c2} )\]

Parameters:

* **n** (*int*) – number of predictors.

`one_pred_distrib_log(logfile, sanity_check=False)`

Print a log of the probability distribution for one predictor. Writes down the distributions \(P( patterns_{c1, c2} \mid classes_{c1, c2} )\) for all unordered combinations of two column names in `PatternDistribution.paradigms`. Also writes the entropy of the distributions.

Parameters:

* **logfile** (*io.TextIO*) – Output flow on which to write.
* **sanity_check** (*bool*) – Use a slower calculation to check that the results are exact.

`read_entropy_from_file(filename)`

Read already computed entropies from a file.

Parameters:

* **filename** – the file's path.

`value_check(n, logfile=None)`

Check that predicting from n predictors isn't harder than predicting from fewer. Checks that the entropy from n predictors c1, …, cn is lower than the entropy from n-1 predictors c1, …, cn-1 (for all computed n-preds entropies).

Parameters:

* **n** – number of predictors.
* **logfile** (*io.TextIOWrapper*) – Output flow on which to write the details of the result (optional).

*class* `entropy.distribution.SplitPatternDistribution(paradigms_list, patterns_list, pat_dic_list, names, logfile=None, features=None)`

Bases: `entropy.distribution.PatternDistribution`

Implicative entropy distribution for split systems. Split system entropy is the joint entropy on both systems.
`cond_bipartite_entropy(target=0, known=1)`

Conditional entropy between the two systems: H(c1→c2 | c1'→c2') or H(c1'→c2' | c1→c2).

`mutual_information(normalize=False)`

Mutual information between the two systems.

`entropy.distribution.dfsum(df, **kwargs)`

`entropy.distribution.merge_split_df(dfs)`

`entropy.distribution.value_norm(df)`

Round to 10 significant digits, avoiding negative zeros.

###### entropy.utils module

`entropy.utils.P(x, subset=None)`

Return the probability distribution of the elements in a `pandas.core.series.Series`.

Parameters:

* **x** (*pandas.core.series.Series*) – A series of data.
* **subset** (*iterable*) – Only give the distribution for a subset of values.

Returns: A `pandas.core.series.Series` whose index is made of x's elements and whose values are their probabilities in x.

`entropy.utils.cond_P(A, B, subset=None)`

Return the conditional probability distribution P(A|B) for the elements of two `pandas.core.series.Series`.

Parameters:

* **A** (*pandas.core.series.Series*) – A series of data.
* **B** (*pandas.core.series.Series*) – A series of data.
* **subset** (*iterable*) – Only give the distribution for a subset of values.

Returns: A `pandas.core.series.Series` with two indexes. The first index is from the elements of B, the second from the elements of A. The values are the P(A|B).

`entropy.utils.cond_entropy(A, B, **kwargs)`

Calculate the conditional entropy between two series of data points. Presupposes that values in the series are of the same type, typically tuples.

Parameters:

* **A** (*pandas.core.series.Series*) – A series of data.
* **B** (*pandas.core.series.Series*) – A series of data.

Returns: H(A|B)

`entropy.utils.entropy(A)`

Calculate the entropy for a series of probabilities.

Parameters:

* **A** (*pandas.core.series.Series*) – A series of data.

Returns: H(A)
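These utilities compose naturally. Here is a rough pandas sketch of `P`, `entropy` and `cond_entropy` (using the identity H(A|B) = H(A,B) − H(B); this only approximates the API above and is not the library's code):

```python
import numpy as np
import pandas as pd

def P_sketch(x: pd.Series) -> pd.Series:
    """Probability of each distinct value of x."""
    return x.value_counts(normalize=True)

def entropy_sketch(probs: pd.Series) -> float:
    """H = -sum(p * log2(p)) over a probability distribution."""
    probs = probs[probs > 0]
    return float(-(probs * np.log2(probs)).sum())

def cond_entropy_sketch(A: pd.Series, B: pd.Series) -> float:
    """H(A|B) = H(A, B) - H(B)."""
    joint = pd.Series(list(zip(A, B)), index=A.index)
    return entropy_sketch(P_sketch(joint)) - entropy_sketch(P_sketch(B))
```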
###### Module contents

##### lattice package

###### Submodules

###### lattice.lattice module

*class* `lattice.lattice.ICLattice(dataframe, leaves, annotate=None, dummy_formatter=None, keep_names=True, comp_prefix=None, col_formatter=None, na_value=None, AOC=False, collections=False, verbose=True)`

Bases: `object`

Inflection Class Lattice. This is a wrapper around `concepts.Context`.

`__init__(dataframe, leaves, annotate=None, dummy_formatter=None, keep_names=True, comp_prefix=None, col_formatter=None, na_value=None, AOC=False, collections=False, verbose=True)`

Parameters:

* **dataframe** (*pandas.DataFrame*) – A dataframe.
* **leaves** (*dict*) – Dictionary of microclasses.
* **annotate** (*dict*) – Extra annotations to add to the lattice, of the form {<object label>: <annotation>}.
* **dummy_formatter** (*func*) – Function to make dummies from the table (defaults to pandas').
* **keep_names** (*bool*) – whether to keep the original column names when dropping duplicate dummy columns.
* **comp_prefix** (*str*) – If there are two sets of properties, the prefix used to distinguish column names.
* **AOC** (*bool*) – Whether to limit ourselves to Attribute or Object Concepts.
* **col_formatter** (*func*) – Function to format the columns of the context table.
* **na_value** – A value to use as "NA". Defaults to None.
* **collections** (*bool*) – Whether the table contains `representations.patterns.PatternCollection` objects.

`ancestors(identifier)`

Return all ancestors of the node which corresponds to the identifier.

`draw(filename, title='Lattice', **kwargs)`

Draw the lattice using `clustering.Node`'s drawing function.

`parents(identifier)`

Return all direct parents of the node which corresponds to the identifier.

`stats()`

Return some stats about the classification's size and shape. Based on self.nodes, not self.lattice: the stats differ depending on AOC/not AOC.

`to_html(**kwargs)`

`lattice.lattice.to_dummies(table, **kwargs)`

Make a context table from a dataframe.

Parameters:

* **table** (*pandas.DataFrame*) – A dataframe of patterns or strings.

Returns: A context table.

Return type: dummies (*pandas.DataFrame*)
`lattice.lattice.to_html_disabled(*args, **kwargs)`

###### Module contents

##### representations package

###### Submodules

###### representations.alignment module

author: <NAME>.

This module is used to align sequences.

`representations.alignment.align_auto(s1, s2, insert_cost, sub_cost, distance_only=False, fillvalue='', **kwargs)`

Return all the best alignments of two words according to some edit distance matrix.

Parameters:

* **s1** (*str*) – first word to align.
* **s2** (*str*) – second word to align.
* **insert_cost** (*func*) – A function which takes one value and returns an insertion cost.
* **sub_cost** (*func*) – A function which takes two values and returns a substitution cost.
* **distance_only** (*bool*) – defaults to False. If True, return only the best distance; if False, return an alignment.
* **fillvalue** – (optional) the value with which to pad when iterables have varying lengths. Default: "".

Returns: Either an alignment (a list of lists of zipped tuples), or a distance (if distance_only is True).

`representations.alignment.align_baseline(*args, **kwargs)`

Simple alignment intended as an inflectional baseline (Albright & Hayes 2002): a single change, either suffixal, prefixal, or infixal. This doesn't work well when there is both a prefix and a suffix. Used as a baseline for the evaluation of the auto-aligned patterns.

See "Modeling English Past Tense Intuitions with Minimal Generalization", Albright, A. & Hayes, B., *Proceedings of the ACL-02 Workshop on Morphological and Phonological Learning* - Volume 6, Association for Computational Linguistics, 2002, 58-69, page 2:

> "The exact procedure for finding a word-specific rule is as follows: given an input pair (X, Y), the model first finds the maximal left-side substring shared by the two forms (e.g., #mɪs), to create the C term (left side context). The model then examines the remaining material and finds the maximal substring shared on the right side, to create the D term (right side context). The remaining material is the change; the non-shared string from the first form is the A term, and from the second form is the B term."

Examples

```
>>> align_baseline("mɪs","mas")
[('m', 'm'), ('ɪ', 'a'), ('s', 's')]
>>> align_baseline("mɪs","mɪst")
[('m', 'm'), ('ɪ', 'ɪ'), ('s', 's'), ('', 't')]
>>> align_baseline("mɪs","amɪs")
[('', 'a'), ('m', 'm'), ('ɪ', 'ɪ'), ('s', 's')]
>>> align_baseline("mɪst","amɪs")
[('m', 'a'), ('ɪ', 'm'), ('s', 'ɪ'), ('t', 's')]
```

Parameters:

* ***args** – any number of iterables (>= 2).
* **fillvalue** – the value with which to pad when iterables have varying lengths. Default: "".

Returns: a list of zipped tuples.
`representations.alignment.align_left(*args, **kwargs)`

Align all arguments to the left (wrapper around zip_longest).

Examples

```
>>> align_left("mɪs","mas")
[('m', 'm'), ('ɪ', 'a'), ('s', 's')]
>>> align_left("mɪs","mɪst")
[('m', 'm'), ('ɪ', 'ɪ'), ('s', 's'), ('', 't')]
>>> align_left("mɪs","amɪs")
[('m', 'a'), ('ɪ', 'm'), ('s', 'ɪ'), ('', 's')]
>>> align_left("mɪst","amɪs")
[('m', 'a'), ('ɪ', 'm'), ('s', 'ɪ'), ('t', 's')]
```

Parameters:

* ***args** – any number of iterables (>= 2).
* **fillvalue** – the value with which to pad when iterables have varying lengths. Default: "".

Returns: a list of zipped tuples, left aligned.

`representations.alignment.align_multi(*strings, **kwargs)`

Levenshtein-style alignment over arguments, two by two.

`representations.alignment.align_right(*iterables, **kwargs)`

Align all arguments to the right (zip_longest with right alignment).

Examples

```
>>> align_right("mɪs","mas")
[('m', 'm'), ('ɪ', 'a'), ('s', 's')]
>>> align_right("mɪs","mɪst")
[('', 'm'), ('m', 'ɪ'), ('ɪ', 's'), ('s', 't')]
>>> align_right("mɪs","amɪs")
[('', 'a'), ('m', 'm'), ('ɪ', 'ɪ'), ('s', 's')]
>>> align_right("mɪst","amɪs")
[('m', 'a'), ('ɪ', 'm'), ('s', 'ɪ'), ('t', 's')]
```

Parameters:

* ***iterables** – any number of iterables (>= 2).
* **fillvalue** – the value with which to pad when iterables have varying lengths. Default: "".

Returns: a list of zipped tuples, right aligned.

`representations.alignment.commonprefix(*args)`

Given a list of strings, return the longest common prefix.

`representations.alignment.commonsuffix(*args)`

Given a list of strings, return the longest common suffix.

`representations.alignment.levenshtein_ins_cost(*_)`

`representations.alignment.levenshtein_sub_cost(a, b)`

`representations.alignment.multi_sub_cost(a, b)`
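`align_left` and `align_right` behave like padded zips; here is a self-contained sketch reproducing the doctests above (the helper names are hypothetical):

```python
from itertools import zip_longest

def align_left_sketch(*args, fillvalue=""):
    """Pad on the right: zip_longest does exactly this."""
    return list(zip_longest(*args, fillvalue=fillvalue))

def align_right_sketch(*args, fillvalue=""):
    """Pad on the left before zipping."""
    longest = max(len(arg) for arg in args)
    padded = ([fillvalue] * (longest - len(arg)) + list(arg) for arg in args)
    return list(zip(*padded))

assert align_left_sketch("mɪs", "mɪst")[-1] == ("", "t")
assert align_right_sketch("mɪs", "mɪst")[0] == ("", "m")
```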
###### representations.confusables module

author: <NAME>.

This module is used to get characters similar to other UTF-8 characters.

`representations.confusables.parse(filename)`

Parse a file of confusable character associations; return a dict.

###### representations.contexts module

author: <NAME>.

This module implements patterns' contexts, which are series of phonological restrictions.

*class* `representations.contexts.Context(segments)`

Bases: `object`

Context for an alternation pattern.

*classmethod* `merge(contexts, debug=False)`

Merge contexts to generalize them: merge the contexts and combine their restrictions into a new context.

Parameters:

* **contexts** – iterable of Contexts.
* **debug** – whether to print debug strings.

Returns: a merged context.

`to_str(mode=2)`

###### representations.generalize module

author: <NAME>.

This module is used to generalize patterns' contexts.

`representations.generalize.generalize_patterns(pats, debug=False)`

Generalize these patterns' contexts.

Parameters:

* **patterns** – an iterable of `Patterns.Pattern`.
* **debug** – whether to print debug strings.

Returns: a new `Patterns.Pattern`.

`representations.generalize.incremental_generalize_patterns(*args)`

Merge patterns incrementally as long as the merged pattern has the same coverage. Attempt to merge the patterns two by two, and refrain from doing so if the merged pattern doesn't match all the lexemes that led to its inference. Also attempt to merge together patterns that have not been merged with others.

Parameters:

* ***args** – the patterns.

Returns: a list of patterns, at best of length 1, at worst of the same length as the input.

###### representations.patterns module

author: <NAME>.

This module addresses the modeling of inflectional alternation patterns.

*class* `representations.patterns.BinaryPattern(*args, **kwargs)`

Bases: `representations.patterns.Pattern`

Represent the alternation pattern between two forms. A BinaryPattern is a `Patterns.Pattern` over just two forms. Applying the pattern to one of the original forms yields the other one. As an example, we will use the following alternation in a French present-tense verb:

| cells | Forms | Transcription |
| --- | --- | --- |
| prs.1.sg ⇌ prs.2.pl | j'amène ⇌ vous amenez | amEn ⇌ amənE |

Example

```
>>> cells = ("prs.1.sg", "prs.2.pl")
>>> forms = ("amEn", "amənE")
>>> p = Pattern(cells, forms, aligned=False)
>>> type(p)
representations.patterns.BinaryPattern
>>> p
E_ ⇌ ə_E / am_n_ <0>
>>> p.apply("amEn",cells)
'amənE'
```

`applicable(form, cell)`

Test if this pattern matches a form, i.e. whether the pattern is applicable to the form.

Parameters:

* **form** (*str*) – a form.
* **cell** (*str*) – A cell contained in self.cells.

Returns: whether the pattern is applicable to the form from that cell.

Return type: bool
`apply(form, names, raiseOnFail=True)`

Apply the pattern to a form.

Parameters:

* **form** – a form, assumed to belong to the cell names[0].
* **names** – apply to a form of cell names[0] to produce a form of cell names[1] (default: self.cells). Patterns being non-oriented, it is better to pass the names argument explicitly.
* **raiseOnFail** (*bool*) – defaults to True. If True, raise an error when the pattern is not applicable to the form; if False, return None instead.

Returns: a form belonging to the opposite cell.

*exception* `representations.patterns.NotApplicable`

Bases: `Exception`

Raised when a `patterns.Pattern` can't be applied to a form.

*class* `representations.patterns.Pattern(cells, forms, aligned=False, **kwargs)`

Bases: `object`

Represent an alternation pattern and its context. The pattern can be defined over an arbitrary number of forms. If there are only two forms, a `patterns.BinaryPattern` will be created.

* cells (tuple): Cell labels.
* alternation (dict of str: list of tuple): Maps the cells' names to a list of tuples of alternating material.
* context (tuple of str): Sequence of (str, Quantifier) pairs, or "{}" (which stands for alternating material).
* score (float): A score used to choose among patterns.

Example

```
>>> cells = ("prs.1.sg", "prs.1.pl","prs.2.pl")
>>> forms = ("amEn", "amənõ", "amənE")
>>> p = patterns.Pattern(cells, forms, aligned=False)
>>> p
E_ ⇌ ə_ɔ̃ ⇌ ə_E / am_n_ <0>
```

`__init__(cells, forms, aligned=False, **kwargs)`

Constructor for Pattern.

Parameters:

* **cells** (*iterable*) – Cell labels (str), in the same order.
* **forms** (*iterable*) – Forms (str) to be segmented.
* **aligned** (*bool*) – whether the forms are already aligned. Otherwise, left alignment will be performed.

`alternation_list(exhaustive_blanks=True, use_gen=False, filler='_')`

Return a list of the alternating material, where the context is replaced by a filler.

Parameters:

* **exhaustive_blanks** (*bool*) – Whether initial and final contexts should be marked by a filler.
* **use_gen** (*bool*) – Whether the alternation should be the generalized one.
* **filler** (*str*) – Alternative filler used to join the alternation members.

Returns: a list of str of alternating material, where the context is replaced by a filler.
*class* `representations.patterns.Pattern(cells, forms, aligned=False, **kwargs)`

Bases: object

Represent an alternation pattern and its context. The pattern can be defined over an arbitrary number of forms. If there are only two forms, a `patterns.BinaryPattern` will be created.

Attributes:
* **cells** (tuple) – cell labels.
* **alternation** (dict of str: list of tuple) – maps the cells' names to a list of tuples of alternating material.
* **context** (tuple of str) – sequence of (str, Quantifier) pairs or "{}" (which stands for alternating material).
* **score** (float) – a score used to choose among patterns.

Example

```
>>> cells = ("prs.1.sg", "prs.1.pl", "prs.2.pl")
>>> forms = ("amEn", "amənõ", "amənE")
>>> p = patterns.Pattern(cells, forms, aligned=False)
>>> p
E_ ⇌ ə_ɔ̃ ⇌ ə_E / am_n_ <0>
```

`__init__(cells, forms, aligned=False, **kwargs)`

Constructor for Pattern.

Parameters:
* **cells** (iterable) – cell labels (str), in the same order as the forms.
* **forms** (iterable) – forms (str) to be segmented.
* **aligned** (bool) – whether the forms are already aligned. Otherwise, left alignment will be performed.

`alternation_list(exhaustive_blanks=True, use_gen=False, filler='_')`

Return a list of the alternating material, where the context is replaced by a filler.

Parameters:
* **exhaustive_blanks** (bool) – whether initial and final contexts should be marked by a filler.
* **use_gen** (bool) – whether the alternation should be the generalized one.
* **filler** (str) – alternative filler used to join alternation members.

Returns: a list of str of alternating material, where the context is replaced by a filler.

`is_identity()`

*classmethod* `new_identity(cells)`

Create a new identity pattern for a given set of cells.

`to_alt(exhaustive_blanks=True, use_gen=False, **kwargs)`

Join the alternating material obtained with alternation_list() into a str.

*class* `representations.patterns.PatternCollection(collection)`

Bases: object

Represent a set of patterns.

`representations.patterns.are_all_identical(iterable)`

Test whether all elements in the iterable are identical.

`representations.patterns.find_alternations(paradigms, method, **kwargs)`

Find local alternations in a DataFrame of paradigms. For each pair of forms in the paradigm, keep only the alternating material (words are left-aligned). Return the resulting DataFrame.

Parameters:
* **paradigms** (pandas.DataFrame) – a dataframe containing inflectional paradigms. Columns are cells, and rows are lemmas.
* **method** (str) – "local" uses pairs of forms, "global" uses entire paradigms.

Returns: a dataframe with the same indexes as paradigms and one column for each possible combination of columns in paradigms, filled with segmented patterns.

Return type: pandas.DataFrame

`representations.patterns.find_applicable(paradigms, pat_dict, disable_tqdm=False, **kwargs)`

Find all applicable rules for each form. We name sets of applicable rules *classes*. *Classes* are oriented: we produce two separate columns, (a, b) and (b, a), for each pair of columns (a, b) in the paradigm.

Parameters:
* **paradigms** (pandas.DataFrame) – paradigms (columns are cells, index are lemmas).
* **pat_dict** (dict) – a dict mapping a column name to a list of patterns.
* **disable_tqdm** (bool) – if True, do not show the progress bar.

Returns: a dataframe associating a lemma (index) and an ordered pair of paradigm cells (columns) with a tuple representing a class of applicable patterns.

Return type: pandas.DataFrame
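The following minimal sketch shows how a paradigm table is expected to look and how to extract its local alternations (the lemma and forms come from this module's docstring examples; the contents of the returned dataframe are not shown, as they depend on the inferred patterns):

```
>>> import pandas as pd
>>> from representations import patterns
>>> paradigms = pd.DataFrame([["amEn", "amənE"]],
...                          columns=["prs.1.sg", "prs.2.pl"],
...                          index=["amener"])
>>> alternations = patterns.find_alternations(paradigms, "local")
```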
`representations.patterns.find_endings(paradigms, *args, disable_tqdm=False, **kwargs)`

Find suffixes in a paradigm. Return a DataFrame of endings where, in each row, the prefix common to all of the row's cells is removed.

Parameters:
* **paradigms** (pandas.DataFrame) – a dataframe containing inflectional paradigms. Columns are cells, and rows are lemmas.
* **disable_tqdm** (bool) – if True, do not show the progress bar.

Returns: a dataframe of the same shape filled with segmented endings.

Return type: pandas.DataFrame

Example

```
>>> df = pd.DataFrame([["amEn", "amEn", "amEn", "amənõ", "amənE", "amEn"]],
...                   columns=["prs.1.sg", "prs.2.sg", "prs.3.sg",
...                            "prs.1.pl", "prs.2.pl", "prs.3.pl"],
...                   index=["amener"])
>>> df
       prs.1.sg prs.2.sg prs.3.sg prs.1.pl prs.2.pl prs.3.pl
amener     amEn     amEn     amEn    amənõ    amənE     amEn
>>> find_endings(df)
       prs.1.sg prs.2.sg prs.3.sg prs.1.pl prs.2.pl prs.3.pl
amener       En       En       En      ənõ      ənE       En
```

`representations.patterns.find_patterns(paradigms, method, **kwargs)`

Find patterns in a DataFrame according to a general method. Methods can be:

* suffix (align left),
* prefix (align right),
* baseline (see Albright & Hayes 2002),
* levenshtein (dynamic alignment using levenshtein scores),
* similarity (dynamic alignment using segment similarity scores).

Parameters:
* **paradigms** (pandas.DataFrame) – paradigms (columns are cells, index are lemmas).
* **method** (str) – "suffix", "prefix", "baseline", "levenshtein" or "similarity".

Returns: **(patterns, pat_dict)**. patterns is the created pandas.DataFrame; pat_dict is a dict mapping a column name to a list of patterns.

Return type: tuple

`representations.patterns.from_csv(filename, defective=True, overabundant=True)`

Read a Patterns DataFrame from a csv file.

`representations.patterns.make_pairs(paradigms)`

Join columns with " ⇌ " by combination. The output has one column for each pair of the paradigm's columns.

`representations.patterns.to_csv(dataframe, filename, pretty=False)`

Export a Patterns DataFrame to csv.
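A hypothetical end-to-end sketch of this module's helpers, using the small `paradigms` table from the sketch above (the file name and method choice are illustrative only):

```
>>> pats, pat_dict = patterns.find_patterns(paradigms, "suffix")
>>> patterns.to_csv(pats, "patterns.csv", pretty=True)
>>> classes = patterns.find_applicable(paradigms, pat_dict)
>>> pats_again = patterns.from_csv("patterns.csv")
```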
###### representations.quantity module

author: <NAME>.

This module provides Quantity objects to represent quantifiers.

*class* `representations.quantity.Quantity(mini, maxi)`

Bases: object

Represents a quantifier as an interval. This is a flyweight class, and the presets are:

| description | mini | maxi | regex symbol | variable name |
| --- | --- | --- | --- | --- |
| Match one | 1 | 1 | | quantity.one |
| Optional | 0 | 1 | ? | quantity.optional |
| Some | 1 | inf | + | quantity.some |
| Any | 0 | inf | * | quantity.kleenestar |
| None | 0 | 0 | | |

`__init__(mini, maxi)`

Parameters:
* **mini** (int) – the minimum number of elements matched.
* **maxi** (int) – the maximum number of elements matched.

`representations.quantity.quantity_largest(args)`

Reduce on the "&" operator of quantities. Returns a quantity with the minimum left value and the maximum right value.

Example

```
>>> quantity_largest([Quantity(0,1), Quantity(1,1), Quantity(1,np.inf)])
Quantity(0,np.inf)
```

Argument:
args: an iterable of quantities.

`representations.quantity.quantity_sum(args)`

Reduce on the "+" operator of quantities. Returns a quantity with the minimum left value and the sum of the right values.

Example

```
>>> quantity_sum([Quantity(0,1), Quantity(1,1), Quantity(0,0)])
Quantity(0,2)
```

Argument:
args: an iterable of quantities.
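To make the presets in the table above concrete, here is a rough reconstruction of their definitions from the documented (mini, maxi) intervals (a sketch, not the library's source):

```
>>> import numpy as np
>>> from representations.quantity import Quantity
>>> one = Quantity(1, 1)              # match exactly one element
>>> optional = Quantity(0, 1)         # regex ?
>>> some = Quantity(1, np.inf)        # regex +
>>> kleenestar = Quantity(0, np.inf)  # regex *
```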
###### representations.segments module

author: <NAME>.

This module addresses the modeling of phonological segments.

*class* `representations.segments.Segment(classes, features, alias, chars, shorthand=None)`

Bases: object

The Segments.Segment class holds the definition of a single segment. This is a lightweight class.

Variables:
* **name** (str or _CharClass) – name of the segment.
* **features** (frozenset of tuples) – the tuples are of the form (attribute, value) with a positive value, used for set operations.

`__init__(classes, features, alias, chars, shorthand=None)`

Constructor for Segments.

*classmethod* `get(descriptor)`

Get a Segment from an alias.

*classmethod* `get_from_transform(a, transform)`

Get a segment from another according to a transformation tuple. In the following example, the segments have been initialized with French segment definitions.

Parameters:
* **a** (str) – segment alias.
* **transform** (tuple) – couple of two strings of segment aliases.

Example

```
>>> Segment.get_from_transform("d", ("bdpt", "fsvz"))
'z'
```

*classmethod* `get_transform_features(left, right)`

Get the features corresponding to a transformation.

Parameters:
* **left** (tuple) – string of segment aliases.
* **right** (tuple) – string of segment aliases.

Example

```
>>> Segment.get_transform_features("bd", "pt")
{'+vois'}, {'-vois'}
```

*classmethod* `init_dissimilarity_matrix(gap_prop=0.24, **kwargs)`

Compute the score matrix with dissimilarity scores.

*classmethod* `insert_cost(*_)`

*classmethod* `intersect(*args)`

Intersect some segments from their names/aliases. This is the "meet" operation on the lattice nodes, and it returns the lowest common ancestor.

Returns: a str or _CharClass representing the segment whose classes are the intersection of the input.

*classmethod* `set_max()`

Set a variable to the top of the natural classes lattice.

*classmethod* `show_pool(only_single=False)`

Return a string description of the whole segment pool.

`similarity`

Compute phonological similarity (Frisch, 2004). The function is memoized.

Measure from "Similarity avoidance and the OCP", <NAME>.; <NAME>. & Broe, <NAME>, *Natural Language & Linguistic Theory*, Springer, 2004, 22, 179-228, p. 198:

> We compute similarity by comparing the number of shared and unshared natural classes of two consonants, using the equation in (7). This equation is a direct extension of the Pierrehumbert (1993) feature similarity metric to the case of natural classes.

\(Similarity = \frac{\text{Shared natural classes}}{\text{Shared natural classes} + \text{Non-shared natural classes}}\)

*classmethod* `sub_cost(a, b)`

*classmethod* `transformation(a, b)`

Find a transformation between aliases a and b. The transformation is a pair of two maximal sets of segments related by a bijective phonological function. This function takes a pair of strings representing segments, computes the function which relates these two segments, and then finds the two maximal sets of segments related by this function.
Example

In French, t -> s can be expressed by a phonological function which changes [-cont] and [-rel. ret] to [+cont] and [+rel. ret]. These other segments are related by the same change:

* d -> z
* b -> v
* p -> f

```
>>> a, b = Segment.transformation("t", "s")
>>> print(a, b)
[bdpt] [fsvz]
```

Parameters:
* **a, b** (str) – segment aliases.

Returns: two charclasses.

`representations.segments.initialize(filename, sep='\t', verbose=False)`

`representations.segments.make_aliases(ipa)`

Associate a single symbol with each segment written with several characters, and return the restoration map. This function takes a segments table and replaces the entries of the "Segs." column with a unique character for each multi-character cell. A dict is returned that allows the original segment names to be restored.

| Input Segs. | Output Segs. |
| --- | --- |
| ɑ̃ | â |
| a | a |

The table can have an optional UNICODE column, which will be dropped at the end of the process.

Parameters:
* **ipa** (pandas.DataFrame) – dataframe of segments. Columns are features and indexes are segments. A UNICODE column can specify alternate characters.

Returns: a map from the simplified names to the original segment names.

Return type: alias_map (dict)

`representations.segments.normalize(ipa, features)`

Assign a normalized segment to groups of segments with identical rows. This function takes a segments table and adds a "Normalized" column **in place**. This column contains a common value for each group of segments with identical boolean values. The function also returns a translation table mapping indexes to normalized segments. Note: the indexes are expected to be one character long.

| Index | ..features.. | Normalized |
| --- | --- | --- |
| ɛ | […] | E |
| e | […] | E |

Parameters:
* **ipa** (pandas.DataFrame) – dataframe of segments. Columns are features, the UNICODE code point representation, and segment names; indexes are segments.
* **features** (list) – the feature columns' names.

Returns: a translation table from each segment's name to its normalized name.

Return type: norm_map (dict)

`representations.segments.restore(char)`

Restore the original string from an alias.

`representations.segments.restore_segment_shortest(segment)`

Restore a segment to the shortest of either the original character or its feature list.

`representations.segments.restore_string(string)`

Restore the original string from a string of aliases.
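A rough sketch of the alias round trip described above (the segments table `ipa` and the example strings are hypothetical; the ɑ̃/â pair comes from the table):

```
>>> alias_map = make_aliases(ipa)   # 'ɑ̃' is now written 'â' in the table
>>> restore("â")
'ɑ̃'
>>> restore_string("âbl")           # aliases restored throughout a string
'ɑ̃bl'
```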
###### representations.utils module

author: <NAME>.

Utility functions for representations.

`representations.utils.create_features(data_file_name)`

Read features and preprocess them to be co-indexed with the paradigms.

`representations.utils.create_paradigms(data_file_name, cols=None, verbose=False, fillna=True, segcheck=False, merge_duplicates=False, defective=False, overabundant=False, merge_cols=False)`

Read paradigms data and prepare it according to a Segment class pool. All characters occurring in the paradigms, except in the first column, should be inventoried in this class.

Parameters:
* **data_file_name** (str) – path to the paradigm csv file.
* **cols** (list of str) – a subset of columns to use from the paradigm file.
* **verbose** (bool) – verbosity switch.
* **merge_duplicates** (bool) – should identical columns be merged?
* **fillna** (bool) – defaults to True. Should #DEF# be replaced by np.NaN? Otherwise they are filled with empty strings ("").
* **segcheck** (bool) – defaults to False. Should I check that all the phonological segments in the table are defined in the segments table?
* **defective** (bool) – defaults to False. Should I keep rows with defective forms?
* **overabundant** (bool) – defaults to False. Should I keep rows with overabundant forms?
* **merge_cols** (bool) – defaults to False. Should I merge identical (fully syncretic) columns?

Returns: paradigms table (columns are cells, index are lemmas).

Return type: paradigms (pandas.DataFrame)

`representations.utils.normalize_dataframe(paradigms, aliases, normalization, verbose=False)`

Normalize and simplify a dataframe.

**Aliases**: each sequence of n characters representing a segment is replaced with a unique character representing this segment.

**Normalization**: each group of characters representing the same feature set is translated to one unique character.

**Note**: a .translate strategy works for normalization but not for aliases, since it only maps single characters to single characters. The order of operations is important, since .translate assumes a 1:1 mapping of characters.

Parameters:
* **paradigms** (pandas.DataFrame) – paradigms table (columns are cells, index are lemmas).
* **aliases** (dict) – dictionary mapping segments (as found in the paradigms) to their aliased versions (one character long).
* **normalization** (dict) – dictionary mapping one aliased character to another, to replace segments which have the same feature set.
* **verbose** (bool) – verbosity switch.

Returns: the same dataframe, normalized and simplified.

Return type: new_df (pandas.DataFrame)

`representations.utils.unique_lexemes(series)`

Rename duplicates in a series of strings. Take a pandas series of strings and output another series where all originally duplicate strings are given numbers, so that each cell contains a unique string.

###### Module contents

##### utils package

###### Module contents

`utils.get_repository_version()`

Return an ID for the current git or svn revision. If the directory isn't under git or svn, the function returns an empty str.

Returns: (str): svn/git version or ''.

`utils.merge_duplicate_columns(df, sep=';', keep_names=True)`

Merge duplicate columns and return a new DataFrame.

Parameters:
* **df** (pandas.DataFrame) – a dataframe.
* **sep** (str) – separator to use when joining column names.
* **keep_names** (bool) – whether to keep the names of the original duplicated columns by merging them onto the columns we keep.
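Finally, a minimal usage sketch combining these utilities (the csv path is hypothetical):

```
>>> from representations.utils import create_paradigms
>>> from utils import get_repository_version, merge_duplicate_columns
>>> paradigms = create_paradigms("paradigms.csv", segcheck=True)
>>> paradigms = merge_duplicate_columns(paradigms, sep=";")
>>> get_repository_version()   # '' when not under git or svn
```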
Analyzing and Visualizing Data with F#

<NAME>

Analyzing and Visualizing Data with F#
by <NAME>

Copyright 2016 O'Reilly Media, Inc. All rights reserved. Printed in the United States of America. Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://safaribooksonline.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or <EMAIL>.

Editor: <NAME>
Production Editor: <NAME>
Copyeditor: <NAME>
Proofreader: <NAME>
Interior Designer: <NAME>
Cover Designer: <NAME>
Illustrator: Rebecca Demarest

October 2015: First Edition

Revision History for the First Edition
2015-10-15: First Release

While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.

978-1-491-93953-6
[LSI]

Table of Contents

Acknowledgements

1. Accessing Data with Type Providers
   Data Science Workflow
   Why Choose F# for Data Science?
   Getting Data from the World Bank
   Calling the Open Weather Map REST API
   Plotting Temperatures Around the World
   Conclusions

2. Analyzing Data Using F# and Deedle
   Downloading Data Using an XML Provider
   Visualizing CO2 Emissions Change
   Aligning and Summarizing Data with Frames
   Summarizing Data Using the R Provider
   Normalizing the World Data Set
   Conclusions

3. Implementing Machine Learning Algorithms
   How k-Means Clustering Works
   Clustering 2D Points
   Initializing Centroids and Clusters
   Updating Clusters Recursively
   Writing a Reusable Clustering Function
   Clustering Countries
   Scaling to the Cloud with MBrace
   Conclusions

4. Conclusions and Next Steps
   Adding F# to Your Project
   Resources for Learning More

Acknowledgements

This report would never exist without the amazing F# open source community that creates and maintains many of the libraries used in the report. It is impossible to list all the contributors, but let me say thanks to <NAME>, <NAME>, and <NAME> for their work on F# Data, the R type provider, and XPlot, and to <NAME> for his work on the projects that power much of the F# open source infrastructure. Many thanks to the companies that support the F# projects, including Microsoft and BlueMountain Capital. I would also like to thank <NAME>, who wrote many great examples using F# for machine learning and whose blog post about clustering with F# inspired the example in Chapter 4. Last but not least, I'm thankful to <NAME>, <NAME> from O'Reilly, and the technical reviewers for useful feedback on early drafts of the report.

CHAPTER 1
Accessing Data with Type Providers

Working with data was not always as easy as it is nowadays. For example, processing the data from the decennial 1880 US Census took eight years.
For the 1890 census, the United States Census Bureau hired <NAME>, who invented a number of devices to automate the process. A pantograph punch was used to punch the data on punch cards, which were then fed to the tabulator that counted cards with certain properties, or to the sorter for filtering. The census still required a large amount of clerical work, but Hollerith's machines sped up the process eight times, to just one year.1

These days, filtering and calculating sums over hundreds of millions of rows (the number of forms received in the 2010 US Census) can take seconds. Much of the data from the US Census, various Open Government Data initiatives, and international organizations like the World Bank is available online and can be analyzed by anyone. Hollerith's tabulator and sorter have become standard library functions in many programming languages and data analytics libraries.

1 Hollerith's company later merged with three other companies to form a company that was renamed International Business Machines Corporation (IBM) in 1924. You can find more about Hollerith's machines in Mark Priestley's excellent book, A Science of Operations (Springer).

Making data analytics easier no longer involves building new physical devices, but instead involves creating better software tools and programming languages. So, let's see how the F# language and its unique features like type providers make the task of modern data analysis even easier!

Data Science Workflow

Data science is an umbrella term for a wide range of fields and disciplines that are needed to extract knowledge from data. The typical data science workflow is an iterative process. You start with an initial idea or research question, get some data, do a quick analysis, and make a visualization to show the results. This shapes your original idea, so you can go back and adapt your code. On the technical side, the three steps include a number of activities:

Accessing data. The first step involves connecting to various data sources, downloading CSV files, or calling REST services. Then we need to combine data from different sources, align the data correctly, clean possible errors, and fill in missing values.

Analyzing data. Once we have the data, we can calculate basic statistics about it, run machine learning algorithms, or write our own algorithms that help us explain what the data means.

Visualizing data. Finally, we need to present the results. We may build a chart, create an interactive visualization that can be published, or write a report that represents the results of our analysis.

If you ask any data scientist, she'll tell you that accessing data is the most frustrating part of the workflow. You need to download CSV files, figure out what columns contain what values, then determine how missing values are represented and parse them. When calling REST-based services, you need to understand the structure of the returned JSON and extract the values you care about. As you'll see in this chapter, the data access part is largely simplified in F# thanks to type providers that integrate external data sources directly into the language.

Why Choose F# for Data Science?

There are a lot of languages and tools that can be used for data science. Why should you choose F#? A two-word answer to the question is type providers. However, there are other reasons. You'll see all of them in this report, but here is a quick summary:

Data access.
With type providers, you'll never need to look up column names in CSV files or country codes again. Type providers can be used with many common formats like CSV, JSON, and XML, but they can also be built for a specific data source like Wikipedia. You will see type providers in this and the next chapter.

Correctness. As a functional-first language, F# is excellent at expressing algorithms and solving complex problems in areas like machine learning. As you'll see in Chapter 3, the F# type system not only prevents bugs, but also helps us understand our code.

Efficiency and scaling. F# combines the simplicity of Python with the efficiency of a JIT-based compiled language, so you do not have to call external libraries to write fast code. You can also run F# code in the cloud with the MBrace project. We won't go into details, but I'll show you the idea in Chapter 3.

Integration. In Chapter 4, we see how type providers let us easily call functions from R (a statistical software with rich libraries). F# can also integrate with other ecosystems. You get access to a large number of .NET and Mono libraries, and you can easily interoperate with FORTRAN and C.

Enough talking, let's look at some code! To set the theme for this chapter, let's look at the forecasted temperatures around the world. To do this, we combine data from two sources. We use the World Bank2 to access information about countries, and we use the Open Weather Map3 to get the forecasted temperature in all the capitals of all the countries in the world.

2 The World Bank is an international organization that provides loans to developing countries. To do so effectively, it also collects large numbers of development and financial indicators that are available through a REST API at http://data.worldbank.org/.

3 See http://openweathermap.org/.

Getting Data from the World Bank

To access information about countries, we use the World Bank type provider. This is a type provider for a specific data source that makes accessing data as easy as possible, and it is a good example to start with. Even if you do not need to access data from the World Bank, this is worth exploring because it shows how simple F# data access can be. If you frequently work with another data source, you can create your own type provider and get the same level of simplicity.

The World Bank type provider is available as part of the F# Data library.4 We could start by referencing just F# Data, but we will also need a charting library later, so it is better to start by referencing FsLab, which is a collection of .NET and F# data science libraries. The easiest way to get started is to download the FsLab basic template from http://fslab.org/download.

The FsLab template comes with a sample script file (a file with the .fsx extension) and a project file. To download the dependencies, you can either build the project in Visual Studio or Xamarin Studio, or you can invoke the Paket package manager directly. To do this, run the Paket bootstrapper to download Paket itself, and then invoke Paket to install the packages (on Windows, drop the mono prefix):

mono .paket\paket.bootstrapper.exe
mono .paket\paket.exe install

NuGet Packages and Paket

In the F# ecosystem, most packages are available from the NuGet gallery. NuGet is also the name of the most common package manager that comes with typical .NET distributions. However, the FsLab templates use an alternative called Paket instead.
Paket has a number of benefits that make it easier to use with data science projects in F#. It uses a single paket.lock file to keep the version numbers of all packages (making updates to new versions easier), and it does not put the version number in the name of the folder that contains the packages. This works nicely with F# and the #load command, as you can see in the snippet below.

4 See http://fslab.org/FSharp.Data.

Once you have all the packages, you can replace the sample script file with the following simple code snippet:

#load "packages/FsLab/FsLab.fsx"
open FSharp.Data

let wb = WorldBankData.GetDataContext()

The first line loads the FsLab.fsx file, which comes from the FsLab package, and loads all the libraries that are a part of FsLab, so you do not have to reference them one by one. The last line uses GetDataContext to create an instance that we'll need in the next step to fetch some data.

The next step is to use the World Bank type provider to get some data. Assuming everything is set up in your editor, you should be able to type wb.Countries followed by . (a period) and get auto-completion on the country names, as shown in Figure 1-1. This is not magic! The country names are just ordinary properties. The trick is that they are generated on the fly by the type provider based on the schema retrieved from the World Bank.

Figure 1-1. Atom editor providing auto-completion on countries
If you prefer a full IDE, you can use Visual Studio (including the free edition) on Windows, or MonoDevelop (a free version of Xamarin Studio) on Mac, Linux, or Windows. For more 6 | Chapter 1: Accessing Data with Type Providers information about getting started with F# and up-to-date editor information, see the Use pages on http://fsharp.org. The typical data science workflow requires a quick feedback loop. In F#, you get this by using F# Interactive, which is the F# REPL. In most F# editors, you can select a part of the source code and press Alt+Enter (or Ctrl+Enter) to evaluate it in F# Interactive and see the results immediately. The one thing to be careful about is that you need to load all depen dencies first, so in this example, you first need to evaluate the con tents of the first snippet (with #load, open, and let wb = ...), and then you can evaluate the two commands from the above snippets to see the results. Now, lets see how we can combine the World Bank data with another data source. Calling the Open Weather Map REST API For most data sources, because F# does not have a specialized type provider like for the World Bank, we need to call a REST API that returns data as JSON or XML. Working with JSON or XML data in most statically typed languages is not very elegant. You either have to access fields by name and write obj.GetField<int>("id"), or you have to define a class that corresponds to the JSON object and then use a reflection-based library that loads data into that class. In any case, there is a lot of boilerplate code involved! Dynamically typed languages like JavaScript just let you write obj.id, but the downside is that you lose all compile-time checking. Is it possible to get the simplicity of dynamically typed languages, but with the static checking of statically typed languages? As youll see in this section, the answer is yes! To get the weather forecast, well use the Open Weather Map service. It provides a daily weather forecast endpoint that returns weather information based on a city name. For example, if we request http:// api.openweathermap.org/data/2.5/forecast/daily?q=Cambridge, we get a JSON document that contains the following information. I omitted some of the information and included the forecast just for two days, but it shows the structure: Calling the Open Weather Map REST API | 7 { "city": { "id": 2653941, "name": "Cambridge", "coord": { "lon": 0.11667, "lat": 52.200001 }, "country": "GB" }, "list": [ { "dt": 1439380800, "temp": { "min": 14.12, "max": 15.04 } }, { "dt": 1439467200, "temp": { "min": 15.71, "max": 22.44 } } ] } As mentioned before, we could parse the JSON and then write something like json.GetField("list").AsList() to access the list with temperatures, but we can do much better than that with type providers. The F# Data library comes with JsonProvider, which is a parame terized type provider that takes a sample JSON. It infers the type of the sample document and generates a type that can be used for working with documents that have the same structure. The sample can be specified as a URL, so we can get a type for calling the weather forecast endpoint as follows: type Weather = JsonProvider<"http://api.openweathermap .org/data/2.5/forecast/daily?units=metric&q=Prague"> Because of the width limitations, we have to split the URL into multiple lines in the report. This wont actually work, so make sure to keep the sample URL on a single line when typing the code! The parameter of a type provider has to be a constant. 
In order to generate the Weather type, the F# compiler needs to be able to get the value of the parameter at compile-time without running any code. This is also the reason why we are not allowed to use string concatenation with a + here, because that would be an expression, albeit a simple one, rather than a constant. Now that we have the Weather type, lets see how we can use it: let w = Weather.GetSample() printfn "%s" w.City.Country for day in w.List do printfn "%f" day.Temp.Max The first line calls the GetSample method to obtain the forecast using the sample URLin our case, the temperature in Prague in 8 | Chapter 1: Accessing Data with Type Providers metric units. We then use the F# printfn function to output the country (just to check that we got the correct city!) and a for loop to iterate over the seven days that the forecast service returns. As with the World Bank type provider, you get auto-completion when accessing. For example, if you type day.Temp and ., you will see that the service the returns forecasted temperature for morning, day, evening, and night, as well as maximal and minimal tempera tures during the day. This is because Weather is a type provided based on the sample JSON document that we specified. When you use the JSON type provider to call a RESTbased service, you do not even need to look at the doc umentation or sample response. The type provider brings this directly into your editor. In this example, we use GetSample to request the weather forecast based on the sample URL, which has to be constant. But we can also use the Weather type to get data for other cities. The following snip pet defines a getTomorrowTemp function that returns the maximal temperature for tomorrow: let baseUrl = "http://api.openweathermap.org/data/2.5" let forecastUrl = baseUrl + "/forecast/daily?units=metric&q=" let getTomorrowTemp place = let w = Weather.Load(forecastUrl + place) let tomorrow = Seq.head w.List tomorrow.Temp.Max getTomorrowTemp "Prague" getTomorrowTemp "Cambridge,UK" The Open Weather Map returns the JSON document with the same structure for all cities. This means that we can use the Load method to load data from a different URL, because it will still have the same properties. Once we have the document, we call Seq.head to get the forecast for the first day in the list. As mentioned before, F# is statically typed, but we did not have to write any type annotations for the getTomorrowTemp function. Thats because the F# compiler is smart enough to infer that place has to be a string (because we are appending it to another string) and that Calling the Open Weather Map REST API | 9 the result is float (because the type provider infers that based on the values for the max field in the sample JSON document). A common question is, what happens when the schema of the returned JSON changes? For example, what if the service stops returning the Max temperature as part of the forecast? If you specify the sample via a live URL (like we did here), then your code will no longer compile. The JSON type provider will generate type based on the response returned by the latest version of the API, and the type will not expose the Max member. This is a good thing though, because we will catch the error during development and not later at runtime. If you use type providers in a compiled and deployed code and the schema changes, then the behavior is the same as with any other data access technologyyoull get a runtime exception that you have to handle. 
Finally, it is worth noting that you can also pass a local file as a sample, which is useful when youre working offline. Plotting Temperatures Around the World Now that weve seen how to use the World Bank type provider to get information about countries and the JSON type provider to get the weather forecast, we can combine the two and visualize the temper atures around the world! To do this, we iterate over all the countries in the world and call getTomorrowTemp to get the maximal temperature in the capital cit ies: let worldTemps = [ for c in wb.Countries -> let place = c.CapitalCity + "," + c.Name printfn "Getting temperature in: %s" place c.Name, getTomorrowTemp place ] If you are new to F#, there is a number of new constructs in this snippet: [ for .. in .. -> .. ] is a list expression that generates a list of values. For every item in the input sequence wb.Countries, we return one element of the resulting list. 10 | Chapter 1: Accessing Data with Type Providers c.Name, getTomorrowTemp place creates a pair with two ele ments. The first is the name of the country and the second is the temperature in the capital. We use printf in the list expression to print the place that we are processing. Downloading all data takes a bit of time, so this is useful for tracking progress. To better understand the code, you can look at the type of the world Temps value that we are defining. This is printed in F# Interactive when you run the code, and most F# editors also show a tooltip when you place the mouse pointer over the identifier. The type of the value is (string * float) list, which means that we get a list of pairs with two elements: the first is a string (country name) and the second is a floating-point number (temperature).5 After you run the code and download the temperatures, youre ready to plot the temperatures on a map. To do this, we use the XPlot library, which is a lightweight F# wrapper for Google Charts: open XPlot.GoogleCharts Chart.Geo(worldTemps) The Chart.Geo function expects a collection of pairs where the first element is a country name or country code and the second element is the value, so we can directly call this with worldTemps as an argu ment. When you select the second line and run it in F# Interactive, XPlot creates the chart and opens it in your default web browser. To make the chart nicer, well need to use the F# pipeline operator |>. The operator lets you use the fluent programming style when applying a chain of operations or transformations. Rather than call ing Chart.Geo with worldTemps as an argument, we can get the data and pass it to the charting function as worldTemps |> Chart.Geo. Under the cover, the |> operator is very simple. It takes a value on the left, a function on the right, and calls the function with the value as an argument. So, v |> f is just shorthand for f v. This becomes more useful when we need to apply a number of operations, because we can write g (f v) as v |> f |> g. 5 If you are coming from a C# background, you can also read this as List<Tuple<string, float>>. Plotting Temperatures Around the World | 11 The following snippet creates a ColorAxis object to specify how to map temperatures to colors (for more information on the options, see the XPlot documentation). Note that XPlot accepts parameters as .NET arrays, so we use the notation [| .. |] rather than using a plain list expression written as [ .. 
]: let colors = [| "#80E000";"#E0C000";"#E07B00";"#E02800" |] let values = [| 0;+15;+30;+45 |] let axis = ColorAxis(values=values, colors=colors) worldTemps |> Chart.Geo |> Chart.WithOptions(Options(colorAxis=axis)) |> Chart.WithLabel "Temp" The Chart.Geo function returns a chart object. The various Chart.With functions then transform the chart object. We use With Options to set the color axis and WithLabel to specify the label for the values. Thanks to the static typing, you can explore the various available options using code completion in your editor. Figure 1-2. Forecasted temperatures for tomorrow with label and cus tom color scale The resulting chart should look like the one in Figure 1-2. Just be careful, if you are running the code in the winter, you might need to tweak the scale! 12 | Chapter 1: Accessing Data with Type Providers Conclusions The example in this chapter focused on the access part of the data science workflow. In most languages, this is typically the most frus trating part of the access, analyze, visualize loop. In F#, type provid ers come to the rescue! As you could see in this chapter, type providers make data access simpler in a number of ways. Type providers integrate external data sources directly into the language, and you can explore external data inside your editor. You could see this with the specialized World Bank type provider (where you can choose countries and indicators in the completion list), and also with the general-purpose JSON type provider (which maps JSON object fields into F# types). However, type providers are not useful only for data access. As well see in the next chapter, they can also be useful for calling external non-F# libraries. To build the visualization in this chapter, we needed to write just a couple of lines of F# code. In the next chapter, we download larger amounts of data using the World Bank REST service and preprocess it to get ready for the simple clustering algorithm implemented in Chapter 3. Conclusions | 13 CHAPTER 2 Analyzing Data Using F# and Deedle In the previous chapter, we carefully picked a straightforward exam ple that does not require too much data preprocessing and too much fiddling to find an interesting visualization to build. Life is typically not that easy, so this chapter looks at a more realistic case study. Along the way, we will add one more library to our toolbox. We will look at Deedle,1 which is a .NET library for data and time series manipulation that is great for interactive data exploration, data alignment, and handling missing values. In this chapter, we download a number of interesting indicators about countries of the world from the World Bank, but we do so efficiently by calling the REST service directly using an XML type provider. We align multiple data sets, fill missing values, and build two visualizations looking at CO2 emissions and the correlation between GDP and life expectancy. Well use the two libraries covered in the previous chapter (F# Data and XPlot) together with Deedle. If youre referencing the libraries using the FsLab package as before, youll need the following open declarations: #r "System.Xml.Linq.dll" #load "packages/FsLab/FsLab.fsx" 1 See http://fslab.org/Deedle/. 15 open Deedle open FSharp.Data open XPlot.GoogleCharts open XPlot.GoogleCharts.Deedle There are two new things here. First, we need to reference the System.Xml.Linq library, which is required by the XML type pro vider. 
Next, we open the Deedle namespace together with extensions that let us pass data from the Deedle series directly to XPlot for visu alization. Downloading Data Using an XML Provider Using the World Bank type provider, we can easily access data for a specific indicator and country over all years. However, here we are interested in an indicator for a specific year, but over all countries. We could download this from the World Bank type provider too, but to make the download more efficient, we can use the underlying API directly and get data for all countries with just a single request. This is also a good opportunity to look at how the XML type pro vider works. As with the JSON type provider, we give the XML type provider a sample URL. You can find more information about this query in the World Bank API documentation. The code NY.GDP.PCAP.CD is a sample indicator returning GDP growth per capita: type WorldData = XmlProvider<"http://api.worldbank .org/countries/indicators/NY.GDP.PCAP.CD?date=2010:2010"> As in the last chapter, we had to split this into two lines, but you should have the sample URL on a single line in your source code. You can now call WorldData.GetSample() to download the data from the sample URL, but with type providers, you dont even need to do that. You can start using the generated type to see what mem bers are available and find the data in your F# editor. In the last chapter, we loaded data into a list of type (string*float) list. This is a list of pairs that can also be written as list<string*float>. In the following example, we create a Deedle series Series<string, float>. The series type is parameterized by the type of keys and the type of values, and builds an index based on the keys. As well see later, this can be used to align data from multi ple series. 16 | Chapter 2: Analyzing Data Using F# and Deedle We write a function getData that takes a year and an indicator code, then downloads and parses the XML response. Processing the data is similar to the JSON type provider example from the previous chapter: let indUrl = "http://api.worldbank.org/countries/indicators/" let getData year indicator = let query = [("per_page","1000"); ("date",sprintf "%d:%d" year year)] let data = Http.RequestString(indUrl + indicator, query) let xml = WorldData.Parse(data) let orNaN value = defaultArg (Option.map float value) nan series [ for d in xml.Datas -> d.Country.Value, orNaN d.Value ] To call the service, we need to provide the per_page and date query parameters. Those are specified as a list of pairs. The first parameter has a constant value of "1000". The second parameter needs to be a date range written as "2015:2015", so we use sprintf to format the string. The function then downloads the data using the Http.Request String helper which takes the URL and a list of query parameters. Then we use WorldData.Parse to read the data using our provided type. We could also use WorkldData.Load, but by using the Http helper we do not have to concatenate the URL by hand (the helper is also useful if you need to specify an HTTP method or provide HTTP headers). Next we define a helper function orNaN. This deserves some explan ation. The type provider correctly infers that data for some countries may be missing and gives us option<decimal> as the value. This is a high-precision decimal number wrapped in an option to indicate that it may be missing. For convenience, we want to treat missing values as nan. 
To do this, we first convert the value into float (if it is available) using Option.map float value. Then we use defaultArg to return either the value (if it is available) or nan (if it is not avail able). Finally, the last line creates a series with country names as keys and the World Bank data as values. This is similar to what we did in the Downloading Data Using an XML Provider | 17 last chapter. The list expression creates a list with tuples, which is then passed to the series function to create a Deedle series. The two examples of using the JSON and XML type providers demonstrate the general pattern. When accessing data, you just need a sample document, and then you can use the type providers to load different data in the same format. This approach works well for any REST-based service, and it means that you do not need to study the response in much detail. Aside from XML and JSON, you can also access CSV files in the same way using CsvProvider. Visualizing CO2 Emissions Change Now that we can load an indicator for all countries into a series, we can use it to explore the World Bank data. As a quick example, lets see how the CO2 emissions have been changing over the last 10 years. We can still use the World Bank type provider to get the indi cator code instead of looking up the code on the World Bank web page: let wb = WorldBankData.GetDataContext() let inds = wb.Countries.World.Indicators let code = inds.``CO2 emissions (kt)``.IndicatorCode let co2000 = getData 2000 code let co2010 = getData 2010 code At the beginning of the chapter, we opened Deedle extensions for XPlot. Now you can directly pass co2000 or co2010 to Chart.Geo and write, for example, Chart.Geo(co2010) to display the total car bon emissions of countries across the world. This shows the expected results (with China and the US being the largest polluters). More interesting numbers appear when we calculate the relative change over the last 10 years: let change = (co2010 - co2000) / co2000 * 100.0 The snippet calculates the difference, divides it by the 2000 values to get a relative change, and multiplies the result by 100 to get a per centage. But the whole calculation is done over a series rather than over individual values! This is possible because a Deedle series sup ports numerical operators and automatically aligns data based on the keys (so, if we got the countries in a different order, it will still work). The operations also propagate missing values correctly. If the 18 | Chapter 2: Analyzing Data Using F# and Deedle value for one of the years is missing, it will be marked as missing in the resulting series, too. As before, you can call Chart.Geo(change) to produce a map with the changes. If you tweak the color scale as we did in the last chap ter, youll get a visualization similar to the one in Figure 2-1 (you can get the complete source code from http://fslab.org/report). Figure 2-1. Change in CO2 emissions between 2000 and 2010 As you can see in Figure 2-1, we got data for most countries of the world, but not for all of them. The range of the values is between -70% to +1200%, but emissions in most countries are growing more slowly. To see this, we specify a green color for -10%, yellow for 0%, orange for +100, red for +200%, and very dark red for +1200%. In this example, we used Deedle to align two series with country names as indices. This kind of operation is useful all the time when combining data from multiple sources, no matter whether your keys are product IDs, email addresses, or stock tickers. 
If you're working with a time series, Deedle offers even more. For example, for every key from one time series, you can find a value from another series whose key is the closest to the time of the value in the first series. You can find a detailed overview in the Deedle page about working with time series.

Aligning and Summarizing Data with Frames

The getData function that we wrote in the previous section is a perfect starting point for loading more indicators about the world. We'll do exactly this as the next step, and we'll also look at simple ways to summarize the obtained data.

Downloading more data is easy now. We just need to pick a number of indicators that we are interested in from the World Bank type provider and call getData for each indicator. We download all data for 2010 below, but feel free to experiment and choose different indicators and different years:

    let codes =
      [ "CO2",    inds.``CO2 emissions (metric tons per capita)``
        "Univ",   inds.``School enrollment, tertiary (% gross)``
        "Life",   inds.``Life expectancy at birth, total (years)``
        "Growth", inds.``GDP per capita growth (annual %)``
        "Pop",    inds.``Population growth (annual %)``
        "GDP",    inds.``GDP per capita (current US$)`` ]

    let world =
      frame [ for name, ind in codes ->
                name, getData 2010 ind.IndicatorCode ]

The code snippet defines a list with pairs consisting of a short indicator name and the code from the World Bank. You can run it and see what the codes look like: choosing an indicator from an autocomplete list is much easier than finding it in the API documentation!

The last line does all the actual work. It creates a list of key-value pairs using a sequence expression [ ... ], but this time, the value is a series with data for all countries. So, we create a list with an indicator name and data series. This is then passed to the frame function, which creates a data frame.

A data frame is a Deedle data structure that stores multiple series. You can think of it as a table with multiple columns and rows (similar to a data table or spreadsheet). When creating a data frame, Deedle again makes sure that the values are correctly aligned based on their keys.

Table 2-1. Data frame with information about the world

                   CO2    Univ    Life    Growth   Pop     GDP
    Afghanistan    0.30   N/A     59.60   5.80     2.46    561.20
    Albania        1.52   43.56   76.98   4.22    -0.49    4094.36
    Algeria        3.22   28.76   70.62   1.70     1.85    4349.57
    ...
    Yemen, Rep.    1.13   10.87   62.53   0.90     2.37    1357.76
    Zambia         0.20   N/A     54.53   7.03     3.01    1533.30
    Zimbabwe       0.69   6.21    53.59   9.77     1.45    723.16

Data frames are useful for interactive data exploration. When you create a data frame, F# Interactive formats it nicely so you can get a quick idea about the data. For example, in Table 2-1 you can see the ranges of the values and which values are frequently missing.

Data frames are also useful for interoperability. You can easily save data frames to CSV files. If you want to use F# for data access and cleanup, but then load the data in another language or tool such as R, Mathematica, or Python, data frames give you an easy way to do that. However, if you are interested in calling R, this is even easier with the F# R type provider.
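As a brief illustration of the CSV interoperability just mentioned, the following one-liner is a sketch of saving the frame to a file (SaveCsv is a Deedle frame extension; the includeRowKeys parameter, which keeps the country names as the first column, is an assumption worth checking against the Deedle documentation):

    // Export the 2010 world data so it can be loaded from R or Python;
    // the file name here is made up for the example.
    world.SaveCsv("world-2010.csv", includeRowKeys=true)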
Summarizing Data Using the R Provider

When using F# for data analytics, you can access a number of useful libraries: Math.NET Numerics for statistical and numerical computing, Accord.NET for machine learning, and others. However, F# can also integrate with libraries from other ecosystems. We already saw this with XPlot, which is an F# wrapper for the Google Charts visualization library. Another good example is the R type provider (see http://fslab.org/RProvider).

The R Project and R Type Provider

R is a popular programming language and software environment for statistical computing. One of the main reasons for the popularity of R is its comprehensive archive of statistical packages (CRAN), providing libraries for advanced charting, statistics, machine learning, financial computing, bioinformatics, and more. The R type provider makes these packages available to F#.

The R type provider is cross-platform, but it requires a 64-bit version of Mono on Mac and Linux. The documentation explains the required setup in detail. Also, the R provider uses your local installation of R, so you need to have R on your machine in order to use it! You can get R from http://www.r-project.org.

In R, functionality is organized as functions in packages. The R type provider discovers R packages that are installed on your machine and makes them available as F# modules. R functions then become F# functions that you can call. As with type providers for accessing data, the modules and functions become normal F# entities, and you can discover them through auto-complete.

The R type provider is also included in the FsLab package, so no additional installation is needed. If you have R installed, you can run the plot function from the graphics package to get a quick visualization of correlations in the world data frame:

    open RProvider
    open RProvider.graphics

    R.plot(world)

If you are typing the code in your editor, you can use autocompletion in two places. First, after typing RProvider and . (dot), you can see a list with all available packages. Second, after typing R and . (dot), you can see functions in all the packages you opened.

Also note that we are calling the R function with a Deedle data frame as an argument. This is possible because the R provider knows how to convert Deedle frames to R data frames. The call then invokes the R runtime, which opens a new window with the chart displayed in Figure 2-2.

Figure 2-2. R plot showing correlations between indicators

The plot function creates a scatter plot for each pair of columns (indicators) in our input data, so we can quickly check if there are any correlations. For example, if you look at the intersection of the Life row and GDP column, you can see that there might be some correlation between life expectancy and GDP per capita (but not a linear one). We'll see this better after normalizing the data in the next section.

The plot function is possibly the most primitive function from R we can call, but it shows the idea. However, R offers a number of powerful packages that you can access from F# thanks to the R provider. For example, you can use ggplot2 for producing print-ready charts, nnet for neural networks, and numerous other packages for regressions, clustering, and other statistical analyses.
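Before moving on, here is one more hedged sketch of calling an R function on the frame. It assumes only that R is installed locally; summary comes from the base package, which has to be quoted with double backticks because base is an F# keyword:

    open RProvider.``base``

    // Compute an R-style summary (min, max, quartiles, mean) for
    // every column of the world frame.
    R.summary(world)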
Normalizing the World Data Set

As the last step in this chapter, we write a simple computation to normalize the data in the world data frame. As you could see in Table 2-1, the data set contains quite diverse numbers, so we rescale the values to a scale from 0 to 1. This prepares the data for the clustering algorithm implemented in the next chapter, and also lets us explore the correlation between GDP and life expectancy.

To normalize the values, we need the minimal and maximal value for each indicator. Then we can transform a value v by calculating (v-min)/(max-min). With Deedle, we do not have to do this for individual values, but we can instead express this as a computation over the whole frame. As part of the normalization, we also fill missing values with the average value for the indicator. This is simple, but works well enough for us:

    let lo = Stats.min world
    let hi = Stats.max world
    let avg = Stats.mean world

    let filled =
      world
      |> Frame.transpose
      |> Frame.fillMissingUsing (fun _ ind -> avg.[ind])

    let norm =
      (filled - lo) / (hi - lo)
      |> Frame.transpose

The normalization is done in three steps:

1. First, we use functions from the Stats module to get the smallest, largest, and average values. When applied on a frame, the functions return series with one number for each column, so we get aggregates for all indicators.

2. Second, we fill the missing values. The fillMissingUsing operation iterates over all columns and then fills the missing value for each item in the column by calling the function we provide. To use it, we first transpose the frame (to switch rows and columns). Then fillMissingUsing iterates over all countries, gives us the indicator name ind, and we look up the average value for the indicator using avg.[ind]. We do not need the value of the first parameter, and rather than assigning it to an unused variable, we use the _ pattern, which ignores the value.

3. Third, we perform the normalization. Deedle defines numerical operators between frame and series, such that filled - lo subtracts the lo series point-wise from each column of the filled frame, and we subtract minimal indicator values for each country. Finally, we transpose the frame again into the original shape with indicators as columns and countries as rows.

The fact that the explanation here is much longer than the code shows just how much you can do with just a couple of lines of code with F# and Deedle. The library provides functions for joining frames, grouping, and aggregation, as well as windowing and sampling (which are especially useful for time-indexed data). For more information about the available functions, check out the documentation for the Stats module and the documentation for the Frame module on the Deedle website.
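To see what the (v-min)/(max-min) rescaling does on a single series, here is a minimal sketch with made-up values (Deedle series support the scalar - and / operators used below):

    let s = series [ "A", 10.0; "B", 20.0; "C", 30.0 ]
    let sLo, sHi = Seq.min (Series.values s), Seq.max (Series.values s)
    let scaled = (s - sLo) / (sHi - sLo)
    // scaled has the values A => 0.0, B => 0.5, C => 1.0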
To finish the chapter with an interesting visualization, let's use the normalized data to build a scatter plot that shows the correlation between GDP and life expectancy. As suggested earlier, the growth is not linear, so we take the logarithm of GDP:

    let gdp = log norm.["GDP"] |> Series.values
    let life = norm.["Life"] |> Series.values

    let options =
      Options(pointSize=3, colors=[|"#3B8FCC"|],
        trendlines=[|Trendline(opacity=0.5, lineWidth=10)|],
        hAxis=Axis(title="Log of scaled GDP (per capita)"),
        vAxis=Axis(title="Life expectancy (scaled)"))

    Chart.Scatter(Seq.zip gdp life)
    |> Chart.WithOptions(options)

The norm.["GDP"] notation is used to get a specified column from the data frame. This returns a series, which supports basic numerical operators (as used in "Visualizing CO2 Emissions Change" on page 18) as well as basic numerical functions, so we can directly call log on the series. For the purpose of the visualization, we need just the values and not the country names, so we call Series.values to get a plain F# sequence with the raw values. We then combine the values for the X and Y axes using Seq.zip to get a sequence of pairs representing the two indicators for each country. To get the chart in Figure 2-3, we also specify visual properties, titles, and most importantly, add a linear trend line.

Figure 2-3. Correlation between logarithm of GDP and life expectancy

If we denormalize the numbers, we can roughly say that countries with a life expectancy greater by 10 years have 10 times larger GDP per capita. That said, to prove this point more convincingly, we would have to test the statistical significance of the hypothesis, and we'd have to go back to the R type provider!

Conclusions

In this chapter, we looked at a more realistic case study of doing data science with F#. We still used World Bank as our data source, but this time we called it using the XML provider directly. This demonstrates a general approach that would work with any REST-based service.

Next, we looked at the data in two different ways. We used Deedle to print a data frame showing the numerical values. This showed us that some values are missing and that different indicators have very different ranges, and we later normalized the values for further processing. Next, we used the R type provider to get a quick overview of correlations. Here, we really just scratched the surface of what is possible. The R provider gives access to over 5,000 statistical packages, which are invaluable when doing more complex data analysis.

In the first two chapters, we used a number of external libraries (all of them available conveniently through the FsLab package). In the next chapter, we shift our focus from using to creating, and we'll look at how to use F# to implement a simple clustering algorithm.

CHAPTER 3
Implementing Machine Learning Algorithms

All of the analysis that we discussed so far in this report was manual. We looked at some data, we had some idea what we wanted to find or highlight, we transformed the data, and we built a visualization. Machine learning aims to make the process more automated. In general, machine learning is the process of building models automatically from data. There are two basic kinds of algorithms. Supervised algorithms learn to generalize from data with known answers, while unsupervised algorithms automatically learn to model data without known structure.

In this chapter, we implement a basic, unsupervised machine learning algorithm called k-means clustering that automatically splits inputs into a specified number of groups. We'll use it to group countries based on the indicators obtained in the previous chapter.

This chapter also shows the F# language from a different perspective. So far, we did not need to implement any complicated logic and mostly relied on existing libraries. In contrast, this chapter uses just the standard F# library, and you'll see a number of ways in which F# makes it very easy to implement new algorithms. The primary way is type inference, which lets us write efficient and correct code while keeping it very short and readable.

How k-Means Clustering Works

The k-means clustering algorithm takes input data, together with the number k that specifies how many clusters we want to obtain, and automatically assigns the individual inputs to one of the clusters. It is iterative, meaning that it runs in a loop until it reaches the final result or a maximal number of steps.
The idea of the algorithm is that it creates a number of centroids that represent the centers of the clusters. As it runs, it keeps adjusting the centroids so that they better cluster the input data. It is an unsupervised algorithm, which means that we do not need to know any information about the clusters (say, sample inputs that belong there).

To demonstrate how the algorithm works, we look at an example that can be easily drawn in a diagram. Let's say that we have a number of points with X and Y coordinates and we want to group them in clusters. Figure 3-1 shows the points (as circles) and the current centroids (as stars). Colors illustrate the current clustering that we are trying to improve. This is very simple, but it is sufficient to get started.

Figure 3-1. Clustering three groups of circles with stars showing k-means centroids

The algorithm runs in three simple steps:

1. First, we randomly generate initial centroids. This can be done by randomly choosing some of the inputs as centroids, or by generating random values. In the figure, we placed three stars at random X and Y locations.

2. Second, we update the clusters. For every input, we find the nearest centroid, which determines the cluster to which the input belongs. In the figure, we show this using color: each input has the color of the nearest centroid. If this step does not change the inputs in any of the clusters, we are done and can return them as the final result.

3. Third, we update the centroids. For each cluster (group of inputs with the same color), we calculate the center and move the centroid into this new location. Next, we jump back to the second step and update the clusters again, based on the new centroids.

The example in Figure 3-1 shows the state before and after one iteration of the loop. In Before, we randomly generated the location of the centroids (shown as stars) and assigned all of the inputs to the correct cluster (shown as different colors). In After, we see the new state after running steps 3 and 2. In step 3, we move the green centroid to the right (the leftmost green circle becomes blue), and we move the orange centroid to the bottom and a bit to the left (the rightmost blue circle becomes orange).

To run the algorithm, we do not need any classified samples, but we do need two things. We need to be able to measure the distance (to find the nearest centroid), and we need to be able to aggregate the inputs (to calculate a new centroid). As we'll see in "Writing a Reusable Clustering Function" on page 36, this information will be nicely reflected in the F# type information at the end of the chapter, so it's worth remembering.

Clustering 2D Points

Rather than getting directly to the full problem and clustering countries, we start with a simpler example. Once we know that the code works on the basic sample, we'll turn it into a reusable F# function and use it on the full data set.

Our sample data set consists of just six points. Assuming 0.0, 0.0 is the bottom left corner, we have two points in the bottom left, two in the bottom right, and two in the top left corner:

    let data =
      [ (0.0, 1.0); (1.0, 1.0);
        (10.0, 1.0); (13.0, 3.0);
        (4.0, 10.0); (5.0, 8.0) ]

The notation [ ... ] is the list expression (which we've seen in previous chapters), but this time we're creating a list of explicitly given tuples.
If you run the code in F# Interactive, you'll see that the type of the data value is list<float * float> (the F# compiler also reports this as (float * float) list, which is just a different way of writing the same type), so the tuple float * float is the type of an individual input. As discussed before, we need the distance and aggregate functions for the inputs:

    let distance (x1, y1) (x2, y2) : float =
      sqrt ((x1-x2)*(x1-x2) + (y1-y2)*(y1-y2))

    let aggregate points : float * float =
      (List.averageBy fst points, List.averageBy snd points)

The distance function takes two points and produces a single number. Note that in F#, function parameters are separated by spaces, and so (x1, y1) is the first parameter and (x2, y2) is the second. However, both parameters are bound to patterns that decompose the tuple into individual components, and we get access to the X and Y coordinates for both points. We also included the type annotation specifying that the result is float. This is needed here because the F# compiler would not know what numerical type we intend to use. The body then simply calculates the distance between the two points.

The aggregate function takes a list of inputs and calculates their center. This is done using the List.averageBy function, which takes two arguments. The second argument is the input list, and the first argument is a projection function that specifies what value (from the input) should be averaged. The fst and snd functions return the first and second element of a tuple, respectively, and this averages the X and Y coordinates.
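As a quick interactive check (not from the report, but the values follow directly from the definitions above), we can verify both helpers in F# Interactive:

    distance (0.0, 0.0) (3.0, 4.0)        // = 5.0 (a 3-4-5 triangle)
    aggregate [ (0.0, 0.0); (2.0, 4.0) ]  // = (1.0, 2.0)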
Initializing Centroids and Clusters

The first step of the k-means algorithm is to initialize the centroids. For our sample, we use three clusters. We initialize the centroids by randomly picking three of the inputs:

    let clusterCount = 3
    let centroids =
      let random = System.Random()
      [ for i in 1 .. clusterCount ->
          List.nth data (random.Next(data.Length)) ]

The code snippet uses the List.nth function to access the element at the random offset (in F# 4.0, List.nth is deprecated, and you can use the new List.item instead). We also define the random value as part of the definition of centroids. This makes it accessible only inside the definition of centroids, and we keep it local to the initialization code.

Our logic here is not perfect, because we could accidentally pick the same input twice and two clusters would fully overlap. This is something we should improve in a proper implementation, but it works well enough for our demo.

The next step is to find the closest centroid for each input. To do this, we write a function closest that takes all centroids and the input we want to classify:

    let closest centroids input =
      centroids
      |> List.mapi (fun i v -> i, v)
      |> List.minBy (fun (_, cent) -> distance cent input)
      |> fst

The function works in three steps that are composed in a sequence using the pipeline |> operator that we've seen in the first chapter. Here, we start with centroids, which is a list, and apply a number of transformations on the list:

1. We use List.mapi, which calls the specified function for each element of the input list and collects the results into an output list (if you are familiar with LINQ, then this is the Select extension method). The mapi function gives us the value v, and also the index i (hence mapi and not just map), and we construct a tuple with the index and the value. Now we have a list with centroids together with their index.

2. Next, we use List.minBy to find the smallest element of the list according to the specified criteria; in our case, this is the distance from the input. Note that we get the element of the previous list as an input. This is a pair with index and centroid, and we use the pattern (_, cent) to extract the second element (the centroid) and assign it to a variable while ignoring the index of the centroid (which is useful in the next step).

3. The List.minBy function returns the element of the list for which the function given as a parameter returned the smallest value. In our case, this is a value of type int * (float * float), consisting of the index together with the centroid itself. The last step then uses fst to get the first element of the tuple, that is, the index of the centroid.

The one new piece of F# syntax used in this snippet is an anonymous function that is created using fun v1 .. vn -> e, where v1 .. vn are the input variables (or patterns) and e is the body of the function.

Now that we have a function to classify one input, we can easily use List.map to classify all inputs:

    data |> List.map (fun point -> closest centroids point)

Try running the above in F# Interactive to see how your random centroids are generated! If you are lucky, you might get a result [0; 0; 1; 1; 2; 2], which would mean that you already have the perfect clusters. But this is not likely the case, so we'll need to run the next step.

Before we continue, it is worth noting that we could also write data |> List.map (closest centroids). This uses an F# feature called partial function application and means the exact same thing: F# automatically creates a function that takes point and passes it as the next argument to closest centroids.

Updating Clusters Recursively

The last part of the algorithm that we need to implement is updating the centroids (based on the assignments to clusters) and looping until the cluster assignment stops changing. To do this, we write a recursive function update that takes the current assignment to clusters and produces the final assignment (after the looping converges).

The assignment to clusters is just a list (as in the previous section) that has the same length as our data and contains the index of a cluster (between 0 and clusterCount-1). To get all inputs for a given cluster, we need to filter the data based on the assignments. We will use the List.zip function, which aligns elements in two lists and returns a list of tuples. For example:

    List.zip [1; 2; 3; 4] ['A'; 'B'; 'C'; 'D']
      = [(1,'A'); (2,'B'); (3,'C'); (4,'D')]

Aside from List.zip, the only new F# construct in the following snippet is let rec, which is the same as let, but it explicitly marks the function as recursive (meaning that it is allowed to call itself):

    let rec update assignment =
      let centroids =
        [ for i in 0 .. clusterCount-1 ->
            let items =
              List.zip assignment data
              |> List.filter (fun (c, data) -> c = i)
              |> List.map snd
            aggregate items ]
      let next = List.map (closest centroids) data
      if next = assignment then assignment
      else update next

    let assignment =
      update (List.map (closest centroids) data)

The function first calculates new centroids. To do this, it iterates over the centroid indices. For each centroid, it finds all items from data that are currently assigned to the centroid. Here, we use List.zip to create a list containing items from data together with their assignments. We then use the aggregate function (defined earlier) to calculate the center of the items.
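To make the zip-and-filter step concrete, here is a tiny sketch with made-up values that mirrors what happens inside update for cluster index 1:

    let sampleAssignment = [0; 1; 1]
    let samplePoints = [ (0.0, 1.0); (10.0, 1.0); (13.0, 3.0) ]
    List.zip sampleAssignment samplePoints
    |> List.filter (fun (c, _) -> c = 1)
    |> List.map snd
    // = [(10.0, 1.0); (13.0, 3.0)]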
Once we have new centroids, we calculate new assignments based on the updated clusters (using List.map (closest centroids) data, as in the previous section).

The last two lines of the function implement the looping. If the new assignment next is the same as the previous assignment, then we are done and we return the assignment as the result. Otherwise, we call update recursively with the new assignment (and it updates the centroids again, leading to a new assignment, etc.).

It is worth noting that F# allows us to use next = assignment to compare two lists. It implements structural equality by comparing the lists based on their contents instead of their reference (or position in the .NET memory).

Finally, we call update with the initial assignments to cluster our sample points. If everything worked well, you should get a list such as [1;1;2;2;0;0] with the three clusters as the result. However, there are two things that could go wrong and would be worth improving in the full implementation:

Empty clusters. If the random initialization picks the same point twice as a centroid, we will end up with an empty cluster (because List.minBy always returns the first value if there are multiple values with the same minimum). This currently causes an exception, because the aggregate function does not work on empty lists. We could fix this either by dropping empty clusters, or by adding the original center as another parameter of aggregate (and keeping the centroid where it was before).

Termination condition. The other potential issue is that the looping could take too long. We might want to stop it not just when the clusters stop changing, but also after a fixed number of iterations. To do this, we would add an iters parameter to our update function, increment it with every recursive call, and modify the termination condition.

Even though we did all the work using an extremely simple special case, we now have everything in place to turn the code into a reusable function. This nicely shows the typical F# development process.

Writing a Reusable Clustering Function

A nice aspect of how we were writing code so far is that we did it in small chunks and we could immediately test the code interactively to see that it works on our small example. This makes it easy to avoid silly mistakes and makes the software development process much more pleasant, especially when writing machine learning algorithms, where many little details could go wrong that would be hard to discover later!

The last step is to take the code and turn it into a function that we can call on different inputs. This turns out to be extremely easy with F#. The following snippet is exactly the same as the previous code; the only difference is that we added a function header (first line), indented the body further, and changed the last line to return the result:

    let kmeans distance aggregate clusterCount data =
      let centroids =
        let rnd = System.Random()
        [ for i in 1 .. clusterCount ->
            List.nth data (rnd.Next(data.Length)) ]
      let closest centroids input =
        centroids
        |> List.mapi (fun i v -> i, v)
        |> List.minBy (fun (_, cent) -> distance cent input)
        |> fst
      let rec update assignment =
        let centroids =
          [ for i in 0 .. clusterCount-1 ->
              let items =
                List.zip assignment data
                |> List.filter (fun (c, data) -> c = i)
                |> List.map snd
              aggregate items ]
        let next = List.map (closest centroids) data
        if next = assignment then assignment
        else update next
      update (List.map (closest centroids) data)
The most interesting aspect of the change we did is that we turned all the inputs for the k-means algorithm into function parameters. This includes not just data and clusterCount, but also the functions for calculating the distance and aggregating the items. The function does not rely on any values defined earlier, and you can extract it into a separate file and could turn it into a library, too.

An interesting thing happened during this change. We turned the code that worked on just 2D points into a function that can work on any inputs. You can see this when you look at the type of the function (either in a tooltip or by sending it to F# Interactive). The type signature of the function looks as follows:

    val kmeans :
      distance : ('a -> 'a -> 'b) ->
      aggregate : ('a list -> 'a) ->
      clusterCount : int ->
      data : 'a list ->
        int list (when 'b : comparison)

In F#, the 'a notation in a type signature represents a type parameter. This is a variable that can be substituted for any actual type when the function is called. This means that the data parameter can be a list containing any values, but only if we also provide a distance function that works on the same values, and an aggregate function that turns a list of those values into a single value. The clusterCount parameter is just a number, and the result is int list, representing the assignments to clusters.

The distance function takes two 'a values and produces a distance of type 'b. Surprisingly, the distance does not have to return a floating point number. It can be any value that supports the comparison constraint (as specified on the last line). For instance, we could return int, but not string. If you think about this, it makes sense: we do not do any calculations with the distance. We just need to find the smallest value (using List.minBy), so we only need to compare them. This can be done on float or int; there is no way to compare two string values.

The compiler is not just checking the types to detect errors, but also helps you understand what your code does by inferring the type. Learning to read the type signatures takes some time, but it quickly becomes an invaluable tool of every F# programmer. You can look at the inferred type and verify whether it matches your intuition. In the case of k-means clustering, the type signature matches the introduction discussed earlier in "How k-Means Clustering Works" on page 30.

To experiment with the type inference, try removing one of the parameters from the signature of the kmeans function. When you do, the function might still compile (for example, if you have data in scope), but it will restrict the type from the generic parameter 'a to float, suggesting that something in the code is making it too specialized. This is often a hint that there is something wrong with the code!
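Although the report does not show it at this point, a hedged usage sketch makes the generalization tangible: calling the function on the 2D sample from earlier produces the same kind of result as the inline version (the name assignment2D is made up here, and the concrete cluster indices depend on the random initialization):

    let assignment2D = kmeans distance aggregate 3 data
    // e.g. [1; 1; 2; 2; 0; 0]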
For call ing kmeans, we need a list of values, so we get the rows of the frame (representing individual countries) and turn them into a list using List.ofSeq: let data = norm.GetRows<float>().Values |> List.ofSeq The type of data is list<Series<string, float>>. Every series in the list represents one country with a number of different indicators. The fact that we are using a Deedle series means that we do not have to worry about missing values and also makes calculations easier. The two functions we need for kmeans are just a few lines of code: let distance (s1:Series<string,float>) (s2:Series<string, float>) = (s1 - s2) * (s1 - s2) |> Stats.sum let aggregate items = items |> Frame.ofRowsOrdinal |> Stats.mean The distance function takes two series and uses the point-wise * and - operators to calculate the squares of differences for each col umn, then sums them to get a single distance metric. We need to provide type annotations, written as (s1:Series<string,float>), to tell the F# compiler that the parameter is a series and that it should use the overloaded numerical operators provided by Deedle (rather than treating them as operators on integers). Clustering Countries | 39 The aggregate takes a list of series (countries in a cluster) of type list<Series<string,float>>. It should return the averaged value that represents the center of the cluster. To do this, we use a simple trick: we turn the series into a frame and then use Stats.mean from Deedle to calculate averages over all columns of the frame. This gives us a series where each indicator is the average of all input indi cators. Deedle also conveniently skips over missing values. Now we just need to call the kmeans function and draw a chart showing the clusters: let clrs = ColorAxis(colors=[|"red";"blue";"orange"|]) let countryClusters = kmeans distance aggregate 3 data Seq.zip norm.RowKeys countryClusters |> Chart.Geo |> Chart.WithOptions(Options(colorAxis=clrs)) The snippet is not showing anything new. We call kmeans with our new data and the distance and aggregate functions. Then we combine the country names (norm.RowKeys) with their cluster assignments and draw a geo chart that uses red, blue, and orange for the three clusters. The result is the map in Figure 3-2. Figure 3-2. Clustering countries of the world based on World Bank indicators 40 | Chapter 3: Implementing Machine Learning Algorithms Looking at the image, it seems that the clustering algorithm does identify some categories of countries that we would expect. The next interesting step would be to try understand why. To do this, we could look at the final centroids and find which of the indicators contribute the most to the distance between them. Scaling to the Cloud with MBrace The quality of the results you get from k-means clustering partly depends on the initialization of the centroids, so you can run the algorithm a number of times with different initial centroids and see which result is better. You can easily do this locally, but what if we were looking not at hundreds of countries, but at millions of prod ucts or customers in our database? In that case, the next step of our journey would be to use the cloud. In F#, you can use the MBrace library,3 which lets you take existing F# code, wrap the body of a function in the cloud computation, and run it in the cloud. 
Scaling to the Cloud with MBrace

The quality of the results you get from k-means clustering partly depends on the initialization of the centroids, so you can run the algorithm a number of times with different initial centroids and see which result is better. You can easily do this locally, but what if we were looking not at hundreds of countries, but at millions of products or customers in our database? In that case, the next step of our journey would be to use the cloud.

In F#, you can use the MBrace library (available at http://www.m-brace.net/), which lets you take existing F# code, wrap the body of a function in a cloud computation, and run it in the cloud. You can download a complete example as part of the accompanying source code download, but the following code snippet shows the required changes to the kmeans function:

    let kmeans distance aggregate clusterCount
               (remoteData:CloudValue<'T[]>) = cloud {
      let! data = CloudValue.Read remoteData
      // The rest of the function is the same as before
    }

    kmeans distance aggregator 3 cloudCountries
    |> cluster.CreateProcess

In the sample, we are using two key concepts from the MBrace library:

Cloud computation. By wrapping the body of the function in cloud, we created a cloud computation. This is a block of F# code that can be serialized, transferred to a cluster in the cloud, and executed remotely. Cloud computations can spawn multiple parallel workflows that are then distributed across the cluster. To start a cloud computation, we use the CreateProcess method, which starts the work and returns information about the process running in the cluster.

Cloud values. When running the algorithm on a large number of inputs, we cannot copy the data from the local machine to the cloud every time. The CloudValue type lets us create a value (here, containing an array of inputs) that is stored in the cluster, so we can use it to avoid data copying. The CloudValue.Read method is used to read the data from the cloud storage into memory.

Using MBrace requires more F# background than we can provide in a brief report, but it is an extremely powerful programming model that lets you take your machine learning algorithms to the next level. Just by adding cloud computations and cloud values, you can turn a simple local implementation into one that runs over hundreds of machines. If you want to learn more about MBrace, the project documentation has all the details and also an extensive collection of samples (available at http://www.m-brace.net/programming-model.html).

Conclusions

In this chapter, we completed our brief tour by using the F# language to implement the k-means clustering algorithm. This illustrated two aspects of F# that make it nice for writing algorithms:

First, we wrote the code iteratively. We started by running individual parts of the code on sample input, and we could quickly verify that it works. Then we refactored the code into a reusable function. By then, we already knew that the code works.

Second, the F# type inference helped us along the way. In F#, you do not write types explicitly, so the language is not verbose, but you still get the safety guarantees that come with static typing. The type inference also helps us understand our code, because it finds the most general type. If this does not match your expectations about the algorithm, you know that there is something suspicious going on!

Interactive development and type inference will help you when writing any machine learning algorithms. To learn more, you can explore other existing F# projects like the decision tree by <NAME> (available at http://bit.ly/decisiontreeblog), or the Ariadne project (http://evelinag.com/Ariadne/), which implements Gaussian process regression. Writing your own machine learning algorithms is not just a great way to learn the concepts; it is useful when exploring the problem domain. It often turns out that a simple algorithm like k-means clustering works surprisingly well. Often, you can also use an existing F# or .NET library. The next chapter will give you a couple of pointers.
CHAPTER 4
Conclusions and Next Steps

This brief report shows just a few examples of what you can do with F#, but we used it to demonstrate many of the key features of the language that make it a great tool for data science and machine learning. With type providers, you can elegantly access data. We used the XPlot library for visualization, but F# also gives you access to the ggplot2 package from R and numerous other tools. As for analysis, we used the Deedle library and the R type provider, but we also implemented our own clustering algorithm.

Adding F# to Your Project

I hope this report piqued your interest in F# and showed some good reasons why you might want to use it in your projects. So, what are the best first steps? First of all, you probably should not immediately switch all your code to F# and become the only person in your company who understands it!

A large part of any machine learning and data science is experimentation. Even if your final implementation needs to be in C# (or any other language), you can still use F# to explore the data and prototype different algorithms (using plain F#, the R type provider, and the machine learning libraries discussed below).

F# integrates well with .NET and Xamarin applications, so you can write your data access code or a machine learning algorithm in F# and use it in a larger C# application. There are also many libraries for wrapping F# code as a web application or a web service (see the web guide on the F# Foundation website: http://fsharp.org/guides/web), and so you can expose the functionality as a simple REST service and host it on Heroku, AWS, or Azure.

Resources for Learning More

If you want to learn more about using F# for data science and machine learning, a number of excellent resources are worth checking out now that you have finished the quick overview in this report.

Report Source Code (fslab.org/report)
The best way to learn anything is to try it on your own, so download the full source code for the examples from this report and try modifying them to learn other interesting things about the data we've been using, or change the code to load other data relevant to your work!

F# User Groups and Coding Dojos (c4fsharp.net)
The F# community is very active, and there is likely an F# user group not far from where you live. The Community for F# website is the best place to find more information. It also hosts coding dojos that you can try completing on your own.

F# Software Foundation (fsharp.org)
The F# Foundation website is the home of the F# language and is a great starting point if you want to learn more about the language and find resources like books and online tutorials. It also provides up-to-date installation instructions for all platforms.

FsLab Project (fslab.org)
FsLab is a package that brings together many of the popular data science libraries for F#. We used F# Data (for data access), Deedle and the R provider (for data analysis), and XPlot (for visualization). The FsLab website hosts their documentation and other useful resources.

Accord.NET Framework (accord-framework.net)
Accord.NET is a machine learning library for .NET that works well with F#. In this report, we implemented k-means clustering to demonstrate interesting F# language features, but when solving simple machine learning problems, you can often just use an existing library!
MBrace Project (http://www.m-brace.net/)
MBrace is a simple library for scalable data scripting and programming with F# and C#. It lets you run F# computations in the cloud directly from your F# Interactive with the same rapid feedback that you get when running your F# code locally. Check out MBrace if you are looking at implementing scalable machine learning with F#.

Machine Learning Projects for .NET Developers (<NAME>, Apress)
Finally, if you enjoyed Chapter 3, then Mathias Brandewinder's book is a great resource. It implements a number of machine learning algorithms using F# and also provides more details for some of the libraries used in this report, like the R type provider and F# Data.

About the Author

<NAME> is a computer scientist, book author, and open source developer. He wrote a popular book called Real-World Functional Programming (Manning) and is a lead developer of several F# open source libraries. He also contributed to the design of the F# language as an intern and consultant at Microsoft Research. He is a partner at fsharpWorks (http://fsharpworks.com/), where he provides training and consulting services.

Tomas recently submitted his PhD thesis at the University of Cambridge, focused on types for understanding context usage in programming languages. His most recent work also includes two essays that attempt to understand programming through the perspective of philosophy of science.
github.com/mennanov/fieldmask-utils
README
---

### Protobuf Field Mask utils for Go

[![Tests](https://github.com/mennanov/fieldmask-utils/actions/workflows/tests.yml/badge.svg)](https://github.com/mennanov/fieldmask-utils/actions/workflows/tests.yml) [![Coverage](https://codecov.io/gh/mennanov/fieldmask-utils/branch/master/graph/badge.svg?token=O7HtNMO6Ra)](https://codecov.io/gh/mennanov/fieldmask-utils)

Features:

* Copy from any Go struct to any compatible Go struct with a field mask applied
* Copy from any Go struct to a `map[string]interface{}` with a field mask applied
* Extensible masks (e.g. inverse mask: copy all fields except those mentioned, etc.)
* Supports [Protobuf Any](https://developers.google.com/protocol-buffers/docs/proto3#any) message types.

If you're looking for a simple FieldMask library to work with protobuf messages only (not arbitrary structs), consider this tiny repo: <https://github.com/mennanov/fmutils>

#### Examples

Copy from a protobuf message to a protobuf message:

```
// testproto/test.proto

message UpdateUserRequest {
    User user = 1;
    google.protobuf.FieldMask field_mask = 2;
}
```

```
package main

import fieldmask_utils "github.com/mennanov/fieldmask-utils"

// A function that maps field mask field names to the names used in Go structs.
// It has to be implemented according to your needs.
func naming(s string) string {
	if s == "foo" {
		return "Foo"
	}
	return s
}

func main() {
	var request UpdateUserRequest
	userDst := &testproto.User{} // a struct to copy to
	mask, _ := fieldmask_utils.MaskFromPaths(request.FieldMask.Paths, naming)
	fieldmask_utils.StructToStruct(mask, request.User, userDst)
	// Only the fields mentioned in the field mask will be copied to userDst; other fields are left intact.
}
```

Copy from a protobuf message to a `map[string]interface{}`:

```
package main

import fieldmask_utils "github.com/mennanov/fieldmask-utils"

func main() {
	var request UpdateUserRequest
	userDst := make(map[string]interface{}) // a map to copy to
	mask, _ := fieldmask_utils.MaskFromProtoFieldMask(request.FieldMask, naming)
	err := fieldmask_utils.StructToMap(mask, request.User, userDst)
	if err != nil {
		panic(err)
	}
	// Only the fields mentioned in the field mask will be copied to userDst; other fields are left intact.
}
```

Copy with an inverse mask:

```
package main

import fieldmask_utils "github.com/mennanov/fieldmask-utils"

func main() {
	var request UpdateUserRequest
	userDst := &testproto.User{} // a struct to copy to
	mask := fieldmask_utils.MaskInverse{"Id": nil, "Friends": fieldmask_utils.MaskInverse{"Username": nil}}
	fieldmask_utils.StructToStruct(mask, request.User, userDst)
	// Only the fields that are NOT mentioned in the field mask will be copied to userDst; other fields are left intact.
}
```

#### Limitations

1. Larger scope field masks have no effect and are not considered invalid: the field mask strings `"a", "a.b", "a.b.c"` will result in a mask `a{b{c}}`, which is the same as `"a.b.c"`.
2. Masks inside a protobuf `Map` are not supported.
3. When copying from a struct to a struct, the destination struct must have the same fields (or a subset) as the source struct. Either of the source or destination fields can be a pointer as long as it is a pointer to the type of the corresponding field.
4. `oneof` fields are represented differently in `fieldmaskpb.FieldMask` compared to `fieldmask_utils.Mask`.
In [FieldMask](https://pkg.go.dev/google.golang.org/protobuf/types/known/fieldmaskpb) the fields are represented using their property name; in this library they are prefixed with the `oneof` name, matching how the generated Go code is laid out. This can lead to issues when converting between the two, for example when using `MaskFromPaths` or `MaskFromProtoFieldMask`.

Documentation
---

### Overview

Package fieldmask_utils provides utility functions for copying data from structs using a field mask.

### Functions

#### func [StructToMap](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/copy.go#L320)

```
func StructToMap(filter FieldFilter, src interface{}, dst map[string]interface{}, userOpts ...Option) error
```

StructToMap copies the `src` struct to the `dst` map. Behavior is similar to `StructToStruct`. Arrays in a non-empty dst are converted to slices.
#### func [StructToStruct](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/copy.go#L17)

```
func StructToStruct(filter FieldFilter, src, dst interface{}, userOpts ...Option) error
```

StructToStruct copies the `src` struct to the `dst` struct using the given FieldFilter. Only the fields for which FieldFilter returns true will be copied to `dst`. `src` and `dst` must be coherent in terms of the field names, but they are not required to be of the same type. Unexported fields are copied only if the corresponding struct filter is empty and `dst` is assignable to `src`.

### Types

#### type [FieldFilter](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L12)

```
type FieldFilter interface {
	// Filter should return a corresponding FieldFilter for the given fieldName and a boolean result. If result is true
	// then the field is copied, skipped otherwise.
	Filter(fieldName string) (FieldFilter, bool)
	// IsEmpty returns true if the FieldFilter is empty. In this case all fields are copied.
	IsEmpty() bool
}
```

FieldFilter is an interface used by the copying function to filter the fields that need to be copied.

#### type [FieldFilterContainer](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L21)

```
type FieldFilterContainer interface {
	FieldFilter
	// Get gets the FieldFilter for the given field name. Result is false if the filter is not found.
	Get(fieldName string) (filter FieldFilterContainer, result bool)
	// Set sets the FieldFilter for the given field name.
	Set(fieldName string, filter FieldFilterContainer)
}
```

FieldFilterContainer is a FieldFilter with additional methods Get and Set.

#### func [FieldFilterFromPaths](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L159)

```
func FieldFilterFromPaths(paths []string, naming func(string) string, filter func() FieldFilterContainer) (FieldFilterContainer, error)
```

FieldFilterFromPaths creates a new FieldFilter from the given paths.

#### func [FieldFilterFromString](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L199)

```
func FieldFilterFromString(input string, filter func() FieldFilterContainer) FieldFilterContainer
```

FieldFilterFromString creates a new FieldFilterContainer from a string. The input string is supposed to be a valid string representation of a FieldFilter like "a,b,c{d,e{f,g}},d". Use it in tests only, as the input string is not validated and the underlying function panics in case of a parse error.

#### type [MapVisitorResult](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/copy.go#L260) added in v1.0.0

```
type MapVisitorResult struct {
	SkipToNext bool
	UpdatedDst *reflect.Value
}
```

#### type [Mask](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L30)

```
type Mask map[string]FieldFilterContainer
```

Mask is a tree-based implementation of a FieldFilter.
#### func [MaskFromPaths](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L137)

```
func MaskFromPaths(paths []string, naming func(string) string) (Mask, error)
```

MaskFromPaths creates a new Mask from the given paths.

#### func [MaskFromProtoFieldMask](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L127)

```
func MaskFromProtoFieldMask(fm *field_mask.FieldMask, naming func(string) string) (Mask, error)
```

MaskFromProtoFieldMask creates a Mask from the given FieldMask.

#### func [MaskFromString](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L181)

```
func MaskFromString(s string) Mask
```

MaskFromString creates a new Mask instance from a given string. Use in tests only. See FieldFilterFromString for details.

#### func (Mask) [Filter](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L48)

```
func (m Mask) Filter(fieldName string) (FieldFilter, bool)
```

Filter returns true for those fieldNames that exist in the underlying map. Field names that start with "XXX_" are ignored as unexported.

#### func (Mask) [Get](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L33)

```
func (m Mask) Get(fieldName string) (FieldFilterContainer, bool)
```

Get gets the FieldFilter for the given field name. Result is false if the filter is not found.

#### func (Mask) [IsEmpty](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L61)

```
func (m Mask) IsEmpty() bool
```

IsEmpty returns true if the mask is empty.

#### func (Mask) [Set](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L39)

```
func (m Mask) Set(fieldName string, filter FieldFilterContainer)
```

Set sets the FieldFilter for the given field name.

#### func (Mask) [String](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L86)

```
func (m Mask) String() string
```

#### type [MaskInverse](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L91)

```
type MaskInverse map[string]FieldFilterContainer
```

MaskInverse is an inversed version of a Mask (it will copy all the fields except those mentioned in the mask).

#### func [MaskInverseFromPaths](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L148)

```
func MaskInverseFromPaths(paths []string, naming func(string) string) (MaskInverse, error)
```

MaskInverseFromPaths creates a new MaskInverse from the given paths.
#### func [MaskInverseFromProtoFieldMask](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L132)

```
func MaskInverseFromProtoFieldMask(fm *field_mask.FieldMask, naming func(string) string) (MaskInverse, error)
```

MaskInverseFromProtoFieldMask creates a MaskInverse from the given FieldMask.

#### func [MaskInverseFromString](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L189)

```
func MaskInverseFromString(s string) MaskInverse
```

MaskInverseFromString creates a new MaskInverse instance from a given string. Use in tests only. See FieldFilterFromString for details.

#### func (MaskInverse) [Filter](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L106)

```
func (m MaskInverse) Filter(fieldName string) (FieldFilter, bool)
```

Filter returns true for those fieldNames that do NOT exist in the underlying map. Field names that start with "XXX_" are ignored as unexported.

#### func (MaskInverse) [Get](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L94)

```
func (m MaskInverse) Get(fieldName string) (FieldFilterContainer, bool)
```

Get gets the FieldFilter for the given field name. The result is false if the filter is not found.

#### func (MaskInverse) [IsEmpty](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L118)

```
func (m MaskInverse) IsEmpty() bool
```

IsEmpty returns true if the mask is empty.

#### func (MaskInverse) [Set](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L100)

```
func (m MaskInverse) Set(fieldName string, filter FieldFilterContainer)
```

Set sets the FieldFilter for the given field name.

#### func (MaskInverse) [String](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/mask.go#L122)

```
func (m MaskInverse) String() string
```

#### type [Option](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/copy.go#L266) (added in v0.3.0)

```
type Option func(*options)
```

An Option function modifies the given options.

#### func [WithCopyListSize](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/copy.go#L283) (added in v0.6.0)

```
func WithCopyListSize(f func(src *reflect.Value) int) Option
```

WithCopyListSize sets the CopyListSize function, which lets you choose the number of list elements to copy depending on `src`.

#### func [WithMapVisitor](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/copy.go#L290) (added in v0.7.0)

```
func WithMapVisitor(visitor mapVisitor) Option
```

WithMapVisitor sets the fields visitor function for StructToMap.

#### func [WithSrcTag](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/copy.go#L276) (added in v1.1.0)

```
func WithSrcTag(s string) Option
```

WithSrcTag sets an option that gets the source field name from the field's tag.
#### func [WithTag](https://github.com/mennanov/fieldmask-utils/blob/v1.1.0/copy.go#L269) (added in v0.3.0)

```
func WithTag(s string) Option
```

WithTag sets an option that gets the destination field name from the field's tag.
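Tying the pieces together, a typical gRPC-style update flow might look like the following hedged sketch; the `User` type and the hand-rolled naming map are invented, and a real service would usually derive the Go field names with a generated CamelCase helper:

```
package main

import (
	"fmt"

	fieldmask_utils "github.com/mennanov/fieldmask-utils"
	field_mask "google.golang.org/genproto/protobuf/field_mask"
)

// User is a hypothetical update target used only for illustration.
type User struct {
	Username string
	Email    string
}

func main() {
	// Field mask as it would arrive in an update RPC.
	fm := &field_mask.FieldMask{Paths: []string{"username"}}

	// naming maps proto path segments to Go field names; this tiny
	// lookup table stands in for a real CamelCase helper.
	naming := func(s string) string {
		return map[string]string{"username": "Username", "email": "Email"}[s]
	}

	mask, err := fieldmask_utils.MaskFromProtoFieldMask(fm, naming)
	if err != nil {
		panic(err)
	}

	src := User{Username: "alice", Email: "alice@example.com"}
	dst := User{Email: "keep@example.com"}
	if err := fieldmask_utils.StructToStruct(mask, &src, &dst); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", dst) // Email is untouched: it is not in the mask.
}
```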
biblatex · ctan · TeX
* As a _conspectus siglorum_. You can see a minimal example in the file example.pdf.

### Credits

This package was created for <NAME>'s PhD[1] in 2014. It is licensed under the _LaTeX Project Public Licence_[2].

Footnote 1: [http://apocryphes.hypotheses.org](http://apocryphes.hypotheses.org).
Footnote 2: [http://latex-project.org/lppl/lppl-1-3c.html](http://latex-project.org/lppl/lppl-1-3c.html).

All issues can be submitted, in French or English, on the GitHub issues page[3].

Footnote 3: [https://github.com/maieul/biblatex-manuscripts-philology/issues](https://github.com/maieul/biblatex-manuscripts-philology/issues).

## 2 Requirement

This package needs biblatex 3.3 or later.

## 3 New type and fields

This package defines one new bibtype, @manuscript, which is to be used to define a manuscript.

### Mandatory

This bibtype has these mandatory fields:

* collection (field, literal): the collection in the library. For example: Supplément grec or Vaidyaka.
* location (field, literal): the city or place where the manuscript is kept. For example: Paris, Oxford or Varanasi.
* library (field, literal): the library where the manuscript is kept. For example: Bibliothèque nationale de France or Sarasvati Bhavan Library.
* shelfmark (field, literal): the shelfmark in the collection. For example: 241. Do not prefix it with "MS".

A skeleton entry, including the syntax of the catalog field, looks like this:

@manuscript{key,
  field1  = {value1},
  field2  = {value2},
  catalog = {[prenote1][postnote1]{key1}[prenote2][postnote2]{key2}},
}

The package also provides further fields for describing the manuscript, among them:

* contents (field, literal): description of the manuscript's contents; can be used with or instead of annotation. It starts a new block in the description output. If you want to add paragraphs inside it, you must use the \par command between paragraphs. E.g., "Covers chapters 1-3 only."
* origin (field, list): the places where the manuscript was written.
* owner (field, name): the name(s) of the owner(s) of the manuscript in the past.
* scribe (field, list): the name(s) of the scribe(s).

## 4 Use

### Loading

When loading the biblatex package, use the option bibstyle with the value manuscripts.

\usepackage[bibstyle=manuscripts,...]{biblatex}

If you don't want the shorthand of the manuscript to be auto-defined from the entry key, use manuscripts-noautoshorthand instead.

\usepackage[bibstyle=manuscripts-noautoshorthand,...]{biblatex}

The bibliographic style for other entry types is "verbose", which calls "authortitle". However, if you need another bibliographic style, use the biblatex-multiple-dm package.

### Citation of one manuscript

The manuscript description is supposed to be used with a citestyle of the _verbose_ family (see the biblatex handbook), but you can use any other citestyle. The only problematic point is that the manuscript citation will be verbose. So, if you use:

\cite{manuscriptkey}

the full reference of the manuscript will be printed (see the example file). However, you can use \shcite to print directly the shorthand of the manuscript:

\shcite{manuscriptkey}

You can also use \detailcite to print the description of the manuscript including the "special" fields (§ 3.3), like owner and annotation.

\detailcite{manuscriptkey}

A \detailscites version of the command does the same thing, but for multiple manuscripts.
\detailscites{manuscriptkey1}{manuscriptkey2}{...}{manuscriptkeyn}

### List of manuscripts: _conspectus siglorum_

You can use the standard command \printshorthands with the appropriate options:

\printshorthands[type=manuscript,title=Conspectus siglorum]

In the previous example, we use one option to print shorthands only for manuscript entries, and we set the title to the classical "Conspectus siglorum".

### List of manuscripts with detailed fields

If you want to print a list of manuscripts with the detailed fields listed in § 3.3, just use the env option with the value details.

\printshorthands[type=manuscript,env=details,title=Description of manuscripts]

In this case, you must run biber twice: once after the first run of LaTeX and once after the second run, to add the catalogues to the .bbl file. After that, run LaTeX a third time.

If you use the manuscripts-noautoshorthand bibstyle, this list could contain manuscripts without a shorthand. By default, the descriptions of such manuscripts will be aligned on the left of the shorthands column. Another solution is to make two lists: one containing the manuscripts which have a shorthand and one containing the manuscripts which do not. You can add to your \printbibliography or \printshorthands commands a check option, equal to withshorthand or withoutshorthand. As in:

\printshorthands[
  type=manuscript,
  env=details,
  title=Description of used manuscripts,
  check=withshorthand
]
\printbibliography[
  type=manuscript,
  env=details,
  title=Description of other manuscripts,
  check=withoutshorthand
]

Maybe you want to print the details but not the shorthand, even if the manuscripts have shorthands. In this case, use \printbibliography with the env option equal to details-noshorthand.

\printbibliography[
  type=manuscript,
  env=details-noshorthand,
  title=Description of manuscripts,
]

### Sorting a list of manuscripts without shorthand

If you don't use the shorthands feature, you may want to print a list of manuscripts sorted by town, library, collection and shelfmark. For this purpose, you can use the option sorting=manuscripts of a refcontext environment.

\begin{refcontext}[sorting=manuscripts]{}
\printbibliography[
  type=manuscript,
  title=Description of other manuscripts,
]
\end{refcontext}

You can use the special field sortshelfmark if the way your shelfmarks are sorted is not the way they are printed (just as, for normal entry types, you can use sorttitle). However, using different sorting schemes for different bibliographies in the same document implies that biber will insert the bibliographic entries into the .bbl file multiple times, once per sorting scheme. If you have a big bibliography, that could make LaTeX finish the next run with a "TeX capacity exceeded, sorry [main memory size=<something>]" error, because this fills the memory[5].

Footnote 5: The problem should not happen if you use LuaLaTeX because, contrary to XeLaTeX and pdfLaTeX, LuaLaTeX has no memory limit other than the computer's, which should be enough on a recent machine.

In order to prevent this, the package also provides sorting schemes to get the same sorting for manuscripts and other entries, even if they are separated in the final bibliography. The following sorting schemes are adapted from the standard biblatex sorting schemes:

* nty+manuscripts, adapted from nty;
* nyt+manuscripts, adapted from nyt;
* nyvt+manuscripts, adapted from nyvt.
To use these schemes, don't use a refcontext environment; simply use the global option when loading biblatex:

\usepackage[sorting=nty+manuscripts,...]{biblatex}

If you need more sorting schemes, please contact us; we could integrate them easily.

## 5 Customization

### Create your own keys

The support and script fields can contain either a literal string or a key that biblatex will transform into a value. They use only standard biblatex localization strings. To define your own keys, add to your preamble:

\NewBibliographyString{<key1>}
\NewBibliographyString{<key2>}
...
\NewBibliographyString{<keyn>}
\DefineBibliographyStrings{%
  <key1> = {<value1>},
  <key2> = {<value2>},
  ...,
  <keyn> = {<valuen>}
}

where <key1>, <key2> ... <keyn> must be replaced by the keys, and <value1>, <value2> ... <valuen> by the values.

### Commands

You can redefine, with \renewcommand, some commands defined in manuscripts.bbx. The commands starting with \mk... take one argument; the others take no argument. In these commands, use the punctuation commands of biblatex.

* \collectionshelfmarkpunct: the punctuation between collection and shelfmark. By default \addspace.
* \columnslayerpunct: the punctuation between columns and layer. By default \addsemicolon\addspace.
* \datingpagespunct: the punctuation between dating and pages. By default \addcomma\addspace. \isdot is automatically called when printing the dating field.
* \librarycollectionpunct: the punctuation between library and collection. By default \addcomma\addspace.
* \mkcolumns: the way the columns are printed. By default, in parens.
* \mkcolumnslayer: the way the columns and layer fields are printed together. By default, in parens.
* \mklocation: the way the location is printed. By default, with the command \mkbibnamefamily.
* \mkmanuscriptdescriptionlabel: the way the labels are printed before the special fields. By default, in bold, followed by \manuscriptdescriptionlabelpunct.
* \mkmanuscriptdescriptionlabelparagraphed: the way the labels are printed before the special fields which can contain paragraphs (e.g. contents). By default, in bold, followed by \par.
* \mkshcite: the way the shorthand is printed when using \shcite. By default, no special formatting.
* \locationlibrarypunct: the punctuation between location and library. By default \addcolon\addspace.
* \manuscriptdescriptionlabelpunct: the punctuation between label and text, for the special fields. By default \addcolon\addspace.
* \moreinterpunct: the punctuation between the special fields when they are printed in the same paragraph. By default \addcolon\addspace.
* \pagetotalpagespunct: the punctuation between pagetotal and pages. By default \addcolon\addspace.

### Commands to use in the pages field

In the pages field, you can use the \recto and \verso commands when you speak of folios. The default values are r and v, but you can change them.

### Localization strings

Some specific localization strings are defined in the manuscripts-xxx.lbx files. Read the biblatex handbook to learn how to customize them.

### Macros and field formats

The manuscripts-shared.bbx file defines bibmacros and field formats (read the biblatex handbook to learn more about bibmacros and field formats). We can't list all of them, but you can look at them to customize the manuscript descriptions more finely.

## 6 Use with biblatex-realauthor

To use this package with the features of biblatex-realauthor, you must use the package biblatex-multiple-dm.

## 7 Migration to v. 2.0.0

Version 2.0.0 introduces some changes which could require modifications to your own customizations.
1. We have decided to prefix all bibmacros concerning manuscripts with manuscript:. If you have redefined one of the following macros, or if you have created your own macros which call them, you should adapt your code:
   * annotation;
   * catalog;
   * collection+shelfmark;
   * date/dating;
   * more+annotation+catalog.
2. The support+dating bibmacro does not exist anymore.
3. Now the annotation field is introduced with a label. If you don't want this label, add to your preamble/custom style: \DeclareFieldFormat{annotation}{#1}

## 8 Change history

#### 2.1.4 2023-05-01
Add Latin translation (Domenico Cufalo).

#### 2.1.2 2020-01-07
Fix bug when changing language in the middle of a handbook.

#### 2.1.1 2019-10-02
Fix warning with sorting schemes.

#### 2.1.0 2018-09-30
Add \detailcite and \detailscites commands. Add env=details-noshorthand option to \printbibliography.

#### 2.0.0 2018-09-19
Add contents and script fields. Add new support types. Improve handbook (thanks to <NAME>). Really add Italian translation.

#### 2017-11-26
Add compatibility with biblatex 3.8.

#### 2017-01-31
Fix spurious space after citation of a manuscript.

#### 2016-10-28
Move url after folio and columns data.

#### 2016-10-23
New sorting schemes.

#### 2016-09-21
Fix typographical bug when using both layer and columns fields without the pages field.

#### 2016-09-07
Use \mkbibnamefamily instead of \mkbibnamelast (biblatex 3.3 and later).

#### 2016-06-07
Add an error message in order to detect more quickly compatibility breaks with new releases of biblatex.

#### 1.8.0 2016-03-11
Fix compatibility with biblatex 3.3.

#### 1.7.0 2016-02-10
Add Italian translation.

#### 1.6.2 2015-11-01
Fix missing line break before the scribe or owner fields when the origin field is empty.

#### 1.6.1a 2015-05-06
Fix typo in handbook. Insert correct version number in the \ProvidesFile commands.

#### 1.6.1 2014-10-21
Add \isdot after each printing of the dating field, to allow the use of abbreviations with a dot (like "c.") without an uppercase letter following. Consequently, \isdot is deleted from \datingpagespunct.

#### 1.6.0 2014-10-16
Patch some bibmacros to prevent the loss of manuscript descriptions when using the op. cit. abbreviation.

#### 1.5.0 2014-10-08
Formatting of collection+shelfmark defined in a FieldFormat. Compatibility with chicago-notes styles.

#### 1.4.0 2014-06-23
Compatibility with biblatex-multiple-dm.

#### 1.3.0 2014-06-16
Add sorting description.

#### 1.2.0 2014-04-07
Add layer field.

#### 1.1.1 2014-03-20
Delete the msnoautoshorthand option and replace it with the manuscripts-noautoshorthand style.

#### 1.1.0 2014-03-15
Add msnoautoshorthand option. Add shortcollection field. Add withshorthand and withoutshorthand bibchecks.

#### 1.0.0 2014-01-20
First public release.
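Putting the pieces together, here is a minimal, hedged sketch of a document using the package; the entry data are invented, and you must run LaTeX, then biber, then LaTeX again (twice more when using the details environment) for the lists to appear:

```
% Minimal sketch of biblatex-manuscripts-philology usage;
% the entry values below are invented for illustration.
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[bibstyle=manuscripts,citestyle=verbose]{biblatex}
\begin{filecontents}{manuscripts.bib}
@manuscript{BnF241,
  collection = {Supplément grec},
  location   = {Paris},
  library    = {Bibliothèque nationale de France},
  shelfmark  = {241},
  contents   = {Covers chapters 1--3 only.},
}
\end{filecontents}
\addbibresource{manuscripts.bib}
\begin{document}
First citation (full reference): \cite{BnF241}.
Shorthand only: \shcite{BnF241}.
Detailed description: \detailcite{BnF241}.
\printshorthands[type=manuscript,title=Conspectus siglorum]
\end{document}
```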
wpress-oxide · rust · Rust
Struct wpress_oxide::Header
===

```
pub struct Header {
    pub name: String,
    pub size: u64,
    pub mtime: u64,
    pub prefix: String,
    pub bytes: Vec<u8>,
}
```

Metadata representation of a file with the attributes necessary for an archive entry.

Fields
---

`name: String` Base name of the file from an entry.
`size: u64` Size of the file in bytes.
`mtime: u64` Last modified time in seconds since the UNIX epoch.
`prefix: String` Path of the file without its final component, the name.
`bytes: Vec<u8>` A representation of `name`, `size`, `mtime` and `prefix` as a blob of bytes. Each field is zero-padded to meet predefined boundaries.

Implementations
---

### impl Header

#### pub fn from_bytes(block: &[u8]) -> Result<Header, HeaderError>

Parse an archive metadata entry for a file from a block of bytes.

#### pub fn from_file_metadata<P: AsRef<Path>>(path: P) -> Result<Header, FileParseError>

Generate an archive metadata entry for a file given its path.

##### Example

```
use wpress_oxide::Header;

let header = Header::from_file_metadata("tests/writer/file.txt")?;
assert_eq!(header.name, "file.txt");
assert_eq!(header.size, 5);
assert_eq!(header.prefix, "tests/writer");
```

Trait Implementations
---

### impl Clone for Header

`clone` returns a copy of the value; `clone_from` performs copy-assignment from `source`.

### impl Debug for Header

`fmt` formats the value using the given formatter.

Auto Trait Implementations
---

Header is RefUnwindSafe, Send, Sync, Unpin and UnwindSafe.

Struct wpress_oxide::Reader
===

```
pub struct Reader { /* private fields */ }
```

Structure that can read, parse and extract a wpress archive file.

Implementations
---

### impl Reader

#### pub fn new<P: AsRef<Path>>(path: P) -> Result<Reader, FileParseError>

Creates a new `Reader` with the path supplied as the source file.

#### pub fn extract_to<P: AsRef<Path>>(&mut self, destination: P) -> Result<(), ExtractError>

Extracts all the files inside the archive to the provided destination directory.
##### Example

```
use wpress_oxide::Reader;

let mut r = Reader::new("tests/reader/archive.wpress")?;
r.extract_to("tests/reader_output_0")?;
```

#### pub fn extract(&mut self) -> Result<(), ExtractError>

Extracts all the files inside the archive to the current directory.

##### Example

```
use wpress_oxide::Reader;

let mut r = Reader::new("tests/reader/archive.wpress")?;
r.extract()?;
```

#### pub fn files_count(&self) -> usize

Returns the number of files in the current archive.

#### pub fn headers(&self) -> &[Header]

Returns a borrowed header slice with metadata about the files in the archive.

#### pub fn headers_owned(&self) -> Vec<Header>

Returns a copied vector of headers, i.e. metadata about the files in the archive.

#### pub fn extract_file<P: AsRef<Path>>(&mut self, filename: P, destination: P) -> Result<(), ExtractError>

Extract a single file, given either its name or its *complete path inside the archive*, to a destination directory. Preserves the directory hierarchy of the archive during extraction.

##### Examples

###### Extract all files from the archive that match a filename

```
use wpress_oxide::Reader;

let mut r = Reader::new("tests/reader/archive.wpress")?;
r.extract_file("file.txt", "tests/reader_output_1")?;
```

###### Extract a file with a specific path in the archive

```
use wpress_oxide::Reader;

let mut r = Reader::new("tests/reader/archive.wpress")?;
r.extract_file(
    "tests/writer/directory/subdirectory/file.txt",
    "tests/reader_output_2",
)?;
```

Auto Trait Implementations
---

Reader is RefUnwindSafe, Send, Sync, Unpin and UnwindSafe.

Struct wpress_oxide::Writer
===

```
pub struct Writer { /* private fields */ }
```

Structure to write multiple files and their corresponding metadata into a wpress archive.

Implementations
---

### impl Writer

#### pub fn new<P: AsRef<Path>>(path: P) -> Result<Writer, ArchiveError>

Creates a new `Writer` with the destination being the path supplied.

#### pub fn add<P: AsRef<Path>>(&mut self, path: P) -> Result<(), ArchiveError>

Lazily adds paths to the `Writer`. It merely tells the `Writer` to note the supplied path and does not write to the underlying file. To write to the underlying file, use the `write` method after `add`ing all the files.

#### pub fn write(self) -> Result<(), ArchiveError>

Writes the header structures and associated data to the underlying file handle. Since the object is consumed, the file is closed on drop, making sure we cannot incorrectly write multiple times to the same file.
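To make the add-then-write flow concrete, here is a minimal hedged sketch; the archive name and file paths are invented for illustration:

```
use wpress_oxide::Writer;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The archive name and file paths below are invented for illustration.
    let mut writer = Writer::new("backup.wpress")?;

    // `add` only records the paths; nothing is written yet.
    writer.add("site/index.html")?;
    writer.add("site/wp-config.php")?;

    // `write` consumes the Writer and emits all headers and file data;
    // the file handle is closed on drop, so it cannot be written twice.
    writer.write()?;
    Ok(())
}
```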
#### pub fn files_count(&self) -> usize

Auto Trait Implementations
---

Writer is RefUnwindSafe, Send, Sync, Unpin and UnwindSafe.

Enum wpress_oxide::ArchiveError
===

```
pub enum ArchiveError {
    FileCreation(Error),
    EntryAddition(Error),
    DirectoryTraversal(Error),
    FileParse(FileParseError),
    FileWrite(Error),
}
```

Trait Implementations
---

ArchiveError implements Debug, Display and Error; `From<FileParseError>` converts a FileParseError into an ArchiveError.

Auto Trait Implementations
---

ArchiveError is Send, Sync and Unpin, but neither RefUnwindSafe nor UnwindSafe.
Enum wpress_oxide::BlockParseError
===

```
pub enum BlockParseError {
    FromUtf8Error(Field),
    IntoU64Error(Field),
}
```

Trait Implementations
---

BlockParseError implements Debug, Display and Error; via `From<BlockParseError>` it can be converted into a HeaderError.

Auto Trait Implementations
---

BlockParseError is RefUnwindSafe, Send, Sync, Unpin and UnwindSafe.

Enum wpress_oxide::ExtractError
===

```
pub enum ExtractError {
    PathSanitization(StripPrefixError),
    FileRead(Error),
}
```
Trait Implementations
---

ExtractError implements Debug, Display and Error; `From<Error>` and `From<StripPrefixError>` convert the underlying error types into an ExtractError.

Auto Trait Implementations
---

ExtractError is Send, Sync and Unpin, but neither RefUnwindSafe nor UnwindSafe.

Enum wpress_oxide::FileParseError
===

```
pub enum FileParseError {
    Metadata,
    EmptyName,
    ReadLastModified,
    UnixEpoch,
    Length(LengthExceededError),
    Header(HeaderError),
    FileRead(Error),
}
```

Trait Implementations
---

FileParseError implements Debug, Display and Error.
`From<Error>`, `From<HeaderError>` and `From<LengthExceededError>` convert the underlying error types into a FileParseError, and `From<FileParseError>` converts a FileParseError into an ArchiveError.

Auto Trait Implementations
---

FileParseError is Send, Sync and Unpin, but neither RefUnwindSafe nor UnwindSafe.

Enum wpress_oxide::HeaderError
===

```
pub enum HeaderError {
    BlockParseError(BlockParseError),
    IncompleteHeader,
}
```

Trait Implementations
---

HeaderError implements Debug, Display and Error.
`From<BlockParseError>` converts a BlockParseError into a HeaderError, and `From<HeaderError>` converts a HeaderError into a FileParseError.

Auto Trait Implementations
---

HeaderError is RefUnwindSafe, Send, Sync, Unpin and UnwindSafe.

Enum wpress_oxide::LengthExceededError
===

```
pub enum LengthExceededError {
    Name,
    Size,
    Mtime,
    Prefix,
}
```

Trait Implementations
---

LengthExceededError implements Debug, Display and Error; `From<LengthExceededError>` converts it into a FileParseError.

Auto Trait Implementations
---

LengthExceededError is RefUnwindSafe, Send, Sync, Unpin and UnwindSafe.
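Because all failure modes are plain enums, callers can match on them directly; a hedged sketch (file names invented):

```
use wpress_oxide::{ExtractError, Reader};

fn main() {
    // File names are invented; this only illustrates matching the error enums.
    match Reader::new("missing.wpress") {
        Ok(mut reader) => match reader.extract_to("out") {
            Ok(()) => println!("extracted"),
            Err(ExtractError::PathSanitization(e)) => eprintln!("bad path in archive: {e}"),
            Err(ExtractError::FileRead(e)) => eprintln!("read failed: {e}"),
        },
        // Reader::new returns a FileParseError, which also implements Display.
        Err(e) => eprintln!("could not open archive: {e}"),
    }
}
```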
pymodbus · readthedoc · Unknown
PyModbus
Release 3.6.0dev
Open Source volunteers
Oct 19, 2023

CONTENTS:

1.1 Pymodbus in a nutshell
    1.1.1 Common features
    1.1.2 Client Features
    1.1.3 Server Features
    1.1.4 REPL Features
    1.1.5 Simulator Features
1.2 Use Cases
1.3 Install
    1.3.1 Install with pip
    1.3.2 Install with github
1.4 Example Code
1.5 Contributing
1.6 Development Instructions
    1.6.1 Architecture
    1.6.2 Generate documentation
1.7 License Information
2.1 Client performance
2.2 Client protocols/framers
    2.2.1 Serial (RS-485)
    2.2.2 TCP
    2.2.3 TLS
    2.2.4 UDP
2.3 Client usage
2.4 Client device addressing
2.5 Client response handling
2.6 Client interface classes
    2.6.1 Client serial
    2.6.2 Client TCP
    2.6.3 Client TLS
    2.6.4 Client UDP
2.7 Modbus calls
4.1 Dependencies
4.2 Usage Instructions
4.3 DEMO
4.4 REPL client classes
4.5 REPL server classes
    4.5.1 Pymodbus REPL (Read Evaluate Print Loop)
        4.5.1.1 Pymodbus REPL Client
        4.5.1.2 Pymodbus REPL Server
5.1 Configuration
    5.1.1 Json file layout
    5.1.2 Server entries
    5.1.3 Server configuration examples
    5.1.4 Device entries
        5.1.4.1 Setup section
        5.1.4.2 Invalid section
        5.1.4.3 Write section
        5.1.4.4 Bits section
        5.1.4.5 Uint16 section
        5.1.4.6 Uint32 section
        5.1.4.7 Float32 section
        5.1.4.8 String section
        5.1.4.9 Repeat section
    5.1.5 Device configuration examples
    5.1.6 Configuration used for test
5.2 Simulator datastore
5.3 Web frontend
    5.3.1 pymodbus.simulator
5.4 Pymodbus simulator ReST API
6.1 Ready to run examples
    6.1.1 Simple asynchronous client
    6.1.2 Simple synchronous client
    6.1.3 Client performance sync vs async
6.2 Advanced examples
    6.2.1 Client asynchronous calls
    6.2.2 Client asynchronous
    6.2.3 Client calls
    6.2.4 Client custom message
    6.2.5 Client payload
    6.2.6 Client synchronous
    6.2.7 Server asynchronous
    6.2.8 Server callback
    6.2.9 Server tracer
    6.2.10 Server payload
    6.2.11 Server synchronous
    6.2.12 Server updating
    6.2.13 Simulator example
    6.2.14 Simulator datastore example
    6.2.15 Message generator
    6.2.16 Message Parser
    6.2.17 Modbus forwarder
6.3 Examples contributions
    6.3.1 Solar
    6.3.2 Redis datastore
    6.3.3 Serial Forwarder
    6.3.4 Sqlalchemy datastore
7.1 Pymodbus version 3 family
7.2 Pymodbus version 2 family
7.3 Pymodbus version 1 family
7.4 Pymodbus version 0 family
8.1 Version 3.5.4
8.2 Version 3.5.3
8.3 Version 3.5.2
8.4 Version 3.5.1
8.5 Version 3.5.0
8.6 Version 3.4.1
8.7 Version 3.4.0
8.8 Version 3.3.2
8.9 Version 3.3.1
8.10 Version 3.3.0
8.11 Version 3.2.2
8.12 Version 3.2.1
8.13 Version 3.2.0
8.14 Version 3.1.3
8.15 Version 3.1.2
8.16 Version 3.1.1
8.17 Version 3.1.0
8.18 Version 3.0.2
8.19 Version 3.0.1
8.20 Version 3.0.0
8.21 Version 2.5.3
8.22 Version 2.5.2
8.23 Version 2.5.1
8.24 Version 2.5.0
8.25 Version 2.4.0
8.26 Version 2.3.0
8.27 Version 2.2.0
8.28 Version 2.1.0
8.29 Version 2.0.1
8.30 Version 2.0.0
8.31 Version 1.5.2
8.32 Version 1.5.1
8.33 Version 1.5.0
8.34 Version 1.4.0
8.35 Version 1.3.2
8.36 Version 1.3.1
8.37 Version 1.2.0
8.38 Version 1.1.0
8.39 Version 1.0.0
9.1 API changes 3.6.0 (future)
9.2 API changes 3.5.0
9.3 API changes 3.4.0
9.4 API changes 3.3.0
9.5 API changes 3.2.0
9.6 API changes 3.1.0
9.7 API changes 3.0.0
10.1 NullModem
10.2 Datastore
    10.2.1 Datastore classes
10.3 Framer
    10.3.1 pymodbus.framer.ascii_framer module
    10.3.2 pymodbus.framer.binary_framer module
    10.3.3 pymodbus.framer.rtu_framer module
    10.3.4 pymodbus.framer.socket_framer module
10.4 Constants
10.5 Extra functions

Please select a topic in the left hand column.

CHAPTER ONE

PYMODBUS - A PYTHON MODBUS STACK

Pymodbus is a full Modbus protocol implementation offering client/server with synchronous/asynchronous API as well as simulators.

Current release is 3.5.4.

Bleeding edge (not released) is dev.

All changes are described in release notes and all API changes are documented.

A big thanks to all the volunteers that help make pymodbus a great project.

Source code on github

1.1 Pymodbus in a nutshell

Pymodbus consists of 5 parts:

• client, connect to your favorite device(s)
• server, simulate your favorite device(s)
• repl, a commandline text based client/server simulator
• simulator, an html based server simulator
• examples, showing both simple and advanced usage

1.1.1 Common features

• Full modbus standard protocol implementation
• Support for custom function codes
• support serial (rs-485), tcp, tls and udp communication
• support all standard frames: socket, rtu, rtu-over-tcp, tcp and ascii
• does not have third party dependencies, apart from pyserial (optional)
• very lightweight project
• requires Python >= 3.8
• thorough test suite, that tests all corners of the library
• automatically tested on Windows, Linux and MacOS combined with python 3.8 - 3.12
• strongly typed API (py.typed present)

1.1.2 Client Features

• asynchronous API and synchronous API for applications
• very simple setup and call sequence (just 6 lines of code)
• utilities to convert int/float to/from multiple registers
• payload builder/decoder to help with complex data

Client documentation

1.1.3 Server Features

• asynchronous implementation for high performance
• synchronous API classes for convenience
• simulate real life devices
• full server control context (device information, counters, etc)
• different backend datastores to manage register values
• callback to intercept requests/responses
• work on RS485 in parallel with other devices

Server documentation

1.1.4 REPL Features

• server/client commandline emulator
• easy test of real device (client)
• easy test of client app (server)
• simulation of broken requests/responses
• simulation of error responses (hard to provoke in real devices)

REPL documentation

1.1.5 Simulator Features

• server simulator with WEB interface
• configure the structure of a real device
• monitor traffic online
• allow distributed team members to work on a virtual device using internet
• simulation of broken requests/responses
• simulation of error responses (hard to provoke in real devices)

Simulator documentation

1.2 Use Cases

The client is the most typically used part. It is embedded into applications, where it abstracts the modbus protocol from the application by providing an easy to use API. The client is integrated into some well known projects like home-assistant.
Although most system administrators will find little need for a Modbus server, the server is handy for verifying the functionality of an application. The simulator and/or server is often used to simulate real life devices when testing applications. The server is excellent for performing high volume testing (e.g. hundreds of devices connected to the application). The advantage of the server is that it runs not only on "normal" computers but also on small ones like the Raspberry Pi.

Since the library is written in python, it allows for easy scripting and/or integration into existing solutions.

For more information please browse the project documentation:

https://readthedocs.org/docs/pymodbus/en/latest/index.html

1.3 Install

The library is available on pypi.org and github.com, to install with

• pip, for those who just want to use the library
• git clone, for those who want to help or are just curious

Be aware that there are a number of projects that have forked pymodbus and

• seem just to provide a version frozen in time
• extend pymodbus with extra functionality

The latter is not because we rejected the extra functionality (we welcome all changes), but because the codeowners made that decision. In both cases, please understand, we cannot offer support to the users of these projects as we do not know what has been changed nor what status the forked code has.

A growing number of Linux distributions include pymodbus in their standard installation.

You need to have python3 installed, preferably 3.11.

1.3.1 Install with pip

You can install using pip by issuing the following commands in a terminal window:

pip install pymodbus

If you want to use the serial interface:

pip install pymodbus[serial]

This will install pymodbus with the pyserial dependency.

Pymodbus offers a number of extra options:

• repl, needed by pymodbus.repl
• serial, needed for serial communication
• simulator, needed by pymodbus.simulator
• documentation, needed to generate documentation
• development, needed for development
• all, installs all of the above

which can be installed as:

pip install pymodbus[<option>,...]

It is possible to install old releases if needed:

pip install pymodbus==3.5.4

1.3.2 Install with github

On github, fork https://github.com/pymodbus-dev/pymodbus.git

Clone the source, and make a virtual environment:

git clone git://github.com/<your account>/pymodbus.git
cd pymodbus
python3 -m venv .venv

Activate the virtual environment, this command needs to be repeated in every new terminal:

source .venv/bin/activate

To get a specific release:

git checkout v3.5.2

or the bleeding edge:

git checkout dev

Install required development tools:

pip install ".[development]"
pre-commit install

Install all (allows creation of documentation etc):

pip install ".[all]"
pre-commit install

This installs dependencies in your virtual environment with pointers directly to the pymodbus directory, so any change you make is immediately available as if installed. It will also install pre-commit git hooks, ensuring your commits are verified.

The repository contains a number of important branches and tags.

• dev is where all development happens, this branch is not always stable.
• master is where releases are kept.
• vX.Y.Z (e.g. v2.5.3) is a specific release.
1.4 Example Code

For those of you that just want to get started fast, here you go:

from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient('MyDevice.lan')
client.connect()
client.write_coil(1, True)
result = client.read_coils(1, 1)
print(result.bits[0])
client.close()

We provide a couple of simple ready to go clients:

• async client
• sync client

For more advanced examples, check out the Examples included in the repository. If you have created any utilities that meet a specific need, feel free to submit them so others can benefit.

Also, if you have a question, please create a post in the discussions q&a topic, so that others can benefit from the results.

If you think that something in the code is broken/not running well, please open an issue, read the template text first and then post your issue with your setup information.

Example documentation

1.5 Contributing

Just fork the repo and raise your Pull Request against the dev branch.

We always have more work than time, so feel free to open a discussion/issue on a theme you want to solve.

If your company would like your device tested or has a cloud based device simulation, feel free to contact us. We are happy to help your company solve your modbus challenges.

That said, the current work mainly involves polishing the library and solving issues:

• Fixing bugs/feature requests
• Architecture documentation
• Functional testing against any reference we can find

There are 2 bigger projects ongoing:

• rewriting the internal part of all clients (both sync and async)
• adding features to the simulator, and enhancing the web design

1.6 Development Instructions

The current code base is compatible with python >= 3.8.

Here are some of the common commands to perform a range of activities:

source .venv/bin/activate   <-- Activate the virtual environment
./check_ci.sh               <-- run the same checks as CI runs on a pull request.

Make a pull request:

git checkout dev            <-- activate development branch
git pull                    <-- update branch with newest changes
git checkout -b feature     <-- make new branch for pull request
... make source changes
git commit                  <-- commit change to git
git push                    <-- push to your account on github

On github, open a pull request, check that CI turns green and then wait for review comments.

Test your changes:

cd pytest
pytest

1.6.1 Architecture

There is no documentation of the architecture (help is welcome), but most classes and methods are documented:

Pymodbus internals

1.6.2 Generate documentation

Remark: this assumes that you have installed the documentation tools:

pip install ".[documentation]"

To build, do:

cd doc
./build_html

The documentation is available in <root>/build/html

1.7 License Information

Released under the BSD License

CHAPTER TWO

CLIENT

Pymodbus offers both a synchronous client and an asynchronous client. Both clients offer simple calls for each type of request, as well as a unified response, removing a lot of the complexities in the modbus protocol.

In addition to the "pure" client, pymodbus offers a set of utilities for converting registers to/from "normal" python values.

The client is NOT thread safe, meaning the application must ensure that calls are serialized. This is only a problem for synchronous applications that use multiple threads or for asynchronous applications that use multiple asyncio.create_task. It is allowed to have multiple client objects that e.g. each communicate with a TCP based device.
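The register/value conversion utilities mentioned above can be sketched as follows; this assumes the BinaryPayloadBuilder/BinaryPayloadDecoder API of the 3.x series, so treat it as a hedged illustration rather than the definitive interface:

```
# Hedged sketch of converting python values to/from 16-bit registers,
# assuming the BinaryPayloadBuilder/Decoder API of the 3.x series.
from pymodbus.constants import Endian
from pymodbus.payload import BinaryPayloadBuilder, BinaryPayloadDecoder

# Encode a float and an int into a list of 16-bit registers.
builder = BinaryPayloadBuilder(byteorder=Endian.BIG, wordorder=Endian.BIG)
builder.add_32bit_float(3.14)
builder.add_16bit_int(-7)
registers = builder.to_registers()

# Decode them back, e.g. after client.read_holding_registers(...).
decoder = BinaryPayloadDecoder.fromRegisters(
    registers, byteorder=Endian.BIG, wordorder=Endian.BIG
)
print(decoder.decode_32bit_float())  # 3.14 (approximately)
print(decoder.decode_16bit_int())    # -7
```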
2.1 Client performance

There is currently a big performance gap between the two clients (try it on your computer: performance test). This is due to a rather old implementation of the synchronous client; we are currently working to update the client code. Our aim is to achieve a similar data rate with both clients and at least double the data rate, while keeping the stability.

The table below is a test with 1000 calls, each reading 10 registers.

client          asynchronous   synchronous
total time      0.33 sec       114.10 sec
ms/call         0.33 ms        114.10 ms
ms/register     0.03 ms        11.41 ms
calls/sec       3,030          8
registers/sec   30,300         87

2.2 Client protocols/framers

Pymodbus offers clients with different transport protocols and different framers:

protocol          ASCII   RTU   RTU_OVER_TCP   Socket   TLS
Serial (RS-485)   Yes     Yes   No             No       No
TCP               Yes     No    Yes            Yes      No
TLS               No      No    No             No       Yes
UDP               Yes     No    Yes            Yes      No

2.2.1 Serial (RS-485)

Pymodbus does not connect to the device (server) but connects to a comm port or usb port on the local computer.

RS-485 is a half duplex protocol, meaning the servers do nothing until the client sends a request; then the server being addressed responds. The client controls the traffic and as a consequence one RS-485 line can have only 1 client but up to 254 servers (physical devices).

RS-485 is a simple 2 wire cabling with a pullup resistor. It is important to note that many USB converters do not have a builtin resistor; this must be added manually. When experiencing many faulty packets and retries, this is often the problem.

2.2.2 TCP

Pymodbus connects directly to the device using a standard socket and has a one-to-one connection with the device. In case of multiple TCP devices the application must instantiate multiple client objects, one for each connection.

Tip: a TCP device often represents multiple physical devices (e.g. an Ethernet-RS485 converter); each of these devices can be addressed normally.

2.2.3 TLS

A variant of TCP that uses encryption and certificates. TLS is mostly used when the devices are connected to the internet.

2.2.4 UDP

A broadcast variant of TCP. UDP allows addressing of many devices with a single request; however, there is no control that a device has received the packet.

2.3 Client usage

Using a pymodbus client to set/get information from a device (server) is done in a few simple steps, like the following synchronous example:

from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient('MyDevice.lan')    # Create client object
client.connect()                            # connect to device, reconnect automatically
client.write_coil(1, True, slave=1)         # set information in device
result = client.read_coils(2, 3, slave=1)   # get information from device
print(result.bits[0])                       # use information
client.close()                              # Disconnect device

and an asynchronous example:

from pymodbus.client import AsyncModbusTcpClient

client = AsyncModbusTcpClient('MyDevice.lan')     # Create client object
await client.connect()                            # connect to device, reconnect automatically
await client.write_coil(1, True, slave=1)         # set information in device
result = await client.read_coils(2, 3, slave=1)   # get information from device
print(result.bits[0])                             # use information
client.close()                                    # Disconnect device

The line client = AsyncModbusTcpClient('MyDevice.lan') only creates the object; it does not activate anything.
2.3 Client usage

Using a pymodbus client to set/get information from a device (server) is done in a few simple steps, like the following synchronous example:

    from pymodbus.client import ModbusTcpClient

    client = ModbusTcpClient('MyDevice.lan')    # Create client object
    client.connect()                            # connect to device, reconnect automatically
    client.write_coil(1, True, slave=1)         # set information in device
    result = client.read_coils(2, 3, slave=1)   # get information from device
    print(result.bits[0])                       # use information
    client.close()                              # Disconnect device

and an asynchronous example:

    from pymodbus.client import AsyncModbusTcpClient

    client = AsyncModbusTcpClient('MyDevice.lan')      # Create client object
    await client.connect()                             # connect to device, reconnect automatically
    await client.write_coil(1, True, slave=1)          # set information in device
    result = await client.read_coils(2, 3, slave=1)    # get information from device
    print(result.bits[0])                              # use information
    client.close()                                     # Disconnect device

The line client = AsyncModbusTcpClient('MyDevice.lan') only creates the object; it does not activate anything.

The line await client.connect() connects to the device (or comm port); if this cannot connect successfully within the timeout, it throws an exception. After a successful connect, reconnecting later is handled automatically.

The line await client.write_coil(1, True, slave=1) is an example of a write request: set address 1 to True on device 1 (slave).

The line result = await client.read_coils(2, 3, slave=1) is an example of a read request: get the values of addresses 2, 3 and 4 (count = 3) from device 1 (slave).

The last line client.close() closes the connection and renders the object inactive.

Large parts of the implementation are shared between the different classes, to ensure high stability and efficient maintenance.

The synchronous clients are not thread safe, nor is a single client intended to be used from multiple threads. Due to the nature of the modbus protocol, it makes little sense to split client calls over different threads; however, the application can do it with proper locking implemented.

The asynchronous client only runs in the thread where the asyncio loop is created. It does not provide mechanisms to prevent (semi)parallel calls; that must be prevented at application level.

2.4 Client device addressing

With TCP, TLS and UDP, the tcp/ip address of the physical device is defined when creating the object. The logical devices represented by the device are addressed with the slave= parameter.

With Serial, the comm port is defined when creating the object. The physical devices are addressed with the slave= parameter.

slave=0 is used as broadcast in order to address all devices. However, experience shows that modern devices do not allow broadcast, mostly because it is inherently dangerous. With slave=0 the application can get up to 254 responses on a single request!

The simple request calls (mixin) do NOT support broadcast; if an application wants to use broadcast, it must call client.execute and deal with the responses.

2.5 Client response handling

All simple request calls (mixin) return a unified result, independent of whether it's a read, write or diagnostic call. The application should evaluate the result generically:

    try:
        rr = await client.read_coils(1, 1, slave=1)
    except ModbusException as exc:
        _logger.error(f"ERROR: exception in pymodbus {exc}")
        raise exc
    if rr.isError():
        _logger.error("ERROR: pymodbus returned an error!")
        raise ModbusException("pymodbus returned an error")

except ModbusException as exc: happens generally when pymodbus experiences an internal error. There are a few situations where an unexpected response from a device can cause an exception.

rr.isError() is set whenever the device reports a problem.

In case of a read, retrieve the data depending on the type of request:
• rr.bits is set for coil / discrete_input requests,
• rr.registers is set for other requests.
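A compact way to apply that pattern is a small helper; this is a sketch using only the calls documented above (the helper name is illustrative, and the top-level ModbusException import is an assumption about this release's package layout):

    from pymodbus import ModbusException
    from pymodbus.client import ModbusTcpClient

    def read_registers_checked(client: ModbusTcpClient, address: int, count: int, slave: int = 1) -> list[int]:
        # Transport errors surface as exceptions from the call itself;
        # device-reported errors surface via isError() on the response.
        rr = client.read_holding_registers(address, count, slave=slave)
        if rr.isError():
            raise ModbusException(f"device error reading {count} registers at {address}")
        return rr.registers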
2.6 Client interface classes

There is a client class for each type of communication, in asynchronous and synchronous variants:

              asynchronous               synchronous
    Serial    AsyncModbusSerialClient    ModbusSerialClient
    TCP       AsyncModbusTcpClient       ModbusTcpClient
    TLS       AsyncModbusTlsClient       ModbusTlsClient
    UDP       AsyncModbusUdpClient       ModbusUdpClient

2.6.1 Client serial

class pymodbus.client.AsyncModbusSerialClient(port: str, framer: Framer = Framer.RTU, baudrate: int = ..., **kwargs: Any)

Bases: ModbusBaseClient, Protocol

AsyncModbusSerialClient.

Fixed parameters:
• port – Serial port used for communication.

Optional parameters:
• baudrate – Bits per second.
• bytesize – Number of bits per byte, 7-8.
• parity – 'E'ven, 'O'dd or 'N'one.
• stopbits – Number of stop bits, 0-2.
• handle_local_echo – Discard local echo from dongle.

Common optional parameters:
• framer – Framer enum name.
• timeout – Timeout for a request, in seconds.
• retries – Max number of retries per request.
• retry_on_empty – Retry on empty response.
• close_comm_on_error – Close connection on error.
• strict – Strict timing, 1.5 characters between requests.
• broadcast_enable – True to treat id 0 as broadcast address.
• reconnect_delay – Minimum delay in milliseconds before reconnecting.
• reconnect_delay_max – Maximum delay in milliseconds before reconnecting.
• on_reconnect_callback – Function that will be called just before a reconnection attempt.
• no_resend_on_retry – Do not resend request when retrying due to missing response.
• kwargs – Experimental parameters.

Example:

    from pymodbus.client import AsyncModbusSerialClient

    async def run():
        client = AsyncModbusSerialClient("/dev/serial0")
        await client.connect()
        ...
        client.close()

Please refer to Pymodbus internals for advanced usage.

async connect() → bool
    Connect Async client.

close(reconnect: bool = False) → None
    Close connection.

class pymodbus.client.ModbusSerialClient(port: str, framer: Framer = Framer.RTU, baudrate: int = ..., **kwargs: Any)

Bases: ModbusBaseClient

ModbusSerialClient.

Fixed parameters:
• port – Serial port used for communication.

Optional parameters:
• baudrate – Bits per second.
• bytesize – Number of bits per byte, 7-8.
• parity – 'E'ven, 'O'dd or 'N'one.
• stopbits – Number of stop bits, 0-2.
• handle_local_echo – Discard local echo from dongle.

Common optional parameters:
• framer – Framer enum name.
• timeout – Timeout for a request, in seconds.
• retries – Max number of retries per request.
• retry_on_empty – Retry on empty response.
• close_comm_on_error – Close connection on error.
• strict – Strict timing, 1.5 characters between requests.
• broadcast_enable – True to treat id 0 as broadcast address.
• reconnect_delay – Minimum delay in milliseconds before reconnecting.
• reconnect_delay_max – Maximum delay in milliseconds before reconnecting.
• on_reconnect_callback – Function that will be called just before a reconnection attempt.
• no_resend_on_retry – Do not resend request when retrying due to missing response.
• kwargs – Experimental parameters.

Example:

    from pymodbus.client import ModbusSerialClient

    def run():
        client = ModbusSerialClient("/dev/serial0")
        client.connect()
        ...
        client.close()

Please refer to Pymodbus internals for advanced usage.

Remark: There is no automatic reconnect as with AsyncModbusSerialClient.

property connected
    Connect internal.

connect()
    Connect to the modbus serial server.

close()
    Close the underlying socket connection.

send(request)
    Send data on the underlying socket. If the receive buffer still holds some data, flush it. Sleep if the last send finished less than 3.5 character times ago.

recv(size)
    Read data from the underlying descriptor.

is_socket_open()
    Check if socket is open.

2.6.2 Client TCP

class pymodbus.client.AsyncModbusTcpClient(host: str, port: int = 502, framer: Framer = Framer.SOCKET, source_address: tuple[str, int] | None = None, **kwargs: Any)

Bases: ModbusBaseClient, Protocol

AsyncModbusTcpClient.
Fixed parameters: Parameters host – Host IP address or host name Optional parameters: Parameters • port – Port used for communication • source_address – source address of client Common optional parameters: Parameters • framer – Framer enum name • timeout – Timeout for a request, in seconds. • retries – Max number of retries per request. • retry_on_empty – Retry on empty response. • close_comm_on_error – Close connection on error. • strict – Strict timing, 1.5 character between requests. • broadcast_enable – True to treat id 0 as broadcast address. • reconnect_delay – Minimum delay in milliseconds before reconnecting. • reconnect_delay_max – Maximum delay in milliseconds before reconnecting. • on_reconnect_callback – Function that will be called just before a reconnection attempt. • no_resend_on_retry – Do not resend request when retrying due to missing response. • kwargs – Experimental parameters. Example: PyModbus, Release 3.6.0dev from pymodbus.client import AsyncModbusTcpClient async def run(): client = AsyncModbusTcpClient("localhost") await client.connect() ... client.close() Please refer to Pymodbus internals for advanced usage. async connect() → bool Initiate connection to start client. close(reconnect: bool = False) → None Close connection. class pymodbus.client.ModbusTcpClient(host: str, port: int = 502, framer: Framer = Framer.SOCKET , source_address: tuple[str, int] | None = None, **kwargs: Any) Bases: ModbusBaseClient ModbusTcpClient. Fixed parameters: Parameters host – Host IP address or host name Optional parameters: Parameters • port – Port used for communication • source_address – source address of client Common optional parameters: Parameters • framer – Framer enum name • timeout – Timeout for a request, in seconds. • retries – Max number of retries per request. • retry_on_empty – Retry on empty response. • close_comm_on_error – Close connection on error. • strict – Strict timing, 1.5 character between requests. • broadcast_enable – True to treat id 0 as broadcast address. • reconnect_delay – Minimum delay in milliseconds before reconnecting. • reconnect_delay_max – Maximum delay in milliseconds before reconnecting. • on_reconnect_callback – Function that will be called just before a reconnection attempt. • no_resend_on_retry – Do not resend request when retrying due to missing response. • kwargs – Experimental parameters. Example: PyModbus, Release 3.6.0dev from pymodbus.client import ModbusTcpClient async def run(): client = ModbusTcpClient("localhost") client.connect() ... client.close() Please refer to Pymodbus internals for advanced usage. Remark: There are no automatic reconnect as with AsyncModbusTcpClient property connected Connect internal. connect() Connect to the modbus tcp server. close() Close the underlying socket connection. send(request) Send data on the underlying socket. recv(size) Read data from the underlying descriptor. is_socket_open() Check if socket is open. 2.6.3 Client TLS class pymodbus.client.AsyncModbusTlsClient(host: str, port: int = 802, framer: Framer = Framer.TLS, sslctx: SSLContext | None = None, certfile: str | None = None, keyfile: str | None = None, password: str | None = None, server_hostname: str | None = None, **kwargs: Any) Bases: AsyncModbusTcpClient AsyncModbusTlsClient. 
Fixed parameters: Parameters host – Host IP address or host name Optional parameters: Parameters • port – Port used for communication • source_address – Source address of client • sslctx – SSLContext to use for TLS • certfile – Cert file path for TLS server request • keyfile – Key file path for TLS server request PyModbus, Release 3.6.0dev • password – Password for for decrypting private key file • server_hostname – Bind certificate to host Common optional parameters: Parameters • framer – Framer enum name • timeout – Timeout for a request, in seconds. • retries – Max number of retries per request. • retry_on_empty – Retry on empty response. • close_comm_on_error – Close connection on error. • strict – Strict timing, 1.5 character between requests. • broadcast_enable – True to treat id 0 as broadcast address. • reconnect_delay – Minimum delay in milliseconds before reconnecting. • reconnect_delay_max – Maximum delay in milliseconds before reconnecting. • on_reconnect_callback – Function that will be called just before a reconnection attempt. • no_resend_on_retry – Do not resend request when retrying due to missing response. • kwargs – Experimental parameters. Example: from pymodbus.client import AsyncModbusTlsClient async def run(): client = AsyncModbusTlsClient("localhost") await client.connect() ... client.close() Please refer to Pymodbus internals for advanced usage. async connect() → bool Initiate connection to start client. class pymodbus.client.ModbusTlsClient(host: str, port: int = 802, framer: Framer = Framer.TLS, sslctx: SSLContext | None = None, certfile: str | None = None, keyfile: str | None = None, password: str | None = None, server_hostname: str | None = None, **kwargs: Any) Bases: ModbusTcpClient ModbusTlsClient. Fixed parameters: Parameters host – Host IP address or host name Optional parameters: Parameters • port – Port used for communication PyModbus, Release 3.6.0dev • source_address – Source address of client • sslctx – SSLContext to use for TLS • certfile – Cert file path for TLS server request • keyfile – Key file path for TLS server request • password – Password for decrypting private key file • server_hostname – Bind certificate to host • kwargs – Experimental parameters Common optional parameters: Parameters • framer – Framer enum name • timeout – Timeout for a request, in seconds. • retries – Max number of retries per request. • retry_on_empty – Retry on empty response. • close_comm_on_error – Close connection on error. • strict – Strict timing, 1.5 character between requests. • broadcast_enable – True to treat id 0 as broadcast address. • reconnect_delay – Minimum delay in milliseconds before reconnecting. • reconnect_delay_max – Maximum delay in milliseconds before reconnecting. • on_reconnect_callback – Function that will be called just before a reconnection attempt. • no_resend_on_retry – Do not resend request when retrying due to missing response. • kwargs – Experimental parameters. Example: from pymodbus.client import ModbusTlsClient async def run(): client = ModbusTlsClient("localhost") client.connect() ... client.close() Please refer to Pymodbus internals for advanced usage. Remark: There are no automatic reconnect as with AsyncModbusTlsClient property connected Connect internal. connect() Connect to the modbus tls server. 
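As a hedged illustration of the sslctx parameter described above (the certificate paths are placeholders, and ssl is the standard library module; whether your device needs client certificates at all depends on its TLS setup):

    import ssl
    from pymodbus.client import AsyncModbusTlsClient

    # Verify the server against a known CA and present a client certificate.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca.crt")
    ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")
    client = AsyncModbusTlsClient("MyDevice.lan", port=802, sslctx=ctx)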
PyModbus, Release 3.6.0dev 2.6.4 Client UDP class pymodbus.client.AsyncModbusUdpClient(host: str, port: int = 502, framer: Framer = Framer.SOCKET , source_address: tuple[str, int] | None = None, **kwargs: Any) Bases: ModbusBaseClient, Protocol, DatagramProtocol AsyncModbusUdpClient. Fixed parameters: Parameters host – Host IP address or host name Optional parameters: Parameters • port – Port used for communication. • source_address – source address of client, Common optional parameters: Parameters • framer – Framer enum name • timeout – Timeout for a request, in seconds. • retries – Max number of retries per request. • retry_on_empty – Retry on empty response. • close_comm_on_error – Close connection on error. • strict – Strict timing, 1.5 character between requests. • broadcast_enable – True to treat id 0 as broadcast address. • reconnect_delay – Minimum delay in milliseconds before reconnecting. • reconnect_delay_max – Maximum delay in milliseconds before reconnecting. • on_reconnect_callback – Function that will be called just before a reconnection attempt. • no_resend_on_retry – Do not resend request when retrying due to missing response. • kwargs – Experimental parameters. Example: from pymodbus.client import AsyncModbusUdpClient async def run(): client = AsyncModbusUdpClient("localhost") await client.connect() ... client.close() Please refer to Pymodbus internals for advanced usage. PyModbus, Release 3.6.0dev property connected Return true if connected. class pymodbus.client.ModbusUdpClient(host: str, port: int = 502, framer: Framer = Framer.SOCKET , source_address: tuple[str, int] | None = None, **kwargs: Any) Bases: ModbusBaseClient ModbusUdpClient. Fixed parameters: Parameters host – Host IP address or host name Optional parameters: Parameters • port – Port used for communication. • source_address – source address of client, Common optional parameters: Parameters • framer – Framer enum name • timeout – Timeout for a request, in seconds. • retries – Max number of retries per request. • retry_on_empty – Retry on empty response. • close_comm_on_error – Close connection on error. • strict – Strict timing, 1.5 character between requests. • broadcast_enable – True to treat id 0 as broadcast address. • reconnect_delay – Minimum delay in milliseconds before reconnecting. • reconnect_delay_max – Maximum delay in milliseconds before reconnecting. • on_reconnect_callback – Function that will be called just before a reconnection attempt. • no_resend_on_retry – Do not resend request when retrying due to missing response. • kwargs – Experimental parameters. Example: from pymodbus.client import ModbusUdpClient async def run(): client = ModbusUdpClient("localhost") client.connect() ... client.close() Please refer to Pymodbus internals for advanced usage. Remark: There are no automatic reconnect as with AsyncModbusUdpClient PyModbus, Release 3.6.0dev property connected Connect internal. 2.7 Modbus calls Pymodbus makes all standard modbus requests/responses available as simple calls. Using Modbus<transport>Client.register() custom messagees can be added to pymodbus, and handled automatically. class pymodbus.client.mixin.ModbusClientMixin Bases: object ModbusClientMixin. This is an interface class to facilitate the sending requests/receiving responses like read_coils. execute() allows to make a call with non-standard or user defined function codes (remember to add a PDU in the transport class to interpret the request/response). 
Simple modbus message call: response = client.read_coils(1, 10) # or response = await client.read_coils(1, 10) Advanced modbus message call: request = ReadCoilsRequest(1,10) response = client.execute(request) # or request = ReadCoilsRequest(1,10) response = await client.execute(request) Tip: All methods can be used directly (synchronous) or with await <method> (asynchronous) depending on the client used. execute(request: ModbusRequest) → ModbusResponse | Awaitable[ModbusResponse] Execute request (code ???). Parameters request – Request to send Raises ModbusException – Call with custom function codes. Tip: Response is not interpreted. read_coils(address: int, count: int = 1, slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Read coils (code 0x01). Parameters PyModbus, Release 3.6.0dev • address – Start address to read from • count – (optional) Number of coils to read • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – read_discrete_inputs(address: int, count: int = 1, slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Read discrete inputs (code 0x02). Parameters • address – Start address to read from • count – (optional) Number of coils to read • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – read_holding_registers(address: int, count: int = 1, slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Read holding registers (code 0x03). Parameters • address – Start address to read from • count – (optional) Number of coils to read • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – read_input_registers(address: int, count: int = 1, slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Read input registers (code 0x04). Parameters • address – Start address to read from • count – (optional) Number of coils to read • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – write_coil(address: int, value: bool, slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Write single coil (code 0x05). PyModbus, Release 3.6.0dev Parameters • address – Address to write to • value – Boolean to write • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – write_register(address: int, value: int, slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Write register (code 0x06). Parameters • address – Address to write to • value – Value to write • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – read_exception_status(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Read Exception Status (code 0x07). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_query_data(msg: bytearray, slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose query data (code 0x08 sub 0x00). Parameters • msg – Message to be returned • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_restart_communication(toggle: bool, slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose restart communication (code 0x08 sub 0x01). Parameters • toggle – True if toggled. 
PyModbus, Release 3.6.0dev • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_read_diagnostic_register(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose read diagnostic register (code 0x08 sub 0x02). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_change_ascii_input_delimeter(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose change ASCII input delimiter (code 0x08 sub 0x03). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_force_listen_only(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose force listen only (code 0x08 sub 0x04). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_clear_counters(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose clear counters (code 0x08 sub 0x0A). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_read_bus_message_count(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose read bus message count (code 0x08 sub 0x0B). Parameters • slave – (optional) Modbus slave ID PyModbus, Release 3.6.0dev • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_read_bus_comm_error_count(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose read Bus Communication Error Count (code 0x08 sub 0x0C). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_read_bus_exception_error_count(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose read Bus Exception Error Count (code 0x08 sub 0x0D). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_read_slave_message_count(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose read Slave Message Count (code 0x08 sub 0x0E). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_read_slave_no_response_count(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose read Slave No Response Count (code 0x08 sub 0x0F). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_read_slave_nak_count(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose read Slave NAK Count (code 0x08 sub 0x10). Parameters • slave – (optional) Modbus slave ID PyModbus, Release 3.6.0dev • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_read_slave_busy_count(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose read Slave Busy Count (code 0x08 sub 0x11). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_read_bus_char_overrun_count(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose read Bus Character Overrun Count (code 0x08 sub 0x12). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. 
Raises ModbusException – diag_read_iop_overrun_count(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose read Iop overrun count (code 0x08 sub 0x13). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_clear_overrun_counter(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose Clear Overrun Counter and Flag (code 0x08 sub 0x14). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_getclear_modbus_response(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose Get/Clear modbus plus (code 0x08 sub 0x15). Parameters • slave – (optional) Modbus slave ID PyModbus, Release 3.6.0dev • kwargs – (optional) Experimental parameters. Raises ModbusException – diag_get_comm_event_counter(**kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose get event counter (code 0x0B). Parameters kwargs – (optional) Experimental parameters. Raises ModbusException – diag_get_comm_event_log(**kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Diagnose get event counter (code 0x0C). Parameters kwargs – (optional) Experimental parameters. Raises ModbusException – write_coils(address: int, values: list[bool] | bool, slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Write coils (code 0x0F). Parameters • address – Start address to write to • values – List of booleans to write, or a single boolean to write • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – write_registers(address: int, values: list[int] | int, slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Write registers (code 0x10). Parameters • address – Start address to write to • values – List of values to write, or a single value to write • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. Raises ModbusException – report_slave_id(slave: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Report slave ID (code 0x11). Parameters • slave – (optional) Modbus slave ID • kwargs – (optional) Experimental parameters. PyModbus, Release 3.6.0dev Raises ModbusException – read_file_record(records: list[tuple], **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Read file record (code 0x14). Parameters • records – List of (Reference type, File number, Record Number, Record Length) • kwargs – (optional) Experimental parameters. Raises ModbusException – write_file_record(records: list[tuple], **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Write file record (code 0x15). Parameters • records – List of (Reference type, File number, Record Number, Record Length) • kwargs – (optional) Experimental parameters. Raises ModbusException – mask_write_register(address: int = 0, and_mask: int = 65535, or_mask: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse] Mask write register (code 0x16). Parameters • address – The mask pointer address (0x0000 to 0xffff) • and_mask – The and bitmask to apply to the register address • or_mask – The or bitmask to apply to the register address • kwargs – (optional) Experimental parameters. 
Raises ModbusException –

readwrite_registers(read_address: int = 0, read_count: int = 0, write_address: int = 0, values: list[int] | int = 0, slave: int = 0, **kwargs) → ModbusResponse | Awaitable[ModbusResponse]
    Read/Write registers (code 0x17).
    Parameters
    • read_address – The address to start reading from
    • read_count – The number of registers to read from address
    • write_address – The address to start writing to
    • values – List of values to write, or a single value to write
    • slave – (optional) Modbus slave ID
    • kwargs – (optional) Experimental parameters.
    Raises ModbusException –

read_fifo_queue(address: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse]
    Read FIFO queue (code 0x18).
    Parameters
    • address – The address to start reading from
    • kwargs – (optional) Experimental parameters.
    Raises ModbusException –

read_device_information(read_code: int = None, object_id: int = 0, **kwargs: Any) → ModbusResponse | Awaitable[ModbusResponse]
    Read device information (code 0x2B sub 0x0E).
    Parameters
    • read_code – The device information read code
    • object_id – The object to read from
    • kwargs – (optional) Experimental parameters.
    Raises ModbusException –

class DATATYPE(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
    Bases: Enum
    Datatype enum (name and number of bytes), used for convert_* calls.

classmethod convert_from_registers(registers: list[int], data_type: DATATYPE) → int | float | str
    Convert registers to int/float/str.
    Parameters
    • registers – list of registers received from e.g. read_holding_registers()
    • data_type – data type to convert to
    Returns: int, float or str depending on "data_type"
    Raises ModbusException – when the size of registers is not 1, 2 or 4

classmethod convert_to_registers(value: int | float | str, data_type: DATATYPE) → list[int]
    Convert int/float/str to registers (16/32/64 bit).
    Parameters
    • value – value to be converted
    • data_type – data type to be encoded as registers
    Returns: List of registers, can be used directly in e.g. write_registers()
    Raises TypeError – when there is a mismatch between data_type and value

CHAPTER THREE

SERVER

Pymodbus offers servers with transport protocols for
• Serial (RS-485), typically using a dongle
• TCP
• TLS
• UDP
• the possibility to add a custom transport protocol

and communication in 2 versions:
• synchronous server,
• asynchronous server using asyncio.

Remark: All servers are implemented with asyncio; the synchronous servers are just an interface layer allowing synchronous applications to use the server as if it was synchronous.

pymodbus.server imports external classes, to make them easier to use:

class pymodbus.server.ModbusSerialServer(context, framer=Framer.RTU, identity=None, **kwargs)
    Bases: ModbusBaseServer
    A modbus threaded serial socket server.
    We inherit and overload the socket server so that we can control the client threads as well as have a single server context instance.

class pymodbus.server.ModbusSimulatorServer(modbus_server: str = 'server', modbus_device: str = 'client', http_host: str = 'localhost', http_port: int = 8080, log_file: str = 'server.log', json_file: str = 'setup.json', custom_actions_module: str | None = None)
    Bases: object
    ModbusSimulatorServer.
    Parameters
    • modbus_server – Server name in json file (default: "server")
    • modbus_device – Device name in json file (default: "client")
    • http_host – TCP host for HTTP (default: "localhost")
    • http_port – TCP port for HTTP (default: 8080)
    • json_file – setup file (default: "setup.json")
    • custom_actions_module – python module with custom actions (default: none)

    If either http_port or http_host is None, HTTP will not be started.
This class starts a http server, that serves a couple of endpoints: • “<addr>/” static files • “<addr>/api/log” log handling, HTML with GET, REST-API with post • “<addr>/api/registers” register handling, HTML with GET, REST-API with post • “<addr>/api/calls” call (function code / message) handling, HTML with GET, REST-API with post • “<addr>/api/server” server handling, HTML with GET, REST-API with post Example: from pymodbus.server import StartAsyncSimulatorServer async def run(): simulator = StartAsyncSimulatorServer( modbus_server="my server", modbus_device="my device", http_host="localhost", http_port=8080) await simulator.start() ... await simulator.close() action_add(params, range_start, range_stop) Build list of registers matching filter. action_clear(_params, _range_start, _range_stop) Clear register filter. action_monitor(params, range_start, range_stop) Start monitoring calls. action_reset(_params, _range_start, _range_stop) Reset call simulation. action_set(params, _range_start, _range_stop) Set register value. action_simulate(params, _range_start, _range_stop) Simulate responses. action_stop(_params, _range_start, _range_stop) Stop call monitoring. build_html_calls(params, html) Build html calls page. build_html_log(_params, html) Build html log page. build_html_registers(params, html) Build html registers page. PyModbus, Release 3.6.0dev build_html_server(_params, html) Build html server page. build_json_calls(params, json_dict) Build html calls page. build_json_log(params, json_dict) Build json log page. build_json_registers(params, json_dict) Build html registers page. build_json_server(params, json_dict) Build html server page. async handle_html(request) Handle html. async handle_html_static(request) Handle static html. async handle_json(request) Handle api registers. helper_build_html_submit(params) Build html register submit. async run_forever(only_start=False) Start modbus and http servers. server_request_tracer(request, *_addr) Trace requests. All server requests passes this filter before being handled. server_response_manipulator(response) Manipulate responses. All server responses passes this filter before being sent. The filter returns: • response, either original or modified • skip_encoding, signals whether or not to encode the response async start_modbus_server(app) Start Modbus server as asyncio task. async stop() Stop modbus and http servers. async stop_modbus_server(app) Stop modbus server. class pymodbus.server.ModbusTcpServer(context, framer=Framer.SOCKET , identity=None, address=('', response_manipulator=None, request_tracer=None) Bases: ModbusBaseServer A modbus threaded tcp socket server. PyModbus, Release 3.6.0dev We inherit and overload the socket server so that we can control the client threads as well as have a single server context instance. class pymodbus.server.ModbusTlsServer(context, framer=Framer.TLS, identity=None, address=('', 502), sslctx=None, certfile=None, keyfile=None, password=None, ignore_missing_slaves=False, broadcast_enable=False, response_manipulator=None, request_tracer=None) Bases: ModbusTcpServer A modbus threaded tls socket server. We inherit and overload the socket server so that we can control the client threads as well as have a single server context instance. class pymodbus.server.ModbusUdpServer(context, framer=Framer.SOCKET , identity=None, address=('', 502), ignore_missing_slaves=False, broadcast_enable=False, response_manipulator=None, request_tracer=None) Bases: ModbusBaseServer A modbus threaded udp socket server. 
We inherit and overload the socket server so that we can control the client threads as well as have a single server context instance. async pymodbus.server.ServerAsyncStop() Terminate server. pymodbus.server.ServerStop() Terminate server. async pymodbus.server.StartAsyncSerialServer(context=None, identity=None, custom_functions=[], **kwargs) Start and run a serial modbus server. Parameters • context – The ModbusServerContext datastore • identity – An optional identify structure • custom_functions – An optional list of custom function classes supported by server in- stance. • kwargs – The rest async pymodbus.server.StartAsyncTcpServer(context=None, identity=None, address=None, custom_functions=[], **kwargs) Start and run a tcp modbus server. Parameters • context – The ModbusServerContext datastore • identity – An optional identify structure • address – An optional (interface, port) to bind to. • custom_functions – An optional list of custom function classes supported by server in- stance. • kwargs – The rest PyModbus, Release 3.6.0dev async pymodbus.server.StartAsyncTlsServer(context=None, identity=None, address=None, sslctx=None, certfile=None, keyfile=None, password=None, custom_functions=[], **kwargs) Start and run a tls modbus server. Parameters • context – The ModbusServerContext datastore • identity – An optional identify structure • address – An optional (interface, port) to bind to. • sslctx – The SSLContext to use for TLS (default None and auto create) • certfile – The cert file path for TLS (used if sslctx is None) • keyfile – The key file path for TLS (used if sslctx is None) • password – The password for for decrypting the private key file • custom_functions – An optional list of custom function classes supported by server in- stance. • kwargs – The rest async pymodbus.server.StartAsyncUdpServer(context=None, identity=None, address=None, custom_functions=[], **kwargs) Start and run a udp modbus server. Parameters • context – The ModbusServerContext datastore • identity – An optional identify structure • address – An optional (interface, port) to bind to. • custom_functions – An optional list of custom function classes supported by server in- stance. • kwargs – pymodbus.server.StartSerialServer(**kwargs) Start and run a serial modbus server. pymodbus.server.StartTcpServer(**kwargs) Start and run a serial modbus server. pymodbus.server.StartTlsServer(**kwargs) Start and run a serial modbus server. pymodbus.server.StartUdpServer(**kwargs) Start and run a serial modbus server. pymodbus.server.get_simulator_commandline(extras=None, cmdline=None) Get command line arguments. PyModbus, Release 3.6.0dev 38 Chapter 3. Server CHAPTER FOUR REPL 4.1 Dependencies Depends on prompt_toolkit and click Install dependencies $ pip install click prompt_toolkit --upgrade Or Install pymodbus with repl support $ pip install pymodbus[repl] --upgrade 4.2 Usage Instructions RTU and TCP are supported as of now bash-3.2$ pymodbus.console Usage: pymodbus.console [OPTIONS] COMMAND [ARGS]... Options: --version Show the version and exit. --verbose Verbose logs --support-diag Support Diagnostic messages --help Show this message and exit. Commands: serial tcp TCP Options bash-3.2$ pymodbus.console tcp --help Usage: pymodbus.console tcp [OPTIONS] Options: --host TEXT Modbus TCP IP --port INTEGER Modbus TCP port --help Show this message and exit. 
PyModbus, Release 3.6.0dev SERIAL Options bash-3.2$ pymodbus.console serial --help Usage: pymodbus.console serial [OPTIONS] Options: --method TEXT Modbus Serial Mode (rtu/ascii) --port TEXT Modbus RTU port --baudrate INTEGER Modbus RTU serial baudrate to use. --bytesize [5|6|7|8] Modbus RTU serial Number of data bits. Possible values: FIVEBITS, SIXBITS, SEVENBITS, EIGHTBITS. --parity [N|E|O|M|S] Modbus RTU serial parity. Enable parity checking. Possible values: PARITY_NONE, PARITY_EVEN, PARITY_ODD PARITY_MARK, PARITY_SPACE. Default to 'N' --stopbits [1|1.5|2] Modbus RTU serial stop bits. Number of stop bits. Possible values: STOPBITS_ONE, STOPBITS_ONE_POINT_FIVE, STOPBITS_TWO. Default to '1' --xonxoff INTEGER Modbus RTU serial xonxoff. Enable software flow control. --rtscts INTEGER Modbus RTU serial rtscts. Enable hardware (RTS/CTS) flow control. --dsrdtr INTEGER Modbus RTU serial dsrdtr. Enable hardware (DSR/DTR) flow control. --timeout FLOAT Modbus RTU serial read timeout. --write-timeout FLOAT Modbus RTU serial write timeout. --help Show this message and exit. To view all available commands type help TCP $ pymodbus.console tcp --host 192.168.128.126 --port 5020 > help Available commands: client.change_ascii_input_delimiter Diagnostic sub command, Change message␣ ˓→delimiter for future requests. client.clear_counters Diagnostic sub command, Clear all counters␣ ˓→and diag registers. client.clear_overrun_count Diagnostic sub command, Clear over run␣ ˓→counter. client.close Closes the underlying socket connection client.connect Connect to the modbus tcp server client.debug_enabled Returns a boolean indicating if debug is␣ ˓→enabled. client.force_listen_only_mode Diagnostic sub command, Forces the␣ ˓→addressed remote device to its Listen Only Mode. client.get_clear_modbus_plus Diagnostic sub command, Get or clear stats␣ ˓→of remote modbus plus device. client.get_com_event_counter Read status word and an event count from␣ ˓→the remote device's communication event counter. client.get_com_event_log Read status word, event count, message␣ ˓→count, and a field of event bytes from the remote device. client.host Read Only! (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) client.idle_time Bus Idle Time to initiate next transaction client.is_socket_open Check whether the underlying socket/serial␣ ˓→is open or not. client.last_frame_end Read Only! client.mask_write_register Mask content of holding register at␣ ˓→`address` with `and_mask` and `or_mask`. client.port Read Only! client.read_coils Reads `count` coils from a given slave␣ ˓→starting at `address`. client.read_device_information Read the identification and additional␣ ˓→information of remote slave. client.read_discrete_inputs Reads `count` number of discrete inputs␣ ˓→starting at offset `address`. client.read_exception_status Read the contents of eight Exception Status␣ ˓→outputs in a remote device. client.read_holding_registers Read `count` number of holding registers␣ ˓→starting at `address`. client.read_input_registers Read `count` number of input registers␣ ˓→starting at `address`. client.readwrite_registers Read `read_count` number of holding␣ ˓→registers starting at `read_address` and write `write_registers` ␣ ˓→starting at `write_address`. client.report_slave_id Report information about remote slave ID. client.restart_comm_option Diagnostic sub command, initialize and␣ ˓→restart remote devices serial interface and clear all of its communications␣ ˓→event counters . 
client.return_bus_com_error_count Diagnostic sub command, Return count of CRC␣ ˓→errors received by remote slave. client.return_bus_exception_error_count Diagnostic sub command, Return count of␣ ˓→Modbus exceptions returned by remote slave. client.return_bus_message_count Diagnostic sub command, Return count of␣ ˓→message detected on bus by remote slave. client.return_diagnostic_register Diagnostic sub command, Read 16-bit␣ ˓→diagnostic register. client.return_iop_overrun_count Diagnostic sub command, Return count of iop␣ ˓→overrun errors by remote slave. client.return_query_data Diagnostic sub command , Loop back data␣ ˓→sent in response. client.return_slave_bus_char_overrun_count Diagnostic sub command, Return count of␣ ˓→messages not handled by remote slave due to character overrun condition. client.return_slave_busy_count Diagnostic sub command, Return count of␣ ˓→server busy exceptions sent by remote slave. client.return_slave_message_count Diagnostic sub command, Return count of␣ ˓→messages addressed to remote slave. client.return_slave_no_ack_count Diagnostic sub command, Return count of NO␣ ˓→ACK exceptions sent by remote slave. client.return_slave_no_response_count Diagnostic sub command, Return count of No␣ ˓→responses by remote slave. client.silent_interval Read Only! client.state Read Only! client.timeout Read Only! client.write_coil Write `value` to coil at `address`. (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) client.write_coils Write `value` to coil at `address`. client.write_register Write `value` to register at `address`. client.write_registers Write list of `values` to registers␣ ˓→starting at `address`. SERIAL $ pymodbus.console serial --port /dev/ttyUSB0 --baudrate 19200 --timeout 2 > help Available commands: client.baudrate Read Only! client.bytesize Read Only! client.change_ascii_input_delimiter Diagnostic sub command, Change message␣ ˓→delimiter for future requests. client.clear_counters Diagnostic sub command, Clear all counters␣ ˓→and diag registers. client.clear_overrun_count Diagnostic sub command, Clear over run␣ ˓→counter. client.close Closes the underlying socket connection client.connect Connect to the modbus serial server client.debug_enabled Returns a boolean indicating if debug is␣ ˓→enabled. client.force_listen_only_mode Diagnostic sub command, Forces the␣ ˓→addressed remote device to its Listen Only Mode. client.get_baudrate Serial Port baudrate. client.get_bytesize Number of data bits. client.get_clear_modbus_plus Diagnostic sub command, Get or clear stats␣ ˓→of remote modbus plus device. client.get_com_event_counter Read status word and an event count from␣ ˓→the remote device's communication event counter. client.get_com_event_log Read status word, event count, message␣ ˓→count, and a field of event bytes from the remote device. client.get_parity Enable Parity Checking. client.get_port Serial Port. client.get_serial_settings Gets Current Serial port settings. client.get_stopbits Number of stop bits. client.get_timeout Serial Port Read timeout. client.idle_time Bus Idle Time to initiate next transaction client.inter_char_timeout Read Only! client.is_socket_open c l i e n t . i s s o c k e t o p e n client.mask_write_register Mask content of holding register at␣ ˓→`address` with `and_mask` and `or_mask`. client.method Read Only! client.parity Read Only! client.port Read Only! client.read_coils Reads `count` coils from a given slave␣ ˓→starting at `address`. 
client.read_device_information Read the identification and additional␣ ˓→information of remote slave. client.read_discrete_inputs Reads `count` number of discrete inputs␣ ˓→starting at offset `address`. client.read_exception_status Read the contents of eight Exception Status␣ (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) ˓→outputs in a remote device. client.read_holding_registers Read `count` number of holding registers␣ ˓→starting at `address`. client.read_input_registers Read `count` number of input registers␣ ˓→starting at `address`. client.readwrite_registers Read `read_count` number of holding␣ ˓→registers starting at `read_address` and write `write_registers` ␣ ˓→starting at `write_address`. client.report_slave_id Report information about remote slave ID. client.restart_comm_option Diagnostic sub command, initialize and␣ ˓→restart remote devices serial interface and clear all of its communications␣ ˓→event counters . client.return_bus_com_error_count Diagnostic sub command, Return count of CRC␣ ˓→errors received by remote slave. client.return_bus_exception_error_count Diagnostic sub command, Return count of␣ ˓→Modbus exceptions returned by remote slave. client.return_bus_message_count Diagnostic sub command, Return count of␣ ˓→message detected on bus by remote slave. client.return_diagnostic_register Diagnostic sub command, Read 16-bit␣ ˓→diagnostic register. client.return_iop_overrun_count Diagnostic sub command, Return count of iop␣ ˓→overrun errors by remote slave. client.return_query_data Diagnostic sub command , Loop back data␣ ˓→sent in response. client.return_slave_bus_char_overrun_count Diagnostic sub command, Return count of␣ ˓→messages not handled by remote slave due to character overrun condition. client.return_slave_busy_count Diagnostic sub command, Return count of␣ ˓→server busy exceptions sent by remote slave. client.return_slave_message_count Diagnostic sub command, Return count of␣ ˓→messages addressed to remote slave. client.return_slave_no_ack_count Diagnostic sub command, Return count of NO␣ ˓→ACK exceptions sent by remote slave. client.return_slave_no_response_count Diagnostic sub command, Return count of No␣ ˓→responses by remote slave. client.set_baudrate Baudrate setter. client.set_bytesize Byte size setter. client.set_parity Parity Setter. client.set_port Serial Port setter. client.set_stopbits Stop bit setter. client.set_timeout Read timeout setter. client.silent_interval Read Only! client.state Read Only! client.stopbits Read Only! client.timeout Read Only! client.write_coil Write `value` to coil at `address`. client.write_coils Write `value` to coil at `address`. client.write_register Write `value` to register at `address`. client.write_registers Write list of `values` to registers␣ ˓→starting at `address`. result.decode Decode the register response to known␣ ˓→formatters. result.raw Return raw result dict. PyModbus, Release 3.6.0dev Every command has auto suggestion on the arguments supported, arg and value are to be supplied in arg=val format. > client.read_holding_registers count=4 address=9 slave=1 { "registers": [ 60497, 47134, 34091, 15424 ] } The last result could be accessed with result.raw command > result.raw { "registers": [ 15626, 55203, 28733, 18368 ] } For Holding and Input register reads, the decoded value could be viewed with result.decode > result.decode word_order=little byte_order=little formatters=float64 28.17 > Client settings could be retrieved and altered as well. 
> # For serial settings > # Check the serial mode > client.method "rtu" > client.get_serial_settings { "t1.5": 0.00171875, "baudrate": 9600, "read timeout": 0.5, "port": "/dev/ptyp0", "t3.5": 0.00401, "bytesize": 8, "parity": "N", "stopbits": 1.0 } > client.set_timeout value=1 null > client.get_timeout (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) 1.0 > client.get_serial_settings { "t1.5": 0.00171875, "baudrate": 9600, "read timeout": 1.0, "port": "/dev/ptyp0", "t3.5": 0.00401, "bytesize": 8, "parity": "N", "stopbits": 1.0 } 4.3 DEMO 4.4 REPL client classes Modbus Clients to be used with REPL. class pymodbus.repl.client.mclient.ExtendedRequestSupport Bases: object Extended request support. change_ascii_input_delimiter(data=0, **kwargs) Change message delimiter for future requests. Parameters • data – New delimiter character • kwargs – Returns clear_counters(data=0, **kwargs) Clear all counters and diag registers. Parameters • data – Data field (0x0000) • kwargs – Returns clear_overrun_count(data=0, **kwargs) Clear over run counter. Parameters • data – Data field (0x0000) • kwargs – Returns PyModbus, Release 3.6.0dev force_listen_only_mode(data=0, **kwargs) Force addressed remote device to its Listen Only Mode. Parameters • data – Data field (0x0000) • kwargs – Returns get_clear_modbus_plus(data=0, **kwargs) Get/clear stats of remote modbus plus device. Parameters • data – Data field (0x0000) • kwargs – Returns get_com_event_counter(**kwargs) Read status word and an event count. From the remote device’s communication event counter. Parameters kwargs – Returns get_com_event_log(**kwargs) Read status word. Event count, message count, and a field of event bytes from the remote device. Parameters kwargs – Returns mask_write_register(address=0, and_mask=65535, or_mask=0, slave=0, **kwargs) Mask content of holding register at address with and_mask and or_mask. Parameters • address – Reference address of register • and_mask – And Mask • or_mask – OR Mask • slave – Modbus slave slave ID • kwargs – Returns read_coils(address, count=1, slave=0, **kwargs) Read count coils from a given slave starting at address. Parameters • address – The starting address to read from • count – The number of coils to read PyModbus, Release 3.6.0dev • slave – Modbus slave slave ID • kwargs – Returns List of register values read_device_information(read_code=None, object_id=0, **kwargs) Read the identification and additional information of remote slave. Parameters • read_code – Read Device ID code (0x01/0x02/0x03/0x04) • object_id – Identification of the first object to obtain. • kwargs – Returns read_discrete_inputs(address, count=1, slave=0, **kwargs) Read count number of discrete inputs starting at offset address. Parameters • address – The starting address to read from • count – The number of coils to read • slave – Modbus slave slave ID • kwargs – Returns List of bits read_exception_status(slave=0, **kwargs) Read contents of eight Exception Status output in a remote device. Parameters • slave – Modbus slave ID • kwargs – Returns read_holding_registers(address, count=1, slave=0, **kwargs) Read count number of holding registers starting at address. Parameters • address – starting register offset to read from • count – Number of registers to read • slave – Modbus slave slave ID • kwargs – Returns read_input_registers(address, count=1, slave=0, **kwargs) Read count number of input registers starting at address. 
Parameters • address – starting register offset to read from to PyModbus, Release 3.6.0dev • count – Number of registers to read • slave – Modbus slave slave ID • kwargs – Returns readwrite_registers(read_address=0, read_count=0, write_address=0, values=0, slave=0, **kwargs) Read read_count number of holding registers. Starting at read_address and write write_registers starting at write_address. Parameters • read_address – register offset to read from • read_count – Number of registers to read • write_address – register offset to write to • values – List of register values to write (comma separated) • slave – Modbus slave slave ID • kwargs – Returns report_slave_id(slave=0, **kwargs) Report information about remote slave ID. Parameters • slave – Modbus slave ID • kwargs – Returns restart_comm_option(toggle=False, **kwargs) Initialize and restart remote devices. Serial interface and clear all of its communications event counters. Parameters • toggle – Toggle Status [ON(0xff00)/OFF(0x0000] • kwargs – Returns return_bus_com_error_count(data=0, **kwargs) Return count of CRC errors received by remote slave. Parameters • data – Data field (0x0000) • kwargs – Returns PyModbus, Release 3.6.0dev return_bus_exception_error_count(data=0, **kwargs) Return count of Modbus exceptions returned by remote slave. Parameters • data – Data field (0x0000) • kwargs – Returns return_bus_message_count(data=0, **kwargs) Return count of message detected on bus by remote slave. Parameters • data – Data field (0x0000) • kwargs – Returns return_diagnostic_register(data=0, **kwargs) Read 16-bit diagnostic register. Parameters • data – Data field (0x0000) • kwargs – Returns return_iop_overrun_count(data=0, **kwargs) Return count of iop overrun errors by remote slave. Parameters • data – Data field (0x0000) • kwargs – Returns return_query_data(message=0, **kwargs) Loop back data sent in response. Parameters • message – Message to be looped back • kwargs – Returns return_slave_bus_char_overrun_count(data=0, **kwargs) Return count of messages not handled. By remote slave due to character overrun condition. Parameters • data – Data field (0x0000) • kwargs – Returns PyModbus, Release 3.6.0dev return_slave_busy_count(data=0, **kwargs) Return count of server busy exceptions sent by remote slave. Parameters • data – Data field (0x0000) • kwargs – Returns return_slave_message_count(data=0, **kwargs) Return count of messages addressed to remote slave. Parameters • data – Data field (0x0000) • kwargs – Returns return_slave_no_ack_count(data=0, **kwargs) Return count of NO ACK exceptions sent by remote slave. Parameters • data – Data field (0x0000) • kwargs – Returns return_slave_no_response_count(data=0, **kwargs) Return count of No responses by remote slave. Parameters • data – Data field (0x0000) • kwargs – Returns write_coil(address, value, slave=0, **kwargs) Write value to coil at address. Parameters • address – coil offset to write to • value – bit value to write • slave – Modbus slave slave ID • kwargs – Returns write_coils(address, values, slave=0, **kwargs) Write value to coil at address. Parameters • address – coil offset to write to • values – list of bit values to write (comma separated) PyModbus, Release 3.6.0dev • slave – Modbus slave slave ID • kwargs – Returns write_register(address, value, slave=0, **kwargs) Write value to register at address. 
Parameters • address – register offset to write to • value – register value to write • slave – Modbus slave slave ID • kwargs – Returns write_registers(address, values, slave=0, **kwargs) Write list of values to registers starting at address. Parameters • address – register offset to write to • values – list of register value to write (comma separated) • slave – Modbus slave slave ID • kwargs – Returns class pymodbus.repl.client.mclient.ModbusSerialClient(framer, **kwargs) Bases: ExtendedRequestSupport, ModbusSerialClient Modbus serial client. get_baudrate() Get serial Port baudrate. Returns Current baudrate get_bytesize() Get number of data bits. Returns Current bytesize get_parity() Enable Parity Checking. Returns Current parity setting get_port() Get serial Port. Returns Current Serial port PyModbus, Release 3.6.0dev get_serial_settings() Get Current Serial port settings. Returns Current Serial settings as dict. get_stopbits() Get number of stop bits. Returns Current Stop bits get_timeout() Get serial Port Read timeout. Returns Current read imeout. set_baudrate(value) Set baudrate setter. Parameters value – <supported baudrate> set_bytesize(value) Set Byte size. Parameters value – Possible values (5, 6, 7, 8) set_parity(value) Set parity Setter. Parameters value – Possible values (“N”, “E”, “O”, “M”, “S”) set_port(value) Set serial Port setter. Parameters value – New port set_stopbits(value) Set stop bit. Parameters value – Possible values (1, 1.5, 2) set_timeout(value) Read timeout setter. Parameters value – Read Timeout in seconds class pymodbus.repl.client.mclient.ModbusTcpClient(**kwargs) Bases: ExtendedRequestSupport, ModbusTcpClient TCP client. pymodbus.repl.client.mclient.handle_brodcast(func) Handle broadcast. PyModbus, Release 3.6.0dev pymodbus.repl.client.mclient.make_response_dict(resp) Make response dict. 4.5 REPL server classes 4.5.1 Pymodbus REPL (Read Evaluate Print Loop) Warning The Pymodbus REPL documentation is not updated. 4.5.1.1 Pymodbus REPL Client Pymodbus REPL comes with many handy features such as payload decoder to directly retrieve the values in desired format and supports all the diagnostic function codes directly . For more info on REPL Client refer Pymodbus REPL Client 4.5.1.2 Pymodbus REPL Server Pymodbus also comes with a REPL server to quickly run an asynchronous server with additional capabilities out of the box like simulating errors, delay, mangled messages etc. For more info on REPL Server refer Pymodbus REPL Server PyModbus, Release 3.6.0dev 54 Chapter 4. REPL CHAPTER FIVE SIMULATOR The simulator is a full fledged modbus simulator, which is constantly being evolved with user ideas / amendments. The purpose of the simulator is to provide support for client application test harnesses with end-to-end testing simulating real life modbus devices. The datastore simulator allows the user to (all automated) • simulate a modbus device by adding a simple configuration, • test how a client handles modbus exceptions, • test a client apps correct use of the simulated device. The web interface allows the user to (online / manual) • test how a client handles modbus errors, • test how a client handles communication errors like divided messages, • run your test server in the cloud, • monitor requests/responses, • inject modbus errors like malicious a response, • see/Change values online. 
5.1 Configuration

The pymodbus simulator is configured with a json file or, if only the datastore simulator is used, a python dict (with the same structure as the device part of the json file).

5.1.1 Json file layout

The json file consists of 2 main entries, "server_list" (see Server entries) and "device_list" (see Device entries), each containing a list of servers/devices:

{
    "server_list": {
        "<name>": {...},
        ...
    },
    "device_list": {
        "<name>": {...},
        ...
    }
}

You can define as many servers and devices as you like; when starting pymodbus.simulator you select one server and one device to simulate. An entry in "device_list" corresponds to the dict you can use as a parameter to the datastore simulator, if you want to construct your own simulator.

5.1.2 Server entries

The entries for a tcp server with minimal parameters look like:

{
    "server_list": {
        "server": {
            "comm": "tcp",
            "host": "0.0.0.0",
            "port": 5020,
            "framer": "socket"
        }
    },
    "device_list": {
        ...
    }
}

The example uses "comm": "tcp", so the entries are arguments to pymodbus.server.ModbusTcpServer, where detailed information is available.

The entry "comm" allows the following values:
• "serial", to use pymodbus.server.ModbusSerialServer,
• "tcp", to use pymodbus.server.ModbusTcpServer,
• "tls", to use pymodbus.server.ModbusTlsServer,
• "udp", to use pymodbus.server.ModbusUdpServer.

The entry "framer" allows the following values:
• "ascii", to use pymodbus.framer.ascii_framer.ModbusAsciiFramer,
• "binary", to use pymodbus.framer.binary_framer.ModbusBinaryFramer,
• "rtu", to use pymodbus.framer.rtu_framer.ModbusRtuFramer,
• "tls", to use pymodbus.framer.tls_framer.ModbusTlsFramer,
• "socket", to use pymodbus.framer.socket_framer.ModbusSocketFramer.

Warning: not all "framer" types can be used with all "comm" types; e.g. "framer": "tls" only works with "comm": "tls"!
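Before starting pymodbus.simulator it can help to sanity-check the json file; the following is a minimal sketch, assuming the default file name setup.json, that loads the configuration and lists the defined servers and devices:

# Minimal sketch: load and inspect a simulator configuration file.
# "setup.json" is the simulator default file name; adapt as needed.
import json

with open("setup.json") as config_file:
    config = json.load(config_file)

print("servers:", ", ".join(config["server_list"]))
print("devices:", ", ".join(config["device_list"]))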
5.1.3 Server configuration examples

{
    "server_list": {
        "server": {
            "comm": "tcp",
            "host": "0.0.0.0",
            "port": 5020,
            "ignore_missing_slaves": false,
            "framer": "socket",
            "identity": {
                "VendorName": "pymodbus",
                "ProductCode": "PM",
                "VendorUrl": "https://github.com/riptideio/pymodbus/",
                "ProductName": "pymodbus Server",
                "ModelName": "pymodbus Server",
                "MajorMinorRevision": "3.1.0"
            }
        },
        "server_try_serial": {
            "comm": "serial",
            "port": "/dev/tty0",
            "stopbits": 1,
            "bytesize": 8,
            "parity": "N",
            "baudrate": 9600,
            "timeout": 3,
            "reconnect_delay": 2,
            "framer": "rtu",
            "identity": {
                "VendorName": "pymodbus",
                "ProductCode": "PM",
                "VendorUrl": "https://github.com/riptideio/pymodbus/",
                "ProductName": "pymodbus Server",
                "ModelName": "pymodbus Server",
                "MajorMinorRevision": "3.1.0"
            }
        },
        "server_try_tls": {
            "comm": "tls",
            "host": "0.0.0.0",
            "port": 5020,
            "certfile": "certificates/pymodbus.crt",
            "keyfile": "certificates/pymodbus.key",
            "ignore_missing_slaves": false,
            "framer": "tls",
            "identity": {
                "VendorName": "pymodbus",
                "ProductCode": "PM",
                "VendorUrl": "https://github.com/riptideio/pymodbus/",
                "ProductName": "pymodbus Server",
                "ModelName": "pymodbus Server",
                "MajorMinorRevision": "3.1.0"
            }
        },
        "server_test_try_udp": {
            "comm": "udp",
            "host": "0.0.0.0",
            "port": 5020,
            "ignore_missing_slaves": false,
            "framer": "socket",
            "identity": {
                "VendorName": "pymodbus",
                "ProductCode": "PM",
                "VendorUrl": "https://github.com/riptideio/pymodbus/",
                "ProductName": "pymodbus Server",
                "ModelName": "pymodbus Server",
                "MajorMinorRevision": "3.1.0"
            }
        }
    }
}

5.1.4 Device entries

Each device is configured in a number of sections, described in detail below:
• "setup", defines the overall structure of the device, e.g. the number of registers,
• "invalid", defines invalid registers, which cause a modbus exception when read and/or written,
• "write", defines registers which allow read/write; writing to other registers causes a modbus exception,
• "bits", defines registers which contain bits (discrete inputs and coils),
• "uint16", defines registers which contain a 16 bit unsigned integer,
• "uint32", defines sets of registers (2) which contain a 32 bit unsigned integer,
• "float32", defines sets of registers (2) which contain a 32 bit float,
• "string", defines sets of registers which contain a string,
• "repeat", a special command to copy configuration: if a device contains X bay controllers, configure one and use repeat for the other X-1.

The datastore simulator manages the registers in a big list, which can be manipulated with:
• actions (functions that are called on each access),
• manual changes via the web interface,
• automated changes via the REST API interface,
• the client (writing values).

It is important to understand that the modbus protocol does not know or care how the physical memory/registers are organized, but the organization has a huge impact on the client! Communication with a modbus device is based on registers, each of which contains 16 bits (2 bytes). The requests are grouped in 4 groups:
• Discrete inputs
• Coils
• Input registers
• Holding registers

The 4 blocks are mapped into physical memory, but the modbus protocol makes no assumption or demand on how this is done. The history of modbus devices has shown 2 forms of mapping.

The first form is also the original form: it originates from a time when the devices did not contain memory, and a request was mapped directly to a physical sensor. When reading holding register 1 (block 4) you get a different register than when reading input register 1 (block 1). Each block references different physical register memory; in other words, the needed memory is the sum of the block sizes.

The second form uses 1 shared block; most modern devices use this form, for 2 main reasons:
• the modbus protocol implementation does not connect directly to the sensors, but to a shared memory controlled by a small microprocessor,
• designers can group related information independent of type (e.g. a bay controller with register 1 as coil, register 2 as input and register 3 as holding).

When reading holding register 1, the same physical register is accessed as when reading input register 1. Each block references the same physical register memory; in other words, the needed memory is the size of the largest block.

The datastore simulator supports both types.
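If only the datastore simulator is needed (e.g. inside a client test harness), a device dict with exactly this structure can be used directly; the following is a minimal sketch, where the tiny configuration and the (config, custom_actions) argument pair are assumptions to verify against the pymodbus.datastore.ModbusSimulatorContext documentation:

# Minimal sketch: use a device dict directly with the datastore simulator.
# The configuration below is illustrative only; see the section reference
# for pymodbus.datastore.ModbusSimulatorContext.
from pymodbus.datastore import ModbusSimulatorContext

device = {
    "setup": {
        "co size": 10, "di size": 10, "hr size": 10, "ir size": 10,
        "shared blocks": True,   # one shared register block (second form)
        "type exception": False,
        "defaults": {
            "value": {"bits": 0, "uint16": 0, "uint32": 0,
                      "float32": 0.0, "string": " "},
            "action": {"bits": None, "uint16": None, "uint32": None,
                       "float32": None, "string": None},
        },
    },
    "invalid": [],
    "write": [[0, 9]],      # allow writing to registers 0-9
    "bits": [],
    "uint16": [[0, 9]],     # registers 0-9 hold 16 bit unsigned integers
    "uint32": [],
    "float32": [],
    "string": [],
    "repeat": [],
}

context = ModbusSimulatorContext(device, None)  # no custom actions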
5.1.4.1 Setup section

Example "setup" configuration:

"setup": {
    "co size": 10,
    "di size": 20,
    "hr size": 15,
    "ir size": 25,
    "shared blocks": true,
    "type exception": true,
    "defaults": {
        "value": {
            "bits": 0,
            "uint16": 0,
            "uint32": 0,
            "float32": 0.0,
            "string": " "
        },
        "action": {
            "bits": null,
            "uint16": "register",
            "uint32": "register",
            "float32": "register",
            "string": null
        }
    }
}

"co size", "di size", "hr size", "ir size" define the size of each block. If using shared blocks, the register list size will be the size of the biggest block (25 registers); if not using shared blocks, the register list size will be the sum of the 4 block sizes (70 registers).

"shared blocks" defines whether the blocks are independent (false) or shared (true).

Tip: if shared is set to false, please remember to adjust the addresses depending on which group they belong to. Assuming all sizes are set to 10, the addresses for configuration are as follows:
• coils have addresses 0-9,
• discrete inputs have addresses 10-19,
• input registers have addresses 20-29,
• holding registers have addresses 30-39.

These offsets apply when configuring the datatypes; when calling from a client, each block starts at 0. This is needed because the datatypes can be in different blocks.

"type exception" defines whether the server returns a modbus exception if a read/write request violates the specified type. E.g. reading holding register 10 with count 1 fails if registers 10 and 11 are defined as UINT32 and thus can only be read in multiples of 2. This feature is designed to verify that a client accesses the device in the manner it was designed for.

"defaults" defines registers that are not configured or are only partially configured: "value" defines the default value for each type, and "action" defines the default action for each type. Actions are functions that are called whenever the register is accessed and thus allow automatic manipulation.

The datastore simulator has a number of builtin actions, and allows custom actions to be added:
• "random", change the value with every access,
• "increment", increment the value by 1 with every access,
• "timestamp", uses 6 registers and builds a timestamp,
• "reset", causes a reboot of the simulator,
• "uptime", sets the number of seconds the server has been running.

The "random" and "increment" actions may optionally take a minimum and/or a maximum; in the case of "increment", the counter is reset to the minimum value when the maximum is crossed:

{"addr": 9, "value": 7, "action": "random", "kwargs": {"minval": 0, "maxval": 12}},
{"addr": 10, "value": 100, "action": "increment", "kwargs": {"minval": 50}}
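Custom actions are added by name next to the builtin ones (via custom_actions_module when using pymodbus.simulator). The sketch below is purely illustrative: the callback signature and the cell field names are assumptions, so mirror the builtin actions in pymodbus.datastore when writing a real one:

# Hypothetical custom action; the signature and field names are assumptions,
# check the builtin actions in pymodbus.datastore for the exact interface.
def action_double(registers, inx, cell, minval=None, maxval=None):
    """Double the cell value on every access (illustration only)."""
    registers[inx].value = (registers[inx].value * 2) & 0xFFFF


# Registered under the name used in the "action" fields of the json/dict:
custom_actions = {"double": action_double}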
5.1.4.2 Invalid section

Example "invalid" configuration:

"invalid": [
    5,
    [10, 15]
],

Defines invalid registers, which cannot be read or written. When accessed, the response is a modbus exception (invalid address). In the example, registers 5, 10, 11, 12, 13, 14, 15 will produce an exception response. Registers can be defined individually (first entry) or as ranges (second entry).

5.1.4.3 Write section

Example "write" configuration:

"write": [
    4,
    [5, 6]
],

Defines registers which can be written to. Writing to registers not defined here results in a modbus exception (invalid address). Registers can be defined individually (first entry) or as ranges (second entry).

5.1.4.4 Bits section

Example "bits" configuration:

"bits": [
    5,
    [6, 7],
    {"addr": 8, "value": 7},
    {"addr": 9, "value": 7, "action": "random"},
    {"addr": [11, 12], "value": 7, "action": "random"}
],

Defines registers which contain bits (discrete inputs and coils). Registers can be defined individually (first entry) or as ranges (second entry); furthermore a value and/or an action can be defined, which is inserted into each register defined in "addr".

5.1.4.5 Uint16 section

Example "uint16" configuration:

"uint16": [
    5,
    [6, 7],
    {"addr": 8, "value": 30123},
    {"addr": 9, "value": 712, "action": "increment"},
    {"addr": [11, 12], "value": 517, "action": "random"}
],

Defines registers which contain a 16 bit unsigned integer. Registers can be defined individually (first entry) or as ranges (second entry); furthermore a value and/or an action can be defined, which is inserted into each register defined in "addr".

5.1.4.6 Uint32 section

Example "uint32" configuration:

"uint32": [
    [6, 7],
    {"addr": [8, 9], "value": 300123},
    {"addr": [10, 13], "value": 400712, "action": "increment"},
    {"addr": [14, 15], "value": 500517, "action": "random"}
],

Defines sets of registers (2 each) which contain a 32 bit unsigned integer. Registers can only be ranges in multiples of 2; furthermore a value and/or an action can be defined, which is converted (high/low word) and inserted into each register set defined in "addr".

5.1.4.7 Float32 section

Example "float32" configuration:

"float32": [
    [6, 7],
    {"addr": [8, 9], "value": 3123.17},
    {"addr": [10, 13], "value": 712.5, "action": "increment"},
    {"addr": [14, 15], "value": 517.0, "action": "random"}
],

Defines sets of registers (2 each) which contain a 32 bit float. Registers can only be ranges in multiples of 2; furthermore a value and/or an action can be defined, which is converted (high/low word) and inserted into each register set defined in "addr".

Remark: remember to set "value" to a float value, like 512.0 (float), not 512 (integer).

5.1.4.8 String section

Example "string" configuration:

"string": [
    7,
    [8, 9],
    {"addr": [16, 20], "value": "A_B_C_D_E_"}
],

Defines sets of registers which contain a string. Registers can be defined individually (first entry) or as ranges (second entry). Important: each string must be defined individually.
• Entry 1 is a string of 2 chars,
• Entry 2 is a string of 4 chars,
• Entry 3 is a string of 10 chars with the value "A_B_C_D_E_".
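Since the uint32 and float32 sections spread one value over two 16 bit registers (high/low word as described above), a client has to recombine the registers after reading them; the following is a small self-contained sketch, assuming high word first (verify the word order against your device/configuration):

# Minimal sketch: recombine two 16 bit registers into 32 bit values.
# Assumes high word first, as described for the uint32/float32 sections.
import struct


def to_uint32(high: int, low: int) -> int:
    """Combine two 16 bit registers into a 32 bit unsigned integer."""
    return (high << 16) | low


def to_float32(high: int, low: int) -> float:
    """Reinterpret two 16 bit registers as an IEEE 754 float."""
    return struct.unpack(">f", struct.pack(">HH", high, low))[0]


assert to_uint32(0x0001, 0x0000) == 0x10000
assert abs(to_float32(0x42F6, 0xE666) - 123.45) < 0.001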
5.1.4.9 Repeat section Example “repeat” configuration: "repeat": [ {"addr": [0, 2], "to": [10, 11]}, {"addr": [0, 2], "to": [10, 15]}, ] is a special command to copy configuration if a device contains X bay controllers, configure one and use repeat for X-1. First entry copies registers 0-2 to 10-11, resulting in 10 == 0, 11 == 1, 12 unchanged. Second entry copies registers 0-2 to 10-15, resulting in 10 == 0, 11 == 1, 12 == 2, 13 == 0, 14 == 1, 15 == 2, 16 unchanged. 5.1.5 Device configuration examples { "server_list": { ... }, "device_list": { "device": { "setup": { "co size": 63000, "di size": 63000, "hr size": 63000, "ir size": 63000, "shared blocks": true, "type exception": true, "defaults": { "value": { "bits": 0, "uint16": 0, (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) "uint32": 0, "float32": 0.0, "string": " " }, "action": { "bits": null, "uint16": "register", "uint32": "register", "float32": "register", "string": null } } }, "invalid": [ 1 ], "write": [ 5 ], "bits": [ {"addr": 2, "value": 7} ], "uint16": [ {"addr": 3, "value": 17001}, 2100 ], "uint32": [ {"addr": 4, "value": 617001}, [3037, 3038] ], "float32": [ {"addr": 6, "value": 404.17}, [4100, 4101] ], "string": [ 5047, {"addr": [16, 20], "value": "A_B_C_D_E_"} ], "repeat": [ ] }, "device_try": { "setup": { "co size": 63000, "di size": 63000, "hr size": 63000, "ir size": 63000, "shared blocks": true, "type exception": true, "defaults": { "value": { "bits": 0, (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) "uint16": 0, "uint32": 0, "float32": 0.0, "string": " " }, "action": { "bits": null, "uint16": "register", "uint32": "register", "float32": "register", "string": null } } }, "invalid": [ [0, 5], 77 ], "write": [ 10, [61, 76] ], "bits": [ 10, 1009, [1116, 1119], {"addr": 1144, "value": 1}, {"addr": [1148,1149], "value": 32117}, {"addr": [1208, 1306], "action": "random"} ], "uint16": [ 11, 2027, [2126, 2129], {"addr": 2164, "value": 1}, {"addr": [2168,2169], "value": 32117}, {"addr": [2208, 2306], "action": null} ], "uint32": [ 12, 3037, [3136, 3139], {"addr": 3174, "value": 1}, {"addr": [3188,3189], "value": 32514}, {"addr": [3308, 3406], "action": null}, {"addr": [3688, 3878], "value": 115, "action": "increment"} ], "float32": [ 14, 4047, [4146, 4149], {"addr": 4184, "value": 1}, (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) {"addr": [4198,4191], "value": 32514.1}, {"addr": [4308, 4406], "action": null}, {"addr": [4688, 4878], "value": 115.7, "action": "increment"} ], "string": [ {"addr": [16, 20], "value": "A_B_C_D_E_"}, 5047, [5146, 5149], {"addr": [529, 544], "value": "Brand name, 32 bytes...........X"} ], "repeat": [ {"addr": [0, 999], "to": [10000, 10999]}, {"addr": [10, 1999], "to": [11000, 11999]} ] } }, "device_minimum": { "setup": { "co size": 10, "di size": 10, "hr size": 10, "ir size": 10, "shared blocks": true, "type exception": false, "defaults": { "value": { "bits": 0, "uint16": 0, "uint32": 0, "float32": 0.0, "string": " " }, "action": { "bits": null, "uint16": null, "uint32": null, "float32": null, "string": null } } }, "invalid": [], "write": [], "bits": [], "uint16": [ [0, 9] ], "uint32": [], "float32": [], "string": [], "repeat": [] } (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) } } 5.1.6 Configuration used for test { "server_list": { "server": { "comm": "tcp", "host": "0.0.0.0", "port": 5020, "ignore_missing_slaves": false, "framer": "socket", "identity": { 
"VendorName": "pymodbus", "ProductCode": "PM", "VendorUrl": "https://github.com/pymodbus-dev/pymodbus/", "ProductName": "pymodbus Server", "ModelName": "pymodbus Server", "MajorMinorRevision": "3.1.0" } }, "server_try_serial": { "comm": "serial", "port": "/dev/tty0", "stopbits": 1, "bytesize": 8, "parity": "N", "baudrate": 9600, "timeout": 3, "reconnect_delay": 2, "framer": "rtu", "identity": { "VendorName": "pymodbus", "ProductCode": "PM", "VendorUrl": "https://github.com/pymodbus-dev/pymodbus/", "ProductName": "pymodbus Server", "ModelName": "pymodbus Server", "MajorMinorRevision": "3.1.0" } }, "server_try_tls": { "comm": "tls", "host": "0.0.0.0", "port": 5020, "certfile": "certificates/pymodbus.crt", "keyfile": "certificates/pymodbus.key", "ignore_missing_slaves": false, "framer": "tls", "identity": { (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) "VendorName": "pymodbus", "ProductCode": "PM", "VendorUrl": "https://github.com/pymodbus-dev/pymodbus/", "ProductName": "pymodbus Server", "ModelName": "pymodbus Server", "MajorMinorRevision": "3.1.0" } }, "server_test_try_udp": { "comm": "udp", "host": "0.0.0.0", "port": 5020, "ignore_missing_slaves": false, "framer": "socket", "identity": { "VendorName": "pymodbus", "ProductCode": "PM", "VendorUrl": "https://github.com/pymodbus-dev/pymodbus/", "ProductName": "pymodbus Server", "ModelName": "pymodbus Server", "MajorMinorRevision": "3.1.0" } } }, "device_list": { "device": { "setup": { "co size": 63000, "di size": 63000, "hr size": 63000, "ir size": 63000, "shared blocks": true, "type exception": true, "defaults": { "value": { "bits": 0, "uint16": 0, "uint32": 0, "float32": 0.0, "string": " " }, "action": { "bits": null, "uint16": "increment", "uint32": "increment", "float32": "increment", "string": null } } }, "invalid": [ 1 (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) ], "write": [ 3 ], "bits": [ {"addr": 2, "value": 7} ], "uint16": [ {"addr": 3, "value": 17001, "action": null}, 2100 ], "uint32": [ {"addr": [4, 5], "value": 617001, "action": null}, [3037, 3038] ], "float32": [ {"addr": [6, 7], "value": 404.17}, [4100, 4101] ], "string": [ 5047, {"addr": [16, 20], "value": "A_B_C_D_E_"} ], "repeat": [ ] }, "device_try": { "setup": { "co size": 63000, "di size": 63000, "hr size": 63000, "ir size": 63000, "shared blocks": true, "type exception": true, "defaults": { "value": { "bits": 0, "uint16": 0, "uint32": 0, "float32": 0.0, "string": " " }, "action": { "bits": null, "uint16": null, "uint32": null, "float32": null, "string": null } } }, "invalid": [ (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) [0, 5], 77 ], "write": [ 10 ], "bits": [ 10, 1009, [1116, 1119], {"addr": 1144, "value": 1}, {"addr": [1148,1149], "value": 32117}, {"addr": [1208, 1306], "action": "random"} ], "uint16": [ 11, 2027, [2126, 2129], {"addr": 2164, "value": 1}, {"addr": [2168,2169], "value": 32117}, {"addr": [2208, 2304], "action": "increment"}, {"addr": 2305, "value": 50, "action": "increment", "kwargs": {"minval": 45, "maxval": 155} }, {"addr": 2306, "value": 50, "action": "random", "kwargs": {"minval": 45, "maxval": 55} } ], "uint32": [ [12, 13], [3037, 3038], [3136, 3139], {"addr": [3174, 3175], "value": 1}, {"addr": [3188,3189], "value": 32514}, {"addr": [3308, 3407], "action": null}, {"addr": [3688, 3875], "value": 115, "action": "increment"}, {"addr": [3876, 3877], "value": 50000, "action": "increment", "kwargs": {"minval": 45000, "maxval": 55000} }, 
{"addr": [3878, 3879], "value": 50000, "action": "random", "kwargs": {"minval": 45000, "maxval": 55000} } ], "float32": [ (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) [14, 15], [4047, 4048], [4146, 4149], {"addr": [4184, 4185], "value": 1}, {"addr": [4188, 4191], "value": 32514.2}, {"addr": [4308, 4407], "action": null}, {"addr": [4688, 4875], "value": 115.7, "action": "increment"}, {"addr": [4876, 4877], "value": 50000.0, "action": "increment", "kwargs": {"minval": 45000.0, "maxval": 55000.0} }, {"addr": [4878, 48779], "value": 50000.0, "action": "random", "kwargs": {"minval": 45000.0, "maxval": 55000.0} } ], "string": [ {"addr": [16, 20], "value": "A_B_C_D_E_"}, {"addr": [529, 544], "value": "Brand name, 32 bytes...........X"} ], "repeat": [ ] } } } 5.2 Simulator datastore The simulator datastore is an advanced datastore. The simulator allows to simulate the registers of a real life modbus device by adding a simple dict (definition see Device entries). The simulator datastore allows to add actions (functions) to a register, and thus allows a low level automation. Documentation pymodbus.datastore.ModbusSimulatorContext PyModbus, Release 3.6.0dev 5.3 Web frontend TO BE DOCUMENTED. 5.3.1 pymodbus.simulator The easiest way to run the simulator with web is to use “pymodbus.simulator” from the commandline. TO BE DOCUMENTED. HTTP server for modbus simulator. class pymodbus.server.simulator.http_server.CallTracer(call: bool = False, fc: int = -1, address: int Bases: object Define call/response traces class pymodbus.server.simulator.http_server.CallTypeMonitor(active: bool = False, trace_response: False) Bases: object Define Request/Response monitor class pymodbus.server.simulator.http_server.CallTypeResponse(active: int = -1, split: int = 0, delay: Bases: object Define Response manipulation class pymodbus.server.simulator.http_server.ModbusSimulatorServer(modbus_server: str = 'server', modbus_device: str = 'device', log_file: str = 'server.log', json_file: str = 'setup.json', custom_actions_module: str | None = None) Bases: object ModbusSimulatorServer. Parameters • modbus_server – Server name in json file (default: “server”) • modbus_device – Device name in json file (default: “client”) • http_host – TCP host for HTTP (default: “localhost”) • http_port – TCP port for HTTP (default: 8080) • json_file – setup file (default: “setup.json”) • custom_actions_module – python module with custom actions (default: none) PyModbus, Release 3.6.0dev if either http_port or http_host is none, HTTP will not be started. This class starts a http server, that serves a couple of endpoints: • “<addr>/” static files • “<addr>/api/log” log handling, HTML with GET, REST-API with post • “<addr>/api/registers” register handling, HTML with GET, REST-API with post • “<addr>/api/calls” call (function code / message) handling, HTML with GET, REST-API with post • “<addr>/api/server” server handling, HTML with GET, REST-API with post Example: from pymodbus.server import StartAsyncSimulatorServer async def run(): simulator = StartAsyncSimulatorServer( modbus_server="my server", modbus_device="my device", http_host="localhost", http_port=8080) await simulator.start() ... await simulator.close() async start_modbus_server(app) Start Modbus server as asyncio task. async stop_modbus_server(app) Stop modbus server. async run_forever(only_start=False) Start modbus and http servers. async stop() Stop modbus and http servers. async handle_html_static(request) Handle static html. 
async handle_html(request)
    Handle html.
async handle_json(request)
    Handle api registers.
build_html_registers(params, html)
    Build html registers page.
build_html_calls(params, html)
    Build html calls page.
build_html_log(_params, html)
    Build html log page.
build_html_server(_params, html)
    Build html server page.
build_json_registers(params, json_dict)
    Build json registers page.
build_json_calls(params, json_dict)
    Build json calls page.
build_json_log(params, json_dict)
    Build json log page.
build_json_server(params, json_dict)
    Build json server page.
helper_build_html_submit(params)
    Build html register submit.
action_clear(_params, _range_start, _range_stop)
    Clear register filter.
action_stop(_params, _range_start, _range_stop)
    Stop call monitoring.
action_reset(_params, _range_start, _range_stop)
    Reset call simulation.
action_add(params, range_start, range_stop)
    Build list of registers matching filter.
action_monitor(params, range_start, range_stop)
    Start monitoring calls.
action_set(params, _range_start, _range_stop)
    Set register value.
action_simulate(params, _range_start, _range_stop)
    Simulate responses.
server_response_manipulator(response)
    Manipulate responses. All server responses pass through this filter before being sent. The filter returns:
    • response, either original or modified,
    • skip_encoding, which signals whether or not to encode the response.
server_request_tracer(request, *_addr)
    Trace requests. All server requests pass through this filter before being handled.

5.4 Pymodbus simulator ReST API

TO BE DOCUMENTED.

CHAPTER SIX
EXAMPLES

Examples are divided in 2 parts:

The first part is some simple client examples, which can be copied and run directly; they show the basic functionality of the library.

The second part is more advanced examples; in order not to duplicate code, these require you to download the examples directory and run the examples from within it.

6.1 Ready to run examples

These examples are very basic, showing how a client can communicate with a server. You need to modify the code to adapt it to your situation.

6.1.1 Simple asynchronous client

Source: examples/simple_async_client.py

#!/usr/bin/env python3
"""Pymodbus asynchronous client example.

An example of a single threaded asynchronous client.

usage: simple_async_client.py

All options must be adapted in the code.

The corresponding server must be started before e.g.
as: python3 server_sync.py """ import asyncio from pymodbus import Framer, pymodbus_apply_logging_config from pymodbus.client import ( AsyncModbusSerialClient, AsyncModbusTcpClient, AsyncModbusTlsClient, AsyncModbusUdpClient, ) from pymodbus.exceptions import ModbusException (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) from pymodbus.pdu import ExceptionResponse async def run_async_simple_client(comm, host, port, framer=Framer.SOCKET): """Run async client.""" # activate debugging pymodbus_apply_logging_config("DEBUG") print("get client") if comm == "tcp": client = AsyncModbusTcpClient( host, port=port, framer=framer, # timeout=10, # retries=3, # retry_on_empty=False, # close_comm_on_error=False, # strict=True, # source_address=("localhost", 0), ) elif comm == "udp": client = AsyncModbusUdpClient( host, port=port, framer=framer, # timeout=10, # retries=3, # retry_on_empty=False, # close_comm_on_error=False, # strict=True, # source_address=None, ) elif comm == "serial": client = AsyncModbusSerialClient( port, framer=framer, # timeout=10, # retries=3, # retry_on_empty=False, # close_comm_on_error=False, # strict=True, baudrate=9600, bytesize=8, parity="N", stopbits=1, # handle_local_echo=False, ) elif comm == "tls": client = AsyncModbusTlsClient( host, (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) port=port, framer=Framer.TLS, # timeout=10, # retries=3, # retry_on_empty=False, # close_comm_on_error=False, # strict=True, # sslctx=sslctx, certfile="../examples/certificates/pymodbus.crt", keyfile="../examples/certificates/pymodbus.key", # password="none", server_hostname="localhost", ) else: # pragma no cover print(f"Unknown client {comm} selected") return print("connect to server") await client.connect() # test client is connected assert client.connected print("get and verify data") try: # See all calls in client_calls.py rr = await client.read_coils(1, 1, slave=1) except ModbusException as exc: # pragma no cover print(f"Received ModbusException({exc}) from library") client.close() return if rr.isError(): # pragma no cover print(f"Received Modbus library error({rr})") client.close() return if isinstance(rr, ExceptionResponse): # pragma no cover print(f"Received Modbus library exception ({rr})") # THIS IS NOT A PYTHON EXCEPTION, but a valid modbus message client.close() print("close connection") client.close() if __name__ == "__main__": asyncio.run( run_async_simple_client("tcp", "127.0.0.1", 5020), debug=True ) # pragma: no cover PyModbus, Release 3.6.0dev 6.1.2 Simple synchronous client Source: examples/simple_sync_client.py #!/usr/bin/env python3 """Pymodbus synchronous client example. An example of a single threaded synchronous client. usage: simple_client_async.py All options must be adapted in the code The corresponding server must be started before e.g. 
as: python3 server_sync.py """ # --------------------------------------------------------------------------- # # import the various client implementations # --------------------------------------------------------------------------- # from pymodbus import Framer, pymodbus_apply_logging_config from pymodbus.client import ( ModbusSerialClient, ModbusTcpClient, ModbusTlsClient, ModbusUdpClient, ) from pymodbus.exceptions import ModbusException from pymodbus.pdu import ExceptionResponse def run_sync_simple_client(comm, host, port, framer=Framer.SOCKET): """Run sync client.""" # activate debugging pymodbus_apply_logging_config("DEBUG") print("get client") if comm == "tcp": client = ModbusTcpClient( host, port=port, framer=framer, # timeout=10, # retries=3, # retry_on_empty=False,y # close_comm_on_error=False, # strict=True, # source_address=("localhost", 0), ) elif comm == "udp": client = ModbusUdpClient( host, port=port, (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) framer=framer, # timeout=10, # retries=3, # retry_on_empty=False, # close_comm_on_error=False, # strict=True, # source_address=None, ) elif comm == "serial": client = ModbusSerialClient( port, framer=framer, # timeout=10, # retries=3, # retry_on_empty=False, # close_comm_on_error=False,. # strict=True, baudrate=9600, bytesize=8, parity="N", stopbits=1, # handle_local_echo=False, ) elif comm == "tls": client = ModbusTlsClient( host, port=port, framer=Framer.TLS, # timeout=10, # retries=3, # retry_on_empty=False, # close_comm_on_error=False, # strict=True, # sslctx=None, certfile="../examples/certificates/pymodbus.crt", keyfile="../examples/certificates/pymodbus.key", # password=None, server_hostname="localhost", ) else: # pragma no cover print(f"Unknown client {comm} selected") return print("connect to server") client.connect() print("get and verify data") try: rr = client.read_coils(1, 1, slave=1) except ModbusException as exc: print(f"Received ModbusException({exc}) from library") client.close() (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) return if rr.isError(): # pragma no cover print(f"Received Modbus library error({rr})") client.close() return if isinstance(rr, ExceptionResponse): # pragma no cover print(f"Received Modbus library exception ({rr})") # THIS IS NOT A PYTHON EXCEPTION, but a valid modbus message client.close() print("close connection") # pragma no cover client.close() # pragma no cover if __name__ == "__main__": run_sync_simple_client("tcp", "127.0.0.1", "5020") # pragma: no cover 6.1.3 Client performance sync vs async Source: examples/client_performance.py #!/usr/bin/env python3 """Test performance of client: sync vs. async This example show how much faster the async version is. example run: (pymodbus) % ./client_performance.py --- Testing sync client v3.4.1 running 1000 call (each 10 registers), took 114.10 seconds Averages 114.10 ms pr call and 11.41 ms pr register. --- Testing async client v3.4.1 running 1000 call (each 10 registers), took 0.33 seconds Averages 0.33 ms pr call and 0.03 ms pr register. 
""" import asyncio import time from pymodbus import Framer from pymodbus.client import AsyncModbusSerialClient, ModbusSerialClient LOOP_COUNT = 1000 REGISTER_COUNT = 10 def run_sync_client_test(): """Run sync client.""" print("--- Testing sync client v3.4.1") client = ModbusSerialClient( (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) "/dev/ttys007", framer_name=Framer.RTU, baudrate=9600, ) client.connect() assert client.connected start_time = time.time() for _i in range(LOOP_COUNT): rr = client.read_input_registers(1, REGISTER_COUNT, slave=1) if rr.isError(): print(f"Received Modbus library error({rr})") break client.close() run_time = time.time() - start_time avg_call = (run_time / LOOP_COUNT) * 1000 avg_register = avg_call / REGISTER_COUNT print( f"running {LOOP_COUNT} call (each {REGISTER_COUNT} registers), took {run_time:. ˓→2f} seconds" ) print(f"Averages {avg_call:.2f} ms pr call and {avg_register:.2f} ms pr register.") async def run_async_client_test(): """Run async client.""" print("--- Testing async client v3.4.1") client = AsyncModbusSerialClient( "/dev/ttys007", framer_name=Framer.RTU, baudrate=9600, ) await client.connect() assert client.connected start_time = time.time() for _i in range(LOOP_COUNT): rr = await client.read_input_registers(1, REGISTER_COUNT, slave=1) if rr.isError(): print(f"Received Modbus library error({rr})") break client.close() run_time = time.time() - start_time avg_call = (run_time / LOOP_COUNT) * 1000 avg_register = avg_call / REGISTER_COUNT print( f"running {LOOP_COUNT} call (each {REGISTER_COUNT} registers), took {run_time:. ˓→2f} seconds" ) print(f"Averages {avg_call:.2f} ms pr call and {avg_register:.2f} ms pr register.") (continues on next page) PyModbus, Release 3.6.0dev (continued from previous page) if __name__ == "__main__": run_sync_client_test() asyncio.run(run_async_client_test()) 6.2 Advanced examples These examples are considered essential usage examples, and are guaranteed to work, because they are tested au- tomatilly with each dev branch commit using CI. Tip: The examples needs to be run from within the examples directory, unless you modify them. Most examples use helper.py and client_*.py or server_*.py. This is done to avoid maintaining the same code in multiple files. • examples.zip • examples.tgz 6.2.1 Client asynchronous calls Source: examples/client_async_calls.py Pymodbus Client modbus async all calls example. Please see method async_template_call for a template on how to make modbus calls and check for different error conditions. The handle* functions each handle a set of modbus calls with the same register type (e.g. coils). All available modbus calls are present. If you are performing a request that is not available in the client mixin, you have to perform the request like this instead: from pymodbus.diag_message import ClearCountersRequest from pymodbus.diag_message import ClearCountersResponse request = ClearCountersRequest() response = client.execute(request) if isinstance(response, ClearCountersResponse): ... do something with the response This example uses client_async.py and client_sync.py to handle connection, and have the same options. The corresponding server must be started before e.g. as: ./server_async.py PyModbus, Release 3.6.0dev 6.2.2 Client asynchronous Source: examples/client_async.py Pymodbus asynchronous client example. 
usage: client_async.py [-h] [-c {tcp,udp,serial,tls}]
                       [-f {ascii,binary,rtu,socket,tls}]
                       [-l {critical,error,warning,info,debug}] [-p PORT]
                       [--baudrate BAUDRATE] [--host HOST]

-h, --help
    show this help message and exit
-c, --comm {tcp,udp,serial,tls}
    set communication, default is tcp
-f, --framer {ascii,binary,rtu,socket,tls}
    set framer, default depends on --comm
-l, --log {critical,error,warning,info,debug}
    set log level, default is info
-p, --port PORT
    set port
--baudrate BAUDRATE
    set serial device baud rate
--host HOST
    set host, default is 127.0.0.1

The corresponding server must be started before e.g. as: python3 server_sync.py

6.2.3 Client calls

Source: examples/client_calls.py

Pymodbus Client modbus all calls example.

Please see method template_call for a template on how to make modbus calls and check for different error conditions. The handle* functions each handle a set of modbus calls with the same register type (e.g. coils).

All available modbus calls are present. If you are performing a request that is not available in the client mixin, you have to perform the request like this instead:

from pymodbus.diag_message import ClearCountersRequest
from pymodbus.diag_message import ClearCountersResponse

request = ClearCountersRequest()
response = client.execute(request)
if isinstance(response, ClearCountersResponse):
    ...  # do something with the response

This example uses client_async.py and client_sync.py to handle connection, and has the same options. The corresponding server must be started before e.g. as: ./server_async.py

6.2.4 Client custom message

Source: examples/client_custom_msg.py

Pymodbus Synchronous Client Examples.

The following is an example of how to use the synchronous modbus client implementation from pymodbus:

with ModbusClient("127.0.0.1") as client:
    result = client.read_coils(1, 10)
    print(result)

6.2.5 Client payload

Source: examples/client_payload.py

Pymodbus Client Payload Example.

This example shows how to build a client with a complicated memory layout using the payload builder. It works out of the box together with server_payload.py.

6.2.6 Client synchronous

Source: examples/client_sync.py

Pymodbus Synchronous Client Example.

An example of a single threaded synchronous client.

usage: client_sync.py [-h] [-c {tcp,udp,serial,tls}]
                      [-f {ascii,binary,rtu,socket,tls}]
                      [-l {critical,error,warning,info,debug}] [-p PORT]
                      [--baudrate BAUDRATE] [--host HOST]

-h, --help
    show this help message and exit
-c, --comm {tcp,udp,serial,tls}
    set communication, default is tcp
-f, --framer {ascii,binary,rtu,socket,tls}
    set framer, default depends on --comm
-l, --log {critical,error,warning,info,debug}
    set log level, default is info
-p, --port PORT
    set port
--baudrate BAUDRATE
    set serial device baud rate
--host HOST
    set host, default is 127.0.0.1

The corresponding server must be started before e.g. as: python3 server_sync.py

6.2.7 Server asynchronous

Source: examples/server_async.py

Pymodbus asynchronous Server Example.

An example of a multi-threaded asynchronous server.
usage: server_async.py [-h] [--comm {tcp,udp,serial,tls}] [--framer {ascii,binary,rtu,socket,tls}] [--log {critical,error,warning,info,debug}] [--port PORT] [--store {sequential,sparse,factory,none}] [--slaves SLAVES] -h, --help show this help message and exit -c, --comm {tcp,udp,serial,tls} set communication, default is tcp -f, --framer {ascii,binary,rtu,socket,tls} set framer, default depends on --comm -l, --log {critical,error,warning,info,debug} set log level, default is info -p, --port PORT set port set serial device baud rate --store {sequential,sparse,factory,none} set datastore type --slaves SLAVES set number of slaves to respond to The corresponding client can be started as: python3 client_sync.py 6.2.8 Server callback Source: examples/server_callback.py Pymodbus Server With Callbacks. This is an example of adding callbacks to a running modbus server when a value is written to it. 6.2.9 Server tracer Source: examples/server_hook.py Pymodbus Server With request/response manipulator. This is an example of using the builtin request/response tracer to manipulate the messages to/from the modbus server PyModbus, Release 3.6.0dev 6.2.10 Server payload Source: examples/server_payload.py Pymodbus Server Payload Example. This example shows how to initialize a server with a complicated memory layout using builder. 6.2.11 Server synchronous Source: examples/server_sync.py Pymodbus Synchronous Server Example. An example of a single threaded synchronous server. usage: server_sync.py [-h] [--comm {tcp,udp,serial,tls}] [--framer {ascii,binary,rtu,socket,tls}] [--log {critical,error,warning,info,debug}] [--port PORT] [--store {sequential,sparse,factory,none}] [--slaves SLAVES] -h, --help show this help message and exit -c, --comm {tcp,udp,serial,tls} set communication, default is tcp -f, --framer {ascii,binary,rtu,socket,tls} set framer, default depends on --comm -l, --log {critical,error,warning,info,debug} set log level, default is info -p, --port PORT set port set serial device baud rate --store {sequential,sparse,factory,none} set datastore type --slaves SLAVES set number of slaves to respond to The corresponding client can be started as: python3 client_sync.py REMARK It is recommended to use the async server! The sync server is just a thin cover on top of the async server and is in some aspects a lot slower. PyModbus, Release 3.6.0dev 6.2.12 Server updating Source: examples/server_updating.py Pymodbus asynchronous Server with updating task Example. An example of an asynchronous server and a task that runs continuously alongside the server and updates values. usage: server_updating.py [-h] [--comm {tcp,udp,serial,tls}] [--framer {ascii,binary,rtu,socket,tls}] [--log {critical,error,warning,info,debug}] [--port PORT] [--store {sequential,sparse,factory,none}] [--slaves SLAVES] -h, --help show this help message and exit -c, --comm {tcp,udp,serial,tls} set communication, default is tcp -f, --framer {ascii,binary,rtu,socket,tls} set framer, default depends on --comm -l, --log {critical,error,warning,info,debug} set log level, default is info -p, --port PORT set port set serial device baud rate --store {sequential,sparse,factory,none} set datastore type --slaves SLAVES set number of slaves to respond to The corresponding client can be started as: python3 client_sync.py 6.2.13 Simulator example Source: examples/simulator.py Pymodbus simulator server/client Example. An example of how to use the simulator (server) with a client. 
for usage see documentation of simulator Tip: pymodbus.simulator starts the server directly from the commandline PyModbus, Release 3.6.0dev 6.2.14 Simulator datastore example Source: examples/datastore_simulator.py Pymodbus datastore simulator Example. An example of using simulator datastore with json interface. usage: server_simulator.py [-h] [--log {critical,error,warning,info,debug}] [--port PORT] -h, --help show this help message and exit -l, --log {critical,error,warning,info,debug} set log level -p, --port PORT set port to use The corresponding client can be started as: python3 client_sync.py Tip: This is NOT the pymodbus simulator, that is started as pymodbus.simulator. 6.2.15 Message generator Source: examples/message_generator.py Modbus Message Generator. 6.2.16 Message Parser Source: examples/message_parser.py Modbus Message Parser. The following is an example of how to parse modbus messages using the supplied framers. 6.2.17 Modbus forwarder Source: examples/modbus_forwarder.py Pymodbus synchronous forwarder. This is a repeater or converter and an example of just how powerful datastore is. It consist of a server (any comm) and a client (any comm), functionality: a) server receives a read/write request from external client: • client sends a new read/write request to target server • client receives response and updates the datastore PyModbus, Release 3.6.0dev • server sends new response to external client Both server and client are tcp based, but it can be easily modified to any server/client (see client_sync.py and server_sync.py for other communication types) WARNING This example is a simple solution, that do only forward read requests. 6.3 Examples contributions These examples are supplied by users of pymodbus. The pymodbus team thanks for sharing the examples. 6.3.1 Solar Source: :examples/contrib/solar.py Pymodbus Synchronous Client Example. Modified to test long term connection. 6.3.2 Redis datastore Source: examples/contrib/redis_datastore.py Datastore using redis. 6.3.3 Serial Forwarder Source: examples/contrib/serial_forwarder.py Pymodbus SerialRTU2TCP Forwarder usage : python3 serial_forwarder.py –log DEBUG –port “/dev/ttyUSB0” –baudrate 9600 –server_ip “192.168.1.27” –server_port 5020 –slaves 1 2 3 6.3.4 Sqlalchemy datastore Source: examples/contrib/sql_datastore.py Datastore using SQL. PyModbus, Release 3.6.0dev 92 Chapter 6. Examples CHAPTER SEVEN AUTHORS All these versions would not be possible without volunteers! This is a complete list for each major version. A big “thank you” to everybody who helped out. 7.1 Pymodbus version 3 family Thanks to • AKJ7 • Alex • <NAME> • <NAME> • <NAME> • banana-sun • <NAME> • cgernert • corollaries • <NAME> • <NAME> • dhoomakethu • Dries • duc996 • Fredo70 • <NAME> • Ghostkeeper • <NAME>an • Hayden Roche • Iktek • Jakob Ruhe PyModbus, Release 3.6.0dev • <NAME> • jan iversen • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • peufeu2 • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • WouterTuinstra • wriswith • yyokusa 7.2 Pymodbus version 2 family Thanks to • alecjohanson • <NAME> • <NAME> • <NAME> • Cougar • <NAME> • dhoomakethu • dices • <NAME> • <NAME> • er888kh • <NAME> • <NAME> • hackerboygn 94 Chapter 7. 
Authors PyModbus, Release 3.6.0dev • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • Mike • sanjay • Sekenre • <NAME> • <NAME> • tcplomp • <NAME> • <NAME> • <NAME> • Wild Stray • Yegor Yefremov 7.3 Pymodbus version 1 family Thanks to • <NAME> • <NAME> • bashwork • bje- • <NAME> • <NAME> • dhoomakethu • dragoshenron • <NAME> • Eren Inan Canpolat 7.3. Pymodbus version 1 family 95 PyModbus, Release 3.6.0dev • Everley • <NAME> • fleimgruber • francozappa • <NAME> • <NAME> • <NAME> • <NAME> • idahogray • <NAME> • Jack • jbiswas • jon mills • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • sanjay • schubduese42 • semyont • <NAME> • <NAME> • Yegor Yefremov 7.4 Pymodbus version 0 family Thanks to • <NAME> • Galen Collins Import to github was based on code from: • S.W.A.C. GmbH, Germany. • S.W.A.C. Bohemia s.r.o., Czech Republic. • Hynek Petrak 96 Chapter 7. Authors PyModbus, Release 3.6.0dev • Galen Collins 7.4. Pymodbus version 0 family 97 PyModbus, Release 3.6.0dev 98 Chapter 7. Authors CHAPTER EIGHT CHANGELOG All these version would not be possible without a lot of work from volunteers! We, the maintainers, are greatful for each pull requests small or big, that helps make pymodbus a better product. Authors: contains a complete list of volunteers have contributed to each major version. 8.1 Version 3.5.4 • Release errors (pyproject.toml changes). (#1811) 8.2 Version 3.5.3 • Simplify transport_serial (modbus use) (#1808) • Reduce transport_serial (#1807) • Change to pyproject.toml. (#1805) • fixes access to asyncio loop via loop property of SerialTransport (#1804) • Bump aiohttp to support python 3.12. (#1802) • README wrong links. (#1801) • CI caching. (#1796) • Solve pylint unhappy. (#1799) • Clean except last 7 days. (#1798) • Reconect_delay == 0, do not reconnect. (#1795) • Update simulator.py method docstring (#1793) • add type to isError. (#1781) • Allow repr(ModbusException) to return complete information (#1779) • Update docs. (#1777) PyModbus, Release 3.6.0dev 8.3 Version 3.5.2 • server tracer example. (#1773) • sync connect missing. (#1772) • simulator future problem. (#1771) 8.4 Version 3.5.1 • Always close socket on error (reset_sock). (#1767) • Revert reset_socket change. • add close_comm_on_error to example. • Test long term (HomeAsistant problem). (#1765) • Update ruff to 0.0.287 (#1764) • Remove references to ModbusSerialServer.start (#1759) (#1762) • Readd test to get 100% coverage. • transport: Don’t raise a RunTimeError in ModbusProtocol.error_received() (#1758) 8.5 Version 3.5.0 • Async retry (#1752) • test_client: Fix test_client_protocol_execute() (#1751) • Use enums for constants (#1743) • Local Echo Broadcast with Async Clients (#1744) • Fix #1746 . Return missing result (#1748) • Document nullmodem. (#1739) • Add system health check to all tests. (#1736) • Handle partial message in ReadDeviceInformationResponse (#1738) • Broadcast with Handle Local Echo (#1737) • transport_emulator, part II. (#1710) • Added file AUTHORS, to list all Volunteers. (#1734) • Fix #1702 and #1728 (#1733) • Clear retry count when success. (#1732) • RFC: Reduce parameters for REPL server classes (#1714) • retries=1, solved. (#1731) • Impoved the example “server_updating.py” (#1720) • pylint 3.11 (#1730) • Correct retry loop. (#1729) PyModbus, Release 3.6.0dev • Fix faulty not check (#1725) • bugfix local echo handling on sync clients (#1723) • Updated copyright in LICENSE. • Correct README pre-commit. 
• Fix custom message parsing in RTU framer (#1716) • Request tracer (#1715) • pymodbus.server: allow strings for “-p” paramter (#1713) • New nullmodem and transport. (#1696) • xdist loadscope (test is not split). (#1708) • Add client performance example. (#1707) 8.6 Version 3.4.1 • Fix serial startup problems. (#1701) • pass source_address in tcp client. (#1700) • serial server use source_address[0]. (#1699) • Examples coverage nearly 100%. (#1694) • new async serial (#1681) • Docker is not supported (lack of maintainer). (#1693) • Forwarder write_coil –> write_coil. (#1691) • Change default source_address to (0.0.0.0, 502) (#1690) • Update ruff to 0.0.277 (#1689) • Fix dict comprehension (#1687) • Removed requests dependency from contrib/explain.py (#1688) • Fix broken test (#1685) • Fix readme badges (#1682) • Bump aiohttp from 3.8.3 to 3.8.5 (#1680) • pygments from 2.14.0 to 2.15.0 (#1677) 8.7 Version 3.4.0 • Handle partial local echo. (#1675) • clarify handle_local_echo. (#1674) • async_client: add retries/reconnect. (#1672) • Fix 3.11 problem. (#1673) • Add new example simulator server/client. (#1671) • examples/contrib/explain.py leveraging Rapid SCADA (#1665) PyModbus, Release 3.6.0dev • _logger missed basicConfig. (#1670) • Bug fix for #1662 (#1663) • Bug fix for #1661 (#1664) • Fix typo in config.rst (#1660) • test action_increment. (#1659) • test codeql (#1655) • mypy complaints. (#1656) • Remove self.params from async client (#1640) • Drop test of pypy with python 3.8. • repair server_async.py (#1644) • move common framer to base. (#1639) • Restrict Return diag call to bytes. (#1638) • use slave= in diag requests. (#1636) • transport listen in server. (#1628) • CI test. • Integrate transport in server. (#1617) • fix getFrameStart for ExceptionResponse (#1627) • Add min/min to simulator actions. • Change to “sync client” in forwarder example (#1625) • Remove docker (lack of maintenance). (#1623) • Clean defaults (#1618) • Reduce CI log with no debug. (#1616) • prepare server to use transport. (#1607) • Fix RemoteSlaveContext (#1599) • Combine stale and lock. (#1608) • update pytest + extensions. (#1610) • Change version follow PEP 440. (#1609) • Fix regression with REPL server not listening (#1604) • Remove handler= for server classes. (#1602) • Fix write function codes (#1598) • transport nullmodem (#1591) • move test of examples to subdirectory. (#1592) • transport as object, not base class. (#1572) • Simple examples. (#1590) • transport_connect as bool. (#1587) • Prepare dev (#1588) PyModbus, Release 3.6.0dev • Release corrections. (#1586) 8.8 Version 3.3.2 • Fix RemoteSlaveContext (#1599) • Change version follow PEP 440. (#1609) • Fix regression with REPL server not listening (#1604) • Fix write function codes (#1598) • Release corrections. (#1586) 8.9 Version 3.3.1 • transport fixes and 100% test coverage. (#1580) • Delay self.loop until connect(). (#1579) • Added mechanism to determine if server did not start cleanly (#1539) • Proof transport reconnect works. (#1577) • Fix non-shared block doc in config.rst. (#1573) 8.10 Version 3.3.0 • Stabilize windows tests. (#1567) • Bump mypy 1.3.0 (#1568) • Transport integrated in async clients. (#1541) • Client async corrections (due to 3.1.2) (#1565) • Server_async[udp], solve 3.1.1 problem. (#1564) • Remove ModbusTcpDiagClient. (#1560) • Remove old method from Python2/3 transition (#1559) • Switch to ruff’s version of bandit (#1557) • Allow reading/writing address 0 in the simulator (#1552) • Remove references to “defer_start”. 
(#1548)
• Client more robust against faulty response. (#1547)
• Fix missing package_data directives for simulator web (#1544)
• Fix installation instructions (#1543)
• Solve pytest timeout problem. (#1540)
• DiagnosticStatus encode missing tuple check. (#1533)
• test SparseDataStore. (#1532)
• BinaryPayloadBuilder.to_string to BinaryPayloadBuilder.encode (#1526)
• Adding flake8-pytest-style to ruff (#1520)
• Simplify version management. (#1522)
• pylint and pre-commit autoupdate (#1519)
• Add type hint (#1512)
• Add action to lock issues/PR. (#1508)
• New common transport layer. (#1492)
• Solve serial close raise problem.
• Remove old config values (#1503)
• Document pymodbus.simulator. (#1502)
• Refactor REPL server to reduce complexity (#1499)
• Don't catch KeyboardInterrupt twice for REPL server (#1498)
• Refactor REPL client to reduce complexity (#1489)
• pymodbus.server: listen on ID 1 by default (#1496)
• Clean framer/__init__.py (#1494)
• Duplicate transactions in UDP. (#1486)
• clean ProcessIncommingPacket. (#1491)
• Enable pyupgrade (U) rules in ruff (#1484)
• clean_workflow.yaml solve parameter problem.
• Correct wrong import in test. (#1483)
• Implement pyflakes-simplify (#1480)
• Test case for UDP duplicate msg issue (#1470)
• Test of write_coil. (#1479)
• Test reuse of client object. (#1475)
• Comment about addressing when shared=false (#1474)
• Remove old aliases to OSError (#1473)
• pymodbus.simulator fixes (#1463)
• Fix wrong error message with pymodbus console (#1456)
• update modbusrtuframer (#1435)
• Server multidrop test. (#1451)
• mypy problem ModbusResponse.

8.11 Version 3.2.2
• Add forgotten await

8.12 Version 3.2.1
• add missing server.start(). (#1443)
• Don't publish universal (Python2 / Python 3) wheels (#1423)
• Remove unnecessary custom LOG_LEVEL check (#1424)
• Include py.typed in package (#1422)

8.13 Version 3.2.0
• Add value <-> registers converter helpers. (#1413)
• Add pre-commit config (#1406)
• Make baud rate configurable for examples (#1410)
• Clean __init__ and update log module. (#1411)
• Simulator add calls functionality. (#1390)
• Add note about not being thread safe. (#1404)
• Update docker-publish.yml
• Forward retry_on_empty and retries by calling transaction (#1401)
• serial sync recv interval (#1389)
• Add tests for writing multiple writes with a single value (#1402)
• Enable mypy in CI (#1388)
• Limit use of Singleton. (#1397)
• Cleanup interfaces (#1396)
• Add request names. (#1391)
• Simulator, register look and feel. (#1387)
• Fix enum for REPL server (#1384)
• Remove unneeded attribute (#1383)
• Fix mypy errors in reactive server (#1381)
• remove nosec (#1379)
• Fix type hints for http_server (#1369)
• Merge pull request #1380 from pymodbus-dev/requirements
• remove second client instance in async mode. (#1367)
• Pin setuptools to prevent breakage with Version including "X" (#1373)
• Lint and type hints for REPL (#1364)
• Clean mixin execute (#1366)
• Remove unused setup_commands.py. (#1362)
• Run black on top-level files and /doc (#1361)
• repl config path (#1359)
• Fix NoReponse -> NoResponse (#1358)
• Make whole main async. (#1355)
• Fix more typing issues (#1351)
• Test sync task (#1341)
• Fixed text in ModbusClientMixin's writes (#1352)
• lint /doc (#1345)
• Remove unused linters (#1344)
• Allow log level as string or integer. (#1343)
• Sync serial, clean recv. (#1340)
• Test server task, async completed (#1318)
• main() should be sync (#1339)
• Fix bug caused by passing wrong arg (#1336)

8.14 Version 3.1.3
• Solve log problem in payload.
• Fix register type check for size bigger than 3 registers (6 bytes) (#1323)
• Re-add SQL tests. (#1329)
• Central logging. (#1324)
• Skip sqlAlchemy test. (#1325)
• Solve 1319 (#1320)

8.15 Version 3.1.2
• Update README.rst
• Correct README link. (#1316)
• More direct readme links for REPL (#1314)
• Add classifier for 3.11 (#1312)
• Update README.rst (#1313)
• Delete ModbusCommonBlock.png (#1311)
• Add modbus standard to README. (#1308)
• fix no auto reconnect after close/connect in TCPclient (#1298)
• Update examples.rst (#1307)
• var name clarification (#1304)
• Bump external libraries. (#1302)
• Reorganize documentation to make it more accessible (#1299)
• Simulator documentation (first version). (#1296)
• Updated datastore Simulator. (#1255)
• Update links to pymodbus-dev (#1291)
• Change riptideio to pymodbus-dev. (#1292)
• #1258 Avoid showing unit as a separate command line argument (#1288)
• Solve docker cache problem. (#1287)

8.16 Version 3.1.1
• add missing server.start() (#1282)
• small performance improvement on debug log (#1279)
• Fix Unix sockets parsing (#1281)
• client: Allow unix domain socket. (#1274)
• transfer timeout to protocol object. (#1275)
• Add ModbusUnixServer / StartAsyncUnixServer. (#1273)
• Added return in AsyncModbusSerialClient.connect (#1271)
• add connect() to the very first example (#1270)
• Solve docker problem. (#1268)
• Test stop of server task. (#1256)

8.17 Version 3.1.0
• Add xdist per default. (#1253)
• Create docker-publish.yml (#1250)
• Parallelize pytest with pytest-xdist (#1247)
• Support Python3.11 (#1246)
• Fix reconnectDelay to be within (100ms, 5min) (#1244)
• Fix typos in comments (#1233)
• WEB simulator, first version. (#1226)
• Clean async serial problem. (#1235)
• terminate when using 'randomize' and 'change_rate' at the same time (#1231)
• Used tooled python and OS (#1232)
• add 'change_rate' randomization option (#1229)
• add check_ci.sh (#1225)
• Simplify CI and use cache. (#1217)
• Solve issue 1210, update simulator (#1211)
• Add missing client calls in mixin.py. (#1206)
• Advanced simulator with cross memory. (#1195)
• AsyncModbusTcp/UdpClient honors delay_ms == 0 (#1203) (#1205)
• Fix #1188 and some pylint issues (#1189)
• Serial receive incomplete bytes, issue #1183 (#1185)
• Handle echo (#1186)
• Add updating server example. (#1176)

8.18 Version 3.0.2
• Add pygments as requirement for repl
• Update datastore remote to handle write requests (#1166)
• Allow multiple servers. (#1164)
• Fix typo. (#1162)
• Transfer parms. to connected client. (#1161)
• Repl enhancements 2 (#1141)
• Server simulator with datastore with json data. (#1157)
• Avoid unwanted reconnects (#1154)
• Do not initialize framer twice. (#1153)
• Allow timeout as float. (#1152)
• Improve Docker Support (#1145)
• Fix unreachable code in AsyncModbusTcpClient (#1151)
• Fix type hints for port and timeout (#1147)
• Start/stop multiple servers. (#1138)
• Server/asyncio.py correct logging when disconnecting the socket (#1135)
• Add Docker and container registry support (#1132)
• Removes undue reported error when forwarding (#1134)
• Obey timeout parameter on connection (#1131)
• Readme typos (#1129)
• Clean noqa directive. (#1125)
• Add isort and activate CI fail for black/isort. (#1124)
• Update examples. (#1117)
• Move logging configuration behind function call (#1120)
• serial2TCP forwarding example (#1116)
• Make serial import dynamic. (#1114)
• Bugfix ModbusSerialServer setup so handler is called correctly. (#1113)
• Clean configurations. (#1111)

8.19 Version 3.0.1
• Faulty release!

8.20 Version 3.0.0
• Solve multiple incoming frames. (#1107)
• Up coverage, tests are 100%. (#1098)
• Prepare for rc1. (#1097)
• Prepare 3.0.0dev5 (#1095)
• Adapt serial tests. (#1094)
• Allow windows. (#1093)
• Remove server sync code and combine with async code. (#1092)
• Solve test of tls by adding certificates and remove bugs (#1080)
• Simplify server implementation. (#1071)
• Do not filter using unit id in the received response (#1076)
• Hex values for repl arguments (#1075)
• All parameters in class parameter. (#1070)
• Add len parameter to decode_bits. (#1062)
• New combined test for all types of clients. (#1061)
• Dev mixin client (#1056)
• Add/update client documentation, including docstrings etc. (#1055)
• Add unit to arguments (#1041)
• Add timeout to all pytest. (#1037)
• Simplify client parent classes. (#1018)
• Clean copyright statements, to ensure we follow FOSS rules. (#1014)
• Rectify sync/async client parameters. (#1013)
• Clean client directory structure for async. (#1010)
• Remove async_io, simplify AsyncModbus<x>Client. (#1009)
• remove init_<something>_client(). (#1008)
• Remove async factory. (#1001)
• Remove loop parameter from client/server (#999)
• add example async client. (#997)
• Change async ModbusSerialClient to framer= from method=. (#994)
• Add forwarder example with multiple slaves. (#992)
• Remove async get_factory. (#990)
• Remove unused ModbusAccessControl. (#989)
• Solve problem with remote datastore. (#988)
• Remove unused schedulers. (#976)
• Remove twisted (#972)
• Remove/Update tornado/twisted tests. (#971)
• remove easy_install and ez_setup (#964)
• Fix mask write register (#961)
• Activate pytest-asyncio. (#949)
• Changed default framer for serial to be ModbusRtuFramer. (#948)
• Remove tornado. (#935)
• Pylint, check method parameter documentation. (#909)
• Add get_response_pdu_size to mask read/write. (#922)
• Minimum python version is 3.8. (#921)
• Ensure make doc fails on warnings and/or errors. (#920)
• Remove central makefile. (#916)
• Re-organize examples (#914)
• Documentation cleanup and clarification (#689)
• Update doc for repl. (#910)
• Include package and tests in coverage measurement (#912)
• Use response byte length if available (#880)
• better fix for rtu incomplete frames (#511)
• Remove twisted/tornado from doc. (#904)
• Update classifiers for pypi. (#907)
• Documentation updates
• PEP8 compatible code
• More tooling and CI updates
• Remove python2 compatibility code (#564)
• Remove Python2 checks and Python2 code snippets
• Misc co-routines related fixes
• Fix CI for python3 and remove PyPI from CI
• Fix mask_write_register call. (#685)
• Add support for byte strings in the device information fields (#693)
• Catch socket going away. (#722)
• Misc typo errors (#718)
• Support python3.10
• Implement asyncio ModbusSerialServer
• ModbusTLS updates (tls handshake, default framer)
• Support broadcast messages with asyncio client
• Fix for lazy loading serial module with asyncio clients.
• Updated examples and tests
• Support python3.7 and above
• Support creating asyncio clients from within coroutines.

8.21 Version 2.5.3
• Fix retries on tcp client failing randomly.
• Fix Asyncio client timeout arg not being used.
• Treat exception codes as valid responses
• Fix examples (modbus_payload)
• Add missing identity argument to async ModbusSerialServer

8.22 Version 2.5.2
• Add kwarg reset_socket to control closing of the socket on read failures (set to True by default).
• Add --reset-socket/--no-reset-socket to REPL client.

8.23 Version 2.5.1
• Bug fix TCP Repl server.
• Support multiple UIDs with REPL server.
• Support serial for URL (sync serial client)
• Bug fix/enhancements, close socket connections only on empty or invalid response

8.24 Version 2.5.0
• Support response types stray and empty in repl server.
• Minor updates in asyncio server.
• Update reactive server to send stray response of given length.
• Transaction manager updates on retries for empty and invalid packets.
• Test fixes for asyncio client and transaction manager.
• Fix sync client and processing of incomplete frames with rtu framers
• Support synchronous diagnostic client (TCP)
• Server updates (REPL and async)
• Handle memory leak in sync servers due to socketserver memory leak
• Minor fix in documentation
• Travis fix for Mac OSX
• Disable unnecessary deprecation warning while using async clients.
• Use Github actions for builds in favor of travis.
• Documentation updates
• Disable strict mode by default.
• Fix ReportSlaveIdRequest request
• Sparse datablock initialization updates.
• Support REPL for modbus server (only python3 and asyncio)
• Fix REPL client for write requests
• Fix examples
• Asyncio server
• Asynchronous server (with custom datablock)
• Fix version info for servers
• Fix and enhancements to Tornado clients (serial and tcp)
• Fix and enhancements to Asyncio client and server
• Update Install instructions
• Synchronous client retry on empty and error enhancements
• Add new modbus state RETRYING
• Support runtime response manipulations for Servers
• Bug fixes with logging module in servers
• Asyncio modbus serial server support

8.25 Version 2.4.0
• Support async modbus tls server/client
• Add local echo option
• Add exponential backoffs on retries.
• REPL - Support broadcasts.
• Fix framers using wrong unit address.
• Update documentation for serial_forwarder example
• Fix error with rtu client for local_echo
• Fix asyncio client not working with already running loop
• Fix passing serial arguments to async clients
• Support timeouts to break out of response await when server goes offline
• Misc updates and bugfixes.

8.26 Version 2.3.0
• Support Modbus TLS (client / server)
• Distribute license with source
• BinaryPayloadDecoder/Encoder now supports float16 on python3.6 and above
• Fix asyncio UDP client/server
• Minor cosmetic updates
• Asyncio Server implementation (Python 3.7 and above only)
• Bug fix for DiagnosticStatusResponse when odd sized response is received
• Remove Pycrypto from dependencies and include cryptodome instead
• Remove SIX requirement pinned to exact version.
• Minor bug-fixes in documentation.

8.27 Version 2.2.0
• Support Python 3.7
• Fix to task cancellations and CRC errors for async serial clients.
• Fix passing serial settings to asynchronous serial server.
• Fix AttributeError when setting interCharTimeout for serial clients.
• Provide an option to disable inter char timeouts with Modbus RTU.
• Add support to register custom requests in clients and server instances.
• Fix read timeout calculation in ModbusTCP.
• Fix SQLDbcontext always returning InvalidAddress error.
• Fix SQLDbcontext update failure
• Fix Binary payload example for endianness.
• Fix BinaryPayloadDecoder.to_coils and BinaryPayloadBuilder.fromCoils methods.
• Fix tornado async serial client TypeError while processing incoming packet.
• Fix erroneous CRC handling in Modbus RTU framer.
• Support broadcasting in Modbus Client and Servers (sync).
• Fix asyncio examples.
• Improved logging in Modbus Server.
• ReportSlaveIdRequest would fetch information from Device identity instead of hardcoded Pymodbus.
• Fix regression introduced in 2.2.0rc2 (Modbus sync client transaction failing)
• Minor update in factory.py: the server log now prints the received request instead of only the function code

8.28 Version 2.1.0
• Fix issues with serial client wherein partial data was read when the response size is unknown.
• Fix infinite sleep loop in RTU Framer.
• Add pygments as extra requirement for repl.
• Add support to modify modbus client attributes via repl.
• Update modbus repl documentation.
• More verbose logs for repl.

8.29 Version 2.0.1
• Fix unicode decoder error with BinaryPayloadDecoder on some platforms
• Avoid unnecessary import of deprecated modules with dependencies on twisted

8.30 Version 2.0.0
• Async client implementation based on Tornado, Twisted and asyncio with backward compatibility support for twisted client.
• Allow reusing existing [running] asyncio loop when creating async client based on asyncio.
• Allow reusing address for Modbus TCP sync server.
• Add support to install tornado as extra requirement while installing pymodbus.
• Support Pymodbus REPL
• Add support to python 3.7.
• Bug fix and enhancements in examples.
• Async client implementation based on Tornado, Twisted and asyncio

8.31 Version 1.5.2
• Fix serial client is_socket_open method

8.32 Version 1.5.1
• Fix device information selectors
• Fixed behaviour of the MEI device information command as a server when an invalid object_id is provided by an external client.
• Add support for repeated MEI device information Object IDs (client/server)
• Added support for encoding device information when it requires more than one PDU to pack.
• Added REPR statements for all synchronous clients
• Added isError method to exceptions; any response received can be tested for success before proceeding.
• Add examples for MEI read device information request

8.33 Version 1.5.0
• Improve transaction speeds for sync clients (RTU/ASCII); retry on empty now happens only when the retry_on_empty kwarg is passed to the client during initialization
• Fix tcp servers (sync/async) not processing requests with transaction id > 255
• Introduce new api to check if the received response is an error or not (response.isError())
• Move timing logic to framers so that, irrespective of client, correct timing logic is followed.
• Move framers from transaction.py to respective modules
• Fix modbus payload builder and decoder
• Async servers can now have an option to defer reactor.run() when using Start<Tcp/Serial/Udp>Server(..., defer_reactor_run=True)
• Fix UDP client issue while handling MEI messages (ReadDeviceInformationRequest)
• Add expected response lengths for WriteMultipleCoilRequest and WriteMultipleRegisterRequest
• Fix _rtu_byte_count_pos for GetCommEventLogResponse
• Add support for repeated MEI device information Object IDs
• Fix struct errors while decoding stray response
• Modbus read retries work only when empty/no message is received
• Change test runner from nosetest to pytest
• Fix misc examples

8.34 Version 1.4.0
• Bug fix Modbus TCP client reading incomplete data
• Check for slave unit id before processing the request for serial clients
• Bug fix serial servers with Modbus Binary Framer
• Bug fix header size for ModbusBinaryFramer
• Bug fix payload decoder with endian Little
• Payload builder and decoder can now deal with the wordorder of 32/64 bit data as well.
• Support Database slave contexts (SqlStore and RedisStore)
• Custom handlers could be passed to Modbus TCP servers
• Asynchronous Server can now be stopped when running on a separate thread (StopServer)
• Signal handlers on Asynchronous servers are now handled based on current thread
• Registers in Database datastore can now be read from remote clients
• Fix examples in contrib (message_parser.py/message_generator.py/remote_server_context)
• Add new example for SqlStore and RedisStore (db store slave context)
• Fix minor compatibility issues with utilities.
• Update test requirements
• Update/Add new unit tests
• Move twisted requirements to extra so that it is not installed by default on pymodbus installation

8.35 Version 1.3.2
• ModbusSerialServer can now be stopped when running on a separate thread.
• Fix issue with server and client wherein the frame buffer had values from a previous unsuccessful transaction
• Fix response length calculation for ModbusASCII protocol
• Fix response length calculation ReportSlaveIdResponse, DiagnosticStatusResponse
• Fix never ending transaction case when response is received without header and CRC
• Fix tests

8.36 Version 1.3.1
• Recall socket recv until a complete response is received
• Register_write_message.py: Observe skip_encode option when encoding a single register request
• Fix wrong expected response length for coils and discrete inputs
• Fix decode errors with ReadDeviceInformationRequest and ReportSlaveIdRequest on Python3
• Move MaskWriteRegisterRequest/MaskWriteRegisterResponse to register_write_message.py from file_message.py
• Python3 compatible examples [WIP]
• Misc updates with examples
• Fix encoding problem for ReadDeviceInformationRequest method on python3
• Fix problem with the usage of ord in python3 while cleaning up receive buffer
• Fix struct unpack errors with BinaryPayloadDecoder on python3 - string vs bytestring error
• Calculate expected response size for ReadWriteMultipleRegistersRequest
• Enhancement for ModbusTcpClient: ModbusTcpClient can now accept connection timeout as one of the parameters
• Misc updates
• Timing improvements over MODBUS Serial interface
• Modbus RTU use 3.5 char silence before and after transactions
• Bug fix on FifoTransactionManager, flush stray data before transaction
• Update repository information
• Added ability to ignore missing slaves
• Added ability to revert to ZeroMode
• Passed a number of extra options through the stack
• Fixed documentation and added a number of examples

8.37 Version 1.2.0
• Reworking the transaction managers to be more explicit and to handle modbus RTU over TCP.
• Adding examples for a number of unique requested use cases
• Allow RTU framers to fail fast instead of staying at fault
• Working on datastore saving and loading

8.38 Version 1.1.0
• Fixing memory leak in clients and servers (removed __del__)
• Adding the ability to override the client framers
• Working on web page api and GUI
• Moving examples and extra code to contrib sections
• Adding more documentation

8.39 Version 1.0.0
• Adding support for payload builders to form complex encoding and decoding of messages.
• Adding BCD and binary payload builders
• Adding support for pydev
• Cleaning up the build tools
• Adding a message encoding generator for testing.
• Now passing kwargs to base of PDU so arguments can be used correctly at all levels of the protocol.
• A number of bug fixes (see bug tracker and commit messages)

CHAPTER NINE
API CHANGES

Versions (X.Y.Z) where Z > 0, e.g. 3.0.1, do NOT have API changes!

9.1 API changes 3.6.0 (future)
• framer= is an enum: pymodbus.Framer, but still accepts a framer class

9.2 API changes 3.5.0
• Remove handler parameter from ModbusUdpServer
• Remove loop parameter from ModbusSerialServer
• Remove handler and allow_reuse_port from repl default config
• Static classes from the constants module now inherit from enum.Enum and use an UPPER_CASE naming scheme; this affects: MoreData, DeviceInformation, ModbusPlusOperation, Endian, ModbusStatus
• Async clients now accept no_resend_on_retry=True, to not resend the request when retrying.
• ModbusSerialServer now accepts request_tracer=.

9.3 API changes 3.4.0
• Modbus<x>Client .connect() returns True/False (connected or not)
• Modbus<x>Server handler=, allow_reuse_addr=, backlog= are no longer accepted
• ModbusTcpClient / AsyncModbusTcpClient no longer support unix path
• StartAsyncUnixServer / ModbusUnixServer removed (never worked on Windows)
• ModbusTlsServer reqclicert= is no longer accepted
• ModbusSerialServer auto_connect= is no longer accepted
• ModbusSimulatorServer.serve_forever(only_start=False) added to allow return

9.4 API changes 3.3.0
• ModbusTcpDiagClient is removed due to lack of support
• Clients have an optional parameter: on_reconnect_callback, a function that will be called just before a reconnection attempt.
• general parameter unit= -> slave=
• move SqlSlaveContext, RedisSlaveContext to examples/contrib (due to lack of maintenance)
• BinaryPayloadBuilder.to_string was renamed to BinaryPayloadBuilder.encode
• on_reconnect_callback for async clients works slightly differently
• utilities/unpack_bitstring now expects an argument named data, not string

9.5 API changes 3.2.0
• helpers to convert values in mixin: convert_from_registers, convert_to_registers
• import pymodbus.version -> from pymodbus import __version__, __version_full__
• pymodbus.pymodbus_apply_logging_config(log_file_name="pymodbus.log") to enable logging to a file
• pymodbus.pymodbus_apply_logging_config defaults to DEBUG; if not called, root settings will be used.
• pymodbus/interfaces/IModbusDecoder removed.
• pymodbus/interfaces/IModbusFramer removed.
• pymodbus/interfaces/IModbusSlaveContext -> pymodbus/datastore/ModbusBaseSlaveContext.
• StartAsync<type>Server removed the defer_start argument, return is None; instead of using defer_start, instantiate the Modbus<type>Server directly.
• ReturnSlaveNoReponseCountResponse has been corrected to ReturnSlaveNoResponseCountResponse
• Option --modbus-config for REPL server renamed to --modbus-config-path
• client.protocol.<something> -> client.<something>
• client.factory.<something> -> client.<something>
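Two of the changes above come up constantly when porting 2.x code: addressing is now passed as slave= (unit= raises a runtime exception since 3.1.0), and raw registers are converted with the mixin helpers added in 3.2.0. A minimal sketch, assuming pymodbus 3.x, a reachable server at an illustrative address, and the DATATYPE enum exposed by the client mixin:

    from pymodbus.client import ModbusTcpClient

    client = ModbusTcpClient("127.0.0.1", port=5020)  # illustrative endpoint
    client.connect()

    # 3.x addressing: slave=, not unit=
    rr = client.read_holding_registers(0, 2, slave=1)
    if not rr.isError():
        # raw registers -> typed value
        value = client.convert_from_registers(rr.registers, client.DATATYPE.FLOAT32)
        # typed value -> raw registers, ready for a write call
        regs = client.convert_to_registers(value, client.DATATYPE.FLOAT32)
        client.write_registers(0, regs, slave=1)

    client.close()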
9.6 API changes 3.1.0
• Added --host to client_* examples, to allow easier use.
• unit= in client calls is no longer converted to slave=, but raises a runtime exception.
• Added missing client calls (all standard requests are now available as methods).
• client.mask_write_register() changed parameters.
• server classes no longer accept reuse_port= (the socket does not accept it)

9.7 API changes 3.0.0
Base for recording changes.

CHAPTER TEN
PYMODBUS INTERNALS

10.1 NullModem
Pymodbus offers a special NullModem transport to help end-to-end test without a network. The NullModem is activated by setting host= (port= for serial) to NULLMODEM_HOST (import pymodbus.transport).
The NullModem works with the normal transport types, and simply substitutes the physical connection:
- Serial (RS-485) typically using a dongle
- TCP
- TLS
- UDP
The NullModem is currently integrated in:
- Modbus<x>Client
- AsyncModbus<x>Client
- Modbus<x>Server
- AsyncModbus<x>Server
Of course the NullModem requires that server and client(s) run in the same python instance.
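In practice that means a test can start a server and a client back to back and talk entirely through memory. A minimal sketch, assuming pymodbus 3.6; the port number is arbitrary and clean server shutdown (ServerAsyncStop) is elided in favor of task cancellation:

    import asyncio

    from pymodbus.client import AsyncModbusTcpClient
    from pymodbus.datastore import ModbusServerContext, ModbusSlaveContext
    from pymodbus.server import StartAsyncTcpServer
    from pymodbus.transport import NULLMODEM_HOST


    async def main():
        # Server and client must live in the same python instance;
        # NULLMODEM_HOST replaces the physical/network connection.
        context = ModbusServerContext(slaves=ModbusSlaveContext(), single=True)
        server = asyncio.create_task(
            StartAsyncTcpServer(context=context, address=(NULLMODEM_HOST, 5020))
        )
        client = AsyncModbusTcpClient(NULLMODEM_HOST, port=5020)
        await client.connect()
        rr = await client.read_coils(0, 4, slave=1)
        print(rr.bits[:4])
        client.close()
        server.cancel()


    asyncio.run(main())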
10.2 Datastore
Datastore is responsible for managing registers for a server.

10.2.1 Datastore classes

class pymodbus.datastore.ModbusSparseDataBlock(values=None, mutable=True)
Create a sparse modbus datastore.
E.g. usage:
sparse = ModbusSparseDataBlock({10: [3, 5, 6, 8], 30: 1, 40: [0]*20})
This would create a datablock with 3 blocks: one starting at offset 10 with length 4, one at 30 with length 1, and one at 40 with length 20.
sparse = ModbusSparseDataBlock([10]*100)
Creates a sparse datablock of length 100 starting at offset 0, with a default value of 10.
sparse = ModbusSparseDataBlock() --> Create empty datablock
sparse.setValues(0, [10]*10) --> Add block 1 at offset 0 with length 10 (default value 10)
sparse.setValues(30, [20]*5) --> Add block 2 at offset 30 with length 5 (default value 20)
If mutable is set to False during initialization, the datablock can not be altered with setValues (new datablocks can not be added).

classmethod create(values=None)
Create sparse datastore. Use setValues to initialize registers.
Parameters: values – Either a list or a dictionary of values
Returns: An initialized datastore

reset()
Reset the store to the initially provided defaults.

validate(address, count=1)
Check to see if the request is in range.
Parameters:
• address – The starting address
• count – The number of values to test for
Returns: True if the request is within range, False otherwise

getValues(address, count=1)
Return the requested values of the datastore.
Parameters:
• address – The starting address
• count – The number of values to retrieve
Returns: The requested values from a:a+c

setValues(address, values, use_as_default=False)
Set the requested values of the datastore.
Parameters:
• address – The starting address
• values – The new values to be set
• use_as_default – Use the values as default
Raises: ParameterException

class pymodbus.datastore.ModbusSlaveContext(*_args, **kwargs)
This creates a modbus data model with each data access stored in a block.

reset()
Reset all the datastores to their default values.

validate(fc_as_hex, address, count=1)
Validate the request to make sure it is in range.
Parameters:
• fc_as_hex – The function we are working with
• address – The starting address
• count – The number of values to test
Returns: True if the request is within range, False otherwise

getValues(fc_as_hex, address, count=1)
Get count values from datastore.
Parameters:
• fc_as_hex – The function we are working with
• address – The starting address
• count – The number of values to retrieve
Returns: The requested values from a:a+c

setValues(fc_as_hex, address, values)
Set the datastore with the supplied values.
Parameters:
• fc_as_hex – The function we are working with
• address – The starting address
• values – The new values to be set

register(function_code, fc_as_hex, datablock=None)
Register a datablock with the slave context.
Parameters:
• function_code – function code (int)
• fc_as_hex – string representation of function code (e.g. "cf")
• datablock – datablock to associate with this function code

class pymodbus.datastore.ModbusServerContext(slaves=None, single=True)
This represents a master collection of slave contexts.
If single is set to true, it will be treated as a single context so every slave_id returns the same context. If single is set to false, it will be interpreted as a collection of slave contexts.

slaves()
Define slaves.
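Wired together, these classes form a complete server-side data model. A minimal sketch, assuming pymodbus 3.x; the hr= keyword selects the holding register block, and zero_mode=True (assumed available, as in earlier releases) makes request addresses map 1:1 onto block addresses:

    from pymodbus.datastore import (
        ModbusServerContext,
        ModbusSlaveContext,
        ModbusSparseDataBlock,
    )

    # Three sparse blocks, as in the ModbusSparseDataBlock examples above.
    block = ModbusSparseDataBlock({10: [3, 5, 6, 8], 30: 1, 40: [0] * 20})

    # Use the sparse block for holding registers; other block types default.
    store = ModbusSlaveContext(hr=block, zero_mode=True)

    # Slave context calls take the function code first (3 = holding registers).
    print(store.validate(3, 10, 4))    # True: block 10..13 exists
    store.setValues(3, 40, [7] * 4)
    print(store.getValues(3, 40, 4))   # [7, 7, 7, 7]

    # Master collection: single=True serves this context for every slave id.
    context = ModbusServerContext(slaves=store, single=True)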
class pymodbus.datastore.ModbusSimulatorContext(config: dict[str, Any], custom_actions: dict[str, Callable])
Modbus simulator.
Parameters:
• config – A dict with structure as shown below.
• actions – A dict with "<name>": <function> structure.
Raises: RuntimeError – if json contains errors (msg explains what)
It builds and maintains a virtual copy of a device, with simulation of device specific functions. The device is described in a dict; user supplied actions will be added to the builtin actions. It is used in conjunction with a pymodbus server.
Example:
store = ModbusSimulatorContext(<config dict>, <actions dict>)
StartAsyncTcpServer(<host>, context=store)
Now the server will simulate the defined device with features like:
- invalid addresses
- write protected addresses
- optional control of access for string, uint32, bit/bits
- builtin actions for e.g. reset/datetime, value increment by read
- custom actions
Description of the json file or dict to be supplied:

{
    "setup": {
        "di size": 0,  --> Size of discrete input block (8 bit)
        "co size": 0,  --> Size of coils block (8 bit)
        "ir size": 0,  --> Size of input registers block (16 bit)
        "hr size": 0,  --> Size of holding registers block (16 bit)
        "shared blocks": True,  --> share memory for all blocks (largest size wins)
        "defaults": {
            "value": {  --> Initial values (can be overwritten)
                "bits": 0x01,
                "uint16": 122,
                "uint32": 67000,
                "float32": 127.4,
                "string": " ",
            },
            "action": {  --> default action (can be overwritten)
                "bits": None,
                "uint16": None,
                "uint32": None,
                "float32": None,
                "string": None,
            },
        },
        "type exception": False,  --> return IO exception if read/write on non-boundary
    },
    "invalid": [  --> List of invalid addresses, IO exception returned
        51,  --> single register
        [78, 99],  --> start, end registers, repeated as needed
    ],
    "write": [  --> allow write, default is ReadOnly
        [5, 5]  --> start, end bytes, repeated as needed
    ],
    "bits": [  --> Define bits (1 register == 1 byte)
        [30, 31],  --> start, end registers, repeated as needed
        {"addr": [32, 34], "value": 0xF1},  --> with value
        {"addr": [35, 36], "action": "increment"},  --> with action
        {"addr": [37, 38], "action": "increment", "value": 0xF1},  --> with action and value
        {"addr": [37, 38], "action": "increment", "kwargs": {"min": 0, "max": 100}},  --> with action with arguments
    ],
    "uint16": [  --> Define uint16 (1 register == 2 bytes)
        --> same as type_bits
    ],
    "uint32": [  --> Define 32 bit integers (2 registers == 4 bytes)
        --> same as type_bits
    ],
    "float32": [  --> Define 32 bit floats (2 registers == 4 bytes)
        --> same as type_bits
    ],
    "string": [  --> Define strings (variable number of registers (each 2 bytes))
        [21, 22],  --> start, end registers, define 1 string
        {"addr": [23, 25], "value": "ups"},  --> with value
        {"addr": [26, 27], "action": "user"},  --> with action
        {"addr": [28, 29], "action": "", "value": "user"},  --> with action and value
    ],
    "repeat": [  --> allows to repeat section e.g. for n devices
        {"addr": [100, 200], "to": [50, 275]}  --> Repeat registers 100-200 to 50+ until 275
    ]
}

get_text_register(register)
Get raw register.

classmethod build_registers_from_value(value, is_int)
Build registers from int32 or float32.

classmethod build_value_from_registers(registers, is_int)
Build int32 or float32 value from registers.
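A concrete configuration makes the schema easier to read. This is an illustrative sketch only, loosely modeled on the library's own simulator example; key names follow the schema above, but the sizes, addresses and values are made up and should be adapted:

    from pymodbus.datastore import ModbusSimulatorContext

    demo_config = {
        "setup": {
            "co size": 10,
            "di size": 10,
            "hr size": 70,
            "ir size": 10,
            "shared blocks": True,
            "type exception": True,
            "defaults": {
                "value": {"bits": 0, "uint16": 0, "uint32": 0, "float32": 0.0, "string": " "},
                "action": {"bits": None, "uint16": None, "uint32": None, "float32": None, "string": None},
            },
        },
        "invalid": [1],                   # reads/writes here raise an IO exception
        "write": [[5, 20]],               # everything else stays read only
        "bits": [[2, 3]],
        "uint16": [{"addr": 5, "value": 117}],
        "uint32": [{"addr": [6, 7], "value": 4170}],
        "float32": [{"addr": [8, 9], "value": 3.14}],
        "string": [{"addr": [16, 20], "value": "A demo str"}],  # 5 registers = 10 chars
        "repeat": [],
    }

    # No custom actions; the builtin ones (e.g. "increment") remain available.
    store = ModbusSimulatorContext(demo_config, {})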
10.3 Framer

10.3.1 pymodbus.framer.ascii_framer module
Ascii_framer.

class pymodbus.framer.ascii_framer.ModbusAsciiFramer(decoder, client=None)
Bases: ModbusFramer
Modbus ASCII Frame Controller.

[ Start ][ Address ][ Function ][ Data ][ LRC ][ End ]
   1c        2c          2c        Nc      2c     2c

• data can be 0 - 2x252 chars
• end is "\r\n" (carriage return line feed), however the line feed character can be changed via a special command
• start is ":"

This framer is used for serial transmission. Unlike the RTU protocol, the data in this framer is transferred in plain text ascii.

advanceFrame()
Skip over the current framed message. This allows us to skip over the current message after we have processed it or determined that it contains an error. It also has to reset the current frame header handle.

buildPacket(message)
Create a ready to send modbus packet, built off of a modbus request/response.
Parameters: message – The request/response to send
Returns: The encoded packet

checkFrame()
Check and decode the next frame.
Returns: True if successful, False otherwise

decode_data(data)
Decode data.

frameProcessIncomingPacket(single, callback, slave, _tid=None, **kwargs)
Process new packet pattern.

getFrame()
Get the next frame from the buffer.
Returns: The frame data or ""

isFrameReady()
Check if we should continue decode logic. This is meant to be used in a while loop in the decoding phase to let the decoder know that there is still data in the buffer.
Returns: True if ready, False otherwise

method = 'ascii'

10.3.2 pymodbus.framer.binary_framer module
Binary framer.

class pymodbus.framer.binary_framer.ModbusBinaryFramer(decoder, client=None)
Bases: ModbusFramer
Modbus Binary Frame Controller.

[ Start ][ Address ][ Function ][ Data ][ CRC ][ End ]
   1b        1b          1b        Nb      2b     1b

• data can be 0 - 2x252 chars
• end is "}"
• start is "{"

The idea here is that we implement the RTU protocol, however, instead of using timing for message delimiting, we use start and end of message characters (in this case { and }). Basically, this is a binary framer. The only case we have to watch out for is when a message contains the { or } characters. If we encounter these characters, we simply duplicate them. Hopefully we will not encounter those characters that often, which saves a little bit of bandwidth without needing a real-time system. Protocol defined by jamod.sourceforge.net.

advanceFrame() → None
Skip over the current framed message. This allows us to skip over the current message after we have processed it or determined that it contains an error. It also has to reset the current frame header handle.

buildPacket(message)
Create a ready to send modbus packet.
Parameters: message – The request/response to send
Returns: The encoded packet

checkFrame() → bool
Check and decode the next frame.
Returns: True if we are successful, False otherwise

decode_data(data)
Decode data.

frameProcessIncomingPacket(single, callback, slave, _tid=None, **kwargs)
Process new packet pattern.

getFrame()
Get the next frame from the buffer.
Returns: The frame data or ""

isFrameReady() → bool
Check if we should continue decode logic. This is meant to be used in a while loop in the decoding phase to let the decoder know that there is still data in the buffer.
Returns: True if ready, False otherwise

method = 'binary'

10.3.3 pymodbus.framer.rtu_framer module
RTU framer.

class pymodbus.framer.rtu_framer.ModbusRtuFramer(decoder, client=None)
Bases: ModbusFramer
Modbus RTU Frame controller.

[ Start Wait ][ Address ][ Function Code ][ Data ][ CRC ][ End Wait ]
  3.5 chars       1b            1b           Nb      2b     3.5 chars

Wait refers to the amount of time required to transmit at least x many characters. In this case it is 3.5 characters. Also, if we receive a wait of 1.5 characters at any point, we must trigger an error message. Also, it appears as though this message is little endian. The logic is simplified as the following:

block-on-read: read until 3.5 delay
check for errors
decode

The following table is a listing of the baud wait times for the specified baud rates:

Baud     1.5c (18 bits)    3.5c (38 bits)
1200     13333.3 us        31666.7 us
4800      3333.3 us         7916.7 us
9600      1666.7 us         3958.3 us
19200      833.3 us         1979.2 us
38400      416.7 us          989.6 us

1 Byte = start + 8 bits + parity + stop = 11 bits
(1/Baud)(bits) = delay seconds

advanceFrame()
Skip over the current framed message. This allows us to skip over the current message after we have processed it or determined that it contains an error. It also has to reset the current frame header handle.

buildPacket(message)
Create a ready to send modbus packet.
Parameters: message – The populated request/response to send

checkFrame()
Check if the next frame is available. Return True if we were successful.
1. Populate header
2. Discard frame if UID does not match

decode_data(data)
Decode data.

frameProcessIncomingPacket(single, callback, slave, _tid=None, **kwargs)
Process new packet pattern.

getFrame()
Get the next frame from the buffer.
Returns: The frame data or ""

getFrameStart(slaves, broadcast, skip_cur_frame)
Scan buffer for a relevant frame start.

get_expected_response_length(data)
Get the expected response length.
Parameters: data – Message data read so far
Raises: IndexError – If not enough data to read byte count
Returns: Total frame size

isFrameReady()
Check if we should continue decode logic. This is meant to be used in a while loop in the decoding phase to let the decoder know that there is still data in the buffer.
Returns: True if ready, False otherwise

method = 'rtu'

populateHeader(data=None)
Try to set the headers uid, len and crc. This method examines self._buffer and writes meta information into self._header. Beware that this method will raise an IndexError if self._buffer is not yet long enough.

populateResult(result)
Populate the modbus result header. The serial packets do not have any header information that is copied.
Parameters: result – The response packet

recvPacket(size)
Receive packet from the bus with specified len.
Parameters: size – Number of bytes to read

resetFrame()
Reset the entire message frame. This allows us to skip over errors that may be in the stream. It is hard to know if we are simply out of sync or if there is an error in the stream as we have no way to check the start or end of the message (python just doesn't have the resolution to check for millisecond delays).

sendPacket(message)
Send packets on the bus with 3.5char delay between frames.
Parameters: message – Message to be sent over the bus
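The wait times follow directly from the 11-bit character; note that the tabulated values correspond to 16 and 38 bits on the wire (partial bits truncated), despite the nominal 18-bit label. A short sketch reproducing the table from the formula above:

    def char_time_us(baud: int, chars: float) -> float:
        """Wait time in microseconds for `chars` characters at `baud`.

        One character = start + 8 data bits + parity + stop = 11 bits;
        partial bits are truncated (1.5c -> 16 bits, 3.5c -> 38 bits).
        """
        bits = int(chars * 11)
        return bits * 1_000_000 / baud

    for baud in (1200, 4800, 9600, 19200, 38400):
        print(f"{baud:>6} {char_time_us(baud, 1.5):>9.1f} us {char_time_us(baud, 3.5):>9.1f} us")
    # at 9600 baud: 1666.7 us and 3958.3 us, matching the table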
10.3.4 pymodbus.framer.socket_framer module
Socket framer.

class pymodbus.framer.socket_framer.ModbusSocketFramer(decoder, client=None)
Bases: ModbusFramer
Modbus Socket Frame controller.
Before each modbus TCP message is an MBAP header which is used as a message frame. It allows us to easily separate messages as follows:

[ MBAP Header                 ][ Function Code ][ Data ]
[ tid ][ pid ][ length ][ uid ]
   2b     2b      2b       1b         1b           Nb

while len(message) > 0:
    tid, pid, length, uid = struct.unpack(">HHHB", message)
    request = message[0:7 + length - 1]
    message = message[7 + length - 1:]

* length = uid + function code + data
* The -1 is to account for the uid byte

advanceFrame()
Skip over the current framed message. This allows us to skip over the current message after we have processed it or determined that it contains an error. It also has to reset the current frame header handle.

buildPacket(message)
Create a ready to send modbus packet.
Parameters: message – The populated request/response to send

checkFrame()
Check and decode the next frame. Return true if we were successful.

decode_data(data)
Decode data.

frameProcessIncomingPacket(single, callback, slave, tid=None, **kwargs)
Process new packet pattern. This takes in a new request packet, adds it to the current packet stream, and performs framing on it. That is, it checks for complete messages, and once found, will process all that exist. This handles the case when we read N + 1 or 1 // N messages at a time instead of 1. The processed and decoded messages are pushed to the callback function to process and send.

getFrame()
Return the next frame from the buffered data.
Returns: The next full frame buffer

isFrameReady()
Check if we should continue decode logic. This is meant to be used in a while loop in the decoding phase to let the decoder factory know that there is still data in the buffer.
Returns: True if ready, False otherwise

method = 'socket'
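The MBAP arithmetic is easy to check by hand. A small, self-contained sketch (plain struct, no pymodbus needed) that walks one frame out of a byte stream exactly as the loop above describes; the hex payload is a made-up read-holding-registers request:

    import struct

    # tid=1, pid=0, length=6, uid=1, then FC 3 and a 4-byte request body.
    stream = bytes.fromhex("000100000006010300000002")

    while len(stream) >= 7:
        tid, pid, length, uid = struct.unpack(">HHHB", stream[:7])
        if len(stream) < 6 + length:      # 6 header bytes + length (incl. uid)
            break                         # incomplete frame: wait for more data
        frame = stream[0:7 + length - 1]  # -1 because uid is counted in length
        stream = stream[7 + length - 1:]
        print(f"tid={tid} uid={uid} fc={frame[7]} pdu={frame[7:].hex()}")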
10.4 Constants
Constants for Modbus Server/Client. This is the single location for storing default values for the servers and clients.

class pymodbus.constants.DeviceInformation(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases: int, Enum
Represents what type of device information to read.

BASIC
This is the basic (required) device information to be returned. This includes VendorName, ProductCode, and MajorMinorRevision code.

REGULAR
In addition to basic data objects, the device provides additional and optional identification and description data objects. All of the objects of this category are defined in the standard but their implementation is optional.

EXTENDED
In addition to regular data objects, the device provides additional and optional identification and description private data about the physical device itself. All of these data are device dependent.

SPECIFIC
Request to return a single data object.

BASIC = 1
REGULAR = 2
EXTENDED = 3
SPECIFIC = 4

class pymodbus.constants.Endian(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases: str, Enum
An enumeration representing the various byte endianness.

AUTO
This indicates that the byte order is chosen by the current native environment.

BIG
This indicates that the bytes are in big endian format.

LITTLE
This indicates that the bytes are in little endian format.

Note: I am simply borrowing the format strings from the python struct module for my convenience.

AUTO = '@'
BIG = '>'
LITTLE = '<'

class pymodbus.constants.ModbusPlusOperation(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases: int, Enum
Represents the type of modbus plus request.

GET_STATISTICS
Operation requesting that the current modbus plus statistics be returned in the response.

CLEAR_STATISTICS
Operation requesting that the current modbus plus statistics be cleared and not returned in the response.

GET_STATISTICS = 3
CLEAR_STATISTICS = 4

class pymodbus.constants.ModbusStatus(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases: int, Enum
These represent various status codes in the modbus protocol.

WAITING
This indicates that a modbus device is currently waiting for a given request to finish some running task.

READY
This indicates that a modbus device is currently free to perform the next request task.

ON
This indicates that the given modbus entity is on.

OFF
This indicates that the given modbus entity is off.

SLAVE_ON
This indicates that the given modbus slave is running.

SLAVE_OFF
This indicates that the given modbus slave is not running.

OFF = 0
ON = 65280
READY = 0
SLAVE_OFF = 0
SLAVE_ON = 255
WAITING = 65535

class pymodbus.constants.MoreData(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases: int, Enum
Represents the more follows condition.

NOTHING
This indicates that no more objects are going to be returned.

KEEP_READING
This indicates that there are more objects to be returned.

NOTHING = 0
KEEP_READING = 255
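Since Endian simply reuses struct format characters (and its members are str), the constants can be dropped straight into struct calls when packing register words. A small sketch; the value 0xFF00 doubles as ModbusStatus.ON:

    import struct

    from pymodbus.constants import Endian

    # Endian members are str, so they concatenate with struct format codes.
    print(struct.pack(Endian.BIG + "H", 0xFF00).hex())     # "ff00"
    print(struct.pack(Endian.LITTLE + "H", 0xFF00).hex())  # "00ff"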
10.5 Extra functions
Pymodbus: Modbus Protocol Implementation. Released under the BSD license.

class pymodbus.Framer(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases: str, Enum
These represent the different framers.

ASCII = 'ascii'
BINARY = 'binary'
RTU = 'rtu'
SOCKET = 'socket'
TLS = 'tls'

pymodbus.pymodbus_apply_logging_config(level: str | int = 10, log_file_name: str | None = None)
Apply basic logging configuration used by default by Pymodbus maintainers.
Parameters:
• level – (optional) set log level, if not set it is inherited.
• log_file_name – (optional) also log to the given file
Please call this function to format logging appropriately when opening issues.
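Both helpers show up at client construction time. A minimal sketch, assuming pymodbus 3.6, where framer= accepts the enum (a framer class still works, per the 3.6.0 API note); the serial port and file name are illustrative:

    import logging

    from pymodbus import Framer, pymodbus_apply_logging_config
    from pymodbus.client import ModbusSerialClient

    # DEBUG logging on the console, plus a copy in a file.
    pymodbus_apply_logging_config(logging.DEBUG, log_file_name="pymodbus.log")

    # framer= as an enum member instead of a framer class.
    client = ModbusSerialClient("/dev/ttyUSB0", framer=Framer.RTU, baudrate=9600)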
Bit Reading Request/Response messages.

class pymodbus.bit_read_message.ReadBitsResponseBase(values, slave=0, **kwargs)
Bases: ModbusResponse
Base class for messages responding to bit-reading values. The requested bits can be found in the .bits list.

bits
A list of booleans representing bit values.

decode(data)
Decode response pdu.
Parameters: data – The packet data to decode

encode()
Encode response pdu.
Returns: The encoded packet message

getBit(address)
Get the specified bit's value.
Parameters: address – The bit to query
Returns: The value of the requested bit

resetBit(address)
Set the specified bit to 0.
Parameters: address – The bit to reset

setBit(address, value=1)
Set the specified bit.
Parameters:
• address – The bit to set
• value – The value to set the bit to

class pymodbus.bit_read_message.ReadCoilsRequest(address=None, count=None, slave=0, **kwargs)
Bases: ReadBitsRequestBase
This function code is used to read from 1 to 2000 (0x7d0) contiguous statuses of coils in a remote device. The Request PDU specifies the starting address, i.e. the address of the first coil specified, and the number of coils. In the PDU coils are addressed starting at zero. Therefore coils numbered 1-16 are addressed as 0-15.

execute(context)
Run a read coils request against a datastore. Before running the request, we make sure that the request is in the max valid range (0x001-0x7d0). Next we make sure that the request is valid against the current datastore.
Parameters: context – The datastore to request from
Returns: An initialized ReadCoilsResponse, or an ExceptionResponse if an error occurred

function_code = 1
function_code_name = 'read_coils'

class pymodbus.bit_read_message.ReadCoilsResponse(values=None, slave=0, **kwargs)
Bases: ReadBitsResponseBase
The coils in the response message are packed as one coil per bit of the data field. Status is indicated as 1=ON and 0=OFF. The LSB of the first data byte contains the output addressed in the query. The other coils follow toward the high order end of this byte, and from low order to high order in subsequent bytes. If the returned output quantity is not a multiple of eight, the remaining bits in the final data byte will be padded with zeros (toward the high order end of the byte). The Byte Count field specifies the quantity of complete bytes of data. The requested coils can be found in boolean form in the .bits list.

function_code = 1

class pymodbus.bit_read_message.ReadDiscreteInputsRequest(address=None, count=None, slave=0, **kwargs)
Bases: ReadBitsRequestBase
This function code is used to read from 1 to 2000 (0x7d0) contiguous statuses of discrete inputs in a remote device. The Request PDU specifies the starting address, i.e. the address of the first input specified, and the number of inputs. In the PDU discrete inputs are addressed starting at zero. Therefore discrete inputs numbered 1-16 are addressed as 0-15.

execute(context)
Run a read discrete input request against a datastore. Before running the request, we make sure that the request is in the max valid range (0x001-0x7d0). Next we make sure that the request is valid against the current datastore.
Parameters: context – The datastore to request from
Returns: An initialized ReadDiscreteInputsResponse, or an ExceptionResponse if an error occurred

function_code = 2
function_code_name = 'read_discrete_input'

class pymodbus.bit_read_message.ReadDiscreteInputsResponse(values=None, slave=0, **kwargs)
Bases: ReadBitsResponseBase
The discrete inputs in the response message are packed as one input per bit of the data field. Status is indicated as 1=ON; 0=OFF. The LSB of the first data byte contains the input addressed in the query. The other inputs follow toward the high order end of this byte, and from low order to high order in subsequent bytes. If the returned input quantity is not a multiple of eight, the remaining bits in the final data byte will be padded with zeros (toward the high order end of the byte). The Byte Count field specifies the quantity of complete bytes of data. The requested inputs can be found in boolean form in the .bits list.

function_code = 2

Bit Writing Request/Response.
TODO write mask request/response

class pymodbus.bit_write_message.WriteMultipleCoilsRequest(address=None, values=None, slave=None, **kwargs)
Bases: ModbusRequest
This function code is used to force a sequence of coils to either ON or OFF in a remote device. The Request PDU specifies the coil references to be forced. Coils are addressed starting at zero. Therefore coil numbered 1 is addressed as 0. The requested ON/OFF states are specified by contents of the request data field. A logical "1" in a bit position of the field requests the corresponding output to be ON. A logical "0" requests it to be OFF.

decode(data)
Decode a write coils request.
Parameters: data – The packet data to decode

encode()
Encode write coils request.
Returns: The byte encoded message

execute(context)
Run a write coils request against a datastore.
Parameters: context – The datastore to request from
Returns: The populated response or exception message

function_code = 15
function_code_name = 'write_coils'

get_response_pdu_size()
Get response pdu size: Func_code (1 byte) + Output Address (2 bytes) + Quantity of Outputs (2 bytes).

class pymodbus.bit_write_message.WriteMultipleCoilsResponse(address=None, count=None, **kwargs)
Bases: ModbusResponse
The normal response returns the function code, starting address, and quantity of coils forced.

decode(data)
Decode a write coils response.
Parameters: data – The packet data to decode

encode()
Encode write coils response.
Returns: The byte encoded message

function_code = 15

class pymodbus.bit_write_message.WriteSingleCoilRequest(address=None, value=None, slave=None, **kwargs)
Bases: ModbusRequest
This function code is used to write a single output to either ON or OFF in a remote device. The requested ON/OFF state is specified by a constant in the request data field. A value of FF 00 hex requests the output to be ON. A value of 00 00 requests it to be OFF. All other values are illegal and will not affect the output. The Request PDU specifies the address of the coil to be forced. Coils are addressed starting at zero. Therefore coil numbered 1 is addressed as 0. The requested ON/OFF state is specified by a constant in the Coil Value field. A value of 0xFF00 requests the coil to be ON. A value of 0x0000 requests the coil to be off. All other values are illegal and will not affect the coil.

decode(data)
Decode a write coil request.
Parameters: data – The packet data to decode

encode()
Encode write coil request.
Returns: The byte encoded message

execute(context)
Run a write coil request against a datastore.
Parameters: context – The datastore to request from
Returns: The populated response or exception message

function_code = 5
function_code_name = 'write_coil'

get_response_pdu_size()
Get response pdu size: Func_code (1 byte) + Output Address (2 bytes) + Output Value (2 bytes).

class pymodbus.bit_write_message.WriteSingleCoilResponse(address=None, value=None, **kwargs)
Bases: ModbusResponse
The normal response is an echo of the request, returned after the coil state has been written.

decode(data)
Decode a write coil response.
Parameters: data – The packet data to decode

encode()
Encode write coil response.
Returns: The byte encoded message

function_code = 5
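On the client side these PDUs are normally driven through the mixin convenience calls rather than built by hand. A minimal sketch, assuming pymodbus 3.x and a reachable server at an illustrative address:

    from pymodbus.client import ModbusTcpClient

    client = ModbusTcpClient("127.0.0.1", port=5020)  # illustrative endpoint
    client.connect()

    # FC 5 / FC 15: force a single coil, then a sequence of coils.
    client.write_coil(0, True, slave=1)
    client.write_coils(1, [True, False, True], slave=1)

    # FC 1: read them back; the booleans land in the .bits list.
    rr = client.read_coils(0, 4, slave=1)
    if not rr.isError():
        print(rr.bits[:4])  # e.g. [True, True, False, True]

    client.close()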
Modbus Device Controller.
These are the device management handlers. They should be maintained in the server context and the various methods should be inserted in the correct locations.

class pymodbus.device.DeviceInformationFactory
Bases: object
This is a helper factory that really just hides some of the complexity of processing the device information requests (function code 0x2b 0x0e).

classmethod get(control, read_code=DeviceInformation.BASIC, object_id=0)
Get the requested device data from the system.
Parameters:
• control – The control block to pull data from
• read_code – The read code to process
• object_id – The specific object_id to read
Returns: The requested data (id, length, value)

class pymodbus.device.ModbusDeviceIdentification(info=None, info_name=None)
Bases: object
This is used to supply the device identification for the readDeviceIdentification function. For more information read section 6.21 of the modbus application protocol.

property MajorMinorRevision
property ModelName
property ProductCode
property ProductName
property UserApplicationName
property VendorName
property VendorUrl

summary()
Return a summary of the main items.
Returns: A dictionary of the main items

update(value)
Update the values of this identity using another identity as the value.
Parameters: value – The value to copy values from

class pymodbus.device.ModbusPlusStatistics
Bases: object
This is used to maintain the current modbus plus statistics count. As of right now this is simply a stub to complete the modbus implementation. For more information, see the modbus implementation guide page 87.

encode()
Return a summary of the modbus plus statistics.
Returns: 54 16-bit words representing the status

reset()
Clear all of the modbus plus statistics.

summary()
Return a summary of the modbus plus statistics.
Returns: 54 16-bit words representing the status
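A server typically wires ModbusDeviceIdentification in at startup so that readDeviceIdentification has something to report. A minimal sketch, assuming pymodbus 3.x; all identity values are placeholders:

    from pymodbus.device import ModbusDeviceIdentification

    # info_name takes the same keys the properties above expose.
    identity = ModbusDeviceIdentification(
        info_name={
            "VendorName": "Example Corp",
            "ProductCode": "EX",
            "VendorUrl": "https://example.com",
            "ProductName": "Example Device",
            "ModelName": "Example Model",
            "MajorMinorRevision": "1.0.0",
        }
    )
    print(identity.summary())
    # Servers take it as identity=, e.g.
    # StartAsyncTcpServer(context=context, identity=identity, address=(host, port))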
Diagnostic Record Read/Write.
These need to be tied into the current server context or linked to the appropriate data.

class pymodbus.diag_message.ChangeAsciiInputDelimiterRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Change ascii input delimiter. The character "CHAR" passed in the request data field becomes the end of message delimiter for future messages (replacing the default LF character). This function is useful in cases where a Line Feed is not required at the end of ASCII messages.

execute(*args)
Execute the diagnostic request on the given device.
Returns: The initialized response message

sub_function_code = 3

class pymodbus.diag_message.ChangeAsciiInputDelimiterResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Change ascii input delimiter. The character "CHAR" passed in the request data field becomes the end of message delimiter for future messages (replacing the default LF character). This function is useful in cases where a Line Feed is not required at the end of ASCII messages.

sub_function_code = 3

class pymodbus.diag_message.ClearCountersRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Clear all counters and the diagnostic register. Also, counters are cleared upon power-up.

execute(*args)
Execute the diagnostic request on the given device.
Returns: The initialized response message

sub_function_code = 10

class pymodbus.diag_message.ClearCountersResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Clear all counters and the diagnostic register. Also, counters are cleared upon power-up.

sub_function_code = 10

class pymodbus.diag_message.ClearOverrunCountRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Clear the overrun error counter and reset the error flag. An error flag should be cleared, but nothing else in the specification mentions it, so it is ignored.

execute(*args)
Execute the diagnostic request on the given device.
Returns: The initialized response message

sub_function_code = 20

class pymodbus.diag_message.ClearOverrunCountResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Clear the overrun error counter and reset the error flag.

sub_function_code = 20

class pymodbus.diag_message.DiagnosticStatusRequest(**kwargs)
Bases: ModbusRequest
This is a base class for all of the diagnostic request functions.

decode(data)
Decode a diagnostic request.
Parameters: data – The data to decode into the function code

encode()
Encode a diagnostic request; we encode the data set in self.message.
Returns: The encoded packet

function_code = 8
function_code_name = 'diagnostic_status'

get_response_pdu_size()
Get response pdu size: Func_code (1 byte) + Sub function code (2 bytes) + Data (2 * N bytes).

class pymodbus.diag_message.DiagnosticStatusResponse(**kwargs)
Bases: ModbusResponse
Diagnostic status. This is a base class for all of the diagnostic response functions. It works by performing all of the encoding and decoding of variable data and lets the higher classes define what extra data to append and how to execute a request.

decode(data)
Decode diagnostic response.
Parameters: data – The data to decode into the function code

encode()
Encode diagnostic response; we encode the data set in self.message.
Returns: The encoded packet

function_code = 8

class pymodbus.diag_message.ForceListenOnlyModeRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Forces the addressed remote device to its Listen Only Mode for MODBUS communications. This isolates it from the other devices on the network, allowing them to continue communicating without interruption from the addressed remote device. No response is returned.

execute(*args)
Execute the diagnostic request on the given device.
Returns: The initialized response message

sub_function_code = 4

class pymodbus.diag_message.ForceListenOnlyModeResponse(**kwargs)
Bases: DiagnosticStatusResponse
Forces the addressed remote device to its Listen Only Mode for MODBUS communications. This isolates it from the other devices on the network, allowing them to continue communicating without interruption from the addressed remote device. No response is returned. This does not send a response.

should_respond = False
sub_function_code = 4
class pymodbus.diag_message.GetClearModbusPlusRequest(slave=None, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Get/Clear modbus plus request. In addition to the Function code (08) and Subfunction code (00 15 hex) in the query, a two-byte Operation field is used to specify either a "Get Statistics" or a "Clear Statistics" operation. The two operations are exclusive - the "Get" operation cannot clear the statistics, and the "Clear" operation does not return statistics prior to clearing them. Statistics are also cleared on power-up of the slave device.

encode()
Encode a diagnostic response; we encode the data set in self.message.
Returns: The encoded packet

execute(*args)
Execute the diagnostic request on the given device.
Returns: The initialized response message

get_response_pdu_size()
Return a series of 54 16-bit words (108 bytes) in the data field of the response. This function differs from the usual two-byte length of the data field. The data contains the statistics for the Modbus Plus peer processor in the slave device. Func_code (1 byte) + Sub function code (2 bytes) + Operation (2 bytes) + Data (108 bytes).

sub_function_code = 21

class pymodbus.diag_message.GetClearModbusPlusResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Return a series of 54 16-bit words (108 bytes) in the data field of the response. This function differs from the usual two-byte length of the data field. The data contains the statistics for the Modbus Plus peer processor in the slave device.

sub_function_code = 21

class pymodbus.diag_message.RestartCommunicationsOptionRequest(toggle=False, slave=None, **kwargs)
Bases: DiagnosticStatusRequest
Restart communication. The remote device serial line port must be initialized and restarted, and all of its communications event counters are cleared. If the port is currently in Listen Only Mode, no response is returned. This function is the only one that brings the port out of Listen Only Mode. If the port is not currently in Listen Only Mode, a normal response is returned. This occurs before the restart is executed.

execute(*_args)
Clear event log and restart.
Returns: The initialized response message

sub_function_code = 1

class pymodbus.diag_message.RestartCommunicationsOptionResponse(toggle=False, **kwargs)
Bases: DiagnosticStatusResponse
Restart communication. The remote device serial line port must be initialized and restarted, and all of its communications event counters are cleared. If the port is currently in Listen Only Mode, no response is returned. This function is the only one that brings the port out of Listen Only Mode. If the port is not currently in Listen Only Mode, a normal response is returned. This occurs before the restart is executed.

sub_function_code = 1
class pymodbus.diag_message.ReturnBusCommunicationErrorCountRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Return bus communication error count. The response data field returns the quantity of CRC errors encountered by the remote device since its last restart, clear counter operation, or power-up.

execute(*args)
Execute the diagnostic request on the given device.
Returns: The initialized response message

sub_function_code = 12

class pymodbus.diag_message.ReturnBusCommunicationErrorCountResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Return bus communication error count. The response data field returns the quantity of CRC errors encountered by the remote device since its last restart, clear counter operation, or power-up.

sub_function_code = 12

class pymodbus.diag_message.ReturnBusExceptionErrorCountRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Return bus exception error count. The response data field returns the quantity of modbus exception responses returned by the remote device since its last restart, clear counters operation, or power-up.

execute(*args)
Execute the diagnostic request on the given device.
Returns: The initialized response message

sub_function_code = 13

class pymodbus.diag_message.ReturnBusExceptionErrorCountResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Return bus exception error count. The response data field returns the quantity of modbus exception responses returned by the remote device since its last restart, clear counters operation, or power-up.

sub_function_code = 13

class pymodbus.diag_message.ReturnBusMessageCountRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Return bus message count. The response data field returns the quantity of messages that the remote device has detected on the communications systems since its last restart, clear counters operation, or power-up.

execute(*args)
Execute the diagnostic request on the given device.
Returns: The initialized response message

sub_function_code = 11

class pymodbus.diag_message.ReturnBusMessageCountResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Return bus message count. The response data field returns the quantity of messages that the remote device has detected on the communications systems since its last restart, clear counters operation, or power-up.

sub_function_code = 11

class pymodbus.diag_message.ReturnDiagnosticRegisterRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Return diagnostic register. The contents of the remote device's 16-bit diagnostic register are returned in the response.

execute(*args)
Execute the diagnostic request on the given device.
Returns: The initialized response message

sub_function_code = 2

class pymodbus.diag_message.ReturnDiagnosticRegisterResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Return diagnostic register. The contents of the remote device's 16-bit diagnostic register are returned in the response.

sub_function_code = 2

class pymodbus.diag_message.ReturnIopOverrunCountRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Return IOP overrun count. An IOP overrun is caused by data characters arriving at the port faster than they can be stored, or by the loss of a character due to a hardware malfunction. This function is specific to the 884.

execute(*args)
Execute the diagnostic request on the given device.
Returns: The initialized response message

sub_function_code = 19

class pymodbus.diag_message.ReturnIopOverrunCountResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Return IOP overrun count. The response data field returns the quantity of messages addressed to the slave that it could not handle due to an 884 IOP overrun condition, since its last restart, clear counters operation, or power-up.

sub_function_code = 19
class pymodbus.diag_message.ReturnQueryDataRequest(message=b'\x00\x00', slave=None, **kwargs)
Bases: DiagnosticStatusRequest
Return query data.
The data passed in the request data field is to be returned (looped back) in the response. The entire response message should be identical to the request.
execute(*_args)
Execute the loopback request (builds the response).
Returns The populated loopback response message
sub_function_code = 0

class pymodbus.diag_message.ReturnQueryDataResponse(message=b'\x00\x00', **kwargs)
Bases: DiagnosticStatusResponse
Return query data.
The data passed in the request data field is to be returned (looped back) in the response. The entire response message should be identical to the request.
sub_function_code = 0

class pymodbus.diag_message.ReturnSlaveBusCharacterOverrunCountRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Return slave character overrun count.
The response data field returns the quantity of messages addressed to the remote device that it could not handle due to a character overrun condition, since its last restart, clear counters operation, or power-up. A character overrun is caused by data characters arriving at the port faster than they can be stored, or by the loss of a character due to a hardware malfunction.
execute(*args)
Execute the diagnostic request on the given device.
Returns The initialized response message
sub_function_code = 18

class pymodbus.diag_message.ReturnSlaveBusCharacterOverrunCountResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Return the quantity of messages addressed to the remote device that went unhandled due to a character overrun since its last restart, clear counters operation, or power-up. A character overrun is caused by data characters arriving at the port faster than they can be stored, or by the loss of a character due to a hardware malfunction.
sub_function_code = 18

class pymodbus.diag_message.ReturnSlaveBusyCountRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Return slave busy count.
The response data field returns the quantity of messages addressed to the remote device for which it returned a Slave Device Busy exception response, since its last restart, clear counters operation, or power-up.
execute(*args)
Execute the diagnostic request on the given device.
Returns The initialized response message
sub_function_code = 17

class pymodbus.diag_message.ReturnSlaveBusyCountResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Return slave busy count.
The response data field returns the quantity of messages addressed to the remote device for which it returned a Slave Device Busy exception response, since its last restart, clear counters operation, or power-up.
sub_function_code = 17
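Because sub-function 0x00 echoes the request data back unchanged, it makes a convenient link test. A sketch, assuming a connected client and using the message attribute that mirrors the constructor argument documented above:

    from pymodbus.diag_message import ReturnQueryDataRequest

    # Assumes `client` is a connected pymodbus client.
    probe = b"\x12\x34"
    response = client.execute(ReturnQueryDataRequest(message=probe, slave=1))
    assert response.message == probe, "loopback data did not match"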
class pymodbus.diag_message.ReturnSlaveMessageCountRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Return slave message count.
The response data field returns the quantity of messages addressed to the remote device, or broadcast, that the remote device has processed since its last restart, clear counters operation, or power-up.
execute(*args)
Execute the diagnostic request on the given device.
Returns The initialized response message
sub_function_code = 14

class pymodbus.diag_message.ReturnSlaveMessageCountResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Return slave message count.
The response data field returns the quantity of messages addressed to the remote device, or broadcast, that the remote device has processed since its last restart, clear counters operation, or power-up.
sub_function_code = 14

class pymodbus.diag_message.ReturnSlaveNAKCountRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Return slave NAK count.
The response data field returns the quantity of messages addressed to the remote device for which it returned a Negative Acknowledge (NAK) exception response, since its last restart, clear counters operation, or power-up. Exception responses are described and listed in section 7.
execute(*args)
Execute the diagnostic request on the given device.
Returns The initialized response message
sub_function_code = 16

class pymodbus.diag_message.ReturnSlaveNAKCountResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Return slave NAK count.
The response data field returns the quantity of messages addressed to the remote device for which it returned a Negative Acknowledge (NAK) exception response, since its last restart, clear counters operation, or power-up. Exception responses are described and listed in section 7.
sub_function_code = 16

class pymodbus.diag_message.ReturnSlaveNoResponseCountRequest(data=0, **kwargs)
Bases: DiagnosticStatusSimpleRequest
Return slave no response count.
The response data field returns the quantity of messages addressed to the remote device, or broadcast, for which the remote device returned no response (neither a normal response nor an exception response), since its last restart, clear counters operation, or power-up.
execute(*args)
Execute the diagnostic request on the given device.
Returns The initialized response message
sub_function_code = 15

class pymodbus.diag_message.ReturnSlaveNoResponseCountResponse(data=0, **kwargs)
Bases: DiagnosticStatusSimpleResponse
Return slave no response count.
The response data field returns the quantity of messages addressed to the remote device, or broadcast, for which the remote device returned no response (neither a normal response nor an exception response), since its last restart, clear counters operation, or power-up.
sub_function_code = 15

Modbus Remote Events.
An event byte returned by the Get Communications Event Log function can be any one of four types. The type is defined by bit 7 (the high-order bit) in each byte. It may be further defined by bit 6.

class pymodbus.events.CommunicationRestartEvent
Bases: ModbusEvent
Restart remote device initiated communication.
The remote device stores this type of event byte when its communications port is restarted. The remote device can be restarted by the Diagnostics function (code 08), with sub-function Restart Communications Option (code 00 01). That function also places the remote device into a "Continue on Error" or "Stop on Error" mode. If the remote device is placed into "Continue on Error" mode, the event byte is added to the existing event log. If the remote device is placed into "Stop on Error" mode, the byte is added to the log and the rest of the log is cleared to zeros.
The event is defined by a content of zero.
decode(event)
Decode the event message to its status bits.
Parameters event – The event to decode
Raises ParameterException
encode()
Encode the status bits to an event message.
Returns The encoded event message
value = 0
class pymodbus.events.EnteredListenModeEvent
Bases: ModbusEvent
Enter remote device Listen Only Mode.
The remote device stores this type of event byte when it enters the Listen Only Mode. The event is defined by a content of 04 hex.
decode(event)
Decode the event message to its status bits.
Parameters event – The event to decode
Raises ParameterException
encode()
Encode the status bits to an event message.
Returns The encoded event message
value = 4

class pymodbus.events.ModbusEvent
Bases: object
Define modbus events.
decode(event)
Decode the event message to its status bits.
Parameters event – The event to decode
Raises NotImplementedException
encode()
Encode the status bits to an event message.
Raises NotImplementedException

class pymodbus.events.RemoteReceiveEvent(**kwargs)
Bases: ModbusEvent
Remote device MODBUS receive event.
The remote device stores this type of event byte when a query message is received. It is stored before the remote device processes the message. This event is defined by bit 7 set to logic "1". The other bits will be set to a logic "1" if the corresponding condition is TRUE. The bit layout is:

Bit  Contents
----------------------------------
0    Not Used
1    Communication Error
2    Not Used
3    Not Used
4    Character Overrun
5    Currently in Listen Only Mode
6    Broadcast Received
7    1

decode(event: bytes) → None
Decode the event message to its status bits.
Parameters event – The event to decode
encode() → bytes
Encode the status bits to an event message.
Returns The encoded event message

class pymodbus.events.RemoteSendEvent(**kwargs)
Bases: ModbusEvent
Remote device MODBUS send event.
The remote device stores this type of event byte when it finishes processing a request message. It is stored if the remote device returned a normal or exception response, or no response. This event is defined by bit 7 set to a logic "0", with bit 6 set to a "1". The other bits will be set to a logic "1" if the corresponding condition is TRUE. The bit layout is:

Bit  Contents
-----------------------------------------------------------
0    Read Exception Sent (Exception Codes 1-3)
1    Slave Abort Exception Sent (Exception Code 4)
2    Slave Busy Exception Sent (Exception Codes 5-6)
3    Slave Program NAK Exception Sent (Exception Code 7)
4    Write Timeout Error Occurred
5    Currently in Listen Only Mode
6    1
7    0

decode(event)
Decode the event message to its status bits.
Parameters event – The event to decode
encode()
Encode the status bits to an event message.
Returns The encoded event message
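To make the bit layouts concrete, here is a minimal, library-independent sketch that classifies a raw event-log byte by the rules above (fixed values first, then bit 7, then bit 6) and extracts the receive-event flags:

    def describe_event_byte(event: int) -> str:
        """Classify one event-log byte per the layouts documented above."""
        if event == 0x00:
            return "communication restart"
        if event == 0x04:
            return "entered listen-only mode"
        if event & 0x80:  # bit 7 set: receive event
            flags = []
            if event & 0x10:
                flags.append("character overrun")
            if event & 0x20:
                flags.append("listen-only mode")
            if event & 0x40:
                flags.append("broadcast received")
            return "receive event: " + (", ".join(flags) or "no flags")
        if event & 0x40:  # bit 7 clear, bit 6 set: send event
            return "send event"
        return "unknown event"

    print(describe_event_byte(0xC0))  # receive event: broadcast received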
Pymodbus Exceptions.
Custom exceptions to be used in the Modbus code.

exception pymodbus.exceptions.ConnectionException(string='')
Bases: ModbusException
Error resulting from a bad connection.

exception pymodbus.exceptions.InvalidMessageReceivedException(string='')
Bases: ModbusException
Error resulting from an invalid response being received or decoded.

exception pymodbus.exceptions.MessageRegisterException(string='')
Bases: ModbusException
Error resulting from failing to register a custom message request/response.

exception pymodbus.exceptions.ModbusException(string)
Bases: Exception
Base modbus exception.
isError()
Return True; a ModbusException always represents an error.

exception pymodbus.exceptions.ModbusIOException(string='', function_code=None)
Bases: ModbusException
Error resulting from data i/o.

exception pymodbus.exceptions.NoSuchSlaveException(string='')
Bases: ModbusException
Error resulting from making a request to a slave that does not exist.

exception pymodbus.exceptions.NotImplementedException(string='')
Bases: ModbusException
Error resulting from a function not being implemented.

exception pymodbus.exceptions.ParameterException(string='')
Bases: ModbusException
Error resulting from an invalid parameter.
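In practice, two failure paths need handling: pymodbus exceptions raised locally (connection or i/o problems) and Modbus exception responses returned by the device itself. A sketch, with the endpoint and slave id as placeholders:

    from pymodbus.client import ModbusTcpClient
    from pymodbus.exceptions import ConnectionException, ModbusException

    client = ModbusTcpClient("127.0.0.1", port=502)  # assumed endpoint
    try:
        client.connect()
        result = client.read_holding_registers(0, 2, slave=1)
        if result.isError():
            # A well-formed Modbus exception response (e.g. Illegal Address)
            print("device returned an exception response:", result)
        else:
            print("registers:", result.registers)
    except ConnectionException as exc:
        print("could not reach the device:", exc)
    except ModbusException as exc:
        # The base class catches the remaining pymodbus error conditions.
        print("modbus error:", exc)
    finally:
        client.close()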
Modbus Request/Response Decoder Factories.
The following factories make it easy to decode request/response messages. To add a new request/response pair to be decodable by the library, simply add them to the respective function lookup table (order doesn't matter, but it does help keep things organized). Regardless of how many functions are added to the lookup, O(1) behavior is kept as a result of a pre-computed lookup dictionary.

class pymodbus.factory.ClientDecoder
Bases: object
Response Message Factory (Client).
To add more implemented functions, simply add them to the list.
decode(message)
Decode a response packet.
Parameters message – The raw packet to decode
Returns The decoded modbus message, or None on error
function_table = [
    pymodbus.register_read_message.ReadHoldingRegistersResponse,
    pymodbus.bit_read_message.ReadDiscreteInputsResponse,
    pymodbus.register_read_message.ReadInputRegistersResponse,
    pymodbus.bit_read_message.ReadCoilsResponse,
    pymodbus.bit_write_message.WriteMultipleCoilsResponse,
    pymodbus.register_write_message.WriteMultipleRegistersResponse,
    pymodbus.register_write_message.WriteSingleRegisterResponse,
    pymodbus.bit_write_message.WriteSingleCoilResponse,
    pymodbus.register_read_message.ReadWriteMultipleRegistersResponse,
    pymodbus.diag_message.DiagnosticStatusResponse,
    pymodbus.other_message.ReadExceptionStatusResponse,
    pymodbus.other_message.GetCommEventCounterResponse,
    pymodbus.other_message.GetCommEventLogResponse,
    pymodbus.other_message.ReportSlaveIdResponse,
    pymodbus.file_message.ReadFileRecordResponse,
    pymodbus.file_message.WriteFileRecordResponse,
    pymodbus.register_write_message.MaskWriteRegisterResponse,
    pymodbus.file_message.ReadFifoQueueResponse,
    pymodbus.mei_message.ReadDeviceInformationResponse,
]
lookupPduClass(function_code)
Use function_code to determine the class of the PDU.
Parameters function_code – The function code specified in a frame.
Returns The class of the PDU that has a matching function_code.
register(function)
Register a function and sub function class with the decoder.

class pymodbus.factory.ServerDecoder
Bases: object
Request Message Factory (Server).
To add more implemented functions, simply add them to the list.
decode(message)
Decode a request packet.
Parameters message – The raw modbus request packet
Returns The decoded modbus message, or None on error
classmethod getFCdict()
Build the function code to class lookup.
lookupPduClass(function_code)
Use function_code to determine the class of the PDU.
Parameters function_code – The function code specified in a frame.
Returns The class of the PDU that has a matching function_code.
register(function=None)
Register a function and sub function class with the decoder.
Parameters function – Custom function class to register
Raises MessageRegisterException

File Record Read/Write Messages.
Currently none of these messages are implemented.

class pymodbus.file_message.FileRecord(**kwargs)
Bases: object
Represents a file record and its relevant data.

class pymodbus.file_message.ReadFifoQueueRequest(address=0, **kwargs)
Bases: ModbusRequest
Read FIFO queue request.
This function code allows reading the contents of a First-In-First-Out (FIFO) queue of registers in a remote device. The function returns a count of the registers in the queue, followed by the queued data. Up to 32 registers can be read: the count, plus up to 31 queued data registers. The queue count register is returned first, followed by the queued data registers. The function reads the queue contents, but does not clear them.
decode(data)
Decode the incoming request.
Parameters data – The data to decode into the address
encode()
Encode the request packet.
Returns The byte encoded packet
execute(_context)
Run the read FIFO queue request against the store.
Returns The populated response
function_code = 24
function_code_name = 'read_fifo_queue'

class pymodbus.file_message.ReadFifoQueueResponse(values=None, **kwargs)
Bases: ModbusResponse
Read FIFO queue response.
In a normal response, the byte count shows the quantity of bytes to follow, including the queue count bytes and value register bytes (but not including the error check field). The queue count is the quantity of data registers in the queue (not including the count register).
If the queue count exceeds 31, an exception response is returned with an error code of 03 (Illegal Data Value).
classmethod calculateRtuFrameSize(buffer)
Calculate the size of the message.
Parameters buffer – A buffer containing the data that has been received.
Returns The number of bytes in the response.
decode(data)
Decode the response.
Parameters data – The packet data to decode
encode()
Encode the response.
Returns The byte encoded message
function_code = 24
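A sketch of issuing the FIFO read directly. The queue address and slave id are placeholders, and the sketch assumes the queued registers are exposed on the response's values attribute, mirroring the constructor argument above:

    from pymodbus.file_message import ReadFifoQueueRequest

    # Assumes `client` is a connected pymodbus client.
    response = client.execute(ReadFifoQueueRequest(address=0x04DE, slave=1))
    if not response.isError():
        print("queued registers:", response.values)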
class pymodbus.file_message.ReadFileRecordRequest(records=None, **kwargs)
Bases: ModbusRequest
Read file record request.
This function code is used to perform a file record read. All request data lengths are provided in terms of number of bytes and all record lengths are provided in terms of registers.
A file is an organization of records. Each file contains 10000 records, addressed 0000 to 9999 decimal or 0x0000 to 0x270f. For example, record 12 is addressed as 12.
The function can read multiple groups of references. The groups can be separate (non-contiguous), but the references within each group must be sequential. Each group is defined in a separate "sub-request" field that contains seven bytes:

• The reference type: 1 byte (must be 0x06)
• The file number: 2 bytes
• The starting record number within the file: 2 bytes
• The length of the record to be read: 2 bytes

The quantity of registers to be read, combined with all other fields in the expected response, must not exceed the allowable length of the MODBUS PDU: 253 bytes.
decode(data)
Decode the incoming request.
Parameters data – The data to decode into the address
encode()
Encode the request packet.
Returns The byte encoded packet
execute(_context)
Run the read file record request against the store.
Returns The populated response
function_code = 20
function_code_name = 'read_file_record'

class pymodbus.file_message.ReadFileRecordResponse(records=None, **kwargs)
Bases: ModbusResponse
Read file record response.
The normal response is a series of "sub-responses," one for each "sub-request." The byte count field is the total combined count of bytes in all "sub-responses." In addition, each "sub-response" contains a field that shows its own byte count.
decode(data)
Decode the response.
Parameters data – The packet data to decode
encode()
Encode the response.
Returns The byte encoded message
function_code = 20

class pymodbus.file_message.WriteFileRecordRequest(records=None, **kwargs)
Bases: ModbusRequest
Write file record request.
This function code is used to perform a file record write. All request data lengths are provided in terms of number of bytes and all record lengths are provided in terms of the number of 16-bit words.
decode(data)
Decode the incoming request.
Parameters data – The data to decode into the address
encode()
Encode the request packet.
Returns The byte encoded packet
execute(_context)
Run the write file record request against the context.
Returns The populated response
function_code = 21
function_code_name = 'write_file_record'

class pymodbus.file_message.WriteFileRecordResponse(records=None, **kwargs)
Bases: ModbusResponse
The normal response is an echo of the request.
decode(data)
Decode the incoming request.
Parameters data – The data to decode into the address
encode()
Encode the response.
Returns The byte encoded message
function_code = 21

Encapsulated Interface (MEI) Transport Messages.

class pymodbus.mei_message.ReadDeviceInformationRequest(read_code=None, object_id=0, **kwargs)
Bases: ModbusRequest
Read device information.
This function code allows reading the identification and additional information relative to the physical and functional description of a remote device only. The Read Device Identification interface is modeled as an address space composed of a set of addressable data elements. The data elements are called objects, and an object id identifies them.
decode(data)
Decode the data part of the message.
Parameters data – The incoming data
encode()
Encode the request packet.
Returns The byte encoded packet
execute(_context)
Run the read device information request against the store.
Returns The populated response
function_code = 43
function_code_name = 'read_device_information'
sub_function_code = 14

class pymodbus.mei_message.ReadDeviceInformationResponse(read_code=None, information=None, **kwargs)
Bases: ModbusResponse
Read device information response.
classmethod calculateRtuFrameSize(buffer)
Calculate the size of the message.
Parameters buffer – A buffer containing the data that has been received.
Returns The number of bytes in the response.
decode(data)
Decode the response.
Parameters data – The packet data to decode
encode()
Encode the response.
Returns The byte encoded message
function_code = 43
sub_function_code = 14
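A sketch of reading the basic device identification objects (vendor name, product code, revision). The slave id is a placeholder, and the sketch assumes the decoded objects are keyed by object id in the response's information attribute, mirroring the constructor argument above:

    from pymodbus.constants import DeviceInformation
    from pymodbus.mei_message import ReadDeviceInformationRequest

    # Assumes `client` is a connected pymodbus client.
    request = ReadDeviceInformationRequest(
        read_code=DeviceInformation.BASIC, slave=1
    )
    response = client.execute(request)
    if not response.isError():
        for object_id, value in response.information.items():
            print(f"object {object_id}: {value}")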
Diagnostic record read/write.
Currently not all implemented.

class pymodbus.other_message.GetCommEventCounterRequest(**kwargs)
Bases: ModbusRequest
This function code is used to get a status word and an event count from the remote device's communication event counter.
By fetching the current count before and after a series of messages, a client can determine whether the messages were handled normally by the remote device.
The device's event counter is incremented once for each successful message completion. It is not incremented for exception responses, poll commands, or fetch event counter commands.
The event counter can be reset by means of the Diagnostics function (code 08), with a sub-function of Restart Communications Option (code 00 01) or Clear Counters and Diagnostic Register (code 00 0A).
decode(data)
Decode the data part of the message.
Parameters data – The incoming data
encode()
Encode the message.
execute(_context=None)
Run the get comm event counter request against the store.
Returns The populated response
function_code = 11
function_code_name = 'get_event_counter'

class pymodbus.other_message.GetCommEventCounterResponse(count=0, **kwargs)
Bases: ModbusResponse
Get comm event counter response.
The normal response contains a two-byte status word and a two-byte event count. The status word will be all ones (FF FF hex) if a previously-issued program command is still being processed by the remote device (a busy condition exists). Otherwise, the status word will be all zeros.
decode(data)
Decode the response.
Parameters data – The packet data to decode
encode()
Encode the response.
Returns The byte encoded message
function_code = 11

class pymodbus.other_message.GetCommEventLogRequest(**kwargs)
Bases: ModbusRequest
This function code is used to get a status word, event count, message count, and a field of event bytes from the remote device.
The status word and event counts are identical to those returned by the Get Communications Event Counter function (11, 0B hex).
The message counter contains the quantity of messages processed by the remote device since its last restart, clear counters operation, or power-up. This count is identical to that returned by the Diagnostic function (code 08), sub-function Return Bus Message Count (code 11, 0B hex).
The event bytes field contains 0-64 bytes, with each byte corresponding to the status of one MODBUS send or receive operation for the remote device. The remote device enters the events into the field in chronological order. Byte 0 is the most recent event. Each new byte flushes the oldest byte from the field.
decode(data)
Decode the data part of the message.
Parameters data – The incoming data
encode()
Encode the message.
execute(_context=None)
Run the get comm event log request against the store.
Returns The populated response
function_code = 12
function_code_name = 'get_event_log'

class pymodbus.other_message.GetCommEventLogResponse(**kwargs)
Bases: ModbusResponse
Get comm event log response.
The normal response contains a two-byte status word field, a two-byte event count field, a two-byte message count field, and a field containing 0-64 bytes of events. A byte count field defines the total length of the data in these four fields.
decode(data)
Decode the response.
Parameters data – The packet data to decode
encode()
Encode the response.
Returns The byte encoded message
function_code = 12
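The before/after technique described above can be sketched as follows, assuming a connected client and assuming the two-byte event count is exposed as the response's count attribute, mirroring the constructor argument:

    from pymodbus.other_message import GetCommEventCounterRequest

    # Compare the event count before and after a batch of requests to
    # verify they were all completed successfully by the device.
    before = client.execute(GetCommEventCounterRequest(slave=1))
    client.read_holding_registers(0, 4, slave=1)
    after = client.execute(GetCommEventCounterRequest(slave=1))
    if not (before.isError() or after.isError()):
        print("events completed:", after.count - before.count)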
class pymodbus.other_message.ReadExceptionStatusRequest(slave=None, **kwargs)
Bases: ModbusRequest
This function code is used to read the contents of eight Exception Status outputs in a remote device.
The function provides a simple method for accessing this information, because the Exception Output references are known (no output reference is needed in the function).
decode(data)
Decode the data part of the message.
Parameters data – The incoming data
encode()
Encode the message.
execute(_context=None)
Run the read exception status request against the store.
Returns The populated response
function_code = 7
function_code_name = 'read_exception_status'

class pymodbus.other_message.ReadExceptionStatusResponse(status=0, **kwargs)
Bases: ModbusResponse
The normal response contains the status of the eight Exception Status outputs. The outputs are packed into one data byte, with one bit per output. The status of the lowest output reference is contained in the least significant bit of the byte.
The contents of the eight Exception Status outputs are device specific.
decode(data)
Decode the response.
Parameters data – The packet data to decode
encode()
Encode the response.
Returns The byte encoded message
function_code = 7

class pymodbus.other_message.ReportSlaveIdRequest(slave=0, **kwargs)
Bases: ModbusRequest
This function code is used to read the description of the type, the current status, and other information specific to a remote device.
decode(data)
Decode the data part of the message.
Parameters data – The incoming data
encode()
Encode the message.
execute(context=None)
Run a report slave id request against the store.
Returns The populated response
function_code = 17
function_code_name = 'report_slave_id'

class pymodbus.other_message.ReportSlaveIdResponse(identifier=b'\x00', status=True, **kwargs)
Bases: ModbusResponse
Show response.
The data contents are specific to each type of device.
decode(data)
Decode the response. Since the identifier is device dependent, we just return the raw value that a user can decode to whatever it should be.
Parameters data – The packet data to decode
encode()
Encode the response.
Returns The byte encoded message
function_code = 17
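A sketch of fetching the device-specific identifier; the slave id is a placeholder, and the identifier attribute mirrors the constructor argument above:

    from pymodbus.other_message import ReportSlaveIdRequest

    # Assumes `client` is a connected pymodbus client.
    response = client.execute(ReportSlaveIdRequest(slave=1))
    if not response.isError():
        # The identifier is device specific and returned raw; decode it
        # according to your device's documentation.
        print("slave id bytes:", response.identifier)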
Modbus Payload Builders.
A collection of utilities for building and decoding modbus message payloads.

class pymodbus.payload.BinaryPayloadBuilder(payload=None, byteorder=Endian.LITTLE, wordorder=Endian.BIG, repack=False)
Bases: object
A utility that helps build payload messages to be written with the various modbus messages.
It really is just a simple wrapper around the struct module, however it saves time looking up the format strings. What follows is a simple example:

    builder = BinaryPayloadBuilder(byteorder=Endian.LITTLE)
    builder.add_8bit_uint(1)
    builder.add_16bit_uint(2)
    payload = builder.build()

add_16bit_float(value: float) → None
Add a 16 bit float to the buffer.
Parameters value – The value to add to the buffer
add_16bit_int(value: int) → None
Add a 16 bit signed int to the buffer.
Parameters value – The value to add to the buffer
add_16bit_uint(value: int) → None
Add a 16 bit unsigned int to the buffer.
Parameters value – The value to add to the buffer
add_32bit_float(value: float) → None
Add a 32 bit float to the buffer.
Parameters value – The value to add to the buffer
add_32bit_int(value: int) → None
Add a 32 bit signed int to the buffer.
Parameters value – The value to add to the buffer
add_32bit_uint(value: int) → None
Add a 32 bit unsigned int to the buffer.
Parameters value – The value to add to the buffer
add_64bit_float(value: float) → None
Add a 64 bit float (double) to the buffer.
Parameters value – The value to add to the buffer
add_64bit_int(value: int) → None
Add a 64 bit signed int to the buffer.
Parameters value – The value to add to the buffer
add_64bit_uint(value: int) → None
Add a 64 bit unsigned int to the buffer.
Parameters value – The value to add to the buffer
add_8bit_int(value: int) → None
Add an 8 bit signed int to the buffer.
Parameters value – The value to add to the buffer
add_8bit_uint(value: int) → None
Add an 8 bit unsigned int to the buffer.
Parameters value – The value to add to the buffer
add_bits(values: List[bool]) → None
Add a collection of bits to be encoded. If these are less than a multiple of eight, they will be left padded with 0 bits to make it so.
Parameters values – The value to add to the buffer
add_string(value: str) → None
Add a string to the buffer.
Parameters value – The value to add to the buffer
build() → List[bytes]
Return the payload buffer as a list. This list is two bytes per element and can thus be treated as a list of registers.
Returns The payload buffer as a list
encode() → bytes
Get the payload buffer encoded in bytes.
reset() → None
Reset the payload buffer.
to_coils() → List[bool]
Convert the payload buffer into a coil layout that can be used as a context block.
Returns The coil layout to use as a block
to_registers()
Convert the payload buffer to a register layout that can be used as a context block.
Returns The register layout to use as a block

class pymodbus.payload.BinaryPayloadDecoder(payload, byteorder=Endian.LITTLE, wordorder=Endian.BIG)
Bases: object
A utility that helps decode payload messages from a modbus response message.
It really is just a simple wrapper around the struct module, however it saves time looking up the format strings. What follows is a simple example:

    decoder = BinaryPayloadDecoder(payload)
    first = decoder.decode_8bit_uint()
    second = decoder.decode_16bit_uint()

classmethod bit_chunks(coils, size=8)
Return bit chunks.
decode_16bit_float()
Decode a 16 bit float from the buffer.
decode_16bit_int()
Decode a 16 bit signed int from the buffer.
decode_16bit_uint()
Decode a 16 bit unsigned int from the buffer.
decode_32bit_float()
Decode a 32 bit float from the buffer.
decode_32bit_int()
Decode a 32 bit signed int from the buffer.
decode_32bit_uint()
Decode a 32 bit unsigned int from the buffer.
decode_64bit_float()
Decode a 64 bit float (double) from the buffer.
decode_64bit_int()
Decode a 64 bit signed int from the buffer.
decode_64bit_uint()
Decode a 64 bit unsigned int from the buffer.
decode_8bit_int()
Decode an 8 bit signed int from the buffer.
decode_8bit_uint()
Decode an 8 bit unsigned int from the buffer.
decode_bits(package_len=1)
Decode a byte worth of bits from the buffer.
decode_string(size=1)
Decode a string from the buffer.
Parameters size – The size of the string to decode
classmethod fromCoils(coils, byteorder=Endian.LITTLE, _wordorder=Endian.BIG)
Initialize a payload decoder with the result of reading coils.
classmethod fromRegisters(registers, byteorder=Endian.LITTLE, wordorder=Endian.BIG)
Initialize a payload decoder with the result of reading a collection of registers from a modbus device.
The registers are treated as a list of 2 byte values. We have to do this because of how the data has already been decoded by the rest of the library.
Parameters
• registers – The register results to initialize with
• byteorder – The byte order of each word
• wordorder – The endianness of the word (when wordcount is >= 2)
Returns An initialized PayloadDecoder
Raises ParameterException
reset()
Reset the decoder pointer back to the start.
skip_bytes(nbytes)
Skip n bytes in the buffer.
Parameters nbytes – The number of bytes to skip
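A self-contained round trip using only the documented calls above: pack a float and a counter into registers, then decode them back with a matching byte and word order. In a real application the registers list would pass through write_registers and read_holding_registers in between:

    from pymodbus.constants import Endian
    from pymodbus.payload import BinaryPayloadBuilder, BinaryPayloadDecoder

    # Pack a 32-bit float and a 16-bit uint into registers.
    builder = BinaryPayloadBuilder(byteorder=Endian.BIG, wordorder=Endian.BIG)
    builder.add_32bit_float(98.6)
    builder.add_16bit_uint(42)
    registers = builder.to_registers()

    # Decode with the same byte/word order the data was packed with.
    decoder = BinaryPayloadDecoder.fromRegisters(
        registers, byteorder=Endian.BIG, wordorder=Endian.BIG
    )
    temperature = decoder.decode_32bit_float()
    counter = decoder.decode_16bit_uint()
    print(temperature, counter)  # approximately 98.6, then 42

Mismatched byteorder/wordorder between the builder and the decoder is the most common source of garbled values.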
Contains base classes for modbus request/response/error packets.

class pymodbus.pdu.ExceptionResponse(function_code, exception_code=None, **kwargs)
Bases: ModbusResponse
Base class for a modbus exception PDU.
ExceptionOffset = 128
decode(data)
Decode a modbus exception response.
Parameters data – The packet data to decode
encode()
Encode a modbus exception response.
Returns The encoded exception packet

class pymodbus.pdu.IllegalFunctionRequest(function_code, **kwargs)
Bases: ModbusRequest
Define the Modbus slave exception type "Illegal Function".
This exception code is returned if the slave:
• does not implement the function code, or
• is not in a state that allows it to process the function
ErrorCode = 1
decode(_data)
Decode so this failure will run correctly.
execute(_context)
Build an illegal function request error response.
Returns The error response packet

class pymodbus.pdu.ModbusExceptions
Bases: object
An enumeration of the valid modbus exceptions.
Acknowledge = 5
GatewayNoResponse = 11
GatewayPathUnavailable = 10
IllegalAddress = 2
IllegalFunction = 1
IllegalValue = 3
MemoryParityError = 8
SlaveBusy = 6
SlaveFailure = 4
classmethod decode(code)
Given an error code, translate it to a string error name.
Parameters code – The code number to translate

class pymodbus.pdu.ModbusRequest(slave=0, **kwargs)
Bases: ModbusPDU
Base class for a modbus request PDU.
doException(exception)
Build an error response based on the function.
Parameters exception – The exception to return
Raises An exception response
function_code = -1

class pymodbus.pdu.ModbusResponse(slave=0, **kwargs)
Bases: ModbusPDU
Base class for a modbus response PDU.
should_respond
A flag that indicates if this response returns a result back to the client issuing the request
_rtu_frame_size
Indicates the size of the modbus rtu response used for calculating how much to read.
function_code = 0
isError() → bool
Check if the error is a success or failure.
should_respond = True
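Two small illustrations of the PDU helpers above: translating a numeric exception code to its name, and the ExceptionOffset relationship between a function code and its error reply:

    from pymodbus.pdu import ExceptionResponse, ModbusExceptions

    # Translate a numeric Modbus exception code into its name.
    print(ModbusExceptions.decode(2))  # "IllegalAddress"

    # Exception responses report function_code + 128 (ExceptionOffset),
    # e.g. an error reply to function 3 carries function code 0x83.
    print(ExceptionResponse.ExceptionOffset + 3)  # 131 == 0x83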
Register Reading Request/Response.

class pymodbus.register_read_message.ReadHoldingRegistersRequest(address=None, count=None, slave=0, **kwargs)
Bases: ReadRegistersRequestBase
Read holding registers.
This function code is used to read the contents of a contiguous block of holding registers in a remote device. The Request PDU specifies the starting register address and the number of registers. In the PDU, registers are addressed starting at zero. Therefore registers numbered 1-16 are addressed as 0-15.
execute(context)
Run a read holding request against a datastore.
Parameters context – The datastore to request from
Returns An initialized ReadHoldingRegistersResponse, or an ExceptionResponse if an error occurred
function_code = 3
function_code_name = 'read_holding_registers'

class pymodbus.register_read_message.ReadHoldingRegistersResponse(values=None, **kwargs)
Bases: ReadRegistersResponseBase
Read holding registers.
This function code is used to read the contents of a contiguous block of holding registers in a remote device. The Request PDU specifies the starting register address and the number of registers. In the PDU, registers are addressed starting at zero. Therefore registers numbered 1-16 are addressed as 0-15.
The requested registers can be found in the .registers list.
function_code = 3

class pymodbus.register_read_message.ReadInputRegistersRequest(address=None, count=None, slave=0, **kwargs)
Bases: ReadRegistersRequestBase
Read input registers.
This function code is used to read from 1 to approximately 125 contiguous input registers in a remote device. The Request PDU specifies the starting register address and the number of registers. In the PDU, registers are addressed starting at zero. Therefore input registers numbered 1-16 are addressed as 0-15.
execute(context)
Run a read input request against a datastore.
Parameters context – The datastore to request from
Returns An initialized ReadInputRegistersResponse, or an ExceptionResponse if an error occurred
function_code = 4
function_code_name = 'read_input_registers'

class pymodbus.register_read_message.ReadInputRegistersResponse(values=None, **kwargs)
Bases: ReadRegistersResponseBase
Read input registers.
This function code is used to read from 1 to approximately 125 contiguous input registers in a remote device. The Request PDU specifies the starting register address and the number of registers. In the PDU, registers are addressed starting at zero. Therefore input registers numbered 1-16 are addressed as 0-15.
The requested registers can be found in the .registers list.
function_code = 4

class pymodbus.register_read_message.ReadRegistersResponseBase(values, slave=0, **kwargs)
Bases: ModbusResponse
Base class for responding to a modbus register read.
The requested registers can be found in the .registers list.
decode(data)
Decode a register response packet.
Parameters data – The request to decode
encode()
Encode the response packet.
Returns The encoded packet
getRegister(index)
Get the requested register.
Parameters index – The indexed register to retrieve
Returns The requested register
registers
A list of register values

class pymodbus.register_read_message.ReadWriteMultipleRegistersRequest(**kwargs)
Bases: ModbusRequest
Read/write multiple registers.
This function code performs a combination of one read operation and one write operation in a single MODBUS transaction. The write operation is performed before the read.
Holding registers are addressed starting at zero. Therefore holding registers 1-16 are addressed in the PDU as 0-15.
The request specifies the starting address and number of holding registers to be read, as well as the starting address, number of holding registers, and the data to be written. The byte count specifies the number of bytes to follow in the write data field.
decode(data)
Decode the register request packet.
Parameters data – The request to decode
encode()
Encode the request packet.
Returns The encoded packet
execute(context)
Run a read/write multiple registers request against a datastore.
Parameters context – The datastore to request from
Returns An initialized ReadWriteMultipleRegistersResponse, or an ExceptionResponse if an error occurred
function_code = 23
function_code_name = 'read_write_multiple_registers'
get_response_pdu_size()
Get response pdu size.
Returns Func_code (1 byte) + Byte Count (1 byte) + 2 × Quantity of Registers (N bytes)

class pymodbus.register_read_message.ReadWriteMultipleRegistersResponse(values=None, **kwargs)
Bases: ModbusResponse
Read/write multiple registers.
The normal response contains the data from the group of registers that were read. The byte count field specifies the quantity of bytes to follow in the read data field.
The requested registers can be found in the .registers list.
decode(data)
Decode the register response packet.
Parameters data – The response to decode
encode()
Encode the response packet.
Returns The encoded packet
function_code = 23
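A sketch of the usual client-side path; the mixin helper builds a ReadHoldingRegistersRequest (function code 3) under the hood, and the address, count, and slave values are placeholders:

    # Assumes `client` is a connected pymodbus client.
    result = client.read_holding_registers(address=0, count=4, slave=1)
    if not result.isError():
        print("registers:", result.registers)  # e.g. [0, 0, 0, 0]
        print("first:", result.getRegister(0))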
Register Writing Request/Response Messages.

class pymodbus.register_write_message.MaskWriteRegisterRequest(address=0, and_mask=65535, or_mask=0, **kwargs)
Bases: ModbusRequest
This function code is used to modify the contents of a specified holding register using a combination of an AND mask, an OR mask, and the register's current contents. The function can be used to set or clear individual bits in the register.
decode(data)
Decode the incoming request.
Parameters data – The data to decode into the address
encode()
Encode the request packet.
Returns The byte encoded packet
execute(context)
Run a mask write register request against the store.
Parameters context – The datastore to request from
Returns The populated response
function_code = 22
function_code_name = 'mask_write_register'

class pymodbus.register_write_message.MaskWriteRegisterResponse(address=0, and_mask=65535, or_mask=0, **kwargs)
Bases: ModbusResponse
The normal response is an echo of the request. The response is returned after the register has been written.
decode(data)
Decode the response.
Parameters data – The packet data to decode
encode()
Encode the response.
Returns The byte encoded message
function_code = 22

class pymodbus.register_write_message.WriteMultipleRegistersRequest(address=None, values=None, slave=None, **kwargs)
Bases: ModbusRequest
This function code is used to write a block of contiguous registers (1 to approximately 120 registers) in a remote device. The requested written values are specified in the request data field. Data is packed as two bytes per register.
decode(data)
Decode a write multiple registers packet request.
Parameters data – The request to decode
encode()
Encode a write multiple registers packet request.
Returns The encoded packet
execute(context)
Run a write multiple registers request against a datastore.
Parameters context – The datastore to request from
Returns An initialized response, exception message otherwise
function_code = 16
function_code_name = 'write_registers'
get_response_pdu_size()
Get response pdu size.
Returns Func_code (1 byte) + Starting Address (2 bytes) + Quantity of Registers (2 bytes)

class pymodbus.register_write_message.WriteMultipleRegistersResponse(address=None, count=None, **kwargs)
Bases: ModbusResponse
The normal response returns the function code, starting address, and quantity of registers written.
decode(data)
Decode a write multiple registers packet request.
Parameters data – The request to decode
encode()
Encode a write multiple registers packet request.
Returns The encoded packet
function_code = 16
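A sketch of the three write flavors through the standard client helpers; addresses, values, and slave id are placeholders:

    # Assumes `client` is a connected pymodbus client.
    # Write one register (function 6), then a block (function 16).
    client.write_register(address=10, value=1234, slave=1)
    client.write_registers(address=20, values=[1, 2, 3], slave=1)

    # Mask-write (function 22): result = (current AND and_mask)
    # OR (or_mask AND NOT and_mask). Here: clear bit 0, set bit 4.
    client.mask_write_register(
        address=10, and_mask=0xFFEE, or_mask=0x0010, slave=1
    )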
class pymodbus.register_write_message.WriteSingleRegisterRequest(address=None, value=None, slave=None, **kwargs)
Bases: ModbusRequest
This function code is used to write a single holding register in a remote device.
The Request PDU specifies the address of the register to be written. Registers are addressed starting at zero. Therefore the register numbered 1 is addressed as 0.
decode(data)
Decode a write single register packet request.
Parameters data – The request to decode
encode()
Encode a write single register packet request.
Returns The encoded packet
execute(context)
Run a write single register request against a datastore.
Parameters context – The datastore to request from
Returns An initialized response, exception message otherwise
function_code = 6
function_code_name = 'write_register'
get_response_pdu_size()
Get response pdu size.
Returns Func_code (1 byte) + Register Address (2 bytes) + Register Value (2 bytes)

class pymodbus.register_write_message.WriteSingleRegisterResponse(address=None, value=None, **kwargs)
Bases: ModbusResponse
The normal response is an echo of the request, returned after the register contents have been written.
decode(data)
Decode a write single register packet request.
Parameters data – The request to decode
encode()
Encode a write single register packet request.
Returns The encoded packet
function_code = 6
get_response_pdu_size()
Get response pdu size.
Returns Func_code (1 byte) + Register Address (2 bytes) + Register Value (2 bytes)

Collection of transaction based abstractions.

class pymodbus.transaction.DictTransactionManager(client, **kwargs)
Bases: ModbusTransactionManager
Implements a transaction manager where the results are keyed based on the supplied transaction id.
addTransaction(request, tid=None)
Add a transaction to the handler.
This holds the request in case it needs to be resent. After being sent, the request is removed.
Parameters
• request – The request to hold on to
• tid – The overloaded transaction id to use
delTransaction(tid)
Remove a transaction matching the referenced tid.
Parameters tid – The transaction to remove
getTransaction(tid)
Return a transaction matching the referenced tid.
If the transaction does not exist, None is returned.
Parameters tid – The transaction to retrieve

class pymodbus.transaction.FifoTransactionManager(client, **kwargs)
Bases: ModbusTransactionManager
Implements a transaction manager where the results are returned in a FIFO manner.
addTransaction(request, tid=None)
Add a transaction to the handler.
This holds the request in case it needs to be resent. After being sent, the request is removed.
Parameters
• request – The request to hold on to
• tid – The overloaded transaction id to use
delTransaction(tid)
Remove a transaction matching the referenced tid.
Parameters tid – The transaction to remove
getTransaction(tid)
Return a transaction matching the referenced tid.
If the transaction does not exist, None is returned.
Parameters tid – The transaction to retrieve
class pymodbus.transaction.ModbusAsciiFramer(decoder, client=None)
Bases: ModbusFramer
Modbus ASCII Frame Controller.

[ Start ][ Address ][ Function ][ Data ][ LRC ][ End ]
   1c        2c          2c        Nc      2c     2c

• data can be 0 - 2x252 chars
• end is "\r\n" (carriage return line feed), however the line feed character can be changed via a special command
• start is ":"

This framer is used for serial transmission. Unlike the RTU protocol, the data in this framer is transferred in plain text ASCII.
advanceFrame()
Skip over the current framed message.
This allows us to skip over the current message after we have processed it or determined that it contains an error. It also has to reset the current frame header handle.
buildPacket(message)
Create a ready-to-send modbus packet, built off of a modbus request/response.
Parameters message – The request/response to send
Returns The encoded packet
checkFrame()
Check and decode the next frame.
Returns True if successful, False otherwise
decode_data(data)
Decode data.
frameProcessIncomingPacket(single, callback, slave, _tid=None, **kwargs)
Process new packet pattern.
getFrame()
Get the next frame from the buffer.
Returns The frame data or ""
isFrameReady()
Check if we should continue decode logic.
This is meant to be used in a while loop in the decoding phase to let the decoder know that there is still data in the buffer.
Returns True if ready, False otherwise
method = 'ascii'

class pymodbus.transaction.ModbusBinaryFramer(decoder, client=None)
Bases: ModbusFramer
Modbus Binary Frame Controller.

[ Start ][ Address ][ Function ][ Data ][ CRC ][ End ]
   1b        1b          1b        Nb      2b     1b

• data can be 0 - 2x252 chars
• end is "}"
• start is "{"

The idea here is that we implement the RTU protocol, however, instead of using timing for message delimiting, we use start and end of message characters (in this case { and }). Basically, this is a binary framer.
The only case we have to watch out for is when a message contains the { or } characters. If we encounter these characters, we simply duplicate them. Hopefully we will not encounter those characters that often, and this will save a little bit of bandwidth without requiring a real-time system.
Protocol defined by jamod.sourceforge.net.
advanceFrame() → None
Skip over the current framed message.
This allows us to skip over the current message after we have processed it or determined that it contains an error. It also has to reset the current frame header handle.
buildPacket(message)
Create a ready-to-send modbus packet.
Parameters message – The request/response to send
Returns The encoded packet
checkFrame() → bool
Check and decode the next frame.
Returns True if successful, False otherwise
decode_data(data)
Decode data.
frameProcessIncomingPacket(single, callback, slave, _tid=None, **kwargs)
Process new packet pattern.
getFrame()
Get the next frame from the buffer.
Returns The frame data or ""
isFrameReady() → bool
Check if we should continue decode logic.
This is meant to be used in a while loop in the decoding phase to let the decoder know that there is still data in the buffer.
Returns True if ready, False otherwise
method = 'binary'

class pymodbus.transaction.ModbusRtuFramer(decoder, client=None)
Bases: ModbusFramer
Modbus RTU Frame controller.

[ Start Wait ][ Address ][ Function Code ][ Data ][ CRC ][ End Wait ]
  3.5 chars      1b            1b            Nb      2b     3.5 chars

Wait refers to the amount of time required to transmit at least x many characters. In this case it is 3.5 characters. Also, if we receive a wait of 1.5 characters at any point, we must trigger an error message. Also, it appears as though this message is little endian. The logic is simplified as the following:

block-on-read:
    read until 3.5 character delay
    check for errors
    decode

The following table is a listing of the baud wait times for the specified baud rates:

------------------------------------------------------------------
Baud     1.5c (18 bits)     3.5c (38 bits)
------------------------------------------------------------------
1200     13333.3 us         31666.7 us
4800     3333.3 us          7916.7 us
9600     1666.7 us          3958.3 us
19200    833.3 us           1979.2 us
38400    416.7 us           989.6 us
------------------------------------------------------------------

1 Byte = start + 8 bits + parity + stop = 11 bits
(1/Baud)(bits) = delay seconds

advanceFrame()
Skip over the current framed message.
This allows us to skip over the current message after we have processed it or determined that it contains an error. It also has to reset the current frame header handle.
buildPacket(message)
Create a ready-to-send modbus packet.
Parameters message – The populated request/response to send
checkFrame()
Check if the next frame is available.
Returns True if successful.
1. Populate header
2. Discard frame if UID does not match
decode_data(data)
Decode data.
frameProcessIncomingPacket(single, callback, slave, _tid=None, **kwargs)
Process new packet pattern.
getFrame()
Get the next frame from the buffer.
Returns The frame data or ""
getFrameStart(slaves, broadcast, skip_cur_frame)
Scan buffer for a relevant frame start.
get_expected_response_length(data)
Get the expected response length.
Parameters data – Message data read so far
Raises IndexError – If not enough data to read the byte count
Returns Total frame size
isFrameReady()
Check if we should continue decode logic.
This is meant to be used in a while loop in the decoding phase to let the decoder know that there is still data in the buffer.
Returns True if ready, False otherwise
method = 'rtu'
populateHeader(data=None)
Try to set the header's uid, len and crc.
This method examines self._buffer and writes meta information into self._header. Beware that this method will raise an IndexError if self._buffer is not yet long enough.
populateResult(result)
Populate the modbus result header.
The serial packets do not have any header information that is copied.
Parameters result – The response packet
recvPacket(size)
Receive a packet from the bus with the specified length.
Parameters size – Number of bytes to read
resetFrame()
Reset the entire message frame.
This allows us to skip over errors that may be in the stream. It is hard to know if we are simply out of sync or if there is an error in the stream, as we have no way to check the start or end of the message (python just doesn't have the resolution to check for millisecond delays).
sendPacket(message)
Send packets on the bus with a 3.5 character delay between frames.
Parameters message – Message to be sent over the bus

class pymodbus.transaction.ModbusSocketFramer(decoder, client=None)
Bases: ModbusFramer
Modbus Socket Frame controller.
Before each modbus TCP message is an MBAP header which is used as a message frame. It allows us to easily separate messages as follows:

[          MBAP Header          ] [ Function Code ] [ Data ]
[ tid ][ pid ][ length ][ uid ]
  2b     2b      2b       1b            1b             Nb

    while len(message) > 0:
        tid, pid, length, uid = struct.unpack(">HHHB", message)
        request = message[0:7 + length - 1]
        message = message[7 + length - 1:]

• length = uid + function code + data
• The -1 is to account for the uid byte

advanceFrame()
Skip over the current framed message.
This allows us to skip over the current message after we have processed it or determined that it contains an error. It also has to reset the current frame header handle.
buildPacket(message)
Create a ready-to-send modbus packet.
Parameters message – The populated request/response to send
checkFrame()
Check and decode the next frame.
Returns True if successful, False otherwise
decode_data(data)
Decode data.
frameProcessIncomingPacket(single, callback, slave, tid=None, **kwargs)
Process new packet pattern.
This takes in a new request packet, adds it to the current packet stream, and performs framing on it. That is, it checks for complete messages, and once found, will process all that exist. This handles the case when we read N + 1 or 1 // N messages at a time instead of 1.
The processed and decoded messages are pushed to the callback function to process and send.
getFrame()
Return the next frame from the buffered data.
Returns The next full frame buffer
isFrameReady()
Check if we should continue decode logic.
This is meant to be used in a while loop in the decoding phase to let the decoder factory know that there is still data in the buffer.
Returns True if ready, False otherwise
method = 'socket'
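A sketch of how the framers wrap the same PDU for different transports; the request parameters are placeholders:

    from pymodbus.factory import ClientDecoder
    from pymodbus.register_read_message import ReadHoldingRegistersRequest
    from pymodbus.transaction import ModbusRtuFramer, ModbusSocketFramer

    # Frame the same request for two different transports.
    request = ReadHoldingRegistersRequest(address=0, count=2, slave=1)

    rtu_packet = ModbusRtuFramer(ClientDecoder()).buildPacket(request)
    tcp_packet = ModbusSocketFramer(ClientDecoder()).buildPacket(request)

    print(rtu_packet.hex())  # address + PDU + CRC
    print(tcp_packet.hex())  # MBAP header + PDU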
class pymodbus.transaction.ModbusTlsFramer(decoder, client=None)
Bases: ModbusFramer
Modbus TLS Frame controller.
No MBAP header is prefixed to the decrypted PDU, which is used as the message frame for the Modbus Security Application Protocol. It allows us to easily separate decrypted messages, which are PDUs, as follows:

[ Function Code ] [ Data ]
       1b            Nb

advanceFrame()
Skip over the current framed message.
This allows us to skip over the current message after we have processed it or determined that it contains an error. It also has to reset the current frame header handle.
buildPacket(message)
Create a ready-to-send modbus packet.
Parameters message – The populated request/response to send
checkFrame()
Check and decode the next frame.
Returns True if successful, False otherwise
decode_data(data)
Decode data.
frameProcessIncomingPacket(single, callback, slave, _tid=None, **kwargs)
Process new packet pattern.
getFrame()
Return the next frame from the buffered data.
Returns The next full frame buffer
isFrameReady()
Check if we should continue decode logic.
This is meant to be used in a while loop in the decoding phase to let the decoder factory know that there is still data in the buffer.
Returns True if ready, False otherwise
method = 'tls'

Modbus Utilities.
A collection of utilities for packing data, unpacking data, computing checksums, and decoding checksums.

pymodbus.utilities.checkCRC(data, check)
Check if the data matches the passed in CRC.
Parameters
• data – The data to create a crc16 of
• check – The CRC to validate
Returns True if matched, False otherwise

pymodbus.utilities.checkLRC(data, check)
Check if the passed in data matches the LRC.
Parameters
• data – The data to calculate
• check – The LRC to validate
Returns True if matched, False otherwise

pymodbus.utilities.computeCRC(data)
Compute a crc16 on the passed in data.
For modbus, this is only used on the binary serial protocols (in this case RTU). The difference between modbus's crc16 and a normal crc16 is that modbus starts the crc value out at 0xffff.
Parameters data – The data to create a crc16 of
Returns The calculated CRC

pymodbus.utilities.computeLRC(data)
Use to compute the longitudinal redundancy check against a string.
This is only used on the serial ASCII modbus protocol. A full description of this implementation can be found in appendix B of the serial line modbus description.
Parameters data – The data to apply an lrc to
Returns The calculated LRC

pymodbus.utilities.default(value)
Return the default value of an object.
Parameters value – The value to get the default of
Returns The default value

pymodbus.utilities.pack_bitstring(bits: List[bool]) → bytes
Create a bytestring out of a list of bits.
Parameters bits – A list of bits
Example:

    bits = [False, True, False, True]
    result = pack_bitstring(bits)

pymodbus.utilities.rtuFrameSize(data, byte_count_pos)
Calculate the size of the frame based on the byte count.
Parameters
• data – The buffer containing the frame.
• byte_count_pos – The index of the byte count in the buffer.
Returns The size of the frame.
The structure of frames with a byte count field is always the same:
• first, there are some header fields
• then the byte count field
• then as many data bytes as indicated by the byte count,
• finally the CRC (two bytes).
To calculate the frame size, it is therefore sufficient to extract the contents of the byte count field, add the position of this field, and finally increment the sum by three (one byte for the byte count field, two for the CRC).
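A small check of the CRC helpers over an RTU frame body (slave 1, function 3, start address 0, count 10); the frame bytes are an illustrative example:

    from pymodbus.utilities import checkCRC, computeCRC

    frame = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x0A])
    crc = computeCRC(frame)
    print(hex(crc))
    assert checkCRC(frame, crc)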
pymodbus.utilities.unpack_bitstring(data: bytes) → List[bool]
Create a bit list out of a bytestring.
Parameters data – The modbus data packet to decode
Example:

    data = b"bytes to decode"
    result = unpack_bitstring(data)
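The two bitstring helpers are inverses of each other, with bit 0 packed into the least significant bit of the first byte. A round trip:

    from pymodbus.utilities import pack_bitstring, unpack_bitstring

    bits = [True, False, True, True, False, False, False, False]
    packed = pack_bitstring(bits)   # b'\x0d' (bit 0 is the LSB)
    assert unpack_bitstring(packed) == bits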
Crate currencyapi
===

currencyapi API library.

> **Note:** experimental

The starting point of this library is the `Rates` type for currency rates, which provides:

* Latest Exchange Rates - `Rates::fetch_latest`
* Historical Exchange Rates

The Convert Exchange Rates endpoint is not provided, but conversion is implemented via `Rates::convert`.

### Example

```
use chrono::{DateTime, Utc};
use currencyapi::currency::{EUR, GBP, USD};
use currencyapi::{RateLimitIgnore, Rates};

#[tokio::main] // any async executor works
async fn main() {
    // `fetch_latest` takes an HTTP client (assumed here to be `reqwest::Client`).
    let client = reqwest::Client::new();
    let mut rates = Rates::<rust_decimal::Decimal>::new(); // requires the `rust_decimal` feature and crate
    // `request` is a request builder for the `latest` endpoint; its
    // construction was elided in the original example (see the `latest` module).
    let request = request.base_currency(EUR).currencies([EUR, USD, GBP]).build();
    let metadata = rates
        .fetch_latest::<DateTime<Utc>, RateLimitIgnore>(&client, request) // DateTime<Utc> from the `chrono` crate
        .await
        .unwrap();
    println!("Fetched {} rates as of {}", rates.len(), metadata.last_updated_at);
    for (currency, value) in rates.iter() {
        println!("{currency} {value}");
    }
}
```

Modules
---

* `currency` - Currencies constants.
* `latest` - API for the `latest` endpoint.

Structs
---

* `CurrencyCode` - Currency code.
* `RateLimit` - Rate-limit data from response headers.
* `RateLimitIgnore` - Ignore rate limit data.
* `Rates` - Currency rates.

Enums
---

* `CurrencyError` - Invalid currency code error.
* `Error` - An error from the API or from the HTTP client.

Traits
---

* `FromScientific` - Scientific notation parsing.

Struct currencyapi::Rates
===

```
pub struct Rates<RATE, const N: usize = { crate::currency::ARRAY.len() + /* slack */ 10 }> { /* private fields */ }
```

Currency rates.

Implementations
---

### impl<const N: usize, RATE> Rates<RATE, N>

#### pub const fn new() -> Self

Creates a new `Rates` value.

#### pub const fn len(&self) -> usize

Gets the count of rates.

#### pub const fn is_empty(&self) -> bool

Gets whether there are no rates.

#### pub fn clear(&mut self)

Removes all rates.

#### pub fn currencies(&self) -> &[CurrencyCode]

Gets a slice of the currencies.

#### pub fn rates(&self) -> &[RATE]

Gets a slice of the rates.

#### pub fn iter(&self) -> impl Iterator<Item = (CurrencyCode, &RATE)>

Iterates over currency rates.

#### pub unsafe fn push_unchecked(&mut self, currency: CurrencyCode, rate: RATE)

Pushes a new currency rate. See `Rates::push`.

##### Safety

Ensure there is space for the new rate, i.e. that `Rates::len` < `N`.
#### pub fn push(&mut self, currency: CurrencyCode, rate: RATE) -> bool

Pushes a new currency rate, if the `Rates` is not full. Does not check for duplicates, but other functions should use the latest pushed rate of a currency. Returns whether the rate was inserted.

#### pub fn extend_capped(&mut self, iter: impl IntoIterator<Item = (CurrencyCode, RATE)>) -> bool

Appends the given iterator's rates until full. Returns whether all values were appended.

#### pub fn get(&self, currency: CurrencyCode) -> Option<&RATE>

Gets the rate for the given currency, if it exists.

#### pub fn convert(&self, amount: &RATE, from: CurrencyCode, to: CurrencyCode) -> Option<RATE> where for<'x> &'x RATE: Div<&'x RATE, Output = RATE> + Mul<RATE, Output = RATE>

Converts an amount between currencies. Returns `None` if either the `from` or `to` currency is missing.

### impl<const N: usize, RATE> Rates<RATE, N>

#### pub async fn fetch_latest<DateTime: FromStr, RateLimit: for<'x> RateLimitData<'x>>(&mut self, client: &Client, request: Request) -> Result<Metadata<DateTime, RateLimit>, Error> where RATE: FromScientific

Fetches a `latest` `Request`.

Trait Implementations
---

### impl<const N: usize, RATE: Debug> Debug for Rates<RATE, N>

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl<const N: usize, RATE> Default for Rates<RATE, N>

#### fn default() -> Self

Returns the "default value" for a type.

Auto Trait Implementations
---

`Rates<RATE, N>` is `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe` whenever `RATE` is. The usual rustdoc blanket implementations (`Any`, `Borrow`/`BorrowMut`, `From`/`Into`, `TryFrom`/`TryInto`, `Instrument`, `WithSubscriber`) also apply.
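Taken together, `push`, `get`, and `convert` define the container's in-memory behaviour. Below is a brief hypothetical usage sketch against the signatures above; the rate values are invented, and `convert`'s rescaling via the two stored rates is inferred from its `Div`/`Mul` bounds rather than stated by the crate docs:

```
use currencyapi::currency::{EUR, USD};
use currencyapi::Rates;
use rust_decimal::Decimal; // requires the `rust_decimal` feature and crate

fn main() {
    let mut rates = Rates::<Decimal>::new();

    // `push` appends without deduplicating and returns false once the
    // fixed capacity `N` is exhausted.
    assert!(rates.push(EUR, Decimal::ONE));
    assert!(rates.push(USD, "1.10".parse().unwrap()));

    // `get` looks a rate up by currency code.
    assert!(rates.get(USD).is_some());

    // `convert` returns None if either currency is missing; the Div/Mul
    // bounds suggest it rescales the amount via the two stored rates.
    let amount: Decimal = "100".parse().unwrap();
    if let Some(in_usd) = rates.convert(&amount, EUR, USD) {
        println!("100 EUR = {in_usd} USD");
    }
}
```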
Module currencyapi::currency
===
Currencies constants.

This module defines all known currencies as constants, as well as `ARRAY` which contains all of them in a constant array.

Constants
---
* ARRAY: An array of all the currencies defined in this module.
* One `CurrencyCode` constant per currency, named after its code: AED, AFN, ALL, AMD, ANG, AOA, ARB, ARS, AUD, AVAX, AWG, AZN, BAM, BBD, BDT, BGN, BHD, BIF, BMD, BNB, BND, BOB, BRL, BSD, BTC, BTN, BUSD, BWP, BYN, BYR, BZD, CAD, CDF, CHF, CLF, CLP, CNY, COP, CRC, CUC, CUP, CVE, CZK, DAI, DJF, DKK, DOP, DOT, DZD, EGP, ERN, ETB, ETH, EUR, FJD, FKP, GBP, GEL, GGP, GHS, GIP, GMD, GNF, GTQ, GYD, HKD, HNL, HRK, HTG, HUF, IDR, ILS, IMP, INR, IQD, IRR, ISK, JEP, JMD, JOD, JPY, KES, KGS, KHR, KMF, KPW, KRW, KWD, KYD, KZT, LAK, LBP, LKR, LRD, LSL, LTC, LTL, LVL, LYD, MAD, MATIC, MDL, MGA, MKD, MMK, MNT, MOP, MRO, MUR, MVR, MWK, MXN, MYR, MZN, NAD, NGN, NIO, NOK, NPR, NZD, OMR, OP, PAB, PEN, PGK, PHP, PKR, PLN, PYG, QAR,
RON, RSD, RUB, RWF, SAR, SBD, SCR, SDG, SEK, SGD, SHP, SLL, SOL, SOS, SRD, STD, SVC, SYP, SZL, THB, TJS, TMT, TND, TOP, TRY, TTD, TWD, TZS, UAH, UGX, USD, USDC, USDT, UYU, UZS, VEF, VND, VUV, WST, XAF, XAG, XAU, XCD, XDR, XOF, XPD, XPF, XPT, XRP, YER, ZAR, ZMK, ZMW, ZWL.

Struct currencyapi::CurrencyCode
===

```
#[repr(C, align(8))]
pub struct CurrencyCode { /* private fields */ }
```

Currency code.

It's recommended to use the constants in the `currencies` module.

Implementations
---
### impl CurrencyCode

#### pub unsafe fn new_unchecked(code: &[u8]) -> Self
Creates a new `CurrencyCode` value.

##### Safety
Ensure that the code's length is within range [2..5]. The code must consist only of uppercase ASCII characters, and be terminated by zeroes until the end of the slice.

Trait Implementations
---
### impl AsRef<[u8]> for CurrencyCode
#### fn as_ref(&self) -> &[u8]
Converts this type into a shared reference of the (usually inferred) input type.

### impl AsRef<str> for CurrencyCode
#### fn as_ref(&self) -> &str
Converts this type into a shared reference of the (usually inferred) input type.

### impl Clone for CurrencyCode
#### fn clone(&self) -> CurrencyCode
Returns a copy of the value.

### impl Debug for CurrencyCode
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for CurrencyCode
The default currency code is `USD`. It is chosen for being the most traded currency.
#### fn default() -> Self
Returns the "default value" for a type.

### impl Display for CurrencyCode
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl FromStr for CurrencyCode
#### type Err = Error
The associated error which can be returned from parsing.
#### fn from_str(s: &str) -> Result<Self, Self::Err>
Parses a string `s` to return a value of this type.

### impl Hash for CurrencyCode
#### fn hash<H: Hasher>(&self, state: &mut H)
Feeds this value into the given `Hasher`.
### impl Ord for CurrencyCode
#### fn cmp(&self, other: &Self) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Self
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Self
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Self
Restricts a value to a certain interval.

### impl PartialEq<CurrencyCode> for CurrencyCode
#### fn eq(&self, other: &Self) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl PartialOrd<CurrencyCode> for CurrencyCode
#### fn partial_cmp(&self, other: &Self) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt / le / gt / ge
The comparisons backing the `<`, `<=`, `>` and `>=` operators.

### impl TryFrom<&[u8]> for CurrencyCode
#### type Error = Error
The type returned in the event of a conversion error.
#### fn try_from(value: &[u8]) -> Result<Self, Self::Error>
Performs the conversion.

### impl Copy for CurrencyCode
### impl Eq for CurrencyCode

Auto Trait Implementations
---
`CurrencyCode` is `Send`, `Sync`, `Unpin`, `RefUnwindSafe` and `UnwindSafe`.

Blanket Implementations
---
The standard blanket implementations apply (`Any`, `Borrow`, `BorrowMut`, `Equivalent`, `From`, `Into`, `ToOwned`, `ToString`, `TryFrom`, `TryInto`), plus `Instrument` and `WithSubscriber` from the `tracing` ecosystem.
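Parsing gives back the same values as the constants. A minimal sketch, assuming the `FromStr` error type (listed simply as `Error` above) implements `std::error::Error`:

```
use currencyapi::{currency::USD, CurrencyCode};

fn main() -> Result<(), Box<dyn std::error::Error>> {
	// `FromStr` validates the code (uppercase ASCII, correct length).
	let code: CurrencyCode = "USD".parse()?;
	assert_eq!(code, USD); // constants compare equal to parsed codes
	println!("{code}"); // Display prints the textual code
	Ok(())
}
```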
Module currencyapi::latest
===
API for the `latest` endpoint.

Structs
---
* Builder: `Request` builder.
* Metadata: `latest` endpoint response data.
* Request: Request to the `latest` endpoint.

Type Aliases
---
* AllCurrencies: A `Builder` buffer for all currencies.
Struct currencyapi::RateLimit
===

```
pub struct RateLimit {
    pub limit_minute: usize,
    pub limit_month: usize,
    pub remainig_minute: usize,
    pub remaining_month: usize,
}
```

Rate-limit data from response headers.

Fields
---
`limit_minute: usize`: How many requests can be made in a minute.
`limit_month: usize`: How many requests can be made in a month.
`remainig_minute: usize`: How many remaining requests can be made in the minute of the request.
`remaining_month: usize`: How many remaining requests can be made in the month of the request.

Trait Implementations
---
### impl Clone for RateLimit
#### fn clone(&self) -> RateLimit
Returns a copy of the value.

### impl Debug for RateLimit
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for RateLimit
#### fn default() -> RateLimit
Returns the "default value" for a type.

### impl Hash for RateLimit
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.

### impl Ord for RateLimit
#### fn cmp(&self, other: &RateLimit) -> Ordering
This method returns an `Ordering` between `self` and `other`, with the usual `max`, `min` and `clamp` helpers.

### impl PartialEq<RateLimit> for RateLimit
#### fn eq(&self, other: &RateLimit) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.

### impl PartialOrd<RateLimit> for RateLimit
#### fn partial_cmp(&self, other: &RateLimit) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists, backing the `<`, `<=`, `>` and `>=` operators.

### impl TryFrom<&Response> for RateLimit
#### type Error = ()
The type returned in the event of a conversion error.
#### fn try_from(value: &Response) -> Result<Self, Self::Error>
Performs the conversion.

### impl Copy for RateLimit
### impl Eq for RateLimit
### impl StructuralEq for RateLimit
### impl StructuralPartialEq for RateLimit

Auto Trait Implementations
---
`RateLimit` is `Send`, `Sync`, `Unpin`, `RefUnwindSafe` and `UnwindSafe`.

Blanket Implementations
---
The standard blanket implementations apply, as for `CurrencyCode`.
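Real values come from response headers via the `TryFrom<&Response>` impl; the sketch below just shows the public fields, using the documented `Default` impl as a stand-in for a parsed value:

```
use currencyapi::RateLimit;

fn main() {
	let rl = RateLimit::default(); // stand-in for a value parsed from headers
	println!(
		"minute: {}/{}, month: {}/{}",
		rl.remainig_minute, // sic: the field is spelled this way in the crate
		rl.limit_minute,
		rl.remaining_month,
		rl.limit_month,
	);
}
```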
Struct currencyapi::RateLimitIgnore
===

```
pub struct RateLimitIgnore;
```

Ignore rate limit data.

Trait Implementations
---
### impl TryFrom<&Response> for RateLimitIgnore
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(_: &Response) -> Result<Self, Self::Error>
Performs the conversion.

Auto Trait Implementations
---
`RateLimitIgnore` is `Send`, `Sync`, `Unpin`, `RefUnwindSafe` and `UnwindSafe`.

Blanket Implementations
---
The standard blanket implementations apply, as for `CurrencyCode`.

Enum currencyapi::CurrencyError
===

```
pub enum CurrencyError {
    TooShort,
    TooLong,
    InvalidCharacter(u8),
}
```

Invalid currency code error. Valid currency codes are three uppercase alpha ASCII characters.

Variants
---
### TooShort
The currency code is too short.
### TooLong
The currency code is too long.
### InvalidCharacter(u8)
The currency code has an invalid character.

Trait Implementations
---
### impl Debug for CurrencyError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for CurrencyError
#### fn fmt(&self, __formatter: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for CurrencyError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.

Auto Trait Implementations
---
`CurrencyError` is `Send`, `Sync`, `Unpin`, `RefUnwindSafe` and `UnwindSafe`.

Blanket Implementations
---
The standard blanket implementations apply, plus `ToString` via `Display`.
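A sketch of handling each variant when validating user input; it assumes `CurrencyCode`'s `FromStr` error is this enum (the impl blocks above refer to it by its local name, `Error`):

```
use currencyapi::{CurrencyCode, CurrencyError};

fn main() {
	// Lowercase input should fail validation.
	match "eur".parse::<CurrencyCode>() {
		Ok(code) => println!("parsed {code}"),
		Err(CurrencyError::TooShort) => println!("code too short"),
		Err(CurrencyError::TooLong) => println!("code too long"),
		Err(CurrencyError::InvalidCharacter(byte)) => println!("invalid byte {byte}"),
	}
}
```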
Enum currencyapi::Error
===

```
pub enum Error {
    RateLimitError,
    HttpError(Error),
    ResponseParseError,
    RateLimitParseError,
}
```

An error from the API or from the HTTP client.

Variants
---
### RateLimitError
The rate-limit was hit.
### HttpError(Error)
HTTP error.
### ResponseParseError
Failed to parse the response.
### RateLimitParseError
Failed to parse the rate-limit headers.

Trait Implementations
---
### impl Debug for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for Error
#### fn fmt(&self, __formatter: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for Error
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.

### impl From<Error> for Error
#### fn from(source: Error) -> Self
Converts to this type from the input type (wrapping the HTTP client's error).

Auto Trait Implementations
---
### impl !RefUnwindSafe for Error
### impl Send for Error
### impl Sync for Error
### impl Unpin for Error
### impl !UnwindSafe for Error

Blanket Implementations
---
The standard blanket implementations apply, plus `ToString` via `Display`.
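A sketch of exhaustive error handling; `result` stands in for the value returned by `Rates::fetch_latest` (simplified here to a unit success type), and the wrapped HTTP error is assumed to implement `Display`:

```
use currencyapi::Error;

fn report(result: Result<(), Error>) {
	match result {
		Ok(()) => println!("rates updated"),
		Err(Error::RateLimitError) => eprintln!("rate limit hit; back off"),
		Err(Error::HttpError(e)) => eprintln!("http error: {e}"),
		Err(Error::ResponseParseError) => eprintln!("could not parse the response body"),
		Err(Error::RateLimitParseError) => eprintln!("could not parse the rate-limit headers"),
	}
}

fn main() {
	report(Ok(()));
}
```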
Trait currencyapi::FromScientific
===

```
pub trait FromScientific: Sized {
    type Error;

    // Required method
    fn parse_scientific(s: &str) -> Result<Self, Self::Error>;
}
```

Scientific notation parsing.

Required Associated Types
---
#### type Error
The parse error type.

Required Methods
---
#### fn parse_scientific(s: &str) -> Result<Self, Self::Error>
Parses a decimal number from a string. The number representation may or may not be in scientific notation.

Implementations on Foreign Types
---
### impl FromScientific for f32
#### type Error = Error
#### fn parse_scientific(s: &str) -> Result<Self, Self::Error>

### impl FromScientific for f64
#### type Error = Error
#### fn parse_scientific(s: &str) -> Result<Self, Self::Error>

Implementors
---
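Implementing the trait for a custom rate type can delegate to an existing parser that already understands scientific notation. A sketch with a hypothetical `Rate` newtype over `f64`, mirroring the provided `f32`/`f64` impls:

```
use currencyapi::FromScientific;

struct Rate(f64); // hypothetical wrapper used as the RATE parameter of `Rates`

impl FromScientific for Rate {
	type Error = std::num::ParseFloatError;

	fn parse_scientific(s: &str) -> Result<Self, Self::Error> {
		// Rust's float parser accepts plain and scientific notation ("1.2e-3").
		s.parse::<f64>().map(Rate)
	}
}

fn main() {
	let r = Rate::parse_scientific("1.2e-3").unwrap();
	println!("{}", r.0);
}
```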
chai2010.cn/gobook
go
Go
README [¶](#section-readme)
---
### Go语言高级编程 (Advanced Go Programming)

*Promotion: [《WebAssembly标准入门》 (A Primer on the WebAssembly Standard) is now available for pre-order, check it out!](https://github.com/chai2010/awesome-wasm-zh/blob/master/webassembly-primer.md) ([buy on JD.com](https://item.jd.com/12499372.html))*

---

This book covers advanced topics such as CGO, Go assembly, RPC implementation, web framework implementation, and distributed systems. It targets developers who already have some experience with Go and want a deeper understanding of its various advanced usages. Readers who are just starting with Go are advised to first learn the fundamentals systematically from [《Go语言圣经》 (The Go Programming Language)](https://github.com/golang-china/gopl-zh).

![](https://github.com/chai2010/advanced-go-programming-book/raw/v1.0.0/cover.png)

* Author: 柴树杉 (Chai Shushan), GitHub [@chai2010](https://github.com/chai2010), Twitter [@chaishushan](https://twitter.com/chaishushan), homepage <https://chai2010.cn/about>
* Author: 曹春晖 (Cao Chunhui), GitHub [@cch123](https://github.com/cch123), homepage [xargin](http://xargin.com)
* Repository: https://github.com/chai2010/advanced-go-programming-book
* Star history: https://starcharts.herokuapp.com/chai2010/advanced-go-programming-book.svg

#### Read online

* [SUMMARY.md](https://github.com/chai2010/advanced-go-programming-book/blob/v1.0.0/SUMMARY.md)
* <https://chai2010.cn/advanced-go-programming-book/>
* <https://www.gitbook.com/book/chai2010/advanced-go-programming-book/>

#### Related talks

1. [Go语言简介 (An Introduction to Go)](https://talks.godoc.org/github.com/chai2010/awesome-go-zh/chai2010/chai2010-golang-intro.slide) - [chai2010](https://github.com/chai2010/awesome-go-zh/tree/master/chai2010), Huanghe meetup, Wuhan, 2018/12/16
2. [GIAC 2018 - Where is Go heading?](https://github.com/chai2010/awesome-go-zh/raw/master/chai2010/giac2018) - [chai2010](https://github.com/chai2010/awesome-go-zh/tree/master/chai2010), GIAC Global Internet Architecture Conference, Shanghai, 2018/11/23
3. [Go语言并发编程 (Concurrent Programming in Go)](https://talks.godoc.org/github.com/chai2010/awesome-go-zh/chai2010/chai2010-golang-concurrency.slide) - [chai2010](https://github.com/chai2010/awesome-go-zh/tree/master/chai2010), Guanggu Maoyouhui meetup, Wuhan, 2018/09/16; notes [part 1](https://mp.weixin.qq.com/s/UaY9gJU85dq-dXlOhLYY1Q) / [part 2](https://mp.weixin.qq.com/s/_aKNO-H11GEDA-l0rycfQQ)
4. Deep dive into CGO programming: <https://github.com/chai2010/gopherchina2018-cgo-talk>

#### Toutiao.io channel

<https://toutiao.io/subjects/318517>

![](https://github.com/chai2010/advanced-go-programming-book/raw/v1.0.0/toutiao-318517-small.jpg)

#### WeChat official account (golang-china)

![](https://github.com/chai2010/advanced-go-programming-book/raw/v1.0.0/weixin-golang-china.jpg)

#### License

[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).

![Creative Commons License](https://github.com/chai2010/advanced-go-programming-book/raw/v1.0.0/images/by-nc-sa-4.0-88x31.png)

Any commercial use or citation of all or part of this document is strictly prohibited!

Suggestions are welcome!
---

#### Acknowledgements

Thanks to everyone who submitted PRs! In no particular order:

| [**fuwensun**](https://github.com/fuwensun) | [**qichengzx**](https://github.com/qichengzx) | [**lewgun**](https://github.com/lewgun) | [**LaoK996**](https://github.com/LaoK996) | [**plpan**](https://github.com/plpan) | [**xiaoliwang**](https://github.com/xiaoliwang) | [**barryz**](https://github.com/barryz) |
| --- | --- | --- | --- | --- | --- | --- |
| [**alphayan**](https://github.com/alphayan) | [**leobuzhi**](https://github.com/leobuzhi) | [**iikira**](https://github.com/iikira) | [**fognome**](https://github.com/fognome) | [**darren**](https://github.com/darren) | [**jiayx**](https://github.com/jiayx) | [**orangle**](https://github.com/orangle) |
| [**yangtaooo**](https://github.com/yangtaooo) | [**bcb51**](https://github.com/bcb51) | [**mathrobot**](https://github.com/mathrobot) | [**7535**](https://github.com/7535) | [**cloverstd**](https://github.com/cloverstd) | [**douglarek**](https://github.com/douglarek) | [**RealDeanZhao**](https://github.com/RealDeanZhao) |
| [**yyt030**](https://github.com/yyt030) | [**yuqaf1989**](https://github.com/yuqaf1989) | [**BeccaBecca**](https://github.com/BeccaBecca) | [**cloudzhou**](https://github.com/cloudzhou) | [**ezioruan**](https://github.com/ezioruan) | [**hacknode**](https://github.com/hacknode) | [**Frozen-Shadow**](https://github.com/Frozen-Shadow) |

---

Full contributor list: <https://api.github.com/repos/chai2010/advanced-go-programming-book/contributors>

Documentation [¶](#section-documentation)
---
### Overview [¶](#pkg-overview)

《Go语言高级编程》 (Advanced Go Programming), an open-source book by chai2010.

```
https://github.com/chai2010/advanced-go-programming-book
```
apollo
cran
R
Package ‘apollo’

August 10, 2023

Type Package
Title Tools for Choice Model Estimation and Application
Version 0.3.0
Description Choice models are a widely used technique across numerous scientific disciplines. The Apollo package is a very flexible tool for the estimation and application of choice models in R. Users are able to write their own model functions or use a mix of already available ones. Random heterogeneity, both continuous and discrete and at the level of individuals and choices, can be incorporated for all models. There is support for both standalone models and hybrid model structures. Both classical and Bayesian estimation are available, and multiple discrete continuous models are covered in addition to discrete choice. Multi-threaded processing is supported for estimation, and a large number of pre- and post-estimation routines, including routines for computing posterior (individual-level) distributions, are available. For examples, a manual, and a support forum, visit <http://www.ApolloChoiceModelling.com>. For more information on choice models see <NAME>. (2009) <isbn:978-0-521-74738-7> and Hess, S. & <NAME>. (2014) <isbn:978-1-781-00314-5> for an overview of the field.
License GPL-2
URL http://www.apolloChoiceModelling.com
BugReports http://www.apolloChoiceModelling.com/forum/
Encoding UTF-8
LazyData true
Depends R (>= 4.0.0), stats, utils
Imports Rcpp (>= 1.0.0), maxLik, mnormt, mvtnorm, graphics, randtoolbox, numDeriv, parallel, Deriv, matrixStats, RSGHB, coda, tibble, stringr, bgw, cli, Rsolnp
LinkingTo Rcpp, RcppArmadillo, RcppEigen
Suggests knitr, rmarkdown, testthat
VignetteBuilder knitr
RoxygenNote 7.2.3
NeedsCompilation yes
Author <NAME> [aut, cre], <NAME> [aut], <NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-08-10 12:40:02 UTC
.onAttach Prints package startup message

Description
This function is only called by R when attaching the package.

Usage
.onAttach(libname, pkgname)

Arguments
libname Name of library.
pkgname Name of package.

Value
Nothing

apollo_addCovariance Adds covariance matrix to Apollo model

Description
Receives an estimated model object, calculates its Hessian, and classical and robust covariance matrix, and returns the same model object, but with these additional elements.

Usage
apollo_addCovariance(model, apollo_inputs)

Arguments
model Model object. Estimated model object as returned by function apollo_estimate.
apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs.

Value
model.

apollo_attach Attaches predefined variables.

Description
Attaches parameters and data to allow users to refer to individual variables by name without reference to the object that contains them.

Usage
apollo_attach(apollo_beta, apollo_inputs)

Arguments
apollo_beta Named numeric vector. Names and values for parameters.
apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs.

Details
This function should be called at the beginning of apollo_probabilities to make writing the log-likelihood more user-friendly. If used, then apollo_detach should be called at the end of apollo_probabilities, or more conveniently, using on.exit after the initial call to apollo_attach, as in the sketch below. apollo_attach attaches apollo_beta, database, draws, and the output of apollo_randCoeff and apollo_lcPars, if they are defined by the user.

Value
Nothing.
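A minimal sketch of this pattern, following the structure described in Details (the model components themselves are elided):

```
apollo_probabilities <- function(apollo_beta, apollo_inputs,
                                 functionality = "estimate") {
  apollo_attach(apollo_beta, apollo_inputs)
  on.exit(apollo_detach(apollo_beta, apollo_inputs))
  P <- list()
  # ... define model components here, referring to parameters and
  # data columns directly by name thanks to apollo_attach() ...
  P <- apollo_prepareProb(P, apollo_inputs, functionality)
  return(P)
}
```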
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws. • "preprocess": Prepares likelihood functions for use in estimation. • "raw": For debugging, produces probabilities of all alternatives and in- dividual model components at the level of an observation, at the level of individual draws. • "report": Prepares output summarising model and choiceset structure. • "shares_LL": Produces overall model likelihood with constants only. • "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "zero_LL": Produces overall model likelihood with all parameters at zero. Value Argument P with (for most functionalities) the original contents averaged over inter-individual draws. Shape depends on argument functionality. • "components": Returns P without changes. • "conditionals": Returns P without averaging across draws. Drops all components except "model". • "estimate": Returns P containing the likelihood of the model averaged across inter-individual draws. Drops all components except "model". • "gradient": Returns P containing the gradient of the likelihood averaged across inter-individual draws. Drops all components except "model". • "output": Returns P containing the likelihood of all model components averaged across inter- individual draws. • "prediction": Returns P containing the probabilities/likelihoods of all alternatives for all model components averaged across inter-individual draws. • "preprocess": Returns P without changes. • "raw": Returns P without changes. • "report": Returns P without changes. • "shares_LL": Returns P without changes. • "validate": Returns P containing the likelihood of the model averaged across inter-individual draws. Drops all components except "model". • "zero_LL": Returns P without changes. apollo_avgIntraDraws Averages across intra-individual draws. Description Averages observation-specific likelihood across intra-individual draws. Usage apollo_avgIntraDraws(P, apollo_inputs, functionality) Arguments P List of vectors, matrices or 3-dim arrays. Likelihood of the model components. apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs. functionality Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are: • "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations. • "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws. • "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "gradient": For model estimation, produces analytical gradients of the likelihood, where possible. • "output": Prepares output for post-estimation reporting. • "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws. 
• "preprocess": Prepares likelihood functions for use in estimation. • "raw": For debugging, produces probabilities of all alternatives and in- dividual model components at the level of an observation, at the level of individual draws. • "report": Prepares output summarising model and choiceset structure. • "shares_LL": Produces overall model likelihood with constants only. • "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "zero_LL": Produces overall model likelihood with all parameters at zero. Value Argument P with (for most functionalities) the original contents averaged over intra-individual draws. Shape depends on argument functionality. • "components": Returns P without changes. • "conditionals": Returns P containing the likelihood of the model averaged across intra- individual draws. Drops all components except for "model". • "estimate": Returns P containing the likelihood of the model averaged across intra-individual draws. Drops all components except "model". • "gradient": Returns P containing the gradient of the likelihood averaged across intra-individual draws. Drops all components except "model". • "output": Returns P containing the likelihood of all model components averaged across intra- individual draws. • "prediction": Returns P containing the probabilities of all alternatives for all model compo- nents averaged across intra-individual draws. • "preprocess": Returns P without changes. • "raw": Returns P without changes. • "report": Returns P without changes. • "validate": Returns P containing the likelihood of the model averaged across intra-individual draws. Drops all components but "model". • "zero_LL": Returns P without changes. apollo_basTest Ben-Akiva & Swait test Description Carries out the Ben-Akiva & Swait test for non-nested models and reports the corresponding p- value. Usage apollo_basTest(model1, model2) Arguments model1 Either a character variable with the name (and possibly path) of a previously esti- mated model, or an estimated model in memory, as returned by apollo_estimate. model2 Either a character variable with the name (and possibly path) of a previously esti- mated model, or an estimated model in memory, as returned by apollo_estimate. Details The two models need to both be discrete choice, and need to have been estimated on the same data. Value Ben-Akiva & Swait test p-value (invisibly) apollo_bootstrap Bootstrap a model Description Samples individuals with replacement from the database, and estimates the model for each sample. Usage apollo_bootstrap( apollo_beta, apollo_fixed, apollo_probabilities, apollo_inputs, estimate_settings = list(estimationRoutine = "bgw", maxIterations = 200, writeIter = FALSE, hessianRoutine = "none", printLevel = 2L, silent = FALSE, maxLik_settings = list()), bootstrap_settings = list(nRep = 30, samples = NA, calledByEstimate = FALSE, recycle = TRUE) ) Arguments apollo_beta Named numeric vector. Names and values for parameters. apollo_fixed Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation. apollo_probabilities Function. Returns probabilities of the model to be estimated. Must receive three arguments: • apollo_beta: Named numeric vector. Names and values of model param- eters. • apollo_inputs: List containing options of the model. See apollo_validateInputs. • functionality: Character. 
apollo_bootstrap Bootstrap a model

Description
Samples individuals with replacement from the database, and estimates the model for each sample.

Usage
apollo_bootstrap(
  apollo_beta, apollo_fixed, apollo_probabilities, apollo_inputs,
  estimate_settings = list(estimationRoutine = "bgw", maxIterations = 200, writeIter = FALSE, hessianRoutine = "none", printLevel = 2L, silent = FALSE, maxLik_settings = list()),
  bootstrap_settings = list(nRep = 30, samples = NA, calledByEstimate = FALSE, recycle = TRUE)
)

Arguments
apollo_beta Named numeric vector. Names and values for parameters.
apollo_fixed Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation.
apollo_probabilities Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs.
estimate_settings List. Options controlling the estimation process. See apollo_estimate. hessianRoutine="none" by default.
bootstrap_settings List containing settings for the sampling procedure. User input is required for all settings except those with a default or marked as optional.
• calledByEstimate: Logical. TRUE if apollo_bootstrap is called by apollo_estimate. FALSE by default.
• nRep: Numeric scalar. Number of times the model must be estimated with different samples. Default is 30.
• recycle: Logical. If TRUE, the function will look for old output files and append new repetitions to them. If FALSE, output files will be overwritten.
• samples: Numeric matrix or data.frame. Optional argument. Must have as many rows as observations in the database, and as many columns as number of repetitions wanted. Each column represents a re-sample, and each element the number of times that observation must be included in the sample. If this argument is provided, then nRep is ignored. Note that this allows sampling at the observation rather than the individual level, which is not recommended for panel data.
• seed: DEPRECATED, apollo_control$seed is used since v0.2.5. Numeric scalar (integer). Random number generator seed to generate the bootstrap samples. Only used if samples is NA. Default is 24.

Details
This function implements a basic block bootstrap. It estimates the model parameters on nRep different samples. Each new sample is constructed by sampling with replacement from the original full sample. Each new sample has as many individuals as the original sample, though some of them may be repeated. Sampling is done at the individual level, therefore if different individuals have different numbers of observations, each re-sample does not necessarily have the same number of observations. If the sampling should instead be done at the observation level (not recommended for panel data), then the optional bootstrap_settings$samples argument should be provided.
For each sample, only the parameters and log-likelihood are estimated. Standard errors are not calculated (they may be added in future versions). The composition of the re-samples is stored in a file, but is stable with the same seed.
This function writes three different files to the working or output directory:
• modelName_bootstrap_params.csv: estimated parameters, final log-likelihood, and number of observations for each re-sample.
• modelName_bootstrap_samples.csv: composition of each re-sample.
• modelName_bootstrap_vcov.csv: variance-covariance matrix of the estimated parameters across re-samples.
The first two files are updated throughout the run of this function, while the last one is only written once the function finishes. When run, this function will look for the first two files above in the working/output directory. If they are found, the function will attempt to pick up re-sampling from where those files left off. This is useful in cases where the original bootstrapping was interrupted, or when additional re-sampling runs are to be performed.

Value
List with three elements.
• estimates: Matrix containing the parameter estimates for each repetition. As many rows as repetitions and as many columns as parameters.
• LL: Vector of final log-likelihoods of each repetition.
• varcov: Covariance matrix of the estimated parameters across the repetitions.
This function also writes three output files to the working/output directory, with the following names ('x' represents the model name):
• x_bootstrap_params.csv: Table containing the parameter estimates, log-likelihood, and number of observations for each repetition.
• x_bootstrap_samples.csv: Table containing the description of the sample used in each repetition. Same format as bootstrap_settings$samples.
• x_bootstrap_vcov: Table containing the covariance matrix of estimated parameters across the repetitions.
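A usage sketch, assuming apollo_beta, apollo_fixed, apollo_probabilities and apollo_inputs have already been set up as in a standard Apollo script:

```
bootstrap_out <- apollo_bootstrap(
  apollo_beta, apollo_fixed, apollo_probabilities, apollo_inputs,
  bootstrap_settings = list(nRep = 100)  # 100 re-samples instead of the default 30
)
# Spread of each parameter's estimates across re-samples:
apply(bootstrap_out$estimates, 2, sd)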
apollo_checkArguments Checks definitions of Apollo functions

Description
Checks that the user-defined functions used by Apollo are correctly defined by the user.

Usage
apollo_checkArguments(
  apollo_probabilities = NA,
  apollo_randCoeff = NA,
  apollo_lcPars = NA
)

Arguments
apollo_probabilities Function. Likelihood function as defined by the user.
apollo_randCoeff Function. Used with mixing models. Constructs the random parameters of a mixing model. Receives two arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: The output of this function (apollo_validateInputs).
apollo_lcPars Function. Used with latent class models. Constructs a list of parameters for each latent class. Receives two arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: The output of this function (apollo_validateInputs).

Details
It only checks that the functions have the correct definition of inputs. It does not run the functions.

Value
Returns (invisibly) TRUE if definitions are correct, and FALSE otherwise.

apollo_choiceAnalysis Reports market share for subsamples

Description
Compares market shares across subsamples in dataset, and conducts statistical tests.

Usage
apollo_choiceAnalysis(choiceAnalysis_settings, apollo_control, database)

Arguments
choiceAnalysis_settings List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• alternatives: Named numeric vector. Names of alternatives and their corresponding value in choiceVar. Note that these need not necessarily be the alternatives as defined in the model, but could e.g. relate to cheapest/most expensive.
• avail: Named list of numeric vectors or scalars. Availabilities of alternatives, one element per alternative. Names of elements must match those in alternatives. Values can be 0 or 1. A user can also specify avail=1 to indicate universal availability, or omit the setting completely.
• choiceVar: Numeric vector. Contains choices for all observations. It will usually be a column from the database. Values are defined in alternatives.
• explanators: data.frame. Variables determining subsamples of the database. Values in each column must describe a group or groups of individuals (e.g. socio-demographics). Most usually a subset of columns from the database.
• printToScreen: Logical. TRUE for returning output to screen as well as file. TRUE by default.
• rows: Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs).
apollo_control List. Options controlling the running of the code. See apollo_validateInputs.
database data.frame. Data used by model.

Details
Saves the output to a csv file in the working/output directory.

Value
Silently returns a matrix containing the mean value for each explanator for those cases where an alternative is chosen and where it is not chosen, as well as the t-test comparing those means (H0: equivalence). The table is also written to a file called modelName_choiceAnalysis.csv and printed to screen.
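A sketch with hypothetical database columns (choice, av_car, av_bus, av_rail, female, income):

```
choiceAnalysis_settings <- list(
  alternatives = c(car = 1, bus = 2, rail = 3),
  avail        = list(car = database$av_car, bus = database$av_bus, rail = database$av_rail),
  choiceVar    = database$choice,
  explanators  = database[, c("female", "income")]
)
apollo_choiceAnalysis(choiceAnalysis_settings, apollo_control, database)
```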
apollo_classAlloc Calculates class allocation probabilities for a Latent Class model

Description
Calculates class allocation probabilities for a Latent Class model using a Multinomial Logit model and can also perform other operations based on the value of the functionality argument.

Usage
apollo_classAlloc(classAlloc_settings)

Arguments
classAlloc_settings List of inputs of the MNL model. It should contain the following.
• utilities: Named list of deterministic utilities. Utilities of the classes in the class allocation model. Names of elements must match those in avail, if provided.
• avail: Named list of numeric vectors or scalars. Availabilities of classes, one element per class. Names of elements must match those in classes. Values can be 0 or 1. These can be scalars or vectors (of length equal to rows in the database). A user can also specify avail=1 to indicate universal availability, or omit the setting completely.
• rows: Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs).
• componentName: Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed.

Value
The returned object depends on the value of argument functionality, which it fetches from the calling stack (see apollo_validateInputs).
• "components": Same as "estimate".
• "conditionals": Same as "estimate".
• "estimate": List of vector/matrices/arrays with the allocation probabilities for each class.
• "gradient": List containing the likelihood and gradient of the model component.
• "output": Same as "estimate".
• "prediction": Same as "estimate".
• "preprocess": Returns a list with pre-processed inputs, based on classAlloc_settings.
• "raw": Same as "estimate".
• "report": Same as "estimate".
• "shares_LL": List with probabilities for each class in an equal shares setting.
• "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs.
• "zero_LL": List with probabilities for each class in an equal shares setting.
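A sketch of a typical call inside apollo_lcPars, assuming two latent classes and an estimated allocation constant delta_class1; the pi_values name follows common Apollo usage:

```
apollo_lcPars <- function(apollo_beta, apollo_inputs) {
  lcpars <- list()
  # ... class-specific parameters go here ...
  classAlloc_settings <- list(
    utilities = list(class1 = delta_class1, class2 = 0)  # base class normalised to 0
  )
  lcpars[["pi_values"]] <- apollo_classAlloc(classAlloc_settings)
  return(lcpars)
}
```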
apollo_cnl Calculates Cross-Nested Logit probabilities

Description
Calculates the probabilities of a Cross-Nested Logit model and can also perform other operations based on the value of the functionality argument.

Usage
apollo_cnl(cnl_settings, functionality)

Arguments
cnl_settings List of inputs of the CNL model. User input is required for all settings except those with a default or marked as optional.
• alternatives: Named numeric vector. Names of alternatives and their corresponding value in choiceVar.
• avail: Named list of numeric vectors or scalars. Availabilities of alternatives, one element per alternative. Names of elements must match those in alternatives. Values can be 0 or 1. These can be scalars or vectors (of length equal to rows in the database). A user can also specify avail=1 to indicate universal availability, or omit the setting completely.
• choiceVar: Numeric vector. Contains choices for all observations. It will usually be a column from the database. Values are defined in alternatives.
• cnlNests: List of numeric scalars or vectors. Lambda parameters for each nest. Elements must be named according to nests. The lambda at the root is fixed to 1, and therefore does not need to be defined.
• cnlStructure: Numeric matrix. One row per nest and one column per alternative. Each element of the matrix is the alpha parameter of that (nest, alternative) pair.
• componentName: Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed.
• utilities: Named list of deterministic utilities. Utilities of the alternatives. Names of elements must match those in alternatives.
• rows: Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs).
functionality Character. Setting instructing Apollo what processing to apply to the likelihood function. Takes the same possible values as described for apollo_avgInterDraws above.

Details
For the model to be consistent with utility maximisation, the estimated value of the lambda parameter of all nests should be between 0 and 1. Lambda parameters are inversely proportional to the correlation between the error terms of alternatives in a nest. If lambda=1, there is no relevant correlation between the unobserved utility of alternatives in that nest.
Alpha parameters inside cnlStructure should be between 0 and 1. Using a transformation to ensure this constraint is satisfied is recommended for complex structures (e.g. logistic transformation).

Value
The returned object depends on the value of argument functionality as follows.
• "components": Same as "estimate".
• "conditionals": Same as "estimate".
• "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation.
• "gradient": Not implemented.
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all alternatives, with an extra element for the chosen alternative probability.
• "preprocess": Returns a list with pre-processed inputs, based on cnl_settings.
• "raw": Same as "prediction".
• "report": List with tree structure and choice overview.
• "shares_LL": vector/matrix/array. Returns the probability of the chosen alternative when only constants are estimated.
• "validate": Same as "estimate".
• "zero_LL": vector/matrix/array. Returns the probability of the chosen alternative when all parameters are zero.
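A sketch of a three-alternative structure where bus belongs to two nests; V_car, V_bus, V_rail, lambda_road and lambda_public are hypothetical utilities and parameters defined elsewhere in apollo_probabilities:

```
cnl_settings <- list(
  alternatives = c(car = 1, bus = 2, rail = 3),
  avail        = 1,  # universal availability
  choiceVar    = choice,
  utilities    = list(car = V_car, bus = V_bus, rail = V_rail),
  cnlNests     = list(road = lambda_road, public = lambda_public),
  # one row per nest, one column per alternative; bus is split 50/50
  cnlStructure = matrix(c(1, 0.5, 0,
                          0, 0.5, 1), nrow = 2, byrow = TRUE)
)
P[["model"]] <- apollo_cnl(cnl_settings, functionality)
```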
• "components": Same as "estimate" • "conditionals": Same as "estimate" • "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation. • "gradient": Not implemented. • "output": Same as "estimate" but also writes summary of input data to internal Apollo log. • "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all alternatives, with an extra element for the chosen alternative probability. • "preprocess": Returns a list with pre-processed inputs, based on cnl_settings. • "raw": Same as "prediction". • "report": List with tree structure and choice overview. • "shares_LL": vector/matrix/array. Returns the probability of the chosen alternative when only constants are estimated. • "validate": Same as "estimate". • "zero_LL": vector/matrix/array. Returns the probability of the chosen alternative when all parameters are zero. apollo_combineModels Combines separate model components. Description Combines model components to create likelihood for overall model. Usage apollo_combineModels( P, apollo_inputs, functionality, components = NULL, asList = TRUE ) Arguments P List of vectors, matrices or 3-dim arrays. Likelihood of the model components. apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs. functionality Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are: • "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations. • "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws. • "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "gradient": For model estimation, produces analytical gradients of the likelihood, where possible. • "output": Prepares output for post-estimation reporting. • "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws. • "preprocess": Prepares likelihood functions for use in estimation. • "raw": For debugging, produces probabilities of all alternatives and in- dividual model components at the level of an observation, at the level of individual draws. • "report": Prepares output summarising model and choiceset structure. • "shares_LL": Produces overall model likelihood with constants only. • "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "zero_LL": Produces overall model likelihood with all parameters at zero. components Character vector. Optional argument. Names of elements in P that should be multiplied to construct the whole model likelihood. If a single element is pro- vided, it is interpreted as a regular expression. Default is to include all compo- nents in P. asList Logical. Only used if functionality is "conditionals","estimate","validate","zero_LL" or "output". If TRUE, it will return a list as described in the ’Value’ section. 
apollo_combineModels  Combines separate model components

Description
Combines model components to create likelihood for overall model.

Usage
apollo_combineModels(
  P,
  apollo_inputs,
  functionality,
  components = NULL,
  asList = TRUE
)

Arguments
P: List of vectors, matrices or 3-dim arrays. Likelihood of the model components.
apollo_inputs: List grouping most common inputs. Created by function apollo_validateInputs.
functionality: Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and individual model components at the level of an observation, at the level of individual draws.
• "report": Prepares output summarising model and choiceset structure.
• "shares_LL": Produces overall model likelihood with constants only.
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "zero_LL": Produces overall model likelihood with all parameters at zero.
components: Character vector. Optional argument. Names of elements in P that should be multiplied to construct the whole model likelihood. If a single element is provided, it is interpreted as a regular expression. Default is to include all components in P.
asList: Logical. Only used if functionality is "conditionals", "estimate", "validate", "zero_LL" or "output". If TRUE, it will return a list as described in the 'Value' section. If FALSE, it will only return a vector/matrix/3-dim array of the product of likelihoods inside P. Default is TRUE.

Details
This function should be called inside apollo_probabilities after all model components have been produced. It should be called before apollo_avgInterDraws, apollo_avgIntraDraws, apollo_panelProd and apollo_prepareProb, whichever apply, except where these functions are called inside any latent class components of the overall model.

Value
Argument P with (for most functionalities) an extra element called "model", which is the product of all the other elements. Shape depends on argument functionality.
• "components": Returns P without changes.
• "conditionals": Returns P with an extra component called "model", which is the product of all other elements of P.
• "estimate": Returns P with an extra component called "model", which is the product of all other elements of P.
• "gradient": Returns P containing the gradient of the likelihood after applying the product rule across model components.
• "output": Returns P with an extra component called "model", which is the product of all other elements of P.
• "prediction": Returns P without changes.
• "preprocess": Returns P without changes.
• "raw": Returns P without changes.
• "shares_LL": Returns P with an extra component called "model", which is the product of all other elements of P.
• "validate": Returns P with an extra component called "model", which is the product of all other elements of P.
• "zero_LL": Returns P with an extra component called "model", which is the product of all other elements of P.

apollo_combineResults  Write model results to file

Description
Writes results from various models to a single csv file.

Usage
apollo_combineResults(combineResults_settings = NULL)

Arguments
combineResults_settings: List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• modelNames: Character vector. Optional names of models to combine. Omit or use an empty vector to combine results from all models in the working/output directory.
• printClassical: Boolean. TRUE for printing classical standard errors. FALSE by default.
• printPVal: Boolean. TRUE for printing p-values. FALSE by default.
• printT1: Boolean. If TRUE, t-tests for H0: apollo_beta=1 are printed. FALSE by default.
• estimateDigits: Numeric scalar. Number of decimal places to print for estimates. Default is 4.
• tDigits: Numeric scalar. Number of decimal places to print for t-ratio values. Default is 2.
• pDigits: Numeric scalar. Number of decimal places to print for p-values. Default is 2.
• sortByDate: Boolean. If TRUE, models are ordered by date. Default is TRUE.

Value
Nothing, but writes a file called 'model_comparison_[date].csv' in the working/output directory.

apollo_compareInputs  Compares the content of apollo_inputs to their counterparts in the global environment

Description
Compares the content of apollo_inputs to their counterparts in the global environment.

Usage
apollo_compareInputs(apollo_inputs)

Arguments
apollo_inputs: List grouping most common inputs. Created by function apollo_validateInputs.

Value
Logical. TRUE if the content of apollo_inputs is the same as the one in the global environment, FALSE otherwise.
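A minimal sketch of the combination step inside apollo_probabilities, following the ordering given in the Details of apollo_combineModels above. The two component names and their settings objects are illustrative assumptions.

  P <- list()
  P[["MNL"]] <- apollo_mnl(mnl_settings, functionality)
  P[["OL"]]  <- apollo_ol(ol_settings, functionality)
  # Multiply the components into P[["model"]], then take the product over
  # observations of the same individual, and prepare the final output
  P <- apollo_combineModels(P, apollo_inputs, functionality)
  P <- apollo_panelProd(P, apollo_inputs, functionality)
  P <- apollo_prepareProb(P, apollo_inputs, functionality)
  return(P)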
apollo_conditionals  Calculates conditionals

Description
Calculates posterior expected values (conditionals) of random coefficient models (continuous or discrete mixtures/latent class).

Usage
apollo_conditionals(model, apollo_probabilities, apollo_inputs)

Arguments
model: Model object. Estimated model object as returned by function apollo_estimate.
apollo_probabilities: Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs: List grouping most common inputs. Created by function apollo_validateInputs.

Details
This function is only meant for use with models using either continuous distributions or latent classes, not both at the same time.

Value
Depends on whether the model uses continuous mixtures or latent class.
• If the model contains a continuous mixture, the function returns a list of matrices. Each matrix has dimensions nIndiv x 3. One matrix per random component. Each row of each matrix contains the indivID of an individual, and the posterior mean and s.d. of this random component for this individual.
• If the model contains latent classes, the function returns a matrix with the posterior class allocation probabilities for each individual.
• If the model contains both continuous mixtures and latent classes, the function fails.

apollo_deltaMethod  Delta method for Apollo models

Description
Applies the Delta method to calculate the standard errors of transformations of parameters.

Usage
apollo_deltaMethod(model, deltaMethod_settings)

Arguments
model: Model object. Estimated model object as returned by function apollo_estimate.
deltaMethod_settings: List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• expression: Character vector. A character vector with a single or multiple arbitrary functions of the estimated parameters, as text. For example: c(VTT="b1/b2*60"). Each expression can only contain model parameters (estimated or fixed), numeric values, and operands. At least one of the parameters used needs to not have been fixed in estimation. Variables in the database cannot be included. If the user does not provide a name for an expression, then the expression itself is used in the output. If this setting is provided, then operation, parName1, parName2, multPar1 and multPar2 are ignored.
• allPairs: Logical. If set to TRUE, Delta method calculations are carried out for the ratio and difference for all pairs of parameters and returned as two separate matrices with values and t-ratios. FALSE by default.
• varcov: Character. Type of variance-covariance matrix to use in calculations. It can take values "classical", "robust" and "bootstrap". Default is "robust".
• printPVal: Logical or Scalar. TRUE or 1 for printing p-values for one-sided test, 2 for printing p-values for two-sided test, FALSE for not printing p-values. FALSE by default.
• operation: Character. Function to calculate the delta method for. See details. Not used if expression is provided.
• parName1: Character. Name of the first parameter if operation is used. See details. Not used if expression is provided.
• parName2: Character. Name of the second parameter if operation is used. See details. Not used if expression is provided. Optional depending on operation.
• multPar1: Numeric scalar. An optional value to scale parName1. Not used if expression is provided.
• multPar2: Numeric scalar. An optional value to scale parName2. Not used if expression is provided.

Details
apollo_deltaMethod can be used in two ways. The first and recommended way is to provide an element called expression inside its argument deltaMethod_settings. expression should contain the expression or expressions for which the standard error is/are to be calculated, as text. For example, to calculate the ratio between parameters b1 and b2, expression=c(vtt="b1/b2") should be used.
The second method is to provide the name of a specific operation inside deltaMethod_settings. The following operations are supported.
• sum: Calculates the s.e. of parName1 + parName2
• diff: Calculates the s.e. of parName1 - parName2 and parName2 - parName1
• prod: Calculates the s.e. of parName1*parName2
• ratio: Calculates the s.e. of parName1/parName2 and parName2/parName1
• exp: Calculates the s.e. of exp(parName1)
• logistic: If only parName1 is provided, it calculates the s.e. of exp(parName1)/(1+exp(parName1)) and 1/(1+exp(parName1)). If parName1 and parName2 are provided, it calculates exp(par_i)/(1+exp(parName1)+exp(parName2)) for i=1, 2, and 3 (with exp(par_3)=1).
• lognormal: Calculates the mean and s.d. of a lognormal distribution based on the mean (parName1) and s.d. (parName2) of the underlying normal.
By default, apollo_deltaMethod uses the robust covariance matrix. However, the user can change this through the varcov setting.

Value
Matrix containing value, s.e. and t-ratio resulting from the requested expression or operation. This is also printed to screen.
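A minimal sketch of a Delta-method call after estimation, using the recommended expression interface. The parameter names b_tt and b_cost and the value-of-travel-time transformation are illustrative assumptions; model is an object returned by apollo_estimate.

  deltaMethod_settings <- list(
    expression = c(VTT = "b_tt/b_cost*60"),  # value of time in money per hour
    varcov     = "robust"                    # the default, shown for clarity
  )
  apollo_deltaMethod(model, deltaMethod_settings)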
apollo_detach  Detaches parameters and the database

Description
Detaches variables attached by apollo_attach.

Usage
apollo_detach(apollo_beta = NA, apollo_inputs = NA)

Arguments
apollo_beta: Named numeric vector. Names and values for parameters.
apollo_inputs: List grouping most common inputs. Created by function apollo_validateInputs.

Details
This function detaches the variables attached by apollo_attach. It should be called at the end of apollo_probabilities, only if apollo_attach was called at the beginning. This can also be achieved by adding the line on.exit(apollo_detach(apollo_beta, apollo_inputs)) right after calling apollo_attach. This function can also be called without any arguments, i.e. apollo_detach().

Value
Nothing.

apollo_dft  Calculate DFT probabilities

Description
Calculate probabilities of a Decision Field Theory (DFT) model and can also perform other operations based on the value of the functionality argument.

Usage
apollo_dft(dft_settings, functionality)

Arguments
dft_settings: List of settings for the DFT model. It should contain the following elements.
• alternatives: Named numeric vector. Names of alternatives and their corresponding value in choiceVar.
• avail: Named list of numeric vectors or scalars. Availabilities of alternatives, one element per alternative. Names of elements must match those in alternatives. Values can be 0 or 1. These can be scalars or vectors (of length equal to rows in the database). A user can also specify avail=1 to indicate universal availability, or omit the setting completely.
• altStart: A named list with as many elements as alternatives. Each element can be a scalar or vector containing the starting preference value for the alternative.
• attrScalings: A named list with as many elements as attributes, or fewer. Each element is a factor that scales the attribute, and can be a scalar, a vector or a matrix/array. They do not need to add up to one for each observation. attrWeights and attrScalings are incompatible, and they should not both be defined for an attribute. Default is 1 for all attributes.
• attrValues: A named list with as many elements as alternatives. Each element is itself a named list of vectors of the alternative attributes for each observation (usually a column from the database). All alternatives must have the same attributes (can be set to zero if not relevant).
• attrWeights: A named list with as many elements as attributes, or fewer. Each element is the weight of the attribute, and can be a scalar, a vector with as many elements as observations, or a matrix/array if random. They should add up to one for each observation and draw (if present), and will be rescaled if they do not. attrWeights and attrScalings are incompatible, and they should not both be defined for an attribute. Default is 1 for all attributes.
• choiceVar: Numeric vector. Contains choices for all observations. It will usually be a column from the database. Values are defined in alternatives.
• componentName: Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed.
• procPars: A list containing the four DFT 'process parameters':
– error_sd: Numeric scalar or vector. The standard deviation of the error term in each timestep.
– timesteps: Numeric scalar or vector. Number of timesteps to consider. Should be an integer bigger than 0.
– phi1: Numeric scalar or vector. Sensitivity.
– phi2: Numeric scalar or vector. Process parameter.
• rows: Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs).
functionality: Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and individual model components at the level of an observation, at the level of individual draws.
• "report": Prepares output summarising model and choiceset structure.
• "shares_LL": Produces overall model likelihood with constants only.
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "zero_LL": Produces overall model likelihood with all parameters at zero.

Value
The returned object depends on the value of argument functionality as follows.
• "components": Same as "estimate"
• "conditionals": Same as "estimate"
• "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation.
• "gradient": Not implemented.
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all alternatives, with an extra element for the chosen alternative probability.
• "preprocess": Returns a list with pre-processed inputs, based on dft_settings.
• "raw": Same as "prediction"
• "report": Choice overview.
• "shares_LL": Not implemented. Returns a vector of NA with as many elements as observations.
• "validate": Same as "estimate"
• "zero_LL": vector/matrix/array. Returns the probability of the chosen alternative when all parameters are zero.

References
<NAME>.; <NAME>. and <NAME>. (2018) Decision field theory: Improvements to current methodology and comparisons with standard choice modelling techniques. Transportation Research 107B, 18 - 40.
<NAME>.; <NAME>. and <NAME>. (Submitted) An accumulation of preference: two alternative dynamic models for understanding transport choices.
<NAME>.; <NAME>. and <NAME>. (2001) Multialternative decision field theory: A dynamic connectionist model of decision making. Psychological Review 108, 370
apollo_diagnostics  Pre-process input for common models return

Description
Pre-process input for common models return.

Usage
apollo_diagnostics(inputs, modelType, apollo_inputs, data = TRUE, param = TRUE)

Arguments
inputs: List of settings.
modelType: Character. Type of model, e.g. "mnl", "nl", "cnl", etc.
apollo_inputs: List of main inputs to the model estimation process. See apollo_validateInputs.
data: Boolean. TRUE for printing report related to dependent and independent variables. FALSE for not printing it. Default is TRUE.
param: Boolean. TRUE for printing report related to estimated parameters (e.g. model structure). FALSE for not printing it. Default is TRUE.

Value
(invisibly) TRUE if no error happened during execution.

apollo_drugChoiceData  Simulated dataset of medication choice

Description
A simulated dataset containing 10,000 stated medication choices among four alternatives.

Usage
apollo_drugChoiceData

Format
A data.frame with 10,000 rows and 33 variables:
ID: Numeric. Identification number of the individual.
task: Numeric. Index of choice situations for each individual, going from 1 to 10.
best: Numeric. Index of alternative selected as best option.
second_pref: Numeric. Index of alternative selected as second-best option.
third_pref: Numeric. Index of alternative selected as third-best option.
worst: Numeric. Index of alternative selected as worst option.
brand_1: Character. Brand for alternative 1.
country_1: Character. Country of origin for alternative 1.
char_1: Character. Characteristics of alternative 1 (standard, fast acting, or double strength).
side_effects_1: Numeric. Chance of suffering negative side effects with alternative 1.
price_1: Numeric. Cost of alternative 1 in Pounds sterling (GBP).
brand_2: Character. Brand for alternative 2.
country_2: Character. Country of origin for alternative 2.
char_2: Character. Characteristics of alternative 2 (standard, fast acting, or double strength).
side_effects_2: Numeric. Chance of suffering negative side effects with alternative 2.
price_2: Numeric. Cost of alternative 2 in Pounds sterling (GBP).
brand_3: Character. Brand for alternative 3.
country_3: Character. Country of origin for alternative 3.
char_3: Character. Characteristics of alternative 3 (standard, fast acting, or double strength).
side_effects_3: Numeric. Chance of suffering negative side effects with alternative 3.
price_3: Numeric. Cost of alternative 3 in Pounds sterling (GBP).
brand_4: Character. Brand for alternative 4.
country_4: Character. Country of origin for alternative 4.
char_4: Character. Characteristics of alternative 4 (standard, fast acting, or double strength).
side_effects_4: Numeric. Chance of suffering negative side effects with alternative 4.
price_4: Numeric. Cost of alternative 4 in Pounds sterling (GBP).
regular_user: Numeric. 1 if the respondent is a regular user of headache medicine, 0 otherwise.
university_educated: Numeric. 1 if the respondent holds a university degree, 0 otherwise.
over_50: Numeric. 1 if the respondent is 50 years old or older, 0 otherwise.
attitude_quality: Numeric. Level of agreement from 1 (strongly disagree) to 5 (strongly agree) with the phrase 'I am concerned about the quality of drugs developed by unknown companies'.
attitude_ingredients: Numeric. Level of agreement from 1 (strongly disagree) to 5 (strongly agree) with the phrase 'I believe that ingredients are the same no matter what brand'.
attitude_patent: Numeric. Level of agreement from 1 (strongly disagree) to 5 (strongly agree) with the phrase 'The original patent holders have valuable experience with their medicines'.
attitude_dominance: Numeric. Level of agreement from 1 (strongly disagree) to 5 (strongly agree) with the phrase 'I believe the dominance of big pharmaceutical companies is unhelpful'.

Details
This dataset is to be used for discrete choice modelling. Data comes from 1,000 individuals, each with ten stated choice (SC) scenarios involving a choice among headache medication. There are 10,000 choices in total. Data is simulated. Each observation contains attributes of the alternatives, characteristics of the respondent, and their answers to four attitudinal questions. All four alternatives are always available for all individuals. Alternatives 1 and 2 are branded, while alternatives 3 and 4 are generic. Respondents provide a full ranking of alternatives for each choice task (i.e. observation).

Source
http://www.apollochoicemodelling.com/
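A minimal sketch of loading and inspecting the dataset; the subset of columns shown is an arbitrary illustration.

  library(apollo)
  data("apollo_drugChoiceData", package = "apollo")
  database <- apollo_drugChoiceData
  head(database[, c("ID", "task", "best", "brand_1", "price_1")])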
apollo_dVdB  Calculates gradients of utility functions

Description
Calculates gradients (derivatives) of utility functions.

Usage
apollo_dVdB(apollo_beta, apollo_inputs, V)

Arguments
apollo_beta: Named numeric vector of parameters.
apollo_inputs: List grouping most common inputs. Created by function apollo_validateInputs.
V: List of functions.

Value
Named list. Each element is a function that returns a list, where each element contains the partial derivatives of the elements of V.

apollo_dVdB2  Calculates gradients of utility functions

Description
Calculates gradients (derivatives) of utility functions.

Usage
apollo_dVdB2(apollo_beta, apollo_inputs, V)

Arguments
apollo_beta: Named numeric vector of parameters.
apollo_inputs: List grouping most common inputs. Created by function apollo_validateInputs.
V: List of functions.

Value
Named list. Each element is itself a list of functions: the partial derivatives of the elements of V.

apollo_el  Calculates Exploded Logit probabilities

Description
Calculates the probabilities of an Exploded Logit model and can also perform other operations based on the value of the functionality argument.

Usage
apollo_el(el_settings, functionality)

Arguments
el_settings: List of inputs of the Exploded Logit model. It should contain the following.
• alternatives: Named numeric vector. Names of alternatives and their corresponding value in choiceVar.
• avail: Named list of numeric vectors or scalars. Availabilities of alternatives, one element per alternative. Names of elements must match those in alternatives. Values can be 0 or 1. These can be scalars or vectors (of length equal to rows in the database). A user can also specify avail=1 to indicate universal availability, or omit the setting completely.
• choiceVars: List of numeric vectors. Contain choices for each position of the ranking. The list must be ordered with the best choice first, second best second, etc. It will usually be a list of columns from the database. Use value -1 if a stage does not apply for a given observation (e.g. when some individuals have shorter rankings).
• componentName: Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed.
• utilities: Named list of deterministic utilities. Utilities of the alternatives. Names of elements must match those in alternatives.
• rows: Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs).
• scales: List of vectors. Scale factors of each Logit model. At least one element should be normalized to 1. If omitted, scale=1 for all positions is assumed.
functionality: Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and individual model components at the level of an observation, at the level of individual draws.
• "report": Prepares output summarising model and choiceset structure.
• "shares_LL": Produces overall model likelihood with constants only.
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "zero_LL": Produces overall model likelihood with all parameters at zero.

Details
The function calculates the probability of a ranking as a product of Multinomial Logit models with gradually reducing availability, where scale differences can be allowed for.

Value
The returned object depends on the value of argument functionality as follows.
• "components": Same as "estimate"
• "conditionals": Same as "estimate"
• "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation.
• "gradient": List containing the likelihood and gradient of the model component.
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "prediction": Not applicable (NA).
• "preprocess": Returns a list with pre-processed inputs, based on el_settings.
• "raw": Same as "estimate"
• "report": Choice overview across stages.
• "shares_LL": Not implemented. Returns a vector of NA with as many elements as observations.
• "validate": Same as "estimate"
• "zero_LL": vector/matrix/array. Returns the probability of the chosen alternative when all parameters are zero.
apollo_emdc  MDC model with exogenous budget

Description
Calculates the likelihood function of the MDC model with exogenous budget. Can also predict and validate inputs.

Usage
apollo_emdc(emdc_settings, functionality = "estimate")

Arguments
emdc_settings: List of settings for the model. It includes the following.
• continuousChoice: Named list of numeric vectors. Amount consumed of each inside good. Outside good must not be included. Can also be called "X".
• budget: Optional numeric vector. Budget. Must be bigger than the expenditure on all inside goods. Can also be called "B".
• avail: Named list of numeric vectors. Availability of each product. Can also be called "A".
• utilityOutside: Numeric vector (or matrix or array). Shadow price of the budget. Must be normalised to 0 for at least one individual. Default is 0 for every observation. Can also be called "V0".
• utilities: Named list of numeric vectors (or matrices or arrays). Base utility of each product. Can also be called "V".
• gamma: Named list of numeric vectors. Satiation parameter of each product.
• delta: Lower triangular numeric matrix, or list of lists. Complementarity/substitution parameter.
• cost: Named list of numeric vectors. Price of each product.
• sigma: Numeric vector or scalar. Standard deviation of the error term. Default is one.
• nRep: Scalar positive integer. Number of repetitions used when predicting.
• tol: Positive scalar. Tolerance of the prediction algorithm.
• timeLimit: Positive scalar. Maximum amount of seconds the optimiser can spend calculating a prediction before setting it to NA.
functionality: Character. Either "validate", "zero_LL", "estimate", "conditionals", "raw", "output" or "prediction".

Details
This model extends the traditional multiple discrete-continuous (MDC) framework by (i) making the marginal utility of the outside good deterministic, and (ii) including complementarity and substitution in the model formulation. See the following working paper for more details: <NAME>. & <NAME>. (2022) Extending the Multiple Discrete Continuous (MDC) modelling framework to consider complementarity, substitution, and an unobserved budget. Transportation Research 161B, 13 - 35. https://doi.org/10.1016/j.trb.2022.04.005

Value
The returned object depends on the value of argument functionality as follows.
• "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation.
• "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all alternatives, with an extra element for the probability of the chosen alternative.
• "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs.
• "zero_LL": vector/matrix/array. Returns the probability of the chosen alternative when all parameters are zero.
• "conditionals": Same as "estimate"
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "raw": Same as "prediction"

apollo_emdc1  MDC model with exogenous budget

Description
Calculates the likelihood function of the MDC model with exogenous budget. Can also predict and validate inputs.

Usage
apollo_emdc1(emdc_settings, functionality = "estimate")

Arguments
emdc_settings: List of settings for the model. It includes the following.
• continuousChoice: Named list of numeric vectors. Amount consumed of each inside good. Outside good must not be included. Can also be called "X".
• budget: Numeric vector. Budget. Must be bigger than the expenditure on all inside goods. Can also be called "B".
• avail: Named list of numeric vectors. Availability of each product. Can also be called "A".
• utilityOutside: Numeric vector (or matrix or array). Shadow price of the budget. Must be normalised to 0 for at least one individual. Default is 0 for every observation. Can also be called "V0".
• utilities: Named list of numeric vectors (or matrices or arrays). Base utility of each product. Can also be called "V".
• gamma: Named list of numeric vectors. Satiation parameter of each product.
• delta: Lower triangular numeric matrix, or list of lists. Complementarity/substitution parameter.
• cost: Named list of numeric vectors. Price of each product.
• sigma: Numeric vector or scalar. Standard deviation of the error term. Default is one.
• nRep: Scalar positive integer. Number of repetitions used when predicting.
• tol: Positive scalar. Tolerance of the prediction algorithm.
• timeLimit: Positive scalar. Maximum amount of seconds the optimiser can spend calculating a prediction before setting it to NA.
functionality: Character. Either "validate", "zero_LL", "estimate", "conditionals", "raw", "output" or "prediction".

Details
This model extends the traditional multiple discrete-continuous (MDC) framework by (i) making the marginal utility of the outside good deterministic, and (ii) including complementarity and substitution in the model formulation. See the following working paper for more details: <NAME>. & <NAME>. (2022) Extending the Multiple Discrete Continuous (MDC) modelling framework to consider complementarity, substitution, and an unobserved budget. Transportation Research 161B, 13 - 35. https://doi.org/10.1016/j.trb.2022.04.005

Value
The returned object depends on the value of argument functionality as follows.
• "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation.
• "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all alternatives, with an extra element for the probability of the chosen alternative.
• "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs.
• "zero_LL": vector/matrix/array. Returns the probability of the chosen alternative when all parameters are zero.
• "conditionals": Same as "estimate"
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "raw": Same as "prediction"

apollo_emdc2  Extended MDC

Description
Calculates the likelihood function of the extended MDC model. Can also predict and validate inputs.

Usage
apollo_emdc2(emdc_settings, functionality = "estimate")

Arguments
emdc_settings: List of settings for the model. It includes the following.
• continuousChoice: Named list of numeric vectors. Amount consumed of each inside good. Outside good must not be included. Can also be called "X".
• avail: Named list of numeric vectors. Availability of each product. Can also be called "A".
• utilityOutside: Numeric vector (or matrix or array). Shadow price of the budget. Must be normalised to 0 for at least one individual. Default is 0 for every observation. Can also be called "V0".
• utilities: Named list of numeric vectors (or matrices or arrays). Base utility of each product. Can also be called "V".
• gamma: Named list of numeric vectors. Satiation parameter of each product.
• sigma: Numeric scalar. Scale parameter.
• delta: Lower triangular numeric matrix, or list of lists. Complementarity/substitution parameter.
• cost: Named list of numeric vectors. Price of each product.
• nRep: Scalar positive integer. Number of repetitions used when predicting.
• nIter: Vector of two positive integers. Number of maximum iterations used during prediction, for the upper and lower iterative levels.
• tolerance: Positive scalar. Tolerance of the prediction algorithm.
• rawPrediction: Scalar logical. When functionality is equal to "prediction", it returns the full set of simulations. Default is FALSE.
functionality: Character. Either "validate", "zero_LL", "estimate", "conditionals", "raw", "output" or "prediction".

Details
This model extends the traditional multiple discrete-continuous (MDC) framework by (i) dropping the need to define a budget, (ii) making the marginal utility of the outside good deterministic, and (iii) including complementarity and substitution in the model formulation. See the following working paper for more details: <NAME>. & <NAME>. (Working Paper) Some adaptations of Multiple Discrete-Continuous Extreme Value (MDCEV) models for a computationally tractable treatment of complementarity and substitution effects, and reduced influence of budget assumptions. Available at: http://stephanehess.me.uk/publications.html

Value
The returned object depends on the value of argument functionality as follows.
• "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation.
• "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all alternatives, with an extra element for the probability of the chosen alternative.
• "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs.
• "zero_LL": vector/matrix/array. Returns the probability of the chosen alternative when all parameters are zero.
• "conditionals": Same as "estimate"
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "raw": Same as "prediction"
apollo_estimate  Estimates model

Description
Estimates a model using the likelihood function defined by apollo_probabilities.

Usage
apollo_estimate(
  apollo_beta,
  apollo_fixed,
  apollo_probabilities,
  apollo_inputs,
  estimate_settings = NA
)

Arguments
apollo_beta: Named numeric vector. Names and values for parameters.
apollo_fixed: Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation.
apollo_probabilities: Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs: List grouping most common inputs. Created by function apollo_validateInputs.
estimate_settings: List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• bgw_settings: List. Additional arguments to the BGW optimisation method. See bgw_mle for more details.
• bootstrapSE: Numeric. Number of bootstrap samples to calculate standard errors. Default is 0, meaning no bootstrap s.e. will be calculated. Number must be zero or a positive integer. Only used if apollo_control$estMethod!="HB".
• bootstrapSeed: Numeric scalar (integer). Random number generator seed to generate the bootstrap samples. Only used if bootstrapSE>0. Default is 24.
• constraints: Character vector. Linear constraints on parameters to estimate. For example c('b1>0', 'b1 + 2*b2>1'). Only >, < and = can be used. Inequalities cannot be mixed with equality constraints, e.g. c(b1-b2=0, b2>0) will fail. All parameter names must be on the left side. Fixed parameters cannot go into constraints. Alternatively, constraints can be defined as in maxLik. Constraints can only be used with maximum likelihood estimation and the BFGS routine in particular.
• estimationRoutine: Character. Estimation method. Can take values "bfgs", "bgw", "bhhh", or "nr". Used only if apollo_control$HB is FALSE. Default is "bgw".
• hessianRoutine: Character. Name of routine used to calculate the Hessian of the log-likelihood function after estimation. Valid values are "analytic" (default), "numDeriv" (to use the numeric routine in package numDeriv), "maxLik" (to use the numeric routine in package maxLik), and "none" to avoid calculating the Hessian and the covariance matrix. Only used if apollo_control$HB=FALSE.
• maxIterations: Numeric. Maximum number of iterations of the estimation routine before stopping. Used only if apollo_control$HB is FALSE. Default is 200.
• maxLik_settings: List. Additional settings for maxLik. See argument control in maxBFGS, maxBHHH and maxNM for more details. Only used for maximum likelihood estimation.
• numDeriv_method: Character. Method used for numerical differentiation when calculating the covariance matrix. Can be "Richardson" or "simple". Only used if analytic gradients are available. See argument method in grad for more details.
• numDeriv_settings: List. Additional arguments to the method used by numDeriv to calculate the Hessian. See argument method.args in grad for more details.
• printLevel: Higher values render more verbose outputs. Can take values 0, 1, 2 or 3. Ignored if apollo_control$HB is TRUE. Default is 3.
• scaleAfterConvergence: Logical. Used to increase numerical precision of convergence. If TRUE, parameters are scaled to 1 after convergence, and the estimation is repeated from these new starting values. Results are reported scaled back, so it is a transparent process for the user. Default is TRUE.
• scaleHessian: Logical. If TRUE, parameters are scaled to 1 for Hessian estimation. Default is TRUE.
• scaling: Named vector. Names of elements should match those in apollo_beta. Optional scaling for parameters. If provided, for each parameter i, (apollo_beta[i]/scaling[i]) is optimised, but scaling[i]*(apollo_beta[i]/scaling[i]) is used during estimation. For example, if parameter b3=10, while b1 and b2 are close to 1, then setting scaling = c(b3=10) can help estimation, especially the calculation of the Hessian. Reports will still be based on the non-scaled parameters.
• silent: Logical. If TRUE, no information is printed to the console during estimation. Default is FALSE.
• validateGrad: Logical. If TRUE, the analytical gradient (if used) is compared to the numerical one. Default is TRUE.
• writeIter: Logical. Writes value of the parameters in each iteration to a csv file. Works only if estimation_routine=="bfgs"|"bgw". Default is TRUE.

Details
This is the main function of the Apollo package. The estimation process begins by running a number of checks on the apollo_probabilities function provided by the user. If all checks are passed, estimation begins. There is no limit to estimation time other than reaching the maximum number of iterations. If Bayesian estimation is used, estimation will finish once the predefined number of iterations are completed. By default, this function writes the estimated parameter values in each iteration to a file in the working/output directory. Writing can be turned off by setting estimate_settings$writeIter to FALSE. By default, final results are not written into a file nor printed to the console, so users must make sure to call function apollo_modelOutput and/or apollo_saveOutput afterwards. Users are strongly encouraged to visit http://www.apollochoicemodelling.com/ to download examples on how to use the Apollo package. The webpage also provides a detailed manual for the package, as well as a user-group to get further help.

Value
model object
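A minimal end-to-end estimation sketch for a binary MNL. Every name below (the model name, the data file, columns tt_1/tt_2/choice, and the parameters) is an illustrative assumption; the overall structure follows the workflow described in the Details above.

  library(apollo)
  apollo_initialise()
  apollo_control <- list(modelName = "mnl_sketch", modelDescr = "Binary MNL",
                         indivID = "ID")
  database <- read.csv("data.csv")           # assumed to exist
  apollo_beta  <- c(asc_2 = 0, b_tt = 0)
  apollo_fixed <- c()
  apollo_inputs <- apollo_validateInputs()
  apollo_probabilities <- function(apollo_beta, apollo_inputs,
                                   functionality = "estimate"){
    apollo_attach(apollo_beta, apollo_inputs)
    on.exit(apollo_detach(apollo_beta, apollo_inputs))
    P <- list()
    V <- list(alt1 = b_tt*tt_1, alt2 = asc_2 + b_tt*tt_2)
    mnl_settings <- list(alternatives = c(alt1 = 1, alt2 = 2), avail = 1,
                         choiceVar = choice, utilities = V)
    P[["model"]] <- apollo_mnl(mnl_settings, functionality)
    P <- apollo_panelProd(P, apollo_inputs, functionality)
    P <- apollo_prepareProb(P, apollo_inputs, functionality)
    return(P)
  }
  model <- apollo_estimate(apollo_beta, apollo_fixed,
                           apollo_probabilities, apollo_inputs)
  apollo_modelOutput(model)   # results are not printed automatically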
apollo_estimateHB  Estimates model using Bayesian estimation

Description
Estimates a model using Bayesian estimation on the likelihood function defined by apollo_probabilities.

Usage
apollo_estimateHB(
  apollo_beta,
  apollo_fixed,
  apollo_probabilities,
  apollo_inputs,
  estimate_settings = NA
)

Arguments
apollo_beta: Named numeric vector. Names and values for parameters.
apollo_fixed: Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation.
apollo_probabilities: Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs: List grouping most common inputs. Created by function apollo_validateInputs.
estimate_settings: List. Options controlling the estimation process, as used in apollo_estimate.

Details
This is a sub-function of apollo_estimate which is called when using Bayesian estimation.

Value
model object

apollo_expandLoop  Expands loops in a function or expression

Description
Expands loops replacing the index by its value. It also evaluates paste and paste0, and removes get.

Usage
apollo_expandLoop(f, apollo_inputs)

Arguments
f: function (usually apollo_probabilities) inside which the name of the components are inserted.
apollo_inputs: List grouping most common inputs. Created by function apollo_validateInputs.

Details
For example, the expression
  for(j in 1:3) V[[paste0('alt',j)]] = b1*get(paste0('x',j)) + b2*X[,j]
would be expanded into:
  V[["alt1"]] = b1*x1 + b2*X[,1]
  V[["alt2"]] = b1*x2 + b2*X[,2]
  V[["alt3"]] = b1*x3 + b2*X[,3]

Value
A function or an expression (same type as input f).

apollo_firstRow  Keeps only the first row for each individual

Description
Given a multi-row input, keeps only the first row for each individual.

Usage
apollo_firstRow(P, apollo_inputs)

Arguments
P: List of vectors, matrices or 3-dim arrays. Likelihood of the model components (or other object).
apollo_inputs: List grouping most common inputs. Created by function apollo_validateInputs.

Details
This is a function to keep only the first row of an object per individual. It can handle multiple types of components, including scalars, vectors and three-dimensional arrays (cubes). The argument database MUST contain a column called 'apollo_sequence', which is created by apollo_validateData.

Value
If P is a list, then it returns a list where each element has only the first row of each individual. If P is a single element, then it returns a single element with only the first row of each individual. The size of the element is changed only in the first dimension. If input is a scalar, then it returns a vector with the element repeated as many times as individuals in the database. If the element is a vector, its length will be changed to the number of individuals. If the element is a matrix, then its first dimension will be changed to the number of individuals, while keeping the size of the second dimension. If the element is a cube, then only the first dimension's length is changed, preserving the others.

apollo_fitsTest  Compares log-likelihood of model across categories

Description
Given the estimates of a model, it compares the log-likelihood at the observation level across categories of observations.

Usage
apollo_fitsTest(model, apollo_probabilities, apollo_inputs, fitsTest_settings)

Arguments
model: Model object. Estimated model object as returned by function apollo_estimate.
apollo_probabilities: Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs: List grouping most common inputs. Created by function apollo_validateInputs.
fitsTest_settings: List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• subsamples: Named list of boolean vectors. Each element of the list defines whether a given observation belongs to a given subsample (e.g. by sociodemographics).

Details
Prints a table comparing the average log-likelihood at the observation level for each category.

Value
Matrix with average log-likelihood at observation level per category (invisibly).
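A minimal sketch of a fits test across two subsamples. The column "female" is an illustrative assumption about the database; model is an object returned by apollo_estimate.

  fitsTest_settings <- list(
    subsamples = list(female = database$female == 1,
                      male   = database$female == 0)
  )
  apollo_fitsTest(model, apollo_probabilities, apollo_inputs,
                  fitsTest_settings)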
apollo_fmnl  Calculates Fractional Multinomial Logit probabilities

Description
Calculates the probabilities of a Fractional Multinomial Logit model and can also perform other operations based on the value of the functionality argument.

Usage
apollo_fmnl(fmnl_settings, functionality)

Arguments
fmnl_settings: List of inputs of the FMNL model. It should contain the following.
• alternatives: Character vector. Names of alternatives, elements must match the names in list 'utilities'.
• avail: Named list of numeric vectors or scalars. Availabilities of alternatives, one element per alternative. Names of elements must match those in alternatives. Values can be 0 or 1. These can be scalars or vectors (of length equal to rows in the database). A user can also specify avail=1 to indicate universal availability, or omit the setting completely.
• choiceShares: Named list of numeric vectors. Share allocated to each alternative. One element per alternative, as long as the number of observations, or a scalar. Names must match those in alternatives.
• componentName: Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed.
• rows: Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs).
• utilities: Named list of deterministic utilities. Utilities of the alternatives. Names of elements must match those in alternatives.
functionality: Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and individual model components at the level of an observation, at the level of individual draws.
• "report": Prepares output summarising model and choiceset structure.
• "shares_LL": Produces overall model likelihood with constants only.
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "zero_LL": Produces overall model likelihood with all parameters at zero.

Value
The returned object depends on the value of argument functionality as follows.
• "components": Same as "estimate"
• "conditionals": Same as "estimate"
• "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation.
• "gradient": List containing the likelihood and gradient of the model component.
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all alternatives, with an extra element for the probability of the chosen alternative.
• "preprocess": Returns a list with pre-processed inputs, based on fmnl_settings.
• "raw": Same as "prediction"
• "report": Overview of dependent variable.
• "shares_LL": vector/matrix/array. Returns the probability of the chosen alternative when only constants are estimated.
• "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs.
• "zero_LL": vector/matrix/array. Returns the probability of the chosen alternative when all parameters are zero.
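A minimal sketch of an FMNL component for shares of time spent on three activities, inside apollo_probabilities. The share columns (share_work, ...) and utility definitions are illustrative assumptions; shares are expected to sum to one in each row.

  fmnl_settings <- list(
    alternatives = c("work", "leisure", "travel"),
    choiceShares = list(work = share_work, leisure = share_leisure,
                        travel = share_travel),
    utilities    = list(work = V_work, leisure = V_leisure, travel = 0)
  )
  P[["model"]] <- apollo_fmnl(fmnl_settings, functionality)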
apollo_initialise  Prepares environment

Description
Prepares environment (the global environment if called by the user) for model definition and estimation.

Usage
apollo_initialise()

Details
This function detaches variables and makes sure that output is directed to console. It does not delete variables from the working environment.

Value
Nothing.

apollo_insertComponentName  Adds componentName2 to model calls

Description
Adds componentName2 to model calls.

Usage
apollo_insertComponentName(e)

Arguments
e: An expression or a function. It will usually be apollo_probabilities.

Value
The original argument 'e' but modified to incorporate a new setting called 'componentName2' to every call to apollo_<model> (e.g. apollo_mnl, apollo_nl, etc.).

apollo_insertFunc  Modifies function to make it compatible with analytic gradients

Description
Takes a likelihood function and inserts function() before key elements to allow for analytic gradient calculation.

Usage
apollo_insertFunc(f, like = TRUE, randCoeff = FALSE, lcPars = FALSE)

Arguments
f: Function. Expressions inside it will be turned into functions. Usually apollo_probabilities or apollo_randCoeff.
like: Logical. Must be TRUE if f is apollo_probabilities. FALSE otherwise.
randCoeff: Logical. Must be TRUE if f is apollo_randCoeff. FALSE otherwise.
lcPars: Logical. Must be TRUE if f is apollo_lcPars. FALSE otherwise.

Details
It modifies the definition of the following models.
• apollo_mnl: Turns all elements inside mnl_settings$V into functions.
• apollo_ol: Turns ol_settings$V and all elements inside ol_settings$tau into functions.
• apollo_op: Turns op_settings$V and all elements inside op_settings$tau into functions.
• apollo_normalDensity: Turns normalDensity_settings$xNormal, normalDensity_settings$mu and normalDensity_settings$sigma into functions.
It can only track a maximum of 3 levels of depth in definitions. For example:
  V <- list()
  V[["A"]] <- b1*x1A + b2*x2A
  V[["B"]] <- b1*x1B + b2*x2B
  mnl_settings1 <- list(alternatives=c("A", "B"), V = V, choiceVar = Y,
                        avail = 1, componentName = "MNL1")
  P[["MNL1"]] <- apollo_mnl(mnl_settings1, functionality)
But it may not be able to deal with the following:
  VA <- b1*x1A + b2*x2A
  V <- list()
  V[["A"]] <- VA
  V[["B"]] <- b1*x1B + b2*x2B
  mnl_settings1 <- list(alternatives=c("A", "B"), V = V, choiceVar = Y,
                        avail = 1, componentName = "MNL1")
  P[["MNL1"]] <- apollo_mnl(mnl_settings1, functionality)
But that might be enough given how apollo_dVdB works.

Value
Function f but with relevant expressions turned into function definitions.

apollo_insertOLList  Replaces tau=c(...) by tau=list(...) in calls to apollo_ol

Description
Takes a function, looks for calls to apollo_ol, identifies the corresponding ol_settings, then goes inside the definition of ol_settings and replaces tau=c(...) for tau=list(...).

Usage
apollo_insertOLList(f)

Arguments
f: Function. Usually apollo_probabilities, apollo_randCoeff, or apollo_lcPars.

Details
This only goes one level deep in definitions. For example, it will work correctly in the following cases:
  ol_settings = list(outcomeOrdered = y1, V = b1*x1, tau = c(tau11, tau12))
  P[["OL1"]] = apollo_ol(ol_settings, functionality)
  P[["OL2"]] = apollo_ol(list(outcomeOrdered=y2, V=b2*x2, tau=c(tau21, tau22)), functionality)
But it will not work on the following cases:
  Tau = c(tau1, tau2, tau3)
  ol_settings = list(outcomeOrdered = y2, V = b2*x2, tau = Tau)
  P[["OL1"]] = apollo_ol(ol_settings, functionality)
  P[["OL2"]] = apollo_ol(list(outcomeOrdered=y1, V=b1*x1, tau=Tau), functionality)
This function is called by apollo_modifyUserDefFunc to allow for analytical gradients when using apollo_ol.

Value
Function f with tau=c(...) replaced by tau=list(...).

apollo_insertRows  Inserts rows

Description
Given a numeric object (scalar, vector, matrix or 3-dim array) inserts rows in the specified places.

Usage
apollo_insertRows(v, r, val)

Arguments
v: Numeric scalar, vector, matrix or 3-dim array.
r: Boolean vector. TRUE to take the next row from v, FALSE to insert a new row with value val.
val: Numeric scalar. Value that will fill new rows.

Details
In general, r should be longer than the number of rows in v, and sum(r)=nrow(v). If not, then a new object with as many rows as r will be returned. Old rows will be taken from v from the top down.

Value
The same argument v but expanded to length(r) rows, with the rows where r==FALSE filled with value val.

apollo_insertRRMQuotes  Introduces quotes into rrm_settings

Description
Takes a function, looks for the definition of relevant parts of rrm_settings, and introduces quotes on them. This is to facilitate their processing by apollo_rrm under functionality="preprocessing".

Usage
apollo_insertRRMQuotes(f)

Arguments
f: Function. Usually apollo_probabilities.

Value
Function f with relevant expressions turned into character.

apollo_insertScaling  Scales variables inside a function

Description
It changes the syntax of the function by replacing variable names for their scaled form, e.g. x --> x*apollo_inputs$apollo_scale[["x"]]. In assignments, it only scales the right side of the assignment.

Usage
apollo_insertScaling(e, sca)

Arguments
e: Function, expression, call or symbol to alter.
sca: Named numeric vector with the scales. The names in these vectors determine which variables should be scaled.

Value
A function, expression, call or symbol with the corresponding variables scaled.
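A minimal sketch of the row utilities: apollo_insertRows above, and apollo_keepRows, documented in the next entry, which reverses it. The numeric values are arbitrary.

  library(apollo)
  x <- c(10, 20, 30)
  r <- c(TRUE, FALSE, TRUE, TRUE, FALSE)   # length(r)=5, sum(r)=length(x)
  apollo_insertRows(x, r, 0)               # 10  0 20 30  0
  apollo_keepRows(apollo_insertRows(x, r, 0), r)  # back to 10 20 30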
apollo_keepRows    Keeps only some rows

Description

Given a numeric object (scalar, vector, matrix or 3-dim array) keeps only the specified rows.

Usage

apollo_keepRows(v, r)

Arguments

v    Numeric scalar, vector, matrix or 3-dim array.
r    Boolean vector. As many elements as rows in v. TRUE for keeping the row, FALSE to drop it.

Value

The same argument v but with the rows where r==FALSE removed.

apollo_lc    Calculates the likelihood of a latent class model

Description

Given within-class probabilities and class allocation probabilities, calculates the probabilities of a latent class model and can also perform other operations based on the value of the functionality argument.

Usage

apollo_lc(lc_settings, apollo_inputs, functionality)

Arguments

lc_settings    List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• inClassProb: List of probabilities. Conditional likelihood for each class. One element per class, in the same order as classProb.
• classProb: List of probabilities. Allocation probability for each class. One element per class, in the same order as inClassProb.
• componentName: Character. Name given to model component (optional).
apollo_inputs    List grouping most common inputs. Created by function apollo_validateInputs.
functionality    Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and individual model components at the level of an observation, at the level of individual draws.
• "report": Prepares output summarising model and choiceset structure.
• "shares_LL": Produces overall model likelihood with constants only.
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "zero_LL": Produces overall model likelihood with all parameters at zero.

Value

The returned object depends on the value of argument functionality as follows.
• "components": Returns nothing.
• "conditionals": Same as "estimate"
• "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation.
• "gradient": List containing the likelihood and gradient of the model component.
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all model components, for each class.
• "preprocess": Returns a list with pre-processed inputs, based on lc_settings.
• "raw": Same as "prediction"
• "report": Class allocation overview.
• "shares_LL": Same as "estimate"
• "validate": Same as "estimate", but also runs a set of tests on the given arguments.
• "zero_LL": Same as "estimate"
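For instance, a two-class structure could be assembled as follows inside apollo_probabilities (a hedged sketch; PClass1, PClass2, pi_1 and pi_2 are illustrative objects that would be created earlier in the function, e.g. by two apollo_mnl components and by apollo_lcPars):

lc_settings = list(inClassProb   = list(PClass1, PClass2),
                   classProb     = list(pi_1, pi_2),
                   componentName = "LC")
P[["model"]] = apollo_lc(lc_settings, apollo_inputs, functionality)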
• "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all models components, for each class. • "preprocess": Returns a list with pre-processed inputs, based on lc_settings. • "raw": Same as "prediction" • "report": Class allocation overview. • "shares_LL": Same as "estimate" • "validate": Same as "estimate", but also runs a set of tests on the given arguments. • "zero_LL": Same as "estimate" apollo_lcConditionals Calculates conditionals for latent class models. Description Calculates posterior expected values (conditionals) of class allocation probabilities for each indi- vidual. Usage apollo_lcConditionals(model, apollo_probabilities, apollo_inputs) Arguments model Model object. Estimated model object as returned by function apollo_estimate. apollo_probabilities Function. Returns probabilities of the model to be estimated. Must receive three arguments: • apollo_beta: Named numeric vector. Names and values of model param- eters. • apollo_inputs: List containing options of the model. See apollo_validateInputs. • functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL". apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs. Details This function can only be used with latent class models without continuous heterogeneity. Value A matrix with the posterior class allocation probabilities for each individual. apollo_lcEM Uses EM for latent class model Description Uses the EM algorithm for estimating a latent class model. Usage apollo_lcEM( apollo_beta, apollo_fixed, apollo_probabilities, apollo_inputs, lcEM_settings = NA, estimate_settings = NA ) Arguments apollo_beta Named numeric vector. Names and values for parameters. apollo_fixed Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation. apollo_probabilities Function. Returns probabilities of the model to be estimated. Must receive three arguments: • apollo_beta: Named numeric vector. Names and values of model param- eters. • apollo_inputs: List containing options of the model. See apollo_validateInputs. • functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL". apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs. lcEM_settings List. Options controlling the EM process. • EMmaxIterations: Numeric. Maximum number of iterations of the EM algorithm before stopping. Default is 100. • postEM: Numeric scalar. Determines the tasks performed by this function after the EM algorithm has converged. Can take values 0, 1 or 2 only. If value is 0, only the EM algorithm will be performed, and the results will be a model object without a covariance matrix (i.e. estimates only). If value is 1, after the EM algorithm, the covariance matrix of the model will be calculated as well, and the result will be a model object with a covariance matrix. If value is 2, after the EM algorithm, the estimated parameter values will be used as starting value for a maximum likelihood estimation process, which will render a model object with a covariance matrix. Performing maximum likelihood estimation after the EM algorithm is useful, as there may be room for further improvement. Default is 2. • silent: Boolean. 
• stoppingCriterion: Numeric. Convergence criterion. The EM process will stop when improvements in the log-likelihood fall below this value. Default is 10^-5.
estimate_settings    List. Options controlling the estimation process within each EM iteration. See apollo_estimate for details.

Details

This function uses the EM algorithm for estimating a latent class model. It is only suitable for models without continuous mixing. All parameters need to vary across classes and need to be included in the apollo_lcPars function which is used by apollo_lcEM.

Value

model object

apollo_lcUnconditionals    Returns unconditionals for a latent class model

Description

Returns values for random parameters and class allocation probabilities in a latent class model.

Usage

apollo_lcUnconditionals(model, apollo_probabilities, apollo_inputs)

Arguments

model    Model object. Estimated model object as returned by function apollo_estimate.
apollo_probabilities    Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs    List grouping most common inputs. Created by function apollo_validateInputs.

Value

List of objects, one per random component and one for the class allocation probabilities.

apollo_llCalc    Calculates log-likelihood of all model components

Description

Calculates the log-likelihood of each model component as well as the whole model.

Usage

apollo_llCalc(apollo_beta, apollo_probabilities, apollo_inputs, silent = FALSE)

Arguments

apollo_beta    Named numeric vector. Names and values for parameters.
apollo_probabilities    Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs    List grouping most common inputs. Created by function apollo_validateInputs.
silent    Boolean. If TRUE, no information is printed to the console by the function. Default is FALSE.

Details

This function calls apollo_probabilities with functionality="output". It then reorders the list of likelihoods so that "model" goes first.

Value

A list of vectors. Each vector corresponds to the log-likelihood of the whole model (first element) or a model component.

apollo_loadModel    Loads model from file

Description

Loads a previously estimated model object from a file.

Usage

apollo_loadModel(modelName)

Arguments

modelName    Character. Name of the model to load.

Details

This function looks for a file named modelName_model.rds in the working or output directory, loads the object contained in it, and returns it.

Value

A model object.
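A typical usage sketch (the model name is illustrative; only the name used when the model was estimated is needed):

### Loads "MNL_example_model.rds" from the working or output directory
model = apollo_loadModel("MNL_example")
apollo_modelOutput(model)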
apollo_longToWide    Converts data from long to wide format

Description

Converts choice data from long to wide format, with one row per observation as opposed to one row per alternative/observation.

Usage

apollo_longToWide(longData, longToWide_settings)

Arguments

longData    data.frame. Data in long format.
longToWide_settings    List. Contains settings for this function. User input is required for all settings.
• alternative_column: Character. Name of column in long data that contains the names of the alternatives (either numeric or character).
• alternative_specific_attributes: Character vector. Names of columns in long data with attributes that vary across alternatives within an observation.
• choice_column: Character. Name of column in long data that contains the choice.
• ID_column: Character. Name of column in long data that contains the ID of individuals.
• observation_column: Character. Name of column in long data that contains the observation index.

Value

Silently returns a data.frame with the wide format version of the data. An overview of the data is printed to screen.

apollo_lrTest    Likelihood ratio test

Description

Calculates the likelihood ratio test value between two models and reports the corresponding p-value.

Usage

apollo_lrTest(model1, model2)

Arguments

model1    Either a character variable with the name of a previously estimated model, or an estimated model in memory, as returned by apollo_estimate.
model2    Either a character variable with the name of a previously estimated model, or an estimated model in memory, as returned by apollo_estimate.

Details

The two models need to have been estimated on the same data, and one model needs to be nested within the other model.

Value

LR-test p-value (invisibly)
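A usage sketch (model names are illustrative; the two models must be nested and estimated on the same data):

### By model name, reading outputs from disk
apollo_lrTest("MNL_base", "MNL_full")

### Or with estimated model objects in memory
apollo_lrTest(model_base, model_full)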
apollo_makeCluster    Creates cluster for estimation

Description

Splits data, creates cluster and loads different pieces of the database on each worker.

Usage

apollo_makeCluster(
  apollo_probabilities,
  apollo_inputs,
  silent = FALSE,
  cleanMemory = FALSE
)

Arguments

apollo_probabilities    Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs    List grouping most common inputs. Created by function apollo_validateInputs.
silent    Boolean. If TRUE, no messages are printed to the terminal. FALSE by default. It overrides apollo_inputs$silent.
cleanMemory    Boolean. If TRUE, it saves apollo_inputs to disc, and removes database and draws from the apollo_inputs in .GlobalEnv and the parent environment.

Details

Internal use only. Called by apollo_estimate before estimation. Using multiple cores greatly increases memory consumption.

Value

Cluster (i.e. an object of class cluster from package parallel)

apollo_makeDraws    Creates draws for models with mixing

Description

Creates a list containing all draws necessary to estimate a model with mixing.

Usage

apollo_makeDraws(apollo_inputs, silent = FALSE)

Arguments

apollo_inputs    List grouping most common inputs. Created by function apollo_validateInputs.
silent    Boolean. If TRUE, then no information is printed to console or default output. FALSE by default.

Details

Internal use only. Called by apollo_validateInputs.

This function creates a list whose elements are the sets of draws requested by the user for use in a model with mixing. If the model does not include mixing, then it is not necessary to run this function.

The number of draws has a massive impact on memory usage and estimation time. Memory usage and the number of computations scale with N*interNDraws*intraNDraws (where N is the number of observations). Special care should be taken when using both inter- and intra-individual draws, as memory usage can easily reach the GB order of magnitude. Also, keep in mind that using several threads (i.e. multicore) at least doubles the memory usage.

This function returns a list, with each element representing a random component of the mixing model. The dimensions of the arrays depend on the type of draws used.
1. If only inter-individual draws are used, then draws are stored as 2-dimensional arrays (i.e. matrices).
2. If intra-individual draws are used, then draws are stored as 3-dimensional arrays.
3. The first dimension of the arrays (rows) corresponds to the observations in the database.
4. The second dimension of the arrays (columns) corresponds to the number of inter-individual draws.
5. The third dimension of the arrays corresponds to the number of intra-individual draws.

Value

List. Each element is an array of draws representing a random component of the mixing model.

apollo_makeGrad    Creates gradient function

Description

Creates gradient function from the likelihood function apollo_probabilities provided by the user. Returns NULL if the creation of the gradient function fails.

Usage

apollo_makeGrad(
  apollo_beta,
  apollo_fixed,
  apollo_logLike,
  validateGrad = FALSE
)

Arguments

apollo_beta    Named numeric vector. Names and values for parameters.
apollo_fixed    Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation.
apollo_logLike    Function to calculate the log-likelihood of the model, as created by apollo_makeLogLike.
validateGrad    Logical. If TRUE, the value of the analytical gradient evaluated at apollo_beta is compared against the numerical gradient at the same point, calculated using apollo_logLike and the numDeriv package. If the difference between the two is bigger than 1, the analytical gradient is assumed to be wrong and NULL is returned.

Details

Internal use only. Called by apollo_estimate before estimation. The returned function can be single-threaded or multi-threaded based on the model options.

Value

apollo_gradient function. It receives the following arguments:
• b    Numeric vector of _variable_ parameters (i.e. must not include fixed parameters).
• countIter    Not used. Included only to mirror inputs of apollo_logLike.
• getNIter    Not used. Included only to mirror inputs of apollo_logLike.
• sumLL    Not used. Included only to mirror inputs of apollo_logLike.
• writeIter    Not used. Included only to mirror inputs of apollo_logLike.
If the creation of the gradient function fails, then it returns NULL.
apollo_makeLogLike    Creates log-likelihood function

Description

Creates log-likelihood function from the likelihood function apollo_probabilities provided by the user.

Usage

apollo_makeLogLike(
  apollo_beta,
  apollo_fixed,
  apollo_probabilities,
  apollo_inputs,
  apollo_estSet = list(estimationRoutine = "bgw"),
  cleanMemory = FALSE
)

Arguments

apollo_beta    Named numeric vector. Names and values for parameters.
apollo_fixed    Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation.
apollo_probabilities    Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs    List grouping most common inputs. Created by function apollo_validateInputs.
apollo_estSet    List of estimation options. It must contain at least one element called estimationRoutine defining the estimation algorithm. See apollo_estimate.
cleanMemory    Logical. If TRUE, then apollo_inputs$draws and apollo_inputs$database are erased throughout the calling stack. Used to reduce memory usage in case of multithreading and a large database or number of draws.

Details

Internal use only. Called by apollo_estimate before estimation. The returned function can be single-threaded or multi-threaded based on the model options.

Value

apollo_logLike function.

apollo_mdcev    Calculates MDCEV likelihoods

Description

Calculates the likelihoods of a Multiple Discrete Continuous Extreme Value (MDCEV) model and can also perform other operations based on the value of the functionality argument.

Usage

apollo_mdcev(mdcev_settings, functionality)

Arguments

mdcev_settings    List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• alpha: Named list. Alpha parameters for each alternative, including for any outside good. As many elements as alternatives.
• alternatives: Character vector. Names of alternatives, elements must match the names in list 'utilities'.
• avail: Named list of numeric vectors or scalars. Availabilities of alternatives, one element per alternative. Names of elements must match those in alternatives. Values can be 0 or 1. These can be scalars or vectors (of length equal to rows in the database). A user can also specify avail=1 to indicate universal availability, or omit the setting completely.
• budget: Numeric vector. Budget for each observation.
• componentName: Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed.
• continuousChoice: Named list of numeric vectors. Amount of consumption of each alternative. One element per alternative, as long as the number of observations or a scalar. Names must match those in alternatives.
• cost: Named list of numeric vectors. Price of each alternative. One element per alternative, each one as long as the number of observations or a scalar. Names must match those in alternatives.
• gamma: Named list. Gamma parameters for each alternative, excluding any outside good. As many elements as inside good alternatives.
• nRep: Numeric scalar. Number of simulations of the whole dataset used for forecasting. The forecast is the average of these simulations. Default is 100.
• outside: Character. Optional name of the outside good.
• rawPrediction: Logical scalar. TRUE for prediction to be returned at the draw level (a 3-dim array). FALSE for prediction to be returned averaged across draws. Default is FALSE.
• rows: Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs).
Default is "all", equivalent to rep(TRUE, nObs). • sigma: Numeric scalar. Scale parameter of the model extreme value type I error. • utilities: Named list. Utilities of the alternatives. Names of elements must match those in argument ’alternatives’. functionality Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are: • "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations. • "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws. • "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "gradient": For model estimation, produces analytical gradients of the likelihood, where possible. • "output": Prepares output for post-estimation reporting. • "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws. • "preprocess": Prepares likelihood functions for use in estimation. • "raw": For debugging, produces probabilities of all alternatives and in- dividual model components at the level of an observation, at the level of individual draws. • "report": Prepares output summarising model and choiceset structure. • "shares_LL": Produces overall model likelihood with constants only. • "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "zero_LL": Produces overall model likelihood with all parameters at zero. Value The returned object depends on the value of argument functionality as follows. • "components": Same as "estimate" • "conditionals": Same as "estimate" • "estimate": vector/matrix/array. Returns the probabilities for the observed consumption for each observation. • "gradient": Not implemented • "output": Same as "estimate" but also writes summary of input data to internal Apollo log. • "prediction": A matrix with one row per observation, and columns indicating means and s.d. of continuous and discrete predicted consumptions. • "preprocess": Returns a list with pre-processed inputs, based on mdcev_settings. • "raw": Same as "estimate" • "report": Dependent variable overview. • "shares_LL": Not implemented. Returns a vector of NA with as many elements as observa- tions. • "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs. • "zero_LL": Not implemented. Returns a vector of NA with as many elements as observations. apollo_mdcev2 Calculates MDCEV likelihoods Description Calculates the likelihoods of a Multiple Discrete Continuous Extreme Value (MDCEV) model and can also perform other operations based on the value of the functionality argument. Usage apollo_mdcev2(mdcev_settings, functionality) Arguments mdcev_settings List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional. • alpha: Named list. Alpha parameters for each alternative, including for any outside good. 
• alternatives: Character vector. Names of alternatives, elements must match the names in list 'utilities'.
• avail: Named list of numeric vectors or scalars. Availabilities of alternatives, one element per alternative. Names of elements must match those in alternatives. Values can be 0 or 1. These can be scalars or vectors (of length equal to rows in the database). A user can also specify avail=1 to indicate universal availability, or omit the setting completely.
• budget: Numeric vector. Budget for each observation.
• componentName: Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed.
• continuousChoice: Named list of numeric vectors. Amount of consumption of each alternative. One element per alternative, as long as the number of observations or a scalar. Names must match those in alternatives.
• cost: Named list of numeric vectors. Price of each alternative. One element per alternative, each one as long as the number of observations or a scalar. Names must match those in alternatives.
• fastPred: Boolean scalar. TRUE to mix parameter draws with repetition draws. This is formally incorrect, but a good approximation to the true prediction, and much faster. FALSE by default.
• gamma: Named list. Gamma parameters for each alternative, excluding any outside good. As many elements as inside good alternatives.
• nRep: Numeric scalar. Number of simulations of the whole dataset used for forecasting. The forecast is the average of these simulations. Default is 100.
• outside: Character. Optional name of the outside good.
• rawPrediction: Logical scalar. TRUE for prediction to be returned at the draw level (a 3-dim array). FALSE for prediction to be returned averaged across draws. Default is FALSE.
• rows: Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs).
• sigma: Numeric scalar. Scale parameter of the model extreme value type I error.
• utilities: Named list. Utilities of the alternatives. Names of elements must match those in argument 'alternatives'.
functionality    Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation. • "raw": For debugging, produces probabilities of all alternatives and in- dividual model components at the level of an observation, at the level of individual draws. • "report": Prepares output summarising model and choiceset structure. • "shares_LL": Produces overall model likelihood with constants only. • "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "zero_LL": Produces overall model likelihood with all parameters at zero. Value The returned object depends on the value of argument functionality as follows. • "components": Same as "estimate" • "conditionals": Same as "estimate" • "estimate": vector/matrix/array. Returns the probabilities for the observed consumption for each observation. • "gradient": Not implemented • "output": Same as "estimate" but also writes summary of input data to internal Apollo log. • "prediction": A matrix with one row per observation, and columns indicating means and s.d. of continuous and discrete predicted consumptions. • "preprocess": Returns a list with pre-processed inputs, based on mdcev_settings. • "raw": Same as "estimate" • "report": Dependent variable overview. • "shares_LL": Not implemented. Returns a vector of NA with as many elements as observa- tions. • "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs. • "zero_LL": Not implemented. Returns a vector of NA with as many elements as observations. apollo_mdcnev Calculates MDCNEV likelihoods Description Calculates the likelihoods of a Multiple Discrete Continuous Nested Extreme Value (MDCNEV) model with an outside good and can also perform other operations based on the value of the functionality argument. Usage apollo_mdcnev(mdcnev_settings, functionality) Arguments mdcnev_settings List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional. • alpha: Named list. Alpha parameters for each alternative, including for the outside good. As many elements as alternatives. • avail: Named list of numeric vectors or scalars. Availabilities of alterna- tives, one element per alternative. Names of elements must match those in alternatives. Values can be 0 or 1. These can be scalars or vectors (of length equal to rows in the database). A user can also specify avail=1 to indicate universal availability, or omit the setting completely. • alternatives: Character vector. Names of alternatives, elements must match the names in list ’utilities’. • budget: Numeric vector. Budget for each observation. • componentName: Character. Name given to model component. If not pro- vided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed. • continuousChoice: Named list of numeric vectors. Amount of consump- tion of each alternative. One element per alternative, as long as the number of observations or a scalar. Names must match those in alternatives. • cost: Named list of numeric vectors. Price of each alternative. One ele- ment per alternative, each one as long as the number of observations or a scalar. Names must match those in alternatives. • gamma: Named list. Gamma parameters for each alternative, including for the outside good. As many elements as alternatives. • mdcnevNests: Named list. Lambda parameters for each nest. Elements must be named with the nest name. 
• mdcnevStructure: Numeric matrix. One row per nest and one column per alternative. Each element of the matrix is 1 if an alternative belongs to the corresponding nest.
• outside: Character. Alternative name for the outside good. Default is "outside".
• rows: Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs).
• utilities: Named list. Utilities of the alternatives. Names of elements must match those in argument 'alternatives'.
functionality    Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and individual model components at the level of an observation, at the level of individual draws.
• "report": Prepares output summarising model and choiceset structure.
• "shares_LL": Produces overall model likelihood with constants only.
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "zero_LL": Produces overall model likelihood with all parameters at zero.

Value

The returned object depends on the value of argument functionality as follows.
• "components": Same as "estimate"
• "conditionals": Same as "estimate"
• "estimate": vector/matrix/array. Returns the probabilities for the observed consumption for each observation.
• "gradient": Not implemented
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "prediction": A matrix with one row per observation, and columns indicating means and s.d. of continuous and discrete predicted consumptions.
• "preprocess": Returns a list with pre-processed inputs, based on mdcnev_settings.
• "raw": Same as "estimate"
• "report": Dependent variable overview.
• "shares_LL": Not implemented. Returns a vector of NA with as many elements as observations.
• "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs.
• "zero_LL": Not implemented. Returns a vector of NA with as many elements as observations.
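As an illustration, the nesting inputs for two nests over four alternatives might look as follows (a hedged sketch; all names and values are illustrative, and the lambda parameters would normally come from apollo_beta; alternatives are assumed ordered as car, bus, air, rail):

lambda_PT    = 0.5   # placeholder, normally an estimated parameter
lambda_other = 0.7   # placeholder, normally an estimated parameter
mdcnevNests  = list(PT = lambda_PT, other = lambda_other)

### One row per nest, one column per alternative (in the order of 'alternatives')
mdcnevStructure = matrix(c(0, 1, 0, 1,    # PT nest: bus and rail
                           1, 0, 1, 0),   # other nest: car and air
                         nrow = 2, byrow = TRUE)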
apollo_mixConditionals    Calculates conditionals for continuous mixture models

Description

Calculates posterior expected values (conditionals) of continuously distributed random coefficients, as well as their standard deviations.

Usage

apollo_mixConditionals(model, apollo_probabilities, apollo_inputs)

Arguments

model    Model object. Estimated model object as returned by function apollo_estimate.
apollo_probabilities    Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs    List grouping most common inputs. Created by function apollo_validateInputs.

Details

This function is only meant for use with continuous distributions.

Value

List of matrices. Each matrix has dimensions nIndiv x 3. One matrix per random component. Each row of each matrix contains the indivID of an individual, and the posterior mean and s.d. of this random component for this individual.
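A usage sketch (assuming an estimated mixture model with a random coefficient named b_tt; the name is illustrative):

conditionals = apollo_mixConditionals(model, apollo_probabilities, apollo_inputs)
conditionals[["b_tt"]]   # columns: individual ID, posterior mean, posterior s.d.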
apollo_mixEM    Uses EM for models with continuous random coefficients

Description

Uses the EM algorithm for estimating a model with continuous random coefficients.

Usage

apollo_mixEM(
  apollo_beta,
  apollo_fixed,
  apollo_probabilities,
  apollo_inputs,
  mixEM_settings = NA,
  estimate_settings = NA
)

Arguments

apollo_beta    Named numeric vector. Names and values for parameters. These need to be provided in the following order: with K random parameters, K means for the underlying Normals, followed by the elements of the lower triangle of the Cholesky matrix, by row.
apollo_fixed    Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation.
apollo_probabilities    Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs    List grouping most common inputs. Created by function apollo_validateInputs.
mixEM_settings    List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• EMmaxIterations: Numeric. Maximum number of iterations of the EM algorithm before stopping. Default is 100.
• postEM: Numeric scalar. Determines the tasks performed by this function after the EM algorithm has converged. Can take values 0, 1 or 2 only. If value is 0, only the EM algorithm will be performed, and the result will be a model object without a covariance matrix (i.e. estimates only). If value is 1, after the EM algorithm, the covariance matrix of the model will be calculated as well, and the result will be a model object with a covariance matrix. If value is 2, after the EM algorithm, the estimated parameter values will be used as starting values for a maximum likelihood estimation process, which will render a model object with a covariance matrix. Performing maximum likelihood estimation after the EM algorithm is useful, as there may be room for further improvement. Default is 2.
• silent: Boolean. If TRUE, no information is printed to the console during estimation. Default is FALSE.
• stoppingCriterion: Numeric. Convergence criterion. The EM process will stop when improvements in the log-likelihood fall below this value. Default is 10^-5.
• transforms: List. Optional argument, with one entry per parameter, showing the inverse transform to return from beta to the underlying Normal. E.g. if the first parameter is specified as negative lognormal inside apollo_randCoeff, then the entry in transforms should be transforms[[1]]=function(x) log(-x)
estimate_settings    List. Options controlling the estimation process within each EM iteration. See apollo_estimate for details.

Details

This function uses the EM algorithm for estimating a model with continuous random coefficients. It is only suitable for models where all parameters are random, with a full covariance matrix. All random parameters need to be based on underlying Normals with a full covariance matrix, but any transform thereof can be used.

Value

model object

apollo_mixUnconditionals    Returns draws for continuously distributed random parameters in mixture model

Description

Returns draws (unconditionals) for random parameters in model, including interactions with deterministic covariates.

Usage

apollo_mixUnconditionals(model, apollo_probabilities, apollo_inputs)

Arguments

model    Model object. Estimated model object as returned by function apollo_estimate.
apollo_probabilities    Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs    List grouping most common inputs. Created by function apollo_validateInputs.

Details

This function is only meant for use with continuous distributions.

Value

List of objects, one per random coefficient. With inter-individual draws only, this will be a matrix, with one row per individual, and one column per draw. With intra-individual draws, this will be a three-dimensional array, with one row per observation, inter-individual draws in the second dimension, and intra-individual draws in the third dimension.

apollo_mlhs    Generate random draws using MLHS algorithm

Description

Generate random draws using the Modified Latin Hypercube Sampling algorithm.

Usage

apollo_mlhs(N, d, i)

Arguments

N    Numeric. The number of draws to generate in each dimension.
d    Numeric. The number of dimensions to generate draws in.
i    Numeric. The number of individuals to generate draws for.

Details

Internal use only. Algorithm described in <NAME>., <NAME>., and <NAME>. (2006) Transportation Research Part B, 40, 147-163.

Value

A (N*i) x d matrix with random draws.

apollo_mnl    Calculates Multinomial Logit probabilities

Description

Calculates the probabilities of a Multinomial Logit model and can also perform other operations based on the value of the functionality argument.

Usage

apollo_mnl(mnl_settings, functionality)

Arguments

mnl_settings    List of inputs of the MNL model. It should contain the following.
• alternatives: Named numeric vector. Names of alternatives and their corresponding value in choiceVar.
• avail: Named list of numeric vectors or scalars. Availabilities of alternatives, one element per alternative. Names of elements must match those in alternatives. Values can be 0 or 1. These can be scalars or vectors (of length equal to rows in the database). A user can also specify avail=1 to indicate universal availability, or omit the setting completely.
• choiceVar: Numeric vector. Contains choices for all observations. It will usually be a column from the database. Values are defined in alternatives.
• utilities: Named list of deterministic utilities. Utilities of the alternatives. Names of elements must match those in alternatives.
• rows: Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs).
• componentName: Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed.
functionality    Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and individual model components at the level of an observation, at the level of individual draws.
• "report": Prepares output summarising model and choiceset structure.
• "shares_LL": Produces overall model likelihood with constants only.
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "zero_LL": Produces overall model likelihood with all parameters at zero.

Value

The returned object depends on the value of argument functionality as follows.
• "components": Same as "estimate"
• "conditionals": Same as "estimate"
• "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation.
• "gradient": List containing the likelihood and gradient of the model component.
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all alternatives, with an extra element for the probability of the chosen alternative.
• "preprocess": Returns a list with pre-processed inputs, based on mnl_settings.
• "raw": Same as "prediction"
• "report": Choice overview
• "shares_LL": vector/matrix/array. Returns the probability of the chosen alternative when only constants are estimated.
• "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs.
• "zero_LL": vector/matrix/array. Returns the probability of the chosen alternative when all parameters are zero.
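For instance, a minimal two-alternative component could be defined as follows inside apollo_probabilities (a hedged sketch; the parameters b_*, the data columns tt_*, av_* and choice are illustrative):

V = list()
V[["car"]]  = b_car + b_tt * tt_car
V[["rail"]] =         b_tt * tt_rail

mnl_settings = list(alternatives  = c(car = 1, rail = 2),
                    avail         = list(car = av_car, rail = av_rail),
                    choiceVar     = choice,
                    utilities     = V,
                    componentName = "MNL")
P[["model"]] = apollo_mnl(mnl_settings, functionality)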
apollo_modeChoiceData    Simulated dataset of mode choice

Description

A simulated dataset containing 8,000 mode choices among four alternatives.

Usage

apollo_modeChoiceData

Format

A data.frame with 8,000 rows and 25 variables:
ID    Numeric. Identification number of the individual.
RP    Numeric. 1 if the row corresponds to a revealed preference (RP) observation. 0 otherwise.
RP_journey    Numeric. Consecutive ID of RP observations. 0 if SP observation.
SP    Numeric. 1 if the row corresponds to a stated preference (SP) observation. 0 otherwise.
SP_task    Numeric. Consecutive ID of SP choice tasks. 0 if RP observation.
access_air    Numeric. Access time (in minutes) of mode air.
access_bus    Numeric. Access time (in minutes) of mode bus.
access_rail    Numeric. Access time (in minutes) of mode rail.
av_air    Numeric. 1 if the mode air (plane) is available. 0 otherwise.
av_bus    Numeric. 1 if the mode bus is available. 0 otherwise.
av_car    Numeric. 1 if the mode car is available. 0 otherwise.
av_rail    Numeric. 1 if the mode rail (train) is available. 0 otherwise.
business    Numeric. Purpose of the trip. 1 for business, 0 for other.
choice    Numeric. Choice indicator, 1=car, 2=bus, 3=air, 4=rail.
cost_air    Numeric. Cost (in GBP) of mode air.
cost_bus    Numeric. Cost (in GBP) of mode bus.
cost_car    Numeric. Cost (in GBP) of mode car.
cost_rail    Numeric. Cost (in GBP) of mode rail.
female    Numeric. Sex of individual. 1 for female, 0 for male.
income    Numeric. Income (in GBP per annum) of the individual.
service_air    Numeric. Additional services for the air alternative. 1 for no-frills, 2 for wifi, 3 for food. This is not used in the RP data, where it is set to 0.
service_rail    Numeric. Additional services for the rail alternative. 1 for no-frills, 2 for wifi, 3 for food. This is not used in the RP data, where it is set to 0.
time_air    Numeric. Travel time (in minutes) of mode air.
time_bus    Numeric. Travel time (in minutes) of mode bus.
time_car    Numeric. Travel time (in minutes) of mode car.
time_rail    Numeric. Travel time (in minutes) of mode rail.

Details

This dataset is to be used for discrete choice modelling. Data comes from 500 individuals, each with two revealed preference (RP) observations and 14 stated choice (SC) observations. There are 8,000 choices in total. Data is simulated. Each observation contains attributes for the alternatives, availability of alternatives, and characteristics of the individuals.

Source

http://www.apollochoicemodelling.com/

apollo_modelOutput    Prints estimation results to console

Description

Prints estimation results to console. Amount of information presented can be adjusted through arguments.

Usage

apollo_modelOutput(model, modelOutput_settings = NA)

Arguments

model    Model object. Estimated model object as returned by function apollo_estimate.
modelOutput_settings    List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• printChange: Logical. TRUE for printing difference between starting values and estimates. FALSE by default.
• printClassical: Logical. TRUE for printing classical standard errors. TRUE by default.
• printCorr: Boolean. TRUE for printing parameters correlation matrix. If printClassical=TRUE, both classical and robust matrices are printed. For Bayesian estimation, this setting is used for the covariance of random parameters. FALSE by default.
• printCovar: Boolean. TRUE for printing parameters covariance matrix. If printClassical=TRUE, both classical and robust matrices are printed. For Bayesian estimation, this setting is used for the correlation of random parameters. FALSE by default.
• printDataReport: Logical. TRUE for printing summary of choices in database and other diagnostics. FALSE by default.
• printFixed: Logical. TRUE for printing fixed parameters among estimated parameters. TRUE by default.
• printFunctions: Logical. TRUE for printing apollo_control, apollo_randCoeff (when available), apollo_lcPars (when available) and apollo_probabilities. FALSE by default.
• printHBconvergence: Boolean. TRUE for printing Geweke convergence tests. FALSE by default.
• printHBiterations: Boolean. TRUE for printing an iterations report for HB estimation. TRUE by default.
• printModelStructure: Logical. TRUE for printing model structure. TRUE by default.
• printOutliers: Logical or Scalar. TRUE for printing the 20 individuals with worst average fit across observations. FALSE by default. If a Scalar is given, this replaces the default of 20.
• printPVal: Logical or Scalar. TRUE or 1 for printing p-values for one-sided test, 2 for printing p-values for two-sided test, FALSE for not printing p-values. FALSE by default.
• printT1: Logical. If TRUE, t-tests for H0: apollo_beta=1 are printed. FALSE by default.

Details

Prints to screen the output of a model previously estimated by apollo_estimate().

Value

A matrix of coefficients, s.d. and t-tests (invisible).

apollo_modifyUserDefFunc    Checks and modifies Apollo user-defined functions

Description

Checks and enhances user defined functions apollo_probabilities, apollo_randCoeff and apollo_lcPars.

Usage

apollo_modifyUserDefFunc(
  apollo_beta,
  apollo_fixed,
  apollo_probabilities,
  apollo_inputs,
  validate = TRUE,
  noModification = FALSE
)

Arguments

apollo_beta    Named numeric vector. Names and values for parameters.
apollo_fixed    Character vector. Names of parameters inside apollo_beta whose values should be kept constant throughout estimation.
apollo_probabilities    Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs    List grouping most common inputs. Created by function apollo_validateInputs.
validate    Logical. If TRUE, the original and modified apollo_probabilities functions are estimated. If their results do not match, then the original functions are returned, and success is set to FALSE inside the returned list.
noModification    Logical. If TRUE, loop expansion, etc., are skipped.

Details

Internal use only. Called by apollo_estimate before estimation.
Checks include: no re-definition of variables, no (direct) calls to database, and calling of apollo_weighting if weights are defined.

Value

List with four elements: apollo_probabilities, apollo_randCoeff, apollo_lcPars and a dummy called success (TRUE if the modification was successful, FALSE if not; FALSE will only be returned if the modifications are validated).

apollo_nl    Calculates Nested Logit probabilities

Description

Calculates the probabilities of a Nested Logit model and can also perform other operations based on the value of the functionality argument.

Usage

apollo_nl(nl_settings, functionality)

Arguments

nl_settings    List of inputs of the NL model. It should contain the following.
• alternatives: Named numeric vector. Names of alternatives and their corresponding value in choiceVar.
• avail: Named list of numeric vectors or scalars. Availabilities of alternatives, one element per alternative. Names of elements must match those in alternatives. Values can be 0 or 1. These can be scalars or vectors (of length equal to rows in the database). A user can also specify avail=1 to indicate universal availability, or omit the setting completely.
• choiceVar: Numeric vector. Contains choices for all observations. It will usually be a column from the database. Values are defined in alternatives.
• componentName: Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed.
• nlNests: List of numeric scalars or vectors. Lambda parameters for each nest. Elements must be named with the nest name. The lambda at the root is automatically fixed to 1 if not provided by the user.
• nlStructure: Named list of character vectors. As many elements as nests, it must include the "root". Each element contains the names of the nests or alternatives that belong to it. Element names must match those in nlNests.
• utilities: Named list of deterministic utilities. Utilities of the alternatives. Names of elements must match those in alternatives.
• rows: Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs).
functionality    Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and in- dividual model components at the level of an observation, at the level of individual draws. • "report": Prepares output summarising model and choiceset structure. • "shares_LL": Produces overall model likelihood with constants only. • "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "zero_LL": Produces overall model likelihood with all parameters at zero. Details In this implementation of the Nested Logit model, each nest must have a lambda parameter asso- ciated to it. For the model to be consistent with utility maximisation, the estimated value of the Lambda parameter of all nests should be between 0 and 1. Lambda parameters are inversely pro- portional to the correlation between the error terms of alternatives in a nest. If lambda=1, then there is no relevant correlation between the unobserved utility of alternatives in that nest. The tree must contain an upper nest called "root". The lambda parameter of the root is automatically set to 1 if not specified in nlNests, but can be changed by the user if desired (though not advised). Value The returned object depends on the value of argument functionality as follows. • "components": Same as "estimate" • "conditionals": Same as "estimate" • "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation. • "gradient": Not implemented. • "output": Same as "estimate" but also writes summary of input data to internal Apollo log. • "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all alternatives, with an extra element for the probability of the chosen alternative. • "preprocess": Returns a list with pre-processed inputs, based on nl_settings. • "raw": Same as "prediction" • "report": List with tree structure and choice overview. • "shares_LL": vector/matrix/array. Returns the probability of the chosen alternative when only constants are estimated. • "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs. • "zero_LL": vector/matrix/array. Returns the probability of the chosen alternative when all parameters are zero. apollo_normalDensity Calculates density for a Normal distribution Description Calculates density for a Normal distribution at a specific value with a specified mean and standard deviation and can also perform other operations based on the value of the functionality argument. Usage apollo_normalDensity(normalDensity_settings, functionality) Arguments normalDensity_settings List of arguments to the functions. It must contain the following. • componentName: Character. Name given to model component. If not pro- vided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed. • mu: Numeric scalar. Intercept of the linear model. • outcomeNormal: Numeric vector. Dependent variable. • rows: Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs). • sigma: Numeric scalar. Variance of error component of linear model to be estimated. • xNormal: Numeric vector. Single explanatory variable. functionality Character. Setting instructing Apollo what processing to apply to the likelihood function. 
This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and individual model components at the level of an observation, at the level of individual draws.
• "report": Prepares output summarising model and choiceset structure.
• "shares_LL": Produces overall model likelihood with constants only.
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "zero_LL": Produces overall model likelihood with all parameters at zero.
Details
This function calculates the probability of the linear model outcomeNormal = mu + xNormal + epsilon, where epsilon is a random error distributed Normal(0,sigma). If using this function in the context of an Integrated Choice and Latent Variable (ICLV) model with continuous indicators, then outcomeNormal would be the value of the indicator, xNormal would be the value of the latent variable (possibly multiplied by a parameter to measure its correlation with the indicator, e.g. xNormal=lambda*LV), and mu would be an additional parameter to be estimated (the mean of the indicator, which should be fixed to zero if the indicator is centered around its mean beforehand).
Value
The returned object depends on the value of argument functionality as follows.
• "components": Same as "estimate"
• "conditionals": Same as "estimate"
• "estimate": vector/matrix/array. Returns the likelihood for each observation.
• "gradient": Not implemented
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "prediction": Not implemented. Returns NA.
• "preprocess": Returns a list with pre-processed inputs, based on normalDensity_settings.
• "raw": Same as "estimate"
• "report": Dependent variable overview.
• "shares_LL": Not implemented. Returns a vector of NA with as many elements as observations.
• "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs.
• "zero_LL": Not implemented. Returns a vector of NA with as many elements as observations.
apollo_ol Calculates Ordered Logit probabilities
Description
Calculates the probabilities of an Ordered Logit model and can also perform other operations based on the value of the functionality argument.
Usage
apollo_ol(ol_settings, functionality)
Arguments
ol_settings List of settings for the OL model. It should include the following.
• coding: Numeric or character vector. Optional argument. Defines the order of the levels in outcomeOrdered. The first value is associated with the lowest level of outcomeOrdered, and the last one with the highest value. If not provided, it is assumed to be 1:(length(tau) + 1).
• componentName: Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed.
• outcomeOrdered: Numeric vector. Dependent variable. The coding of this variable is assumed to be from 1 to the maximum number of different levels. For example, if the ordered response has three possible values: "never", "sometimes" and "always", then it is assumed that outcomeOrdered contains "1" for "never", "2" for "sometimes", and "3" for "always". If another coding is used, then it should be specified using the coding argument.
• rows: Boolean vector. TRUE if a row must be considered in the calculations, FALSE if it must be excluded. It must have length equal to the length of argument outcomeOrdered. Default value is "all", meaning all rows are considered in the calculation.
• tau: List of numeric vectors/matrices/3-dim arrays. Thresholds. As many as the number of different levels in the dependent variable - 1. Extreme thresholds are fixed at -inf and +inf. Mixing is allowed in thresholds. Can also be a matrix with as many rows as observations and as many columns as thresholds.
• utilities: Numeric vector/matrix/3-dim array. A single explanatory variable (usually a latent variable). Must have the same number of rows as outcomeOrdered.
functionality Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and individual model components at the level of an observation, at the level of individual draws.
• "report": Prepares output summarising model and choiceset structure.
• "shares_LL": Produces overall model likelihood with constants only.
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "zero_LL": Produces overall model likelihood with all parameters at zero.
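As an illustration of these settings, before the model details below, a minimal sketch of how apollo_ol is typically wired into an apollo_probabilities function. The names used here (the indicator attitude_1, the latent variable LV, and the parameters zeta, tau_1 and tau_2) are hypothetical, not part of the API:

# Hedged sketch: OL component for a three-level ordered indicator
ol_settings = list(
  outcomeOrdered = attitude_1,          # column of database, coded 1/2/3
  utilities      = zeta * LV,           # single explanatory variable (here a latent variable)
  tau            = list(tau_1, tau_2),  # (levels - 1) thresholds; -Inf/+Inf added internally
  componentName  = "indic_attitude_1"
)
P[["indic_attitude_1"]] = apollo_ol(ol_settings, functionality)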
Details
This function estimates an Ordered Logit model of the type:
y* = V + epsilon
outcomeOrdered = 1 if -Inf < y* < tau[1]
                 2 if tau[1] < y* < tau[2]
                 ...
                 maxLvl if tau[length(tau)] < y* < +Inf
where epsilon is distributed standard logistic, and the values 1, 2, ..., maxLvl can be replaced by coding[1], coding[2], ..., coding[maxLvl].
The behaviour of the function changes depending on the value of the functionality argument.
Value
The returned object depends on the value of argument functionality as follows.
• "components": Same as "estimate"
• "conditionals": Same as "estimate"
• "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation.
• "gradient": List containing the likelihood and gradient of the model component.
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all possible levels, with an extra element for the probability of the chosen alternative.
• "preprocess": Returns a list with pre-processed inputs, based on ol_settings.
• "raw": Same as "prediction"
• "report": Dependent variable overview.
• "shares_LL": vector/matrix/array. Returns the probability of the chosen alternative when only constants are estimated.
• "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs.
• "zero_LL": Not implemented. Returns a vector of NA with as many elements as observations.
apollo_op Calculates Ordered Probit probabilities
Description
Calculates the probabilities of an Ordered Probit model and can also perform other operations based on the value of the functionality argument.
Usage
apollo_op(op_settings, functionality)
Arguments
op_settings List of settings for the OP model. It should include the following.
• coding: Numeric or character vector. Optional argument. Defines the order of the levels in outcomeOrdered. The first value is associated with the lowest level of outcomeOrdered, and the last one with the highest value. If not provided, it is assumed to be 1:(length(tau) + 1).
• componentName: Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed.
• outcomeOrdered: Numeric vector. Dependent variable. The coding of this variable is assumed to be from 1 to the maximum number of different levels. For example, if the ordered response has three possible values: "never", "sometimes" and "always", then it is assumed that outcomeOrdered contains "1" for "never", "2" for "sometimes", and "3" for "always". If another coding is used, then it should be specified using the coding argument.
• rows: Boolean vector. TRUE if a row must be considered in the calculations, FALSE if it must be excluded. It must have length equal to the length of argument outcomeOrdered. Default value is "all", meaning all rows are considered in the calculation.
• tau: List of numeric vectors/matrices/3-dim arrays. Thresholds. As many as the number of different levels in the dependent variable - 1. Extreme thresholds are fixed at -inf and +inf. Mixing is allowed in thresholds. Can also be a matrix with as many rows as observations and as many columns as thresholds.
• utilities: Numeric vector/matrix/3-dim array. A single explanatory variable (usually a latent variable). Must have the same number of rows as outcomeOrdered.
functionality Character. Setting instructing Apollo what processing to apply to the likelihood function.
This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and individual model components at the level of an observation, at the level of individual draws.
• "report": Prepares output summarising model and choiceset structure.
• "shares_LL": Produces overall model likelihood with constants only.
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "zero_LL": Produces overall model likelihood with all parameters at zero.
Details
This function estimates an Ordered Probit model of the type:
y* = V + epsilon
y = 1 if -Inf < y* < tau[1]
    2 if tau[1] < y* < tau[2]
    ...
    max(y) if tau[max(y)-1] < y* < +Inf
where epsilon is distributed standard normal, and the values 1, 2, ..., max(y) can be replaced by coding[1], coding[2], ..., coding[maxLvl].
The behaviour of the function changes depending on the value of the functionality argument.
Value
The returned object depends on the value of argument functionality as follows.
• "components": Same as "estimate"
• "conditionals": Same as "estimate"
• "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation.
• "gradient": List containing the likelihood and gradient of the model component.
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all possible levels, with an extra element for the probability of the chosen alternative.
• "preprocess": Returns a list with pre-processed inputs, based on op_settings.
• "raw": Same as "prediction"
• "report": Dependent variable overview.
• "shares_LL": vector/matrix/array. Returns the probability of the chosen alternative when only constants are estimated.
• "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs.
• "zero_LL": Not implemented. Returns a vector of NA with as many elements as observations.
apollo_outOfSample Cross-validation of fit (LL)
Description
Randomly generates estimation and validation samples, estimates the model on the first and calculates the likelihood for the second, then repeats.
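As a sketch of the workflow before the full signature below, assuming apollo_beta, apollo_fixed, apollo_probabilities and apollo_inputs have already been set up as for apollo_estimate; the settings shown are illustrative, not defaults:

# Hedged sketch: 20 repetitions, 15% of individuals held out each time
outOfSample_settings = list(nRep = 20, validationSize = 0.15)
apollo_outOfSample(apollo_beta, apollo_fixed,
                   apollo_probabilities, apollo_inputs,
                   outOfSample_settings = outOfSample_settings)
# Results are also written to <modelName>_outOfSample_params.csv and
# <modelName>_outOfSample_samples.csv in the working/output directory.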
Usage
apollo_outOfSample(
  apollo_beta,
  apollo_fixed,
  apollo_probabilities,
  apollo_inputs,
  estimate_settings = list(estimationRoutine = "bgw", maxIterations = 200,
    writeIter = FALSE, hessianRoutine = "none", printLevel = 3L, silent = TRUE),
  outOfSample_settings = list(nRep = 10, validationSize = 0.1, samples = NA, rmse = NULL)
)
Arguments
apollo_beta Named numeric vector. Names and values for parameters.
apollo_fixed Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation.
apollo_probabilities Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs.
estimate_settings List. Options controlling the estimation process. See apollo_estimate.
outOfSample_settings List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• nRep: Numeric scalar. Number of times a different pair of estimation and validation sets are to be extracted from the full database. Default is 10.
• samples: Numeric matrix or data.frame. Optional argument. Must have as many rows as observations in the database, and as many columns as number of repetitions wanted. Each column represents a re-sample, and each element must be a 0 if the observation should be assigned to the estimation sample, or 1 if the observation should be assigned to the prediction sample. If this argument is provided, then nRep and validationSize are ignored. Note that this allows sampling at the observation rather than the individual level.
• validationSize: Numeric scalar. Size of the validation sample. Can be a percentage of the sample (0-1) or the number of individuals in the validation sample (>1). Default is 0.1.
• rmse: Character matrix with two columns. Used to calculate Root Mean Squared Error (RMSE) of prediction. The first column must contain the names of observed outcomes in the database. The second column must contain the names of the predicted outcomes as returned by apollo_prediction. If omitted or NULL, no RMSE is calculated. This only works for models with a single component.
Details
A common way to test for overfitting of a model is to measure its fit on a sample not used during estimation, that is, measuring its out-of-sample fit. A simple way to do this is splitting the complete available dataset in two parts: an estimation sample, and a validation sample. The model of interest is estimated using only the estimation sample, and then those estimated parameters are used to measure the fit of the model (e.g. the log-likelihood of the model) on the validation sample. Doing this with only one validation sample, however, may lead to biased results, as a particular validation sample need not be representative of the population. One way to minimise this issue is to randomly draw several pairs of estimation and validation samples from the complete dataset, and apply the procedure to each pair. The splitting of the database into estimation and validation samples is done at the individual level, not at the observation level.
If the sampling is to be done at the observation level (not recommended on panel data), then the optional outOfSample_settings$samples argument should be provided.
This function writes two different files to the working/output directory:
• modelName_outOfSample_params.csv: Records the estimated parameters, final log-likelihood, and number of observations on each repetition.
• modelName_outOfSample_samples.csv: Records the sample composition of each repetition.
The first file is updated throughout the run of this function, while the second is only written once the function finishes.
When run, this function will look for the two files above in the working/output directory. If they are found, the function will attempt to pick up re-sampling from where those files left off. This is useful in cases where the original bootstrapping was interrupted, or when additional re-sampling is to be performed.
Value
A matrix with the average log-likelihood per observation for both the estimation and validation samples, for each repetition. Two additional files with further details are written to the working/output directory.
apollo_ownModel Calculates own model probabilities
Description
Receives functions or expressions for each functionality so that a user-defined model can interface with Apollo.
Usage
apollo_ownModel(ownModel_settings, functionality)
Arguments
ownModel_settings List of arguments. Only likelihood is mandatory.
• likelihood: Function or expression used to calculate the likelihood of the model. Should evaluate to a vector, matrix, or 3-dimensional array.
• prediction: Function or expression used to calculate the prediction of the model. Should evaluate to a vector, matrix, or 3-dimensional array.
• zero_LL: Function or expression used to calculate the likelihood of the base model (e.g. equiprobable model).
• shares_LL: Function or expression used to calculate the likelihood of the constants-only model.
• gradient: Function or expression used to calculate the gradient of the likelihood. If not provided, Apollo will attempt to calculate it automatically.
• report: List of functions or expressions used to produce a text report summarising the input and parameter estimates of the model. Should contain two elements: "data" (with a summary of the input data), and "param" (with a summary of the estimated parameters).
functionality Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation. • "raw": For debugging, produces probabilities of all alternatives and in- dividual model components at the level of an observation, at the level of individual draws. • "report": Prepares output summarising model and choiceset structure. • "shares_LL": Produces overall model likelihood with constants only. • "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "zero_LL": Produces overall model likelihood with all parameters at zero. Value The returned object depends on the value of argument functionality as follows. • "components": Same as "estimate" • "conditionals": Same as "estimate" • "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation. • "gradient": List containing the likelihood and gradient of the model component. • "output": Same as "estimate" but also writes summary of input data to internal Apollo log. • "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all alternatives, with an extra element for the probability of the chosen alternative. • "preprocess": Returns a list with pre-processed inputs, based on mnl_settings. • "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs. • "raw": Same as "prediction" • "report": Choice overview • "shares_LL": vector/matrix/array. Returns the probability of the chosen alternative when only constants are estimated. • "validate": Same as "estimate" • "zero_LL": vector/matrix/array. Returns the probability of the chosen alternative when all parameters are zero. apollo_panelProd Calculates product across observations from same individual. Description Multiplies likelihood of observations from the same individual, or adds the log of them. Usage apollo_panelProd(P, apollo_inputs, functionality) Arguments P List of vectors, matrices or 3-dim arrays. Likelihood of the model components. apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs. functionality Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are: • "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations. • "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws. • "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "gradient": For model estimation, produces analytical gradients of the likelihood, where possible. • "output": Prepares output for post-estimation reporting. • "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws. • "preprocess": Prepares likelihood functions for use in estimation. • "raw": For debugging, produces probabilities of all alternatives and in- dividual model components at the level of an observation, at the level of individual draws. 
• "report": Prepares output summarising model and choiceset structure. • "shares_LL": Produces overall model likelihood with constants only. • "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "zero_LL": Produces overall model likelihood with all parameters at zero. Details This function should be called inside apollo_probabilities only if the data has a panel structure. It should be called after apollo_avgIntraDraws if intra-individual draws are used. Value Argument P with (for most functionalities) the original contents after multiplying across observa- tions at the individual level. Shape depends on argument functionality. • "components": Returns P without changes. • "conditionals": Returns P without averaging across draws. Drops all components except "model". • "estimate": Returns P containing the likelihood of the model after multiplying observations at the individual level. Drops all components except "model". • "gradient": Returns P containing the gradient of the likelihood after applying the product rule across observations for the same individual. • "output": Returns P containing the likelihood of the model after multiplying observations at the individual level. • "prediction": Returns P containing the probabilities/likelihoods of all alternatives for all model components averaged across inter-individual draws. • "preprocess": Returns P without changes. • "raw": Returns P without changes. • "report": Returns P without changes. • "shares_LL": Returns P containing the likelihood of the model after multiplying observations at the individual level. • "validate": Returns P containing the likelihood of the model averaged across inter-individual draws. Drops all components except "model". • "zero_LL": Returns P containing the likelihood of the model after multiplying observations at the individual level. apollo_prediction Predicts using an estimated model Description Calculates apollo_probabilities with functionality="prediction". Usage apollo_prediction( model, apollo_probabilities, apollo_inputs, prediction_settings = list(), modelComponent = NA ) Arguments model Model object. Estimated model object as returned by function apollo_estimate. apollo_probabilities Function. Returns probabilities of the model to be estimated. Must receive three arguments: • apollo_beta: Named numeric vector. Names and values of model param- eters. • apollo_inputs: List containing options of the model. See apollo_validateInputs. • functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL". apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs. prediction_settings List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional. • modelComponent: Character. Name of component of apollo_probabilities output to calculate predictions for. Default is to predict for all components. • nRep: Scalar integer. Only used for models that require simulation for pre- diction (e.g. MDCEV). Number of draws used to calculate prediction. De- fault is 100. • runs: Numeric. Number of runs to use for computing confidence intervals of predictions. • silent: Boolean. If TRUE, this function won’t print any output to screen. • summary: Boolean. 
modelComponent Deprecated. Same as modelComponent inside prediction_settings.
Details
The structure of predictions is simplified before being returned, e.g. a list of vectors is turned into a matrix.
Value
A list containing predictions for component modelComponent of the model described in apollo_probabilities. The particular shape of the prediction will depend on the model component.
apollo_prepareProb Checks likelihood function
Description
Checks that the likelihood function for the model is in the appropriate format to be returned.
Usage
apollo_prepareProb(P, apollo_inputs, functionality)
Arguments
P List of vectors, matrices or 3-dim arrays. Likelihood of the model components.
apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs.
functionality Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and individual model components at the level of an observation, at the level of individual draws.
• "report": Prepares output summarising model and choiceset structure.
• "shares_LL": Produces overall model likelihood with constants only.
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "zero_LL": Produces overall model likelihood with all parameters at zero.
Details
This function should be called inside apollo_probabilities, near the end of it, just before return(P). This function only performs checks on the shape of P, but does not change its values.
Value
Argument P with (for most functionalities) the original contents. Output depends on argument functionality.
• "components": Returns P without changes.
• "conditionals": Returns only the "model" component of argument P.
• "estimate": Returns only the "model" component of argument P.
• "gradient": Returns only the "model" component of argument P.
• "output": Returns argument P without any changes to its content, but gives names to unnamed elements.
• "prediction": Returns argument P without any changes.
• "preprocess": Returns argument P without any changes to its content, but gives names to elements corresponding to componentNames.
• "raw": Returns argument P without any changes.
• "report": Returns P without changes.
• "shares_LL": Returns argument P without any changes to its content, but gives names to unnamed elements. • "validate": Returns argument P without any changes. • "zero_LL": Returns argument P without any changes to its content, but gives names to un- named elements. apollo_preprocess Pre-process input for multiple models return Description Pre-process input for multiple models return Usage apollo_preprocess(inputs, modelType, functionality, apollo_inputs) Arguments inputs List of settings modelType Character. Type of model, e.g. "mnl", "nl", "cnl", etc. functionality Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are: • "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations. • "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws. • "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "gradient": For model estimation, produces analytical gradients of the likelihood, where possible. • "output": Prepares output for post-estimation reporting. • "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws. • "preprocess": Prepares likelihood functions for use in estimation. • "raw": For debugging, produces probabilities of all alternatives and in- dividual model components at the level of an observation, at the level of individual draws. • "report": Prepares output summarising model and choiceset structure. • "shares_LL": Produces overall model likelihood with constants only. • "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "zero_LL": Produces overall model likelihood with all parameters at zero. apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs. Value The returned object is a pre-processed version of the model settings. This is independent of functionality, but the function is only called during preprocessing. apollo_print Prints message to terminal Description Prints message to terminal if apollo_inputs$silent is FALSE Usage apollo_print(txt, nSignifD = 4, widthLim = 11, pause = 0, type = "t") Arguments txt Character, what to print. nSignifD Optional numeric integer. Minimum number of significant digits when printing numeric matrices. Default is 4. widthLim Optional numeric integer. Minimum width (in characters) of each column when printing numeric matrices. Default is 11 pause Scalar integer. Number of seconds the execution will pause after printing the message. Default is 0. type Character. "t" for regular text (default), "w" for warning, "i" for information. Value Nothing apollo_readBeta Reads parameters from file Description Reads in parameters from a previously estimated model and copies the values to the given apollo_beta vector, only for those parameters whose name matches. 
Usage
apollo_readBeta(
  apollo_beta,
  apollo_fixed,
  inputModelName,
  overwriteFixed = FALSE
)
Arguments
apollo_beta Named numeric vector. Names and values for parameters.
apollo_fixed Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation.
inputModelName Character. modelName for model from which results are used as starting values.
overwriteFixed Boolean. TRUE if starting values for fixed parameters should also be updated from input file.
Details
This function will update the values of the parameters in its argument apollo_beta with the matching values in the file (inputModelName)_estimates.csv. If there is no match for a given parameter in apollo_beta, its value will not be updated.
Value
Named numeric vector. Names and updated starting values for parameters.
apollo_rrm Calculates Random Regret Minimisation model probabilities
Description
Calculates the probabilities of a Random Regret Minimisation model and can also perform other operations based on the value of the functionality argument.
Usage
apollo_rrm(rrm_settings, functionality)
Arguments
rrm_settings List of inputs of the RRM model. It should contain the following.
• alternatives: Named numeric vector. Names of alternatives and their corresponding value in choiceVar.
• avail: Named list of numeric vectors or scalars. Availabilities of alternatives, one element per alternative. Names of elements must match those in alternatives. Values can be 0 or 1. These can be scalars or vectors (of length equal to rows in the database). A user can also specify avail=1 to indicate universal availability, or omit the setting completely.
• choiceVar: Numeric vector. Contains choices for all observations. It will usually be a column from the database. Values are defined in alternatives.
• rum_inputs: Named list of (optional) deterministic utilities. Utilities of the alternatives to be included in combined RUM-RRM models. Names of elements must match those in alternatives.
• regret_inputs: Named list of regret functions. This should contain one list per attribute, where these lists themselves contain two vectors, namely a vector of attributes (at the alternative level) and parameters (either generic or attribute specific). Zeros can be used for omitted attributes for some alternatives. The order for each attribute needs to be the same as the order in alternatives.
• regret_scale: Named list of regret scales. This should have the same length as rrm_settings$regret_inputs or be a single entry in the case of a generic scale parameter across regret attributes.
• choiceset_scaling: Vector. One entry per row in the database, often set to 2 divided by the number of available alternatives.
• rows: Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs).
• componentName: Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed.
functionality Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging.
Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and individual model components at the level of an observation, at the level of individual draws.
• "report": Prepares output summarising model and choiceset structure.
• "shares_LL": Produces overall model likelihood with constants only.
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "zero_LL": Produces overall model likelihood with all parameters at zero.
Value
The returned object depends on the value of argument functionality as follows.
• "components": Same as "estimate"
• "conditionals": Same as "estimate"
• "estimate": vector/matrix/array. Returns the probabilities for the chosen alternative for each observation.
• "gradient": List containing the likelihood and gradient of the model component.
• "output": Same as "estimate" but also writes summary of input data to internal Apollo log.
• "prediction": List of vectors/matrices/arrays. Returns a list with the probabilities for all alternatives, with an extra element for the probability of the chosen alternative.
• "preprocess": Returns a list with pre-processed inputs, based on rrm_settings.
• "raw": Same as "prediction"
• "report": Choice overview
• "shares_LL": vector/matrix/array. Returns the probability of the chosen alternative when only constants are estimated.
• "validate": Same as "estimate", but it also runs a set of tests to validate the function inputs.
• "zero_LL": vector/matrix/array. Returns the probability of the chosen alternative when all parameters are zero.
apollo_saveOutput Saves estimation results to files.
Description
Writes files in the working/output directory with the estimation results.
Usage
apollo_saveOutput(model, saveOutput_settings = NA)
Arguments
model Model object. Estimated model object as returned by function apollo_estimate.
saveOutput_settings List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• printChange: Boolean. TRUE for printing difference between starting values and estimates. TRUE by default.
• printClassical: Boolean. TRUE for printing classical standard errors. TRUE by default.
• printCorr: Boolean. TRUE for printing parameters correlation matrix. If printClassical=TRUE, both classical and robust matrices are printed. For Bayesian estimation, this setting is used for the covariance of random parameters. TRUE by default.
• printCovar: Boolean. TRUE for printing parameters covariance matrix. If printClassical=TRUE, both classical and robust matrices are printed. For Bayesian estimation, this setting is used for the correlation of random parameters. TRUE by default.
• printDataReport: Boolean. TRUE for printing summary of choices in database and other diagnostics. FALSE by default.
• printFixed: Logical. TRUE for printing fixed parameters among estimated parameters. TRUE by default.
• printFunctions: Boolean. TRUE for printing apollo_control, apollo_randCoeff (when available), apollo_lcPars (when available) and apollo_probabilities. TRUE by default.
• printHBconvergence: Boolean. TRUE for printing Geweke convergence tests. TRUE by default.
• printHBiterations: Boolean. TRUE for printing an iterations report for HB estimation. TRUE by default.
• printModelStructure: Boolean. TRUE for printing model structure. TRUE by default.
• printOutliers: Boolean or Scalar. TRUE for printing 20 individuals with worst average fit across observations. FALSE by default. If Scalar is given, this replaces the default of 20.
• printPVal: Boolean or Scalar. TRUE or 1 for printing p-values for one-sided test, 2 for printing p-values for two-sided test, FALSE for not printing p-values. FALSE by default.
• printT1: Boolean. If TRUE, t-test for H0: apollo_beta=1 are printed. FALSE by default.
• saveEst: Boolean. TRUE for saving estimated parameters and standard errors to a CSV file. TRUE by default.
• saveCorr: Boolean. TRUE for saving estimated correlation matrix to a CSV file. FALSE by default.
• saveCov: Boolean. TRUE for saving estimated covariance matrix to a CSV file. FALSE by default.
• saveHBiterations: Boolean. TRUE for including HB iterations in the saved model object. FALSE by default.
• saveModelObject: Boolean. TRUE to save the R model object to a file (use apollo_loadModel to load it to memory). TRUE by default.
• writeF12: Boolean. TRUE for writing results into an F12 file (ALOGIT format). FALSE by default.
Details
Estimation results are saved to different files in the working/output directory:
• (modelName)_corr.csv CSV file with the estimated classical correlation matrix. Only when Bayesian estimation was not used.
• (modelName)_covar.csv CSV file with the estimated classical covariance matrix. Only when Bayesian estimation was not used.
• (modelName)_estimates.csv CSV file with the estimated parameter values, their standard errors, and t-ratios.
• (modelName).F12 F12 file with model results. Compatible with ALOGIT.
• (modelName)_output.txt Text file with the output produced by function apollo_modelOutput.
• (modelName)_robcorr.csv CSV file with the estimated robust correlation matrix. Only when Bayesian estimation was not used.
• (modelName)_robcovar.csv CSV file with the estimated robust covariance matrix. Only when Bayesian estimation was not used.
Value
Nothing
apollo_searchStart Searches for better starting values.
Description
Given a set of starting values and a range for them, searches for points with a better likelihood and steeper gradients.
Usage
apollo_searchStart(
  apollo_beta,
  apollo_fixed,
  apollo_probabilities,
  apollo_inputs,
  searchStart_settings = NA
)
Arguments
apollo_beta Named numeric vector. Names and values for parameters.
apollo_fixed Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation.
apollo_probabilities Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs.
searchStart_settings List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• apolloBetaMax: Vector. Maximum possible value of parameters when generating candidates. Ignored if smartStart is TRUE. Default is apollo_beta + 0.1.
• apolloBetaMin: Vector. Minimum possible value of parameters when generating candidates. Ignored if smartStart is TRUE. Default is apollo_beta - 0.1.
• bfgsIter: Numeric scalar. Number of BFGS iterations to perform at each stage to each remaining candidate. Default is 20.
• dTest: Numeric scalar. Tolerance for test 1. A candidate is discarded if its distance in parameter space to a better one is smaller than dTest. Default is 1.
• gTest: Numeric scalar. Tolerance for test 2. A candidate is discarded if the norm of its gradient is smaller than gTest AND its LL is further than llTest from a better candidate. Default is 10^(-3).
• llTest: Numeric scalar. Tolerance for test 2. A candidate is discarded if the norm of its gradient is smaller than gTest AND its LL is further than llTest from a better candidate. Default is 3.
• maxStages: Numeric scalar. Maximum number of search stages. The algorithm will stop when there is only one candidate left, or if it reaches this number of stages. Default is 5.
• nCandidates: Numeric scalar. Number of candidate sets of parameters to be used at the start. Should be an integer bigger than 1. Default is 100.
• smartStart: Boolean. If TRUE, candidates are randomly generated with more chances in the directions the Hessian indicates improvement of the LL function. Default is FALSE.
Details
This function implements a simplified version of the algorithm proposed by <NAME>., Themans, M. & <NAME>. (2010), A Heuristic for Nonlinear Global Optimization, INFORMS Journal on Computing, 22(1), pp.59-70. The main difference lies in it implementing only two out of three tests on the candidates described by the authors. The implemented algorithm has the following steps.
1. Randomly draw nCandidates candidates from an interval given by the user.
2. Label all candidates with a valid log-likelihood (LL) as active.
3. Apply bfgsIter iterations of the BFGS algorithm to each active candidate.
4. Apply the following tests to each active candidate:
(a) Has the BFGS search converged?
(b) Are the candidate parameters after BFGS closer than dTest from any other candidate with higher LL?
(c) Is the LL of the candidate after BFGS further than llTest from a candidate with better LL, and its gradient smaller than gTest?
5. Mark any candidates for which at least one test results in yes as inactive.
6. Go back to step 3, unless only one candidate is active, or the maximum number of iterations (maxStages) has been reached.
This function will write a CSV file to the working/output directory summarising progress. This file is called modelName_searchStart.csv.
Value
Named vector of model parameters. These are the best values found.
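A hedged sketch of a typical call (the candidate count and the search bounds are illustrative; all other settings keep their defaults):

# Search for better starting values before estimation
searchStart_settings = list(nCandidates   = 50,
                            apolloBetaMin = apollo_beta - 0.5,
                            apolloBetaMax = apollo_beta + 0.5)
apollo_beta = apollo_searchStart(apollo_beta, apollo_fixed,
                                 apollo_probabilities, apollo_inputs,
                                 searchStart_settings = searchStart_settings)
# The returned named vector can then be passed on to apollo_estimate.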
apollo_setRows Sets specified rows to a given value
Description
Given a numeric object (scalar, vector, matrix or 3-dim array) sets a subset of rows to a given value.
Usage
apollo_setRows(v, r, val)
Arguments
v Numeric scalar, vector, matrix or 3-dim array. Rows of this object will be replaced by val.
r Boolean vector. As many elements as rows in v. TRUE for replacing that row, FALSE for not changing it.
val Numeric scalar. Value to which the specified rows must be set.
Value
The same argument v but with the rows where r==TRUE set to val.
apollo_sharesTest Compares predicted and observed shares
Description
Compares the shares predicted by the model with the shares observed in the data, and conducts statistical tests.
Usage
apollo_sharesTest(
  model,
  apollo_probabilities,
  apollo_inputs,
  sharesTest_settings
)
Arguments
model Model object. Estimated model object as returned by function apollo_estimate.
apollo_probabilities Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs.
sharesTest_settings List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• alternatives: Named numeric vector. Names of alternatives and their corresponding value in choiceVar.
• choiceVar: Numeric vector. Contains choices for all observations. It will usually be a column from the database. Values are defined in alternatives.
• modelComponent: Name of model component. Set to model by default.
• newAlts: Optional list describing the new alternatives to be used by apollo_sharesTest. This should have as many elements as new alternatives, with each entry being a matrix of 0-1 entries, with one row per observation, and one column per alternative used in the model.
• newAltsOnly: Boolean. If TRUE, results will only be printed for the 'new' alternatives defined in newAlts, not the original alternatives used in the model. Set to FALSE by default.
• subsamples: Named list of boolean vectors. Each element of the list defines whether a given observation belongs to a given subsample (e.g. by sociodemographics).
Details
This is an auxiliary function to help guide the definition of utility functions in a choice model. By comparing the predicted and observed shares of alternatives for different categories of the data, it is possible to identify what additional explanatory variables could improve the fit of the model.
Value
Nothing
apollo_sink Starts or stops writing output to a text file.
Description
Starts or stops writing the output shown in the console to a file named "modelName_additional_output.txt".
Usage
apollo_sink(apollo_inputs = NULL)
Arguments
apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs. If not provided, it will be looked for in the global environment.
Details
After the first time this function is called, all output shown in the console will also be written to a text file called "modelName_additional_output.txt", where "modelName" is the modelName set inside apollo_control.
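A typical paired use, as a sketch; the calls bracket whatever console output should be captured, and the estimation call in between is illustrative:

apollo_sink()   # first call: start writing console output to the file
model = apollo_estimate(apollo_beta, apollo_fixed,
                        apollo_probabilities, apollo_inputs)
apollo_modelOutput(model)
apollo_sink()   # second call: stop writing and close the file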
The second time this function is called, it stops writing the console output to the file. The user should always call this function an even number of times to close the output file and prevent data loss.
Value
Nothing.
apollo_speedTest Measures evaluation time of a model
Description
Measures the evaluation time of a model for different numbers of cores and draws.
Usage
apollo_speedTest(
  apollo_beta,
  apollo_fixed,
  apollo_probabilities,
  apollo_inputs,
  speedTest_settings = NA
)
Arguments
apollo_beta Named numeric vector. Names and values for parameters.
apollo_fixed Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation.
apollo_probabilities Function. Returns probabilities of the model to be estimated. Must receive three arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: List containing options of the model. See apollo_validateInputs.
• functionality: Character. Can be either "components", "conditionals", "estimate" (default), "gradient", "output", "prediction", "preprocess", "raw", "report", "shares_LL", "validate" or "zero_LL".
apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs.
speedTest_settings List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
• nCoresTry: Numeric vector. Number of threads to try. Default is from 1 to the detected number of cores.
• nDrawsTry: Numeric vector. Number of inter and intra-person draws to try. Default value is c(50, 100, 200).
• nRep: Numeric scalar. Number of times the likelihood is evaluated for each combination of threads and draws. Default is 10.
Details
This function evaluates the function apollo_probabilities several times using different numbers of threads (a.k.a. processor cores), and draws (if the model uses mixing). It then plots the estimation time for each combination. Estimation time grows at least linearly with the number of draws, while time savings decrease with the number of threads. This function can help decide what number of draws and cores to use for estimation, though a high number of draws is always recommended. If the computer will be used for additional activities during estimation, no more than (machine number of cores - 1) threads should be used. Using more threads than cores available in the machine will lead to reduced performance. The use of additional cores comes at the expense of additional memory usage. If R uses more memory than the physical RAM available, then significant slow-downs in processing time can be expected. This function can help avoid such pitfalls.
Value
A matrix with the average time per evaluation for each number of threads and draws combination. A graph is also plotted.
apollo_swissRouteChoiceData Dataset of route choice.
Description
A Stated Preference dataset containing 3,492 route choices between two alternatives.
Usage
apollo_swissRouteChoiceData
Format
A data frame with 3,492 rows and 16 variables:
ID Numeric. Identification number of the individual.
choice Numeric. Choice indicator, 1 for alternative 1, and 2 for alternative 2.
tt1 Numeric. Travel time (in minutes) for alternative 1.
tc1 Numeric. Travel cost (in CHF) for alternative 1.
hw1 Numeric. Headway time (in minutes) for alternative 1.
ch1 Numeric. Number of interchanges for alternative 1.
tt2 Numeric. Travel time (in minutes) for alternative 2.
tc2 Numeric. Travel cost (in CHF) for alternative 2.
hw2 Numeric. Headway time (in minutes) for alternative 2.
ch2 Numeric. Number of interchanges for alternative 2.
hh_inc_abs Numeric. Household income (in CHF per annum).
car_availability Numeric. 1 if respondent has a car available, 0 otherwise.
commute Numeric. 1 if the purpose of the trip is commuting, 0 otherwise.
shopping Numeric. 1 if the purpose of the trip is shopping, 0 otherwise.
business Numeric. 1 if the purpose of the trip is business, 0 otherwise.
leisure Numeric. 1 if the purpose of the trip is leisure, 0 otherwise.
Details
This dataset is to be used for discrete choice modelling. Data comes from 388 individuals who participated in a Stated Choice (SC) survey, providing a total of 3,492 observations. Each choice scenario includes two alternatives described in terms of travel time, cost, headway and interchanges. Additional information on respondents is available. This dataset comes from the following publication. <NAME>. & <NAME>. (2003), The impact of tilting trains in Switzerland: A route choice model of regional and long distance public transport trips. 82nd Annual Meeting of the Transportation Research Board, Washington, DC.
Source
http://www.apollochoicemodelling.com/
apollo_timeUseData Dataset of time use.
Description
A Revealed Preference dataset containing 2,826 full-day observations.
Usage
apollo_timeUseData
Format
An object of class data.frame with 2826 rows and 20 columns.
Details
This dataset is to be used for Multiple Discrete Continuous (MDC) modelling. Data comes from 447 individuals who provided activity diaries for a total of 2,826 days. Each observation summarizes the amount of time spent in each of twelve different activities. The dataset also includes characteristics of the participants. This dataset comes from the following publication. Calastri, C., <NAME> <NAME>. and <NAME>. (2020) We want it all: experiences from a survey seeking to capture social network structures, lifetime events and short-term travel and activity planning. Transportation, 47(1), pp. 175-201.
indivID Numeric. Identification number of the individual.
day Numeric. Index of the day for each observation (day 1 was excluded).
date Numeric. Date in format yyyymmdd.
budget Numeric. Total amount of time registered during the day (in minutes).
t_a01 Numeric. Time spent dropping off or picking up other people (in minutes).
t_a02 Numeric. Time spent working (in minutes).
t_a03 Numeric. Time spent on educational activities (in minutes).
t_a04 Numeric. Time spent shopping (in minutes).
t_a05 Numeric. Time spent on private business (in minutes).
t_a06 Numeric. Time spent getting petrol (in minutes).
t_a07 Numeric. Time spent on social or leisure activities (in minutes).
t_a08 Numeric. Time spent on vacation or long (inter-city) travel (in minutes).
t_a09 Numeric. Time spent doing exercise (in minutes).
t_a10 Numeric. Time spent at home (in minutes).
t_a11 Numeric. Time spent travelling (everyday travelling) (in minutes).
t_a12 Numeric. Non-allocated time (in minutes).
female Numeric. 1 if respondent is female, 0 otherwise.
age Numeric. Age of respondent (in years, approximate).
occ_full_time Numeric. 1 if the respondent works full time.
weekend Numeric. 1 if the current date is a weekend.
Source
http://www.apollochoicemodelling.com/
apollo_unconditionals Returns unconditionals for models with random heterogeneity
Description
Returns unconditionals for random parameters in a model, both for continuous mixtures and latent classes.
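As an illustration, before the full signature under Usage below, a sketch assuming a model with continuous random coefficients estimated earlier in the same session; the coefficient name "b_tt" is hypothetical:

# Retrieve the unconditional draws of each random coefficient
unconditionals = apollo_unconditionals(model, apollo_probabilities, apollo_inputs)
summary(unconditionals[["b_tt"]])   # one matrix/array per random coefficient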
apollo_validate Pre-process input for common models return

Description
Pre-process input for common models return.

Usage
apollo_validate(inputs, modelType, functionality, apollo_inputs)

Arguments
inputs List of settings.
modelType Character. Type of model, e.g. "mnl", "nl", "cnl", etc.
functionality Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are:
• "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations.
• "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws.
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "gradient": For model estimation, produces analytical gradients of the likelihood, where possible.
• "output": Prepares output for post-estimation reporting.
• "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws.
• "preprocess": Prepares likelihood functions for use in estimation.
• "raw": For debugging, produces probabilities of all alternatives and individual model components at the level of an observation, at the level of individual draws.
• "report": Prepares output summarising model and choiceset structure.
• "shares_LL": Produces overall model likelihood with constants only.
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws.
• "zero_LL": Produces overall model likelihood with all parameters at zero.
apollo_inputs List of main inputs to the model estimation process. See apollo_validateInputs.

Value
The returned object depends on the value of argument functionality.

apollo_validateControl Validates apollo_control

Description
Validates the options controlling the running of the code apollo_control and sets default values for the omitted ones.

Usage
apollo_validateControl(database, apollo_control, silent = FALSE)

Arguments
database data.frame. Data used by model.
apollo_control List. Options controlling the running of the code. User input is required for all settings except those with a default or marked as optional.
• calculateLLC: Boolean. TRUE if user wants to calculate LL at constants (if applicable). TRUE by default.
• HB: Boolean. TRUE if using RSGHB for Bayesian estimation of model.
• indivID: Character. Name of column in the database with each decision maker's ID.
• mixing: Boolean. TRUE for models that include random parameters.
• modelDescr: Character. Description of the model. Used in output files.
• modelName: Character. Name of the model. Used when saving the output to files.
• nCores: Numeric>0. Number of cores to use in calculations of the model likelihood.
• noDiagnostics: Boolean. TRUE if user does not wish model diagnostics to be printed. FALSE by default.
• noValidation: Boolean. TRUE if user does not wish model input to be validated before estimation. FALSE by default.
• outputDirectory: Character. Optional directory for outputs if different from working directory. Empty by default.
• panelData: Boolean. TRUE if there are multiple observations (i.e. rows) for each decision maker. Automatically set based on indivID by default.
• seed: Numeric. Seed for random number generation.
• weights: Character. Name of column in database containing weights for estimation.
• workInLogs: Boolean. TRUE for increased numeric precision in models with panel data. FALSE by default.
silent Boolean. If TRUE, no messages are printed to screen.

Details
This function should be run before running apollo_validateData.

Value
Validated version of apollo_control, with additional element called panelData set to TRUE for repeated choice data.
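For reference, a minimal apollo_control list of the kind this function validates; the names follow the settings documented above, and the values are placeholders rather than recommendations.

```
# Illustrative apollo_control list; values are placeholders.
apollo_control <- list(
  modelName  = "MNL_example",      # used when saving output files
  modelDescr = "Simple MNL model", # used in output files
  indivID    = "ID",               # ID column in the database
  nCores     = 1,
  mixing     = FALSE
)
```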
• "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "zero_LL": Produces overall model likelihood with all parameters at zero. apollo_inputs List of main inputs to the model estimation process. See apollo_validateInputs. Value The returned object depends on the value of argument operation apollo_validateControl Validates apollo_control Description Validates the options controlling the running of the code apollo_control and sets default values for the omitted ones. Usage apollo_validateControl(database, apollo_control, silent = FALSE) Arguments database data.frame. Data used by model. apollo_control List. Options controlling the running of the code. User input is required for all settings except those with a default or marked as optional. • calculateLLC: Boolean. TRUE if user wants to calculate LL at constants (if applicable). - TRUE by default. • HB: Boolean. TRUE if using RSGHB for Bayesian estimation of model. • indivID: Character. Name of column in the database with each decision maker’s ID. • mixing: Boolean. TRUE for models that include random parameters. • modelDescr: Character. Description of the model. Used in output files. • modelName: Character. Name of the model. Used when saving the output to files. • nCores: Numeric>0. Number of cores to use in calculations of the model likelihood. • noDiagnostics: Boolean. TRUE if user does not wish model diagnostics to be printed - FALSE by default. • noValidation: Boolean. TRUE if user does not wish model input to be validated before estimation - FALSE by default. • outputDirectory: Character. Optional directory for outputs if different from working director - empty by default • panelData: Boolean. TRUE if there are multiple observations (i.e. rows) for each decision maker - Automatically set based on indivID by default. • seed: Numeric. Seed for random number generation. • weights: Character. Name of column in database containing weights for estimation. • workInLogs: Boolean. TRUE for increased numeric precision in models with panel data - FALSE by default. silent Boolean. If TRUE, no messages are printed to screen. Details This function should be run before running apollo_validateData. Value Validated version of apollo_control, with additional element called panelData set to TRUE for repeated choice data. apollo_validateData Validates data Description Checks consistency of the database with apollo_control, sorts it by indivID, and adds an internal ID variable (apollo_sequence) Usage apollo_validateData(database, apollo_control, silent) Arguments database data.frame. Data used by model. apollo_control List. Options controlling the running of the code. See apollo_validateInputs. silent Boolean. TRUE to prevent the function from printing to the console. Default is FALSE. Details This function should be called after calling apollo_validateControl. Observations are sorted only if apollo_control$panelData=TRUE. Value Data.frame. Validated version of database. apollo_validateHBControl Validates the apollo_HB list of parameters Description Validates the apollo_HB list of parameters and sets default values for the omitted ones. Usage apollo_validateHBControl( apollo_HB, apollo_beta, apollo_fixed, apollo_control, silent = FALSE ) Arguments apollo_HB List. Contains options for Bayesian estimation. See ?RSGHB::doHB for details. 
apollo_validateInputs Prepares input for apollo_estimate

Description
Searches the user work space (.GlobalEnv) for all necessary input to run apollo_estimate, and packs it in a single list.

Usage
apollo_validateInputs(
  apollo_beta = NA,
  apollo_fixed = NA,
  database = NA,
  apollo_control = NA,
  apollo_HB = NA,
  apollo_draws = NA,
  apollo_randCoeff = NA,
  apollo_lcPars = NA,
  recycle = FALSE,
  silent = FALSE
)

Arguments
apollo_beta Named numeric vector. Names and values for parameters.
apollo_fixed Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation.
database data.frame. Data used by model.
apollo_control List. Options controlling the running of the code. User input is required for all settings except those with a default or marked as optional.
• analyticGrad: Boolean. TRUE to use analytical gradients during parameter estimation, if they are available. FALSE to use numerical gradients. TRUE by default.
• calculateLLC: Boolean. TRUE if user wants to calculate LL at constants (if applicable). TRUE by default.
• HB: Boolean. TRUE if using RSGHB for Bayesian estimation of model.
• indivID: Character. Name of column in the database with each decision maker's ID.
• mixing: Boolean. TRUE for models that include random parameters.
• modelDescr: Character. Description of the model. Used in output files.
• modelName: Character. Name of the model. Used when saving the output to files.
• nCores: Numeric>0. Number of cores to use in calculations of the model likelihood.
• noDiagnostics: Boolean. TRUE if user does not wish model diagnostics to be printed. FALSE by default.
• noValidation: Boolean. TRUE if user does not wish model input to be validated before estimation. FALSE by default.
• outputDirectory: Character. Optional directory for outputs if different from working directory. Empty by default.
• panelData: Boolean. TRUE if there are multiple observations (i.e. rows) for each decision maker. Automatically set based on indivID by default.
• seed: Numeric. Seed for random number generation.
• weights: Character. Name of column in database containing weights for estimation.
• workInLogs: Boolean. TRUE for increased numeric precision in models with panel data. FALSE by default.
apollo_HB List. Contains options for Bayesian estimation. See ?RSGHB::doHB for details. Parameters modelname, gVarNamesFixed, gVarNamesNormal, gDIST, svN and FC are automatically set based on the other arguments of this function. Other settings to include are the following.
• constraintNorm: Character vector. Constraints for random coefficients in Bayesian estimation. Constraints can be written as "b1>b2", "b1<b2", "b1>0", or "b1<0".
• fixedA: Named numeric vector. Contains the names and fixed mean values of random parameters. For example, c(b1=0) fixes the mean of b1 to zero.
• fixedD: Named numeric vector. Contains the names and fixed variance of random parameters. For example, c(b1=1) fixes the variance of b1 to one.
• gNCREP: Numeric. Number of burn-in iterations to use prior to convergence (default=10^5).
• gNEREP: Numeric. Number of iterations to keep for averaging after convergence has been reached (default=10^5).
• gINFOSKIP: Numeric. Number of iterations between printing/plotting information about the iteration process (default=250).
• hbDist: Mandatory setting. A named character vector determining the distribution of each parameter to be estimated. Possible values are as follows.
– "CN+": Positive censored normal.
– "CN-": Negative censored normal.
– "DNE": Parameter kept at its starting value (not estimated).
– "JSB": Johnson SB.
– "LN+": Positive log-normal.
– "LN-": Negative log-normal.
– "N": Normal.
– "NR": Fixed (as in non-random) parameter.
apollo_draws List of arguments describing the inter and intra individual draws. Required only if apollo_control$mixing = TRUE. Unused elements can be omitted.
• interDrawsType: Character. Type of inter-individual draws ('halton', 'mlhs', 'pmc', 'sobol', 'sobolOwen', 'sobolFaureTezuka', 'sobolOwenFaureTezuka' or the name of an object loaded in memory, see manual in www.ApolloChoiceModelling.com for details).
• interNDraws: Numeric scalar (>=0). Number of inter-individual draws per individual. Should be set to 0 if not using them.
• interNormDraws: Character vector. Names of normally distributed inter-individual draws.
• interUnifDraws: Character vector. Names of uniform-distributed inter-individual draws.
• intraDrawsType: Character. Type of intra-individual draws ('halton', 'mlhs', 'pmc', 'sobol', 'sobolOwen', 'sobolFaureTezuka', 'sobolOwenFaureTezuka' or the name of an object loaded in memory).
• intraNDraws: Numeric scalar (>=0). Number of intra-individual draws per individual. Should be set to 0 if not using them.
• intraUnifDraws: Character vector. Names of uniform-distributed intra-individual draws.
• intraNormDraws: Character vector. Names of normally distributed intra-individual draws.
apollo_randCoeff Function. Used with mixing models. Constructs the random parameters of a mixing model. Receives two arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: The output of this function (apollo_validateInputs).
apollo_lcPars Function. Used with latent class models. Constructs a list of parameters for each latent class. Receives two arguments:
• apollo_beta: Named numeric vector. Names and values of model parameters.
• apollo_inputs: The output of this function (apollo_validateInputs).
recycle Logical. If TRUE, an older version of apollo_inputs is looked for in the calling environment (parent frame), and any element in that old version created by the user is copied into the new apollo_inputs returned by this function. For recycle=TRUE to work, the old version of apollo_inputs must be named "apollo_inputs". If FALSE, nothing is copied from any older version of apollo_inputs. FALSE is the default.
silent Logical. TRUE to keep the function from printing to the console. Default is FALSE.

Details
All arguments to this function are optional. If the function is called without arguments, then it will look in the user workspace (i.e. the global environment) for variables with the same name as its omitted arguments. We strongly recommend that users visit http://www.apollochoicemodelling.com/ for examples on how to use Apollo. On the website, users will also find a detailed manual and a user group for help and further reference.

Value
List grouping several required inputs for model estimation.
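In practice the call usually takes no arguments, since the function searches the global environment; a minimal sketch of both styles:

```
# Typical call: apollo_beta, apollo_fixed, database, apollo_control,
# etc. are picked up from the global environment.
apollo_inputs <- apollo_validateInputs()

# Explicit equivalent, passing the objects by name.
apollo_inputs <- apollo_validateInputs(
  apollo_beta    = apollo_beta,
  apollo_fixed   = apollo_fixed,
  database       = database,
  apollo_control = apollo_control
)
```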
Can be "Richardson" or "simple", Only used if analytic gradients are available. See argument method in grad for more details. • numDeriv_settings: List. Additional arguments to the Richardson method used by numDeriv to calculate the Hessian. See argument method.args in grad for more details. • scaleBeta: Logical. If TRUE (default), parameters are scaled by their own value before calculating the Hessian to increase numerical stability. However, the output is de-scaled, so they are in the same scale as the apollo_beta argument. Details It calculates the Hessian, variance-covariance, and standard errors at apollo_beta values of an esti- mated model. At least one of the following settings must be provided (ordered by speed of computa- tion): apollo_grad, apollo_logLike, or (apollo_probabilities and apollo_inputs). If more than one is provided, then the priority is: apollo_grad, apollo_logLike, (apollo_probabilities and apollo_inputs). Value List with the following elements • apollo_beta: Named numerical vector. Parameter estimates (model$estimate, not scaled). • corrmat: Numerical matrix. Correlation between parameter estimates. • hessian: Numerical matrix. Hessian of the model at parameter estimates (model$estimate). • hessianScaling: Named numeric vector. Scales used on the paramaters to calculate the Hessian (non-fixed only). • methodsAttempted: Character vector. Name of methods attempted to calculate the Hessian. • methodUsed: Character. Name of method used to calculate the Hessian. • robcorrmat: Numerical matrix. Robust correlation between parameter estimates. • robse: Named numerical vector. Robust standard errors of parameter estimates. • robvarcov: Numerical matrix. Robust variance-covariance matrix. • se: Named numerical vector. Standard errors of parameter estimates. • varcov: Numerical matrix. Variance-covariance matrix. apollo_varList Lists variable names and definitions used inside a function Description Returns a list containing the names and definitions of variables in f, apollo_randCoeff and apollo_lcPars Usage apollo_varList(f, apollo_inputs) Arguments f A function, usually apollo_probabilities apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs. Details It looks for variable definitions inside f, apollo_randCoeff, and apollo_lcPars. It returns them in a list. Value A list of expressions containing all definitions in f, apollo_randCoeff and apollo_probabilities apollo_weighting Applies weights Description Applies weights to individual observations in likelihood function. Usage apollo_weighting(P, apollo_inputs, functionality) Arguments P List of vectors, matrices or 3-dim arrays. Likelihood of the model components. apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs. functionality Character. Setting instructing Apollo what processing to apply to the likelihood function. This is in general controlled by the functions that call apollo_probabilities, though the user can also call apollo_probabilities manually with a given functionality for testing/debugging. Possible values are: • "components": For further processing/debugging, produces likelihood for each model component (if multiple components are present), at the level of individual draws and observations. • "conditionals": For conditionals, produces likelihood of the full model, at the level of individual inter-individual draws. 
• "estimate": For model estimation, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "gradient": For model estimation, produces analytical gradients of the likelihood, where possible. • "output": Prepares output for post-estimation reporting. • "prediction": For model prediction, produces probabilities for individual alternatives and individual model components (if multiple components are present) at the level of an observation, after averaging across draws. • "preprocess": Prepares likelihood functions for use in estimation. • "raw": For debugging, produces probabilities of all alternatives and in- dividual model components at the level of an observation, at the level of individual draws. • "report": Prepares output summarising model and choiceset structure. • "shares_LL": Produces overall model likelihood with constants only. • "validate": Validates model specification, produces likelihood of the full model, at the level of individual decision-makers, after averaging across draws. • "zero_LL": Produces overall model likelihood with all parameters at zero. Value The likelihood (i.e. probability in the case of choice models) of the model in the appropriate form for the given functionality, multiplied by individual-specific weights. apollo_writeF12 Writes an F12 file Description Writes an F12 file (ALogit format) with the results of a model estimation. Usage apollo_writeF12(model, truncateCoeffNames = TRUE) Arguments model Model object. Estimated model object as returned by function apollo_estimate. truncateCoeffNames Boolean. TRUE to truncate parameter names to 10 characters. TRUE by default. Value Nothing. apollo_writeTheta Writes the vector [beta,ll] to a file called modelname_iterations.csv Description Writes the vector [beta,ll] to a file called modelname_iterations.csv Usage apollo_writeTheta( beta, ll, modelName, scaling = NULL, outDir = NULL, apollo_beta = NULL ) Arguments beta vector of parameters to be written (including fixed ones). ll scalar representing the log-likelihood of the whole model. modelName Character. Name of the model. scaling Numeric vector of scales applied to beta outDir Scalar character. Name of output directory apollo_beta Named numeric vector of starting values. Value Nothing. aux_validateRows Validates and expands rows if necessary. Description Validates and expands rows if necessary. Usage aux_validateRows(rows, componentName = NULL, apollo_inputs = NULL) Arguments rows Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Default is "all", equivalent to rep(TRUE, nObs). Set to "all" by default if omitted. componentName Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the func- tion output is directed. apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs. print.apollo Prints brief summary of Apollo model Description Receives an estimated model object and prints a brief summary using the generic print function. Usage ## S3 method for class 'apollo' print(x, ...) Arguments x Model object. Estimated model object as returned by function apollo_estimate. ... further arguments passed to or from other methods. Value nothing. 
apollo_writeF12 Writes an F12 file

Description
Writes an F12 file (ALogit format) with the results of a model estimation.

Usage
apollo_writeF12(model, truncateCoeffNames = TRUE)

Arguments
model Model object. Estimated model object as returned by function apollo_estimate.
truncateCoeffNames Boolean. TRUE to truncate parameter names to 10 characters. TRUE by default.

Value
Nothing.

apollo_writeTheta Writes the vector [beta,ll] to a file called modelname_iterations.csv

Description
Writes the vector [beta,ll] to a file called modelname_iterations.csv.

Usage
apollo_writeTheta(
  beta,
  ll,
  modelName,
  scaling = NULL,
  outDir = NULL,
  apollo_beta = NULL
)

Arguments
beta Vector of parameters to be written (including fixed ones).
ll Scalar representing the log-likelihood of the whole model.
modelName Character. Name of the model.
scaling Numeric vector of scales applied to beta.
outDir Scalar character. Name of output directory.
apollo_beta Named numeric vector of starting values.

Value
Nothing.

aux_validateRows Validates and expands rows if necessary.

Description
Validates and expands rows if necessary.

Usage
aux_validateRows(rows, componentName = NULL, apollo_inputs = NULL)

Arguments
rows Boolean vector. Consideration of which rows to include. Length equal to the number of observations (nObs), with entries equal to TRUE for rows to include, and FALSE for rows to exclude. Set to "all" by default if omitted, equivalent to rep(TRUE, nObs).
componentName Character. Name given to model component. If not provided by the user, Apollo will set the name automatically according to the element in P to which the function output is directed.
apollo_inputs List grouping most common inputs. Created by function apollo_validateInputs.

print.apollo Prints brief summary of Apollo model

Description
Receives an estimated model object and prints a brief summary using the generic print function.

Usage
## S3 method for class 'apollo'
print(x, ...)

Arguments
x Model object. Estimated model object as returned by function apollo_estimate.
... further arguments passed to or from other methods.

Value
Nothing.

summary.apollo Prints summary of Apollo model

Description
Receives an estimated model object and prints a summary using the generic summary function.

Usage
## S3 method for class 'apollo'
summary(object, ..., pTwoSided = FALSE)

Arguments
object Model object. Estimated model object as returned by function apollo_estimate.
... further arguments passed to or from other methods.
pTwoSided Logical. Should two-sided p-values be printed instead of one-sided p-values. FALSE by default.

Value
Nothing.
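Both generics dispatch on the "apollo" class of an estimated model, so plain calls suffice; model is assumed to come from apollo_estimate.

```
print(model)                     # brief summary
summary(model)                   # one-sided p-values (default)
summary(model, pTwoSided = TRUE) # two-sided p-values instead
```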
github.com/uber-go/kafka-client
go
Go
README
---

### Go Kafka Client Library

[MIT License](https://github.com/uber-go/kafka-client/raw/master/LICENSE) [Build Status](https://travis-ci.org/uber-go/kafka-client/branches) [Coverage Status](https://codecov.io/gh/uber-go/kafka-client/branch/master)

A high-level Go client library for Apache Kafka that provides the following primitives on top of [sarama-cluster](https://github.com/bsm/sarama-cluster):

* Competing consumer semantics with dead letter queue (DLQ)
  + Ability to process messages across multiple goroutines
  + Ability to Ack or Nack messages out of order (with optional DLQ)
* Ability to consume from topics spread across different kafka clusters

#### Stability

This library is in alpha. APIs are subject to change, so use at your own risk.

#### Contributing

If you are interested in contributing, please sign the [License Agreement](https://cla-assistant.io/uber-go/kafka-client) and see our [development guide](https://github.com/uber-go/kafka-client/blob/master/docs/DEVELOPMENT-GUIDE.md).

#### Installation

`go get -u github.com/uber-go/kafka-client`

#### Quick Start

```
package main

import (
	"os"
	"os/signal"

	"github.com/uber-go/kafka-client"
	"github.com/uber-go/kafka-client/kafka"
	"github.com/uber-go/tally"
	"go.uber.org/zap"
)

func main() {
	// Mapping from cluster name to list of broker IP addresses.
	brokers := map[string][]string{
		"sample_cluster":     []string{"127.0.0.1:9092"},
		"sample_dlq_cluster": []string{"127.0.0.1:9092"},
	}
	// Mapping from topic name to cluster that has that topic.
	topicClusterAssignment := map[string][]string{
		"sample_topic": []string{"sample_cluster"},
	}

	// First create the kafkaclient; it's the entry point for creating consumers or producers.
	// It takes as input a name resolver that knows how to map topic names to broker IP addresses.
	client := kafkaclient.New(kafka.NewStaticNameResolver(topicClusterAssignment, brokers), zap.NewNop(), tally.NoopScope)

	// Next, set up the consumer config for consuming from a set of topics.
	config := &kafka.ConsumerConfig{
		TopicList: kafka.ConsumerTopicList{
			kafka.ConsumerTopic{ // Consumer Topic is a combination of topic + dead-letter-queue.
				Topic: kafka.Topic{ // Each topic is a tuple of (name, clusterName).
					Name:    "sample_topic",
					Cluster: "sample_cluster",
				},
				DLQ: kafka.Topic{
					Name:    "sample_consumer_dlq",
					Cluster: "sample_dlq_cluster",
				},
			},
		},
		GroupName:   "sample_consumer",
		Concurrency: 100, // number of goroutines processing messages in parallel
	}

	// Create the consumer through the previously created client.
	consumer, err := client.NewConsumer(config)
	if err != nil {
		panic(err)
	}

	// Finally, start consuming.
	if err := consumer.Start(); err != nil {
		panic(err)
	}

	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, os.Interrupt)

	for {
		select {
		case msg, ok := <-consumer.Messages():
			if !ok {
				return // channel closed
			}
			// process is the application's message handler (not shown here).
			if err := process(msg); err != nil {
				msg.Nack()
			} else {
				msg.Ack()
			}
		case <-sigCh:
			consumer.Stop()
			<-consumer.Closed()
		}
	}
}
```

Documentation
---

### Index

* [type Client](#Client)
  + [func New(resolver kafka.NameResolver, logger *zap.Logger, scope tally.Scope) Client](#New)
* [type ConsumerOption](#ConsumerOption)
  + [func WithClientID(clientID string) ConsumerOption](#WithClientID)
  + [func WithDLQTopics(topicList kafka.ConsumerTopicList) ConsumerOption](#WithDLQTopics)
  + [func WithRetryTopics(topicList kafka.ConsumerTopicList) ConsumerOption](#WithRetryTopics)

### Constants

This section is empty.
### Variables

This section is empty.

### Functions

This section is empty.

### Types

#### type [Client](https://github.com/uber-go/kafka-client/blob/v0.2.2/client.go#L35)

```
type Client interface {
	// NewConsumer returns a new instance of kafka consumer.
	NewConsumer(config *kafka.ConsumerConfig, consumerOpts ...ConsumerOption) (kafka.Consumer, error)
}
```

Client refers to the kafka client. Serves as the entry point to producing or consuming messages from kafka.

#### func [New](https://github.com/uber-go/kafka-client/blob/v0.2.2/client.go#L48)

```
func New(resolver kafka.NameResolver, logger *zap.Logger, scope tally.Scope) Client
```

New returns a new kafka client.

#### type [ConsumerOption](https://github.com/uber-go/kafka-client/blob/v0.2.2/consumerOptions.go#L30)

```
type ConsumerOption interface {
	// contains filtered or unexported methods
}
```

ConsumerOption is the type for optional arguments to the NewConsumer constructor.

#### func [WithClientID](https://github.com/uber-go/kafka-client/blob/v0.2.2/consumerOptions.go#L83) (added in v0.2.2)

```
func WithClientID(clientID string) ConsumerOption
```

WithClientID sets the client id.

#### func [WithDLQTopics](https://github.com/uber-go/kafka-client/blob/v0.2.2/consumerOptions.go#L48) (added in v0.1.3)

```
func WithDLQTopics(topicList kafka.ConsumerTopicList) ConsumerOption
```

WithDLQTopics creates a range consumer for the specified consumer DLQ topics.

#### func [WithRetryTopics](https://github.com/uber-go/kafka-client/blob/v0.2.2/consumerOptions.go#L66) (added in v0.1.3)

```
func WithRetryTopics(topicList kafka.ConsumerTopicList) ConsumerOption
```

WithRetryTopics creates a consumer for the specified consumer Retry topics.
bee-block
rust
Rust
Crate bee_block
===
Core data types for blocks in the tangle.

Re-exports
`pub use self::block::dto::BlockDto;`
`pub use self::block::Block;`
`pub use self::block::BlockBuilder;`

Modules
- address: A module that provides types and syntactic validations of addresses.
- block: A module that provides types and syntactic validations of blocks.
- dto: A module that provides DTOs.
- helper: A module that contains helper functions and types.
- input: A module that provides types and syntactic validations of inputs.
- output: A module that provides types and syntactic validations of outputs.
- parent: A module that provides types and syntactic validations of parents. The parents module defines the core data type for storing the blocks directly approved by a block.
- payload: A module that provides types and syntactic validations of payloads. The payload module defines the core data types for representing block payloads.
- protocol: A module that provides types and syntactic validations of protocol parameters.
- rand: A module that provides utilities for random generation of types.
- semantic: A module that provides types and rules for semantic validation.
- signature: A module that provides types and syntactic validations of signatures.
- unlock: A module that provides types and syntactic validations of unlocks.

Macros
- create_bitflags: A convenience macro to work around the fact the `[bitflags]` crate does not yet support iterating over the individual flags. This macro essentially creates the `[bitflags]` and puts the individual flags into an associated constant `pub const ALL_FLAGS: &'static []`.
- impl_id: TODO
- string_serde_impl: Helper macro to serialize types to string via serde.

Structs
- BlockId: A block identifier, the BLAKE2b-256 hash of the block bytes. See https://www.blake2.net/ for more information.

Enums
- DtoError
- Error: Error occurring when creating/parsing/validating blocks.
- InxError
Struct bee_block::block::dto::BlockDto === ``` pub struct BlockDto { pub protocol_version: u8, pub parents: Vec<String>, pub payload: Option<PayloadDto>, pub nonce: String, } ``` The block object that nodes gossip around in the network. Fields --- `protocol_version: u8``parents: Vec<String>``payload: Option<PayloadDto>``nonce: String`Trait Implementations --- ### impl Clone for BlockDto #### fn clone(&self) -> BlockDto Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where    __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. #### fn from(value: &Block) -> Self Converts to this type from the input type.### impl PartialEq<BlockDto> for BlockDto #### fn eq(&self, other: &BlockDto) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where    __S: Serializer, Serialize this value into the given Serde serializer. ### impl StructuralEq for BlockDto ### impl StructuralPartialEq for BlockDto Auto Trait Implementations --- ### impl RefUnwindSafe for BlockDto ### impl Send for BlockDto ### impl Sync for BlockDto ### impl Unpin for BlockDto ### impl UnwindSafe for BlockDto Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. #### fn equivalent(&self, key: &K) -> bool Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> IntoRequest<T> for T #### fn into_request(self) -> Request<TWrap the input message `T` in a `tonic::Request`### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T> ToOwned for Twhere    T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning.
#### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere    V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Struct bee_block::block::Block === ``` pub struct Block { /* private fields */ } ``` Represent the object that nodes gossip around the network. Implementations --- ### impl Block #### pub fn try_from_dto(    value: &BlockDto,    protocol_parameters: &ProtocolParameters) -> Result<Block, DtoError### impl Block #### pub const LENGTH_MIN: usize = 46usize The minimum number of bytes in a block. #### pub const LENGTH_MAX: usize = 32_768usize The maximum number of bytes in a block. #### pub fn build(parents: Parents) -> BlockBuilder Creates a new `BlockBuilder` to construct an instance of a `Block`. #### pub fn protocol_version(&self) -> u8 Returns the protocol version of a `Block`. #### pub fn parents(&self) -> &Parents Returns the parents of a `Block`. #### pub fn payload(&self) -> Option<&PayloadReturns the optional payload of a `Block`. #### pub fn nonce(&self) -> u64 Returns the nonce of a `Block`. #### pub fn id(&self) -> BlockId Computes the identifier of the block. #### pub fn into_parents(self) -> Parents Consumes the `Block`, and returns ownership over its `Parents`. #### pub fn unpack_strict<T: AsRef<[u8]>>(    bytes: T,    visitor: &<Self as Packable>::UnpackVisitor) -> Result<Self, UnpackError<<Self as Packable>::UnpackError, UnexpectedEOF>Unpacks a `Block` from a sequence of bytes doing syntactical checks and verifying that there are no trailing bytes in the sequence. Trait Implementations --- ### impl Clone for Block #### fn clone(&self) -> Block Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where    __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. #### fn from(value: &Block) -> Self Converts to this type from the input type.### impl Packable for Block #### type UnpackError = Error The error type that can be returned if some semantic error occurs while unpacking. FIXME: docs#### fn pack<P: Packer>(&self, packer: &mutP) -> Result<(), P::ErrorPacks this value into the given `Packer`.#### fn unpack<U: Unpacker, const VERIFY: bool>(    unpacker: &mutU,    visitor: &Self::UnpackVisitor) -> Result<Self, UnpackError<Self::UnpackError, U::Error>Unpacks this value from the given `Unpacker`. The `VERIFY` generic parameter can be used to skip additional syntactic checks. #### fn eq(&self, other: &Block) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
The default implementation is almost always sufficient, and should not be overridden without very good reason. #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where    __S: Serializer, Serialize this value into the given Serde serializer. ### impl StructuralEq for Block ### impl StructuralPartialEq for Block Auto Trait Implementations --- ### impl RefUnwindSafe for Block ### impl Send for Block ### impl Sync for Block ### impl Unpin for Block ### impl UnwindSafe for Block Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. #### fn equivalent(&self, key: &K) -> bool Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> IntoRequest<T> for T #### fn into_request(self) -> Request<TWrap the input message `T` in a `tonic::Request`### impl<P> PackableExt for Pwhere    P: Packable, #### fn unpack_verified<T>(    bytes: T,    visitor: &<P as Packable>::UnpackVisitor) -> Result<P, UnpackError<<P as Packable>::UnpackError, UnexpectedEOF>>where    T: AsRef<[u8]>, Unpacks this value from a type that implements [`AsRef<[u8]>`]. #### fn unpack_unverified<T>(    bytes: T) -> Result<P, UnpackError<<P as Packable>::UnpackError, UnexpectedEOF>>where    T: AsRef<[u8]>, Unpacks this value from a type that implements [`AsRef<[u8]>`] skipping some syntatical checks. #### fn packed_len(&self) -> usize Returns the length in bytes of the value after being packed. The returned value always matches the number of bytes written using `pack`. Convenience method that packs this value into a `Vec<u8>`.### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T> ToOwned for Twhere    T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. #### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere    V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Struct bee_block::block::BlockBuilder === ``` pub struct BlockBuilder<P: NonceProvider = Miner> { /* private fields */ } ``` A builder to build a `Block`. 
Implementations --- ### impl<P: NonceProvider> BlockBuilder<P#### pub fn new(parents: Parents) -> Self Creates a new `BlockBuilder`. #### pub fn with_protocol_version(self, protocol_version: u8) -> Self Adds a protocol version to a `BlockBuilder`. #### pub fn with_payload(self, payload: Payload) -> Self Adds a payload to a `BlockBuilder`. #### pub fn with_nonce_provider(self, nonce_provider: P) -> Self Adds a nonce provider to a `BlockBuilder`. #### pub fn finish(self, min_pow_score: u32) -> Result<Block, ErrorFinishes the `BlockBuilder` into a `Block`. Trait Implementations --- ### impl<P: Clone + NonceProvider> Clone for BlockBuilder<P#### fn clone(&self) -> BlockBuilder<PReturns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read moreAuto Trait Implementations --- ### impl<P> RefUnwindSafe for BlockBuilder<P>where    P: RefUnwindSafe, ### impl<P> Send for BlockBuilder<P>where    P: Send, ### impl<P> Sync for BlockBuilder<P>where    P: Sync, ### impl<P> Unpin for BlockBuilder<P>where    P: Unpin, ### impl<P> UnwindSafe for BlockBuilder<P>where    P: UnwindSafe, Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> IntoRequest<T> for T #### fn into_request(self) -> Request<TWrap the input message `T` in a `tonic::Request`### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T> ToOwned for Twhere    T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. #### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere    V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more Module bee_block::address === A module that provides types and syntactic validations of addresses. Modules --- dtoStructs --- AliasAddressAn alias address.Ed25519AddressAn Ed25519 address.NftAddressAn NFT address.Enums --- AddressA generic address supporting different address kinds. Module bee_block::block === A module that provides types and syntactic validations of blocks. 
Modules
- dto

Structs
- Block: Represent the object that nodes gossip around the network.
- BlockBuilder: A builder to build a `Block`.

Module bee_block::dto
===
A module that provides DTOs.

Structs
- U256Dto: Describes a U256.

Module bee_block::helper
===
A module that contains helper functions and types.

Functions
- network_name_to_id: Hashes a string network name to a digit network ID.

Module bee_block::input
===
A module that provides types and syntactic validations of inputs.

Modules
- dto

Structs
- TreasuryInput: `TreasuryInput` is an input which references a milestone which generated a `TreasuryOutput`.
- UtxoInput: Represents an input referencing an output.

Enums
- Input: A generic input supporting different input kinds.

Constants
- INPUT_COUNT_MAX: The maximum number of inputs of a transaction.
- INPUT_COUNT_RANGE: The range of valid numbers of inputs of a transaction.
- INPUT_INDEX_MAX: The maximum index of inputs of a transaction.
- INPUT_INDEX_RANGE: The range of valid indices of inputs of a transaction.

Module bee_block::output
===
A module that provides types and syntactic validations of outputs.

Re-exports
`pub use self::alias::AliasOutput;`
`pub use self::alias::AliasOutputBuilder;`
`pub use self::basic::BasicOutput;`
`pub use self::basic::BasicOutputBuilder;`
`pub use self::feature::Feature;`
`pub use self::feature::Features;`
`pub use self::foundry::FoundryOutput;`
`pub use self::foundry::FoundryOutputBuilder;`
`pub use self::nft::NftOutput;`
`pub use self::nft::NftOutputBuilder;`
`pub use self::unlock_condition::UnlockCondition;`
`pub use self::unlock_condition::UnlockConditions;`

Modules
- alias
- basic
- dto
- feature
- foundry
- nft
- unlock_condition

Structs
- AliasId: TODO.
- FoundryId: Defines the unique identifier of a foundry.
- InputsCommitment: Represents a commitment to transaction inputs.
- NativeToken
- NativeTokens
- NativeTokensBuilder: A builder for `NativeTokens`.
- NftId: TODO.
- OutputId: The identifier of an `Output`.
- RentStructure: Specifies the current parameters for the byte cost computation.
- RentStructureBuilder: Builder for a `RentStructure`.
- SimpleTokenScheme
- TokenId: TODO.
- TreasuryOutput: `TreasuryOutput` is an output which holds the treasury of a network.

Enums
- ChainId
- Output: A generic output that can represent different types defining the deposit of funds.
- StateTransitionError
- TokenScheme

Constants
- OUTPUT_COUNT_MAX: The maximum number of outputs of a transaction.
- OUTPUT_COUNT_RANGE: The range of valid numbers of outputs of a transaction.
- OUTPUT_INDEX_MAX: The maximum index of outputs of a transaction.
- OUTPUT_INDEX_RANGE: The range of valid indices of outputs of a transaction.

Traits
- Rent: A trait to facilitate the computation of the byte cost of block outputs, which is central to dust protection.
- StateTransitionVerifier

Module bee_block::parent
===
A module that provides types and syntactic validations of parents. The parents module defines the core data type for storing the blocks directly approved by a block.

Structs
- Parents: A `Block`'s `Parents` are the `BlockId`s of the blocks it directly approves.

Module bee_block::payload
===
A module that provides types and syntactic validations of payloads. The payload module defines the core data types for representing block payloads.
Re-exports
`pub use self::milestone::MilestoneOptions;`
`pub use self::milestone::MilestonePayload;`
`pub use self::tagged_data::TaggedDataPayload;`
`pub use self::transaction::TransactionPayload;`
`pub use self::treasury_transaction::TreasuryTransactionPayload;`

Modules
- dto
- milestone: Module describing the milestone payload.
- tagged_data: Module describing the tagged data payload.
- transaction: Module describing the transaction payload.
- treasury_transaction: Module describing the treasury payload.

Structs
- OptionalPayload: Representation of an optional `Payload`. Essentially an `Option<Payload>` with a different `Packable` implementation, to conform to specs.

Enums
- Payload: A generic payload that can represent different types defining block payloads.

Module bee_block::protocol
===
A module that provides types and syntactic validations of protocol parameters.

Structs
- ProtocolParameters: Defines the parameters of the protocol.

Functions
- protocol_parameters: Returns a `ProtocolParameters` for testing purposes.

Module bee_block::rand
===
A module that provides utilities for random generation of types.

Modules
- address: Module providing random address generation utilities.
- block: Module providing random block generation utilities.
- bool: Module providing random boolean generation utilities.
- bytes: Module providing random byte generation utilities.
- input: Module providing random input generation utilities.
- milestone: Module providing random milestone generation utilities.
- milestone_option: Module providing random milestone option generation utilities.
- number: Module providing random number generation utilities.
- option: Module providing random option generation utilities.
- output: Module providing random output generation utilities.
- parents: Module providing random parents generation utilities.
- payload: Module providing random payload generation utilities.
- receipt: Module providing random receipt generation utilities.
- string: Module providing random string generation utilities.
- transaction: Module providing random transaction generation utilities.

Module bee_block::semantic
===
A module that provides types and rules for semantic validation.

Structs
- ValidationContext

Enums
- ConflictError: Errors related to ledger types.
- ConflictReason: Represents the different reasons why a transaction can conflict with the ledger state.

Functions
- semantic_validation

Module bee_block::signature
===
A module that provides types and syntactic validations of signatures.

Modules
- dto

Structs
- Ed25519Signature: An Ed25519 signature.

Enums
- Signature: A `Signature` contains a signature which is used to unlock a transaction input.

Module bee_block::unlock
===
A module that provides types and syntactic validations of unlocks.

Modules
- dto

Structs
- AliasUnlock: Points to the unlock of a consumed alias output.
- NftUnlock: Points to the unlock of a consumed NFT output.
- ReferenceUnlock: An `Unlock` that refers to another unlock.
- SignatureUnlock: An `Unlock` which is used to unlock a signature locked `Input`.
- Unlocks: A collection of unlocks.

Enums
- Unlock: Defines the mechanism by which a transaction input is authorized to be consumed.

Constants
- UNLOCK_COUNT_MAX: The maximum number of unlocks of a transaction.
- UNLOCK_COUNT_RANGE: The range of valid numbers of unlocks of a transaction.
- UNLOCK_INDEX_MAX: The maximum index of unlocks of a transaction.
- UNLOCK_INDEX_RANGE: The range of valid indices of unlocks of a transaction.

Macro bee_block::create_bitflags
===
```
macro_rules!
create_bitflags { ($(#[$meta:meta])* $vis:vis $Name:ident, $TagType:ty, [$(($FlagName:ident, $TypeName:ident),)+]) => { ... }; } ``` A convenience macro to work around the fact the `[bitflags]` crate does not yet support iterating over the individual flags. This macro essentially creates the `[bitflags]` and puts the individual flags into an associated constant `pub const ALL_FLAGS: &'static []`. Macro bee_block::impl_id === ``` macro_rules! impl_id { ($vis:vis $name:ident, $length:literal, $doc:literal) => { ... }; } ``` TODO Macro bee_block::string_serde_impl === ``` macro_rules! string_serde_impl { ($type:ty) => { ... }; } ``` Helper macro to serialize types to string via serde. Struct bee_block::BlockId === ``` pub struct BlockId(_); ``` A block identifier, the BLAKE2b-256 hash of the block bytes. See https://www.blake2.net/ for more information. Implementations --- ### impl BlockId #### pub const LENGTH: usize = 32usize The length of a [`$ty`]. #### pub fn new(bytes: [u8; 32]) -> Self Creates a new [`$ty`]. #### pub fn null() -> Self Creates a null [`$ty`]. #### pub fn is_null(&self) -> bool Checks if the [`$ty`] is null. Methods from Deref<Target = [u8; 32]> --- 1.57.0 · source#### pub fn as_slice(&self) -> &[T]Notable traits for &[u8]`impl Read for &[u8]impl Write for &mut [u8]` Returns a slice containing the entire array. Equivalent to `&s[..]`. #### pub fn each_ref(&self) -> [&T; N] 🔬This is a nightly-only experimental API. (`array_methods`)Borrows each element and returns an array of references with the same size as `self`. ##### Example ``` #![feature(array_methods)] let floats = [3.1, 2.7, -1.0]; let float_refs: [&f64; 3] = floats.each_ref(); assert_eq!(float_refs, [&3.1, &2.7, &-1.0]); ``` This method is particularly useful if combined with other methods, like `map`. This way, you can avoid moving the original array if its elements are not `Copy`. ``` #![feature(array_methods)] let strings = ["Ferris".to_string(), "♥".to_string(), "Rust".to_string()]; let is_ascii = strings.each_ref().map(|s| s.is_ascii()); assert_eq!(is_ascii, [true, false, true]); // We can still access the original array: it has not been moved. assert_eq!(strings.len(), 3); ``` #### pub fn split_array_ref<const M: usize>(&self) -> (&[T; M], &[T]) 🔬This is a nightly-only experimental API. (`split_array`)Divides one array reference into two at an index. The first will contain all indices from `[0, M)` (excluding the index `M` itself) and the second will contain all indices from `[M, N)` (excluding the index `N` itself). ##### Panics Panics if `M > N`. ##### Examples ``` #![feature(split_array)] let v = [1, 2, 3, 4, 5, 6]; { let (left, right) = v.split_array_ref::<0>(); assert_eq!(left, &[]); assert_eq!(right, &[1, 2, 3, 4, 5, 6]); } { let (left, right) = v.split_array_ref::<2>(); assert_eq!(left, &[1, 2]); assert_eq!(right, &[3, 4, 5, 6]); } { let (left, right) = v.split_array_ref::<6>(); assert_eq!(left, &[1, 2, 3, 4, 5, 6]); assert_eq!(right, &[]); } ``` #### pub fn rsplit_array_ref<const M: usize>(&self) -> (&[T], &[T; M]) 🔬This is a nightly-only experimental API. (`split_array`)Divides one array reference into two at an index from the end. The first will contain all indices from `[0, N - M)` (excluding the index `N - M` itself) and the second will contain all indices from `[N - M, N)` (excluding the index `N` itself). ##### Panics Panics if `M > N`. 
Methods from Deref<Target = [u8; 32]>
---
#### pub fn as_slice(&self) -> &[T]
Returns a slice containing the entire array. Equivalent to `&s[..]`.

#### pub fn each_ref(&self) -> [&T; N]
🔬 This is a nightly-only experimental API (`array_methods`). Borrows each element and returns an array of references with the same size as `self`.

##### Example
```
#![feature(array_methods)]
let floats = [3.1, 2.7, -1.0];
let float_refs: [&f64; 3] = floats.each_ref();
assert_eq!(float_refs, [&3.1, &2.7, &-1.0]);
```
This method is particularly useful if combined with other methods, like `map`. This way, you can avoid moving the original array if its elements are not `Copy`.
```
#![feature(array_methods)]
let strings = ["Ferris".to_string(), "♥".to_string(), "Rust".to_string()];
let is_ascii = strings.each_ref().map(|s| s.is_ascii());
assert_eq!(is_ascii, [true, false, true]);

// We can still access the original array: it has not been moved.
assert_eq!(strings.len(), 3);
```

#### pub fn split_array_ref<const M: usize>(&self) -> (&[T; M], &[T])
🔬 This is a nightly-only experimental API (`split_array`). Divides one array reference into two at an index. The first will contain all indices from `[0, M)` (excluding the index `M` itself) and the second will contain all indices from `[M, N)` (excluding the index `N` itself).

##### Panics
Panics if `M > N`.

##### Examples
```
#![feature(split_array)]
let v = [1, 2, 3, 4, 5, 6];
{
    let (left, right) = v.split_array_ref::<0>();
    assert_eq!(left, &[]);
    assert_eq!(right, &[1, 2, 3, 4, 5, 6]);
}
{
    let (left, right) = v.split_array_ref::<2>();
    assert_eq!(left, &[1, 2]);
    assert_eq!(right, &[3, 4, 5, 6]);
}
{
    let (left, right) = v.split_array_ref::<6>();
    assert_eq!(left, &[1, 2, 3, 4, 5, 6]);
    assert_eq!(right, &[]);
}
```

#### pub fn rsplit_array_ref<const M: usize>(&self) -> (&[T], &[T; M])
🔬 This is a nightly-only experimental API (`split_array`). Divides one array reference into two at an index from the end. The first will contain all indices from `[0, N - M)` (excluding the index `N - M` itself) and the second will contain all indices from `[N - M, N)` (excluding the index `N` itself).

##### Panics
Panics if `M > N`.

##### Examples
```
#![feature(split_array)]
let v = [1, 2, 3, 4, 5, 6];
{
    let (left, right) = v.rsplit_array_ref::<0>();
    assert_eq!(left, &[1, 2, 3, 4, 5, 6]);
    assert_eq!(right, &[]);
}
{
    let (left, right) = v.rsplit_array_ref::<2>();
    assert_eq!(left, &[1, 2, 3, 4]);
    assert_eq!(right, &[5, 6]);
}
{
    let (left, right) = v.rsplit_array_ref::<6>();
    assert_eq!(left, &[]);
    assert_eq!(right, &[1, 2, 3, 4, 5, 6]);
}
```
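Because `BlockId` dereferences to `[u8; 32]`, these array methods are available on it directly through auto-deref; a quick sketch:

```
use bee_block::BlockId;

fn main() {
    let id = BlockId::null();
    // Auto-deref exposes the [u8; 32] API on BlockId values.
    let bytes: &[u8] = id.as_slice();
    assert_eq!(bytes.len(), BlockId::LENGTH);
}
```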
Trait Implementations
---
### impl<__AsRefT: ?Sized> AsRef<__AsRefT> for BlockId where [u8; 32]: AsRef<__AsRefT>
#### fn as_ref(&self) -> &__AsRefT
Converts this type into a shared reference of the (usually inferred) input type.

### impl Clone for BlockId
#### fn clone(&self) -> BlockId
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl Debug for BlockId
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Deref for BlockId
#### type Target = [u8; 32]
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.

### impl<'de> Deserialize<'de> for BlockId
#### fn deserialize<D>(deserializer: D) -> Result<BlockId, D::Error> where D: Deserializer<'de>
Deserialize this value from the given Serde deserializer.

### impl Display for BlockId
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl From<[u8; 32]> for BlockId
#### fn from(original: [u8; 32]) -> BlockId
Converts to this type from the input type.

### impl From<BlockId> for BlockId
#### fn from(value: BlockId) -> Self
Converts to this type from the input type.

### impl FromStr for BlockId
#### type Err = Error
The associated error which can be returned from parsing.
#### fn from_str(s: &str) -> Result<Self, Self::Err>
Parses a string `s` to return a value of this type.

### impl Hash for BlockId
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher
Feeds a slice of this type into the given `Hasher`.

### impl Ord for BlockId
#### fn cmp(&self, other: &BlockId) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Self
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Self
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Self where Self: PartialOrd<Self>
Restrict a value to a certain interval.

### impl Packable for BlockId
#### type UnpackError = <[u8; 32] as Packable>::UnpackError
The error type that can be returned if some semantic error occurs while unpacking. FIXME: docs
#### fn pack<P: Packer>(&self, packer: &mut P) -> Result<(), P::Error>
Packs this value into the given `Packer`.
#### fn unpack<U: Unpacker, const VERIFY: bool>(unpacker: &mut U, visitor: &Self::UnpackVisitor) -> Result<Self, UnpackError<Self::UnpackError, U::Error>>
Unpacks this value from the given `Unpacker`. The `VERIFY` generic parameter can be used to skip additional syntactic checks.

### impl PartialEq<BlockId> for BlockId
#### fn eq(&self, other: &BlockId) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl PartialOrd<BlockId> for BlockId
#### fn partial_cmp(&self, other: &BlockId) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.

### impl Serialize for BlockId
#### fn serialize<S: Serializer>(&self, s: S) -> Result<S::Ok, S::Error>
Serialize this value into the given Serde serializer.

### impl TryFrom<BlockId> for BlockId
#### type Error = InxError
The type returned in the event of a conversion error.
#### fn try_from(value: BlockId) -> Result<Self, Self::Error>
Performs the conversion.

### impl Copy for BlockId
### impl Eq for BlockId
### impl StructuralEq for BlockId
### impl StructuralPartialEq for BlockId

Auto Trait Implementations
---
### impl RefUnwindSafe for BlockId
### impl Send for BlockId
### impl Sync for BlockId
### impl Unpin for BlockId
### impl UnwindSafe for BlockId
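A hedged round-trip sketch over the `Packable` plumbing above. The `pack_to_vec` name is the conventional `PackableExt` convenience method ("packs this value into a `Vec<u8>`", described below), and `()` as the unpack visitor for a plain byte array is an assumption; neither is confirmed by this page:

```
use bee_block::BlockId;
use packable::PackableExt; // assumed crate path; brings pack_to_vec/unpack_verified into scope

fn main() {
    let id = BlockId::new([7u8; BlockId::LENGTH]);
    let bytes = id.pack_to_vec(); // assumed name of the Vec<u8> convenience packer
    assert_eq!(bytes.len(), id.packed_len());
    // Visitor type assumed to be () for a byte-array-backed type.
    let back = BlockId::unpack_verified(bytes, &()).expect("round-trip");
    assert_eq!(id, back);
}
```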
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

#### fn as_byte_slice(&self) -> &[u8]

### impl<U> AsSliceOf for U where U: AsRef<[u8]> + ?Sized
#### fn as_slice_of<T>(&self) -> Result<&[T], Error> where T: FromByteSlice

### impl<T> Base32Len for T where T: AsRef<[u8]>
#### fn base32_len(&self) -> usize
Calculates the base32 serialized length.

### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

#### default fn get_hash<H, B>(value: &H, build_hasher: &B) -> u64 where H: Hash + ?Sized, B: BuildHasher

### impl<T> CheckBase32<Vec<u5, Global>> for T where T: AsRef<[u8]>
#### type Err = Error
Error type if conversion fails.
#### fn check_base32(self) -> Result<Vec<u5, Global>, <T as CheckBase32<Vec<u5, Global>>>::Err>
Checks if all values are in range and returns an array-like struct of `u5` values.

### impl<Q, K> Equivalent<K> for Q where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized
#### fn equivalent(&self, key: &K) -> bool
Compares self to `key` and returns `true` if they are equal.

### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.

### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> IntoRequest<T> for T
#### fn into_request(self) -> Request<T>
Wraps the input message `T` in a `tonic::Request`.

### impl<P> PackableExt for P where P: Packable
#### fn unpack_verified<T>(bytes: T, visitor: &<P as Packable>::UnpackVisitor) -> Result<P, UnpackError<<P as Packable>::UnpackError, UnexpectedEOF>> where T: AsRef<[u8]>
Unpacks this value from a type that implements [`AsRef<[u8]>`].
#### fn unpack_unverified<T>(bytes: T) -> Result<P, UnpackError<<P as Packable>::UnpackError, UnexpectedEOF>> where T: AsRef<[u8]>
Unpacks this value from a type that implements [`AsRef<[u8]>`], skipping some syntactic checks.
#### fn packed_len(&self) -> usize
Returns the length in bytes of the value after being packed. The returned value always matches the number of bytes written using `pack`. A convenience method that packs this value into a `Vec<u8>` is also provided.

### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.

### impl<T> ToBase32 for T where T: AsRef<[u8]>
#### fn write_base32<W>(&self, writer: &mut W) -> Result<(), <W as WriteBase32>::Err> where W: WriteBase32
Encodes as base32 and writes it to the supplied writer. Implementations shouldn't allocate. A method converting `Self` to a base32 vector is also provided.

### impl<T> ToHex for T where T: AsRef<[u8]>
#### fn encode_hex<U>(&self) -> U where U: FromIterator<char>
Encodes the hex string representing `self` into the result; lower case letters are used (e.g. `f9b4ca`). An upper case variant (e.g. `F9B4CA`) is also provided.

### impl<T> ToOwned for T
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. A variant uses borrowed data to replace owned data, usually by cloning.

### impl<T> ToString for T where T: Display + ?Sized
#### default fn to_string(&self) -> String
Converts the given value to a `String`.

### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

### impl<V, T> VZip<V> for T where V: MultiLane<T>
#### fn vzip(self) -> V

### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.

Enum bee_block::DtoError
===
```
pub enum DtoError {
    InvalidField(&'static str),
    Block(Error),
}
```

Trait Implementations
---
### impl Debug for DtoError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for DtoError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for DtoError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
#### fn provide(&'a self, demand: &mut Demand<'a>)
🔬 This is a nightly-only experimental API (`error_generic_member_access`). Provides type based access to context intended for error reports.

### impl From<Error> for DtoError
#### fn from(error: Error) -> Self
Converts to this type from the input type.
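A small sketch of exhaustively matching the two documented variants (relying on the `Display` impl of the inner `Error`):

```
use bee_block::DtoError;

fn describe(e: &DtoError) -> String {
    match e {
        DtoError::InvalidField(field) => format!("invalid field: {field}"),
        DtoError::Block(inner) => format!("block-level error: {inner}"),
    }
}
```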
Auto Trait Implementations
---
### impl RefUnwindSafe for DtoError
### impl Send for DtoError
### impl Sync for DtoError
### impl Unpin for DtoError
### impl UnwindSafe for DtoError

Blanket Implementations
---
The standard blanket implementations also listed for `BlockId` (`Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `IntoRequest<T>`, `Same<T>`, `ToString`, `TryFrom<U>`, `TryInto<U>`, `VZip<V>`, `WithSubscriber`) apply to `DtoError` as well. In addition, `impl<E> Provider for E where E: Error + ?Sized` exposes the nightly-only `provide` method (`provide_any`), through which data providers supply all values they are able to provide by using `demand`.
Enum bee_block::Error
===
```
pub enum Error {
    CannotReplaceMissingField,
    ConsumedAmountOverflow,
    ConsumedNativeTokensAmountOverflow,
    CreatedAmountOverflow,
    CreatedNativeTokensAmountOverflow,
    CryptoError(CryptoError),
    DuplicateSignatureUnlock(u16),
    DuplicateUtxo(UtxoInput),
    ExpirationUnlockConditionZero,
    FeaturesNotUniqueSorted,
    InputUnlockCountMismatch { input_count: usize, unlock_count: usize },
    InvalidAddress,
    InvalidAddressKind(u8),
    InvalidAliasIndex(<BoundedU16<{ _ }, { _ }> as TryFrom<u16>>::Error),
    InvalidControllerKind(u8),
    InvalidStorageDepositAmount(u64),
    InsufficientStorageDepositAmount { amount: u64, required: u64 },
    StorageDepositReturnExceedsOutputAmount { deposit: u64, amount: u64 },
    InsufficientStorageDepositReturnAmount { deposit: u64, required: u64 },
    InvalidBinaryParametersLength(<BoundedU16<{ _ }, { _ }> as TryFrom<usize>>::Error),
    InvalidEssenceKind(u8),
    InvalidFeatureCount(<BoundedU8<0, { Features::COUNT_MAX }> as TryFrom<usize>>::Error),
    InvalidFeatureKind(u8),
    InvalidFoundryOutputSupply { minted: U256, melted: U256, max: U256 },
    HexError(HexError),
    InvalidInputKind(u8),
    InvalidInputCount(<BoundedU16<{ _ }, { _ }> as TryFrom<usize>>::Error),
    InvalidInputOutputIndex(<BoundedU16<{ _ }, { _ }> as TryFrom<u16>>::Error),
    InvalidBech32Hrp(FromUtf8Error),
    InvalidBlockLength(usize),
    InvalidStateMetadataLength(<BoundedU16<0, { AliasOutput::STATE_METADATA_LENGTH_MAX }> as TryFrom<usize>>::Error),
    InvalidMetadataFeatureLength(<BoundedU16<{ _ }, { _ }> as TryFrom<usize>>::Error),
    InvalidMilestoneMetadataLength(<BoundedU16<{ u16::MIN }, { u16::MAX }> as TryFrom<usize>>::Error),
    InvalidMilestoneOptionCount(<BoundedU8<0, { MilestoneOptions::COUNT_MAX }> as TryFrom<usize>>::Error),
    InvalidMilestoneOptionKind(u8),
    InvalidMigratedFundsEntryAmount(u64),
    InvalidNativeTokenCount(<BoundedU8<0, { NativeTokens::COUNT_MAX }> as TryFrom<usize>>::Error),
    InvalidNetworkName(FromUtf8Error),
    InvalidNftIndex(<BoundedU16<{ _ }, { _ }> as TryFrom<u16>>::Error),
    InvalidOutputAmount(u64),
    InvalidOutputCount(<BoundedU16<{ _ }, { _ }> as TryFrom<usize>>::Error),
    InvalidOutputKind(u8),
    InvalidParentCount(<BoundedU8<{ _ }, { _ }> as TryFrom<usize>>::Error),
    InvalidPayloadKind(u32),
    InvalidPayloadLength { expected: usize, actual: usize },
    InvalidReceiptFundsCount(<BoundedU16<{ _ }, { _ }> as TryFrom<usize>>::Error),
    InvalidReceiptFundsSum(u128),
    InvalidReferenceIndex(<BoundedU16<{ _ }, { _ }> as TryFrom<u16>>::Error),
    InvalidSignature,
    InvalidSignatureKind(u8),
    InvalidStringPrefix(<u8 as TryFrom<usize>>::Error),
    InvalidTaggedDataLength(<BoundedU32<{ _ }, { _ }> as TryFrom<usize>>::Error),
    InvalidTagFeatureLength(<BoundedU8<{ _ }, { _ }> as TryFrom<usize>>::Error),
    InvalidTagLength(<BoundedU8<{ _ }, { _ }> as TryFrom<usize>>::Error),
    InvalidTailTransactionHash,
    InvalidTokenSchemeKind(u8),
    InvalidTransactionAmountSum(u128),
    InvalidTransactionNativeTokensCount(u16),
    InvalidTreasuryOutputAmount(u64),
    InvalidUnlockCount(<BoundedU16<{ _ }, { _ }> as TryFrom<usize>>::Error),
    InvalidUnlockKind(u8),
    InvalidUnlockReference(u16),
    InvalidUnlockAlias(u16),
    InvalidUnlockNft(u16),
    InvalidUnlockConditionCount(<BoundedU8<0, { UnlockConditions::COUNT_MAX }> as TryFrom<usize>>::Error),
    InvalidUnlockConditionKind(u8),
    MigratedFundsNotSorted,
    MilestoneInvalidSignatureCount(<BoundedU8<{ _ }, { _ }> as TryFrom<usize>>::Error),
    MilestonePublicKeysSignaturesCountMismatch { key_count: usize, sig_count: usize },
    MilestoneOptionsNotUniqueSorted,
    MilestoneSignaturesNotUniqueSorted,
    MissingAddressUnlockCondition,
    MissingGovernorUnlockCondition,
    MissingPayload,
    MissingRequiredSenderBlock,
    MissingStateControllerUnlockCondition,
    NativeTokensNotUniqueSorted,
    NativeTokensNullAmount,
    NativeTokensOverflow,
    NetworkIdMismatch { expected: u64, actual: u64 },
    NonZeroStateIndexOrFoundryCounter,
    ParentsNotUniqueSorted,
    ProtocolVersionMismatch { expected: u8, actual: u8 },
    ReceiptFundsNotUniqueSorted,
    RemainingBytesAfterBlock,
    SelfControlledAliasOutput(AliasId),
    SelfDepositNft(NftId),
    SignaturePublicKeyMismatch { expected: String, actual: String },
    StorageDepositReturnOverflow,
    TailTransactionHashNotUnique { previous: usize, current: usize },
    TimelockUnlockConditionZero,
    UnallowedFeature { index: usize, kind: u8 },
    UnallowedUnlockCondition { index: usize, kind: u8 },
    UnlockConditionsNotUniqueSorted,
    UnsupportedOutputKind(u8),
}
```
Error occurring when creating/parsing/validating blocks. Each variant, including the fields of the struct-like variants, is shown in the declaration above.
Trait Implementations
---
### impl Debug for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for Error
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
#### fn provide(&'a self, demand: &mut Demand<'a>)
🔬 This is a nightly-only experimental API (`error_generic_member_access`). Provides type based access to context intended for error reports.
#### fn from(error: Error) -> Self
Converts to this type from the input type.

### impl From<Error> for Error
#### fn from(error: CryptoError) -> Self
Converts to this type from the input type.

### impl From<Error> for InxError
#### fn from(error: Error) -> Self
Converts to this type from the input type.

### impl From<Infallible> for Error
#### fn from(err: Infallible) -> Self
Converts to this type from the input type.

### impl PartialEq<Error> for Error
#### fn eq(&self, other: &Error) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

Auto Trait Implementations
---
### impl RefUnwindSafe for Error
### impl Send for Error
### impl Sync for Error
### impl Unpin for Error
### impl UnwindSafe for Error

Blanket Implementations
---
The same standard blanket implementations listed for `DtoError` above, including `Provider`, also apply to `Error`.
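Variant names alone are often enough for coarse error triage; a small sketch grounded in the variants documented above:

```
use bee_block::Error;

// Groups a few of the count-validation failures under one predicate.
fn is_count_issue(e: &Error) -> bool {
    matches!(
        e,
        Error::InvalidInputCount(_)
            | Error::InvalidOutputCount(_)
            | Error::InvalidUnlockCount(_)
    )
}
```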
Enum bee_block::InxError
===
```
pub enum InxError {
    InvalidId(&'static str, Vec<u8>),
    InvalidString(String),
    InvalidRawBytes(String),
    MissingField(&'static str),
    Block(Error),
}
```

Trait Implementations
---
### impl Debug for InxError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for InxError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for InxError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
#### fn provide(&'a self, demand: &mut Demand<'a>)
🔬 This is a nightly-only experimental API (`error_generic_member_access`). Provides type based access to context intended for error reports.

### impl From<Error> for InxError
#### fn from(error: Error) -> Self
Converts to this type from the input type.

Auto Trait Implementations
---
### impl RefUnwindSafe for InxError
### impl Send for InxError
### impl Sync for InxError
### impl Unpin for InxError
### impl UnwindSafe for InxError

Blanket Implementations
---
The same standard blanket implementations listed for `DtoError` above also apply to `InxError`.
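Because `From<Error>` is implemented for `InxError`, block-level errors lift with `.into()`; a minimal sketch (which variant the conversion produces is not stated here, though `InxError::Block` is the natural candidate):

```
use bee_block::{Error, InxError};

fn to_inx(e: Error) -> InxError {
    // Uses the documented From<Error> impl; presumably wraps in InxError::Block.
    e.into()
}
```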
Package 'pagedown'
===
December 13, 2022

Type: Package
Title: Paginate the HTML Output of R Markdown with CSS for Print
Version: 0.20
Description: Use the paged media properties in CSS and the JavaScript library 'paged.js' to split the content of an HTML document into discrete pages. Each page can have its own page size, page numbers, margin boxes, and running headers, etc. Applications of this package include books, letters, reports, papers, business cards, resumes, and posters.
Depends: R (>= 3.5.0)
Imports: rmarkdown (>= 2.13), bookdown (>= 0.8), htmltools, jsonlite, later (>= 1.0.0), processx, servr (>= 0.23), httpuv, xfun, websocket
Suggests: promises, testit, xaringan, pdftools, revealjs, covr, xml2
License: MIT + file LICENSE
URL: https://github.com/rstudio/pagedown
BugReports: https://github.com/rstudio/pagedown/issues
SystemRequirements: Pandoc (>= 2.2.3)
Encoding: UTF-8
RoxygenNote: 7.1.2
NeedsCompilation: no
Author: <NAME> [aut, cre] (https://orcid.org/0000-0003-0645-5666), <NAME> [aut, cph] (https://orcid.org/0000-0002-0721-5595), <NAME> [aut] (https://orcid.org/0000-0002-1099-3857), <NAME> [aut] (https://orcid.org/0000-0002-6072-3521), <NAME> [ctb] (https://orcid.org/0000-0003-4474-2498), <NAME> [ctb] (https://orcid.org/0000-0002-8335-495X), RStudio, PBC [cph], <NAME> [ctb] (paged.js in resources/js/), <NAME> [ctb] (resume.css in resources/css/), Zulko [ctb] (poster-relaxed.css in resources/css/)
Maintainer: <NAME> <<EMAIL>>
Repository: CRAN
Date/Publication: 2022-12-13 05:50:02 UTC

R topics documented:
* book_crc
* business_card
* chrome_print
* find_chrome
* html_letter
* html_paged
* html_resume
* jss_paged
* poster_relaxed
* thesis_paged

book_crc: Create a book for Chapman & Hall/CRC
---
Description: This output format is similar to html_paged. The only difference is in the default stylesheets.

Usage:
    book_crc(..., css = c("crc-page", "default-page", "default", "crc"))

Arguments: `...`, `css`: Arguments passed to html_paged().

Value: An R Markdown output format.

business_card: Create business cards
---
Description: This output format is based on an example in the GitHub repo https://github.com/RelaxedJS/ReLaXed-examples. See https://pagedown.rbind.io/business-card/ for an example.

Usage:
    business_card(template = pkg_resource("html", "card.html"))

Arguments: `template`: The path to the Pandoc template to convert Markdown to HTML.

Value: An R Markdown output format.

Examples:
    pagedown::business_card()
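A hedged sketch of using the format through rmarkdown (the file name `card.Rmd` is hypothetical):

```
# Render a business-card document with the pagedown output format.
rmarkdown::render("card.Rmd", output_format = pagedown::business_card())
```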
chrome_print: Print a web page to PDF or capture a screenshot using headless Chrome
---
Description: Print an HTML page to PDF or capture a PNG/JPEG screenshot through the Chrome DevTools Protocol. Google Chrome or Microsoft Edge (or Chromium on Linux) must be installed prior to using this function.

Usage:
    chrome_print(
      input,
      output = xfun::with_ext(input, format),
      wait = 2,
      browser = "google-chrome",
      format = c("pdf", "png", "jpeg"),
      options = list(),
      selector = "body",
      box_model = c("border", "content", "margin", "padding"),
      scale = 1,
      work_dir = tempfile(),
      timeout = 30,
      extra_args = c("--disable-gpu"),
      verbose = 0,
      async = FALSE,
      outline = gs_available(),
      encoding
    )

Arguments:
* input: A URL or local file path to an HTML page, or a path to a local file that can be rendered to HTML via rmarkdown::render() (e.g., an R Markdown document or an R script). If the input is to be rendered via rmarkdown::render() and you need to pass any arguments to it, you can pass the whole render() call to chrome_print(), e.g., if you need to use the params argument: pagedown::chrome_print(rmarkdown::render('input.Rmd', params = list(foo = 1:10))). This is because render() returns the HTML file, which can be passed to chrome_print().
* output: The output filename. For a local web page 'foo/bar.html', the default PDF output is 'foo/bar.pdf'; for a remote URL 'https://www.example.org/foo/bar.html', the default output will be 'bar.pdf' under the current working directory. The same rules apply for screenshots.
* wait: The number of seconds to wait for the page to load before printing (in certain cases, the page may not be immediately ready for printing, especially when there are JavaScript applications on the page, so you may need to wait longer).
* browser: Path to Google Chrome, Microsoft Edge or Chromium. This function will try to find it automatically via find_chrome() if the path is not explicitly provided and the environment variable PAGEDOWN_CHROME is not set.
* format: The output format.
* options: A list of page options. See the Chrome DevTools Protocol documentation at https://chromedevtools.github.io/devtools-protocol/tot/Page for the full list of options for PDF output and for screenshots. Note that for PDF output, we have changed the defaults of printBackground (TRUE), preferCSSPageSize (TRUE) and, when available, transferMode (ReturnAsStream) in this function.
* selector: A CSS selector used when capturing a screenshot.
* box_model: The CSS box model used when capturing a screenshot.
* scale: The scale factor used for the screenshot.
* work_dir: Name of the headless Chrome working directory. If the default temporary directory doesn't work, you may try a subdirectory of your home directory.
* timeout: The number of seconds before canceling the document generation. Use a larger value if the document takes longer to build.
* extra_args: Extra command-line arguments to be passed to Chrome.
* verbose: Level of verbosity: 0 means no messages; 1 means to print out some auxiliary messages (e.g., parameters for capturing screenshots); 2 (or TRUE) means all messages, including those from the Chrome processes and WebSocket connections.
* async: Execute chrome_print() asynchronously? If TRUE, chrome_print() returns a promise value (the promises package has to be installed in this case).
* outline: If not FALSE, chrome_print() will add bookmarks to the generated PDF file, based on the table of contents information. This feature is only available for output formats based on html_paged. It is enabled by default, as long as the Ghostscript executable can be detected by find_gs_cmd.
* encoding: Not used. This argument is required by the RStudio IDE.

Value: Path of the output file (invisibly). If async is TRUE, this is a promise value.

References: https://developer.chrome.com/blog/headless-chrome/

find_chrome: Find Google Chrome, Microsoft Edge or Chromium in the system
---
Description: On Windows, this function tries to find Chrome or Edge from the registry. On macOS, it returns a hard-coded path of Chrome under '/Applications'. On Linux, it searches for chromium-browser and google-chrome in the system's PATH variable.

Usage:
    find_chrome()

Value: A character string.
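A short sketch combining the two functions above (file names are hypothetical; a Chrome-family browser must be installed):

```
# Locate a browser explicitly, then print an R Markdown document to PDF.
browser <- pagedown::find_chrome()
pagedown::chrome_print("report.Rmd", browser = browser)

# Passing rendering parameters through render(), as described above:
pagedown::chrome_print(
  rmarkdown::render("report.Rmd", params = list(foo = 1:10))
)
```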
html_letter: Create a letter in HTML
---
Description: This output format is similar to html_paged. The only differences are in the default stylesheets and the default value of the fig_caption parameter, which is set to FALSE. See https://pagedown.rbind.io/html-letter/ for an example.

Usage:
    html_letter(..., css = c("default", "letter"), fig_caption = FALSE)

Arguments: `...`, `css`, `fig_caption`: Arguments passed to html_paged().

Value: An R Markdown output format.

html_paged: Create a paged HTML document suitable for printing
---
Description: This is an output format based on bookdown::html_document2 (which means you can use those Markdown features added by bookdown). The HTML output document is split into multiple pages via the JavaScript library paged.js. These pages contain elements commonly seen in PDF documents, such as page numbers and running headers.

Usage:
    html_paged(
      ...,
      css = c("default-fonts", "default-page", "default"),
      theme = NULL,
      template = pkg_resource("html", "paged.html"),
      csl = NULL,
      front_cover = NULL,
      back_cover = NULL
    )

Arguments:
* ...: Arguments passed to bookdown::html_document2.
* css: A character vector of CSS and Sass file paths. If a path does not contain the '.css', '.sass', or '.scss' extension, it is assumed to be a built-in CSS file. For example, default-fonts means the file pagedown:::pkg_resource('css', 'default-fonts.css'). To see all built-in CSS files, run pagedown:::list_css().
* theme: The Bootstrap theme. By default, Bootstrap is not used.
* template: The path to the Pandoc template to convert Markdown to HTML.
* csl: The path of the Citation Style Language (CSL) file used to format citations and references (see the Pandoc documentation).
* front_cover, back_cover: Paths or URLs to image files to be used as front or back covers. These images are available through CSS variables (see Details).

Details: When a path or a URL is passed to the front_cover or back_cover argument, several CSS variables are created. They are named --front-cover and --back-cover and can be used as values for the CSS property background-image, for example, background-image: var(--front-cover);. When a vector of paths or URLs is used as a value for front_cover or back_cover, the CSS variables are suffixed with an index: --front-cover-1, --front-cover-2, etc.

Value: An R Markdown output format.

References: https://pagedown.rbind.io
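A sketch of wiring a cover image to the CSS variable described in Details (file names are hypothetical):

```
# Expose cover.png to the stylesheet as var(--front-cover); a custom CSS rule
# such as `.front-page { background-image: var(--front-cover); }` can then use it.
fmt <- pagedown::html_paged(front_cover = "cover.png")
rmarkdown::render("report.Rmd", output_format = fmt)
```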
html_resume: Create a resume in HTML
---
Description: This output format is based on Min-Zhong Lu's HTML/CSS in the GitHub repo https://github.com/mnjul/html-resume. See https://pagedown.rbind.io/html-resume/ for an example.

Usage:
    html_resume(
      ...,
      css = "resume",
      template = pkg_resource("html", "resume.html"),
      number_sections = FALSE,
      fig_caption = FALSE
    )

Arguments: `...`, `css`, `template`, `number_sections`, `fig_caption`: See html_paged().

Value: An R Markdown output format.

jss_paged: Create an article for the Journal of Statistical Software
---
Description: This output format is similar to html_paged.

Usage:
    jss_paged(
      ...,
      css = c("jss-fonts", "jss-page", "jss"),
      template = pkg_resource("html", "jss_paged.html"),
      csl = pkg_resource("csl", "journal-of-statistical-software.csl"),
      highlight = NULL,
      pandoc_args = NULL
    )

Arguments: `...`, `css`, `template`, `csl`, `highlight`, `pandoc_args`: Arguments passed to html_paged().

Value: An R Markdown output format.

poster_relaxed: Create posters in HTML
---
Description: The output format poster_relaxed() is based on an example in the GitHub repo https://github.com/RelaxedJS/ReLaXed-examples. See https://pagedown.rbind.io/poster-relaxed/ for an example. The output format poster_jacobs() mimics the style of the "Jacobs Landscape Poster LaTeX Template Version 1.0" at https://www.overleaf.com/gallery/tagged/poster. See https://pagedown.rbind.io/poster-jacobs/ for an example.

Usage:
    poster_relaxed(
      ...,
      css = "poster-relaxed",
      template = pkg_resource("html", "poster-relaxed.html"),
      number_sections = FALSE
    )

    poster_jacobs(
      ...,
      css = "poster-jacobs",
      template = pkg_resource("html", "poster-jacobs.html")
    )

Arguments: `...`, `css`, `template`, `number_sections`: See html_paged().

Value: An R Markdown output format.

thesis_paged: Create a paged HTML thesis document suitable for printing
---
Description: This output format is similar to html_paged. The only difference is in the default stylesheets and Pandoc template. See https://pagedown.rbind.io/thesis-paged/ for an example.

Usage:
    thesis_paged(
      ...,
      css = c("thesis"),
      template = pkg_resource("html", "thesis.html")
    )

Arguments: `...`, `css`, `template`: Arguments passed to html_paged().

Value: An R Markdown output format.
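A typical end-to-end sketch: render with one of these formats, then print to PDF (file names are hypothetical):

```
# HTML resume first, then a print-ready PDF via headless Chrome.
rmarkdown::render("my-resume.Rmd", output_format = pagedown::html_resume())
pagedown::chrome_print("my-resume.html")
```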
Package 'wikiTools'
===
October 12, 2022

Type: Package
Version: 0.0.6
Date: 2022-03-18
Title: Tools for Wikidata and Wikipedia
Description: A set of wrappers intended to check, read and download information from the Wikimedia sources. It is specifically created to work with names of celebrities, in which case their information and statistics can be downloaded. Additionally, it also builds links and snippets to use in combination with the function gallery() in the netCoin package.
License: GPL-3
Depends: R (>= 3.5.0)
Encoding: UTF-8
Imports: WikidataQueryServiceR (>= 1.0.0), WikidataR (>= 1.4.0), httr, jsonlite, ratelimitr
NeedsCompilation: no
Maintainer: <NAME> <<EMAIL>>
RoxygenNote: 7.1.2
Author: <NAME> [aut, cph, cre] (https://orcid.org/0000-0003-2072-6071), <NAME> [aut], <NAME> [aut] (https://orcid.org/0000-0001-8178-9768), <NAME> [aut], <NAME> [aut]
Repository: CRAN
Date/Publication: 2022-03-24 08:40:02 UTC

R topics documented:
* cc
* extractWiki
* filext
* getFiles
* getWikiData
* GetWikidataitem
* getWikiFiles
* getWikiInf
* get_template
* get_template_for_maps
* limit_requester
* nametoWikiFrame
* nametoWikiHtml
* nametoWikiURL
* preName
* req_WDQS
* req_wikimedia_metrics
* searchWiki
* urltoFrame
* urltoHtml
* validUrl
* Wikidata_occupationCount
* Wikidata_sparql_query
* Wikidata_Wikipedia...
* Wikimedia_get_redirect...
* Wikimedia_page_views
* Wikimedia_person_exist...
* Wikimedia_query
* wmflabs_get_allinf...

cc: Converts a text separated by commas into a character vector
---
Description: Converts a text separated by commas into a character vector.

Usage:
    cc(text, sep = ",")

Arguments:
* text: Text to be separated.
* sep: A character of separation. It must be a blank. If it is another character, trailing blanks are suppressed.

Details: Returns inside the text are omitted.

Value: A vector of the split segments of the text.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples:
    ## A text with three names separated with commas is converted into a vector of length 3.
    cc("<NAME>, <NAME>, <NAME>")

extractWiki: Extract the first paragraph of a Wikipedia article with a maximum of characters
---
Description: Extract the first paragraph of a Wikipedia article with a maximum of characters.

Usage:
    extractWiki(
      names,
      language = c("en", "es", "fr", "de", "it"),
      plain = FALSE,
      maximum = 1000
    )

Arguments:
* names: A vector of names, whose entries have to be extracted.
* language: A vector of Wikipedia languages to look for. If the article is not found in the language of the first element, it searches for the following ones.
* plain: If TRUE, the results are delivered in plain format.
* maximum: Maximum number of characters to be included when the paragraph is too large.

Value: A character vector with HTML-formatted (or plain text) Wikipedia paragraphs.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples:
    ## Obtaining information in English Wikidata
    names <- c("<NAME>", "<NAME>")
    info <- getWikiInf(names)
    info$text <- extractWiki(info$label)
filext: Extract the extension of a file
---
Description: Extract the extension of a file.

Usage:
    filext(fn)

Arguments:
* fn: Character vector with the files whose extensions are to be extracted.

Details: This function extracts the extension of a vector of file names.

Value: A character vector of extension names.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples:
    ## For a single item:
    filext("<NAME>.jpg")
    ## You can do the same for a vector:
    filext(c("<NAME>.png", "<NAME>.jpg", "<NAME>.tiff"))

getFiles: Downloads a list of files into a specified path of the computer, and returns a vector of the not-found names (if any)
---
Description: Downloads a list of files into a specified path of the computer, and returns a vector of the not-found names (if any).

Usage:
    getFiles(lista, path = "./", ext = NULL)

Arguments:
* lista: A list or data frame of files' URLs to be downloaded (see Details).
* path: Directory where to export the files.
* ext: Select the desired extension of the files. Default = NULL.

Details: This function allows downloading a set of files directly into your directory. It needs a preexistent data frame of names and URLs: a list (or data.frame) with two values, "name" (specifying the names of the files) and "url" (containing the URLs of the files to download). All the errors are reported as outcomes (NULL = no errors). The files are downloaded into your chosen directory.

Value: It returns a vector of errors, if any. All files are downloaded into the selected directory (NULL = no errors).

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples:
    ## Not run:
    ## In case you want to download a file directly from an URL:
    # dta <- data.frame(name = "Data", url = "https://sociocav.usal.es/me/Stata/example.dta")
    # getFiles(dta, path = "./")
    ## You can also combine this function with getWikiData (among others).
    ## In case you want to download a picture of a person:
    # A <- data.frame(name = getWikiData("Rembrandt")$label, url = getWikiData("Rembrandt")$pics)
    # getFiles(A, path = "./", ext = "png")
    ## Or the pics of multiple authors:
    # B <- getWikiData(c("Monet", "Renoir", "Caillebotte"))
    # data <- data.frame(name = B$label, url = B$pics)
    # getFiles(data, path = "./", ext = NULL)
    ## End(Not run)

getWikiData: Create a data.frame with Wikidata of a vector of names
---
Description: Create a data.frame with Wikidata of a vector of names.

Usage:
    getWikiData(names, language = "en", csv = NULL)

Arguments:
* names: A vector consisting of one or more Wikidata entries (i.e., topic or person).
* language: The language of the Wikipedia page version. This should consist of an ISO language code (default = "en").
* csv: A file name to save the results, in which case the only return is a message with the name of the saved file.

Value: A data frame with personal information of the names, or a csv file with the information separated by semicolons.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples:
    ## Obtaining information in English Wikidata
    ## Not run:
    names <- c("<NAME>", "<NAME>")
    info <- getWikiData(names)
    ## Obtaining information in Spanish Wikidata
    d <- getWikiData(names, language = "es")
    ## End(Not run)
GetWikidataitem: Obtain the Wikidata entity of an article from a Wikimedia project
---
Description: Uses Wikimedia_query to obtain the Wikidata entity of an article from a Wikimedia project. Automatically resolves redirects.

Usage:
    GetWikidataitem(article = "", project = "en.wikipedia.org")

Arguments:
* article: Article to search.
* project: Wikimedia project, defaults to "en.wikipedia.org".

Value: A vector whose first element is 1 if the Wikidata item exists and the page is not a disambiguation page, whose second element is the normalized form of the article, and whose third element is the Wikidata item. On errors, the first element is set to 0 and the third is the explanation of the error.

Author(s): <NAME>, Department of Computer Science and Automatics, University of Salamanca

Examples:
    GetWikidataitem('Max Planck', project = 'es.wikipedia.org')
    GetWikidataitem('Max')
    GetWikidataitem('Cervante')
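A small sketch of consuming the three-element return value documented above (the "Q..." identifier in the comment is illustrative):

```
r <- GetWikidataitem("Max Planck", project = "es.wikipedia.org")
if (r[1] == 1) {
  normalized <- r[2]  # normalized article title
  qid <- r[3]         # Wikidata item, e.g. a "Q..." identifier
} else {
  message("lookup failed: ", r[3])  # third element carries the error text
}
```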
getWikiFiles: Downloads a list of Wikipedia pages into a specified path of the computer, and returns a vector of the not-found names (if any)
---
Description: Downloads a list of Wikipedia pages into a specified path of the computer, and returns a vector of the not-found names (if any).

Usage:
    getWikiFiles(X, language = c("es", "en", "fr"), directory = "./", maxtime = 0)

Arguments:
* X: A vector of Wikipedia entries.
* language: The language of the Wikipedia page version. This should consist of an ISO language code (default = "en").
* directory: Directory where to export the files to.
* maxtime: In case you want to apply a random waiting time between consecutive searches.

Details: This function allows downloading a set of Wikipedia pages into a directory of the local computer. All the errors (not-found pages) are reported as outcomes (NULL = no errors). The files are downloaded into your chosen directory.

Value: It returns a vector of errors, if any. All pages are downloaded into the selected directory (NULL = no errors).

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples:
    ## Not run:
    ## In case you want to download the Wikipage of a person:
    # getWikiFiles("Rembrandt", dir = "./")
    ## Or the pages of multiple authors:
    # B <- c("Monet", "Renoir", "Caillebotte")
    # getWikiFiles(B, dir = "./", language = "fr")
    ## End(Not run)

getWikiInf: Create a data.frame with Q's and descriptions of a vector of names
---
Description: Create a data.frame with Q's and descriptions of a vector of names.

Usage:
    getWikiInf(names, number = 1, language = "en")

Arguments:
* names: A vector consisting of one or more Wikidata entries (i.e., topic or person).
* number: Take this occurrence in case there are several equal names in Wikidata.
* language: The language of the Wikipedia page version. This should consist of an ISO language code (default = "en").

Value: A data frame with name, Q, label and description of the names.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples:
    ## Obtaining information in English Wikidata
    names <- c("<NAME>", "<NAME>")
    information <- getWikiInf(names)
    ## Obtaining information in Spanish Wikidata
    ## Not run:
    informacion <- getWikiInf(names, language = "es")
    ## End(Not run)

get_template: Create a drop-down vignette for nodes from different items (for galleries)
---
Description: Create a drop-down vignette for nodes from different items (for galleries).

Usage:
    get_template(
      data,
      title = NULL,
      title2 = NULL,
      text = NULL,
      img = NULL,
      wiki = NULL,
      width = 300,
      color = "#135dcd",
      cex = 1
    )

Arguments:
* data: data frame which contains the data.
* title: column name which contains the first title of the vignette.
* title2: column name which contains the secondary title of the vignette.
* text: column name which contains the main text of the vignette.
* img: column name which contains the names of the image files.
* wiki: column name which contains the wiki URL for the vignette.
* width: length of the vignette's width.
* color: color of the vignette's strip (it can also be a column name which contains colors).
* cex: number indicating the amount by which plotting text should be scaled relative to the default.

Value: A character vector of HTML-formatted vignettes.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples:
    ## Obtaining information in English Wikidata
    ## Not run:
    names <- c("<NAME>", "<NAME>")
    information <- getWikiData(names)
    information$html <- get_template(information, title = "entityLabel", text = "entityDescription")
    ## End(Not run)

get_template_for_maps: Create a drop-down vignette for nodes from different items (for maps)
---
Description: Create a drop-down vignette for nodes from different items (for maps).

Usage:
    get_template_for_maps(
      data,
      title = NULL,
      title2 = NULL,
      text = NULL,
      img = NULL,
      wiki = NULL,
      color = "#cbdefb",
      cex = 1
    )

Arguments:
* data: data frame which contains the data.
* title: column name which contains the first title of the vignette.
* title2: column name which contains the secondary title of the vignette.
* text: column name which contains the main text of the vignette.
* img: column name which contains the names of the image files.
* wiki: column name which contains the wiki URL for the vignette.
* color: color of the vignette's strip (it can also be a column name which contains colors).
* cex: number indicating the amount by which plotting text should be scaled relative to the default.

Value: A character vector of HTML-formatted vignettes.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples:
    ## Obtaining information in English Wikidata
    ## Not run:
    names <- c("<NAME>", "<NAME>")
    info <- getWikiData(names)
    info$html <- get_template_for_maps(info, title = "entityLabel", text = "entityDescription")
    ## End(Not run)

limit_requester: Limit the rate at which a function will execute
---
Description: Limit the rate at which a function will execute.

Usage:
    limit_requester(f, n, period)

Arguments:
* f: The original function.
* n: Number of allowed events within a period.
* period: Length (in seconds) of the measurement period.

Value: If 'f' is a single function, then a new function with the same signature and (eventual) behavior as the original function, but rate limited. If 'f' is a named list of functions, then a new list of functions with the same names and signatures, but collectively bound by a shared rate limit.

Author(s): <NAME>, Department of Computer Science and Automatics, University of Salamanca

See Also: ratelimitr
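A minimal sketch of rate-limiting one of the request helpers documented in this manual (5 calls per second is an arbitrary choice):

```
# Wrap the metrics helper so at most 5 requests are issued per second.
req_metrics_slow <- limit_requester(req_wikimedia_metrics, n = 5, period = 1)
```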
nametoWikiFrame: Convert names into a Wikipedia iframe
---
Description: Convert names into a Wikipedia iframe.

Usage:
    nametoWikiFrame(name, language = "en")

Arguments:
* name: A vector consisting of one or more Wikipedia entries (i.e., topic or person).
* language: The language of the Wikipedia page version. This should consist of an ISO language code (default = "en").

Details: This function adds the Wikipedia iframe to an entry or name, i.e., "<NAME>" converts into "<iframe src=\"https://es.m.wikipedia.org/wiki/Max_Weber\" width=\"100...". It also manages the different languages of Wikipedia through the abbreviated two-letter language parameter, i.e., "en" = "english".

Value: A character vector of Wikipedia iframes.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples:
    ## When extracting a single item:
    nametoWikiFrame("Computer", language = "en")
    ## When extracting two objects:
    A <- c("Computer", "Operating system")
    nametoWikiFrame(A)
    ## Same when three or more items:
    B <- c("Socrates", "Plato", "Aristotle")
    nametoWikiFrame(B)

nametoWikiHtml: Create the Wikipedia link of a name or entry
---
Description: Create the Wikipedia link of a name or entry.

Usage:
    nametoWikiHtml(name, language = "en")

Arguments:
* name: A vector consisting of one or more Wikipedia entries (i.e., topic or person).
* language: The language of the Wikipedia page version. This should consist of an ISO language code (default = "en").

Details: This function adds the Wikipedia HTML link to an entry or name, i.e., "<NAME>" converts into "<a href='https://es.wikipedia.org/wiki/Max_Weber', target='_blank'><NAME></a>". It also manages the different languages of Wikipedia through the abbreviated two-letter language parameter, i.e., "en" = "english".

Value: A character vector of names' links.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples:
    ## When extracting a single item:
    nametoWikiHtml("Computer", language = "en")
    ## When extracting two objects:
    A <- c("Computer", "Operating system")
    nametoWikiHtml(A)
    B <- c("Socrates", "Plato", "Aristotle")
    nametoWikiHtml(B)

nametoWikiURL: Create the Wikipedia URL of a name or entry
---
Description: Create the Wikipedia URL of a name or entry.

Usage:
    nametoWikiURL(name, language = "en")

Arguments:
* name: A vector consisting of one or more Wikipedia entries (i.e., topic or person).
* language: The language of the Wikipedia page version. This should consist of an ISO language code (default = "en").

Details: This function adds the Wikipedia URL to an entry or name, i.e., "<NAME>" converts into "https://es.wikipedia.org/wiki/Max_Weber". It also manages the different languages of Wikipedia through the abbreviated two-letter language parameter, i.e., "en" = "english".

Value: A character vector of names' URLs.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples:
    ## When extracting a single item:
    nametoWikiURL("Computer", language = "en")
    ## When extracting two objects:
    A <- c("Computer", "Operating system")
    nametoWikiURL(A)
    ## Same when three or more items:
    B <- c("Socrates", "Plato", "Aristotle")
    nametoWikiURL(B)
See https://sociocav.usal.es/blog/modesto-escobar/

Examples
## To reconvert a single name:
preName("<NAME>")
## It is possible to work with several items, as in here:
A <- c("<NAME>", "<NAME>", "<NAME>")
preName(A)

req_WDQS    Retrieve responses in JSON format from Wikidata Query Service (WDQS).

Description: Retrieve responses in JSON format from Wikidata Query Service (WDQS).

Usage
req_WDQS(sparql_query)

Arguments
sparql_query: a string with the query in SPARQL language.

Value: a JSON response. Please check httr::stop_for_status(response).

Note: For short queries the GET method is better, POST for long ones. Only GET queries are cached.

Author(s): <NAME>, Department of Computer Science and Automatics, University of Salamanca

req_wikimedia_metrics    Retrieve responses in JSON format from the Wikimedia metrics API.

Description: Retrieve responses in JSON format from the Wikimedia metrics API.

Usage
req_wikimedia_metrics(url)

Arguments
url: the URL with the query.

Value: a JSON response. Please check httr::stop_for_status(response).

Note: Used in Wikimedia_page_views.

Author(s): <NAME>, Department of Computer Science and Automatics, University of Salamanca

searchWiki    Find if there is a Wikipedia page of a name(s) in the selected language.

Description: Find if there is a Wikipedia page of a name(s) in the selected language.

Usage
searchWiki(
  name,
  language = c("en", "es", "fr", "it", "de", "pt", "ca"),
  all = FALSE,
  maxtime = 0
)

Arguments
name: a vector consisting of one or more Wikipedia entries (i.e., topics or persons). language: the language of the Wikipedia page version. This should consist of an ISO language code. all: if TRUE, all the languages are checked. If FALSE, once a term is found, the remaining languages are not searched, so it is faster. maxtime: in case you want to apply a random wait between consecutive searches.

Details: This function checks any page or entry in order to find if it has a Wikipedia page in a given language. It manages the different languages of Wikipedia through the two-letter abbreviated language parameter, i.e., "en" = "English". It is possible to check multiple languages in order of preference; in this case, only the first available language will appear as TRUE.

Value: a Boolean data frame of TRUE or FALSE.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples
## When you want to check an entry in a single language:
searchWiki("<NAME>", language = "es")
## When you want to check an entry in several languages:
## Not run:
searchWiki("<NAME>", language = c("en", "es", "fr", "it", "de", "pt", "ca"), all = TRUE)
## End(Not run)
## Not run:
A <- c("<NAME>", "<NAME>", "<NAME>")
searchWiki(A, language = c("en", "es", "fr", "it", "de", "pt", "ca"), all = FALSE)
## End(Not run)

urltoFrame    Convert a URL link to an HTML iframe.

Description: Convert a URL link to an HTML iframe.

Usage
urltoFrame(url)

Arguments
url: character vector of URLs.

Details: This function converts an available URL into the corresponding HTML iframe, i.e., "https://es.wikipedia.org/wiki/Socrates" changes into "<iframe src='https://es.wikipedia.org/wiki/Socrates' ...>".

Value: a character vector of HTML iframes for the given URLs.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca.
See https://sociocav.usal.es/blog/modesto-escobar/

Examples
## When you have a single URL:
urltoFrame("https://es.wikipedia.org/wiki/Socrates")
## It is possible to work with a vector of URLs to obtain another vector of html frames:
A <- c("https://es.wikipedia.org/wiki/Socrates", "https://es.wikipedia.org/wiki/Plato",
       "https://es.wikipedia.org/wiki/Aristotle")
urltoFrame(A)

urltoHtml    Convert a Wikipedia URL to an HTML link.

Description: Convert a Wikipedia URL to an HTML link.

Usage
urltoHtml(url, text = NULL)

Arguments
url: character vector of URLs. text: a vector with the corresponding title of each URL (see Details).

Details: This function converts an available URL into the corresponding HTML link, i.e., "https://es.wikipedia.org/wiki/Socrates" changes into "<a href='https://es.wikipedia.org/wiki/Socrates', target='_blank'>Socrates</a>".

Value: a character vector of HTML links for the given URLs.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples
## When you have a single URL:
urltoHtml("https://es.wikipedia.org/wiki/Socrates", text = "Socrates")
## It is possible to work with several items:
A <- c("https://es.wikipedia.org/wiki/Socrates", "https://es.wikipedia.org/wiki/Plato",
       "https://es.wikipedia.org/wiki/Aristotle")
urltoHtml(A, text = c("Socrates", "Plato", "Aristotle"))
## And you can also directly extract the info from nametoWikiURL():
urltoHtml(nametoWikiURL("Plato", "en"), "Plato")
urltoHtml(nametoWikiURL(c("Plato", "Socrates", "Aristotle"), language = "en"),
          c("Plato", "Socrates", "Aristotle"))

validUrl    Find if a URL link is valid.

Description: Find if a URL link is valid.

Usage
validUrl(url, time = 2)

Arguments
url: a vector of URLs. time: the timeout (in seconds) to be used for each connection. Default = 2.

Details: This function checks if a URL exists on the Internet.

Value: a Boolean value of TRUE or FALSE.

Author(s): <NAME>, Department of Sociology and Communication, University of Salamanca. See https://sociocav.usal.es/blog/modesto-escobar/

Examples
validUrl(url = "https://es.wikipedia.org/wiki/Weber,_Max", time = 2)

Wikidata_occupationCount    Search Wikidata Query Service (WDQS) to know the number of Wikidata entities with the P106 property (occupation) set to Qoc.

Description: Search Wikidata Query Service (WDQS) to know the number of Wikidata entities with the P106 property (occupation) set to Qoc.

Usage
Wikidata_occupationCount(Qoc = "")

Arguments
Qoc: the Wikidata entity of the occupation.

Value: the number of entities with that occupation (integer).

Author(s): <NAME>, Department of Computer Science and Automatics, University of Salamanca

Examples
Wikidata_occupationCount('Q2526255') # Film director

Wikidata_sparql_query    Retrieve responses in JSON format from Wikidata Query Service (WDQS). req_WDQS_rated is the ratelimitr version of req_WDQS. See https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual#SPARQL_endpoint

Description: Retrieve responses in JSON format from Wikidata Query Service (WDQS). req_WDQS_rated is the ratelimitr version of req_WDQS. See https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual#SPARQL_endpoint

Usage
Wikidata_sparql_query(sparql_query)

Arguments
sparql_query: a string with the query in SPARQL language.

Value: a JSON response or NULL on errors.

Author(s): <NAME>, Department of Computer Science and Automatics, University of Salamanca
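The sketch below is not from the package manual; the SPARQL string is an illustrative assumption (it counts Wikidata entities that are instances of human, Q5).

Examples
## A minimal usage sketch:
## Not run:
query <- "SELECT (COUNT(?item) AS ?count) WHERE { ?item wdt:P31 wd:Q5 }"
response <- Wikidata_sparql_query(query)
## End(Not run)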
Wikidata_Wikipedias    For an occupation, obtains the Wikidata entities of all the people with that occupation, the number of Wikipedias in which they have an article, and the URLs of those Wikipedias (sep = tab). Queries are run splitting with offset and chunk.

Description: For an occupation, obtains the Wikidata entities of all the people with that occupation, the number of Wikipedias in which they have an article, and the URLs of those Wikipedias (sep = tab). Queries are run splitting with offset and chunk.

Usage
Wikidata_Wikipedias(Qoc, chunk = 10000)

Arguments
Qoc: the Wikidata entity of the occupation. chunk: the chunk size used to split intermediate results, with the aim of staying under the 60-second processing time limit.

Value: a list with the Wikidata entities, the number of Wikipedias, and the URLs.

Author(s): <NAME>, Department of Computer Science and Automatics, University of Salamanca

Wikimedia_get_redirects    Obtains redirection pages (from namespace 0) to the article page in the Wikimedia project.

Description: Obtains redirection pages (from namespace 0) to the article page in the Wikimedia project.

Usage
Wikimedia_get_redirects(article, project = "en.wikipedia.org")

Arguments
article: article target. project: Wikimedia project, defaults to "en.wikipedia.org".

Value: a list whose first element is the target of all the redirections, or NULL on error.

Author(s): <NAME>, Department of Computer Science and Automatics, University of Salamanca

Wikimedia_page_views    Return the number of views one article has in a Wikimedia project in a date interval (see granularity). Optionally include redirections to the article page. req_wikimedia_metrics_rated is the ratelimitr version of req_wikimedia_metrics.

Description: Return the number of views one article has in a Wikimedia project in a date interval (see granularity). Optionally include redirections to the article page. req_wikimedia_metrics_rated is the ratelimitr version of req_wikimedia_metrics.

Usage
Wikimedia_page_views(
  article,
  project = "en.wikipedia.org",
  start,
  end,
  access = "all-access",
  agent = "user",
  granularity = "monthly",
  include_redirects = FALSE
)

Arguments
article: the article to search. project: the Wikimedia project, defaults to en.wikipedia.org. start, end: first and last day to include (format YYYYMMDD or YYYYMMDDHH). access: filter by access method: all-access (default), desktop, mobile-app, mobile-web. agent: filter by agent type: all-agents, user (default), spider, automated. granularity: time unit for the response data: daily, monthly (default). include_redirects: Boolean to include redirections to the article page (default: FALSE).

Value: the number of visits.

Author(s): <NAME>, Department of Computer Science and Automatics, University of Salamanca
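The sketch below is not from the package manual; the article name and date range are illustrative assumptions.

Examples
## A minimal usage sketch: monthly views of an article during 2022.
## Not run:
views <- Wikimedia_page_views("Socrates", project = "en.wikipedia.org",
                              start = "20220101", end = "20221231",
                              granularity = "monthly")
## End(Not run)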
Wikimedia_person_exists    Use Wikimedia_query and Wikidata_sparql_query to check if an article of a person exists in the Wikimedia project. If it exists, also return the Wikipedia pages of that person in the languages indicated in the langs parameter.

Description: Use Wikimedia_query and Wikidata_sparql_query to check if an article of a person exists in the Wikimedia project. If it exists, also return the Wikipedia pages of that person in the languages indicated in the langs parameter.

Usage
Wikimedia_person_exists(
  article,
  project = "en.wikipedia.org",
  langs = "en|es|fr|de|it|pt|ca"
)

Arguments
article: article to search. project: Wikimedia project, defaults to "en.wikipedia.org". langs: Wikipedia languages to search if the person has a page, use "|" to split languages.

Value: if the article of the person exists, a vector with four elements: the first one set to 1, the second the normalized article label, the third the Wikidata id, and the fourth a data frame with URLs to the Wikipedias (lang, label, URL). If the article of the person does not exist, the first element is set to 0 and the third is the explanation of the error.

Author(s): <NAME>, Department of Computer Science and Automatics, University of Salamanca

Wikimedia_query    Use the httr package to retrieve responses in JSON format about an article from the Wikimedia API.

Description: Use the httr package to retrieve responses in JSON format about an article from the Wikimedia API.

Usage
Wikimedia_query(
  query,
  project = "en.wikipedia.org",
  headers = my_headers,
  attempts = 2
)

Arguments
query: a list with the (key, value) pairs of the search. project: the Wikimedia project to search. headers: a vector with additional query headers for the request. attempts: on errors, the maximum number of times the query is launched, if repetition_on_error is not zero (default 2).

Value: the response in JSON format, or NULL on errors.

Author(s): <NAME>, Department of Computer Science and Automatics, University of Salamanca

wmflabs_get_allinfo    Obtains information about an article in the Wikimedia project in JSON format, or NULL on error.

Description: Obtains information about an article in the Wikimedia project in JSON format, or NULL on error.

Usage
wmflabs_get_allinfo(
  article,
  project = "en.wikipedia.org",
  links_in_count_redirects = FALSE
)

Arguments
article: the article to search. project: the Wikimedia project, defaults to en.wikipedia.org. links_in_count_redirects: if infotype == links, whether redirects are included or not.

Value: information about the article in JSON format, or NULL on error.

Note: It is important that the article is not a redirection: with the "prose" infotype the function gets information about the target article, but with "articleinfo" and "links" the information is about the redirection itself.

Author(s): <NAME>, Department of Computer Science and Automatics, University of Salamanca
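The sketch below is not from the package manual; the article name is an illustrative assumption.

Examples
## A minimal usage sketch:
## Not run:
info <- wmflabs_get_allinfo("Socrates", project = "en.wikipedia.org")
## End(Not run)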
clap-cargo-extra
rust
Rust
Crate clap_cargo_extra
===

Simple wrapper around clap-cargo that adds some utilities to access the metadata.

```
pub struct ArgStruct {
    #[clap(flatten)]
    pub cargo: ClapCargo,
}
```

Re-exports
---
* `pub use impls::*;`

Modules
---
* impls

Structs
---
* CargoBin
* CargoBuild
* ClapCargo: Combination of all three clap cargo arg structs and two new ones, `CargoBuild` and `CargoBin`.

Traits
---
* ToCmd

Struct clap_cargo_extra::CargoBin
===

```
#[non_exhaustive]
pub struct CargoBin {
    pub channel: Option<String>,
}
```

Fields (Non-exhaustive)
---
Non-exhaustive structs could have additional fields added in future. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional `Struct { .. }` syntax; cannot be matched against without a wildcard `..`; and struct update syntax will not work.

`channel: Option<String>`: stable, beta, nightly, or custom. Default: stable.

Implementations
---
### impl CargoBin
#### pub fn bin(&self) -> String
#### pub fn channel(&self) -> &str

Trait Implementations
---
### impl Args for CargoBin
#### fn group_id() -> Option<Id>
Report the [`ArgGroup::id`][crate::ArgGroup::id] for this set of arguments.
#### fn augment_args<'b>(__clap_app: Command) -> Command
Append to [`Command`] so it can instantiate `Self`.
#### fn augment_args_for_update<'b>(__clap_app: Command) -> Command
Append to [`Command`] so it can update `self`.

### impl Args for CargoBin
#### fn to_args(&self) -> Vec<OsString>

### impl Clone for CargoBin
#### fn clone(&self) -> CargoBin
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl Debug for CargoBin
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for CargoBin
#### fn default() -> CargoBin
Returns the "default value" for a type.

### impl FromArgMatches for CargoBin
#### fn from_arg_matches(__clap_arg_matches: &ArgMatches) -> Result<Self, Error>
Instantiate `Self` from [`ArgMatches`], parsing the arguments as needed.
#### fn from_arg_matches_mut(__clap_arg_matches: &mut ArgMatches) -> Result<Self, Error>
Instantiate `Self` from [`ArgMatches`], parsing the arguments as needed.
#### fn update_from_arg_matches(&mut self, __clap_arg_matches: &ArgMatches) -> Result<(), Error>
Assign values from `ArgMatches` to `self`.
#### fn update_from_arg_matches_mut(&mut self, __clap_arg_matches: &mut ArgMatches) -> Result<(), Error>
Assign values from `ArgMatches` to `self`.

### impl Merge for CargoBin
#### fn merge(&mut self, other: Self)

### impl PartialEq<CargoBin> for CargoBin
#### fn eq(&self, other: &CargoBin) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl Eq for CargoBin
### impl StructuralEq for CargoBin
### impl StructuralPartialEq for CargoBin

Auto Trait Implementations
---
### impl RefUnwindSafe for CargoBin
### impl Send for CargoBin
### impl Sync for CargoBin
### impl Unpin for CargoBin
### impl UnwindSafe for CargoBin

Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> ToCmd for T where T: Args
#### fn add_args<'a>(&'a self, cmd: &'a mut Command) -> &'a mut Command

### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

Struct clap_cargo_extra::CargoBuild
===

```
#[non_exhaustive]
pub struct CargoBuild {
    pub optimize: bool,
    pub target: Option<String>,
    pub all_targets: bool,
    pub link_args: bool,
    pub release: bool,
    pub profile: Option<String>,
}
```

Fields (Non-exhaustive)
---
Non-exhaustive structs could have additional fields added in future. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional `Struct { .. }` syntax; cannot be matched against without a wildcard `..`; and struct update syntax will not work.

`optimize: bool`: add additional nightly features for optimizing. `target: Option<String>`: build for the target triple. `all_targets: bool`: build all targets. `link_args: bool`. `release: bool`: build artifacts in release mode, with optimizations. `profile: Option<String>`: build artifacts with the specified profile.

Implementations
---
### impl CargoBuild
#### pub fn profile(&self) -> &str

Trait Implementations
---
### impl Args for CargoBuild
#### fn group_id() -> Option<Id>
Report the [`ArgGroup::id`][crate::ArgGroup::id] for this set of arguments.
#### fn augment_args<'b>(__clap_app: Command) -> Command
Append to [`Command`] so it can instantiate `Self`.
#### fn augment_args_for_update<'b>(__clap_app: Command) -> Command
Append to [`Command`] so it can update `self`.

### impl Args for CargoBuild
#### fn to_args(&self) -> Vec<OsString>

### impl Clone for CargoBuild
#### fn clone(&self) -> CargoBuild
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl Debug for CargoBuild
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for CargoBuild
#### fn default() -> CargoBuild
Returns the "default value" for a type.

### impl FromArgMatches for CargoBuild
#### fn from_arg_matches(__clap_arg_matches: &ArgMatches) -> Result<Self, Error>
Instantiate `Self` from [`ArgMatches`], parsing the arguments as needed.
#### fn from_arg_matches_mut(__clap_arg_matches: &mut ArgMatches) -> Result<Self, Error>
Instantiate `Self` from [`ArgMatches`], parsing the arguments as needed.
#### fn update_from_arg_matches(&mut self, __clap_arg_matches: &ArgMatches) -> Result<(), Error>
Assign values from `ArgMatches` to `self`.
#### fn update_from_arg_matches_mut(&mut self, __clap_arg_matches: &mut ArgMatches) -> Result<(), Error>
Assign values from `ArgMatches` to `self`.

### impl Merge for CargoBuild
#### fn merge(&mut self, other: CargoBuild)

### impl PartialEq<CargoBuild> for CargoBuild
#### fn eq(&self, other: &CargoBuild) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl Eq for CargoBuild
### impl StructuralEq for CargoBuild
### impl StructuralPartialEq for CargoBuild

Auto Trait Implementations
---
### impl RefUnwindSafe for CargoBuild
### impl Send for CargoBuild
### impl Sync for CargoBuild
### impl Unpin for CargoBuild
### impl UnwindSafe for CargoBuild

Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> ToCmd for T where T: Args
#### fn add_args<'a>(&'a self, cmd: &'a mut Command) -> &'a mut Command

### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

Struct clap_cargo_extra::ClapCargo
===

```
#[non_exhaustive]
pub struct ClapCargo {
    pub features: Features,
    pub manifest: Manifest,
    pub workspace: Workspace,
    pub cargo_bin: CargoBin,
    pub cargo_build: CargoBuild,
    pub slop: Vec<OsString>,
}
```

Combination of all three clap cargo arg structs and two new ones, `CargoBuild` and `CargoBin`.

Fields (Non-exhaustive)
---
Non-exhaustive structs could have additional fields added in future. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional `Struct { .. }` syntax; cannot be matched against without a wildcard `..`; and struct update syntax will not work.
`features: Features`. `manifest: Manifest`. `workspace: Workspace`. `cargo_bin: CargoBin`. `cargo_build: CargoBuild`. `slop: Vec<OsString>`: extra arguments passed to cargo after `--`.

Implementations
---
### impl ClapCargo
#### pub fn metadata(&self) -> Result<&Metadata>
Current metadata for the CLI's context.
#### pub fn manifest_path(&self) -> Result<PathBuf>
Current manifest path in context.
#### pub fn target_dir(&self) -> Result<PathBuf>
Directory where build artifacts will go.
#### pub fn current_packages(&self) -> Result<Vec<&Package>>
Get the current packages that are selected by the CLI.
#### pub fn packages(&self) -> Result<Vec<&Package>>
All packages referenced.
#### pub fn add_cargo_args(&self, cmd: &mut Command)
👎 Deprecated: use add_args instead. Add the correct CLI flags to a command.
#### pub fn get_deps(&self, p: &Package, dep_kind: DependencyKind) -> Result<Vec<&Package>>
Returns all packages that package `p` depends on transitively. `dep_kind` = Normal, Development, Build, and Unknown. Unknown is equivalent to `all`.
#### pub fn cargo_cmd(&self) -> Command
Create a Command builder for cargo.
#### pub fn channel(&self) -> &str
#### pub fn build_cmd(&self) -> Command
#### pub fn find_package(&self, name: &str) -> Result<Option<&Package>>
Find a package given a name.
#### pub fn built_bin(&self, target: &Target) -> Result<PathBuf>

Trait Implementations
---
### impl Args for ClapCargo
#### fn to_args(&self) -> Vec<OsString>

### impl Args for ClapCargo
#### fn group_id() -> Option<Id>
Report the [`ArgGroup::id`][crate::ArgGroup::id] for this set of arguments.
#### fn augment_args<'b>(__clap_app: Command) -> Command
Append to [`Command`] so it can instantiate `Self`.
#### fn augment_args_for_update<'b>(__clap_app: Command) -> Command
Append to [`Command`] so it can update `self`.

### impl Clone for ClapCargo
#### fn clone(&self) -> ClapCargo
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl Debug for ClapCargo
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for ClapCargo
#### fn default() -> ClapCargo
Returns the "default value" for a type.

### impl FromArgMatches for ClapCargo
#### fn from_arg_matches(__clap_arg_matches: &ArgMatches) -> Result<Self, Error>
Instantiate `Self` from [`ArgMatches`], parsing the arguments as needed.
#### fn from_arg_matches_mut(__clap_arg_matches: &mut ArgMatches) -> Result<Self, Error>
Instantiate `Self` from [`ArgMatches`], parsing the arguments as needed.
#### fn update_from_arg_matches(&mut self, __clap_arg_matches: &ArgMatches) -> Result<(), Error>
Assign values from `ArgMatches` to `self`.
#### fn update_from_arg_matches_mut(&mut self, __clap_arg_matches: &mut ArgMatches) -> Result<(), Error>
Assign values from `ArgMatches` to `self`.

### impl Merge for ClapCargo
#### fn merge(&mut self, other: Self)

### impl PartialEq<ClapCargo> for ClapCargo
#### fn eq(&self, other: &ClapCargo) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl Eq for ClapCargo
### impl StructuralEq for ClapCargo
### impl StructuralPartialEq for ClapCargo

Auto Trait Implementations
---
### impl RefUnwindSafe for ClapCargo
### impl Send for ClapCargo
### impl Sync for ClapCargo
### impl Unpin for ClapCargo
### impl UnwindSafe for ClapCargo

Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> ToCmd for T where T: Args
#### fn add_args<'a>(&'a self, cmd: &'a mut Command) -> &'a mut Command

### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
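A short usage sketch tying the pieces above together; it is not from the crate docs. `clap::Parser` drives the flattened `ClapCargo`, and `current_packages()` / `get_deps()` walk the resolved metadata. The `anyhow` error type and the `cargo_metadata::DependencyKind` import are assumptions about the crate's dependencies, not documented API.

```
use cargo_metadata::DependencyKind;
use clap::Parser;
use clap_cargo_extra::ClapCargo;

#[derive(Parser)]
struct ArgStruct {
    #[clap(flatten)]
    cargo: ClapCargo,
}

fn main() -> anyhow::Result<()> {
    let args = ArgStruct::parse();

    // List every package selected by the usual cargo-style flags
    // (--manifest-path, --workspace, --features, ...).
    for pkg in args.cargo.current_packages()? {
        println!("selected: {}", pkg.name);

        // Walk the normal (non-dev, non-build) transitive dependencies.
        for dep in args.cargo.get_deps(pkg, DependencyKind::Normal)? {
            println!("  depends on {} {}", dep.name, dep.version);
        }
    }
    Ok(())
}
```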
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole
go
Go
README [¶](#section-readme)
---
### Azure Serial Console Module for Go

[![PkgGoDev](https://pkg.go.dev/badge/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole)

The `armserialconsole` module provides operations for working with Azure Serial Console.

[Source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/serialconsole/armserialconsole)

### Getting started

#### Prerequisites

* an [Azure subscription](https://azure.microsoft.com/free/)
* Go 1.18 or above (You can download and install the latest version of Go from [here](https://go.dev/doc/install). It will replace the existing Go on your machine. If you want to install multiple Go versions on the same machine, you can refer to [this doc](https://go.dev/doc/manage-install).)

#### Install the package

This project uses [Go modules](https://github.com/golang/go/wiki/Modules) for versioning and dependency management.

Install the Azure Serial Console module:

```
go get github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole
```

#### Authorization

When creating a client, you will need to provide a credential for authenticating with Azure Serial Console. The `azidentity` module provides facilities for various ways of authenticating with Azure, including client/secret, certificate, managed identity, and more.

```
cred, err := azidentity.NewDefaultAzureCredential(nil)
```

For more information on authentication, please see the documentation for `azidentity` at [pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity).

#### Client Factory

The Azure Serial Console module consists of one or more clients. We provide a client factory which can be used to create any client in this module.

```
clientFactory, err := armserialconsole.NewClientFactory(<subscription ID>, cred, nil)
```

You can use `ClientOptions` in package `github.com/Azure/azure-sdk-for-go/sdk/azcore/arm` to set the endpoint to connect with public and sovereign clouds as well as Azure Stack. For more information, please see the documentation for `azcore` at [pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore).

```
options := arm.ClientOptions {
    ClientOptions: azcore.ClientOptions {
        Cloud: cloud.AzureChina,
    },
}
clientFactory, err := armserialconsole.NewClientFactory(<subscription ID>, cred, &options)
```

#### Clients

A client groups a set of related APIs, providing access to its functionality. Create one or more clients to access the APIs you require using the client factory.

```
client := clientFactory.NewSerialPortsClient()
```

#### Provide Feedback

If you encounter bugs or have suggestions, please [open an issue](https://github.com/Azure/azure-sdk-for-go/issues) and assign the `Serial Console` label.

### Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit <https://cla.microsoft.com>.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot.
You will only need to do this once across all repos using our CLA. This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [<EMAIL>](mailto:<EMAIL>) with any additional questions or comments. Documentation [¶](#section-documentation) --- ### Index [¶](#pkg-index) * [type ClientFactory](#ClientFactory) * + [func NewClientFactory(subscriptionID string, credential azcore.TokenCredential, ...) (*ClientFactory, error)](#NewClientFactory) * + [func (c *ClientFactory) NewMicrosoftSerialConsoleClient() *MicrosoftSerialConsoleClient](#ClientFactory.NewMicrosoftSerialConsoleClient) + [func (c *ClientFactory) NewSerialPortsClient() *SerialPortsClient](#ClientFactory.NewSerialPortsClient) * [type DisableSerialConsoleResult](#DisableSerialConsoleResult) * + [func (d DisableSerialConsoleResult) MarshalJSON() ([]byte, error)](#DisableSerialConsoleResult.MarshalJSON) + [func (d *DisableSerialConsoleResult) UnmarshalJSON(data []byte) error](#DisableSerialConsoleResult.UnmarshalJSON) * [type EnableSerialConsoleResult](#EnableSerialConsoleResult) * + [func (e EnableSerialConsoleResult) MarshalJSON() ([]byte, error)](#EnableSerialConsoleResult.MarshalJSON) + [func (e *EnableSerialConsoleResult) UnmarshalJSON(data []byte) error](#EnableSerialConsoleResult.UnmarshalJSON) * [type GetSerialConsoleSubscriptionNotFound](#GetSerialConsoleSubscriptionNotFound) * + [func (g GetSerialConsoleSubscriptionNotFound) MarshalJSON() ([]byte, error)](#GetSerialConsoleSubscriptionNotFound.MarshalJSON) + [func (g *GetSerialConsoleSubscriptionNotFound) UnmarshalJSON(data []byte) error](#GetSerialConsoleSubscriptionNotFound.UnmarshalJSON) * [type MicrosoftSerialConsoleClient](#MicrosoftSerialConsoleClient) * + [func NewMicrosoftSerialConsoleClient(subscriptionID string, credential azcore.TokenCredential, ...) (*MicrosoftSerialConsoleClient, error)](#NewMicrosoftSerialConsoleClient) * + [func (client *MicrosoftSerialConsoleClient) DisableConsole(ctx context.Context, defaultParam string, ...) (MicrosoftSerialConsoleClientDisableConsoleResponse, error)](#MicrosoftSerialConsoleClient.DisableConsole) + [func (client *MicrosoftSerialConsoleClient) EnableConsole(ctx context.Context, defaultParam string, ...) (MicrosoftSerialConsoleClientEnableConsoleResponse, error)](#MicrosoftSerialConsoleClient.EnableConsole) + [func (client *MicrosoftSerialConsoleClient) GetConsoleStatus(ctx context.Context, defaultParam string, ...) (MicrosoftSerialConsoleClientGetConsoleStatusResponse, error)](#MicrosoftSerialConsoleClient.GetConsoleStatus) + [func (client *MicrosoftSerialConsoleClient) ListOperations(ctx context.Context, ...) 
(MicrosoftSerialConsoleClientListOperationsResponse, error)](#MicrosoftSerialConsoleClient.ListOperations) * [type MicrosoftSerialConsoleClientDisableConsoleOptions](#MicrosoftSerialConsoleClientDisableConsoleOptions) * [type MicrosoftSerialConsoleClientDisableConsoleResponse](#MicrosoftSerialConsoleClientDisableConsoleResponse) * [type MicrosoftSerialConsoleClientEnableConsoleOptions](#MicrosoftSerialConsoleClientEnableConsoleOptions) * [type MicrosoftSerialConsoleClientEnableConsoleResponse](#MicrosoftSerialConsoleClientEnableConsoleResponse) * [type MicrosoftSerialConsoleClientGetConsoleStatusOptions](#MicrosoftSerialConsoleClientGetConsoleStatusOptions) * [type MicrosoftSerialConsoleClientGetConsoleStatusResponse](#MicrosoftSerialConsoleClientGetConsoleStatusResponse) * [type MicrosoftSerialConsoleClientListOperationsOptions](#MicrosoftSerialConsoleClientListOperationsOptions) * [type MicrosoftSerialConsoleClientListOperationsResponse](#MicrosoftSerialConsoleClientListOperationsResponse) * [type Operations](#Operations) * + [func (o Operations) MarshalJSON() ([]byte, error)](#Operations.MarshalJSON) + [func (o *Operations) UnmarshalJSON(data []byte) error](#Operations.UnmarshalJSON) * [type OperationsValueItem](#OperationsValueItem) * + [func (o OperationsValueItem) MarshalJSON() ([]byte, error)](#OperationsValueItem.MarshalJSON) + [func (o *OperationsValueItem) UnmarshalJSON(data []byte) error](#OperationsValueItem.UnmarshalJSON) * [type OperationsValueItemDisplay](#OperationsValueItemDisplay) * + [func (o OperationsValueItemDisplay) MarshalJSON() ([]byte, error)](#OperationsValueItemDisplay.MarshalJSON) + [func (o *OperationsValueItemDisplay) UnmarshalJSON(data []byte) error](#OperationsValueItemDisplay.UnmarshalJSON) * [type ProxyResource](#ProxyResource) * + [func (p ProxyResource) MarshalJSON() ([]byte, error)](#ProxyResource.MarshalJSON) + [func (p *ProxyResource) UnmarshalJSON(data []byte) error](#ProxyResource.UnmarshalJSON) * [type Resource](#Resource) * + [func (r Resource) MarshalJSON() ([]byte, error)](#Resource.MarshalJSON) + [func (r *Resource) UnmarshalJSON(data []byte) error](#Resource.UnmarshalJSON) * [type SerialPort](#SerialPort) * + [func (s SerialPort) MarshalJSON() ([]byte, error)](#SerialPort.MarshalJSON) + [func (s *SerialPort) UnmarshalJSON(data []byte) error](#SerialPort.UnmarshalJSON) * [type SerialPortConnectResult](#SerialPortConnectResult) * + [func (s SerialPortConnectResult) MarshalJSON() ([]byte, error)](#SerialPortConnectResult.MarshalJSON) + [func (s *SerialPortConnectResult) UnmarshalJSON(data []byte) error](#SerialPortConnectResult.UnmarshalJSON) * [type SerialPortListResult](#SerialPortListResult) * + [func (s SerialPortListResult) MarshalJSON() ([]byte, error)](#SerialPortListResult.MarshalJSON) + [func (s *SerialPortListResult) UnmarshalJSON(data []byte) error](#SerialPortListResult.UnmarshalJSON) * [type SerialPortProperties](#SerialPortProperties) * + [func (s SerialPortProperties) MarshalJSON() ([]byte, error)](#SerialPortProperties.MarshalJSON) + [func (s *SerialPortProperties) UnmarshalJSON(data []byte) error](#SerialPortProperties.UnmarshalJSON) * [type SerialPortState](#SerialPortState) * + [func PossibleSerialPortStateValues() []SerialPortState](#PossibleSerialPortStateValues) * [type SerialPortsClient](#SerialPortsClient) * + [func NewSerialPortsClient(subscriptionID string, credential azcore.TokenCredential, ...) 
(*SerialPortsClient, error)](#NewSerialPortsClient) * + [func (client *SerialPortsClient) Connect(ctx context.Context, resourceGroupName string, ...) (SerialPortsClientConnectResponse, error)](#SerialPortsClient.Connect) + [func (client *SerialPortsClient) Create(ctx context.Context, resourceGroupName string, ...) (SerialPortsClientCreateResponse, error)](#SerialPortsClient.Create) + [func (client *SerialPortsClient) Delete(ctx context.Context, resourceGroupName string, ...) (SerialPortsClientDeleteResponse, error)](#SerialPortsClient.Delete) + [func (client *SerialPortsClient) Get(ctx context.Context, resourceGroupName string, ...) (SerialPortsClientGetResponse, error)](#SerialPortsClient.Get) + [func (client *SerialPortsClient) List(ctx context.Context, resourceGroupName string, ...) (SerialPortsClientListResponse, error)](#SerialPortsClient.List) + [func (client *SerialPortsClient) ListBySubscriptions(ctx context.Context, options *SerialPortsClientListBySubscriptionsOptions) (SerialPortsClientListBySubscriptionsResponse, error)](#SerialPortsClient.ListBySubscriptions) * [type SerialPortsClientConnectOptions](#SerialPortsClientConnectOptions) * [type SerialPortsClientConnectResponse](#SerialPortsClientConnectResponse) * [type SerialPortsClientCreateOptions](#SerialPortsClientCreateOptions) * [type SerialPortsClientCreateResponse](#SerialPortsClientCreateResponse) * [type SerialPortsClientDeleteOptions](#SerialPortsClientDeleteOptions) * [type SerialPortsClientDeleteResponse](#SerialPortsClientDeleteResponse) * [type SerialPortsClientGetOptions](#SerialPortsClientGetOptions) * [type SerialPortsClientGetResponse](#SerialPortsClientGetResponse) * [type SerialPortsClientListBySubscriptionsOptions](#SerialPortsClientListBySubscriptionsOptions) * [type SerialPortsClientListBySubscriptionsResponse](#SerialPortsClientListBySubscriptionsResponse) * [type SerialPortsClientListOptions](#SerialPortsClientListOptions) * [type SerialPortsClientListResponse](#SerialPortsClientListResponse) * [type Status](#Status) * + [func (s Status) MarshalJSON() ([]byte, error)](#Status.MarshalJSON) + [func (s *Status) UnmarshalJSON(data []byte) error](#Status.UnmarshalJSON) #### Examples [¶](#pkg-examples) * [MicrosoftSerialConsoleClient.DisableConsole](#example-MicrosoftSerialConsoleClient.DisableConsole) * [MicrosoftSerialConsoleClient.EnableConsole](#example-MicrosoftSerialConsoleClient.EnableConsole) * [MicrosoftSerialConsoleClient.GetConsoleStatus](#example-MicrosoftSerialConsoleClient.GetConsoleStatus) * [MicrosoftSerialConsoleClient.ListOperations](#example-MicrosoftSerialConsoleClient.ListOperations) * [SerialPortsClient.Connect (ConnectToAScaleSetInstanceSerialPort)](#example-SerialPortsClient.Connect-ConnectToAScaleSetInstanceSerialPort) * [SerialPortsClient.Connect (ConnectToAVirtualMachineSerialPort)](#example-SerialPortsClient.Connect-ConnectToAVirtualMachineSerialPort) * [SerialPortsClient.Create](#example-SerialPortsClient.Create) * [SerialPortsClient.Delete](#example-SerialPortsClient.Delete) * [SerialPortsClient.Get](#example-SerialPortsClient.Get) * [SerialPortsClient.List](#example-SerialPortsClient.List) * [SerialPortsClient.ListBySubscriptions](#example-SerialPortsClient.ListBySubscriptions) ### Constants [¶](#pkg-constants) This section is empty. ### Variables [¶](#pkg-variables) This section is empty. ### Functions [¶](#pkg-functions) This section is empty. 
### Types [¶](#pkg-types) #### type [ClientFactory](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/client_factory.go#L19) [¶](#ClientFactory) added in v1.1.0 ``` type ClientFactory struct { // contains filtered or unexported fields } ``` ClientFactory is a client factory used to create any client in this module. Don't use this type directly, use NewClientFactory instead. #### func [NewClientFactory](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/client_factory.go#L31) [¶](#NewClientFactory) added in v1.1.0 ``` func NewClientFactory(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[ClientFactory](#ClientFactory), [error](/builtin#error)) ``` NewClientFactory creates a new instance of ClientFactory with the specified values. The parameter values will be propagated to any client created from this factory. * subscriptionID - Subscription ID which uniquely identifies the Microsoft Azure subscription. The subscription ID forms part of the URI for every service call requiring it. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. #### func (*ClientFactory) [NewMicrosoftSerialConsoleClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/client_factory.go#L42) [¶](#ClientFactory.NewMicrosoftSerialConsoleClient) added in v1.1.0 ``` func (c *[ClientFactory](#ClientFactory)) NewMicrosoftSerialConsoleClient() *[MicrosoftSerialConsoleClient](#MicrosoftSerialConsoleClient) ``` #### func (*ClientFactory) [NewSerialPortsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/client_factory.go#L47) [¶](#ClientFactory.NewSerialPortsClient) added in v1.1.0 ``` func (c *[ClientFactory](#ClientFactory)) NewSerialPortsClient() *[SerialPortsClient](#SerialPortsClient) ``` #### type [DisableSerialConsoleResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L13) [¶](#DisableSerialConsoleResult) ``` type DisableSerialConsoleResult struct { // Whether or not Serial Console is disabled. Disabled *[bool](/builtin#bool) `json:"disabled,omitempty"` } ``` DisableSerialConsoleResult - Returns whether or not Serial Console is disabled. #### func (DisableSerialConsoleResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L20) [¶](#DisableSerialConsoleResult.MarshalJSON) added in v1.1.0 ``` func (d [DisableSerialConsoleResult](#DisableSerialConsoleResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type DisableSerialConsoleResult. 
#### func (*DisableSerialConsoleResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L27) [¶](#DisableSerialConsoleResult.UnmarshalJSON) added in v1.1.0 ``` func (d *[DisableSerialConsoleResult](#DisableSerialConsoleResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type DisableSerialConsoleResult. #### type [EnableSerialConsoleResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L19) [¶](#EnableSerialConsoleResult) ``` type EnableSerialConsoleResult struct { // Whether or not Serial Console is disabled (enabled). Disabled *[bool](/builtin#bool) `json:"disabled,omitempty"` } ``` EnableSerialConsoleResult - Returns whether or not Serial Console is disabled (enabled). #### func (EnableSerialConsoleResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L47) [¶](#EnableSerialConsoleResult.MarshalJSON) added in v1.1.0 ``` func (e [EnableSerialConsoleResult](#EnableSerialConsoleResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type EnableSerialConsoleResult. #### func (*EnableSerialConsoleResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L54) [¶](#EnableSerialConsoleResult.UnmarshalJSON) added in v1.1.0 ``` func (e *[EnableSerialConsoleResult](#EnableSerialConsoleResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type EnableSerialConsoleResult. #### type [GetSerialConsoleSubscriptionNotFound](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L25) [¶](#GetSerialConsoleSubscriptionNotFound) ``` type GetSerialConsoleSubscriptionNotFound struct { // Error code Code *[string](/builtin#string) `json:"code,omitempty"` // Subscription not found message Message *[string](/builtin#string) `json:"message,omitempty"` } ``` GetSerialConsoleSubscriptionNotFound - Error saying that the provided subscription could not be found #### func (GetSerialConsoleSubscriptionNotFound) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L74) [¶](#GetSerialConsoleSubscriptionNotFound.MarshalJSON) added in v1.1.0 ``` func (g [GetSerialConsoleSubscriptionNotFound](#GetSerialConsoleSubscriptionNotFound)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type GetSerialConsoleSubscriptionNotFound. 
#### func (*GetSerialConsoleSubscriptionNotFound) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L82) [¶](#GetSerialConsoleSubscriptionNotFound.UnmarshalJSON) added in v1.1.0 ``` func (g *[GetSerialConsoleSubscriptionNotFound](#GetSerialConsoleSubscriptionNotFound)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type GetSerialConsoleSubscriptionNotFound. #### type [MicrosoftSerialConsoleClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/microsoftserialconsole_client.go#L27) [¶](#MicrosoftSerialConsoleClient) ``` type MicrosoftSerialConsoleClient struct { // contains filtered or unexported fields } ``` MicrosoftSerialConsoleClient contains the methods for the MicrosoftSerialConsoleClient group. Don't use this type directly, use NewMicrosoftSerialConsoleClient() instead. #### func [NewMicrosoftSerialConsoleClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/microsoftserialconsole_client.go#L37) [¶](#NewMicrosoftSerialConsoleClient) ``` func NewMicrosoftSerialConsoleClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[MicrosoftSerialConsoleClient](#MicrosoftSerialConsoleClient), [error](/builtin#error)) ``` NewMicrosoftSerialConsoleClient creates a new instance of MicrosoftSerialConsoleClient with the specified values. * subscriptionID - Subscription ID which uniquely identifies the Microsoft Azure subscription. The subscription ID forms part of the URI for every service call requiring it. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. #### func (*MicrosoftSerialConsoleClient) [DisableConsole](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/microsoftserialconsole_client.go#L56) [¶](#MicrosoftSerialConsoleClient.DisableConsole) ``` func (client *[MicrosoftSerialConsoleClient](#MicrosoftSerialConsoleClient)) DisableConsole(ctx [context](/context).[Context](/context#Context), defaultParam [string](/builtin#string), options *[MicrosoftSerialConsoleClientDisableConsoleOptions](#MicrosoftSerialConsoleClientDisableConsoleOptions)) ([MicrosoftSerialConsoleClientDisableConsoleResponse](#MicrosoftSerialConsoleClientDisableConsoleResponse), [error](/builtin#error)) ``` DisableConsole - Disables the Serial Console service for all VMs and VM scale sets in the provided subscription If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2018-05-01 * defaultParam - Default parameter. Leave the value as "default". * options - MicrosoftSerialConsoleClientDisableConsoleOptions contains the optional parameters for the MicrosoftSerialConsoleClient.DisableConsole method. 
Example [¶](#example-MicrosoftSerialConsoleClient.DisableConsole)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/7a2ac91de424f271cf91cc8009f3fe9ee8249086/specification/serialconsole/resource-manager/Microsoft.SerialConsole/stable/2018-05-01/examples/DisableConsoleExamples.json>

```
package main

import (
    "context"
    "log"

    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole"
)

func main() {
    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        log.Fatalf("failed to obtain a credential: %v", err)
    }
    ctx := context.Background()
    clientFactory, err := armserialconsole.NewClientFactory("<subscription-id>", cred, nil)
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    res, err := clientFactory.NewMicrosoftSerialConsoleClient().DisableConsole(ctx, "default", nil)
    if err != nil {
        log.Fatalf("failed to finish the request: %v", err)
    }
    // You could use response here. We use blank identifier for just demo purposes.
    _ = res
    // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
    // res.Value = armserialconsole.DisableSerialConsoleResult{
    //     Disabled: to.Ptr(true),
    // }
}
```

```
Output:
```

#### func (*MicrosoftSerialConsoleClient) [EnableConsole](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/microsoftserialconsole_client.go#L122) [¶](#MicrosoftSerialConsoleClient.EnableConsole)

```
func (client *[MicrosoftSerialConsoleClient](#MicrosoftSerialConsoleClient)) EnableConsole(ctx [context](/context).[Context](/context#Context), defaultParam [string](/builtin#string), options *[MicrosoftSerialConsoleClientEnableConsoleOptions](#MicrosoftSerialConsoleClientEnableConsoleOptions)) ([MicrosoftSerialConsoleClientEnableConsoleResponse](#MicrosoftSerialConsoleClientEnableConsoleResponse), [error](/builtin#error))
```

EnableConsole - Enables the Serial Console service for all VMs and VM scale sets in the provided subscription. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2018-05-01

* defaultParam - Default parameter. Leave the value as "default".
* options - MicrosoftSerialConsoleClientEnableConsoleOptions contains the optional parameters for the MicrosoftSerialConsoleClient.EnableConsole method.
Example [¶](#example-MicrosoftSerialConsoleClient.EnableConsole)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/7a2ac91de424f271cf91cc8009f3fe9ee8249086/specification/serialconsole/resource-manager/Microsoft.SerialConsole/stable/2018-05-01/examples/EnableConsoleExamples.json>

```
package main

import (
    "context"
    "log"

    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole"
)

func main() {
    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        log.Fatalf("failed to obtain a credential: %v", err)
    }
    ctx := context.Background()
    clientFactory, err := armserialconsole.NewClientFactory("<subscription-id>", cred, nil)
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    res, err := clientFactory.NewMicrosoftSerialConsoleClient().EnableConsole(ctx, "default", nil)
    if err != nil {
        log.Fatalf("failed to finish the request: %v", err)
    }
    // You could use response here. We use blank identifier for just demo purposes.
    _ = res
    // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
    // res.Value = armserialconsole.EnableSerialConsoleResult{
    //     Disabled: to.Ptr(false),
    // }
}
```

```
Output:
```

#### func (*MicrosoftSerialConsoleClient) [GetConsoleStatus](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/microsoftserialconsole_client.go#L188) [¶](#MicrosoftSerialConsoleClient.GetConsoleStatus)

```
func (client *[MicrosoftSerialConsoleClient](#MicrosoftSerialConsoleClient)) GetConsoleStatus(ctx [context](/context).[Context](/context#Context), defaultParam [string](/builtin#string), options *[MicrosoftSerialConsoleClientGetConsoleStatusOptions](#MicrosoftSerialConsoleClientGetConsoleStatusOptions)) ([MicrosoftSerialConsoleClientGetConsoleStatusResponse](#MicrosoftSerialConsoleClientGetConsoleStatusResponse), [error](/builtin#error))
```

GetConsoleStatus - Gets whether or not Serial Console is disabled for a given subscription. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2018-05-01

* defaultParam - Default parameter. Leave the value as "default".
* options - MicrosoftSerialConsoleClientGetConsoleStatusOptions contains the optional parameters for the MicrosoftSerialConsoleClient.GetConsoleStatus method.
Example [¶](#example-MicrosoftSerialConsoleClient.GetConsoleStatus)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/7a2ac91de424f271cf91cc8009f3fe9ee8249086/specification/serialconsole/resource-manager/Microsoft.SerialConsole/stable/2018-05-01/examples/SerialConsoleStatus.json>

```
package main

import (
    "context"
    "log"

    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole"
)

func main() {
    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        log.Fatalf("failed to obtain a credential: %v", err)
    }
    ctx := context.Background()
    clientFactory, err := armserialconsole.NewClientFactory("<subscription-id>", cred, nil)
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    res, err := clientFactory.NewMicrosoftSerialConsoleClient().GetConsoleStatus(ctx, "default", nil)
    if err != nil {
        log.Fatalf("failed to finish the request: %v", err)
    }
    // You could use response here. We use blank identifier for just demo purposes.
    _ = res
    // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
    // res.Value = armserialconsole.Status{
    //     Disabled: to.Ptr(true),
    // }
}
```

```
Output:
```

#### func (*MicrosoftSerialConsoleClient) [ListOperations](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/microsoftserialconsole_client.go#L253) [¶](#MicrosoftSerialConsoleClient.ListOperations)

```
func (client *[MicrosoftSerialConsoleClient](#MicrosoftSerialConsoleClient)) ListOperations(ctx [context](/context).[Context](/context#Context), options *[MicrosoftSerialConsoleClientListOperationsOptions](#MicrosoftSerialConsoleClientListOperationsOptions)) ([MicrosoftSerialConsoleClientListOperationsResponse](#MicrosoftSerialConsoleClientListOperationsResponse), [error](/builtin#error))
```

ListOperations - Gets a list of Serial Console API operations. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2018-05-01

* options - MicrosoftSerialConsoleClientListOperationsOptions contains the optional parameters for the MicrosoftSerialConsoleClient.ListOperations method.

Example [¶](#example-MicrosoftSerialConsoleClient.ListOperations)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/7a2ac91de424f271cf91cc8009f3fe9ee8249086/specification/serialconsole/resource-manager/Microsoft.SerialConsole/stable/2018-05-01/examples/GetOperationsExample.json>

```
package main

import (
    "context"
    "log"

    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole"
)

func main() {
    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        log.Fatalf("failed to obtain a credential: %v", err)
    }
    ctx := context.Background()
    clientFactory, err := armserialconsole.NewClientFactory("<subscription-id>", cred, nil)
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    res, err := clientFactory.NewMicrosoftSerialConsoleClient().ListOperations(ctx, nil)
    if err != nil {
        log.Fatalf("failed to finish the request: %v", err)
    }
    // You could use response here. We use blank identifier for just demo purposes.
_ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. // res.Operations = armserialconsole.Operations{ // Value: []*armserialconsole.OperationsValueItem{ // { // Name: to.Ptr("Microsoft.SerialConsole/consoleServices/read"), // Display: &armserialconsole.OperationsValueItemDisplay{ // Description: to.Ptr("Retrieves the current subscription state"), // Operation: to.Ptr("Default"), // Provider: to.Ptr("Microsoft.SerialConsole"), // Resource: to.Ptr("Serial Console instance"), // }, // IsDataAction: to.Ptr("false"), // }, // { // Name: to.Ptr("Microsoft.SerialConsole/consoleServices/disableConsole/action"), // Display: &armserialconsole.OperationsValueItemDisplay{ // Description: to.Ptr("Disable Serial Console for a subscription"), // Operation: to.Ptr("Disable Console"), // Provider: to.Ptr("Microsoft.SerialConsole"), // Resource: to.Ptr("Serial Console instance"), // }, // IsDataAction: to.Ptr("false"), // }, // { // Name: to.Ptr("Microsoft.SerialConsole/consoleServices/enableConsole/action"), // Display: &armserialconsole.OperationsValueItemDisplay{ // Description: to.Ptr("Enable Serial Console for a subscription"), // Operation: to.Ptr("Enable Console"), // Provider: to.Ptr("Microsoft.SerialConsole"), // Resource: to.Ptr("Serial Console instance"), // }, // IsDataAction: to.Ptr("false"), // }}, // } } ``` ``` Output: ``` Share Format Run #### type [MicrosoftSerialConsoleClientDisableConsoleOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L35) [¶](#MicrosoftSerialConsoleClientDisableConsoleOptions) ``` type MicrosoftSerialConsoleClientDisableConsoleOptions struct { } ``` MicrosoftSerialConsoleClientDisableConsoleOptions contains the optional parameters for the MicrosoftSerialConsoleClient.DisableConsole method. #### type [MicrosoftSerialConsoleClientDisableConsoleResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/response_types.go#L13) [¶](#MicrosoftSerialConsoleClientDisableConsoleResponse) ``` type MicrosoftSerialConsoleClientDisableConsoleResponse struct { // Possible types are DisableSerialConsoleResult, GetSerialConsoleSubscriptionNotFound Value [any](/builtin#any) } ``` MicrosoftSerialConsoleClientDisableConsoleResponse contains the response from method MicrosoftSerialConsoleClient.DisableConsole. #### type [MicrosoftSerialConsoleClientEnableConsoleOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L41) [¶](#MicrosoftSerialConsoleClientEnableConsoleOptions) ``` type MicrosoftSerialConsoleClientEnableConsoleOptions struct { } ``` MicrosoftSerialConsoleClientEnableConsoleOptions contains the optional parameters for the MicrosoftSerialConsoleClient.EnableConsole method. 
#### type [MicrosoftSerialConsoleClientEnableConsoleResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/response_types.go#L19) [¶](#MicrosoftSerialConsoleClientEnableConsoleResponse) ``` type MicrosoftSerialConsoleClientEnableConsoleResponse struct { // Possible types are EnableSerialConsoleResult, GetSerialConsoleSubscriptionNotFound Value [any](/builtin#any) } ``` MicrosoftSerialConsoleClientEnableConsoleResponse contains the response from method MicrosoftSerialConsoleClient.EnableConsole. #### type [MicrosoftSerialConsoleClientGetConsoleStatusOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L47) [¶](#MicrosoftSerialConsoleClientGetConsoleStatusOptions) ``` type MicrosoftSerialConsoleClientGetConsoleStatusOptions struct { } ``` MicrosoftSerialConsoleClientGetConsoleStatusOptions contains the optional parameters for the MicrosoftSerialConsoleClient.GetConsoleStatus method. #### type [MicrosoftSerialConsoleClientGetConsoleStatusResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/response_types.go#L25) [¶](#MicrosoftSerialConsoleClientGetConsoleStatusResponse) ``` type MicrosoftSerialConsoleClientGetConsoleStatusResponse struct { // Possible types are Status, GetSerialConsoleSubscriptionNotFound Value [any](/builtin#any) } ``` MicrosoftSerialConsoleClientGetConsoleStatusResponse contains the response from method MicrosoftSerialConsoleClient.GetConsoleStatus. #### type [MicrosoftSerialConsoleClientListOperationsOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L53) [¶](#MicrosoftSerialConsoleClientListOperationsOptions) ``` type MicrosoftSerialConsoleClientListOperationsOptions struct { } ``` MicrosoftSerialConsoleClientListOperationsOptions contains the optional parameters for the MicrosoftSerialConsoleClient.ListOperations method. #### type [MicrosoftSerialConsoleClientListOperationsResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/response_types.go#L31) [¶](#MicrosoftSerialConsoleClientListOperationsResponse) ``` type MicrosoftSerialConsoleClientListOperationsResponse struct { [Operations](#Operations) } ``` MicrosoftSerialConsoleClientListOperationsResponse contains the response from method MicrosoftSerialConsoleClient.ListOperations. 
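Because the Disable/Enable/GetConsoleStatus responses expose their payload as an untyped `Value any` field, a caller typically needs a type switch to recover the concrete result. The sketch below is not part of the generated documentation; it covers the two possible types named in the struct comment for GetConsoleStatus, and it assumes the payload is stored by value (as suggested by the commented example output `res.Value = armserialconsole.Status{...}` above).

```
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole"
)

func main() {
    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        log.Fatalf("failed to obtain a credential: %v", err)
    }
    clientFactory, err := armserialconsole.NewClientFactory("<subscription-id>", cred, nil)
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    res, err := clientFactory.NewMicrosoftSerialConsoleClient().GetConsoleStatus(context.Background(), "default", nil)
    if err != nil {
        log.Fatalf("request failed: %v", err)
    }
    // Per the response struct comment, the payload is either Status or
    // GetSerialConsoleSubscriptionNotFound; assert which one we got.
    switch v := res.Value.(type) {
    case armserialconsole.Status:
        if v.Disabled != nil {
            fmt.Printf("serial console disabled: %v\n", *v.Disabled)
        }
    case armserialconsole.GetSerialConsoleSubscriptionNotFound:
        fmt.Println("subscription not found")
    default:
        fmt.Printf("unexpected payload type: %T\n", v)
    }
}
```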
#### type [Operations](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L58) [¶](#Operations) added in v0.2.0

```
type Operations struct {
    // A list of Serial Console operations
    Value []*[OperationsValueItem](#OperationsValueItem) `json:"value,omitempty"`
}
```

Operations - Serial Console operations

#### func (Operations) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L105) [¶](#Operations.MarshalJSON) added in v0.2.0

```
func (o [Operations](#Operations)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type Operations.

#### func (*Operations) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L112) [¶](#Operations.UnmarshalJSON) added in v1.1.0

```
func (o *[Operations](#Operations)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type Operations.

#### type [OperationsValueItem](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L63) [¶](#OperationsValueItem) added in v0.2.0

```
type OperationsValueItem struct {
    Display *[OperationsValueItemDisplay](#OperationsValueItemDisplay) `json:"display,omitempty"`
    IsDataAction *[string](/builtin#string) `json:"isDataAction,omitempty"`
    Name *[string](/builtin#string) `json:"name,omitempty"`
}
```

#### func (OperationsValueItem) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L132) [¶](#OperationsValueItem.MarshalJSON) added in v1.1.0

```
func (o [OperationsValueItem](#OperationsValueItem)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type OperationsValueItem.

#### func (*OperationsValueItem) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L141) [¶](#OperationsValueItem.UnmarshalJSON) added in v1.1.0

```
func (o *[OperationsValueItem](#OperationsValueItem)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type OperationsValueItem.

#### type [OperationsValueItemDisplay](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L69) [¶](#OperationsValueItemDisplay) added in v0.2.0

```
type OperationsValueItemDisplay struct {
    Description *[string](/builtin#string) `json:"description,omitempty"`
    Operation *[string](/builtin#string) `json:"operation,omitempty"`
    Provider *[string](/builtin#string) `json:"provider,omitempty"`
    Resource *[string](/builtin#string) `json:"resource,omitempty"`
}
```

#### func (OperationsValueItemDisplay) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L167) [¶](#OperationsValueItemDisplay.MarshalJSON) added in v1.1.0

```
func (o [OperationsValueItemDisplay](#OperationsValueItemDisplay)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type OperationsValueItemDisplay.

#### func (*OperationsValueItemDisplay) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L177) [¶](#OperationsValueItemDisplay.UnmarshalJSON) added in v1.1.0

```
func (o *[OperationsValueItemDisplay](#OperationsValueItemDisplay)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type OperationsValueItemDisplay.

#### type [ProxyResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L78) [¶](#ProxyResource)

```
type ProxyResource struct {
    // READ-ONLY; Resource Id
    ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"`

    // READ-ONLY; Resource name
    Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"`

    // READ-ONLY; Resource type
    Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"`
}
```

ProxyResource - The resource model definition for an ARM proxy resource. It will have everything other than the required location and tags.

#### func (ProxyResource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L206) [¶](#ProxyResource.MarshalJSON) added in v1.1.0

```
func (p [ProxyResource](#ProxyResource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type ProxyResource.

#### func (*ProxyResource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L215) [¶](#ProxyResource.UnmarshalJSON) added in v1.1.0

```
func (p *[ProxyResource](#ProxyResource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type ProxyResource.

#### type [Resource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L90) [¶](#Resource)

```
type Resource struct {
    // READ-ONLY; Resource Id
    ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"`

    // READ-ONLY; Resource name
    Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"`

    // READ-ONLY; Resource type
    Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"`
}
```

Resource - The Resource model definition.

#### func (Resource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L241) [¶](#Resource.MarshalJSON) added in v1.1.0

```
func (r [Resource](#Resource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type Resource.

#### func (*Resource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L250) [¶](#Resource.UnmarshalJSON) added in v1.1.0

```
func (r *[Resource](#Resource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type Resource.

#### type [SerialPort](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L102) [¶](#SerialPort)

```
type SerialPort struct {
    // The properties of the serial port.
    Properties *[SerialPortProperties](#SerialPortProperties) `json:"properties,omitempty"`

    // READ-ONLY; Resource Id
    ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"`

    // READ-ONLY; Resource name
    Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"`

    // READ-ONLY; Resource type
    Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"`
}
```

SerialPort - Represents the serial port of the parent resource.

#### func (SerialPort) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L276) [¶](#SerialPort.MarshalJSON) added in v1.1.0

```
func (s [SerialPort](#SerialPort)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type SerialPort.

#### func (*SerialPort) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L286) [¶](#SerialPort.UnmarshalJSON) added in v1.1.0

```
func (s *[SerialPort](#SerialPort)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type SerialPort.

#### type [SerialPortConnectResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L117) [¶](#SerialPortConnectResult)

```
type SerialPortConnectResult struct {
    // Connection string to the serial port of the resource.
    ConnectionString *[string](/builtin#string) `json:"connectionString,omitempty"`
}
```

SerialPortConnectResult - Returns a connection string to the serial port of the resource.

#### func (SerialPortConnectResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L315) [¶](#SerialPortConnectResult.MarshalJSON) added in v1.1.0

```
func (s [SerialPortConnectResult](#SerialPortConnectResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type SerialPortConnectResult.

#### func (*SerialPortConnectResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L322) [¶](#SerialPortConnectResult.UnmarshalJSON) added in v1.1.0

```
func (s *[SerialPortConnectResult](#SerialPortConnectResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type SerialPortConnectResult.

#### type [SerialPortListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L123) [¶](#SerialPortListResult)

```
type SerialPortListResult struct {
    // The list of serial ports.
    Value []*[SerialPort](#SerialPort) `json:"value,omitempty"`
}
```

SerialPortListResult - The list serial ports operation response.

#### func (SerialPortListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L342) [¶](#SerialPortListResult.MarshalJSON)

```
func (s [SerialPortListResult](#SerialPortListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type SerialPortListResult.

#### func (*SerialPortListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L349) [¶](#SerialPortListResult.UnmarshalJSON) added in v1.1.0

```
func (s *[SerialPortListResult](#SerialPortListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type SerialPortListResult.

#### type [SerialPortProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L129) [¶](#SerialPortProperties)

```
type SerialPortProperties struct {
    // Specifies whether the port is enabled for a serial console connection.
    State *[SerialPortState](#SerialPortState) `json:"state,omitempty"`
}
```

SerialPortProperties - The properties of the serial port.

#### func (SerialPortProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L369) [¶](#SerialPortProperties.MarshalJSON) added in v1.1.0

```
func (s [SerialPortProperties](#SerialPortProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type SerialPortProperties.

#### func (*SerialPortProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L376) [¶](#SerialPortProperties.UnmarshalJSON) added in v1.1.0

```
func (s *[SerialPortProperties](#SerialPortProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type SerialPortProperties.

#### type [SerialPortState](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/constants.go#L18) [¶](#SerialPortState)

```
type SerialPortState [string](/builtin#string)
```

SerialPortState - Specifies whether the port is enabled for a serial console connection.

```
const (
    SerialPortStateEnabled [SerialPortState](#SerialPortState) = "enabled"
    SerialPortStateDisabled [SerialPortState](#SerialPortState) = "disabled"
)
```

#### func [PossibleSerialPortStateValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/constants.go#L26) [¶](#PossibleSerialPortStateValues)

```
func PossibleSerialPortStateValues() [][SerialPortState](#SerialPortState)
```

PossibleSerialPortStateValues returns the possible values for the SerialPortState const type.

#### type [SerialPortsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/serialports_client.go#L26) [¶](#SerialPortsClient)

```
type SerialPortsClient struct {
    // contains filtered or unexported fields
}
```

SerialPortsClient contains the methods for the SerialPorts group. Don't use this type directly, use NewSerialPortsClient() instead.

#### func [NewSerialPortsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/serialports_client.go#L36) [¶](#NewSerialPortsClient)

```
func NewSerialPortsClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[SerialPortsClient](#SerialPortsClient), [error](/builtin#error))
```

NewSerialPortsClient creates a new instance of SerialPortsClient with the specified values.

* subscriptionID - Subscription ID which uniquely identifies the Microsoft Azure subscription. The subscription ID forms part of the URI for every service call requiring it.
* credential - used to authorize requests. Usually a credential from azidentity.
* options - pass nil to accept the default values.
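As a complement to the factory-based examples below, here is a minimal sketch (not part of the generated documentation) that constructs a SerialPortsClient directly with NewSerialPortsClient and compares a port's state against the SerialPortState constants; the subscription ID and resource names are placeholders.

```
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole"
)

func main() {
    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        log.Fatalf("failed to obtain a credential: %v", err)
    }
    // Construct the client directly instead of going through NewClientFactory.
    client, err := armserialconsole.NewSerialPortsClient("<subscription-id>", cred, nil)
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    res, err := client.Get(context.Background(), "myResourceGroup",
        "Microsoft.Compute", "virtualMachines", "myVM", "0", nil)
    if err != nil {
        log.Fatalf("request failed: %v", err)
    }
    // SerialPortsClientGetResponse embeds SerialPort, so Properties is
    // available on the response; compare against the typed constant.
    if res.Properties != nil && res.Properties.State != nil &&
        *res.Properties.State == armserialconsole.SerialPortStateEnabled {
        fmt.Println("serial port 0 is enabled")
    }
}
```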
#### func (*SerialPortsClient) [Connect](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/serialports_client.go#L59) [¶](#SerialPortsClient.Connect)

```
func (client *[SerialPortsClient](#SerialPortsClient)) Connect(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceProviderNamespace [string](/builtin#string), parentResourceType [string](/builtin#string), parentResource [string](/builtin#string), serialPort [string](/builtin#string), options *[SerialPortsClientConnectOptions](#SerialPortsClientConnectOptions)) ([SerialPortsClientConnectResponse](#SerialPortsClientConnectResponse), [error](/builtin#error))
```

Connect - Connect to serial port of the target resource. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2018-05-01

* resourceGroupName - The name of the resource group.
* resourceProviderNamespace - The namespace of the resource provider.
* parentResourceType - The resource type of the parent resource. For example: 'virtualMachines' or 'virtualMachineScaleSets'
* parentResource - The resource name, or subordinate path, for the parent of the serial port. For example: the name of the virtual machine.
* serialPort - The name of the serial port to connect to.
* options - SerialPortsClientConnectOptions contains the optional parameters for the SerialPortsClient.Connect method.

Example (ConnectToAScaleSetInstanceSerialPort) [¶](#example-SerialPortsClient.Connect-ConnectToAScaleSetInstanceSerialPort)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/7a2ac91de424f271cf91cc8009f3fe9ee8249086/specification/serialconsole/resource-manager/Microsoft.SerialConsole/stable/2018-05-01/examples/SerialPortConnectVMSS.json>

```
package main

import (
    "context"
    "log"

    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole"
)

func main() {
    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        log.Fatalf("failed to obtain a credential: %v", err)
    }
    ctx := context.Background()
    clientFactory, err := armserialconsole.NewClientFactory("<subscription-id>", cred, nil)
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    res, err := clientFactory.NewSerialPortsClient().Connect(ctx, "myResourceGroup", "Microsoft.Compute", "virtualMachineScaleSets", "myscaleset/virtualMachines/2", "0", nil)
    if err != nil {
        log.Fatalf("failed to finish the request: %v", err)
    }
    // You could use response here. We use blank identifier for just demo purposes.
    _ = res
    // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
    // res.SerialPortConnectResult = armserialconsole.SerialPortConnectResult{
    //     ConnectionString: to.Ptr("wss://eastus.gateway.serialconsole.azure.com/n/connector/{containerid}/sessions/{sessionId}/client"),
    // }
}
```

Example (ConnectToAVirtualMachineSerialPort) [¶](#example-SerialPortsClient.Connect-ConnectToAVirtualMachineSerialPort)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/7a2ac91de424f271cf91cc8009f3fe9ee8249086/specification/serialconsole/resource-manager/Microsoft.SerialConsole/stable/2018-05-01/examples/SerialPortConnectVM.json>

```
package main

import (
    "context"
    "log"

    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole"
)

func main() {
    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        log.Fatalf("failed to obtain a credential: %v", err)
    }
    ctx := context.Background()
    clientFactory, err := armserialconsole.NewClientFactory("<subscription-id>", cred, nil)
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    res, err := clientFactory.NewSerialPortsClient().Connect(ctx, "myResourceGroup", "Microsoft.Compute", "virtualMachines", "myVM", "0", nil)
    if err != nil {
        log.Fatalf("failed to finish the request: %v", err)
    }
    // You could use response here. We use blank identifier for just demo purposes.
    _ = res
    // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
    // res.SerialPortConnectResult = armserialconsole.SerialPortConnectResult{
    //     ConnectionString: to.Ptr("wss://eastus.gateway.serialconsole.azure.com/n/connector/{containerid}/sessions/{sessionId}/client"),
    // }
}
```

#### func (*SerialPortsClient) [Create](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/serialports_client.go#L130) [¶](#SerialPortsClient.Create)

```
func (client *[SerialPortsClient](#SerialPortsClient)) Create(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceProviderNamespace [string](/builtin#string), parentResourceType [string](/builtin#string), parentResource [string](/builtin#string), serialPort [string](/builtin#string), parameters [SerialPort](#SerialPort), options *[SerialPortsClientCreateOptions](#SerialPortsClientCreateOptions)) ([SerialPortsClientCreateResponse](#SerialPortsClientCreateResponse), [error](/builtin#error))
```

Create - Creates or updates a serial port. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2018-05-01

* resourceGroupName - The name of the resource group.
* resourceProviderNamespace - The namespace of the resource provider.
* parentResourceType - The resource type of the parent resource. For example: 'virtualMachines' or 'virtualMachineScaleSets'
* parentResource - The resource name, or subordinate path, for the parent of the serial port. For example: the name of the virtual machine.
* serialPort - The name of the serial port to create.
* parameters - Parameters supplied to create the serial port.
* options - SerialPortsClientCreateOptions contains the optional parameters for the SerialPortsClient.Create method.

Example [¶](#example-SerialPortsClient.Create)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/7a2ac91de424f271cf91cc8009f3fe9ee8249086/specification/serialconsole/resource-manager/Microsoft.SerialConsole/stable/2018-05-01/examples/CreateSerialPort.json>

```
package main

import (
    "context"
    "log"

    "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole"
)

func main() {
    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        log.Fatalf("failed to obtain a credential: %v", err)
    }
    ctx := context.Background()
    clientFactory, err := armserialconsole.NewClientFactory("<subscription-id>", cred, nil)
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    _, err = clientFactory.NewSerialPortsClient().Create(ctx, "myResourceGroup", "Microsoft.Compute", "virtualMachines", "myVM", "0", armserialconsole.SerialPort{
        Properties: &armserialconsole.SerialPortProperties{
            State: to.Ptr(armserialconsole.SerialPortStateEnabled),
        },
    }, nil)
    if err != nil {
        log.Fatalf("failed to finish the request: %v", err)
    }
}
```

#### func (*SerialPortsClient) [Delete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/serialports_client.go#L200) [¶](#SerialPortsClient.Delete)

```
func (client *[SerialPortsClient](#SerialPortsClient)) Delete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceProviderNamespace [string](/builtin#string), parentResourceType [string](/builtin#string), parentResource [string](/builtin#string), serialPort [string](/builtin#string), options *[SerialPortsClientDeleteOptions](#SerialPortsClientDeleteOptions)) ([SerialPortsClientDeleteResponse](#SerialPortsClientDeleteResponse), [error](/builtin#error))
```

Delete - Deletes a serial port. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2018-05-01

* resourceGroupName - The name of the resource group.
* resourceProviderNamespace - The namespace of the resource provider.
* parentResourceType - The resource type of the parent resource. For example: 'virtualMachines' or 'virtualMachineScaleSets'
* parentResource - The resource name, or subordinate path, for the parent of the serial port. For example: the name of the virtual machine.
* serialPort - The name of the serial port to delete.
* options - SerialPortsClientDeleteOptions contains the optional parameters for the SerialPortsClient.Delete method.

Example [¶](#example-SerialPortsClient.Delete)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/7a2ac91de424f271cf91cc8009f3fe9ee8249086/specification/serialconsole/resource-manager/Microsoft.SerialConsole/stable/2018-05-01/examples/DeleteSerialPort.json>

```
package main

import (
    "context"
    "log"

    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole"
)

func main() {
    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        log.Fatalf("failed to obtain a credential: %v", err)
    }
    ctx := context.Background()
    clientFactory, err := armserialconsole.NewClientFactory("<subscription-id>", cred, nil)
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    _, err = clientFactory.NewSerialPortsClient().Delete(ctx, "myResourceGroup", "Microsoft.Compute", "virtualMachines", "myVM", "0", nil)
    if err != nil {
        log.Fatalf("failed to finish the request: %v", err)
    }
}
```

#### func (*SerialPortsClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/serialports_client.go#L261) [¶](#SerialPortsClient.Get)

```
func (client *[SerialPortsClient](#SerialPortsClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceProviderNamespace [string](/builtin#string), parentResourceType [string](/builtin#string), parentResource [string](/builtin#string), serialPort [string](/builtin#string), options *[SerialPortsClientGetOptions](#SerialPortsClientGetOptions)) ([SerialPortsClientGetResponse](#SerialPortsClientGetResponse), [error](/builtin#error))
```

Get - Gets the configured settings for a serial port. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2018-05-01

* resourceGroupName - The name of the resource group.
* resourceProviderNamespace - The namespace of the resource provider.
* parentResourceType - The resource type of the parent resource. For example: 'virtualMachines' or 'virtualMachineScaleSets'
* parentResource - The resource name, or subordinate path, for the parent of the serial port. For example: the name of the virtual machine.
* serialPort - The name of the serial port to connect to.
* options - SerialPortsClientGetOptions contains the optional parameters for the SerialPortsClient.Get method.

Example [¶](#example-SerialPortsClient.Get)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/7a2ac91de424f271cf91cc8009f3fe9ee8249086/specification/serialconsole/resource-manager/Microsoft.SerialConsole/stable/2018-05-01/examples/GetSerialPort.json>

```
package main

import (
    "context"
    "log"

    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole"
)

func main() {
    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        log.Fatalf("failed to obtain a credential: %v", err)
    }
    ctx := context.Background()
    clientFactory, err := armserialconsole.NewClientFactory("<subscription-id>", cred, nil)
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    res, err := clientFactory.NewSerialPortsClient().Get(ctx, "myResourceGroup", "Microsoft.Compute", "virtualMachines", "myVM", "0", nil)
    if err != nil {
        log.Fatalf("failed to finish the request: %v", err)
    }
    // You could use response here. We use blank identifier for just demo purposes.
    _ = res
    // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
    // res.SerialPort = armserialconsole.SerialPort{
    //     Name: to.Ptr("0"),
    //     Type: to.Ptr("Microsoft.SerialConsole/serialPorts"),
    //     ID: to.Ptr("/subscriptions/00000000-00000-0000-0000-000000000000/resourcegroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.SerialConsole/serialPorts/0"),
    //     Properties: &armserialconsole.SerialPortProperties{
    //         State: to.Ptr(armserialconsole.SerialPortStateEnabled),
    //     },
    // }
}
```

#### func (*SerialPortsClient) [List](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/serialports_client.go#L330) [¶](#SerialPortsClient.List)

```
func (client *[SerialPortsClient](#SerialPortsClient)) List(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceProviderNamespace [string](/builtin#string), parentResourceType [string](/builtin#string), parentResource [string](/builtin#string), options *[SerialPortsClientListOptions](#SerialPortsClientListOptions)) ([SerialPortsClientListResponse](#SerialPortsClientListResponse), [error](/builtin#error))
```

List - Lists all of the configured serial ports for a parent resource. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2018-05-01

* resourceGroupName - The name of the resource group.
* resourceProviderNamespace - The namespace of the resource provider.
* parentResourceType - The resource type of the parent resource. For example: 'virtualMachines' or 'virtualMachineScaleSets'
* parentResource - The resource name, or subordinate path, for the parent of the serial port. For example: the name of the virtual machine.
* options - SerialPortsClientListOptions contains the optional parameters for the SerialPortsClient.List method.

Example [¶](#example-SerialPortsClient.List)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/7a2ac91de424f271cf91cc8009f3fe9ee8249086/specification/serialconsole/resource-manager/Microsoft.SerialConsole/stable/2018-05-01/examples/ListSerialPort.json>

```
package main

import (
    "context"
    "log"

    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole"
)

func main() {
    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        log.Fatalf("failed to obtain a credential: %v", err)
    }
    ctx := context.Background()
    clientFactory, err := armserialconsole.NewClientFactory("<subscription-id>", cred, nil)
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    res, err := clientFactory.NewSerialPortsClient().List(ctx, "myResourceGroup", "Microsoft.Compute", "virtualMachines", "myVM", nil)
    if err != nil {
        log.Fatalf("failed to finish the request: %v", err)
    }
    // You could use response here. We use blank identifier for just demo purposes.
    _ = res
    // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
    // res.SerialPortListResult = armserialconsole.SerialPortListResult{
    //     Value: []*armserialconsole.SerialPort{
    //         {
    //             Name: to.Ptr("0"),
    //             Type: to.Ptr("Microsoft.SerialConsole/serialPorts"),
    //             ID: to.Ptr("/subscriptions/00000000-00000-0000-0000-000000000000/resourcegroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.SerialConsole/serialPorts/0"),
    //             Properties: &armserialconsole.SerialPortProperties{
    //                 State: to.Ptr(armserialconsole.SerialPortStateEnabled),
    //             },
    //         }},
    // }
}
```

#### func (*SerialPortsClient) [ListBySubscriptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/serialports_client.go#L391) [¶](#SerialPortsClient.ListBySubscriptions)

```
func (client *[SerialPortsClient](#SerialPortsClient)) ListBySubscriptions(ctx [context](/context).[Context](/context#Context), options *[SerialPortsClientListBySubscriptionsOptions](#SerialPortsClientListBySubscriptionsOptions)) ([SerialPortsClientListBySubscriptionsResponse](#SerialPortsClientListBySubscriptionsResponse), [error](/builtin#error))
```

ListBySubscriptions - Handles requests to list all SerialPort resources in a subscription. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2018-05-01

* options - SerialPortsClientListBySubscriptionsOptions contains the optional parameters for the SerialPortsClient.ListBySubscriptions method.

Example [¶](#example-SerialPortsClient.ListBySubscriptions)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/7a2ac91de424f271cf91cc8009f3fe9ee8249086/specification/serialconsole/resource-manager/Microsoft.SerialConsole/stable/2018-05-01/examples/ListSerialPortSubscription.json>

```
package main

import (
    "context"
    "log"

    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/serialconsole/armserialconsole"
)

func main() {
    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        log.Fatalf("failed to obtain a credential: %v", err)
    }
    ctx := context.Background()
    clientFactory, err := armserialconsole.NewClientFactory("<subscription-id>", cred, nil)
    if err != nil {
        log.Fatalf("failed to create client: %v", err)
    }
    res, err := clientFactory.NewSerialPortsClient().ListBySubscriptions(ctx, nil)
    if err != nil {
        log.Fatalf("failed to finish the request: %v", err)
    }
    // You could use response here. We use blank identifier for just demo purposes.
    _ = res
    // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
    // res.SerialPortListResult = armserialconsole.SerialPortListResult{
    //     Value: []*armserialconsole.SerialPort{
    //         {
    //             Name: to.Ptr("0"),
    //             Type: to.Ptr("Microsoft.SerialConsole/serialPorts"),
    //             ID: to.Ptr("/subscriptions/00000000-00000-0000-0000-000000000000/resourcegroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.SerialConsole/serialPorts/0"),
    //             Properties: &armserialconsole.SerialPortProperties{
    //                 State: to.Ptr(armserialconsole.SerialPortStateEnabled),
    //             },
    //         }},
    // }
}
```

#### type [SerialPortsClientConnectOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L135) [¶](#SerialPortsClientConnectOptions) added in v0.2.0

```
type SerialPortsClientConnectOptions struct {
}
```

SerialPortsClientConnectOptions contains the optional parameters for the SerialPortsClient.Connect method.

#### type [SerialPortsClientConnectResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/response_types.go#L36) [¶](#SerialPortsClientConnectResponse) added in v0.2.0

```
type SerialPortsClientConnectResponse struct {
    [SerialPortConnectResult](#SerialPortConnectResult)
}
```

SerialPortsClientConnectResponse contains the response from method SerialPortsClient.Connect.

#### type [SerialPortsClientCreateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L140) [¶](#SerialPortsClientCreateOptions) added in v0.2.0

```
type SerialPortsClientCreateOptions struct {
}
```

SerialPortsClientCreateOptions contains the optional parameters for the SerialPortsClient.Create method.

#### type [SerialPortsClientCreateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/response_types.go#L41) [¶](#SerialPortsClientCreateResponse) added in v0.2.0

```
type SerialPortsClientCreateResponse struct {
    [SerialPort](#SerialPort)
}
```

SerialPortsClientCreateResponse contains the response from method SerialPortsClient.Create.

#### type [SerialPortsClientDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L145) [¶](#SerialPortsClientDeleteOptions) added in v0.2.0

```
type SerialPortsClientDeleteOptions struct {
}
```

SerialPortsClientDeleteOptions contains the optional parameters for the SerialPortsClient.Delete method.

#### type [SerialPortsClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/response_types.go#L46) [¶](#SerialPortsClientDeleteResponse) added in v0.2.0

```
type SerialPortsClientDeleteResponse struct {
}
```

SerialPortsClientDeleteResponse contains the response from method SerialPortsClient.Delete.
#### type [SerialPortsClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L150) [¶](#SerialPortsClientGetOptions) added in v0.2.0

```
type SerialPortsClientGetOptions struct {
}
```

SerialPortsClientGetOptions contains the optional parameters for the SerialPortsClient.Get method.

#### type [SerialPortsClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/response_types.go#L51) [¶](#SerialPortsClientGetResponse) added in v0.2.0

```
type SerialPortsClientGetResponse struct {
    [SerialPort](#SerialPort)
}
```

SerialPortsClientGetResponse contains the response from method SerialPortsClient.Get.

#### type [SerialPortsClientListBySubscriptionsOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L156) [¶](#SerialPortsClientListBySubscriptionsOptions) added in v0.2.0

```
type SerialPortsClientListBySubscriptionsOptions struct {
}
```

SerialPortsClientListBySubscriptionsOptions contains the optional parameters for the SerialPortsClient.ListBySubscriptions method.

#### type [SerialPortsClientListBySubscriptionsResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/response_types.go#L56) [¶](#SerialPortsClientListBySubscriptionsResponse) added in v0.2.0

```
type SerialPortsClientListBySubscriptionsResponse struct {
    [SerialPortListResult](#SerialPortListResult)
}
```

SerialPortsClientListBySubscriptionsResponse contains the response from method SerialPortsClient.ListBySubscriptions.

#### type [SerialPortsClientListOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L161) [¶](#SerialPortsClientListOptions) added in v0.2.0

```
type SerialPortsClientListOptions struct {
}
```

SerialPortsClientListOptions contains the optional parameters for the SerialPortsClient.List method.

#### type [SerialPortsClientListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/response_types.go#L61) [¶](#SerialPortsClientListResponse) added in v0.2.0

```
type SerialPortsClientListResponse struct {
    [SerialPortListResult](#SerialPortListResult)
}
```

SerialPortsClientListResponse contains the response from method SerialPortsClient.List.

#### type [Status](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models.go#L166) [¶](#Status) added in v0.2.0

```
type Status struct {
    // Whether or not Serial Console is disabled.
    Disabled *[bool](/builtin#bool) `json:"disabled,omitempty"`
}
```

Status - Returns whether or not Serial Console is disabled.

#### func (Status) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L396) [¶](#Status.MarshalJSON) added in v1.1.0

```
func (s [Status](#Status)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type Status.

#### func (*Status) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/serialconsole/armserialconsole/v1.1.0/sdk/resourcemanager/serialconsole/armserialconsole/models_serde.go#L403) [¶](#Status.UnmarshalJSON) added in v1.1.0

```
func (s *[Status](#Status)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type Status.
Package ‘curl’

October 2, 2023

Type Package
Title A Modern and Flexible Web Client for R
Version 5.1.0
Description The curl() and curl_download() functions provide highly configurable drop-in replacements for base url() and download.file() with better performance, support for encryption (https, ftps), gzip compression, authentication, and other 'libcurl' goodies. The core of the package implements a framework for performing fully customized requests where data can be processed either in memory, on disk, or streaming via the callback or connection interfaces. Some knowledge of 'libcurl' is recommended; for a more user-friendly web client see the 'httr' package which builds on this package with http specific tools and logic.
License MIT + file LICENSE
SystemRequirements libcurl: libcurl-devel (rpm) or libcurl4-openssl-dev (deb).
URL https://jeroen.r-universe.dev/curl https://curl.se/libcurl/
BugReports https://github.com/jeroen/curl/issues
Suggests spelling, testthat (>= 1.0.0), knitr, jsonlite, rmarkdown, magrittr, httpuv (>= 1.4.4), webutils
VignetteBuilder knitr
Depends R (>= 3.0.0)
RoxygenNote 7.2.3
Encoding UTF-8
Language en-US
NeedsCompilation yes
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-4035-0289>), <NAME> [ctb], RStudio [cph]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-10-02 15:20:07 UTC

R topics documented:
curl, curl_download, curl_echo, curl_escape, curl_fetch_memory, curl_options, curl_upload, file_writer, handle, handle_cookies, ie_proxy, multi, multipart, multi_download, nslookup, parse_date, parse_headers, send_mail

curl                    Curl connection interface

Description
Drop-in replacement for base url that supports https, ftps, gzip, deflate, etc. Default behavior is identical to url, but request can be fully configured by passing a custom handle.

Usage
curl(url = "https://hb.cran.dev/get", open = "", handle = new_handle())

Arguments
url: character string. See examples.
open: character string. How to open the connection if it should be opened initially. Currently only "r" and "rb" are supported.
handle: a curl handle object

Details
As of version 2.3 curl connections support open(con, blocking = FALSE). In this case readBin and readLines will return immediately with data that is available without waiting. For such non-blocking connections the caller needs to call isIncomplete to check if the download has completed yet.
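For instance, the non-blocking pattern from Details might be used like this (a sketch, not taken from the package manual; the URL and buffer size are illustrative):

# Open the connection non-blocking and poll with isIncomplete()
# until the download has finished; readBin() returns whatever
# data is currently available without waiting.
con <- curl("https://hb.cran.dev/drip?duration=3&numbytes=15")
open(con, "rb", blocking = FALSE)
while (isIncomplete(con)) {
  buf <- readBin(con, raw(), 1024)
  if (length(buf)) cat("got", length(buf), "bytes\n")
}
close(con)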
Examples
## Not run:
con <- curl("https://hb.cran.dev/get")
readLines(con)

# Auto-opened connections can be recycled
open(con, "rb")
bin <- readBin(con, raw(), 999)
close(con)
rawToChar(bin)

# HTTP error
curl("https://hb.cran.dev/status/418", "r")

# Follow redirects
readLines(curl("https://hb.cran.dev/redirect/3"))

# Error after redirect
curl("https://hb.cran.dev/redirect-to?url=https://hb.cran.dev/status/418", "r")

# Auto decompress Accept-Encoding: gzip / deflate (rfc2616 #14.3)
readLines(curl("https://hb.cran.dev/gzip"))
readLines(curl("https://hb.cran.dev/deflate"))

# Binary support
buf <- readBin(curl("https://hb.cran.dev/bytes/98765", "rb"), raw(), 1e5)
length(buf)

# Read file from disk
test <- paste0("file://", system.file("DESCRIPTION"))
readLines(curl(test))

# Other protocols
read.csv(curl("ftp://cran.r-project.org/pub/R/CRAN_mirrors.csv"))
readLines(curl("ftps://test.rebex.net:990/readme.txt"))
readLines(curl("gopher://quux.org/1"))

# Streaming data
con <- curl("http://jeroen.github.io/data/diamonds.json", "r")
while(length(x <- readLines(con, n = 5))){
  print(x)
}

# Stream large dataset over https with gzip
library(jsonlite)
con <- gzcon(curl("https://jeroen.github.io/data/nycflights13.json.gz"))
nycflights <- stream_in(con)

## End(Not run)

curl_download           Download file to disk

Description
Libcurl implementation of C_download (the "internal" download method) with added support for https, ftps, gzip, etc. Default behavior is identical to download.file, but request can be fully configured by passing a custom handle.

Usage
curl_download(url, destfile, quiet = TRUE, mode = "wb", handle = new_handle())

Arguments
url: A character string naming the URL of a resource to be downloaded.
destfile: A character string with the name where the downloaded file is saved. Tilde-expansion is performed.
quiet: If TRUE, suppress status messages (if any), and the progress bar.
mode: A character string specifying the mode with which to write the file. Useful values are "w", "wb" (binary), "a" (append) and "ab".
handle: a curl handle object

Details
The main difference between curl_download and curl_fetch_disk is that curl_download checks the http status code before starting the download, and raises an error when status is non-successful. The behavior of curl_fetch_disk on the other hand is to proceed as normal and write the error page to disk in case of a non-success response.
For a more advanced download interface which supports concurrent requests and resuming large files, have a look at the multi_download function.

Value
Path of downloaded file (invisibly).

See Also
Advanced download interface: multi_download

Examples
# Download large file
## Not run:
url <- "http://www2.census.gov/acs2011_5yr/pums/csv_pus.zip"
tmp <- tempfile()
curl_download(url, tmp)
## End(Not run)

curl_echo               Echo Service

Description
This function is only for testing purposes. It starts a local httpuv server to echo the request body and content type in the response.

Usage
curl_echo(handle, port = 9359, progress = interactive(), file = NULL)

Arguments
handle: a curl handle object
port: the port number on which to run httpuv server
progress: show progress meter during http transfer
file: path or connection to write body. Default returns body as raw vector.
Examples
if(require('httpuv')){
  h <- new_handle(url = 'https://hb.cran.dev/post')
  handle_setform(h,
    foo = "blabla",
    bar = charToRaw("test"),
    myfile = form_file(system.file("DESCRIPTION"), "text/description"))

  # Echo the POST request data
  formdata <- curl_echo(h)

  # Show the multipart body
  cat(rawToChar(formdata$body))

  # Parse multipart
  webutils::parse_http(formdata$body, formdata$content_type)
}

curl_escape             URL encoding

Description
Escape all special characters (i.e. everything except for a-z, A-Z, 0-9, '-', '.', '_' or '~') for use in URLs.

Usage
curl_escape(url)
curl_unescape(url)

Arguments
url: A character vector (typically containing urls or parameters) to be encoded/decoded

Examples
# Escape strings
out <- curl_escape("foo = bar + 5")
curl_unescape(out)

# All non-ascii characters are encoded
mu <- "\u00b5"
curl_escape(mu)
curl_unescape(curl_escape(mu))

curl_fetch_memory       Fetch the contents of a URL

Description
Low-level bindings to write data from a URL into memory, disk or a callback function. These are mainly intended for httr; most users will be better off using the curl or curl_download function, or the http specific wrappers in the httr package.

Usage
curl_fetch_memory(url, handle = new_handle())
curl_fetch_disk(url, path, handle = new_handle())
curl_fetch_stream(url, fun, handle = new_handle())
curl_fetch_multi(
  url,
  done = NULL,
  fail = NULL,
  pool = NULL,
  data = NULL,
  handle = new_handle()
)
curl_fetch_echo(url, handle = new_handle())

Arguments
url: A character string naming the URL of a resource to be downloaded.
handle: a curl handle object
path: Path to save results
fun: Callback function. Should have one argument, which will be a raw vector.
done: callback function for completed request. Single argument with response data in same structure as curl_fetch_memory.
fail: callback function called on failed request. Argument contains error message.
pool: a multi handle created by new_pool. Default uses a global pool.
data: (advanced) callback function, file path, or connection object for writing incoming data. This callback should only be used for streaming applications, where small pieces of incoming data get written before the request has completed. The signature for the callback function is write(data, final = FALSE). If set to NULL the entire response gets buffered internally and returned in the done callback (which is usually what you want).

Details
The curl_fetch functions automatically raise an error upon protocol problems (network, disk, ssl) but do not implement application logic. For example, you need to check the status code of http requests yourself in the response, and deal with it accordingly.
Both curl_fetch_memory and curl_fetch_disk have a blocking and non-blocking C implementation. The latter is slightly slower but allows for interrupting the download prematurely (using e.g. CTRL+C or ESC). Interrupting is enabled when R runs in interactive mode or when getOption("curl_interrupt") == TRUE.
The curl_fetch_multi function is the asynchronous equivalent of curl_fetch_memory. It wraps multi_add to schedule requests which are executed concurrently when calling multi_run. For each successful request the done callback is triggered with response data. For failed requests (when curl_fetch_memory would raise an error), the fail function is triggered with the error message.
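To make the first point in Details concrete, here is a minimal sketch of manual status handling (not taken from the package manual; the >= 400 threshold is the usual HTTP convention, not something curl enforces):

# curl_fetch_memory() only errors on protocol problems; HTTP error
# statuses come back as a normal response, so check them yourself.
res <- curl_fetch_memory("https://hb.cran.dev/status/418")
if (res$status_code >= 400) {
  stop("HTTP error: ", res$status_code)
}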
Examples

# Load in memory
res <- curl_fetch_memory("https://hb.cran.dev/cookies/set?foo=123&bar=ftw")
res$content

# Save to disk
res <- curl_fetch_disk("https://hb.cran.dev/stream/10", tempfile())
res$content
readLines(res$content)

# Stream with callback
drip_url <- "https://hb.cran.dev/drip?duration=3&numbytes=15&code=200"
res <- curl_fetch_stream(drip_url, function(x){
  cat(rawToChar(x))
})

# Async API
data <- list()
success <- function(res){
  cat("Request done! Status:", res$status, "\n")
  data <<- c(data, list(res))
}
failure <- function(msg){
  cat("Oh noes! Request failed!", msg, "\n")
}
curl_fetch_multi("https://hb.cran.dev/get", success, failure)
curl_fetch_multi("https://hb.cran.dev/status/418", success, failure)
curl_fetch_multi("https://urldoesnotexist.xyz", success, failure)
multi_run()
str(data)

curl_options                 List curl version and options

Description

curl_version() shows the versions of libcurl, libssl and zlib and supported protocols.
curl_options() lists all options available in the current version of libcurl. The dataset
curl_symbols lists all symbols (including options) and provides more information about the
symbols, including when support was added/removed from libcurl.

Usage

curl_options(filter = "")
curl_symbols(filter = "")
curl_version()

Arguments

filter    string: only return options with string in name

Examples

# Available options
curl_options()

# List proxy options
curl_options("proxy")

# Symbol table
curl_symbols("proxy")

# Curl/ssl version info
curl_version()

curl_upload                  Upload a File

Description

Upload a file to an http://, ftp://, or sftp:// (ssh) server. Uploading to HTTP means performing
an HTTP PUT on that URL. Be aware that sftp is only available for libcurl clients built with
libssh2.

Usage

curl_upload(file, url, verbose = TRUE, reuse = TRUE, ...)

Arguments

file      connection object or path to an existing file on disk
url       where to upload, should start with e.g. ftp://
verbose   emit some progress output
reuse     try to keep alive and recycle connections when possible
...       other arguments passed to handle_setopt, for example a username and password.

Examples

## Not run:
# Upload package to winbuilder:
curl_upload('mypkg_1.3.tar.gz', 'ftp://win-builder.r-project.org/R-devel/')

## End(Not run)

file_writer                  Lazy File Writer

Description

Generates a closure that writes binary (raw) data to a file.

Usage

file_writer(path, append = FALSE)

Arguments

path      file name or path on disk
append    open file in append mode

Details

The writer function automatically opens the file on the first write and closes when it goes out of
scope, or explicitly by setting close = TRUE. This can be used for the data callback in multi_add()
or curl_fetch_multi() such that we only keep open file handles for active downloads. This
prevents running out of file descriptors when performing thousands of concurrent requests.

Value

Function with signature writer(data = raw(), close = FALSE)

Examples

# Doesn't open yet
tmp <- tempfile()
writer <- file_writer(tmp)

# Now it opens
writer(charToRaw("Hello!\n"))
writer(charToRaw("How are you?\n"))

# Close it!
writer(charToRaw("All done!\n"), close = TRUE)

# Check it worked
readLines(tmp)
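As noted in the details, a file_writer can also serve as the data callback of an async request, so
the file is only open while the download is active. A minimal sketch:

# Sketch: stream an async download straight to disk via file_writer
tmp <- tempfile()
h <- new_handle(url = "https://hb.cran.dev/get")
multi_add(h, data = file_writer(tmp))
multi_run()
readLines(tmp)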
handle                       Create and configure a curl handle

Description

Handles are the workhorses of libcurl. A handle is used to configure a request with custom options,
headers and payload. Once the handle has been set up, it can be passed to any of the download
functions such as curl, curl_download or curl_fetch_memory. The handle will maintain state in
between requests, including keep-alive connections, cookies and settings.

Usage

new_handle(...)
handle_setopt(handle, ..., .list = list())
handle_setheaders(handle, ..., .list = list())
handle_getheaders(handle)
handle_getcustom(handle)
handle_setform(handle, ..., .list = list())
handle_reset(handle)
handle_data(handle)

Arguments

...       named options / headers to be set in the handle. To send a file, see form_file. To list all allowed options, see curl_options
handle    Handle to modify
.list     A named list of options. This is useful if you've created a list of options elsewhere, avoiding the use of do.call().

Details

Use new_handle() to create a new clean curl handle that can be configured with custom options
and headers. Note that handle_setopt appends or overrides options in the handle, whereas
handle_setheaders replaces the entire set of headers with the new ones. The handle_reset
function resets only options/headers/forms in the handle. It does not affect active connections,
cookies or response data from previous requests. The safest way to perform multiple independent
requests is by using a separate handle for each request. There is very little performance overhead
in creating handles.

Value

A handle object (external pointer to the underlying curl handle). All functions modify the handle
in place but also return the handle so you can create a pipeline of operations.

See Also

Other handles: handle_cookies()

Examples

h <- new_handle()
handle_setopt(h, customrequest = "PUT")
handle_setform(h, a = "1", b = "2")
r <- curl_fetch_memory("https://hb.cran.dev/put", h)
cat(rawToChar(r$content))

# Or use the list form
h <- new_handle()
handle_setopt(h, .list = list(customrequest = "PUT"))
handle_setform(h, .list = list(a = "1", b = "2"))
r <- curl_fetch_memory("https://hb.cran.dev/put", h)
cat(rawToChar(r$content))
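The examples above only exercise handle_setopt and handle_setform. A minimal sketch of
handle_setheaders (header names and the /headers endpoint are illustrative), keeping in mind
that each call replaces the full header set:

# Sketch: set custom request headers on a handle
h <- new_handle()
handle_setheaders(h, "Accept" = "application/json", "X-Custom" = "demo")
res <- curl_fetch_memory("https://hb.cran.dev/headers", handle = h)
cat(rawToChar(res$content))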
handle_cookies               Extract cookies from a handle

Description

The handle_cookies function returns a data frame with 7 columns as specified in the netscape
cookie file format.

Usage

handle_cookies(handle)

Arguments

handle    a curl handle object

See Also

Other handles: handle

Examples

h <- new_handle()
handle_cookies(h)

# Server sets cookies
req <- curl_fetch_memory("https://hb.cran.dev/cookies/set?foo=123&bar=ftw", handle = h)
handle_cookies(h)

# Server deletes cookies
req <- curl_fetch_memory("https://hb.cran.dev/cookies/delete?foo", handle = h)
handle_cookies(h)

# Cookies will survive a reset!
handle_reset(h)
handle_cookies(h)

ie_proxy                     Internet Explorer proxy settings

Description

Lookup and mimic the system proxy settings on Windows as set by Internet Explorer. This can be
used to configure curl to use the same proxy server.

Usage

ie_proxy_info()
ie_get_proxy_for_url(target_url = "http://www.google.com")

Arguments

target_url  url with host for which to lookup the proxy server

Details

The ie_proxy_info function looks up your current proxy settings as configured in IE under
"Internet Options" under "LAN Settings". The ie_get_proxy_for_url determines if and which
proxy should be used to connect to a particular URL. If your settings have an "automatic
configuration script" this involves downloading and executing a PAC file, which can take a while.

multi                        Async Concurrent Requests

Description

AJAX style concurrent requests, possibly using HTTP/2 multiplexing. Results are only available
via callback functions. Advanced use only! For downloading many files in parallel use
multi_download instead.

Usage

multi_add(handle, done = NULL, fail = NULL, data = NULL, pool = NULL)
multi_run(timeout = Inf, poll = FALSE, pool = NULL)
multi_set(total_con = 50, host_con = 6, multiplex = TRUE, pool = NULL)
multi_list(pool = NULL)
multi_cancel(handle)
new_pool(total_con = 100, host_con = 6, multiplex = TRUE)
multi_fdset(pool = NULL)

Arguments

handle    a curl handle with preconfigured url option.
done      callback function for completed request. Single argument with response data in same structure as curl_fetch_memory.
fail      callback function called on failed request. Argument contains error message.
data      (advanced) callback function, file path, or connection object for writing incoming data. This callback should only be used for streaming applications, where small pieces of incoming data get written before the request has completed. The signature for the callback function is write(data, final = FALSE). If set to NULL the entire response gets buffered internally and returned in the done callback (which is usually what you want).
pool      a multi handle created by new_pool. Default uses a global pool.
timeout   max time in seconds to wait for results. Use 0 to poll for results without waiting at all.
poll      If TRUE then return immediately after any of the requests has completed. May also be an integer in which case it returns after n requests have completed.
total_con max total concurrent connections.
host_con  max concurrent connections per host.
multiplex enable HTTP/2 multiplexing if supported by host and client.

Details

Requests are created in the usual way using a curl handle and added to the scheduler with
multi_add. This function returns immediately and does not perform the request yet. The user
needs to call multi_run which performs all scheduled requests concurrently. It returns when all
requests have completed, or in case of a timeout or SIGINT (e.g. if the user presses ESC or
CTRL+C in the console). In case of the latter, simply call multi_run again to resume pending
requests.

When the request succeeds, the done callback gets triggered with the response data. The structure
of this data is identical to curl_fetch_memory. When the request fails, the fail callback is
triggered with an error message. Note that failure here means something went wrong in performing
the request such as a connection failure; it does not check the http status code. Just like
curl_fetch_memory, the user has to implement application logic.

Raising an error within a callback function stops execution of that function but does not affect
other requests.

A single handle cannot be used for multiple simultaneous requests. However it is possible to add
new requests to a pool while it is running, so you can re-use a handle within the callback of a
request from that same handle. It is up to the user to make sure the same handle is not used in
concurrent requests.

The multi_cancel function can be used to cancel a pending request. It has no effect if the request
was already completed or canceled.

The multi_fdset function returns the file descriptors curl is polling currently, and also a timeout
parameter, the number of milliseconds an application should wait (at most) before proceeding. It
is equivalent to the curl_multi_fdset and curl_multi_timeout calls. It is handy for applications
that are expecting input (or writing output) through both curl and other file descriptors.
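A minimal sketch of cancelling a scheduled request before it is performed (URL and callbacks
illustrative):

# Sketch: schedule a request, then cancel it
h <- new_handle(url = "https://hb.cran.dev/delay/10")
multi_add(h, done = function(res) cat("done\n"),
             fail = function(msg) cat("fail:", msg, "\n"))
multi_cancel(h)  # no callback will fire for this handle
multi_run()      # nothing left to perform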
See Also

Advanced download interface: multi_download

Examples

results <- list()
success <- function(x){
  results <<- append(results, list(x))
}
failure <- function(str){
  cat(paste("Failed request:", str), file = stderr())
}

# This handle will take longest (3sec)
h1 <- new_handle(url = "https://hb.cran.dev/delay/3")
multi_add(h1, done = success, fail = failure)

# This handle writes data to a file
con <- file("output.txt")
h2 <- new_handle(url = "https://hb.cran.dev/post", postfields = "bla bla")
multi_add(h2, done = success, fail = failure, data = con)

# This handle raises an error
h3 <- new_handle(url = "https://urldoesnotexist.xyz")
multi_add(h3, done = success, fail = failure)

# Actually perform the requests
multi_run(timeout = 2)
multi_run()

# Check the file
readLines("output.txt")
unlink("output.txt")

multipart                    POST files or data

Description

Build multipart form data elements. The form_file function uploads a file. The form_data
function allows for posting a string or raw vector with a custom content-type.

Usage

form_file(path, type = NULL, name = NULL)
form_data(value, type = NULL)

Arguments

path      a string with a path to an existing file on disk
type      MIME content-type of the file.
name      a string with the file name to use for the upload
value     a character or raw vector to post

multi_download               Advanced download interface

Description

Download multiple files concurrently, with support for resuming large files. This function is based
on multi_run() and hence does not error in case any of the individual requests fail; you should
inspect the return value to find out which of the downloads were completed successfully.

Usage

multi_download(
  urls,
  destfiles = NULL,
  resume = FALSE,
  progress = TRUE,
  timeout = Inf,
  multiplex = FALSE,
  ...
)

Arguments

urls      vector with files to download
destfiles vector (of equal length as urls) with paths of output files, or NULL to use the basename of urls.
resume    if the file already exists, resume the download. Note that this may change server responses, see details.
progress  print download progress information
timeout   in seconds, passed to multi_run
multiplex passed to new_pool
...       extra handle options passed to each request via new_handle

Details

Upon completion of all requests, this function returns a data frame with results. The success
column indicates if a request was successfully completed (regardless of the HTTP status code). If
it failed, e.g. due to a networking issue, the error message is in the error column. A success
value of NA indicates that the request was still in progress when the function was interrupted or
reached the elapsed timeout; perhaps the download can be resumed if the server supports it.

It is also important to inspect the status_code column to see if any of the requests were successful
but had a non-success HTTP code, in which case the downloaded file probably contains an error
page instead of the requested content.

Note that when you set resume = TRUE you should expect HTTP-206 or HTTP-416 responses. The
latter could indicate that the file was already complete, hence there was no content left to resume
from the server. If you try to resume a file download but the server does not support this, success
is FALSE and the file will not be touched. In fact, if we request a download to be resumed and
the server responds HTTP 200 instead of HTTP 206, libcurl will error and not download anything,
because this probably means the server did not respect our range request and is sending us the full
file.
About HTTP/2: Availability of HTTP/2 can increase the performance when making many parallel
requests to a server, because HTTP/2 can multiplex many requests over a single TCP connection.
Support for HTTP/2 depends on the version of libcurl that your system has, and the TLS back-end
that is in use; check curl_version. For clients or servers without HTTP/2, curl makes at most 6
connections per host over which it distributes the queued downloads.

On Windows and macOS you can switch the active TLS backend by setting an environment variable
CURL_SSL_BACKEND in your ~/.Renviron file. On Windows you can switch between SecureChannel
(default) and OpenSSL, where only the latter supports HTTP/2. On macOS you can use either
SecureTransport or LibreSSL; the default varies by macOS version.

Value

The function returns a data frame with one row for each downloaded file and the following columns:

• success if the HTTP request was successfully performed, regardless of the response status code. This is FALSE in case of a network error, or in case you tried to resume from a server that did not support this. A value of NA means the download was interrupted while in progress.
• status_code the HTTP status code from the request. A successful download is usually 200 for full requests or 206 for resumed requests. Anything else could indicate that the downloaded file contains an error page instead of the requested content.
• resumefrom the file size before the request, in case a download was resumed.
• url final url (after redirects) of the request.
• destfile downloaded file on disk.
• error if success == FALSE this column contains an error message.
• type the Content-Type response header value.
• modified the Last-Modified response header value.
• time total elapsed download time for this file in seconds.
• headers vector with http response headers for the request.

Examples

## Not run:
# Example: some large files
urls <- sprintf(
  "https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2021-%02d.parquet", 1:12)
res <- multi_download(urls, resume = TRUE) # You can interrupt (ESC) and resume

# Example: revdep checker
# Download all reverse dependencies for the 'curl' package from CRAN:
pkg <- 'curl'
mirror <- 'https://cloud.r-project.org'
db <- available.packages(repos = mirror)
packages <- c(pkg, tools::package_dependencies(pkg, db = db, reverse = TRUE)[[pkg]])
versions <- db[packages,'Version']
urls <- sprintf("%s/src/contrib/%s_%s.tar.gz", mirror, packages, versions)
res <- multi_download(urls)
all.equal(unname(tools::md5sum(res$destfile)), unname(db[packages, 'MD5sum']))
# And then you could use e.g.: tools:::check_packages_in_dir()

# Example: URL checker
pkg_url_checker <- function(dir){
  db <- tools:::url_db_from_package_sources(dir)
  res <- multi_download(db$URL, rep('/dev/null', nrow(db)), nobody=TRUE)
  db$OK <- res$status_code == 200
  db
}

# Use a local package source directory
pkg_url_checker(".")

## End(Not run)

nslookup                     Lookup a hostname

Description

The nslookup function is similar to nsl but works on all platforms and can resolve ipv6 addresses
if supported by the OS. Default behavior raises an error if the lookup fails.

Usage

nslookup(host, ipv4_only = FALSE, multiple = FALSE, error = TRUE)
has_internet()

Arguments

host      a string with a hostname
ipv4_only always return an ipv4 address. Set to FALSE to allow for ipv6 as well.
multiple  returns multiple ip addresses if possible
error     raise an error for a failed DNS lookup. Otherwise returns NULL.
Details

The has_internet function tests for internet connectivity by performing a dns lookup. If a proxy
server is detected, it will also check for connectivity by connecting via the proxy.

Examples

# Should always work if we are online
nslookup("www.r-project.org")

# If your OS supports IPv6
nslookup("ipv6.test-ipv6.com", ipv4_only = FALSE, error = FALSE)

parse_date                   Parse date/time

Description

Can be used to parse dates appearing in http response headers such as Expires or Last-Modified.
Automatically recognizes most common formats. If the format is known, strptime might be easier.

Usage

parse_date(datestring)

Arguments

datestring  a string consisting of a timestamp

Examples

# Parse dates in many formats
parse_date("Sunday, 06-Nov-94 08:49:37 GMT")
parse_date("06 Nov 1994 08:49:37")
parse_date("20040911 +0200")

parse_headers                Parse response headers

Description

Parse response header data as returned by curl_fetch, either as a set of strings or into a named
list.

Usage

parse_headers(txt, multiple = FALSE)
parse_headers_list(txt)

Arguments

txt       raw or character vector with the header data
multiple  parse multiple sets of headers separated by a blank line. See details.

Details

The parse_headers_list function parses the headers into a normalized (lowercase field names,
trimmed whitespace) named list.

If a request has followed redirects, the data can contain multiple sets of headers. When multiple =
TRUE, the function returns a list with the response headers for each request. By default it only
returns the headers of the final request.

Examples

req <- curl_fetch_memory("https://hb.cran.dev/redirect/3")
parse_headers(req$headers)
parse_headers(req$headers, multiple = TRUE)

# Parse into named list
parse_headers_list(req$headers)

send_mail                    Send email

Description

Use the curl SMTP client to send an email. The message argument must be a properly formatted
RFC2822 email message with From/To/Subject headers and CRLF line breaks.

Usage

send_mail(
  mail_from,
  mail_rcpt,
  message,
  smtp_server = "smtp://localhost",
  use_ssl = c("try", "no", "force"),
  verbose = TRUE,
  ...
)

Arguments

mail_from  email address of the sender.
mail_rcpt  one or more recipient email addresses. Do not include names; these go into the message headers.
message    either a string or connection with a (properly formatted) email message, including sender/recipient/subject headers. See example.
smtp_server  hostname or address of the SMTP server, or an smtp:// or smtps:// URL. See "Specifying the server, port, and protocol" below.
use_ssl    Request to upgrade the connection to SSL using the STARTTLS command, see CURLOPT_USE_SSL for details. Default will try to use SSL and proceed as normal otherwise.
verbose    print output
...        other options passed to handle_setopt. In most cases you will need to set a username and password to authenticate with the SMTP server.

Specifying the server, port, and protocol

The smtp_server argument takes a hostname, or an SMTP URL:

• mail.example.com - hostname only
• mail.example.com:587 - hostname and port
• smtp://mail.example.com - protocol and hostname
• smtp://mail.example.com:587 - full SMTP URL
• smtps://mail.example.com:465 - full SMTPS URL

By default, the port will be 25, unless smtps:// is specified, in which case the default will be 465
instead.

Encrypting connections via SMTPS or STARTTLS

There are two different ways in which SMTP can be encrypted: SMTPS servers run on a port
which only accepts encrypted connections, similar to HTTPS.
Alternatively, a regular insecure smtp connection can be "upgraded" to a secure TLS connection
using the STARTTLS command. It is important to know which method your server expects.

If your smtp server listens on port 465, then use a smtps://hostname:465 URL. The SMTPS
protocol guarantees that TLS will be used to protect all communications from the start.

If your email server listens on port 25 or 587, use an smtp:// URL in combination with the
use_ssl parameter to control if the connection should be upgraded with STARTTLS. The default
value "try" will opportunistically try to upgrade to a secure connection if the server supports it,
and proceed as normal otherwise.

Examples

## Not run:
# Set sender and recipients (email addresses only)
recipients <- readline("Enter your email address to receive test: ")
sender <- '<EMAIL>'

# Full email message in RFC2822 format
message <- 'From: "R (curl package)" <<EMAIL>>
To: "<NAME>" <<EMAIL>>
Subject: Hello R user!

Dear R user,

I am sending this email using curl.'

# Send the email
send_mail(sender, recipients, message, smtp_server = 'smtps://smtp.gmail.com',
  username = 'curlpackage', password = 'qyyjddvphjsrbnlm')

## End(Not run)
Package ‘EMMIXSSL’

October 18, 2022

Type Package
Title Semi-Supervised Gaussian Mixture Model with a Missing-Data Mechanism
Version 1.1.1
Author <NAME>, <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
Description The algorithm of semi-supervised learning based on finite Gaussian mixture models
  with a missing-data mechanism is designed for fitting a g-class Gaussian mixture model via
  maximum likelihood (ML). It is proposed to treat the labels of the unclassified features as
  missing data and to introduce a framework for their missingness, as in the pioneering work of
  Rubin (1976) on missingness in incomplete-data analysis. This dependency in the missingness
  pattern can be leveraged to provide additional information about the optimal classifier as
  specified by Bayes' rule.
Depends R (>= 3.1.0), mvtnorm, stats
License GPL-3
Encoding UTF-8
LazyData true
RoxygenNote 7.2.0
NeedsCompilation no
Repository CRAN
Date/Publication 2022-10-18 12:17:58 UTC

R topics documented:

Classifier_Bayes, cov2vec, discriminant_beta, EMMIXSSL, errorrate, gastrodata,
gastro_label_binary, gastro_label_trinary, get_clusterprobs, get_entropy, initialvalue,
list2par, loglk_full, loglk_ig, loglk_miss, logsumexp, makelabelmatrix,
neg_objective_function, normalise_logprob, par2list, pro2vec, rlabel, rmix, vec2cov, vec2pro

Classifier_Bayes             Classifier based on Bayes rule

Description

A classifier based on Bayes' rule, that is, maximum a posteriori probabilities of class membership.

Usage

Classifier_Bayes(dat, n, p, g, pi, mu, sigma, ncov = 2)

Arguments

dat       An n × p matrix where each row represents an individual observation
n         Number of observations.
p         Dimension of observation vector.
g         Number of classes.
pi        A g-dimensional vector for the initial values of the mixing proportions.
mu        A p × g matrix for the initial values of the location parameters.
sigma     A p × p covariance matrix if ncov=1, or a list of g covariance matrices with dimension p × p × g if ncov=2.
ncov      Options of structure of sigma matrix; the default value is 2; ncov = 1 for a common covariance matrix; ncov = 2 for the unequal covariance/scale matrices.

Details

The posterior probability can be expressed as

$$\tau_i(y_j;\theta) = \mathrm{Prob}\{z_{ij} = 1 \mid y_j\} = \frac{\pi_i\,\phi(y_j;\mu_i,\Sigma_i)}{\sum_{h=1}^{g}\pi_h\,\phi(y_j;\mu_h,\Sigma_h)},$$

where $\phi$ is a normal probability function with mean $\mu_i$ and covariance matrix $\Sigma_i$, and
$z_{ij}$ is a zero-one indicator variable denoting the class of origin. The Bayes' classifier of
allocation assigns an entity with feature vector $y_j$ to class $C_k$ if $k = \arg\max_i \tau_i(y_j;\theta)$.

Value

cluster   A vector of the class membership.

Examples

n<-150
pi<-c(0.25,0.25,0.25,0.25)
sigma<-array(0,dim=c(3,3,4))
sigma[,,1]<-diag(1,3)
sigma[,,2]<-diag(2,3)
sigma[,,3]<-diag(3,3)
sigma[,,4]<-diag(4,3)
mu<-matrix(c(0.2,0.3,0.4,0.2,0.7,0.6,0.1,0.7,1.6,0.2,1.7,0.6),3,4)
dat<-rmix(n=n,pi=pi,mu=mu,sigma=sigma,ncov=2)
cluster<-Classifier_Bayes(dat=dat$Y,n=150,p=3,g=4,mu=mu,sigma=sigma,pi=pi,ncov=2)

cov2vec                      Transform a variance matrix into a vector

Description

Transform a variance matrix into a vector, i.e., Sigma = R^T * R.

Usage

cov2vec(sigma)

Arguments

sigma     A variance matrix

Details

The variance matrix is decomposed by computing the Choleski factorization of a real symmetric
positive-definite square matrix. The upper triangular factor of the Choleski decomposition is then
stored into a vector.
Value

par       A vector representing a variance matrix

discriminant_beta            Discriminant function

Description

Discriminant function in the particular case of g=2 classes with an equal-covariance matrix.

Usage

discriminant_beta(pi, mu, sigma)

Arguments

pi        A g-dimensional vector for the initial values of the mixing proportions.
mu        A p × g matrix for the initial values of the location parameters.
sigma     A p × p covariance matrix if ncov=1, or a list of g covariance matrices with dimension p × p × g if ncov=2.

Details

The discriminant function in the particular case of g=2 classes with an equal-covariance matrix
can be expressed as

$$d(y_i;\beta) = \beta_0 + \beta_1 y_i,$$

where $\beta_0 = \log\frac{\pi_1}{\pi_2} - \frac{1}{2}\,\frac{\mu_1^2 - \mu_2^2}{\sigma^2}$ and $\beta_1 = \frac{\mu_1 - \mu_2}{\sigma^2}$.

Value

beta0     An intercept of the discriminant function
beta      A coefficient of the discriminant function
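This topic has no example in the manual; a minimal sketch for the univariate two-class case
described above (all values illustrative):

# Sketch: discriminant coefficients for g = 2, p = 1, common covariance
pi <- c(0.4, 0.6)
mu <- matrix(c(0, 1), nrow = 1, ncol = 2)  # class means
sigma <- matrix(1, 1, 1)                   # common variance
b <- discriminant_beta(pi = pi, mu = mu, sigma = sigma)
b$beta0; b$beta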
EMMIXSSL                     Fitting Gaussian mixture models

Description

Fits a Gaussian mixture model to a completely classified dataset or to an incompletely classified
dataset, with or without the missing-data mechanism.

Usage

EMMIXSSL(
  dat,
  zm,
  pi,
  mu,
  sigma,
  ncov,
  xi = NULL,
  type,
  iter.max = 500,
  eval.max = 500,
  rel.tol = 1e-06,
  sing.tol = 1e-20
)

Arguments

dat       An n × p matrix where each row represents an individual observation
zm        An n-dimensional vector containing the class labels including the missing label denoted as NA.
pi        A g-dimensional vector for the initial values of the mixing proportions.
mu        A p × g matrix for the initial values of the location parameters.
sigma     A p × p covariance matrix if ncov=1, or a list of g covariance matrices with dimension p × p × g if ncov=2.
ncov      Options of structure of sigma matrix; the default value is 2; ncov = 1 for a common covariance matrix; ncov = 2 for the unequal covariance/scale matrices.
xi        A 2-dimensional vector containing the initial values of the coefficients in the logistic function of the Shannon entropy.
type      Three types of Gaussian mixture models: 'ign' indicates fitting the model to a partially classified sample on the basis of the likelihood that ignores the missing-label mechanism, 'full' indicates fitting the model to a partially classified sample on the basis of the full likelihood, taking into account the missing-label mechanism, and 'com' indicates fitting the model to a completely classified sample.
iter.max  Maximum number of iterations allowed. Defaults to 500.
eval.max  Maximum number of evaluations of the objective function allowed. Defaults to 500.
rel.tol   Relative tolerance. Defaults to 1e-06.
sing.tol  Singular convergence tolerance; defaults to 1e-20.

Value

objective   Value of the objective likelihood
convergence Value of convergence
iteration   Number of iterations
pi          Estimated vector of the mixing proportions.
mu          Estimated matrix of the location parameters.
sigma       Estimated covariance matrix
xi          Estimated coefficient vector for a logistic function of the Shannon entropy

Examples

n<-150
pi<-c(0.25,0.25,0.25,0.25)
sigma<-array(0,dim=c(3,3,4))
sigma[,,1]<-diag(1,3)
sigma[,,2]<-diag(2,3)
sigma[,,3]<-diag(3,3)
sigma[,,4]<-diag(4,3)
mu<-matrix(c(0.2,0.3,0.4,0.2,0.7,0.6,0.1,0.7,1.6,0.2,1.7,0.6),3,4)
dat<-rmix(n=n,pi=pi,mu=mu,sigma=sigma,ncov=2)
xi<-c(-0.5,1)
m<-rlabel(dat=dat$Y,pi=pi,mu=mu,sigma=sigma,xi=xi,ncov=2)
zm<-dat$clust
zm[m==1]<-NA
inits<-initialvalue(g=4,zm=zm,dat=dat$Y,ncov=2)
## Not run:
fit_pc<-EMMIXSSL(dat=dat$Y,zm=zm,pi=inits$pi,mu=inits$mu,sigma=inits$sigma,xi=xi,type='full',ncov=2)

## End(Not run)

errorrate                    Error rate of the Bayes rule for the two-class Gaussian homoscedastic model

Description

The optimal error rate of the Bayes rule for the two-class Gaussian homoscedastic model.

Usage

errorrate(beta0, beta, pi, mu, sigma)

Arguments

beta0     Intercept of the discriminant function.
beta      Coefficient vector of the discriminant function.
pi        A g-dimensional vector for the initial values of the mixing proportions.
mu        A p × g matrix for the initial values of the location parameters.
sigma     A p × p covariance matrix if ncov=1, or a list of g covariance matrices with dimension p × p × g if ncov=2.

Details

The optimal error rate of the Bayes rule for the two-class Gaussian homoscedastic model can be
expressed as

$$\mathrm{err}(\theta) = \pi_1\,\Phi\!\left(-\frac{\beta_0 + \beta^{T}\mu_1}{(\beta^{T}\Sigma\beta)^{1/2}}\right) + \pi_2\,\Phi\!\left(\frac{\beta_0 + \beta^{T}\mu_2}{(\beta^{T}\Sigma\beta)^{1/2}}\right),$$

where $\Phi$ denotes the standard normal distribution function.

Value

errval    A vector of error rate.
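A minimal sketch combining discriminant_beta and errorrate for a univariate two-class model
(values illustrative):

# Sketch: optimal error rate of the Bayes rule (g = 2, p = 1)
pi <- c(0.5, 0.5)
mu <- matrix(c(0, 1), 1, 2)
sigma <- matrix(1, 1, 1)
b <- discriminant_beta(pi = pi, mu = mu, sigma = sigma)
errorrate(beta0 = b$beta0, beta = b$beta, pi = pi, mu = mu, sigma = sigma)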
gastrodata                   Gastrointestinal dataset

Description

The collected dataset is composed of 76 colonoscopic videos (recorded with both White Light (WL)
and Narrow Band Imaging (NBI)), the histology (classification ground truth), and the endoscopists'
opinions (including 4 experts and 3 beginners). There are n = 76 observations, and each observation
consists of 698 features extracted from colonoscopic videos on patients with gastrointestinal lesions.

References

http://www.depeca.uah.es/colonoscopy_dataset/

gastro_label_binary          Gastrointestinal binary labels

Description

A panel of seven endoscopists viewed the videos and determined which patients need resection
(malignant) and which need no resection (benign).

References

http://www.depeca.uah.es/colonoscopy_dataset/

gastro_label_trinary         Gastrointestinal trinary labels

Description

Gastrointestinal trinary ground truth (Adenoma, Serrated, and Hyperplastic).

References

http://www.depeca.uah.es/colonoscopy_dataset/

get_clusterprobs             Posterior probability

Description

Get posterior probabilities of class membership.

Usage

get_clusterprobs(dat, n, p, g, pi, mu, sigma, ncov = 2)

Arguments

dat       An n × p matrix where each row represents an individual observation
n         Number of observations.
p         Dimension of observation vector.
g         Number of multivariate normal classes.
pi        A g-dimensional vector for the initial values of the mixing proportions.
mu        A p × g matrix for the initial values of the location parameters.
sigma     A p × p covariance matrix if ncov=1, or a list of g covariance matrices with dimension p × p × g if ncov=2.
ncov      Options of structure of sigma matrix; the default value is 2; ncov = 1 for a common covariance matrix; ncov = 2 for the unequal covariance/scale matrices.

Details

The posterior probability can be expressed as

$$\tau_i(y_j;\theta) = \mathrm{Prob}\{z_{ij} = 1 \mid y_j\} = \frac{\pi_i\,\phi(y_j;\mu_i,\Sigma_i)}{\sum_{h=1}^{g}\pi_h\,\phi(y_j;\mu_h,\Sigma_h)},$$

where $\phi$ is a normal probability function with mean $\mu_i$ and covariance matrix $\Sigma_i$, and
$z_{ij}$ is a zero-one indicator variable denoting the class of origin.

Value

clusprobs  Posterior probabilities of class membership for the ith entity

Examples

n<-150
pi<-c(0.25,0.25,0.25,0.25)
sigma<-array(0,dim=c(3,3,4))
sigma[,,1]<-diag(1,3)
sigma[,,2]<-diag(2,3)
sigma[,,3]<-diag(3,3)
sigma[,,4]<-diag(4,3)
mu<-matrix(c(0.2,0.3,0.4,0.2,0.7,0.6,0.1,0.7,1.6,0.2,1.7,0.6),3,4)
dat<-rmix(n=n,pi=pi,mu=mu,sigma=sigma,ncov=2)
tau<-get_clusterprobs(dat=dat$Y,n=150,p=3,g=4,mu=mu,sigma=sigma,pi=pi,ncov=2)

get_entropy                  Shannon entropy

Description

Shannon entropy.

Usage

get_entropy(dat, n, p, g, pi, mu, sigma, ncov = 2)

Arguments

dat       An n × p matrix where each row represents an individual observation
n         Number of observations.
p         Dimension of observation vector.
g         Number of multivariate normal classes.
pi        A g-dimensional vector for the initial values of the mixing proportions.
mu        A p × g matrix for the initial values of the location parameters.
sigma     A p × p covariance matrix if ncov=1, or a list of g covariance matrices with dimension p × p × g if ncov=2.
ncov      Options of structure of sigma matrix; the default value is 2; ncov = 1 for a common covariance matrix; ncov = 2 for the unequal covariance/scale matrices.

Details

The concept of information entropy was introduced by Shannon (1948). The entropy of $y_j$ is
formally defined as

$$e_j(y_j;\theta) = -\sum_{i=1}^{g} \tau_i(y_j;\theta)\,\log \tau_i(y_j;\theta).$$

Value

clusprobs  The Shannon entropy of each entity.

Examples

n<-150
pi<-c(0.25,0.25,0.25,0.25)
sigma<-array(0,dim=c(3,3,4))
sigma[,,1]<-diag(1,3)
sigma[,,2]<-diag(2,3)
sigma[,,3]<-diag(3,3)
sigma[,,4]<-diag(4,3)
mu<-matrix(c(0.2,0.3,0.4,0.2,0.7,0.6,0.1,0.7,1.6,0.2,1.7,0.6),3,4)
dat<-rmix(n=n,pi=pi,mu=mu,sigma=sigma,ncov=2)
en<-get_entropy(dat=dat$Y,n=150,p=3,g=4,mu=mu,sigma=sigma,pi=pi,ncov=2)

initialvalue                 Initial values for ECM

Description

Initial values for calculating the estimates based solely on the classified features.

Usage

initialvalue(dat, zm, g, ncov = 2)

Arguments

dat       An n × p matrix where each row represents an individual observation
zm        An n-dimensional vector containing the class labels including the missing label denoted as NA.
g         Number of multivariate normal classes.
ncov      Options of structure of sigma matrix; the default value is 2; ncov = 1 for a common covariance matrix; ncov = 2 for the unequal covariance/scale matrices.

Value

pi        A g-dimensional initial vector of the mixing proportions.
mu        An initial p × g matrix of the location parameters.
sigma     A p × p covariance matrix if ncov=1, or a list of g covariance matrices with dimension p × p × g if ncov=2.

Examples

n<-150
pi<-c(0.25,0.25,0.25,0.25)
sigma<-array(0,dim=c(3,3,4))
sigma[,,1]<-diag(1,3)
sigma[,,2]<-diag(2,3)
sigma[,,3]<-diag(3,3)
sigma[,,4]<-diag(4,3)
mu<-matrix(c(0.2,0.3,0.4,0.2,0.7,0.6,0.1,0.7,1.6,0.2,1.7,0.6),3,4)
dat<-rmix(n=n,pi=pi,mu=mu,sigma=sigma,ncov=2)
xi<-c(-0.5,1)
m<-rlabel(dat=dat$Y,pi=pi,mu=mu,sigma=sigma,xi=xi,ncov=2)
zm<-dat$clust
zm[m==1]<-NA
inits<-initialvalue(g=4,zm=zm,dat=dat$Y,ncov=2)

list2par                     Transform a list into a vector

Description

Transform a list into a vector.

Usage

list2par(
  p,
  g,
  pi,
  mu,
  sigma,
  ncov = 2,
  xi = NULL,
  type = c("ign", "full", "com")
)

Arguments

p         Dimension of observation vector.
g         Number of multivariate normal classes.
pi        A g-dimensional vector for the initial values of the mixing proportions.
mu        A p × g matrix for the initial values of the location parameters.
sigma     A p × p covariance matrix if ncov=1, or a list of g covariance matrices with dimension p × p × g if ncov=2.
ncov      Options of structure of sigma matrix; the default value is 2; ncov = 1 for a common covariance matrix; ncov = 2 for the unequal covariance/scale matrices.
xi        A 2-dimensional vector containing the initial values of the coefficients in the logistic function of the Shannon entropy.
type      Three types to fit to the model: 'ign' indicates fitting the model on the basis of the likelihood that ignores the missing-label mechanism, 'full' indicates that the model is to be fitted on the basis of the full likelihood, taking into account the missing-label mechanism, and 'com' indicates that the model is to be fitted to a completely classified sample.

Value

par       A vector including all list information

loglk_full                   Full log-likelihood function

Description

Full log-likelihood function with both the ignoring and the missing terms.

Usage

loglk_full(dat, zm, pi, mu, sigma, ncov = 2, xi)

Arguments

dat       An n × p matrix where each row represents an individual observation
zm        An n-dimensional vector containing the class labels including the missing label denoted as NA.
pi        A g-dimensional vector for the initial values of the mixing proportions.
mu        A p × g matrix for the initial values of the location parameters.
sigma     A p × p covariance matrix if ncov=1, or a list of g covariance matrices with dimension p × p × g if ncov=2.
ncov      Options of structure of sigma matrix; the default value is 2; ncov = 1 for a common covariance matrix; ncov = 2 for the unequal covariance/scale matrices.
xi        A 2-dimensional vector containing the initial values of the coefficients in the logistic function of the Shannon entropy.

Details

The full log-likelihood function can be expressed as

$$\log L_{PC}^{(\mathrm{full})}(\Psi) = \log L_{PC}^{(\mathrm{ig})}(\theta) + \log L_{PC}^{(\mathrm{miss})}(\theta,\xi),$$

where $\log L_{PC}^{(\mathrm{ig})}(\theta)$ is the log-likelihood function formed ignoring the missingness in the
labels of the unclassified features, and $\log L_{PC}^{(\mathrm{miss})}(\theta,\xi)$ is the log-likelihood function
formed on the basis of the missing-label indicator.

Value

lk        Log-likelihood value

loglk_ig                     Log-likelihood for partially classified data, ignoring the missing mechanism

Description

Log-likelihood for partially classified data, ignoring the missing mechanism.

Usage

loglk_ig(dat, zm, pi, mu, sigma, ncov = 2)

Arguments

dat       An n × p matrix where each row represents an individual observation
zm        An n-dimensional vector containing the class labels including the missing label denoted as NA.
pi        A g-dimensional vector for the initial values of the mixing proportions.
mu        A p × g matrix for the initial values of the location parameters.
sigma     A p × p covariance matrix if ncov=1, or a list of g covariance matrices with dimension p × p × g if ncov=2.
ncov      Options of structure of sigma matrix; the default value is 2; ncov = 1 for a common covariance matrix; ncov = 2 for the unequal covariance/scale matrices.

Details

The log-likelihood function for partially classified data, ignoring the missing mechanism, can be
expressed as

$$\log L_{PC}^{(\mathrm{ig})}(\theta) = \sum_{j=1}^{n}\left[(1-m_j)\sum_{i=1}^{g} z_{ij}\{\log \pi_i + \log f_i(y_j;\omega_i)\} + m_j \log\left\{\sum_{i=1}^{g}\pi_i f_i(y_j;\omega_i)\right\}\right],$$

where $m_j$ is a missing-label indicator, $z_{ij}$ is a zero-one indicator variable defining the known
group of origin of each classified observation, and $f_i(y_j;\omega_i)$ is a probability density function
with parameters $\omega_i$.

Value

lk        Log-likelihood value.
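This topic has no example in the manual; a minimal sketch in the style of the other examples
(parameter values illustrative):

# Sketch: evaluate the ignoring log-likelihood on simulated data
n <- 150
pi <- c(0.25, 0.25, 0.25, 0.25)
sigma <- array(0, dim = c(3, 3, 4))
sigma[,,1] <- diag(1, 3); sigma[,,2] <- diag(2, 3)
sigma[,,3] <- diag(3, 3); sigma[,,4] <- diag(4, 3)
mu <- matrix(c(0.2,0.3,0.4,0.2,0.7,0.6,0.1,0.7,1.6,0.2,1.7,0.6), 3, 4)
dat <- rmix(n = n, pi = pi, mu = mu, sigma = sigma, ncov = 2)
m <- rlabel(dat = dat$Y, pi = pi, mu = mu, sigma = sigma, xi = c(-0.5, 1), ncov = 2)
zm <- dat$clust
zm[m == 1] <- NA
loglk_ig(dat = dat$Y, zm = zm, pi = pi, mu = mu, sigma = sigma, ncov = 2)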
loglk_miss                   Log-likelihood function formed on the basis of the missing-label indicator

Description

Log-likelihood for partially classified data based on the missing mechanism with the Shannon
entropy.

Usage

loglk_miss(dat, zm, pi, mu, sigma, ncov = 2, xi)

Arguments

dat       An n × p matrix where each row represents an individual observation
zm        An n-dimensional vector containing the class labels including the missing label denoted as NA.
pi        A g-dimensional vector for the initial values of the mixing proportions.
mu        A p × g matrix for the initial values of the location parameters.
sigma     A p × p covariance matrix if ncov=1, or a list of g covariance matrices with dimension p × p × g if ncov=2.
ncov      Options of structure of sigma matrix; the default value is 2; ncov = 1 for a common covariance matrix; ncov = 2 for the unequal covariance/scale matrices.
xi        A 2-dimensional vector containing the initial values of the coefficients in the logistic function of the Shannon entropy.

Details

The log-likelihood function formed on the basis of the missing-label indicator can be expressed as

$$\log L_{PC}^{(\mathrm{miss})}(\theta,\xi) = \sum_{j=1}^{n}\left[(1-m_j)\log\{1 - q(y_j;\theta,\xi)\} + m_j \log q(y_j;\theta,\xi)\right],$$

where $q(y_j;\theta,\xi)$ is a logistic function of the Shannon entropy $e_j(y_j;\theta)$, and $m_j$ is a
missing-label indicator.

Value

lk        Log-likelihood value

logsumexp                    Log summation of exponentials

Description

Log summation of the exponentials of a variable vector.

Usage

logsumexp(x)

Arguments

x         A variable vector.

Value

val       Log summation of the exponentials of the variable vector.

makelabelmatrix              Label matrix

Description

Convert a class indicator into a label matrix.

Usage

makelabelmatrix(clust)

Arguments

clust     An n-dimensional vector of class partition.

Value

Z         A matrix of class indicators.

Examples

cluster<-c(1,1,2,2,3,3)
label_maxtrix<-makelabelmatrix(cluster)

neg_objective_function       Negative objective function for EMMIXSSL

Description

Negative objective function for EMMIXSSL.

Usage

neg_objective_function(
  dat,
  zm,
  g,
  par,
  ncov = 2,
  type = c("ign", "full", "com")
)

Arguments

dat       An n × p matrix where each row represents an individual observation
zm        An n-dimensional vector of group partition including the missing label, denoted as NA.
g         Number of multivariate Gaussian groups.
par       An informative vector including mu, pi, sigma and xi.
ncov      Options of structure of sigma matrix; the default value is 2; ncov = 1 for a common covariance matrix; ncov = 2 for the unequal covariance/scale matrices.
type      Three types to fit to the model: 'ign' indicates fitting the model on the basis of the likelihood that ignores the missing-label mechanism, 'full' indicates that the model is to be fitted on the basis of the full likelihood, taking into account the missing-label mechanism, and 'com' indicates that the model is to be fitted to a completely classified sample.

Value

val       Value of the negative objective function.

normalise_logprob            Normalize log-probability

Description

Normalize log-probability.

Usage

normalise_logprob(x)

Arguments

x         A variable vector.

Value

val       A normalized log-probability of the variable vector.

par2list                     Transform a vector into a list

Description

Transform a vector into a list.

Usage

par2list(par, g, p, ncov = 2, type = c("ign", "full"))

Arguments

par       A vector with list information.
g         Number of multivariate normal classes.
p         Dimension of observation vector.
ncov      Options of structure of sigma matrix; the default value is 2; ncov = 1 for a common covariance matrix, in which case sigma is a p × p matrix.
          ncov = 2 for the unequal covariance/scale matrices, in which case sigma represents a list of g matrices with dimension p × p × g.
type      Three types to fit to the model: 'ign' indicates fitting the model on the basis of the likelihood that ignores the missing-label mechanism, 'full' indicates that the model is to be fitted on the basis of the full likelihood, taking into account the missing-label mechanism, and 'com' indicates that the model is to be fitted to a completely classified sample.

Value

parlist   Returns a list including mu, pi, sigma and xi.

pro2vec                      Transform a probability vector into a vector

Description

Transform a probability vector into an informative vector.

Usage

pro2vec(pro)

Arguments

pro       A probability vector

Value

y         An informative vector

rlabel                       Generation of a missing-data indicator

Description

Generate the missing-label indicator.

Usage

rlabel(dat, pi, mu, sigma, ncov = 2, xi)

Arguments

dat       An n × p matrix where each row represents an individual observation.
pi        A g-dimensional vector for the initial values of the mixing proportions.
mu        A p × g matrix for the initial values of the location parameters.
sigma     A p × p covariance matrix if ncov=1, or a list of g covariance matrices with dimension p × p × g if ncov=2.
ncov      Options of structure of sigma matrix; the default value is 2; ncov = 1 for a common covariance matrix; ncov = 2 for the unequal covariance/scale matrices.
xi        A 2-dimensional coefficient vector for a logistic function of the Shannon entropy.

Value

m         An n-dimensional vector of missing-label indicators. An element of m equals 1 if the corresponding label indicator is missing, and 0 if the label indicator is available.

Examples

n<-150
pi<-c(0.25,0.25,0.25,0.25)
sigma<-array(0,dim=c(3,3,4))
sigma[,,1]<-diag(1,3)
sigma[,,2]<-diag(2,3)
sigma[,,3]<-diag(3,3)
sigma[,,4]<-diag(4,3)
mu<-matrix(c(0.2,0.3,0.4,0.2,0.7,0.6,0.1,0.7,1.6,0.2,1.7,0.6),3,4)
dat<-rmix(n=n,pi=pi,mu=mu,sigma=sigma,ncov=2)
xi<-c(-0.5,1)
m<-rlabel(dat=dat$Y,pi=pi,mu=mu,sigma=sigma,xi=xi,ncov=2)

rmix                         Normal mixture model generator

Description

Generate random observations from the normal mixture distributions.

Usage

rmix(n, pi, mu, sigma, ncov = 2)

Arguments

n         Number of observations.
pi        A g-dimensional vector for the initial values of the mixing proportions.
mu        A p × g matrix for the initial values of the location parameters.
sigma     A p × p covariance matrix if ncov=1, or a list of g covariance matrices with dimension p × p × g if ncov=2.
ncov      Options of structure of sigma matrix; the default value is 2; ncov = 1 for a common covariance matrix; ncov = 2 for the unequal covariance/scale matrices.

Value

Y         An n × p numeric matrix with samples drawn in rows.
Z         An n × g numeric matrix; each row represents the zero-one indicator variables defining the known class of origin of each observation.
clust     An n-dimensional vector of class partition.

Examples

n<-150
pi<-c(0.25,0.25,0.25,0.25)
sigma<-array(0,dim=c(3,3,4))
sigma[,,1]<-diag(1,3)
sigma[,,2]<-diag(2,3)
sigma[,,3]<-diag(3,3)
sigma[,,4]<-diag(4,3)
mu<-matrix(c(0.2,0.3,0.4,0.2,0.7,0.6,0.1,0.7,1.6,0.2,1.7,0.6),3,4)
dat<-rmix(n=n,pi=pi,mu=mu,sigma=sigma,ncov=2)

vec2cov                      Transform a vector into a matrix

Description

Transform a vector into a variance matrix, i.e., Sigma = R^T * R.

Usage

vec2cov(par)

Arguments

par       A vector representing a variance matrix

Details

The vector stores the upper triangular factor of the Choleski decomposition of a real symmetric
positive-definite variance matrix; vec2cov reassembles the factor and returns the corresponding
variance matrix.
Value

sigma     A variance matrix

vec2pro                      Transform an informative vector into a probability vector

Description

Transform an informative vector into a probability vector.

Usage

vec2pro(vec)

Arguments

vec       An informative vector

Value

pro       A probability vector
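A minimal sketch of the paired transformation helpers, round-tripping a variance matrix and a
probability vector (values illustrative):

# Sketch: round-trip the transformation helpers
sigma <- diag(2, 3)        # positive-definite variance matrix
par <- cov2vec(sigma)      # Choleski factor flattened to a vector
vec2cov(par)               # recovers sigma

pro <- c(0.2, 0.3, 0.5)    # probability vector
vec <- pro2vec(pro)        # unconstrained informative vector
vec2pro(vec)               # recovers pro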
Maestral 1.5.2 documentation
---

Welcome to Maestral's developer documentation[](#welcome-to-maestral-s-developer-documentation)
===

This documentation provides an API reference for the maestral daemon. It is built from the current dev branch and is intended for developers. For a user manual and an overview of Maestral's functionality, please refer to [maestral.app](https://maestral.app).

Sync Logic[](#sync-logic)
---

The `maestral.sync.SyncEngine` class provides access to the current sync state through its properties and provides the methods which are necessary to complete an upload or a download sync cycle. This includes methods to wait for and return local and remote changes, to sort those changes and discard any excluded items, and to apply local changes to the Dropbox server and vice versa. The `maestral.sync.SyncMonitor` class coordinates the sync process with its threads for startup, download-sync, upload-sync and periodic maintenance.

### Processing of sync events[](#processing-of-sync-events)

Remote events come in three types: [`dropbox.files.DeletedMetadata`](https://dropbox-sdk-python.readthedocs.io/en/latest/api/files.html#dropbox.files.DeletedMetadata), [`dropbox.files.FolderMetadata`](https://dropbox-sdk-python.readthedocs.io/en/latest/api/files.html#dropbox.files.FolderMetadata) and [`dropbox.files.FileMetadata`](https://dropbox-sdk-python.readthedocs.io/en/latest/api/files.html#dropbox.files.FileMetadata). The Dropbox API does not differentiate between created, moved or modified events. Maestral processes remote events as follows:

1. `SyncEngine.wait_for_remote_changes()` blocks until remote changes are available.
2. `SyncEngine.list_remote_changes()` lists all remote changes since the last sync. Those events are processed as follows:
	* Events for entries which are excluded by selective sync and hard-coded file names which are always excluded (e.g., '.DS_Store') are filtered out at this stage.
	* Multiple events per file path are combined into one. This is rarely necessary: Dropbox typically provides only a single event per path, but this is not guaranteed and may change. One exception is sharing a folder: Dropbox does this by removing the folder from the user's root and re-mounting it as a shared folder. This produces at least one DeletedMetadata and one FolderMetadata event. If querying for changes *during* this process, multiple [`dropbox.files.DeletedMetadata`](https://dropbox-sdk-python.readthedocs.io/en/latest/api/files.html#dropbox.files.DeletedMetadata) events may be returned.
	* If a file / folder event implies a type change, e.g., replacing a folder with a file, we explicitly generate the necessary [`dropbox.files.DeletedMetadata`](https://dropbox-sdk-python.readthedocs.io/en/latest/api/files.html#dropbox.files.DeletedMetadata) here to simplify conflict resolution.
3. `SyncEngine.apply_remote_changes()`: Sorts all events hierarchically, with top-level events coming first. Deleted and folder events are processed in order, file events in parallel with up to 6 worker threads.
4. `SyncEngine.notify_user()`: Shows a desktop notification for the remote changes.

Local file events come in eight types: for both files and folders we collect created, moved, modified and deleted events. They are processed as follows:

1. `SyncEngine.wait_for_local_changes()`: Blocks until local changes are registered by `FSEventHandler`.
2. `SyncEngine.list_local_changes()`: Lists all local file events.
Those are processed as follows:
	* Events ignored by a "mignore" pattern as well as hard-coded file names and changes in our cache path are filtered out at this stage.
	* Events are further cleaned up to return the minimum number of events necessary to reproduce the actual changes: multiple events per path are combined into a single event which reproduces the file change. The only exception is when the entry type changes from file to folder or vice versa: in this case, both deleted and created events are kept. Further, when a whole folder is moved or deleted, we discard the moved or deleted events for its children.
3. `SyncEngine.apply_local_changes()`: Sorts local changes hierarchically and applies events in the order of deleted, folders and files. Deleted, created and modified events will be applied to the remote Dropbox in parallel with up to 6 threads. Moves will be carried out synchronously.

Before processing, we convert all Dropbox metadata and local file events to a unified format of `maestral.database.SyncEvent` instances which are also used to store the sync history data in our SQLite database.

### Detection and resolution of sync conflicts[](#detection-and-resolution-of-sync-conflicts)

Sync conflicts during a download are detected by comparing the file's "rev" with the locally saved revision identifier in Maestral's index. We assign folders a rev of `'folder'` and deleted / non-existent items a rev of `None`.

1. If both revs are equal, the local item is either the same as on Dropbox, or newer and the local changes haven't been uploaded and committed to our index yet. No download sync occurs (including deletion of the local file).
2. If revs are different, we compare content hashes. If those hashes are equal, no download occurs.
3. If content hashes are different, we check if the local item has been modified since the last download sync. In case of a folder, we take the most recent change of any of its children. If the local entry has not been modified since the last sync, it will be replaced. Otherwise, we create a conflicting copy.

Conflict resolution for uploads is handled as follows:

1. For created and moved events, we check if the new path has been excluded by the user with selective sync but still exists on Dropbox. If yes, it will be renamed by appending "(selective sync conflict)".
2. On case-sensitive file systems, we check if the new path differs only in casing from an existing path. If yes, it will be renamed by appending "(case conflict)".
3. If a file has been replaced with a folder or vice versa, we check if any un-synced changes will be lost by replacing the remote item and create a conflicting copy if necessary.
4. For created or modified files, we check if the local content hash equals the remote content hash. If yes, we don't upload but update our rev number. If no, we upload the changes and specify the rev which we want to replace or delete. If the remote item is newer (different rev), Dropbox will handle conflict resolution for us.
5. We finally confirm the successful upload and check if Dropbox has renamed the item to a conflicting copy. In the latter case, we apply those changes locally.
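The download-side decision rules above can be condensed into a short sketch. This is illustrative pseudologic with hypothetical helper names, not Maestral's actual internals:

```python
def decide_download_action(local_rev, remote_rev, local_hash, remote_hash,
                           modified_since_last_sync):
    """Illustrative summary of the download conflict rules above."""
    if local_rev == remote_rev:
        return "skip"              # local item is identical, or newer with un-synced changes
    if local_hash == remote_hash:
        return "skip"              # same content, revs differ only in metadata
    if not modified_since_last_sync:
        return "replace"           # safe to overwrite the local item
    return "conflicting-copy"      # keep both versions
```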
Logging[](#logging)
---

Maestral makes extensive use of Python's logging module to collect debug, status and error information from different parts of the program and distribute it to the appropriate channels.

Broadly speaking, the builtin log levels are used as follows:

| Level | Typical messages |
| --- | --- |
| DEBUG | Individual sync progress, conflict resolution, daemon startup and environment info |
| INFO | Cumulative sync progress, sync errors that require user resolution (e.g., insufficient space) |
| WARNING | Failure to initialise non-critical functionality (e.g., desktop notifications, system keyring) |
| ERROR | Errors that prevent all syncing (revoked access token, deleted folder, etc) |

Maestral defines a number of log handlers to process those messages, some of them for internal usage, others for external communication. For instance, cached logging handlers are used to populate the public APIs `Maestral.status` and `Maestral.fatal_errors` and therefore use fixed log levels. Logging to stderr, the systemd journal (if applicable) and to our log files uses the user defined log level from `Maestral.log_level` which defaults to INFO.

| Target | Log level | Enabled |
| --- | --- | --- |
| Log file | User defined (default: INFO) | Always |
| Stderr | User defined (default: INFO) | If verbose flag is passed |
| Systemd journal | User defined (default: INFO) | If started as systemd service |
| Systemd notify status | INFO | If started as systemd notify service |
| `Maestral.status` API | INFO | Always |
| Desktop notifications | WARNING | Always |
| `Maestral.fatal_errors` API | ERROR | Always |

All custom handlers are defined in the `maestral.logging` module. Maestral also subclasses the default [`logging.LogRecord`](https://docs.python.org/3/library/logging.html#logging.LogRecord) to guarantee that any surrogate escape characters in file paths are replaced before emitting a log and flushing to any streams. Otherwise, incorrectly encoded file paths could prevent logging from working properly in exactly those cases where it would be particularly useful.

Config files[](#config-files)
---

The config files are located at `$XDG_CONFIG_HOME/maestral` on Linux (typically `~/.config/maestral`) and `~/Library/Application Support/maestral` on macOS. Each configuration will get its own INI file with the settings documented below.

Config values for `path` and `excluded_items` should not be edited manually but rather through the corresponding CLI commands or GUI options. This is because changes of these settings require Maestral to perform accompanying actions, e.g., download items which have been removed from the excluded list or move the local Dropbox directory. Those will not be performed if the user edits the options manually. This also holds for the `account_id` which will be written to the config file after successfully completing the OAuth flow with Dropbox servers.

Any changes will only take effect once Maestral is restarted. Any changes made to the config file may be overwritten without warning if made while the sync daemon is running.

```
[main]

# Config file version (not the Maestral version!)
version = 15.0.0

[auth]

# Unique Dropbox account ID. The account's email
# address may change and is therefore not stored here.
account_id = <KEY>

# The keychain to use to store user credentials. If "automatic",
# will be set automatically from available backends when
# completing the OAuth flow. Must be a fully qualified class name.
keyring = keyring.backends.macOS.Keyring

[app]

# Level for desktop notifications:
# 15 = FILECHANGE
# 30 = SYNCISSUE
# 40 = ERROR
# 100 = NONE
notification_level = 15

# Level for log messages:
# 10 = DEBUG
# 20 = INFO
# 30 = WARNING
# 40 = ERROR
log_level = 20

# Interval in sec to check for updates
update_notification_interval = 604800

[sync]

# The current Dropbox directory
path = /Users/UserName/Dropbox (Maestral)

# List of excluded files and folders
excluded_items = ['/test_folder', '/sub/folder']

# Interval in sec to perform a full reindexing
reindex_interval = 604800

# Maximum CPU usage per core
max_cpu_percent = 20.0

# Sync history to keep in seconds
keep_history = 604800

# Enable upload syncing
upload = True

# Enable download syncing
download = True
```

State files[](#state-files)
---

Maestral saves its persistent state in two files: `{config_name}.db` for the file index and `{config_name}.state` for anything else. Both files are located at `$XDG_DATA_DIR/maestral` on Linux (typically `~/.local/share/maestral`) and `~/Library/Application Support/maestral` on macOS. Each configuration will get its own state file.

### Database[](#database)

The index is stored in a SQLite database which contains the sync index, the sync event history of the last week and a cache of locally calculated content hashes. We use our own light-weight ORM layer, defined in `maestral.utils.orm`, to manage the mapping between Python objects and database rows. The actual table declarations are given by the definitions of `maestral.database.IndexEntry`, `maestral.database.SyncEvent` and `maestral.database.HashCacheEntry`.

### State file[](#state-file)

The state file has the following sections:

```
[main]

# State file version (not the Maestral version!)
version = 12.0.0

[account]

email = <EMAIL>
display_name = <NAME>
abbreviated_name = FB
type = business
usage = 39.2% of 1312.8TB used
usage_type = team

# Information about the user's root namespace. This
# may be a personal namespace or a Team Space for certain
# Dropbox Business accounts.
path_root_type = team
path_root_nsid = 1287234
home_path = /User Name

[auth]

# The type of OAuth access token used:
# legacy: long-lived token
# offline: short-lived token with long-lived refresh token
token_access_type = offline

[app]

# Version for which update / migration scripts have
# run. This is bumped to the currently installed
# version after an update.
updated_scripts_completed = 1.2.0

# Time stamp of last update notification
update_notification_last = 0.0

# Latest available release
latest_release = 1.2.0

[sync]

# Cursor reflecting last-synced remote state
cursor = ...

# Time stamp reflecting last-synced local state
lastsync = 1589979736.623609

# Time stamp of last full reindexing
last_reindex = 1589577566.8533309

# Lower case Dropbox paths with upload sync errors
upload_errors = []

# Lower case Dropbox paths with download sync errors
download_errors = []

# Lower case Dropbox paths of interrupted uploads
pending_uploads = []

# Lower case Dropbox paths of interrupted downloads
pending_downloads = []
```

Notably, account info which can be changed by the user, such as the email address, is saved in the state file, while only the fixed Dropbox ID is saved in the config file.

Contributing[](#contributing)
---

Thank you for your interest in contributing!
### Code[](#code) To start, install maestral with the `dev` extra to get all dependencies required for development: ``` pip3 install maestral[dev] ``` This will install packages to check and enforce the code style, use pre-commit hooks and bump the current version. Code is formatted with [black](https://github.com/psf/black). Coding style is checked with [flake8](http://flake8.pycqa.org). Type hints, [PEP484](https://www.python.org/dev/peps/pep-0484/), are checked with [mypy](http://mypy-lang.org/). You can check the format, coding style, and type hints at the same time by running the provided pre-commit hook from the git directory: ``` pre-commit run -a ``` You can also install the provided pre-commit hook to run checks on every commit. This will however significantly slow down commits. An introduction to pre-commit commit hooks is given at <https://pre-commit.com>. ### Documentation[](#documentation) The documentation is built using [sphinx](https://www.sphinx-doc.org/en/master/) and a few of its extensions. It is built from the develop and master branches and hosted on [Read The Docs](https://maestral.readthedocs.io/en/latest/). If you want to build the documentation locally, install maestral with the `docs` extra to get all required dependencies: ``` pip3 install maestral[docs] ``` The API documentation is mostly based on doc strings. Inline comments should be used whenever code may be difficult to understand for others. ### Tests[](#tests) The test suite uses a mixture of [unittest](https://docs.python.org/3.8/library/unittest.html) and [pytest](https://pytest-cov.readthedocs.io/en/latest/), depending on what is most convenient for the actual test and the preference of the author. Pytest should be used as the test runner. Test are grouped into those which require a linked Dropbox account (“linked”) and those who can run by themselves (“offline”). The former tend to be integration test while the latter are mostly unit tests. The current focus currently lies on integration tests, especially for the sync engine, as they are easier to maintain when the implementation and internal APIs change. Exceptions are made for performance tests, for instance for indexing and cleaning up sync events, and for particularly complex functions that are prone to regressions. The current test suite uses a Dropbox access token provided by the environment variable `DROPBOX_ACCESS_TOKEN` or a refresh token provided by `DROPBOX_REFRESH_TOKEN` to connect to a real account. The GitHub action which is running the tests will set the `DROPBOX_ACCESS_TOKEN` environment variable for you with a temporary access token that expires after 4 hours. Tests are run on `ubuntu-latest` and `macos-latest` in parallel on different accounts. When using the GitHub test runner, you should acquire a “lock” on the account before running tests to prevent them from interfering which each other by creating a folder `test.lock` in the root of the Dropbox folder. This folder should have a `client_modified` time set in the future, to the expiry time of the lock. Fixtures to create and clean up a test config and to acquire a lock are provided in the `tests/linked/conftest.py`. If you run the tests locally, you will need to provide a refresh or access token for your own Dropbox account. If your account is already linked with Maestral, it will have saved a long-lived “refresh token” in your system keyring. 
You can access it manually or through the Python API:

```
from maestral.main import Maestral

m = Maestral()
print(m.client.auth.refresh_token)
```

You can then store the retrieved refresh token in the environment variable `DROPBOX_REFRESH_TOKEN` to be automatically picked up by the tests.

maestral.autostart[](#maestral-autostart)
---

maestral.client[](#maestral-client)
---

maestral.config[](#maestral-config)
---

### Submodules[](#submodules)

#### maestral.config.base[](#maestral-config-base)

#### maestral.config.main[](#maestral-config-main)

#### maestral.config.user[](#maestral-config-user)

### Module contents[](#module-contents)

maestral.constants[](#maestral-constants)
---

maestral.daemon[](#maestral-daemon)
---

maestral.database[](#maestral-database)
---

maestral.errors[](#maestral-errors)
---

maestral.fsevents[](#maestral-fsevents)
---

### Submodules[](#submodules)

#### maestral.fsevents.polling[](#maestral-fsevents-polling)

### Module contents[](#module-contents)

maestral.logging[](#maestral-logging)
---

maestral.main[](#maestral-main)
---

maestral.manager[](#maestral-manager)
---

maestral.notify[](#maestral-notify)
---

maestral.oauth[](#maestral-oauth)
---

maestral.sync[](#maestral-sync)
---

maestral.utils[](#maestral-utils)
---

### Submodules[](#submodules)

#### maestral.utils.appdirs[](#maestral-utils-appdirs)

#### maestral.utils.caches[](#maestral-utils-caches)

#### maestral.utils.cli[](#maestral-utils-cli)

#### maestral.utils.content_hasher[](#maestral-utils-content-hasher)

#### maestral.utils.integration[](#maestral-utils-integration)

#### maestral.utils.orm[](#maestral-utils-orm)

#### maestral.utils.path[](#maestral-utils-path)

#### maestral.utils.serializer[](#maestral-utils-serializer)

### Module contents[](#module-contents)

Getting started[](#getting-started)
---

To use the Maestral API in a Python interpreter, import the main module first and initialize a Maestral instance with a configuration name. For this example, we use a new configuration “private” which is not yet linked to a Dropbox account:

```
>>> from maestral.main import Maestral
>>> m = Maestral(config_name="private")
```

Config files will be created on-demand for the new configuration, as described in [Config files](index.html#document-background/config_files) and [State files](index.html#document-background/state_files).

We now link the instance to an existing Dropbox account. This is done by generating a Dropbox URL for the user to visit and authorize Maestral. Using the `link()` method, the resulting auth code is exchanged for an access token to make Dropbox API calls. See Dropbox’s [oauth-guide](https://www.dropbox.com/lp/developers/reference/oauth-guide) for details on the OAuth2 PKCE flow which we use. When the auth flow is successfully completed, the credentials will be saved in the system keyring (e.g., macOS Keychain or Gnome Keyring).

```
>>> url = m.get_auth_url()  # get token from Dropbox website
>>> print(f"Please go to {url} to retrieve a Dropbox authorization token.")
>>> token = input("Enter auth token: ")
>>> res = m.link(token)
```

The call to `link()` will return 0 on success, 1 for an invalid code and 2 for connection errors. We verify that linking succeeded and proceed to create a local Dropbox folder and start syncing:

```
>>> if res == 0:
...     m.create_dropbox_directory("~/Dropbox (Private)")
...     m.start_sync()
```
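Once syncing is running, you can monitor progress and errors through the `Maestral.status` and `Maestral.fatal_errors` APIs that the cached log handlers populate (see the logging section above). A minimal sketch; the exact status strings are an implementation detail:

```
>>> print(m.status)  # cumulative sync progress, fed by INFO-level records
>>> if m.fatal_errors:  # populated from ERROR-level records
...     print("Sync stopped:", m.fatal_errors)
```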
github.com/google/gnxi
go
Go
README [¶](#section-readme)
---

[![License](https://img.shields.io/github/license/google/gnxi?style=for-the-badge)](https://opensource.org/licenses/Apache-2.0) [![GoDoc](https://img.shields.io/badge/godoc-reference-blue?style=for-the-badge)](https://godoc.org/github.com/google/gnxi) [![Go Report Card](https://goreportcard.com/badge/github.com/google/gnxi?style=for-the-badge)](https://goreportcard.com/report/github.com/google/gnxi) [![Build Status](https://img.shields.io/travis/google/gnxi?style=for-the-badge)](https://travis-ci.org/google/gnxi) [![Code coverage master](https://img.shields.io/codecov/c/github/google/gnxi/master?style=for-the-badge)](https://codecov.io/github/google/gnxi?branch=master)

### gNxI Tools

* gNMI - gRPC Network Management Interface
* gNOI - gRPC Network Operations Interface

A collection of tools for Network Management that use the gNMI and gNOI protocols.

##### Summary

Notes about these tools:

* They are intended for testing and as reference implementations of the protocols.
* They log to stderr by default, disable with `-logtostderr=false`.
* They use glog's log levels, use `-v 1` to log proto message exchanges.

###### gNMI Clients:

* [gNMI Capabilities](https://github.com/google/gnxi/blob/2cca83d741c3/gnmi_capabilities)
* [gNMI Get](https://github.com/google/gnxi/blob/2cca83d741c3/gnmi_get)
* [gNMI Set](https://github.com/google/gnxi/blob/2cca83d741c3/gnmi_set)
* [gNMI Subscribe](https://github.com/google/gnxi/blob/2cca83d741c3/gnmi_subscribe)

###### gNMI Targets:

* [gNMI Target](https://github.com/google/gnxi/blob/2cca83d741c3/gnmi_target)

###### gNOI Clients

* [gNOI Cert](https://github.com/google/gnxi/blob/2cca83d741c3/gnoi_cert)
* [gNOI OS](https://github.com/google/gnxi/blob/2cca83d741c3/gnoi_os)
* [gNOI Reset](https://github.com/google/gnxi/blob/2cca83d741c3/gnoi_reset)

###### gNOI Targets

* [gNOI Target](https://github.com/google/gnxi/blob/2cca83d741c3/gnoi_target)

###### Helpers

* [gNOI mockOS](https://github.com/google/gnxi/blob/2cca83d741c3/gnoi_mockos)
* [certificate generator](https://github.com/google/gnxi/blob/2cca83d741c3/certs)

##### Documentation

* See [gNMI Protocol documentation](https://github.com/openconfig/reference/tree/master/rpc/gnmi).
* See [gNOI Protocol documentation](https://github.com/openconfig/gnoi).
* See [Openconfig documentation](http://www.openconfig.net/).

#### Getting Started

These instructions will get you a copy of the project up and running on your local machine.

##### Prerequisites

Install **Go** on your system: <https://golang.org/doc/install>. Requires Go 1.14+.

##### Download sources

```
go get github.com/google/gnxi
ls $GOPATH/src/github.com/google/gnxi
```

##### Building and installing binaries

```
cd $GOPATH
mkdir bin
# This reads the go modules dependencies for installation
cd src/github.com/google/gnxi
go install ./...
ls -la $GOPATH/bin
```

##### Generating certificates

```
cd $GOPATH/bin
./../src/github.com/google/gnxi/certs/generate.sh
```

##### Running a client

```
cd $GOPATH/bin
./gnoi_reset \
    -target_addr localhost:9339 \
    -target_name target.com \
    -rollback_os \
    -zero_fill \
    -key client.key \
    -cert client.crt \
    -ca ca.crt
```

Optionally define $GOBIN as $GOPATH/bin and add it to your path to run the binaries from any folder.

```
export GOBIN=$GOPATH/bin
export PATH=$PATH:$GOBIN
```

#### Disclaimer

* This is not an official Google product.
* See [how to contribute](https://github.com/google/gnxi/blob/2cca83d741c3/CONTRIBUTING.md).
Documentation [¶](#section-documentation)
---

There is no documentation for this package.
gopkg.in/vmihailenco/msgpack.v2
go
Go
README [¶](#section-readme) --- ### MessagePack encoding for Golang [![Build Status](https://travis-ci.org/vmihailenco/msgpack.svg?branch=v2)](https://travis-ci.org/vmihailenco/msgpack) [![GoDoc](https://godoc.org/github.com/vmihailenco/msgpack?status.svg)](https://godoc.org/github.com/vmihailenco/msgpack) Supports: * Primitives, arrays, maps, structs, time.Time and interface{}. * Appengine *datastore.Key and datastore.Cursor. * [CustomEncoder](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#example-CustomEncoder)/CustomDecoder interfaces for custom encoding. * [Extensions](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#example-RegisterExt) to encode type information. * Renaming fields via `msgpack:"my_field_name"`. * Inlining struct fields via `msgpack:",inline"`. * Omitting empty fields via `msgpack:",omitempty"`. * [Map keys sorting](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#Encoder.SortMapKeys). * Encoding/decoding all [structs as arrays](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#Encoder.StructAsArray) or [individual structs](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#example-Marshal--AsArray). * Simple but very fast and efficient [queries](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#example-Decoder-Query). API docs: <https://godoc.org/gopkg.in/vmihailenco/msgpack.v2>. Examples: <https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#pkg-examples>. #### Installation Install: ``` go get gopkg.in/vmihailenco/msgpack.v2 ``` #### Quickstart ``` func ExampleMarshal() { type Item struct { Foo string } b, err := msgpack.Marshal(&Item{Foo: "bar"}) if err != nil { panic(err) } var item Item err = msgpack.Unmarshal(b, &item) if err != nil { panic(err) } fmt.Println(item.Foo) // Output: bar } ``` #### Benchmark ``` BenchmarkStructVmihailencoMsgpack-4 200000 12814 ns/op 2128 B/op 26 allocs/op BenchmarkStructUgorjiGoMsgpack-4 100000 17678 ns/op 3616 B/op 70 allocs/op BenchmarkStructUgorjiGoCodec-4 100000 19053 ns/op 7346 B/op 23 allocs/op BenchmarkStructJSON-4 20000 69438 ns/op 7864 B/op 26 allocs/op BenchmarkStructGOB-4 10000 104331 ns/op 14664 B/op 278 allocs/op ``` #### Howto Please go through [examples](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#pkg-examples) to get an idea how to use this package. #### See also * [Golang PostgreSQL ORM](https://github.com/go-pg/pg) * [Golang message task queue](https://github.com/go-msgqueue/msgqueue) Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) Example (DecodeMapStringString) [¶](#example-package-DecodeMapStringString) ``` decodedMap := make(map[string]string) buf := &bytes.Buffer{} m := map[string]string{"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"} keys := []string{"foo1", "foo3", "foo2"} encodedMap, err := encodeMap(m, keys...) if err != nil { panic(err) } _, err = buf.Write(encodedMap) if err != nil { panic(err) } decoder := msgpack.NewDecoder(buf) n, err := decoder.DecodeMapLen() if err != nil { panic(err) } for i := 0; i < n; i++ { key, err := decoder.DecodeString() if err != nil { panic(err) } value, err := decoder.DecodeString() if err != nil { panic(err) } decodedMap[key] = value } for _, key := range keys { fmt.Printf("%#v: %#v, ", key, decodedMap[key]) } ``` ``` Output: "foo1": "bar1", "foo3": "bar3", "foo2": "bar2", ``` Example (EncodeMapStringString) [¶](#example-package-EncodeMapStringString) ``` buf := &bytes.Buffer{} m := map[string]string{"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"} keys := []string{"foo1", "foo3"} encodedMap, err := encodeMap(m, keys...) 
if err != nil { panic(err) } _, err = buf.Write(encodedMap) if err != nil { panic(err) } decoder := msgpack.NewDecoder(buf) value, err := decoder.DecodeMap() if err != nil { panic(err) } decodedMapValue := value.(map[interface{}]interface{}) for _, key := range keys { fmt.Printf("%#v: %#v, ", key, decodedMapValue[key]) } ``` ``` Output: "foo1": "bar1", "foo3": "bar3", ``` ### Index [¶](#pkg-index) * [func Marshal(v ...interface{}) ([]byte, error)](#Marshal) * [func Register(typ reflect.Type, enc encoderFunc, dec decoderFunc)](#Register) * [func RegisterExt(id int8, value interface{})](#RegisterExt) * [func Unmarshal(data []byte, v ...interface{}) error](#Unmarshal) * [type CustomDecoder](#CustomDecoder) * [type CustomEncoder](#CustomEncoder) * [type Decoder](#Decoder) * + [func NewDecoder(r io.Reader) *Decoder](#NewDecoder) * + [func (d *Decoder) Buffered() io.Reader](#Decoder.Buffered) + [func (d *Decoder) Decode(v ...interface{}) error](#Decoder.Decode) + [func (d *Decoder) DecodeArrayLen() (int, error)](#Decoder.DecodeArrayLen) + [func (d *Decoder) DecodeBool() (bool, error)](#Decoder.DecodeBool) + [func (d *Decoder) DecodeBytes() ([]byte, error)](#Decoder.DecodeBytes) + [func (d *Decoder) DecodeBytesLen() (int, error)](#Decoder.DecodeBytesLen) + [func (d *Decoder) DecodeFloat32() (float32, error)](#Decoder.DecodeFloat32) + [func (d *Decoder) DecodeFloat64() (float64, error)](#Decoder.DecodeFloat64) + [func (d *Decoder) DecodeInt() (int, error)](#Decoder.DecodeInt) + [func (d *Decoder) DecodeInt16() (int16, error)](#Decoder.DecodeInt16) + [func (d *Decoder) DecodeInt32() (int32, error)](#Decoder.DecodeInt32) + [func (d *Decoder) DecodeInt64() (int64, error)](#Decoder.DecodeInt64) + [func (d *Decoder) DecodeInt8() (int8, error)](#Decoder.DecodeInt8) + [func (d *Decoder) DecodeInterface() (interface{}, error)](#Decoder.DecodeInterface) + [func (d *Decoder) DecodeMap() (interface{}, error)](#Decoder.DecodeMap) + [func (d *Decoder) DecodeMapLen() (int, error)](#Decoder.DecodeMapLen) + [func (d *Decoder) DecodeNil() error](#Decoder.DecodeNil) + [func (d *Decoder) DecodeSlice() ([]interface{}, error)](#Decoder.DecodeSlice) + [func (d *Decoder) DecodeSliceLen() (int, error)](#Decoder.DecodeSliceLen) + [func (d *Decoder) DecodeString() (string, error)](#Decoder.DecodeString) + [func (d *Decoder) DecodeTime() (time.Time, error)](#Decoder.DecodeTime) + [func (d *Decoder) DecodeUint() (uint, error)](#Decoder.DecodeUint) + [func (d *Decoder) DecodeUint16() (uint16, error)](#Decoder.DecodeUint16) + [func (d *Decoder) DecodeUint32() (uint32, error)](#Decoder.DecodeUint32) + [func (d *Decoder) DecodeUint64() (uint64, error)](#Decoder.DecodeUint64) + [func (d *Decoder) DecodeUint8() (uint8, error)](#Decoder.DecodeUint8) + [func (d *Decoder) DecodeValue(v reflect.Value) error](#Decoder.DecodeValue) + [func (d *Decoder) PeekCode() (code byte, err error)](#Decoder.PeekCode) + [func (d *Decoder) Query(query string) ([]interface{}, error)](#Decoder.Query) + [func (d *Decoder) Reset(r io.Reader) error](#Decoder.Reset) + [func (d *Decoder) Skip() error](#Decoder.Skip) * [type Encoder](#Encoder) * + [func NewEncoder(w io.Writer) *Encoder](#NewEncoder) * + [func (e *Encoder) Encode(v ...interface{}) error](#Encoder.Encode) + [func (e *Encoder) EncodeArrayLen(l int) error](#Encoder.EncodeArrayLen) + [func (e *Encoder) EncodeBool(value bool) error](#Encoder.EncodeBool) + [func (e *Encoder) EncodeBytes(v []byte) error](#Encoder.EncodeBytes) + [func (e *Encoder) EncodeBytesLen(l int) error](#Encoder.EncodeBytesLen) 
+ [func (e *Encoder) EncodeFloat32(n float32) error](#Encoder.EncodeFloat32) + [func (e *Encoder) EncodeFloat64(n float64) error](#Encoder.EncodeFloat64) + [func (e *Encoder) EncodeInt(v int) error](#Encoder.EncodeInt) + [func (e *Encoder) EncodeInt16(v int16) error](#Encoder.EncodeInt16) + [func (e *Encoder) EncodeInt32(v int32) error](#Encoder.EncodeInt32) + [func (e *Encoder) EncodeInt64(v int64) error](#Encoder.EncodeInt64) + [func (e *Encoder) EncodeInt8(v int8) error](#Encoder.EncodeInt8) + [func (e *Encoder) EncodeMapLen(l int) error](#Encoder.EncodeMapLen) + [func (e *Encoder) EncodeNil() error](#Encoder.EncodeNil) + [func (e *Encoder) EncodeSliceLen(l int) error](#Encoder.EncodeSliceLen) + [func (e *Encoder) EncodeString(v string) error](#Encoder.EncodeString) + [func (e *Encoder) EncodeTime(tm time.Time) error](#Encoder.EncodeTime) + [func (e *Encoder) EncodeUint(v uint) error](#Encoder.EncodeUint) + [func (e *Encoder) EncodeUint16(v uint16) error](#Encoder.EncodeUint16) + [func (e *Encoder) EncodeUint32(v uint32) error](#Encoder.EncodeUint32) + [func (e *Encoder) EncodeUint64(v uint64) error](#Encoder.EncodeUint64) + [func (e *Encoder) EncodeUint8(v uint8) error](#Encoder.EncodeUint8) + [func (e *Encoder) EncodeValue(v reflect.Value) error](#Encoder.EncodeValue) + [func (e *Encoder) SortMapKeys(v bool) *Encoder](#Encoder.SortMapKeys) + [func (e *Encoder) StructAsArray(v bool) *Encoder](#Encoder.StructAsArray) + [func (e *Encoder) Writer() io.Writer](#Encoder.Writer) * [type Marshaler](#Marshaler) * [type Unmarshaler](#Unmarshaler) #### Examples [¶](#pkg-examples) * [Package (DecodeMapStringString)](#example-package-DecodeMapStringString) * [Package (EncodeMapStringString)](#example-package-EncodeMapStringString) * [CustomEncoder](#example-CustomEncoder) * [Decoder.Query](#example-Decoder.Query) * [Encoder.StructAsArray](#example-Encoder.StructAsArray) * [Marshal](#example-Marshal) * [Marshal (AsArray)](#example-Marshal-AsArray) * [Marshal (MapStringInterface)](#example-Marshal-MapStringInterface) * [Marshal (RecursiveMapStringInterface)](#example-Marshal-RecursiveMapStringInterface) * [RegisterExt](#example-RegisterExt) ### Constants [¶](#pkg-constants) This section is empty. ### Variables [¶](#pkg-variables) This section is empty. ### Functions [¶](#pkg-functions) #### func [Marshal](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode.go#L32) [¶](#Marshal) ``` func Marshal(v ...interface{}) ([][byte](/builtin#byte), [error](/builtin#error)) ``` Marshal returns the MessagePack encoding of v. 
Example [¶](#example-Marshal) ``` type Item struct { Foo string } b, err := msgpack.Marshal(&Item{Foo: "bar"}) if err != nil { panic(err) } var item Item err = msgpack.Unmarshal(b, &item) if err != nil { panic(err) } fmt.Println(item.Foo) ``` ``` Output: bar ``` Example (AsArray) [¶](#example-Marshal-AsArray) ``` type Item struct { _msgpack struct{} `msgpack:",asArray"` Foo string Bar string } var buf bytes.Buffer enc := msgpack.NewEncoder(&buf) err := enc.Encode(&Item{Foo: "foo", Bar: "bar"}) if err != nil { panic(err) } dec := msgpack.NewDecoder(&buf) v, err := dec.DecodeInterface() if err != nil { panic(err) } fmt.Println(v) ``` ``` Output: [foo bar] ``` Example (MapStringInterface) [¶](#example-Marshal-MapStringInterface) ``` in := map[string]interface{}{"foo": 1, "hello": "world"} b, err := msgpack.Marshal(in) if err != nil { panic(err) } var out map[string]interface{} err = msgpack.Unmarshal(b, &out) if err != nil { panic(err) } fmt.Println("foo =", out["foo"]) fmt.Println("hello =", out["hello"]) ``` ``` Output: foo = 1 hello = world ``` Example (RecursiveMapStringInterface) [¶](#example-Marshal-RecursiveMapStringInterface) ``` buf := new(bytes.Buffer) enc := msgpack.NewEncoder(buf) in := map[string]interface{}{"foo": map[string]interface{}{"hello": "world"}} _ = enc.Encode(in) dec := msgpack.NewDecoder(buf) dec.DecodeMapFunc = func(d *msgpack.Decoder) (interface{}, error) { n, err := d.DecodeMapLen() if err != nil { return nil, err } m := make(map[string]interface{}, n) for i := 0; i < n; i++ { mk, err := d.DecodeString() if err != nil { return nil, err } mv, err := d.DecodeInterface() if err != nil { return nil, err } m[mk] = mv } return m, nil } out, err := dec.DecodeInterface() if err != nil { panic(err) } fmt.Println(out) ``` ``` Output: map[foo:map[hello:world]] ``` #### func [Register](https://github.com/vmihailenco/msgpack/blob/v2.9.2/types.go#L25) [¶](#Register) ``` func Register(typ [reflect](/reflect).[Type](/reflect#Type), enc encoderFunc, dec decoderFunc) ``` Register registers encoder and decoder functions for a type. In most cases you should prefer implementing CustomEncoder and CustomDecoder interfaces. #### func [RegisterExt](https://github.com/vmihailenco/msgpack/blob/v2.9.2/ext.go#L26) [¶](#RegisterExt) added in v2.4.1 ``` func RegisterExt(id [int8](/builtin#int8), value interface{}) ``` RegisterExt records a type, identified by a value for that type, under the provided id. That id will identify the concrete type of a value sent or received as an interface variable. Only types that will be transferred as implementations of interface values need to be registered. Expecting to be used only during initialization, it panics if the mapping between types and ids is not a bijection. 
Example [¶](#example-RegisterExt) ``` package main import ( "encoding/binary" "fmt" "time" "gopkg.in/vmihailenco/msgpack.v2" ) func init() { msgpack.RegisterExt(0, &EventTime{}) } // https://github.com/fluent/fluentd/wiki/Forward-Protocol-Specification-v1#eventtime-ext-format type EventTime struct { time.Time } var _ msgpack.Marshaler = (*EventTime)(nil) var _ msgpack.Unmarshaler = (*EventTime)(nil) func (tm *EventTime) MarshalMsgpack() ([]byte, error) { b := make([]byte, 8) binary.BigEndian.PutUint32(b, uint32(tm.Unix())) binary.BigEndian.PutUint32(b[4:], uint32(tm.Nanosecond())) return b, nil } func (tm *EventTime) UnmarshalMsgpack(b []byte) error { if len(b) != 8 { return fmt.Errorf("invalid data length: got %d, wanted 8", len(b)) } sec := binary.BigEndian.Uint32(b) usec := binary.BigEndian.Uint32(b[4:]) tm.Time = time.Unix(int64(sec), int64(usec)) return nil } func main() { b, err := msgpack.Marshal(&EventTime{time.Unix(123456789, 123)}) if err != nil { panic(err) } var v interface{} err = msgpack.Unmarshal(b, &v) if err != nil { panic(err) } fmt.Println(v.(EventTime).UTC()) var tm EventTime err = msgpack.Unmarshal(b, &tm) if err != nil { panic(err) } fmt.Println(tm.UTC()) } ``` ``` Output: 1973-11-29 21:33:09.000000123 +0000 UTC 1973-11-29 21:33:09.000000123 +0000 UTC ``` #### func [Unmarshal](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode.go#L36) [¶](#Unmarshal) ``` func Unmarshal(data [][byte](/builtin#byte), v ...interface{}) [error](/builtin#error) ``` Unmarshal decodes the MessagePack-encoded data and stores the result in the value pointed to by v. ### Types [¶](#pkg-types) #### type [CustomDecoder](https://github.com/vmihailenco/msgpack/blob/v2.9.2/msgpack.go#L17) [¶](#CustomDecoder) added in v2.2.1 ``` type CustomDecoder interface { DecodeMsgpack(*[Decoder](#Decoder)) [error](/builtin#error) } ``` #### type [CustomEncoder](https://github.com/vmihailenco/msgpack/blob/v2.9.2/msgpack.go#L13) [¶](#CustomEncoder) added in v2.2.1 ``` type CustomEncoder interface { EncodeMsgpack(*[Encoder](#Encoder)) [error](/builtin#error) } ``` Example [¶](#example-CustomEncoder) ``` package main import ( "fmt" "gopkg.in/vmihailenco/msgpack.v2" ) type customStruct struct { S string N int } var _ msgpack.CustomEncoder = (*customStruct)(nil) var _ msgpack.CustomDecoder = (*customStruct)(nil) func (s *customStruct) EncodeMsgpack(enc *msgpack.Encoder) error { return enc.Encode(s.S, s.N) } func (s *customStruct) DecodeMsgpack(dec *msgpack.Decoder) error { return dec.Decode(&s.S, &s.N) } func main() { b, err := msgpack.Marshal(&customStruct{S: "hello", N: 42}) if err != nil { panic(err) } var v customStruct err = msgpack.Unmarshal(b, &v) if err != nil { panic(err) } fmt.Printf("%#v", v) } ``` ``` Output: msgpack_test.customStruct{S:"hello", N:42} ``` #### type [Decoder](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode.go#L40) [¶](#Decoder) ``` type Decoder struct { DecodeMapFunc func(*[Decoder](#Decoder)) (interface{}, [error](/builtin#error)) // contains filtered or unexported fields } ``` #### func [NewDecoder](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode.go#L50) [¶](#NewDecoder) ``` func NewDecoder(r [io](/io).[Reader](/io#Reader)) *[Decoder](#Decoder) ``` #### func (*Decoder) [Buffered](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode.go#L66) [¶](#Decoder.Buffered) added in v2.9.2 ``` func (d *[Decoder](#Decoder)) Buffered() [io](/io).[Reader](/io#Reader) ``` Buffered returns a reader of the data remaining in the Decoder's
buffer. The reader is valid until the next call to Decode. #### func (*Decoder) [Decode](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode.go#L70) [¶](#Decoder.Decode) ``` func (d *[Decoder](#Decoder)) Decode(v ...interface{}) [error](/builtin#error) ``` #### func (*Decoder) [DecodeArrayLen](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_slice.go#L19) [¶](#Decoder.DecodeArrayLen) added in v2.6.1 ``` func (d *[Decoder](#Decoder)) DecodeArrayLen() ([int](/builtin#int), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeBool](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode.go#L205) [¶](#Decoder.DecodeBool) ``` func (d *[Decoder](#Decoder)) DecodeBool() ([bool](/builtin#bool), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeBytes](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_string.go#L67) [¶](#Decoder.DecodeBytes) ``` func (d *[Decoder](#Decoder)) DecodeBytes() ([][byte](/builtin#byte), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeBytesLen](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_string.go#L59) [¶](#Decoder.DecodeBytesLen) ``` func (d *[Decoder](#Decoder)) DecodeBytesLen() ([int](/builtin#int), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeFloat32](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_number.go#L141) [¶](#Decoder.DecodeFloat32) ``` func (d *[Decoder](#Decoder)) DecodeFloat32() ([float32](/builtin#float32), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeFloat64](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_number.go#L165) [¶](#Decoder.DecodeFloat64) ``` func (d *[Decoder](#Decoder)) DecodeFloat64() ([float64](/builtin#float64), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeInt](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_number.go#L216) [¶](#Decoder.DecodeInt) ``` func (d *[Decoder](#Decoder)) DecodeInt() ([int](/builtin#int), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeInt16](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_number.go#L226) [¶](#Decoder.DecodeInt16) ``` func (d *[Decoder](#Decoder)) DecodeInt16() ([int16](/builtin#int16), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeInt32](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_number.go#L231) [¶](#Decoder.DecodeInt32) ``` func (d *[Decoder](#Decoder)) DecodeInt32() ([int32](/builtin#int32), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeInt64](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_number.go#L100) [¶](#Decoder.DecodeInt64) ``` func (d *[Decoder](#Decoder)) DecodeInt64() ([int64](/builtin#int64), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeInt8](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_number.go#L221) [¶](#Decoder.DecodeInt8) ``` func (d *[Decoder](#Decoder)) DecodeInt8() ([int8](/builtin#int8), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeInterface](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode.go#L250) [¶](#Decoder.DecodeInterface) ``` func (d *[Decoder](#Decoder)) DecodeInterface() (interface{}, [error](/builtin#error)) ``` DecodeInterface decodes value into interface. Possible value types are: * nil, * bool, * int64 for negative numbers, * uint64 for positive numbers, * float32 and float64, * string, * slices of any of the above, * maps of any of the above. 
#### func (*Decoder) [DecodeMap](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_map.go#L186) [¶](#Decoder.DecodeMap) ``` func (d *[Decoder](#Decoder)) DecodeMap() (interface{}, [error](/builtin#error)) ``` #### func (*Decoder) [DecodeMapLen](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_map.go#L76) [¶](#Decoder.DecodeMapLen) ``` func (d *[Decoder](#Decoder)) DecodeMapLen() ([int](/builtin#int), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeNil](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode.go#L194) [¶](#Decoder.DecodeNil) added in v2.2.1 ``` func (d *[Decoder](#Decoder)) DecodeNil() [error](/builtin#error) ``` #### func (*Decoder) [DecodeSlice](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_slice.go#L155) [¶](#Decoder.DecodeSlice) ``` func (d *[Decoder](#Decoder)) DecodeSlice() ([]interface{}, [error](/builtin#error)) ``` #### func (*Decoder) [DecodeSliceLen](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_slice.go#L15) [¶](#Decoder.DecodeSliceLen) ``` func (d *[Decoder](#Decoder)) DecodeSliceLen() ([int](/builtin#int), [error](/builtin#error)) ``` Deprecated. Use DecodeArrayLen instead. #### func (*Decoder) [DecodeString](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_string.go#L30) [¶](#Decoder.DecodeString) ``` func (d *[Decoder](#Decoder)) DecodeString() ([string](/builtin#string), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeTime](https://github.com/vmihailenco/msgpack/blob/v2.9.2/time.go#L27) [¶](#Decoder.DecodeTime) ``` func (d *[Decoder](#Decoder)) DecodeTime() ([time](/time).[Time](/time#Time), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeUint](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_number.go#L196) [¶](#Decoder.DecodeUint) ``` func (d *[Decoder](#Decoder)) DecodeUint() ([uint](/builtin#uint), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeUint16](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_number.go#L206) [¶](#Decoder.DecodeUint16) ``` func (d *[Decoder](#Decoder)) DecodeUint16() ([uint16](/builtin#uint16), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeUint32](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_number.go#L211) [¶](#Decoder.DecodeUint32) ``` func (d *[Decoder](#Decoder)) DecodeUint32() ([uint32](/builtin#uint32), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeUint64](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_number.go#L60) [¶](#Decoder.DecodeUint64) ``` func (d *[Decoder](#Decoder)) DecodeUint64() ([uint64](/builtin#uint64), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeUint8](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_number.go#L201) [¶](#Decoder.DecodeUint8) ``` func (d *[Decoder](#Decoder)) DecodeUint8() ([uint8](/builtin#uint8), [error](/builtin#error)) ``` #### func (*Decoder) [DecodeValue](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode.go#L189) [¶](#Decoder.DecodeValue) ``` func (d *[Decoder](#Decoder)) DecodeValue(v [reflect](/reflect).[Value](/reflect#Value)) [error](/builtin#error) ``` #### func (*Decoder) [PeekCode](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode.go#L348) [¶](#Decoder.PeekCode) added in v2.5.1 ``` func (d *[Decoder](#Decoder)) PeekCode() (code [byte](/builtin#byte), err [error](/builtin#error)) ``` PeekCode returns the next MessagePack code. See <https://github.com/msgpack/msgpack/blob/master/spec.md#formats> for details.
#### func (*Decoder) [Query](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode_query.go#L33) [¶](#Decoder.Query) added in v2.6.0 ``` func (d *[Decoder](#Decoder)) Query(query [string](/builtin#string)) ([]interface{}, [error](/builtin#error)) ``` Query extracts data specified by the query from the msgpack stream skipping any other data. Query consists of map keys and array indexes separated with dot, e.g. key1.0.key2. Example [¶](#example-Decoder.Query) ``` b, err := msgpack.Marshal([]map[string]interface{}{ {"id": 1, "attrs": map[string]interface{}{"phone": 12345}}, {"id": 2, "attrs": map[string]interface{}{"phone": 54321}}, }) if err != nil { panic(err) } dec := msgpack.NewDecoder(bytes.NewBuffer(b)) values, err := dec.Query("*.attrs.phone") if err != nil { panic(err) } fmt.Println("phones are", values) dec.Reset(bytes.NewBuffer(b)) values, err = dec.Query("1.attrs.phone") if err != nil { panic(err) } fmt.Println("2nd phone is", values[0]) ``` ``` Output: phones are [12345 54321] 2nd phone is 54321 ``` #### func (*Decoder) [Reset](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode.go#L59) [¶](#Decoder.Reset) added in v2.6.0 ``` func (d *[Decoder](#Decoder)) Reset(r [io](/io).[Reader](/io#Reader)) [error](/builtin#error) ``` #### func (*Decoder) [Skip](https://github.com/vmihailenco/msgpack/blob/v2.9.2/decode.go#L304) [¶](#Decoder.Skip) added in v2.4.5 ``` func (d *[Decoder](#Decoder)) Skip() [error](/builtin#error) ``` Skip skips next value. #### type [Encoder](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode.go#L38) [¶](#Encoder) ``` type Encoder struct { // contains filtered or unexported fields } ``` #### func [NewEncoder](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode.go#L46) [¶](#NewEncoder) ``` func NewEncoder(w [io](/io).[Writer](/io#Writer)) *[Encoder](#Encoder) ``` #### func (*Encoder) [Encode](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode.go#L72) [¶](#Encoder.Encode) ``` func (e *[Encoder](#Encoder)) Encode(v ...interface{}) [error](/builtin#error) ``` #### func (*Encoder) [EncodeArrayLen](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_slice.go#L72) [¶](#Encoder.EncodeArrayLen) added in v2.4.11 ``` func (e *[Encoder](#Encoder)) EncodeArrayLen(l [int](/builtin#int)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeBool](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode.go#L125) [¶](#Encoder.EncodeBool) ``` func (e *[Encoder](#Encoder)) EncodeBool(value [bool](/builtin#bool)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeBytes](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_slice.go#L62) [¶](#Encoder.EncodeBytes) ``` func (e *[Encoder](#Encoder)) EncodeBytes(v [][byte](/builtin#byte)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeBytesLen](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_slice.go#L32) [¶](#Encoder.EncodeBytesLen) added in v2.8.3 ``` func (e *[Encoder](#Encoder)) EncodeBytesLen(l [int](/builtin#int)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeFloat32](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_number.go#L77) [¶](#Encoder.EncodeFloat32) ``` func (e *[Encoder](#Encoder)) EncodeFloat32(n [float32](/builtin#float32)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeFloat64](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_number.go#L81) [¶](#Encoder.EncodeFloat64) ``` func (e *[Encoder](#Encoder)) EncodeFloat64(n [float64](/builtin#float64)) [error](/builtin#error) ``` #### func (*Encoder) 
[EncodeInt](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_number.go#L42) [¶](#Encoder.EncodeInt) ``` func (e *[Encoder](#Encoder)) EncodeInt(v [int](/builtin#int)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeInt16](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_number.go#L50) [¶](#Encoder.EncodeInt16) ``` func (e *[Encoder](#Encoder)) EncodeInt16(v [int16](/builtin#int16)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeInt32](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_number.go#L54) [¶](#Encoder.EncodeInt32) ``` func (e *[Encoder](#Encoder)) EncodeInt32(v [int32](/builtin#int32)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeInt64](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_number.go#L58) [¶](#Encoder.EncodeInt64) ``` func (e *[Encoder](#Encoder)) EncodeInt64(v [int64](/builtin#int64)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeInt8](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_number.go#L46) [¶](#Encoder.EncodeInt8) ``` func (e *[Encoder](#Encoder)) EncodeInt8(v [int8](/builtin#int8)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeMapLen](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_map.go#L123) [¶](#Encoder.EncodeMapLen) added in v2.4.8 ``` func (e *[Encoder](#Encoder)) EncodeMapLen(l [int](/builtin#int)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeNil](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode.go#L121) [¶](#Encoder.EncodeNil) ``` func (e *[Encoder](#Encoder)) EncodeNil() [error](/builtin#error) ``` #### func (*Encoder) [EncodeSliceLen](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_slice.go#L83) [¶](#Encoder.EncodeSliceLen) ``` func (e *[Encoder](#Encoder)) EncodeSliceLen(l [int](/builtin#int)) [error](/builtin#error) ``` Deprecated. Use EncodeArrayLen instead. 
#### func (*Encoder) [EncodeString](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_slice.go#L55) [¶](#Encoder.EncodeString) ``` func (e *[Encoder](#Encoder)) EncodeString(v [string](/builtin#string)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeTime](https://github.com/vmihailenco/msgpack/blob/v2.9.2/time.go#L17) [¶](#Encoder.EncodeTime) ``` func (e *[Encoder](#Encoder)) EncodeTime(tm [time](/time).[Time](/time#Time)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeUint](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_number.go#L10) [¶](#Encoder.EncodeUint) ``` func (e *[Encoder](#Encoder)) EncodeUint(v [uint](/builtin#uint)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeUint16](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_number.go#L18) [¶](#Encoder.EncodeUint16) ``` func (e *[Encoder](#Encoder)) EncodeUint16(v [uint16](/builtin#uint16)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeUint32](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_number.go#L22) [¶](#Encoder.EncodeUint32) ``` func (e *[Encoder](#Encoder)) EncodeUint32(v [uint32](/builtin#uint32)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeUint64](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_number.go#L26) [¶](#Encoder.EncodeUint64) ``` func (e *[Encoder](#Encoder)) EncodeUint64(v [uint64](/builtin#uint64)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeUint8](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode_number.go#L14) [¶](#Encoder.EncodeUint8) ``` func (e *[Encoder](#Encoder)) EncodeUint8(v [uint8](/builtin#uint8)) [error](/builtin#error) ``` #### func (*Encoder) [EncodeValue](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode.go#L116) [¶](#Encoder.EncodeValue) ``` func (e *[Encoder](#Encoder)) EncodeValue(v [reflect](/reflect).[Value](/reflect#Value)) [error](/builtin#error) ``` #### func (*Encoder) [SortMapKeys](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode.go#L61) [¶](#Encoder.SortMapKeys) added in v2.5.3 ``` func (e *[Encoder](#Encoder)) SortMapKeys(v [bool](/builtin#bool)) *[Encoder](#Encoder) ``` SortMapKeys causes the Encoder to encode map keys in increasing order. Supported map types are: * map[string]string * map[string]interface{} #### func (*Encoder) [StructAsArray](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode.go#L67) [¶](#Encoder.StructAsArray) added in v2.8.0 ``` func (e *[Encoder](#Encoder)) StructAsArray(v [bool](/builtin#bool)) *[Encoder](#Encoder) ``` StructAsArray causes the Encoder to encode Go structs as MessagePack arrays. Example [¶](#example-Encoder.StructAsArray) ``` type Item struct { Foo string Bar string } var buf bytes.Buffer enc := msgpack.NewEncoder(&buf).StructAsArray(true) err := enc.Encode(&Item{Foo: "foo", Bar: "bar"}) if err != nil { panic(err) } dec := msgpack.NewDecoder(&buf) v, err := dec.DecodeInterface() if err != nil { panic(err) } fmt.Println(v) ``` ``` Output: [foo bar] ``` #### func (*Encoder) [Writer](https://github.com/vmihailenco/msgpack/blob/v2.9.2/encode.go#L82) [¶](#Encoder.Writer) added in v2.9.2 ``` func (e *[Encoder](#Encoder)) Writer() [io](/io).[Writer](/io#Writer) ``` Writer returns the Encoder's writer. #### type [Marshaler](https://github.com/vmihailenco/msgpack/blob/v2.9.2/msgpack.go#L4) [¶](#Marshaler) ``` type Marshaler interface { MarshalMsgpack() ([][byte](/builtin#byte), [error](/builtin#error)) } ``` Deprecated. Use CustomEncoder. 
#### type [Unmarshaler](https://github.com/vmihailenco/msgpack/blob/v2.9.2/msgpack.go#L9) [¶](#Unmarshaler) ``` type Unmarshaler interface { UnmarshalMsgpack([][byte](/builtin#byte)) [error](/builtin#error) } ``` Deprecated. Use CustomDecoder.
pystematic
readthedoc
Markdown
## Defining experiments

The central concept in pystematic is that of an Experiment. An experiment consists of:

* a main function that executes the code associated with the experiment,
* a set of parameters that control aspects of the experiment's behavior.

To define an experiment you simply decorate the main function of your experiment with the `pystematic.experiment` decorator:

```
import pystematic

@pystematic.experiment
def my_experiment(params):
    print("Hello from my_experiment")
```

The main function must take a single argument which is a dict containing the experiment parameters.

## Running experiments

To run the experiment you have a couple of different options. The simplest one is to call the
```
pystematic.core.Experiment.run()
```
method:

```
my_experiment.run({})
```

It takes a single argument which is a dict of parameter values; as we haven't defined any parameters yet, we'll pass an empty dict for now.

Another option is to run the experiment from the command line. To do that, we call the
```
pystematic.core.Experiment.cli()
```
method:

```
if __name__ == "__main__":
    my_experiment.cli()
```

The file now has the capabilities of a full-fledged CLI. When you run the file from the command line:

```
$ python path/to/file.py
Hello from my_experiment
```

you will see that the experiment is run.

Note

At most one experiment can be active at a time. This means that if you want to run an experiment from within another experiment, you need to start the new experiment in a new process, which can be done with the method
```
pystematic.core.Experiment.run_in_new_process()
```

## Adding parameters

Every experiment has a set of parameters associated with it. If you run the experiment from the command line with the `--help` flag, you will see that the experiment we defined earlier is already equipped with a set of default parameters. To add additional parameters to the experiment, you use the
```
pystematic.parameter()
```
decorator:

```
import pystematic

@pystematic.parameter(
    name="string_to_print",
    type=str,
    help="This string will be printed when the experiment is run",
    default="No string was given",
)
@pystematic.experiment
def my_experiment(params):
    print(f"string_to_print is {params['string_to_print']}")
```

The code above adds a string parameter named `string_to_print` with a default value, and a description of the parameter. When we run the experiment - either programmatically or from the command line - we can set a value for the parameter.
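For example (a minimal sketch; the kebab-case flag follows the naming convention explained below):

```
my_experiment.run({"string_to_print": "Hello world!"})
```

or, from the command line:

```
$ python path/to/file.py --string-to-print "Hello world!"
```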
Pystematic provides a set of functions designated for the currently running experiment, referred to as the Experiment API. The API consists of a set of functions and attributes that provide functionality such as launching sub-processes and generating reproducible random seeds. Any installed extension also extends the experiment API with its own set of functions.

## A note on naming conventions

At this point it is probably a good idea to mention something about the naming conventions used. You may have noticed that in the python source code, the names of all experiments and parameters use the snake_case convention, but on the command line, these are magically converted to kebab-case. This seems to be a convention in CLI tools, and this framework sticks to that convention. To reiterate, this means that on the command line, all parameters and experiments use the kebab-case naming convention, but in the source code, they all use the snake_case naming convention.

## Experiment output

If you tried running the examples above you might have noticed that a folder named `output` was created in your current working directory. This is no accident. Every time an experiment is run, a unique output folder is created in the configured output directory. The folder creation follows the naming convention
```
<output_dir>/<experiment_name>/<current date and time>
```
, where `output_dir` is the value of the parameter with the same name (which defaults to your current working directory). The reason each invocation of an experiment gets its own output directory is to avoid mixing up outputs from different runs.

If you look into the output directory of one of the experiment runs you will also notice that there is a file there named `parameters.yaml`. This file contains the values of all parameters when the experiment was run. This is extremely useful when you run an experiment many times with different sets of parameters.

When an experiment is run, this newly created output directory is bound to the
```
pystematic.output_dir
```
property. All data that you want to output from the experiment should be written to this directory.

## Managing random numbers

Reproducibility is an integral part of any sort of research. One of the default parameters added to all experiments is an integer named `random_seed`. If a value for this parameter is not supplied when an experiment is run, a random value will be generated and assigned to this parameter.

The value of the `random_seed` parameter is used to seed an internal random number generator used by pystematic. Whenever you need to seed a random number generator in your experiment, you call the `pystematic.new_seed()` function to obtain a seed. Internally, the function uses pystematic's internal generator to produce a new number every time it is called. This way, you make the experiment reproducible by controlling all sources of randomness in the experiment with the single “global” seed provided in the `random_seed` parameter.

Here's how a simple experiment might make sure that random numbers are reproducible:

```
import random

import numpy as np
import pystematic as ps

@ps.experiment
def reproducible_experiment(params):
    random.seed(ps.new_seed())
    np.random.seed(ps.new_seed())
    # etc.
```

## Grouping experiments

If you have several experiments defined in the same file, you may want to be able to run them all from the CLI without changing your code. This is what groups are for. Take the following as an example:

```
import pystematic as ps

@ps.experiment
def prepare_dataset(params):
    # ...

# (fit_model and visualize_results are defined analogously)

if __name__ == "__main__":
    # prepare_dataset.cli()
    # fit_model.cli()
    visualize_results.cli()
```

The code above has three defined experiments. We can run them from the cli by calling each experiment's `cli()` function, but that would require us to change the code whenever we want to run another experiment. To remedy this, we can add them all to a group. We first use the `pystematic.group()` decorator to define the group, and then use the group's own experiment decorator to define the experiments, instead of the global experiment decorator. We then use the group's `cli()` function to activate the cli:

```
import pystematic as ps

@ps.group
def my_group():
    pass

@my_group.experiment  # <--- Note that we are using the group's experiment
                      #      decorator instead of the global one.
def prepare_dataset(params):
    # ...
if __name__ == "__main__":
    my_group.cli()
```

We can now choose which experiment to run like this:

```
$ python path/to/file.py prepare-dataset <experiment params here>
# or:
$ python path/to/file.py fit-model <experiment params here>
# or:
$ python path/to/file.py visualize-results <experiment params here>
```

To get a list of all available experiments, simply run the script with the `-h` flag.

Another feature of groups is that all parameters that you add to the group will be inherited by all experiments in the group. This is very useful when you have several experiments that share a set of common parameters. By adding the common parameters to the group, you don't have to repeat the parameter definitions for every experiment. In the example above, we could define a common parameter like this:

```
import pystematic as ps

@ps.parameter(
    name="dataset_path",
    type=str
)
@ps.group
def my_group():
    pass
```

Groups can be arbitrarily nested to create hierarchies of experiments. Note that the main function of the group is never run. It is only used as a symbolic convenience for defining the group.

## Extensions

Pystematic is built from the core to be extensible. See the page on Writing extensions to learn how you can design and customize experiments of your own.

To be continued

## Decorators

* @pystematic.experiment(name=None, inherit_params=None, defaults={})

  Creates a new experiment with the decorated function as the main function.

* @pystematic.group(name=None, inherit_params=None)

  Used to group experiments. This decorator is used on a function. Note that the decorated function will never be called. All parameters added to the group will be inherited by all experiments that are part of the group.

  * name (str, optional) – The name of the group. Defaults to None.

* @pystematic.parameter(...)

  Adds a parameter to an experiment.

  * name (str) – The name of the parameter. The name must be a valid python identifier.
  * type (Callable[[str], Any], optional) – The type of the parameter. Must be a callable that takes a single string value argument and returns the converted value. Defaults to str.
  * default (Union[Any, Callable[[], Any], None], optional) – The default value of the parameter. Can be either a value or a zero arguments callable that returns the value. Defaults to None.
  * required (bool, optional) – Set to True if this parameter is required. If a default value is set, the parameter will effectively no longer be ‘required’, even if this option is set to True. Defaults to False.
  * allowed_values (list[Any], optional) – If given, the value must be in the list of allowed values. Defaults to None.
  * is_flag (bool, optional) – When set to True, this parameter is assumed to be a boolean flag. A flag parameter does not need to be given a value on the command line. Its mere presence on the command line will automatically assign it the value True. Defaults to False.
  * multiple (bool, optional) – When set to True, the parameter is assumed to be a list of zero or more values. Defaults to False.
  * allow_from_file (bool, optional) – Controls whether it should be allowed to load a value for this parameter from a params file. Defaults to True.
  * envvar (Union[str, None, Literal[False]], optional) – Name of the environment variable that the value for this parameter may be read from. Defaults to None.
  * help (Optional[str], optional) – A help text for the parameter that will be shown on the command line. Defaults to None.
  * default_help (Optional[str], optional) – A help text for the default value.
    If None, the default help text will be created by calling `str(default_value)`. Useful when the default value is a callable. Defaults to None.

* @pystematic.param_group(name, help=None, *parameters)

  Defines a parameter group. Useful when you have many parameters and want to organize them. Parameter groups are not visible when passing parameters to, or reading parameters in, the experiment. Their sole purpose is to make the CLI help output a bit more structured, which hopefully helps your colleagues when inspecting the experiment.

  You define the parameters that you want to pass to this decorator with the normal parameter decorator, but without using the ‘@’ in front of the function. Like this:

  ```
  import pystematic as ps

  @ps.param_group(
      "param_group",
      ps.parameter(
          name="str_param",
          type=str
      ),
      ps.parameter(
          name="int_param",
          type=int
      )
  )
  @ps.experiment
  def exp(params):
      print(f"str_param is {params['str_param']} and int_param is {params['int_param']}")
  ```

  * name (str) – The name of the group
  * help (str, optional) – An optional description of this group. If provided, it has to be passed as the second argument to this decorator. Defaults to None.
  * *parameters (Parameter) – An arbitrary number of parameters that should belong to this group. Use the parameter decorator - but without the ‘@’ - to create parameters that you pass as positional arguments to this function.

The experiment API is available for the currently running experiment. The use of the API when no experiment is running results in undefined behavior.

### Attributes

These attributes hold information related to the current experiment. Note that they are uninitialized until an experiment has actually started.

* pystematic.output_dir : pathlib.Path

  Holds a `pathlib.Path` object that points to the current output directory. All output from an experiment should be written to this folder. All internal procedures that produce output will always write it to this folder. When you want to output something persistent from the experiment yourself, it is your responsibility to use this output directory.

* pystematic.params : dict

  Holds a dict of all parameters of the current experiment. It is the same dict that is passed to the main function. You should never modify this dict.

### Functions

* pystematic.new_seed(nbits=32) → int

  Use this function to generate random numbers seeded by the experiment parameter `random_seed`. Expected use is to seed your own random number generators.

  * nbits (int, optional) – The number of bits to use to represent the generated number. Defaults to 32.
  * Returns – A random number seeded by the experiment parameter `random_seed`.
  * Return type – int

* pystematic.launch_subprocess(**additional_params) → Process

  Launches a subprocess. The subprocess will be instructed to execute the main function of the currently running experiment, and have the same output directory and parameters as the current process.

  * **additional_params – Any additional parameters that should be passed to the subprocess. Params given here take precedence over the parameters copied from the current experiment.

  Warning

  The subprocess will be initialized with the same random seed as the current process. If this is not what you want, you should pass a new seed to this function in the `random_seed` parameter. E.g.:

  ```
  pystematic.launch_subprocess(random_seed=pystematic.new_seed())
  ```

* pystematic.run_parameter_sweep(experiment, list_of_params, max_num_processes=1) → None

  Runs an experiment multiple times with a set of different params.
  At most `max_num_processes` concurrent processes will be used. This call will block until all experiments have been run.

  * experiment (Experiment) – The experiment to run.
  * list_of_params (list of dict) – A list of parameter dictionaries, each corresponding to one run of the experiment. See
    ```
    pystematic.param_matrix()
    ```
    for a convenient way of generating such a list.
  * max_num_processes (int, optional) – The maximum number of concurrent processes to use for running the experiments. Defaults to 1.

* pystematic.is_subprocess() → bool

  Returns true if this process is a subprocess, i.e. it has been launched by a call to `launch_subprocess()` in a parent process.

  * Returns – Whether or not the current process is a subprocess.
  * Return type – bool

* pystematic.local_rank()

  Returns the local rank of the current process. The master process will always have rank 0, and every subprocess launched with
  ```
  pystematic.launch_subprocess()
  ```
  will be assigned a new local rank from an incrementing integer counter starting at 1.

  * Returns – The local rank of the current process.
  * Return type – int

* pystematic.param_matrix(**param_values)

  This function can be used to build parameter combinations to use when running parameter sweeps. It takes an arbitrary number of keyword arguments, where each argument is a parameter and a list of all values that you want to try for that parameter. It then builds a list of parameter dictionaries such that all combinations of parameter values appear once in the list. The output of this function can be passed directly to
  ```
  pystematic.run_parameter_sweep()
  ```
  . For example:

  ```
  import pystematic as ps

  param_list = ps.param_matrix(
      int_param=[1, 2],
      str_param=["hello", "world"]
  )

  assert param_list == [
      {"int_param": 1, "str_param": "hello"},
      {"int_param": 1, "str_param": "world"},
      {"int_param": 2, "str_param": "hello"},
      {"int_param": 2, "str_param": "world"}
  ]
  ```

  * **param_values – A mapping from parameter name to a list of values to try for that parameter. If a value is not a list or tuple, it is assumed to be constant (its value will be the same in all combinations).
  * Returns – A list of parameter combinations created by taking the cartesian product of all values in the input.
  * Return type – list of dicts

## Default parameters

The following parameters are added to all experiments by default. Note that these are also listed if you run an experiment from the command line with the `--help` option.

* `output_dir`: Parent directory to store all run-logs in. Will be created if it does not exist. Default value is `./output`.
* `random_seed`: The value to seed the master random number generator with. Default is randomly generated.
* `params_file`: Read experiment parameters from a yaml file, such as the one dumped in the output dir from an earlier run. When this option is set from the command line, any other options supplied after it will override the ones loaded from the file.
* `debug`: Sets the debug flag ON/OFF. Configures the python logging mechanism to print all DEBUG messages. Default value is `False`.

## Core types

These classes are not supposed to be instantiated manually, but only through their corresponding decorators.

* class pystematic.core.Experiment(main_function, name=None, defaults_override={})

  This is the class used to represent experiments. Note that you should not instantiate this class manually, but only through the decorator.

  * cli()

    Runs the experiment by parsing the parameters from the command line.
* run(params={})
* Runs the experiment in the current process with the provided parameters.

params (dict, optional) – A dict containing values for the parameters. Defaults to {}.

* run_in_new_process(params)
* Runs the experiment in a new process with the parameters provided. Returns a handle to the process object used to run the experiment. If you want to wait for the experiment to finish, you have to manually wait for the process to exit.

params (dict) – A dict containing the values for the parameters.
* Returns
* The process object used to run the experiment.
* Return type
* multiprocessing.Process

* class pystematic.core.ExperimentGroup(main_function, name=None)
* Use when you have many experiments and want to group them in some way.

Runs the group by parsing the parameters from the command line. The first argument should be the name of the experiment to run.

* experiment(name=None, inherit_params=None, defaults={}, inherit_params_from_group=True)
* Creates a new experiment that is part of this group. See also `pystematic.experiment()`.

* group(name=None, inherit_params=None, inherit_params_from_group=True)
* Creates a nested group. See also `pystematic.group()`.

name (str, optional) – Name of the group. Defaults to the name of the annotated function.

* class pystematic.core.Parameter
* Represents an experiment parameter. You will typically never interact with this class directly.

* class pystematic.core.ParameterGroup(name, help, parameter_decorators)

* class pystematic.core.PystematicApp
* A single instance of this class is created when pystematic initializes. Its main purpose is to centralize extension management. When an extension is initialized it is handed a reference to that instance.

* get_api_object()
* Returns a handle to the `pystematic` global api. Extensions can retrieve the handle by calling this function, and modify the global api during initialization.

* Returns
* The api available under the `pystematic` global namespace.
* Return type
* module

* on_after_experiment(callback, priority=50)
* Adds a callback to the 'after_experiment' event. The event is triggered after the experiment main function has returned.

callback (function) – A function that is called after an experiment has ended. The callback must take a single argument, which is the error that caused the experiment to end, or None if the experiment ended normally.

* on_before_experiment(callback, priority=50)
* Adds a callback to the 'before_experiment' event. The event is triggered before the experiment main function is called.

callback (function) – A function that is called before an experiment is run. It should take two arguments: the experiment, and a dict of parameter values.

* on_experiment_created(callback, priority=50)
* Adds a callback to the 'experiment_created' event.

callback (function) – A function that is called whenever a new experiment is defined. It should take the created experiment as a single argument.

# Writing extensions

to be continued
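In the meantime, here is a minimal, hypothetical sketch of what an extension might look like, based only on the `PystematicApp` callbacks documented above. The class name, and the assumption that pystematic hands the app instance to the extension when it is initialized, are illustrative rather than part of a documented registration API:

```
import logging

logger = logging.getLogger("my_extension")

class LoggingExtension:
    """Hypothetical extension that logs experiment lifecycle events."""

    def __init__(self, app):
        # 'app' is assumed to be the PystematicApp instance that is
        # handed to extensions on initialization (see above).
        app.on_before_experiment(self._before, priority=50)
        app.on_after_experiment(self._after, priority=50)

    def _before(self, experiment, params):
        # 'before_experiment' callbacks receive the experiment and a
        # dict of parameter values.
        logger.info("starting experiment with params: %s", params)

    def _after(self, error):
        # 'after_experiment' callbacks receive the error that ended the
        # experiment, or None if it ended normally.
        if error is None:
            logger.info("experiment finished normally")
        else:
            logger.error("experiment failed: %s", error)
```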
README [¶](#section-readme)
---

### Google Secret Manager Provider for Secret Store CSI Driver

[![e2e](https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp/actions/workflows/e2e.yml/badge.svg)](https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp/actions/workflows/e2e.yml)

[Google Secret Manager](https://cloud.google.com/secret-manager/) provider for the [Secret Store CSI Driver](https://github.com/kubernetes-sigs/secrets-store-csi-driver). Allows you to access secrets stored in Secret Manager as files mounted in Kubernetes pods.

#### Install

* Create a new GKE cluster with Workload Identity, or enable [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable_on_existing_cluster) on an existing cluster.
* Install the [Secret Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/getting-started/installation.html) v1.0.1 or higher to the cluster.
* Install the Google plugin DaemonSet & additional RoleBindings:

```
kubectl apply -f deploy/provider-gcp-plugin.yaml
# if you want to use helm
# helm upgrade --install secrets-store-csi-driver-provider-gcp charts/secrets-store-csi-driver-provider-gcp
```

NOTE: The driver's rotation and secret syncing functionality is still in Alpha and requires [additional installation steps](https://secrets-store-csi-driver.sigs.k8s.io/getting-started/installation.html#optional-values).

#### Usage

The provider will use the workload identity of the pod that a secret is mounted into when authenticating to the Google Secret Manager API. For this to work, the workload identity of the pod must be configured and appropriate IAM bindings must be applied.

* Set up the workload identity service account:

```
$ export PROJECT_ID=<your gcp project>
$ gcloud config set project $PROJECT_ID

# Create a service account for workload identity
$ gcloud iam service-accounts create gke-workload

# Allow "default/mypodserviceaccount" to act as the new service account
$ gcloud iam service-accounts add-iam-policy-binding \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:$PROJECT_ID.svc.id.goog[default/mypodserviceaccount]" \
    gke-workload@$PROJECT_ID.iam.gserviceaccount.com
```

* Create a secret that the workload identity service account can access:

```
# Create a secret with 1 active version
$ echo "foo" > secret.data
$ gcloud secrets create testsecret --replication-policy=automatic --data-file=secret.data
$ rm secret.data

# grant the new service account permission to access the secret
$ gcloud secrets add-iam-policy-binding testsecret \
    --member=serviceAccount:gke-workload@$PROJECT_ID.iam.gserviceaccount.com \
    --role=roles/secretmanager.secretAccessor
```

* Try out the [example](https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp/blob/v1.3.0/examples), which attempts to mount the secret "testsecret" in `$PROJECT_ID` to `/var/secrets/good1.txt` and `/var/secrets/good2.txt`:

```
$ ./scripts/example.sh
$ kubectl exec -it mypod -- /bin/bash
root@mypod:/# ls /var/secrets
```

#### Security Considerations

This plugin is built to ensure compatibility between Secret Manager and Kubernetes workloads that need to load secrets from the filesystem. It also enables syncing of those secrets to Kubernetes-native secrets for consumption as environment variables.
When evaluating this plugin, consider the following threats:

* When a secret is accessible on the **filesystem**, application vulnerabilities like [directory traversal](https://en.wikipedia.org/wiki/Directory_traversal_attack) attacks can become higher severity, as the attacker may gain the ability to read the secret material.
* When a secret is consumed through **environment variables**, misconfigurations such as enabling a debug endpoint or including dependencies that log process environment details may leak secrets.
* When **syncing** secret material to another data store (like Kubernetes Secrets), consider whether the access controls on that data store are sufficiently narrow in scope.

For these reasons, *when possible* we recommend using the Secret Manager API directly (using one of the provided [client libraries](https://cloud.google.com/secret-manager/docs/reference/libraries), or by following the [REST](https://cloud.google.com/secret-manager/docs/reference/rest) or [GRPC](https://cloud.google.com/secret-manager/docs/reference/rpc) documentation); see the Go sketch at the end of this section.

#### Contributing

Please see the [contributing guidelines](https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp/blob/v1.3.0/docs/contributing.md).

#### Support

**This is not an officially supported Google product.**

For support, [please search open issues here](https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp/issues), and if your issue isn't already represented please [open a new one](https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp/issues/new/choose). Pull requests and issues will be triaged weekly.

Documentation [¶](#section-documentation)
---

### Overview [¶](#pkg-overview)

Binary secrets-store-csi-driver-provider-gcp is a plugin for the secrets-store-csi-driver for fetching secrets from Google Cloud's Secret Manager API.
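To complement the security recommendation above about calling the Secret Manager API directly, here is a minimal Go sketch. The import paths assume a recent release of the `cloud.google.com/go/secretmanager` client library, and the project ID is a placeholder; the secret name mirrors the Usage example:

```go
package main

import (
	"context"
	"fmt"
	"log"

	secretmanager "cloud.google.com/go/secretmanager/apiv1"
	secretmanagerpb "cloud.google.com/go/secretmanager/apiv1/secretmanagerpb"
)

func main() {
	ctx := context.Background()

	// The client uses Application Default Credentials, e.g. the pod's
	// workload identity when running on GKE.
	client, err := secretmanager.NewClient(ctx)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	defer client.Close()

	// Access the latest version of the secret created in the Usage steps.
	// Replace "my-project" with your project ID.
	result, err := client.AccessSecretVersion(ctx, &secretmanagerpb.AccessSecretVersionRequest{
		Name: "projects/my-project/secrets/testsecret/versions/latest",
	})
	if err != nil {
		log.Fatalf("failed to access secret version: %v", err)
	}

	fmt.Printf("secret payload: %s\n", result.Payload.Data)
}
```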
sourmash 2.3.1 documentation

Welcome to sourmash![¶](#welcome-to-sourmash)
===

sourmash is a command-line tool and Python library for computing [hash sketches](https://en.wikipedia.org/wiki/MinHash) from DNA sequences, comparing them to each other, and plotting the results. This allows you to estimate sequence similarity between even very large data sets quickly and accurately.

sourmash can be used to quickly search large databases of genomes for matches to query genomes and metagenomes; see [our list of available databases](databases.html).

sourmash also includes k-mer based taxonomic exploration and classification routines for genome and metagenome analysis. These routines can use the NCBI taxonomy but do not depend on it in any way.

We have [several tutorials](tutorials.html) available! Start with [Making signatures, comparing, and searching](tutorial-basic.html).

The paper [Large-scale sequence comparisons with sourmash (Pierce et al., 2019)](https://f1000research.com/articles/8-1006) gives an overview of how sourmash works and what its major use cases are. Please also see the [mash software](http://mash.readthedocs.io/en/latest/) and the [mash paper (Ondov et al., 2016)](http://dx.doi.org/10.1186/s13059-016-0997-x) for background information on how and why MinHash works.

**Questions? Thoughts?** Ask us on the [sourmash issue tracker](https://github.com/dib-lab/sourmash/issues/)!

---

To use sourmash, you must be comfortable with the UNIX command line; programmers may find the [Python library and API](api.html) useful as well.

If you use sourmash, please cite us!

> Brown and Irber (2016),
> **sourmash: a library for MinHash sketching of DNA**
> Journal of Open Source Software, 1(5), 27, [doi:10.21105/joss.00027](https://joss.theoj.org/papers/3d793c6e7db683bee7c03377a4a7f3c9)

sourmash in brief[¶](#sourmash-in-brief)
---

sourmash uses MinHash-style sketching to create "signatures", compressed representations of DNA/RNA sequence. These signatures can then be stored, searched, explored, and taxonomically annotated.

* `sourmash` provides command line utilities for creating, comparing, and searching signatures, as well as plotting and clustering signatures by similarity (see [the command-line docs](command-line.html)).
* `sourmash` can **search very large collections of signatures** to find matches to a query.
* `sourmash` can also **identify parts of metagenomes that match known genomes**, and can **taxonomically classify genomes and metagenomes** against databases of known species.
* `sourmash` can be used to **search databases of public sequences** (e.g. all of GenBank) and can also be used to create and search databases of **private sequencing data**.
* `sourmash` supports saving, loading, and communication of signatures via [JSON](http://www.json.org/), a ~human-readable & editable format.
* `sourmash` also has a simple Python API for interacting with signatures, including support for online updating and querying of signatures (see [the API docs](api.html)).
* `sourmash` isn't terribly slow, and relies on an underlying Cython module.
* `sourmash` is developed [on GitHub](https://github.com/dib-lab/sourmash) and is **freely and openly available** under the BSD 3-clause license. Please see [the README](https://github.com/dib-lab/sourmash/blob/master/README.md) for more information on development, support, and contributing.
You can take a look at sourmash analyses on real data [in a saved Jupyter notebook](https://github.com/dib-lab/sourmash/blob/master/doc/sourmash-examples.ipynb), and experiment with it yourself [interactively in a Jupyter Notebook](https://mybinder.org/v2/gh/dib-lab/sourmash/master?filepath=doc%2Fsourmash-examples.ipynb) at [mybinder.org](http://mybinder.org).

Installing sourmash[¶](#installing-sourmash)
---

We currently suggest installing the latest pre-release in the sourmash 2.0 series; please see [the README file in github.com/dib-lab/sourmash](https://github.com/dib-lab/sourmash/blob/master/README.md) for information. You can use pip or conda equally well.

Memory and speed[¶](#memory-and-speed)
---

sourmash has relatively small disk and memory requirements compared to many other software programs used for genome search and taxonomic classification.

First, `mash` beats sourmash in speed and memory, so if you can use mash, more power to you :)

`sourmash search` and `sourmash gather` can be used to search all genbank microbial genomes ([using our prepared databases](databases.html)) with about 20 GB of disk and under 1 GB of RAM. Typically a search for a single genome takes about 30 seconds on a laptop.

`sourmash lca` can be used to search/classify against all genbank microbial genomes with about 200 MB of disk space and about 10 GB of RAM. Typically a metagenome classification takes about 1 minute on a laptop.

Limitations[¶](#limitations)
---

**sourmash cannot find matches across large evolutionary distances.**

sourmash seems to work well to search and compare data sets for matches at the species and genus level, but does not have much sensitivity beyond that. (It seems to be particularly good at strain-level analysis.) You should use protein-based analyses to do searches across larger evolutionary distances.

**sourmash signatures can be very large.**

We use a modification of the MinHash sketch approach that allows us to search the contents of metagenomes and large genomes with no loss of sensitivity, but there is a tradeoff: there is no guaranteed limit to signature size when using 'scaled' signatures.

Contents:[¶](#contents)
---

### Using sourmash from the command line[¶](#using-sourmash-from-the-command-line)

From the command line, sourmash can be used to compute [MinHash sketches](https://en.wikipedia.org/wiki/MinHash) from DNA sequences, compare them to each other, and plot the results; these sketches are saved into "signature files". These signatures allow you to estimate sequence similarity quickly and accurately in large collections, among other capabilities.

Please see the [mash software](http://mash.readthedocs.io/en/latest/) and the [mash paper (Ondov et al., 2016)](http://biorxiv.org/content/early/2015/10/26/029827) for background information on how and why MinHash sketches work.

sourmash uses a subcommand syntax, so all commands start with `sourmash` followed by a subcommand specifying the action to be taken.
#### An example[¶](#an-example)

Grab three bacterial genomes from NCBI:

```
curl -L -O ftp://ftp.ncbi.nlm.nih.gov/genomes/refseq/bacteria/Escherichia_coli/reference/GCF_000005845.2_ASM584v2/GCF_000005845.2_ASM584v2_genomic.fna.gz
curl -L -O ftp://ftp.ncbi.nlm.nih.gov/genomes/refseq/bacteria/Salmonella_enterica/reference/GCF_000006945.2_ASM694v2/GCF_000006945.2_ASM694v2_genomic.fna.gz
curl -L -O ftp://ftp.ncbi.nlm.nih.gov/genomes/refseq/bacteria/Sphingobacteriaceae_bacterium_DW12/latest_assembly_versions/GCF_000783305.1_ASM78330v1/GCF_000783305.1_ASM78330v1_genomic.fna.gz
```

Compute signatures for each:

```
sourmash compute -k 31 *.fna.gz
```

This will produce three `.sig` files containing MinHash signatures at k=31.

Next, compare all the signatures to each other:

```
sourmash compare *.sig -o cmp
```

Optionally, parallelize compare to 8 threads with `-p 8`:

```
sourmash compare -p 8 *.sig -o cmp
```

Finally, plot a dendrogram:

```
sourmash plot cmp --labels
```

This will output two files, `cmp.dendro.png` and `cmp.matrix.png`, containing a clustering & dendrogram of the sequences, as well as a similarity matrix and heatmap.

#### The `sourmash` command and its subcommands[¶](#the-sourmash-command-and-its-subcommands)

To get a list of subcommands, run `sourmash` without any arguments.

There are five main subcommands: `compute`, `compare`, `plot`, `search`, and `gather`. See [the tutorial](tutorials.html) for a walkthrough of these commands.

* `compute` creates signatures.
* `compare` compares signatures and builds a distance matrix.
* `plot` plots distance matrices created by `compare`.
* `search` finds matches to a query signature in a collection of signatures.
* `gather` finds non-overlapping matches to a metagenome in a collection of signatures.

There are also a number of commands that work with taxonomic information; these are grouped under the `sourmash lca` subcommand. See [the LCA tutorial](tutorials-lca.html) for a walkthrough of these commands.

* `lca classify` classifies many signatures against an LCA database.
* `lca summarize` summarizes the content of a metagenome using an LCA database.
* `lca gather` finds non-overlapping matches to a metagenome in an LCA database.
* `lca index` creates a database for use with LCA subcommands.
* `lca rankinfo` summarizes the content of a database.
* `lca compare_csv` compares lineage spreadsheets, e.g. those output by `lca classify`.

Finally, there are a number of utility and information commands:

* `info` shows version and software information.
* `index` indexes many signatures using a Sequence Bloom Tree (SBT).
* `sbt_combine` combines multiple SBTs.
* `categorize` is an experimental command to categorize many signatures.
* `watch` is an experimental command to classify a stream of sequencing data.

##### `sourmash compute`[¶](#sourmash-compute)

The `compute` subcommand computes and saves signatures for each sequence in one or more sequence files. It takes as input FASTA or FASTQ files, and these files can be uncompressed or compressed with gzip or bzip2. The output will be one or more JSON signature files that can be used with `sourmash compare`.

Please see [Using sourmash: a practical guide](index.html#document-using-sourmash-a-guide) for more information on computing signatures.

---

Usage:

```
sourmash compute filename [ filename2 ... ]
```
Optional arguments:

```
--ksizes K1[,K2,K3] -- one or more k-mer sizes to use; default is 31
--force -- recompute existing signatures; convert non-DNA characters to N
--output -- save all the signatures to this file; can be '-' for stdout.
--track-abundance -- compute and save k-mer abundances.
--name-from-first -- name the signature based on the first sequence in the file
--singleton -- instead of computing a single signature for each input file, compute one for each sequence
--merged <name> -- compute a single signature for all of the input files, naming it <name>
```

##### `sourmash compare`[¶](#sourmash-compare)

The `compare` subcommand compares one or more signature files (created with `compute`) using the estimated [Jaccard index](https://en.wikipedia.org/wiki/Jaccard_index). The default output is a text display of a similarity matrix where each entry `[i, j]` contains the estimated Jaccard index between input signature `i` and input signature `j`. The output matrix can be saved to a file with `--output` and used with the `sourmash plot` subcommand (or loaded with `numpy.load(...)`). Using `--csv` will output a CSV file that can be loaded into languages other than Python, such as R.

Usage:

```
sourmash compare file1.sig [ file2.sig ... ]
```

Options:

```
--output -- save the distance matrix to this file (as a numpy binary matrix)
--ksize -- do the comparisons at this k-mer size.
```

##### `sourmash plot`[¶](#sourmash-plot)

The `plot` subcommand produces two plots – a dendrogram and a dendrogram+matrix – from a distance matrix computed by `sourmash compare --output <matrix>`. The default output is two PNG files.

Usage:

```
sourmash plot <matrix>
```

Options:

```
--pdf -- output PDF files.
--labels -- display the signature names (by default, the filenames) on the plot
--indices -- turn off index display on the plot.
--vmax -- maximum value (default 1.0) for heatmap.
--vmin -- minimum value (default 0.0) for heatmap.
--subsample=<N> -- plot a maximum of <N> samples, randomly chosen.
--subsample-seed=<seed> -- seed for pseudorandom number generator.
```

Example output: an E. coli comparison plot.

##### `sourmash search`[¶](#sourmash-search)

The `search` subcommand searches a collection of signatures or SBTs for matches to the query signature. It can search for matches with either high [Jaccard similarity](https://en.wikipedia.org/wiki/Jaccard_index) or containment; the default is to use Jaccard similarity, unless `--containment` is specified. `-o/--output` will create a CSV file containing the matches.

`search` will load all of the provided signatures into memory, which can be slow and somewhat memory intensive for large collections. You can use `sourmash index` to create a Sequence Bloom Tree (SBT) that can be quickly searched on disk; this is [the same format in which we provide GenBank and other databases](databases.html).

Usage:

```
sourmash search query.sig [ list of signatures or SBTs ]
```

Example output:

```
49 matches; showing first 20:
similarity   match
----------   -----
 75.4%       NZ_JMGW01000001.1 Escherichia coli 1-176-05_S4_C2 e117605...
 72.2%       NZ_GG774190.1 Escherichia coli MS 196-1 Scfld2538, whole ...
 71.4%       NZ_JMGU01000001.1 Escherichia coli 2-011-08_S3_C2 e201108...
 70.1%       NZ_JHRU01000001.1 Escherichia coli strain 100854 100854_1...
 69.0%       NZ_JH659569.1 Escherichia coli M919 supercont2.1, whole g...
...
```

##### `sourmash gather`[¶](#sourmash-gather)

The `gather` subcommand finds all non-overlapping matches to the query. This is specifically meant for metagenome and genome bin analysis.
(See [Classifying Signatures](classifying-signatures.html) for more information on the different approaches that can be used here.)

If the input signature was computed with `--track-abundance`, output will be abundance weighted (unless `--ignore-abundances` is specified). `-o/--output` will create a CSV file containing the matches.

`gather`, like `search`, will load all of the provided signatures into memory. You can use `sourmash index` to create a Sequence Bloom Tree (SBT) that can be quickly searched on disk; this is [the same format in which we provide GenBank and other databases](databases.html).

Usage:

```
sourmash gather query.sig [ list of signatures or SBTs ]
```

Example output:

```
overlap     p_query p_match
---------   ------- -------
1.4 Mbp      11.0%   58.0%      JANA01000001.1 Fusobacterium sp. OBRC...
1.0 Mbp       7.7%   25.9%      CP001957.1 Haloferax volcanii DS2 pla...
0.9 Mbp       7.4%   11.8%      BA000019.2 Nostoc sp. PCC 7120 DNA, c...
0.7 Mbp       5.9%   23.0%      FOVK01000036.1 Proteiniclasticum rumi...
0.7 Mbp       5.3%   17.6%      AE017285.1 Desulfovibrio vulgaris sub...
```

Note: Use `sourmash gather` to classify a metagenome against a collection of genomes with no (or incomplete) taxonomic information. Use `sourmash lca summarize` and `sourmash lca gather` to classify a metagenome using a collection of genomes with taxonomic information.

#### `sourmash lca` subcommands for taxonomic classification[¶](#sourmash-lca-subcommands-for-taxonomic-classification)

These commands use LCA databases (created with `lca index`, below, or prepared databases such as [genbank-k31.lca.json.gz](databases.html)).

##### `sourmash lca classify`[¶](#sourmash-lca-classify)

`sourmash lca classify` classifies one or more signatures using the given list of LCA DBs. It is meant for classifying metagenome-assembled genome bins (MAGs) and single-cell genomes (SAGs).

Usage:

```
sourmash lca classify --query query.sig [query2.sig ...] --db <lca db> [<lca db2> ...]
```

For example, the command

```
sourmash lca classify --query tests/test-data/63.fa.sig \
    --db podar-ref.lca.json
```

will produce the following logging to stderr:

```
loaded 1 LCA databases. ksize=31, scaled=10000
finding query signatures...
outputting classifications to stdout
... classifying NC_011663.1 Shewanella baltica OS223, complete genome
classified 1 signatures total
```

and the example classification output is a CSV file with headers:

```
ID,status,superkingdom,phylum,class,order,family,genus,species
"NC_009665.1 Shewanella baltica OS185, complete genome",found,Bacteria,Proteobacteria,Gammaproteobacteria,Alteromonadales,Shewanellaceae,Shewanella,Shewanella baltica
```

The `status` column in the classification output can take three possible values: `nomatch`, `found`, and `disagree`. `nomatch` means that no match was found for this query, and `found` means that an unambiguous assignment was found - all k-mers were classified within the same taxonomic hierarchy, and the most detailed lineage available was reported. `disagree` means that there was a taxonomic disagreement, and the lowest compatible taxonomic node was reported.

To elaborate on this a bit, suppose that all of the k-mers within a signature were classified as family *Shewanellaceae*, genus *Shewanella*, or species *Shewanella baltica*. Then the lowest compatible node (here species *Shewanella baltica*) would be reported, and the status of the classification would be `found`.
However, if a number of additional k-mers in the input signature were classified as *Shewanella oneidensis*, sourmash would be unable to resolve the taxonomic assignment below genus *Shewanella*, and it would report a status of `disagree` with the genus-level assignment of *Shewanella*; species-level assignments would not be reported. (This is the approach that Kraken and other lowest common ancestor implementations use, we believe.)

##### `sourmash lca summarize`[¶](#sourmash-lca-summarize)

`sourmash lca summarize` produces a Kraken-style summary of the combined contents of the given query signatures. It is meant for exploring metagenomes and metagenome-assembled genome bins.

Note: unlike `sourmash lca classify`, `lca summarize` merges all of the query signatures into one and reports on the combined contents. This may be changed in the future.

Usage:

```
sourmash lca summarize --query query.sig [query2.sig ...] --db <lca db> [<lca db2> ...]
```

For example, the command line:

```
sourmash lca summarize --query tests/test-data/63.fa.sig \
    --db tests/test-data/podar-ref.lca.json
```

will produce the following log output to stderr:

```
loaded 1 LCA databases. ksize=31, scaled=10000
finding query signatures...
loaded 1 signatures from 1 files total.
```

and the following example summarize output to stdout:

```
50.5%   278   Bacteria;Proteobacteria;Gammaproteobacteria;Alteromonadales;Shewanellaceae;Shewanella;Shewanella baltica;Shewanella baltica OS223
100.0%   550   Bacteria;Proteobacteria;Gammaproteobacteria;Alteromonadales;Shewanellaceae;Shewanella;Shewanella baltica
100.0%   550   Bacteria;Proteobacteria;Gammaproteobacteria;Alteromonadales;Shewanellaceae;Shewanella
100.0%   550   Bacteria;Proteobacteria;Gammaproteobacteria;Alteromonadales;Shewanellaceae
100.0%   550   Bacteria;Proteobacteria;Gammaproteobacteria;Alteromonadales
100.0%   550   Bacteria;Proteobacteria;Gammaproteobacteria
100.0%   550   Bacteria;Proteobacteria
100.0%   550   Bacteria
```

The output is space-separated and consists of three columns: the percentage of total k-mers that have this classification; the number of k-mers that have this classification; and the lineage classification. K-mer classifications are reported hierarchically, so the percentages and totals contain all assignments that are at a lower taxonomic level - e.g. *Bacteria*, above, contains all the k-mers in *Bacteria;Proteobacteria*.

The same information is reported in a CSV file if `-o/--output` is used.

##### `sourmash lca gather`[¶](#sourmash-lca-gather)

The `sourmash lca gather` command finds all non-overlapping matches to the query, similar to the `sourmash gather` command. This is specifically meant for metagenome and genome bin analysis. (See [Classifying Signatures](classifying-signatures.html) for more information on the different approaches that can be used here.)

If the input signature was computed with `--track-abundance`, output will be abundance weighted (unless `--ignore-abundances` is specified). `-o/--output` will create a CSV file containing the matches.

Usage:

```
sourmash lca gather query.sig [<lca database> ...]
```

Example output:

```
overlap     p_query p_match
---------   ------- -------
1.8 Mbp      14.6%    9.1%      Fusobacterium nucleatum
1.0 Mbp       7.8%   16.3%      Proteiniclasticum ruminis
1.0 Mbp       7.7%   25.9%      Haloferax volcanii
0.9 Mbp       7.4%   11.8%      Nostoc sp. PCC 7120
0.9 Mbp       7.0%    5.8%      Shewanella baltica
0.8 Mbp       6.0%    8.6%      Desulfovibrio vulgaris
0.6 Mbp       4.9%   12.6%      Thermus thermophilus
```

##### `sourmash lca index`[¶](#sourmash-lca-index)

The `sourmash lca index` command creates an LCA database from a lineage spreadsheet and a collection of signatures. This can be used to create LCA databases from private collections of genomes, and can also be used to create databases for e.g. subsets of GenBank.

See [the `sourmash lca` tutorial](tutorials-lca.html) and the blog post [Why are taxonomic assignments so different for Tara bins?](http://ivory.idyll.org/blog/2017-taxonomic-disagreements-in-tara-mags.html) for some use cases.

If you are interested in preparing lineage spreadsheets from GenBank genomes (or building off of NCBI taxonomies more generally), please see [the NCBI lineage repository](https://github.com/dib-lab/2018-ncbi-lineages).

##### `sourmash lca rankinfo`[¶](#sourmash-lca-rankinfo)

The `sourmash lca rankinfo` command displays k-mer specificity information for one or more LCA databases. See the blog post [How specific are k-mers for taxonomic assignment of microbes, anyway?](http://ivory.idyll.org/blog/2017-how-specific-kmers.html) for example output.

##### `sourmash lca compare_csv`[¶](#sourmash-lca-compare-csv)

The `sourmash lca compare_csv` command compares two lineage spreadsheets (such as those output by `sourmash lca classify` or taken as input by `sourmash lca index`) and summarizes their agreement/disagreement. Please see the blog post [Why are taxonomic assignments so different for Tara bins?](http://ivory.idyll.org/blog/2017-taxonomic-disagreements-in-tara-mags.html) for an example use case.

#### `sourmash signature` subcommands for signature manipulation[¶](#sourmash-signature-subcommands-for-signature-manipulation)

These commands manipulate signatures from the command line. Currently supported subcommands are `merge`, `rename`, `intersect`, `extract`, `downsample`, `subtract`, `import`, `export`, `info`, `flatten`, and `filter`.

The signature commands that combine or otherwise have multiple signatures interacting (`merge`, `intersect`, `subtract`) work only on compatible signatures, where the k-mer size and nucleotide/protein sequences match each other. If working directly with the hash values (e.g. `merge`, `intersect`, `subtract`), then the scaled values must also match; you can use `downsample` to convert a bunch of samples to the same scaled value.

If there are multiple signatures in a file with different ksizes and/or from nucleotide and protein sequences, you can choose amongst them with `-k/--ksize` and `--dna` or `--protein`, as with other sourmash commands such as `search`, `gather`, and `compare`.

Note: you can use `sourmash sig` as shorthand for all of these commands.

##### `sourmash signature merge`[¶](#sourmash-signature-merge)

Merge two (or more) signatures. For example,

```
sourmash signature merge file1.sig file2.sig -o merged.sig
```

will output the union of all the hashes in `file1.sig` and `file2.sig` to `merged.sig`.

All of the signatures passed to merge must either have been computed with `--track-abundance`, or not. If they have `track_abundance` on, then the merged signature will have the sum of all abundances across the individual signatures. The `--flatten` flag will override this behavior and allow merging of mixtures by removing all abundances; see the example below.
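As a concrete illustration of the `--flatten` behavior just described (the file names here are hypothetical):

```
# 'abund.sig' was computed with --track-abundance; 'flat.sig' was not.
# --flatten removes abundances so the mixture can be merged.
sourmash signature merge abund.sig flat.sig --flatten -o merged.sig
```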
##### `sourmash signature rename`[¶](#sourmash-signature-rename)

Rename the display name for one or more signatures - this is the name output for matches in `compare`, `search`, `gather`, etc. For example,

```
sourmash signature rename file1.sig "new name" -o renamed.sig
```

will place a renamed copy of the hashes in `file1.sig` in the file `renamed.sig`. If you provide multiple signatures, all will be renamed to the same name.

##### `sourmash signature subtract`[¶](#sourmash-signature-subtract)

Subtract all of the hash values from one signature that are in one or more of the others. For example,

```
sourmash signature subtract file1.sig file2.sig file3.sig -o subtracted.sig
```

will subtract all of the hashes in `file2.sig` and `file3.sig` from `file1.sig`, and save the new signature to `subtracted.sig`.

To use `subtract` on signatures calculated with `--track-abundance`, you must specify `--flatten`.

##### `sourmash signature intersect`[¶](#sourmash-signature-intersect)

Output the intersection of the hash values in multiple signature files. For example,

```
sourmash signature intersect file1.sig file2.sig file3.sig -o intersect.sig
```

will output the intersection of all the hashes in those three files to `intersect.sig`.

The `intersect` command flattens all signatures, i.e. the abundances in any signatures will be ignored and the output signature will have `track_abundance` turned off.

##### `sourmash signature downsample`[¶](#sourmash-signature-downsample)

Downsample one or more signatures. With `downsample`, you can –

* increase the `--scaled` value for a signature computed with `--scaled`, shrinking it in size;
* decrease the `num` value for a traditional num MinHash, shrinking it in size;
* try to convert a `--scaled` signature to a `num` signature;
* try to convert a `num` signature to a `--scaled` signature.

For example,

```
sourmash signature downsample file1.sig file2.sig --scaled 100000 -o downsampled.sig
```

will output each signature, downsampled to a scaled value of 100000, to `downsampled.sig`; and

```
sourmash signature downsample --num 500 scaled_file.sig -o downsampled.sig
```

will try to convert a scaled MinHash to a num MinHash.

##### `sourmash signature extract`[¶](#sourmash-signature-extract)

Extract the specified signature(s) from a collection of signatures. For example,

```
sourmash signature extract *.sig -k 21 --dna -o extracted.sig
```

will extract all nucleotide signatures calculated at k=21 from all .sig files in the current directory.

There are currently two other useful selectors for `extract`: you can specify (part of) an md5sum, as output in the CSVs produced by `search` and `gather`; and you can specify (part of) a name. For example,

```
sourmash signature extract tests/test-data/*.fa.sig --md5 09a0869
```

will extract the signature from `47.fa.sig` which has an md5sum of `09a08691ce52952152f0e866a59f6261`; and

```
sourmash signature extract tests/test-data/*.fa.sig --name NC_009665
```

will extract the same signature, which has an accession number of `NC_009665.1`.

##### `sourmash signature flatten`[¶](#sourmash-signature-flatten)

Flatten the specified signature(s), removing abundances and setting track_abundance to False. For example,

```
sourmash signature flatten *.sig -o flattened.sig
```

will remove all abundances from all of the .sig files in the current directory. The `flatten` command accepts the same selectors as `extract`; see the example below.
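For instance, `flatten` can be combined with the `--name` selector described under `extract` (file paths as in the extract examples above):

```
# flatten only the signature(s) whose name matches NC_009665
sourmash signature flatten tests/test-data/*.fa.sig --name NC_009665 -o flat.sig
```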
##### `sourmash signature filter`[¶](#sourmash-signature-filter)

Filter the hashes in the specified signature(s) by abundance, by either `-m/--min-abundance` or `-M/--max-abundance` or both. Abundance selection is inclusive, so `-m 2 -M 5` will select hashes with abundance greater than or equal to 2, and less than or equal to 5.

For example,

```
sourmash signature filter -m 2 *.sig
```

will output new signatures containing only hashes that occur two or more times in each signature. The `filter` command accepts the same selectors as `extract`.

##### `sourmash signature import`[¶](#sourmash-signature-import)

Import signatures into sourmash format. Currently this only supports mash, and can import mash sketches output by `mash info -d <filename.msh>`. For example,

```
sourmash signature import filename.msh.json -o imported.sig
```

will import the contents of `filename.msh.json` into `imported.sig`.

##### `sourmash signature export`[¶](#sourmash-signature-export)

Export signatures from sourmash format. Currently this only supports the mash dump format. For example,

```
sourmash signature export filename.sig -o filename.sig.msh.json
```

##### `sourmash signature overlap`[¶](#sourmash-signature-overlap)

Display a detailed comparison of two signatures. This computes the Jaccard similarity (as in `sourmash compare` or `sourmash search`) and the Jaccard containment in both directions (as with `--containment`). It also displays the number of hash values in the union and intersection of the two signatures, as well as the number of disjoint hash values in each signature.

This command has two uses - first, it is helpful for understanding how similarity and containment are calculated, and second, it is useful for analyzing signatures with very small overlaps, where the similarity and/or containment might be very close to zero. For example,

```
sourmash signature overlap file1.sig file2.sig
```

will display the detailed comparison of `file1.sig` and `file2.sig`.

### sourmash tutorials and notebooks[¶](#sourmash-tutorials-and-notebooks)

#### The first two tutorials![¶](#the-first-two-tutorials)

These tutorials are both command line tutorials that should work on Mac OS X and Linux. They require about 5 GB of disk space and 5 GB of RAM.

* [The first sourmash tutorial - making signatures, comparing, and searching](tutorial-basic.html)
* [Using sourmash LCA to do taxonomic classification](tutorials-lca.html)

#### Background and details[¶](#background-and-details)

These next three tutorials are all notebooks that you can view, run yourself, or run interactively online via the [binder](http://mybinder.org) service.

* [An introduction to k-mers for genome comparison and analysis](kmers-and-minhash.html)
* [Some sourmash command line examples!](sourmash-examples.html)
* [Working with private collections of signatures.](sourmash-collections.html)

#### More information[¶](#more-information)

If you are a Python programmer, you might also be interested in our [API examples](api-example.html). If you prefer R, we have [a short guide to using sourmash output with R](other-languages.html).

### Using sourmash: a practical guide[¶](#using-sourmash-a-practical-guide)

So! You've installed sourmash, run a few of the tutorials and commands, and now you actually want to *use* it. This guide is here to answer some of your questions, and explain why we can't answer others.
(If you have additional questions, please [file an issue!](https://github.com/dib-lab/sourmash/issues))

#### What k-mer size(s) should I use?[¶](#what-k-mer-size-s-should-i-use)

You can build signatures at a variety of k-mer sizes all at once, and (unless you are working with very large metagenomes) the resulting signature files will still be quite small. We suggest including k=31 and k=51. k=51 gives you the most stringent matches, and has very few false positives. k=31 may be more sensitive at the genus level.

Why 31 and 51, specifically? To a large extent these numbers were picked out of a hat, based on our reading of papers like the [Metapalette paper (Koslicki and Falush, 2016)](http://msystems.asm.org/content/1/3/e00020-16). You could go with k=49 or k=53 and probably get very similar results to k=51. The general rule is that longer k-mer sizes are less prone to false positives. But you can pick your own parameters.

One additional wrinkle is that we provide a number of [precomputed databases](databases.html) at k=21, k=31, and k=51. It is often convenient to calculate signatures at these sizes so that you can use these databases.

You'll notice that all of the above numbers are odd. That is to avoid occasional minor complications from palindromes in numerical calculations, where the forward and reverse complements of a k-mer are identical. This cannot happen if k is odd. It is not enforced by sourmash, however, and it probably doesn't really matter.

(When we have blog posts or publications providing more formal guidance, we'll link to them here!)

#### What resolution should my signatures be / how should I compute them?[¶](#what-resolution-should-my-signatures-be-how-should-i-compute-them)

sourmash supports two ways of choosing the resolution or size of your signatures: using `-n` to specify the maximum number of hashes, or `--scaled` to specify the compression ratio. Which should you use?

We suggest calculating all your signatures using `--scaled 1000`. This will give you a compression ratio of 1000-to-1 while making it possible to detect regions of similarity in the 10kb range.

For comparison with more traditional MinHash approaches like `mash`, if you have a 5 Mbp genome and use `--scaled 1000`, you will extract approximately 5000 hashes. So a scaled of 1000 is equivalent to using `-n 5000` with mash on a 5 Mbp genome.

The difference between using `-n` and `--scaled` shows up in metagenome analysis: fixing the number of hashes with `-n` limits your ability to detect rare organisms, or alternatively results in very large signatures (e.g. if you use n larger than 10000). `--scaled` will scale your resolution with the diversity of the metagenome.

You can read more about this in this blog post from the mash folk, [Mash Screen: What's in my sequencing run?](https://genomeinformatics.github.io/mash-screen/) What we do with sourmash and `--scaled` is similar to the 'modulo hash' mentioned in that blog post.

(Again, when we have formal guidance on this based on benchmarks, we'll link to it here.)

#### What kind of input data does sourmash work on?[¶](#what-kind-of-input-data-does-sourmash-work-on)

sourmash has been used most extensively with Illumina read data sets and assembled genomes, transcriptomes, and metagenomes. The high error rate of PacBio and Nanopore sequencing is problematic for k-mer based approaches, and we have not yet explored how to tune parameters for this kind of sequencing.
On a more practical note, `sourmash compute` should autodetect FASTA and FASTQ input, whether uncompressed, gzipped, or bzip2-ed. Nothing special needs to be done.

#### How should I prepare my data?[¶](#how-should-i-prepare-my-data)

Raw Illumina read data sets should be k-mer abundance trimmed to get rid of the bulk of erroneous k-mers. We suggest a command like the following, using [trim-low-abund.py from the khmer project](https://peerj.com/preprints/890/) –

```
trim-low-abund.py -C 3 -Z 18 -V -M 2e9 <all of your input read files>
```

This is safe to use on genomes, metagenomes, and transcriptomes. If you are working with large genomes or diverse metagenomes, you may need to increase the `-M` parameter to use more memory.

See [the khmer docs for trim-low-abund.py](https://khmer.readthedocs.io/en/v2.1.2/user/scripts.html#trim-low-abund-py) and [the semi-streaming preprint](https://peerj.com/preprints/890/) for more information.

For high coverage genomic data, you can do very stringent trimming with an absolute cutoff, e.g.

```
trim-low-abund.py -C 10 -M 2e9 <all of your input read files>
```

will eliminate all k-mers that appear fewer than 10 times in your data set. This kind of trimming will dramatically reduce your sensitivity when working with metagenomes and transcriptomes, however, where there are always real low-abundance k-mers present.

#### Could you just give us the !#%#!$ command line?[¶](#could-you-just-give-us-the-command-line)

Sorry, yes! See below.

##### Computing signatures for read files:[¶](#computing-signatures-for-read-files)

```
trim-low-abund.py -C 3 -Z 18 -V -M 2e9 input-reads-1.fq input-reads-2.fq ...
sourmash compute --scaled 1000 -k 21,31,51 input-reads*.fq.abundtrim \
    --merge SOMENAME -o SOMENAME-reads.sig
```

The first command trims off low-abundance k-mers from high-coverage reads; the second takes all the trimmed read files, subsamples k-mers from them at 1000:1, and outputs a single merged signature named 'SOMENAME' into the file `SOMENAME-reads.sig`.

##### Computing signatures for individual genome files:[¶](#computing-signatures-for-individual-genome-files)

```
sourmash compute --scaled 1000 -k 21,31,51 *.fna.gz --name-from-first
```

This command computes signatures for all `*.fna.gz` files, and names each signature based on the first FASTA header in each file (that's what the option `--name-from-first` does). The signatures will be placed in `*.fna.gz.sig`.

##### Computing signatures from a collection of genomes in a single file:[¶](#computing-signatures-from-a-collection-of-genomes-in-a-single-file)

```
sourmash compute --scaled 1000 -k 21,31,51 file.fa --singleton
```

This computes signatures for all individual FASTA sequences in `file.fa`, names them based on their FASTA headers, and places them all in a single `.sig` file, `file.fa.sig`. (This behavior is triggered by the option `--singleton`, which tells sourmash to treat each individual sequence in the file as an independent sequence.)

### Classifying signatures: `search`, `gather`, and `lca` methods.[¶](#classifying-signatures-search-gather-and-lca-methods)

sourmash provides several different techniques for doing classification and breakdown of signatures.

#### Searching for similar samples with `search`.[¶](#searching-for-similar-samples-with-search)

The `sourmash search` command is most useful when you are looking for high similarity matches to other signatures; this is the most basic use case for MinHash searching.
The command takes a query signature and one or more search signatures, and finds all the matches it can above a particular threshold.

By default `search` will find matches with high [*Jaccard similarity*](https://en.wikipedia.org/wiki/Jaccard_index), which will consider all of the k-mers in the union of the two samples. Practically, this means that you will only find matches if there is both high overlap between the samples *and* relatively few k-mers that are disjoint between the samples. This is effective for finding genomes or transcriptomes that are similar, but rarely works well for samples of vastly different sizes.

One useful modification to `search` is to calculate containment with `--containment` instead of the (default) similarity; this will find matches where the query is contained within the subject, but the subject may have many other k-mers in it. For example, if you are using a plasmid as a query, you would use `--containment` to find genomes that contained that plasmid. See [the main sourmash tutorial](http://sourmash.readthedocs.io/en/latest/tutorials.html#make-and-search-a-database-quickly) for information on using `search` with and without `--containment`.

#### Breaking down metagenomic samples with `gather` and `lca`[¶](#breaking-down-metagenomic-samples-with-gather-and-lca)

Neither search option (similarity or containment) is effective when comparing or searching with metagenomes, which typically have a mixture of many different genomes. While you might use containment to see if a query genome is present in one or more metagenomes, a common question to ask is the reverse: **what genomes are in my metagenome?**

We have implemented two algorithms in sourmash to do this.

One algorithm uses taxonomic information from e.g. GenBank to classify individual k-mers, and then infers taxonomic distributions of metagenome contents from the presence of these individual k-mers. (This is the approach pioneered by [Kraken](https://ccb.jhu.edu/software/kraken/) and many other tools.) `sourmash lca` can be used to classify individual genome bins with `classify`, or summarize metagenome taxonomy with `summarize`. The [sourmash lca tutorial](http://sourmash.readthedocs.io/en/latest/tutorials-lca.html) shows how to use the `lca classify` and `summarize` commands, and also provides guidance on building your own database.

The other approach, `gather`, breaks a metagenome down into individual genomes based on greedy partitioning. Essentially, it takes a query metagenome and searches the database for the most highly contained genome; it then subtracts that match from the metagenome, and repeats. At the end it reports how much of the metagenome remains unknown. The [basic sourmash tutorial](http://sourmash.readthedocs.io/en/latest/tutorials.html#what-s-in-my-metagenome) has some sample output from using gather with GenBank.

Our preliminary benchmarking suggests that `gather` is the most accurate method available for doing strain-level resolution of genomes. More on that as we move forward!

#### To do taxonomy, or not to do taxonomy?[¶](#to-do-taxonomy-or-not-to-do-taxonomy)

By default, there is no structured taxonomic information available in sourmash signatures or SBT databases of signatures. Generally what this means is that you will have to provide your own mapping from a match to some taxonomic hierarchy. This is generally the case when you are working with lots of genomes that have no taxonomic information.
The `lca` subcommands, however, work with LCA databases, which contain taxonomic information by construction. This is one of the main differences between the `sourmash lca` subcommands and the basic `sourmash search` functionality. So the `lca` subcommands will generally output structured taxonomic information, and these are what you should look to if you are interested in doing classification. The command `lca gather` applies the `gather` algorithm to search an LCA database; it reports taxonomy.

It's important to note that taxonomy based on k-mers is very, very specific, and if you get a match, it's pretty reliable. Conversely, however, k-mer identification is very brittle with respect to evolutionary divergence, so if you don't get a match it may only mean that the particular species isn't known.

#### Abundance weighting[¶](#abundance-weighting)

If you compute your input signatures with `--track-abundance`, both `sourmash gather` and `sourmash lca gather` will use that information to calculate an abundance-weighted result. Briefly, this will weight each match to a hash value by the multiplicity of the hash value in the query signature. You can turn off this behavior with `--ignore-abundance`.

#### What commands should I use?[¶](#what-commands-should-i-use)

It's not always easy to figure that out, we know! We're thinking about better tutorials and documentation constantly.

We suggest the following approach:

* build some signatures and do some searches, to get some basic familiarity with sourmash;
* explore the available databases;
* then ask questions [via the issue tracker](https://github.com/dib-lab/sourmash/issues) and we will do our best to help you out!

This helps us figure out what people are actually interested in doing, and any help we provide via the issue tracker will eventually be added into the documentation.

### [`sourmash` Python API](#id1)[¶](#sourmash-python-api)

The primary programmatic way of interacting with `sourmash` is via its Python API. Please also see [examples of using the API](api-example.html).

Contents

* [`sourmash` Python API](#sourmash-python-api)
  + [`MinHash`: basic MinHash sketch functionality](#module-sourmash)
  + [`SourmashSignature`: save and load MinHash sketches in JSON](#module-sourmash.signature)
  + [`SBT`: save and load Sequence Bloom Trees in JSON](#module-sourmash.sbt)
  + [`sourmash.fig`: make plots and figures](#module-sourmash.fig)

#### [`MinHash`: basic MinHash sketch functionality](#id2)[¶](#module-sourmash)

An implementation of a MinHash bottom sketch, applied to k-mers in DNA.

#### [`SourmashSignature`: save and load MinHash sketches in JSON](#id3)[¶](#module-sourmash.signature)

Save and load MinHash sketches in a JSON format, along with some metadata.

*class* `sourmash.signature.``SourmashSignature`(*minhash*, *name=''*, *filename=''*)[[source]](_modules/sourmash/signature.html#SourmashSignature)[¶](#sourmash.signature.SourmashSignature)

Main class for signature information.

`contained_by`(*other*, *downsample=False*)[[source]](_modules/sourmash/signature.html#SourmashSignature.contained_by)[¶](#sourmash.signature.SourmashSignature.contained_by)

Compute containment by the other signature. Note: ignores abundance.

`jaccard`(*other*)[[source]](_modules/sourmash/signature.html#SourmashSignature.jaccard)[¶](#sourmash.signature.SourmashSignature.jaccard)

Compute Jaccard similarity with the other MinHash signature.
`md5sum`()[[source]](_modules/sourmash/signature.html#SourmashSignature.md5sum)[¶](#sourmash.signature.SourmashSignature.md5sum)

Calculate the md5 hash of the bottom sketch, specifically.

`name`()[[source]](_modules/sourmash/signature.html#SourmashSignature.name)[¶](#sourmash.signature.SourmashSignature.name)

Return as nice a name as possible, defaulting to the md5 prefix.

`similarity`(*other*, *ignore_abundance=False*, *downsample=False*)[[source]](_modules/sourmash/signature.html#SourmashSignature.similarity)[¶](#sourmash.signature.SourmashSignature.similarity)

Compute similarity with the other MinHash signature.

`sourmash.signature.``load_signatures`(*data*, *ksize=None*, *select_moltype=None*, *ignore_md5sum=False*, *do_raise=False*, *quiet=False*)[[source]](_modules/sourmash/signature.html#load_signatures)[¶](#sourmash.signature.load_signatures)

Load a JSON string with signatures into classes. Returns a list of SourmashSignature objects.

Note: the order is not necessarily the same as what is in the source file.

`sourmash.signature.``save_signatures`(*siglist*, *fp=None*)[[source]](_modules/sourmash/signature.html#save_signatures)[¶](#sourmash.signature.save_signatures)

Save multiple signatures into a JSON string (or into the file handle 'fp').

#### [`SBT`: save and load Sequence Bloom Trees in JSON](#id4)[¶](#module-sourmash.sbt)

An implementation of sequence bloom trees, Solomon & Kingsford, 2015.

To try it out, do:

```
factory = GraphFactory(ksize, tablesizes, n_tables)
root = Node(factory)

graph1 = factory()
# ... add stuff to graph1 ...
leaf1 = Leaf("a", graph1)
root.add_node(leaf1)
```

For example,

```
# filenames: list of fa/fq files
# ksize: k-mer size
# tablesizes: Bloom filter table sizes
# n_tables: Number of tables

factory = GraphFactory(ksize, tablesizes, n_tables)
root = Node(factory)

for filename in filenames:
    graph = factory()
    graph.consume_fasta(filename)
    leaf = Leaf(filename, graph)
    root.add_node(leaf)
```

then define a search function,

```
def kmers(k, seq):
    for start in range(len(seq) - k + 1):
        yield seq[start:start + k]

def search_transcript(node, seq, threshold):
    # 'ksize' is taken from the enclosing scope
    presence = [node.data.get(kmer) for kmer in kmers(ksize, seq)]
    if sum(presence) >= int(threshold * len(seq)):
        return 1
    return 0
```

*class* `sourmash.sbt.``GraphFactory`(*ksize*, *starting_size*, *n_tables*)[[source]](_modules/sourmash/sbt.html#GraphFactory)[¶](#sourmash.sbt.GraphFactory)

Build new nodegraphs (Bloom filters) of a specific (fixed) size.

| Parameters: | * **ksize** (*int*) – k-mer size. * **starting_size** (*int*) – size (in bytes) for each nodegraph table. * **n_tables** (*int*) – number of nodegraph tables to be used. |

`init_args`()[[source]](_modules/sourmash/sbt.html#GraphFactory.init_args)[¶](#sourmash.sbt.GraphFactory.init_args)

*class* `sourmash.sbt.``Node`(*factory*, *name=None*, *path=None*, *storage=None*)[[source]](_modules/sourmash/sbt.html#Node)[¶](#sourmash.sbt.Node)

Internal node of the SBT.
`data`[¶](#sourmash.sbt.Node.data)

*static* `load`(*info*, *storage=None*)[[source]](_modules/sourmash/sbt.html#Node.load)[¶](#sourmash.sbt.Node.load)

`save`(*path*)[[source]](_modules/sourmash/sbt.html#Node.save)[¶](#sourmash.sbt.Node.save)

`update`(*parent*)[[source]](_modules/sourmash/sbt.html#Node.update)[¶](#sourmash.sbt.Node.update)

*class* `sourmash.sbt.``NodePos`(*pos*, *node*)[¶](#sourmash.sbt.NodePos)

`node`[¶](#sourmash.sbt.NodePos.node)

Alias for field number 1

`pos`[¶](#sourmash.sbt.NodePos.pos)

Alias for field number 0

*class* `sourmash.sbt.``SBT`(*factory*, *d=2*, *storage=None*)[[source]](_modules/sourmash/sbt.html#SBT)[¶](#sourmash.sbt.SBT)

A Sequence Bloom Tree implementation allowing generic internal nodes and leaves.

The default node and leaf format is a Bloom Filter (like the original implementation), but we also provide a MinHash leaf class (in the sourmash.sbtmh.SigLeaf class).

| Parameters: | * **factory** (*Factory*) – Callable for generating new datastores for internal nodes. * **d** (*int*) – Number of children for each internal node. Defaults to 2 (a binary tree). * **storage** (*Storage**,* *default: None*) – A Storage is any place where we can save and load data for the nodes. If set to None, will use a FSStorage. |

Notes

We use two dicts to store the tree structure: one for the internal nodes, and another for the leaves (datasets).

`add_node`(*leaf*)[[source]](_modules/sourmash/sbt.html#SBT.add_node)[¶](#sourmash.sbt.SBT.add_node)

`child`(*parent*, *pos*)[[source]](_modules/sourmash/sbt.html#SBT.child)[¶](#sourmash.sbt.SBT.child)

Return a child node at position `pos` under the `parent` node.

| Parameters: | * **parent** (*int*) – Parent node position in the tree. * **pos** (*int*) – Position of the child under the parent. Ranges from [0, arity - 1], where arity is the arity of the SBT (usually 2, a binary tree). |
| Returns: | A NodePos namedtuple with the position and content of the child node. |
| Return type: | [NodePos](index.html#sourmash.sbt.NodePos) |

`children`(*pos*)[[source]](_modules/sourmash/sbt.html#SBT.children)[¶](#sourmash.sbt.SBT.children)

Return all children nodes for node at position `pos`.

| Parameters: | **pos** (*int*) – Position of the node in the tree. |
| Returns: | A list of NodePos namedtuples with the position and content of all children nodes. |
| Return type: | list of NodePos |

`combine`(*other*)[[source]](_modules/sourmash/sbt.html#SBT.combine)[¶](#sourmash.sbt.SBT.combine)

`find`(*search_fn*, **args*, ***kwargs*)[[source]](_modules/sourmash/sbt.html#SBT.find)[¶](#sourmash.sbt.SBT.find)

Search the tree using search_fn.

`leaves`(*with_pos=False*)[[source]](_modules/sourmash/sbt.html#SBT.leaves)[¶](#sourmash.sbt.SBT.leaves)

*classmethod* `load`(*location*, *leaf_loader=None*, *storage=None*, *print_version_warning=True*)[[source]](_modules/sourmash/sbt.html#SBT.load)[¶](#sourmash.sbt.SBT.load)

Load an SBT description from a file.

| Parameters: | * **location** (*str*) – path to the SBT description. * **leaf_loader** (*function**,* *optional*) – function to load leaf nodes. Defaults to `Leaf.load`. * **storage** (*Storage**,* *optional*) – Storage to be used for saving node data. Defaults to FSStorage (a hidden directory at the same level of path). |
| Returns: | the SBT tree built from the description. |
Return type: SBT

`new_node_pos(node)`

`parent(pos)`

Return the parent of the node at position `pos`. If it is the root node (position 0), returns None.

Parameters:

* **pos** (*int*) – position of the node in the tree.

Returns: a NodePos namedtuple with the position and content of the parent node.

Return type: NodePos

`print()`

`print_dot()`

`save(path, storage=None, sparseness=0.0, structure_only=False)`

Saves an SBT description locally, and node data to a storage.

Parameters:

* **path** (*str*) – path to where the SBT description should be saved.
* **storage** (*Storage*, optional) – Storage to be used for saving node data. Defaults to FSStorage (a hidden directory at the same level as the path).
* **sparseness** (*float*) – how much of the internal node data should be saved. Defaults to 0.0 (save all internal node data); can go up to 1.0 (save no internal node data).
* **structure_only** (*boolean*) – write only the index schema and metadata, but not the data. Defaults to False (save the data too).

Returns: full path to the new SBT description.

Return type: str

*class* `sourmash.sbt.Leaf(metadata, data=None, name=None, storage=None, path=None)`

`data`

*classmethod* `load(info, storage=None)`

`save(path)`

`update(parent)`

#### `sourmash.fig`: make plots and figures

Make plots using the distance matrix + labels output by `sourmash compare`.

`sourmash.fig.load_matrix_and_labels(basefile)`

Load the comparison matrix and associated labels. Returns a square numpy matrix and a list of labels.

`sourmash.fig.plot_composite_matrix(D, labeltext, show_labels=True, show_indices=True, vmax=1.0, vmin=0.0, force=False)`

Build a composite plot showing dendrogram + distance matrix/heatmap. Returns a matplotlib figure.
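As a quick illustration of the signature API above, here is a minimal sketch (not taken from the sourmash docs) that loads signatures from a JSON signature file and compares the first two; the file name `ecoli.sig` and `ksize=31` are assumptions made for the example.

```
# Minimal sketch; 'ecoli.sig' is a hypothetical signature file computed at k=31.
from sourmash.signature import load_signatures

with open('ecoli.sig') as fp:
    sigs = list(load_signatures(fp.read(), ksize=31))

if len(sigs) >= 2:
    a, b = sigs[0], sigs[1]
    print(a.name(), b.name())  # name() defaults to the md5 prefix
    print('similarity:', a.similarity(b, ignore_abundance=True))
```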
### Additional information on sourmash

#### Computational requirements

Read more about the [computational requirements here](requirements.html).

#### Prepared search databases

We offer a number of [prepared search databases](databases.html).

#### Other MinHash implementations for DNA

In addition to [mash](https://github.com/marbl/Mash), also see:

* [RKMH](https://github.com/edawson/rkmh): Read Classification by Kmers
* [mashtree](https://github.com/lskatz/mashtree/blob/master/README): for building trees using Mash
* [Finch](https://github.com/onecodex/finch-rs): “Fast sketches, count histograms, better filtering.”
* [BBMap and SendSketch](http://seqanswers.com/forums/showthread.php?t=74019): part of Brian Bushnell’s tool collection.
* [PATRIC](https://patricbrc.org/), which uses MinHash for genome search.

If you are interested in exactly how these MinHash approaches calculate the hashes of DNA sequences, please see the simple Python code in sourmash, [utils/compute-dna-mh-another-way.py](https://github.com/dib-lab/sourmash/blob/master/utils/compute-dna-mh-another-way.py).

#### Papers and references

[On the resemblance and containment of documents](http://ieeexplore.ieee.org/document/666900/?reload=true), Broder, 1997. The original MinHash paper!

[Mash: fast genome and metagenome distance estimation using MinHash](https://genomebiology.biomedcentral.com/articles/10.1186/s13059-016-0997-x), Ondov et al., 2016.

[sourmash: a library for MinHash sketching of DNA](http://joss.theoj.org/papers/3d793c6e7db683bee7c03377a4a7f3c9), Brown and Irber, 2017.

[Improving MinHash via the Containment Index with Applications to Metagenomic Analysis](https://www.biorxiv.org/content/early/2017/09/04/184150), Koslicki and Zabeti, 2017.

[Ultra-fast search of all deposited bacterial and viral genomic data](http://dx.doi.org/10.1038/s41587-018-0010-1), Bradley et al., 2019.

[Streaming histogram sketching for rapid microbiome analytics](https://www.biorxiv.org/content/10.1101/408070v1), Rowe et al., 2019.

[Dashing: Fast and Accurate Genomic Distances with HyperLogLog](https://www.biorxiv.org/content/10.1101/501726v2), Baker and Langmead, 2019.

#### Presentations and posters

[Taxonomic classification of microbial metagenomes using MinHash signatures](https://osf.io/mu4gk/), Brooks et al., 2017. Presented at ASM.

#### Blog posts

We (and others) have a number of blog posts on sourmash and on MinHash more generally:

* [Some background on k-mers, and their use in taxonomy](http://ivory.idyll.org/blog/2017-something-about-kmers.html)
* From the Phillippy lab: [mash screen: what’s in my sequencing run?](https://genomeinformatics.github.io/mash-screen/)
* [Applying MinHash to cluster RNAseq samples](http://ivory.idyll.org/blog/2016-sourmash.html)
* [MinHash signatures as ways to find samples, and collaborators?](http://ivory.idyll.org/blog/2016-sourmash-signatures.html)
* [Efficiently searching MinHash Sketch collections](http://ivory.idyll.org/blog/2016-sourmash-sbt.html): indexing and searching 42,000 bacterial genomes with Sequence Bloom Trees.
* [Quickly searching all the microbial genomes, mark 2 - now with archaea, phage, fungi, and protists!](http://ivory.idyll.org/blog/2016-sourmash-sbt-more.html): indexing and searching 50,000 microbial genomes, round 2.
* [What metadata should we put in MinHash Sketch signatures?](http://ivory.idyll.org/blog/2016-sourmash-signatures-metadata.html): crowdsourcing ideas for what metadata belongs in a signature file.
* [Minhashing all the things (part 1): microbial genomes](http://blog.luizirber.org/2016/12/28/soursigs-arch-1/): on approaches to computing MinHashes for large collections of public data.
* [Comparing genome sets extracted from metagenomes](http://ivory.idyll.org/blog/2017-comparing-genomes-from-metagenomes.html)
* [Taxonomic examinations of genome bins from Tara Oceans](http://ivory.idyll.org/blog/2017-taxonomy-of-tara-ocean-genomes.html)
* [Classifying genome bins using a custom reference database, part I](http://ivory.idyll.org/blog/2017-classify-genome-bins-with-custom-db-part-1.html)
* [Classifying genome bins using a custom reference database, part II](http://ivory.idyll.org/blog/2017-classify-genome-bins-with-custom-db-part-2.html)

#### JSON format for the signature

The JSON format is not necessarily final; this is a TODO item for future releases. In particular, we’d like to update it to store more metadata for samples.

#### Interoperability with mash

The default sketches computed by sourmash and mash are comparable, but we are still [working on ways to convert between the file formats](https://github.com/marbl/Mash/issues/27).

#### Developing sourmash

Please see:

* [Developer information](developer.html)
* [Release workflow](release.html)

#### Known issues

With at least some versions of matplotlib, users may encounter the error “Failed to connect to server socket:” or “RuntimeError: Invalid DISPLAY variable”. This happens because, by default, matplotlib uses the Tkinter-based backend and tries to connect to X11. The solution is to force the use of the ‘Agg’ backend in matplotlib, as in the snippet below; see also [this stackoverflow answer](https://stackoverflow.com/a/34294056) or [this sourmash issue comment](https://github.com/dib-lab/sourmash/issues/254#issuecomment-304274590). Newer versions of matplotlib do not seem to have this problem.
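A minimal way to apply that fix is to select the backend before pyplot is ever imported (an illustrative snippet, not from the sourmash docs):

```
# Force the non-interactive 'Agg' backend before importing pyplot,
# so matplotlib never tries to connect to an X11/Tk display.
import matplotlib
matplotlib.use('Agg')

import matplotlib.pyplot as plt  # safe to import now
```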
### Support

Please ask questions and file bug reports [on the GitHub issue tracker for sourmash, dib-lab/sourmash/issues](https://github.com/dib-lab/sourmash/issues).

You can also ask questions of Titus on Twitter at [@ctitusbrown](https://twitter.com/ctitusbrown/).

Working with matrix output by sourmash compare
===

### Load a comparison matrix into R

```
sourmash_comp_matrix <- read.csv("ecoli.cmp.csv")

# Label the rows
rownames(sourmash_comp_matrix) <- colnames(sourmash_comp_matrix)

# Transform for plotting
sourmash_comp_matrix <- as.matrix(sourmash_comp_matrix)
```

### Make an MDS plot

```
# Classical multidimensional scaling of the distances between rows
fit <- dist(sourmash_comp_matrix)
fit <- cmdscale(fit)
x <- fit[, 1]
y <- fit[, 2]
plot(x, y, xlab = "Dimension 1", ylab = "Dimension 2")
```

### Make a tSNE plot

```
library(Rtsne)

tsne_model <- Rtsne(sourmash_comp_matrix, check_duplicates=FALSE, pca=TRUE,
                    perplexity=5, theta=0.5, dims=2)
d_tsne <- as.data.frame(tsne_model$Y)
plot(d_tsne$V1, d_tsne$V2)
```

### Make an unclustered heatmap

```
heatmap(sourmash_comp_matrix, Colv=F, scale='none')
```

### Make a clustered heatmap

```
# Cluster rows and columns; plot the first of two row clusters,
# with columns ordered by their dendrogram
hc.rows <- hclust(dist(sourmash_comp_matrix))
hc.cols <- hclust(dist(t(sourmash_comp_matrix)))
heatmap(sourmash_comp_matrix[cutree(hc.rows, k=2) == 1, ],
        Colv=as.dendrogram(hc.cols), scale='none')
```
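For completeness: the same comparison can also be loaded and drawn from Python with the `sourmash.fig` helpers documented earlier. A minimal sketch, assuming the binary matrix output of `sourmash compare ... -o ecoli.cmp` (with its companion `ecoli.cmp.labels.txt`) rather than the CSV used above:

```
# Minimal sketch using sourmash.fig; the file names are assumptions.
from sourmash.fig import load_matrix_and_labels, plot_composite_matrix

D, labels = load_matrix_and_labels('ecoli.cmp')  # square numpy matrix + labels
fig = plot_composite_matrix(D, labels)           # dendrogram + heatmap figure
fig.savefig('ecoli.cmp.matrix.png')
```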
Crate efm32jg1b100_pac
===

Peripheral access API for EFM32JG1B100F128GM32 microcontrollers (generated using svd2rust v0.24.0).

You can find an overview of the generated API here. API features to be included in the next svd2rust release can be generated by cloning the svd2rust repository, checking out the above commit, and running `cargo doc --open`.

Modules
---

* `acmp0`: ACMP0
* `acmp1`: ACMP1
* `adc0`: ADC0
* `cmu`: CMU
* `cryotimer`: CRYOTIMER
* `crypto`: CRYPTO
* `emu`: EMU
* `generic`: Common register and bit access and modify traits
* `gpcrc`: GPCRC
* `gpio`: GPIO
* `i2c0`: I2C0
* `idac0`: IDAC0
* `ldma`: LDMA
* `letimer0`: LETIMER0
* `leuart0`: LEUART0
* `msc`: MSC
* `pcnt0`: PCNT0
* `prs`: PRS
* `rmu`: RMU
* `rtcc`: RTCC
* `timer0`: TIMER0
* `timer1`: TIMER1
* `usart0`: USART0
* `usart1`: USART1
* `wdog0`: WDOG0

Structs
---

* `ACMP0`, `ACMP1`, `ADC0`, `CMU`, `CRYOTIMER`, `CRYPTO`, `EMU`, `GPCRC`, `GPIO`, `I2C0`, `IDAC0`, `LDMA`, `LETIMER0`, `LEUART0`, `MSC`, `PCNT0`, `PRS`, `RMU`, `RTCC`, `TIMER0`, `TIMER1`, `USART0`, `USART1`, `WDOG0`: the device peripherals listed under Modules above
* `CBP`: Cache and branch predictor maintenance operations
* `CPUID`: CPUID
* `CorePeripherals`: Core peripherals
* `DCB`: Debug Control Block
* `DWT`: Data Watchpoint and Trace unit
* `FPB`: Flash Patch and Breakpoint unit
* `ITM`: Instrumentation Trace Macrocell
* `MPU`: Memory Protection Unit
* `NVIC`: Nested Vector Interrupt Controller
* `Peripherals`: All the peripherals
* `SCB`: System Control Block
* `SYST`: SysTick: System Timer
* `TPIU`: Trace Port Interface Unit

Enums
---

* `Interrupt`: Enumeration of all the interrupts.
Constants --- NVIC_PRIO_BITSNumber available in the NVIC for configuring priority Module efm32jg1b100_pac::acmp0 === ACMP0 Modules --- aportconflictAPORT Conflict Status Register aportreqAPORT Request Status Register ctrlControl Register hysteresis0Hysteresis 0 Register hysteresis1Hysteresis 1 Register ienInterrupt Enable Register if_Interrupt Flag Register ifcInterrupt Flag Clear Register ifsInterrupt Flag Set Register inputselInput Selection Register routeloc0I/O Routing Location Register routepenI/O Routing Pine Enable Register statusStatus Register Structs --- RegisterBlockRegister block Type Definitions --- APORTCONFLICTAPORTCONFLICT register accessor: an alias for `Reg<APORTCONFLICT_SPEC>` APORTREQAPORTREQ register accessor: an alias for `Reg<APORTREQ_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` HYSTERESIS0HYSTERESIS0 register accessor: an alias for `Reg<HYSTERESIS0_SPEC>` HYSTERESIS1HYSTERESIS1 register accessor: an alias for `Reg<HYSTERESIS1_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for `Reg<IFC_SPEC>` IFSIFS register accessor: an alias for `Reg<IFS_SPEC>` INPUTSELINPUTSEL register accessor: an alias for `Reg<INPUTSEL_SPEC>` ROUTELOC0ROUTELOC0 register accessor: an alias for `Reg<ROUTELOC0_SPEC>` ROUTEPENROUTEPEN register accessor: an alias for `Reg<ROUTEPEN_SPEC>` STATUSSTATUS register accessor: an alias for `Reg<STATUS_SPEC>` Module efm32jg1b100_pac::acmp1 === ACMP1 Modules --- aportconflictAPORT Conflict Status Register aportreqAPORT Request Status Register ctrlControl Register hysteresis0Hysteresis 0 Register hysteresis1Hysteresis 1 Register ienInterrupt Enable Register if_Interrupt Flag Register ifcInterrupt Flag Clear Register ifsInterrupt Flag Set Register inputselInput Selection Register routeloc0I/O Routing Location Register routepenI/O Routing Pine Enable Register statusStatus Register Structs --- RegisterBlockRegister block Type Definitions --- APORTCONFLICTAPORTCONFLICT register accessor: an alias for `Reg<APORTCONFLICT_SPEC>` APORTREQAPORTREQ register accessor: an alias for `Reg<APORTREQ_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` HYSTERESIS0HYSTERESIS0 register accessor: an alias for `Reg<HYSTERESIS0_SPEC>` HYSTERESIS1HYSTERESIS1 register accessor: an alias for `Reg<HYSTERESIS1_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for `Reg<IFC_SPEC>` IFSIFS register accessor: an alias for `Reg<IFS_SPEC>` INPUTSELINPUTSEL register accessor: an alias for `Reg<INPUTSEL_SPEC>` ROUTELOC0ROUTELOC0 register accessor: an alias for `Reg<ROUTELOC0_SPEC>` ROUTEPENROUTEPEN register accessor: an alias for `Reg<ROUTEPEN_SPEC>` STATUSSTATUS register accessor: an alias for `Reg<STATUS_SPEC>` Module efm32jg1b100_pac::adc0 === ADC0 Modules --- aportconflictAPORT Conflict Status Register aportmasterdisAPORT Bus Master Disable Register aportreqAPORT Request Status Register biasprogBias Programming Register for Various Analog Blocks Used in ADC Operation calCalibration Register cmdCommand Register cmpthrCompare Threshold Register ctrlControl Register ienInterrupt Enable Register if_Interrupt Flag Register ifcInterrupt Flag Clear Register ifsInterrupt Flag Set Register scanctrlScan Control Register scanctrlxScan Control Register Continued scandataScan Conversion Result Data scandatapScan Sequence Result Data Peek Register scandataxScan Sequence Result 
Data + Data Source Register scandataxpScan Sequence Result Data + Data Source Peek Register scanfifoclearScan FIFO Clear Register scanfifocountScan FIFO Count Register scaninputselInput Selection Register for Scan Mode scanmaskScan Sequence Input Mask Register scannegselNegative Input Select Register for Scan singlectrlSingle Channel Control Register singlectrlxSingle Channel Control Register Continued singledataSingle Conversion Result Data singledatapSingle Conversion Result Data Peek Register singlefifoclearSingle FIFO Clear Register singlefifocountSingle FIFO Count Register statusStatus Register Structs --- RegisterBlockRegister block Type Definitions --- APORTCONFLICTAPORTCONFLICT register accessor: an alias for `Reg<APORTCONFLICT_SPEC>` APORTMASTERDISAPORTMASTERDIS register accessor: an alias for `Reg<APORTMASTERDIS_SPEC>` APORTREQAPORTREQ register accessor: an alias for `Reg<APORTREQ_SPEC>` BIASPROGBIASPROG register accessor: an alias for `Reg<BIASPROG_SPEC>` CALCAL register accessor: an alias for `Reg<CAL_SPEC>` CMDCMD register accessor: an alias for `Reg<CMD_SPEC>` CMPTHRCMPTHR register accessor: an alias for `Reg<CMPTHR_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for `Reg<IFC_SPEC>` IFSIFS register accessor: an alias for `Reg<IFS_SPEC>` SCANCTRLSCANCTRL register accessor: an alias for `Reg<SCANCTRL_SPEC>` SCANCTRLXSCANCTRLX register accessor: an alias for `Reg<SCANCTRLX_SPEC>` SCANDATASCANDATA register accessor: an alias for `Reg<SCANDATA_SPEC>` SCANDATAPSCANDATAP register accessor: an alias for `Reg<SCANDATAP_SPEC>` SCANDATAXSCANDATAX register accessor: an alias for `Reg<SCANDATAX_SPEC>` SCANDATAXPSCANDATAXP register accessor: an alias for `Reg<SCANDATAXP_SPEC>` SCANFIFOCLEARSCANFIFOCLEAR register accessor: an alias for `Reg<SCANFIFOCLEAR_SPEC>` SCANFIFOCOUNTSCANFIFOCOUNT register accessor: an alias for `Reg<SCANFIFOCOUNT_SPEC>` SCANINPUTSELSCANINPUTSEL register accessor: an alias for `Reg<SCANINPUTSEL_SPEC>` SCANMASKSCANMASK register accessor: an alias for `Reg<SCANMASK_SPEC>` SCANNEGSELSCANNEGSEL register accessor: an alias for `Reg<SCANNEGSEL_SPEC>` SINGLECTRLSINGLECTRL register accessor: an alias for `Reg<SINGLECTRL_SPEC>` SINGLECTRLXSINGLECTRLX register accessor: an alias for `Reg<SINGLECTRLX_SPEC>` SINGLEDATASINGLEDATA register accessor: an alias for `Reg<SINGLEDATA_SPEC>` SINGLEDATAPSINGLEDATAP register accessor: an alias for `Reg<SINGLEDATAP_SPEC>` SINGLEFIFOCLEARSINGLEFIFOCLEAR register accessor: an alias for `Reg<SINGLEFIFOCLEAR_SPEC>` SINGLEFIFOCOUNTSINGLEFIFOCOUNT register accessor: an alias for `Reg<SINGLEFIFOCOUNT_SPEC>` STATUSSTATUS register accessor: an alias for `Reg<STATUS_SPEC>` Module efm32jg1b100_pac::cmu === CMU Modules --- adcctrlADC Control Register auxhfrcoctrlAUXHFRCO Control Register calcntCalibration Counter Register calctrlCalibration Control Register cmdCommand Register ctrlCMU Control Register dbgclkselDebug Trace Clock Select freezeFreeze Register hfbusclken0High Frequency Bus Clock Enable Register 0 hfclkselHigh Frequency Clock Select Command Register hfclkstatusHFCLK Status Register hfcoreprescHigh Frequency Core Clock Prescaler Register hfexpprescHigh Frequency Export Clock Prescaler Register hfperclken0High Frequency Peripheral Clock Enable Register 0 hfperprescHigh Frequency Peripheral Clock Prescaler Register hfprescHigh Frequency Clock Prescaler Register hfrcoctrlHFRCO Control 
Register hfxoctrlHFXO Control Register hfxoctrl1HFXO Control 1 hfxostartupctrlHFXO Startup Control hfxosteadystatectrlHFXO Steady State Control hfxotimeoutctrlHFXO Timeout Control hfxotrimstatusHFXO Trim Status ienInterrupt Enable Register if_Interrupt Flag Register ifcInterrupt Flag Clear Register ifsInterrupt Flag Set Register lfaclken0Low Frequency a Clock Enable Register 0 (Async Reg) lfaclkselLow Frequency A Clock Select Register lfapresc0Low Frequency a Prescaler Register 0 (Async Reg) lfbclken0Low Frequency B Clock Enable Register 0 (Async Reg) lfbclkselLow Frequency B Clock Select Register lfbpresc0Low Frequency B Prescaler Register 0 (Async Reg) lfeclken0Low Frequency E Clock Enable Register 0 (Async Reg) lfeclkselLow Frequency E Clock Select Register lfepresc0Low Frequency E Prescaler Register 0 (Async Reg) lfrcoctrlLFRCO Control Register lfxoctrlLFXO Control Register lockConfiguration Lock Register oscencmdOscillator Enable/Disable Command Register pcntctrlPCNT Control Register routeloc0I/O Routing Location Register routepenI/O Routing Pin Enable Register statusStatus Register syncbusySynchronization Busy Register ulfrcoctrlULFRCO Control Register Structs --- RegisterBlockRegister block Type Definitions --- ADCCTRLADCCTRL register accessor: an alias for `Reg<ADCCTRL_SPEC>` AUXHFRCOCTRLAUXHFRCOCTRL register accessor: an alias for `Reg<AUXHFRCOCTRL_SPEC>` CALCNTCALCNT register accessor: an alias for `Reg<CALCNT_SPEC>` CALCTRLCALCTRL register accessor: an alias for `Reg<CALCTRL_SPEC>` CMDCMD register accessor: an alias for `Reg<CMD_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` DBGCLKSELDBGCLKSEL register accessor: an alias for `Reg<DBGCLKSEL_SPEC>` FREEZEFREEZE register accessor: an alias for `Reg<FREEZE_SPEC>` HFBUSCLKEN0HFBUSCLKEN0 register accessor: an alias for `Reg<HFBUSCLKEN0_SPEC>` HFCLKSELHFCLKSEL register accessor: an alias for `Reg<HFCLKSEL_SPEC>` HFCLKSTATUSHFCLKSTATUS register accessor: an alias for `Reg<HFCLKSTATUS_SPEC>` HFCOREPRESCHFCOREPRESC register accessor: an alias for `Reg<HFCOREPRESC_SPEC>` HFEXPPRESCHFEXPPRESC register accessor: an alias for `Reg<HFEXPPRESC_SPEC>` HFPERCLKEN0HFPERCLKEN0 register accessor: an alias for `Reg<HFPERCLKEN0_SPEC>` HFPERPRESCHFPERPRESC register accessor: an alias for `Reg<HFPERPRESC_SPEC>` HFPRESCHFPRESC register accessor: an alias for `Reg<HFPRESC_SPEC>` HFRCOCTRLHFRCOCTRL register accessor: an alias for `Reg<HFRCOCTRL_SPEC>` HFXOCTRLHFXOCTRL register accessor: an alias for `Reg<HFXOCTRL_SPEC>` HFXOCTRL1HFXOCTRL1 register accessor: an alias for `Reg<HFXOCTRL1_SPEC>` HFXOSTARTUPCTRLHFXOSTARTUPCTRL register accessor: an alias for `Reg<HFXOSTARTUPCTRL_SPEC>` HFXOSTEADYSTATECTRLHFXOSTEADYSTATECTRL register accessor: an alias for `Reg<HFXOSTEADYSTATECTRL_SPEC>` HFXOTIMEOUTCTRLHFXOTIMEOUTCTRL register accessor: an alias for `Reg<HFXOTIMEOUTCTRL_SPEC>` HFXOTRIMSTATUSHFXOTRIMSTATUS register accessor: an alias for `Reg<HFXOTRIMSTATUS_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for `Reg<IFC_SPEC>` IFSIFS register accessor: an alias for `Reg<IFS_SPEC>` LFACLKEN0LFACLKEN0 register accessor: an alias for `Reg<LFACLKEN0_SPEC>` LFACLKSELLFACLKSEL register accessor: an alias for `Reg<LFACLKSEL_SPEC>` LFAPRESC0LFAPRESC0 register accessor: an alias for `Reg<LFAPRESC0_SPEC>` LFBCLKEN0LFBCLKEN0 register accessor: an alias for `Reg<LFBCLKEN0_SPEC>` LFBCLKSELLFBCLKSEL register accessor: an alias for `Reg<LFBCLKSEL_SPEC>` 
LFBPRESC0LFBPRESC0 register accessor: an alias for `Reg<LFBPRESC0_SPEC>` LFECLKEN0LFECLKEN0 register accessor: an alias for `Reg<LFECLKEN0_SPEC>` LFECLKSELLFECLKSEL register accessor: an alias for `Reg<LFECLKSEL_SPEC>` LFEPRESC0LFEPRESC0 register accessor: an alias for `Reg<LFEPRESC0_SPEC>` LFRCOCTRLLFRCOCTRL register accessor: an alias for `Reg<LFRCOCTRL_SPEC>` LFXOCTRLLFXOCTRL register accessor: an alias for `Reg<LFXOCTRL_SPEC>` LOCKLOCK register accessor: an alias for `Reg<LOCK_SPEC>` OSCENCMDOSCENCMD register accessor: an alias for `Reg<OSCENCMD_SPEC>` PCNTCTRLPCNTCTRL register accessor: an alias for `Reg<PCNTCTRL_SPEC>` ROUTELOC0ROUTELOC0 register accessor: an alias for `Reg<ROUTELOC0_SPEC>` ROUTEPENROUTEPEN register accessor: an alias for `Reg<ROUTEPEN_SPEC>` STATUSSTATUS register accessor: an alias for `Reg<STATUS_SPEC>` SYNCBUSYSYNCBUSY register accessor: an alias for `Reg<SYNCBUSY_SPEC>` ULFRCOCTRLULFRCOCTRL register accessor: an alias for `Reg<ULFRCOCTRL_SPEC>` Module efm32jg1b100_pac::cryotimer === CRYOTIMER Modules --- cntCounter Value ctrlControl Register em4wuenWake Up Enable ienInterrupt Enable Register if_Interrupt Flag Register ifcInterrupt Flag Clear Register ifsInterrupt Flag Set Register periodselInterrupt Duration Structs --- RegisterBlockRegister block Type Definitions --- CNTCNT register accessor: an alias for `Reg<CNT_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` EM4WUENEM4WUEN register accessor: an alias for `Reg<EM4WUEN_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for `Reg<IFC_SPEC>` IFSIFS register accessor: an alias for `Reg<IFS_SPEC>` PERIODSELPERIODSEL register accessor: an alias for `Reg<PERIODSEL_SPEC>` Module efm32jg1b100_pac::crypto === CRYPTO Modules --- cmdCommand Register cstatusControl Status Register ctrlControl Register data0DATA0 Register Access data0byteDATA0 Register Byte Access data0byte12DATA0 Register Byte 12 Access data0byte13DATA0 Register Byte 13 Access data0byte14DATA0 Register Byte 14 Access data0byte15DATA0 Register Byte 15 Access data0xorDATA0XOR Register Access data0xorbyteDATA0 Register Byte XOR Access data1DATA1 Register Access data1byteDATA1 Register Byte Access data2DATA2 Register Access data3DATA3 Register Access ddata0DDATA0 Register Access ddata0bigDDATA0 Register Big Endian Access ddata0byteDDATA0 Register Byte Access ddata0byte32DDATA0 Register Byte 32 Access ddata1DDATA1 Register Access ddata1byteDDATA1 Register Byte Access ddata2DDATA2 Register Access ddata3DDATA3 Register Access ddata4DDATA4 Register Access dstatusData Status Register ienInterrupt Enable Register if_AES Interrupt Flags ifcInterrupt Flag Clear Register ifsInterrupt Flag Set Register keyKEY Register Access keybufKEY Buffer Register Access qdata0QDATA0 Register Access qdata0byteQDATA0 Register Byte Access qdata1QDATA1 Register Access qdata1bigQDATA1 Register Big Endian Access qdata1byteQDATA1 Register Byte Access seq0Sequence Register 0 seq1Sequence Register 1 seq2Sequence Register 2 seq3Sequence Register 3 seq4Sequence Register 4 seqctrlSequence Control seqctrlbSequence Control B statusStatus Register wacWide Arithmetic Configuration Structs --- RegisterBlockRegister block Type Definitions --- CMDCMD register accessor: an alias for `Reg<CMD_SPEC>` CSTATUSCSTATUS register accessor: an alias for `Reg<CSTATUS_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` DATA0DATA0 register accessor: an alias for `Reg<DATA0_SPEC>` 
DATA0BYTEDATA0BYTE register accessor: an alias for `Reg<DATA0BYTE_SPEC>` DATA0BYTE12DATA0BYTE12 register accessor: an alias for `Reg<DATA0BYTE12_SPEC>` DATA0BYTE13DATA0BYTE13 register accessor: an alias for `Reg<DATA0BYTE13_SPEC>` DATA0BYTE14DATA0BYTE14 register accessor: an alias for `Reg<DATA0BYTE14_SPEC>` DATA0BYTE15DATA0BYTE15 register accessor: an alias for `Reg<DATA0BYTE15_SPEC>` DATA0XORDATA0XOR register accessor: an alias for `Reg<DATA0XOR_SPEC>` DATA0XORBYTEDATA0XORBYTE register accessor: an alias for `Reg<DATA0XORBYTE_SPEC>` DATA1DATA1 register accessor: an alias for `Reg<DATA1_SPEC>` DATA1BYTEDATA1BYTE register accessor: an alias for `Reg<DATA1BYTE_SPEC>` DATA2DATA2 register accessor: an alias for `Reg<DATA2_SPEC>` DATA3DATA3 register accessor: an alias for `Reg<DATA3_SPEC>` DDATA0DDATA0 register accessor: an alias for `Reg<DDATA0_SPEC>` DDATA0BIGDDATA0BIG register accessor: an alias for `Reg<DDATA0BIG_SPEC>` DDATA0BYTEDDATA0BYTE register accessor: an alias for `Reg<DDATA0BYTE_SPEC>` DDATA0BYTE32DDATA0BYTE32 register accessor: an alias for `Reg<DDATA0BYTE32_SPEC>` DDATA1DDATA1 register accessor: an alias for `Reg<DDATA1_SPEC>` DDATA1BYTEDDATA1BYTE register accessor: an alias for `Reg<DDATA1BYTE_SPEC>` DDATA2DDATA2 register accessor: an alias for `Reg<DDATA2_SPEC>` DDATA3DDATA3 register accessor: an alias for `Reg<DDATA3_SPEC>` DDATA4DDATA4 register accessor: an alias for `Reg<DDATA4_SPEC>` DSTATUSDSTATUS register accessor: an alias for `Reg<DSTATUS_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for `Reg<IFC_SPEC>` IFSIFS register accessor: an alias for `Reg<IFS_SPEC>` KEYKEY register accessor: an alias for `Reg<KEY_SPEC>` KEYBUFKEYBUF register accessor: an alias for `Reg<KEYBUF_SPEC>` QDATA0QDATA0 register accessor: an alias for `Reg<QDATA0_SPEC>` QDATA0BYTEQDATA0BYTE register accessor: an alias for `Reg<QDATA0BYTE_SPEC>` QDATA1QDATA1 register accessor: an alias for `Reg<QDATA1_SPEC>` QDATA1BIGQDATA1BIG register accessor: an alias for `Reg<QDATA1BIG_SPEC>` QDATA1BYTEQDATA1BYTE register accessor: an alias for `Reg<QDATA1BYTE_SPEC>` SEQ0SEQ0 register accessor: an alias for `Reg<SEQ0_SPEC>` SEQ1SEQ1 register accessor: an alias for `Reg<SEQ1_SPEC>` SEQ2SEQ2 register accessor: an alias for `Reg<SEQ2_SPEC>` SEQ3SEQ3 register accessor: an alias for `Reg<SEQ3_SPEC>` SEQ4SEQ4 register accessor: an alias for `Reg<SEQ4_SPEC>` SEQCTRLSEQCTRL register accessor: an alias for `Reg<SEQCTRL_SPEC>` SEQCTRLBSEQCTRLB register accessor: an alias for `Reg<SEQCTRLB_SPEC>` STATUSSTATUS register accessor: an alias for `Reg<STATUS_SPEC>` WACWAC register accessor: an alias for `Reg<WAC_SPEC>` Module efm32jg1b100_pac::emu === EMU Modules --- biasconfConfigurations Related to the Bias biastestctrlTest Control Register for Regulator and BIAS cmdCommand Register ctrlControl Register dcdcclimctrlDCDC Power Train PFET Current Limiter Control Register dcdcctrlDCDC Control dcdclncompctrlDCDC Low Noise Compensator Control Register dcdclnfreqctrlDCDC Low Noise Controller Frequency Control dcdclnvctrlDCDC Low Noise Voltage Register dcdclpctrlDCDC Low Power Control Register dcdclpvctrlDCDC Low Power Voltage Register dcdcmiscctrlDCDC Miscellaneous Control Register dcdcsyncDCDC Read Status Register dcdctimingDCDC Controller Timing Value Register dcdczdetctrlDCDC Power Train NFET Zero Current Detector Control Register em4ctrlEM4 Control Register ienInterrupt Enable Register if_Interrupt Flag Register ifcInterrupt Flag 
Clear Register ifsInterrupt Flag Set Register lockConfiguration Lock Register pwrcfgPower Configuration Register pwrctrlPower Control Register pwrlockRegulator and Supply Lock Register ram0ctrlMemory Control Register statusStatus Register tempValue of Last Temperature Measurement templimitsTemperature Limits for Interrupt Generation testlockTest Lock Register vmonaltavddctrlAlternate VMON AVDD Channel Control vmonavddctrlVMON AVDD Channel Control vmondvddctrlVMON DVDD Channel Control vmonio0ctrlVMON IOVDD0 Channel Control Structs --- RegisterBlockRegister block Type Definitions --- BIASCONFBIASCONF register accessor: an alias for `Reg<BIASCONF_SPEC>` BIASTESTCTRLBIASTESTCTRL register accessor: an alias for `Reg<BIASTESTCTRL_SPEC>` CMDCMD register accessor: an alias for `Reg<CMD_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` DCDCCLIMCTRLDCDCCLIMCTRL register accessor: an alias for `Reg<DCDCCLIMCTRL_SPEC>` DCDCCTRLDCDCCTRL register accessor: an alias for `Reg<DCDCCTRL_SPEC>` DCDCLNCOMPCTRLDCDCLNCOMPCTRL register accessor: an alias for `Reg<DCDCLNCOMPCTRL_SPEC>` DCDCLNFREQCTRLDCDCLNFREQCTRL register accessor: an alias for `Reg<DCDCLNFREQCTRL_SPEC>` DCDCLNVCTRLDCDCLNVCTRL register accessor: an alias for `Reg<DCDCLNVCTRL_SPEC>` DCDCLPCTRLDCDCLPCTRL register accessor: an alias for `Reg<DCDCLPCTRL_SPEC>` DCDCLPVCTRLDCDCLPVCTRL register accessor: an alias for `Reg<DCDCLPVCTRL_SPEC>` DCDCMISCCTRLDCDCMISCCTRL register accessor: an alias for `Reg<DCDCMISCCTRL_SPEC>` DCDCSYNCDCDCSYNC register accessor: an alias for `Reg<DCDCSYNC_SPEC>` DCDCTIMINGDCDCTIMING register accessor: an alias for `Reg<DCDCTIMING_SPEC>` DCDCZDETCTRLDCDCZDETCTRL register accessor: an alias for `Reg<DCDCZDETCTRL_SPEC>` EM4CTRLEM4CTRL register accessor: an alias for `Reg<EM4CTRL_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for `Reg<IFC_SPEC>` IFSIFS register accessor: an alias for `Reg<IFS_SPEC>` LOCKLOCK register accessor: an alias for `Reg<LOCK_SPEC>` PWRCFGPWRCFG register accessor: an alias for `Reg<PWRCFG_SPEC>` PWRCTRLPWRCTRL register accessor: an alias for `Reg<PWRCTRL_SPEC>` PWRLOCKPWRLOCK register accessor: an alias for `Reg<PWRLOCK_SPEC>` RAM0CTRLRAM0CTRL register accessor: an alias for `Reg<RAM0CTRL_SPEC>` STATUSSTATUS register accessor: an alias for `Reg<STATUS_SPEC>` TEMPTEMP register accessor: an alias for `Reg<TEMP_SPEC>` TEMPLIMITSTEMPLIMITS register accessor: an alias for `Reg<TEMPLIMITS_SPEC>` TESTLOCKTESTLOCK register accessor: an alias for `Reg<TESTLOCK_SPEC>` VMONALTAVDDCTRLVMONALTAVDDCTRL register accessor: an alias for `Reg<VMONALTAVDDCTRL_SPEC>` VMONAVDDCTRLVMONAVDDCTRL register accessor: an alias for `Reg<VMONAVDDCTRL_SPEC>` VMONDVDDCTRLVMONDVDDCTRL register accessor: an alias for `Reg<VMONDVDDCTRL_SPEC>` VMONIO0CTRLVMONIO0CTRL register accessor: an alias for `Reg<VMONIO0CTRL_SPEC>` Module efm32jg1b100_pac::generic === Common register and bit access and modify traits Structs --- RRegister reader. RegThis structure provides volatile access to registers. WRegister writer. Traits --- ReadableTrait implemented by readable registers to enable the `read` method. RegisterSpecRaw register type ResettableReset value of the register. WritableTrait implemented by writeable registers. 
Type Definitions --- BitReaderBit-wise field reader BitWriterBit-wise write field proxy BitWriter0CBit-wise write field proxy BitWriter0SBit-wise write field proxy BitWriter0TBit-wise write field proxy BitWriter1CBit-wise write field proxy BitWriter1SBit-wise write field proxy BitWriter1TBit-wise write field proxy FieldReaderField reader. FieldWriterWrite field Proxy with unsafe `bits` FieldWriterSafeWrite field Proxy with safe `bits` Module efm32jg1b100_pac::gpcrc === GPCRC Modules --- cmdCommand Register ctrlControl Register dataCRC Data Register databyterevCRC Data Byte Reverse Register datarevCRC Data Reverse Register initCRC Init Value inputdataInput 32-bit Data Register inputdatabyteInput 8-bit Data Register inputdatahwordInput 16-bit Data Register polyCRC Polynomial Value Structs --- RegisterBlockRegister block Type Definitions --- CMDCMD register accessor: an alias for `Reg<CMD_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` DATADATA register accessor: an alias for `Reg<DATA_SPEC>` DATABYTEREVDATABYTEREV register accessor: an alias for `Reg<DATABYTEREV_SPEC>` DATAREVDATAREV register accessor: an alias for `Reg<DATAREV_SPEC>` INITINIT register accessor: an alias for `Reg<INIT_SPEC>` INPUTDATAINPUTDATA register accessor: an alias for `Reg<INPUTDATA_SPEC>` INPUTDATABYTEINPUTDATABYTE register accessor: an alias for `Reg<INPUTDATABYTE_SPEC>` INPUTDATAHWORDINPUTDATAHWORD register accessor: an alias for `Reg<INPUTDATAHWORD_SPEC>` POLYPOLY register accessor: an alias for `Reg<POLY_SPEC>` Module efm32jg1b100_pac::gpio === GPIO Modules --- em4wuenEM4 Wake Up Enable Register extifallExternal Interrupt Falling Edge Trigger Register extilevelExternal Interrupt Level Register extipinselhExternal Interrupt Pin Select High Register extipinsellExternal Interrupt Pin Select Low Register extipselhExternal Interrupt Port Select High Register extipsellExternal Interrupt Port Select Low Register extiriseExternal Interrupt Rising Edge Trigger Register ienInterrupt Enable Register if_Interrupt Flag Register ifcInterrupt Flag Clear Register ifsInterrupt Flag Set Register insenseInput Sense Register lockConfiguration Lock Register pa_ctrlPort Control Register pa_dinPort Data in Register pa_doutPort Data Out Register pa_douttglPort Data Out Toggle Register pa_modehPort Pin Mode High Register pa_modelPort Pin Mode Low Register pa_ovtdisOver Voltage Disable for All Modes pa_pinlocknPort Unlocked Pins Register pb_ctrlPort Control Register pb_dinPort Data in Register pb_doutPort Data Out Register pb_douttglPort Data Out Toggle Register pb_modehPort Pin Mode High Register pb_modelPort Pin Mode Low Register pb_ovtdisOver Voltage Disable for All Modes pb_pinlocknPort Unlocked Pins Register pc_ctrlPort Control Register pc_dinPort Data in Register pc_doutPort Data Out Register pc_douttglPort Data Out Toggle Register pc_modehPort Pin Mode High Register pc_modelPort Pin Mode Low Register pc_ovtdisOver Voltage Disable for All Modes pc_pinlocknPort Unlocked Pins Register pd_ctrlPort Control Register pd_dinPort Data in Register pd_doutPort Data Out Register pd_douttglPort Data Out Toggle Register pd_modehPort Pin Mode High Register pd_modelPort Pin Mode Low Register pd_ovtdisOver Voltage Disable for All Modes pd_pinlocknPort Unlocked Pins Register pe_ctrlPort Control Register pe_dinPort Data in Register pe_doutPort Data Out Register pe_douttglPort Data Out Toggle Register pe_modehPort Pin Mode High Register pe_modelPort Pin Mode Low Register pe_ovtdisOver Voltage Disable for All Modes pe_pinlocknPort 
Unlocked Pins Register pf_ctrlPort Control Register pf_dinPort Data in Register pf_doutPort Data Out Register pf_douttglPort Data Out Toggle Register pf_modehPort Pin Mode High Register pf_modelPort Pin Mode Low Register pf_ovtdisOver Voltage Disable for All Modes pf_pinlocknPort Unlocked Pins Register routeloc0I/O Routing Location Register routepenI/O Routing Pin Enable Register Structs --- RegisterBlockRegister block Type Definitions --- EM4WUENEM4WUEN register accessor: an alias for `Reg<EM4WUEN_SPEC>` EXTIFALLEXTIFALL register accessor: an alias for `Reg<EXTIFALL_SPEC>` EXTILEVELEXTILEVEL register accessor: an alias for `Reg<EXTILEVEL_SPEC>` EXTIPINSELHEXTIPINSELH register accessor: an alias for `Reg<EXTIPINSELH_SPEC>` EXTIPINSELLEXTIPINSELL register accessor: an alias for `Reg<EXTIPINSELL_SPEC>` EXTIPSELHEXTIPSELH register accessor: an alias for `Reg<EXTIPSELH_SPEC>` EXTIPSELLEXTIPSELL register accessor: an alias for `Reg<EXTIPSELL_SPEC>` EXTIRISEEXTIRISE register accessor: an alias for `Reg<EXTIRISE_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for `Reg<IFC_SPEC>` IFSIFS register accessor: an alias for `Reg<IFS_SPEC>` INSENSEINSENSE register accessor: an alias for `Reg<INSENSE_SPEC>` LOCKLOCK register accessor: an alias for `Reg<LOCK_SPEC>` PA_CTRLPA_CTRL register accessor: an alias for `Reg<PA_CTRL_SPEC>` PA_DINPA_DIN register accessor: an alias for `Reg<PA_DIN_SPEC>` PA_DOUTPA_DOUT register accessor: an alias for `Reg<PA_DOUT_SPEC>` PA_DOUTTGLPA_DOUTTGL register accessor: an alias for `Reg<PA_DOUTTGL_SPEC>` PA_MODEHPA_MODEH register accessor: an alias for `Reg<PA_MODEH_SPEC>` PA_MODELPA_MODEL register accessor: an alias for `Reg<PA_MODEL_SPEC>` PA_OVTDISPA_OVTDIS register accessor: an alias for `Reg<PA_OVTDIS_SPEC>` PA_PINLOCKNPA_PINLOCKN register accessor: an alias for `Reg<PA_PINLOCKN_SPEC>` PB_CTRLPB_CTRL register accessor: an alias for `Reg<PB_CTRL_SPEC>` PB_DINPB_DIN register accessor: an alias for `Reg<PB_DIN_SPEC>` PB_DOUTPB_DOUT register accessor: an alias for `Reg<PB_DOUT_SPEC>` PB_DOUTTGLPB_DOUTTGL register accessor: an alias for `Reg<PB_DOUTTGL_SPEC>` PB_MODEHPB_MODEH register accessor: an alias for `Reg<PB_MODEH_SPEC>` PB_MODELPB_MODEL register accessor: an alias for `Reg<PB_MODEL_SPEC>` PB_OVTDISPB_OVTDIS register accessor: an alias for `Reg<PB_OVTDIS_SPEC>` PB_PINLOCKNPB_PINLOCKN register accessor: an alias for `Reg<PB_PINLOCKN_SPEC>` PC_CTRLPC_CTRL register accessor: an alias for `Reg<PC_CTRL_SPEC>` PC_DINPC_DIN register accessor: an alias for `Reg<PC_DIN_SPEC>` PC_DOUTPC_DOUT register accessor: an alias for `Reg<PC_DOUT_SPEC>` PC_DOUTTGLPC_DOUTTGL register accessor: an alias for `Reg<PC_DOUTTGL_SPEC>` PC_MODEHPC_MODEH register accessor: an alias for `Reg<PC_MODEH_SPEC>` PC_MODELPC_MODEL register accessor: an alias for `Reg<PC_MODEL_SPEC>` PC_OVTDISPC_OVTDIS register accessor: an alias for `Reg<PC_OVTDIS_SPEC>` PC_PINLOCKNPC_PINLOCKN register accessor: an alias for `Reg<PC_PINLOCKN_SPEC>` PD_CTRLPD_CTRL register accessor: an alias for `Reg<PD_CTRL_SPEC>` PD_DINPD_DIN register accessor: an alias for `Reg<PD_DIN_SPEC>` PD_DOUTPD_DOUT register accessor: an alias for `Reg<PD_DOUT_SPEC>` PD_DOUTTGLPD_DOUTTGL register accessor: an alias for `Reg<PD_DOUTTGL_SPEC>` PD_MODEHPD_MODEH register accessor: an alias for `Reg<PD_MODEH_SPEC>` PD_MODELPD_MODEL register accessor: an alias for `Reg<PD_MODEL_SPEC>` PD_OVTDISPD_OVTDIS register accessor: an alias for `Reg<PD_OVTDIS_SPEC>` 
PD_PINLOCKNPD_PINLOCKN register accessor: an alias for `Reg<PD_PINLOCKN_SPEC>` PE_CTRLPE_CTRL register accessor: an alias for `Reg<PE_CTRL_SPEC>` PE_DINPE_DIN register accessor: an alias for `Reg<PE_DIN_SPEC>` PE_DOUTPE_DOUT register accessor: an alias for `Reg<PE_DOUT_SPEC>` PE_DOUTTGLPE_DOUTTGL register accessor: an alias for `Reg<PE_DOUTTGL_SPEC>` PE_MODEHPE_MODEH register accessor: an alias for `Reg<PE_MODEH_SPEC>` PE_MODELPE_MODEL register accessor: an alias for `Reg<PE_MODEL_SPEC>` PE_OVTDISPE_OVTDIS register accessor: an alias for `Reg<PE_OVTDIS_SPEC>` PE_PINLOCKNPE_PINLOCKN register accessor: an alias for `Reg<PE_PINLOCKN_SPEC>` PF_CTRLPF_CTRL register accessor: an alias for `Reg<PF_CTRL_SPEC>` PF_DINPF_DIN register accessor: an alias for `Reg<PF_DIN_SPEC>` PF_DOUTPF_DOUT register accessor: an alias for `Reg<PF_DOUT_SPEC>` PF_DOUTTGLPF_DOUTTGL register accessor: an alias for `Reg<PF_DOUTTGL_SPEC>` PF_MODEHPF_MODEH register accessor: an alias for `Reg<PF_MODEH_SPEC>` PF_MODELPF_MODEL register accessor: an alias for `Reg<PF_MODEL_SPEC>` PF_OVTDISPF_OVTDIS register accessor: an alias for `Reg<PF_OVTDIS_SPEC>` PF_PINLOCKNPF_PINLOCKN register accessor: an alias for `Reg<PF_PINLOCKN_SPEC>` ROUTELOC0ROUTELOC0 register accessor: an alias for `Reg<ROUTELOC0_SPEC>` ROUTEPENROUTEPEN register accessor: an alias for `Reg<ROUTEPEN_SPEC>` Module efm32jg1b100_pac::i2c0 === I2C0 Modules --- clkdivClock Division Register cmdCommand Register ctrlControl Register ienInterrupt Enable Register if_Interrupt Flag Register ifcInterrupt Flag Clear Register ifsInterrupt Flag Set Register routeloc0I/O Routing Location Register routepenI/O Routing Pin Enable Register rxdataReceive Buffer Data Register rxdatapReceive Buffer Data Peek Register rxdoubleReceive Buffer Double Data Register rxdoublepReceive Buffer Double Data Peek Register saddrSlave Address Register saddrmaskSlave Address Mask Register stateState Register statusStatus Register txdataTransmit Buffer Data Register txdoubleTransmit Buffer Double Data Register Structs --- RegisterBlockRegister block Type Definitions --- CLKDIVCLKDIV register accessor: an alias for `Reg<CLKDIV_SPEC>` CMDCMD register accessor: an alias for `Reg<CMD_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for `Reg<IFC_SPEC>` IFSIFS register accessor: an alias for `Reg<IFS_SPEC>` ROUTELOC0ROUTELOC0 register accessor: an alias for `Reg<ROUTELOC0_SPEC>` ROUTEPENROUTEPEN register accessor: an alias for `Reg<ROUTEPEN_SPEC>` RXDATARXDATA register accessor: an alias for `Reg<RXDATA_SPEC>` RXDATAPRXDATAP register accessor: an alias for `Reg<RXDATAP_SPEC>` RXDOUBLERXDOUBLE register accessor: an alias for `Reg<RXDOUBLE_SPEC>` RXDOUBLEPRXDOUBLEP register accessor: an alias for `Reg<RXDOUBLEP_SPEC>` SADDRSADDR register accessor: an alias for `Reg<SADDR_SPEC>` SADDRMASKSADDRMASK register accessor: an alias for `Reg<SADDRMASK_SPEC>` STATESTATE register accessor: an alias for `Reg<STATE_SPEC>` STATUSSTATUS register accessor: an alias for `Reg<STATUS_SPEC>` TXDATATXDATA register accessor: an alias for `Reg<TXDATA_SPEC>` TXDOUBLETXDOUBLE register accessor: an alias for `Reg<TXDOUBLE_SPEC>` Module efm32jg1b100_pac::idac0 === IDAC0 Modules --- aportconflictAPORT Request Status Register aportreqAPORT Request Status Register ctrlControl Register curprogCurrent Programming Register dutyconfigDuty Cycle Configuration Register ienInterrupt Enable 
Register if_Interrupt Flag Register ifcInterrupt Flag Clear Register ifsInterrupt Flag Set Register statusStatus Register Structs --- RegisterBlockRegister block Type Definitions --- APORTCONFLICTAPORTCONFLICT register accessor: an alias for `Reg<APORTCONFLICT_SPEC>` APORTREQAPORTREQ register accessor: an alias for `Reg<APORTREQ_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` CURPROGCURPROG register accessor: an alias for `Reg<CURPROG_SPEC>` DUTYCONFIGDUTYCONFIG register accessor: an alias for `Reg<DUTYCONFIG_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for `Reg<IFC_SPEC>` IFSIFS register accessor: an alias for `Reg<IFS_SPEC>` STATUSSTATUS register accessor: an alias for `Reg<STATUS_SPEC>` Module efm32jg1b100_pac::ldma === LDMA Modules --- ch0_cfgChannel Configuration Register ch0_ctrlChannel Descriptor Control Word Register ch0_dstChannel Descriptor Destination Data Address Register ch0_linkChannel Descriptor Link Structure Address Register ch0_loopChannel Loop Counter Register ch0_reqselChannel Peripheral Request Select Register ch0_srcChannel Descriptor Source Data Address Register ch1_cfgChannel Configuration Register ch1_ctrlChannel Descriptor Control Word Register ch1_dstChannel Descriptor Destination Data Address Register ch1_linkChannel Descriptor Link Structure Address Register ch1_loopChannel Loop Counter Register ch1_reqselChannel Peripheral Request Select Register ch1_srcChannel Descriptor Source Data Address Register ch2_cfgChannel Configuration Register ch2_ctrlChannel Descriptor Control Word Register ch2_dstChannel Descriptor Destination Data Address Register ch2_linkChannel Descriptor Link Structure Address Register ch2_loopChannel Loop Counter Register ch2_reqselChannel Peripheral Request Select Register ch2_srcChannel Descriptor Source Data Address Register ch3_cfgChannel Configuration Register ch3_ctrlChannel Descriptor Control Word Register ch3_dstChannel Descriptor Destination Data Address Register ch3_linkChannel Descriptor Link Structure Address Register ch3_loopChannel Loop Counter Register ch3_reqselChannel Peripheral Request Select Register ch3_srcChannel Descriptor Source Data Address Register ch4_cfgChannel Configuration Register ch4_ctrlChannel Descriptor Control Word Register ch4_dstChannel Descriptor Destination Data Address Register ch4_linkChannel Descriptor Link Structure Address Register ch4_loopChannel Loop Counter Register ch4_reqselChannel Peripheral Request Select Register ch4_srcChannel Descriptor Source Data Address Register ch5_cfgChannel Configuration Register ch5_ctrlChannel Descriptor Control Word Register ch5_dstChannel Descriptor Destination Data Address Register ch5_linkChannel Descriptor Link Structure Address Register ch5_loopChannel Loop Counter Register ch5_reqselChannel Peripheral Request Select Register ch5_srcChannel Descriptor Source Data Address Register ch6_cfgChannel Configuration Register ch6_ctrlChannel Descriptor Control Word Register ch6_dstChannel Descriptor Destination Data Address Register ch6_linkChannel Descriptor Link Structure Address Register ch6_loopChannel Loop Counter Register ch6_reqselChannel Peripheral Request Select Register ch6_srcChannel Descriptor Source Data Address Register ch7_cfgChannel Configuration Register ch7_ctrlChannel Descriptor Control Word Register ch7_dstChannel Descriptor Destination Data Address Register ch7_linkChannel Descriptor Link Structure Address Register 
ch7_loopChannel Loop Counter Register ch7_reqselChannel Peripheral Request Select Register ch7_srcChannel Descriptor Source Data Address Register chbusyDMA Channel Busy Register chdoneDMA Channel Linking Done Register (Single-Cycle RMW) chenDMA Channel Enable Register (Single-Cycle RMW) ctrlDMA Control Register dbghaltDMA Channel Debug Halt Register ienInterrupt Enable Register if_Interrupt Flag Register ifcInterrupt Flag Clear Register ifsInterrupt Flag Set Register linkloadDMA Channel Link Load Register reqclearDMA Channel Request Clear Register reqdisDMA Channel Request Disable Register reqpendDMA Channel Requests Pending Register statusDMA Status Register swreqDMA Channel Software Transfer Request Register syncDMA Synchronization Trigger Register (Single-Cycle RMW) Structs --- RegisterBlockRegister block Type Definitions --- CH0_CFGCH0_CFG register accessor: an alias for `Reg<CH0_CFG_SPEC>` CH0_CTRLCH0_CTRL register accessor: an alias for `Reg<CH0_CTRL_SPEC>` CH0_DSTCH0_DST register accessor: an alias for `Reg<CH0_DST_SPEC>` CH0_LINKCH0_LINK register accessor: an alias for `Reg<CH0_LINK_SPEC>` CH0_LOOPCH0_LOOP register accessor: an alias for `Reg<CH0_LOOP_SPEC>` CH0_REQSELCH0_REQSEL register accessor: an alias for `Reg<CH0_REQSEL_SPEC>` CH0_SRCCH0_SRC register accessor: an alias for `Reg<CH0_SRC_SPEC>` CH1_CFGCH1_CFG register accessor: an alias for `Reg<CH1_CFG_SPEC>` CH1_CTRLCH1_CTRL register accessor: an alias for `Reg<CH1_CTRL_SPEC>` CH1_DSTCH1_DST register accessor: an alias for `Reg<CH1_DST_SPEC>` CH1_LINKCH1_LINK register accessor: an alias for `Reg<CH1_LINK_SPEC>` CH1_LOOPCH1_LOOP register accessor: an alias for `Reg<CH1_LOOP_SPEC>` CH1_REQSELCH1_REQSEL register accessor: an alias for `Reg<CH1_REQSEL_SPEC>` CH1_SRCCH1_SRC register accessor: an alias for `Reg<CH1_SRC_SPEC>` CH2_CFGCH2_CFG register accessor: an alias for `Reg<CH2_CFG_SPEC>` CH2_CTRLCH2_CTRL register accessor: an alias for `Reg<CH2_CTRL_SPEC>` CH2_DSTCH2_DST register accessor: an alias for `Reg<CH2_DST_SPEC>` CH2_LINKCH2_LINK register accessor: an alias for `Reg<CH2_LINK_SPEC>` CH2_LOOPCH2_LOOP register accessor: an alias for `Reg<CH2_LOOP_SPEC>` CH2_REQSELCH2_REQSEL register accessor: an alias for `Reg<CH2_REQSEL_SPEC>` CH2_SRCCH2_SRC register accessor: an alias for `Reg<CH2_SRC_SPEC>` CH3_CFGCH3_CFG register accessor: an alias for `Reg<CH3_CFG_SPEC>` CH3_CTRLCH3_CTRL register accessor: an alias for `Reg<CH3_CTRL_SPEC>` CH3_DSTCH3_DST register accessor: an alias for `Reg<CH3_DST_SPEC>` CH3_LINKCH3_LINK register accessor: an alias for `Reg<CH3_LINK_SPEC>` CH3_LOOPCH3_LOOP register accessor: an alias for `Reg<CH3_LOOP_SPEC>` CH3_REQSELCH3_REQSEL register accessor: an alias for `Reg<CH3_REQSEL_SPEC>` CH3_SRCCH3_SRC register accessor: an alias for `Reg<CH3_SRC_SPEC>` CH4_CFGCH4_CFG register accessor: an alias for `Reg<CH4_CFG_SPEC>` CH4_CTRLCH4_CTRL register accessor: an alias for `Reg<CH4_CTRL_SPEC>` CH4_DSTCH4_DST register accessor: an alias for `Reg<CH4_DST_SPEC>` CH4_LINKCH4_LINK register accessor: an alias for `Reg<CH4_LINK_SPEC>` CH4_LOOPCH4_LOOP register accessor: an alias for `Reg<CH4_LOOP_SPEC>` CH4_REQSELCH4_REQSEL register accessor: an alias for `Reg<CH4_REQSEL_SPEC>` CH4_SRCCH4_SRC register accessor: an alias for `Reg<CH4_SRC_SPEC>` CH5_CFGCH5_CFG register accessor: an alias for `Reg<CH5_CFG_SPEC>` CH5_CTRLCH5_CTRL register accessor: an alias for `Reg<CH5_CTRL_SPEC>` CH5_DSTCH5_DST register accessor: an alias for `Reg<CH5_DST_SPEC>` CH5_LINKCH5_LINK register accessor: an alias for `Reg<CH5_LINK_SPEC>` 
CH5_LOOPCH5_LOOP register accessor: an alias for `Reg<CH5_LOOP_SPEC>` CH5_REQSELCH5_REQSEL register accessor: an alias for `Reg<CH5_REQSEL_SPEC>` CH5_SRCCH5_SRC register accessor: an alias for `Reg<CH5_SRC_SPEC>` CH6_CFGCH6_CFG register accessor: an alias for `Reg<CH6_CFG_SPEC>` CH6_CTRLCH6_CTRL register accessor: an alias for `Reg<CH6_CTRL_SPEC>` CH6_DSTCH6_DST register accessor: an alias for `Reg<CH6_DST_SPEC>` CH6_LINKCH6_LINK register accessor: an alias for `Reg<CH6_LINK_SPEC>` CH6_LOOPCH6_LOOP register accessor: an alias for `Reg<CH6_LOOP_SPEC>` CH6_REQSELCH6_REQSEL register accessor: an alias for `Reg<CH6_REQSEL_SPEC>` CH6_SRCCH6_SRC register accessor: an alias for `Reg<CH6_SRC_SPEC>` CH7_CFGCH7_CFG register accessor: an alias for `Reg<CH7_CFG_SPEC>` CH7_CTRLCH7_CTRL register accessor: an alias for `Reg<CH7_CTRL_SPEC>` CH7_DSTCH7_DST register accessor: an alias for `Reg<CH7_DST_SPEC>` CH7_LINKCH7_LINK register accessor: an alias for `Reg<CH7_LINK_SPEC>` CH7_LOOPCH7_LOOP register accessor: an alias for `Reg<CH7_LOOP_SPEC>` CH7_REQSELCH7_REQSEL register accessor: an alias for `Reg<CH7_REQSEL_SPEC>` CH7_SRCCH7_SRC register accessor: an alias for `Reg<CH7_SRC_SPEC>` CHBUSYCHBUSY register accessor: an alias for `Reg<CHBUSY_SPEC>` CHDONECHDONE register accessor: an alias for `Reg<CHDONE_SPEC>` CHENCHEN register accessor: an alias for `Reg<CHEN_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` DBGHALTDBGHALT register accessor: an alias for `Reg<DBGHALT_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for `Reg<IFC_SPEC>` IFSIFS register accessor: an alias for `Reg<IFS_SPEC>` LINKLOADLINKLOAD register accessor: an alias for `Reg<LINKLOAD_SPEC>` REQCLEARREQCLEAR register accessor: an alias for `Reg<REQCLEAR_SPEC>` REQDISREQDIS register accessor: an alias for `Reg<REQDIS_SPEC>` REQPENDREQPEND register accessor: an alias for `Reg<REQPEND_SPEC>` STATUSSTATUS register accessor: an alias for `Reg<STATUS_SPEC>` SWREQSWREQ register accessor: an alias for `Reg<SWREQ_SPEC>` SYNCSYNC register accessor: an alias for `Reg<SYNC_SPEC>` Module efm32jg1b100_pac::letimer0 === LETIMER0 Modules --- cmdCommand Register cntCounter Value Register comp0Compare Value Register 0 comp1Compare Value Register 1 ctrlControl Register ienInterrupt Enable Register if_Interrupt Flag Register ifcInterrupt Flag Clear Register ifsInterrupt Flag Set Register prsselPRS Input Select Register rep0Repeat Counter Register 0 rep1Repeat Counter Register 1 routeloc0I/O Routing Location Register routepenI/O Routing Pin Enable Register statusStatus Register syncbusySynchronization Busy Register Structs --- RegisterBlockRegister block Type Definitions --- CMDCMD register accessor: an alias for `Reg<CMD_SPEC>` CNTCNT register accessor: an alias for `Reg<CNT_SPEC>` COMP0COMP0 register accessor: an alias for `Reg<COMP0_SPEC>` COMP1COMP1 register accessor: an alias for `Reg<COMP1_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for `Reg<IFC_SPEC>` IFSIFS register accessor: an alias for `Reg<IFS_SPEC>` PRSSELPRSSEL register accessor: an alias for `Reg<PRSSEL_SPEC>` REP0REP0 register accessor: an alias for `Reg<REP0_SPEC>` REP1REP1 register accessor: an alias for `Reg<REP1_SPEC>` ROUTELOC0ROUTELOC0 register accessor: an alias for `Reg<ROUTELOC0_SPEC>` ROUTEPENROUTEPEN register 
accessor: an alias for `Reg<ROUTEPEN_SPEC>` STATUSSTATUS register accessor: an alias for `Reg<STATUS_SPEC>` SYNCBUSYSYNCBUSY register accessor: an alias for `Reg<SYNCBUSY_SPEC>` Module efm32jg1b100_pac::leuart0 === LEUART0 Modules --- clkdivClock Control Register cmdCommand Register ctrlControl Register freezeFreeze Register ienInterrupt Enable Register if_Interrupt Flag Register ifcInterrupt Flag Clear Register ifsInterrupt Flag Set Register inputLEUART Input Register pulsectrlPulse Control Register routeloc0I/O Routing Location Register routepenI/O Routing Pin Enable Register rxdataReceive Buffer Data Register rxdataxReceive Buffer Data Extended Register rxdataxpReceive Buffer Data Extended Peek Register sigframeSignal Frame Register startframeStart Frame Register statusStatus Register syncbusySynchronization Busy Register txdataTransmit Buffer Data Register txdataxTransmit Buffer Data Extended Register Structs --- RegisterBlockRegister block Type Definitions --- CLKDIVCLKDIV register accessor: an alias for `Reg<CLKDIV_SPEC>` CMDCMD register accessor: an alias for `Reg<CMD_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` FREEZEFREEZE register accessor: an alias for `Reg<FREEZE_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for `Reg<IFC_SPEC>` IFSIFS register accessor: an alias for `Reg<IFS_SPEC>` INPUTINPUT register accessor: an alias for `Reg<INPUT_SPEC>` PULSECTRLPULSECTRL register accessor: an alias for `Reg<PULSECTRL_SPEC>` ROUTELOC0ROUTELOC0 register accessor: an alias for `Reg<ROUTELOC0_SPEC>` ROUTEPENROUTEPEN register accessor: an alias for `Reg<ROUTEPEN_SPEC>` RXDATARXDATA register accessor: an alias for `Reg<RXDATA_SPEC>` RXDATAXRXDATAX register accessor: an alias for `Reg<RXDATAX_SPEC>` RXDATAXPRXDATAXP register accessor: an alias for `Reg<RXDATAXP_SPEC>` SIGFRAMESIGFRAME register accessor: an alias for `Reg<SIGFRAME_SPEC>` STARTFRAMESTARTFRAME register accessor: an alias for `Reg<STARTFRAME_SPEC>` STATUSSTATUS register accessor: an alias for `Reg<STATUS_SPEC>` SYNCBUSYSYNCBUSY register accessor: an alias for `Reg<SYNCBUSY_SPEC>` TXDATATXDATA register accessor: an alias for `Reg<TXDATA_SPEC>` TXDATAXTXDATAX register accessor: an alias for `Reg<TXDATAX_SPEC>` Module efm32jg1b100_pac::msc === MSC Modules --- addrbPage Erase/Write Address Buffer cachecmdFlash Cache Command Register cachehitsCache Hits Performance Counter cachemissesCache Misses Performance Counter cmdCommand Register ctrlMemory System Control Register ienInterrupt Enable Register if_Interrupt Flag Register ifcInterrupt Flag Clear Register ifsInterrupt Flag Set Register lockConfiguration Lock Register masslockMass Erase Lock Register readctrlRead Control Register startupStartup Control statusStatus Register wdataWrite Data Register writecmdWrite Command Register writectrlWrite Control Register Structs --- RegisterBlockRegister block Type Definitions --- ADDRBADDRB register accessor: an alias for `Reg<ADDRB_SPEC>` CACHECMDCACHECMD register accessor: an alias for `Reg<CACHECMD_SPEC>` CACHEHITSCACHEHITS register accessor: an alias for `Reg<CACHEHITS_SPEC>` CACHEMISSESCACHEMISSES register accessor: an alias for `Reg<CACHEMISSES_SPEC>` CMDCMD register accessor: an alias for `Reg<CMD_SPEC>` CTRLCTRL register accessor: an alias for `Reg<CTRL_SPEC>` IENIEN register accessor: an alias for `Reg<IEN_SPEC>` IFIF register accessor: an alias for `Reg<IF_SPEC>` IFCIFC register accessor: an alias for 
Module efm32jg1b100_pac::msc
===
MSC

Modules
---
`addrb` Page Erase/Write Address Buffer, `cachecmd` Flash Cache Command Register, `cachehits` Cache Hits Performance Counter, `cachemisses` Cache Misses Performance Counter, `cmd` Command Register, `ctrl` Memory System Control Register, `ien` Interrupt Enable Register, `if_` Interrupt Flag Register, `ifc` Interrupt Flag Clear Register, `ifs` Interrupt Flag Set Register, `lock` Configuration Lock Register, `masslock` Mass Erase Lock Register, `readctrl` Read Control Register, `startup` Startup Control, `status` Status Register, `wdata` Write Data Register, `writecmd` Write Command Register, `writectrl` Write Control Register

Structs
---
`RegisterBlock` Register block

Type Definitions
---
Register accessors, each an alias for the matching `Reg<…_SPEC>` type: `ADDRB`, `CACHECMD`, `CACHEHITS`, `CACHEMISSES`, `CMD`, `CTRL`, `IEN`, `IF`, `IFC`, `IFS`, `LOCK`, `MASSLOCK`, `READCTRL`, `STARTUP`, `STATUS`, `WDATA`, `WRITECMD`, `WRITECTRL`.

Module efm32jg1b100_pac::pcnt0
===
PCNT0

Modules
---
`auxcnt` Auxiliary Counter Value Register, `cmd` Command Register, `cnt` Counter Value Register, `ctrl` Control Register, `freeze` Freeze Register, `ien` Interrupt Enable Register, `if_` Interrupt Flag Register, `ifc` Interrupt Flag Clear Register, `ifs` Interrupt Flag Set Register, `input` PCNT Input Register, `ovscfg` Oversampling Config Register, `routeloc0` I/O Routing Location Register, `status` Status Register, `syncbusy` Synchronization Busy Register, `top` Top Value Register, `topb` Top Value Buffer Register

Structs
---
`RegisterBlock` Register block

Type Definitions
---
Register accessors, each an alias for the matching `Reg<…_SPEC>` type: `AUXCNT`, `CMD`, `CNT`, `CTRL`, `FREEZE`, `IEN`, `IF`, `IFC`, `IFS`, `INPUT`, `OVSCFG`, `ROUTELOC0`, `STATUS`, `SYNCBUSY`, `TOP`, `TOPB`.
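The pulse counter's `cnt` and `top` registers pair naturally, with the counter wrapping at the top value. A minimal read-only sketch, leaving the CTRL/CMD setup out because those field layouts are not part of this listing:

```
use efm32jg1b100_pac as pac;

// Returns (current count, wrap value) for PCNT0.
fn pcnt0_progress(dp: &pac::Peripherals) -> (u32, u32) {
    let cnt = dp.PCNT0.cnt.read().bits();
    let top = dp.PCNT0.top.read().bits();
    (cnt, top)
}
```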
Module efm32jg1b100_pac::prs
===
PRS

Modules
---
`ch0_ctrl` through `ch11_ctrl` Channel Control Register (one per channel), `ctrl` Control Register, `dmareq0` DMA Request 0 Register, `dmareq1` DMA Request 1 Register, `peek` PRS Channel Values, `routeloc0`/`routeloc1`/`routeloc2` I/O Routing Location Register, `routepen` I/O Routing Pin Enable Register, `swlevel` Software Level Register, `swpulse` Software Pulse Register

Structs
---
`RegisterBlock` Register block

Type Definitions
---
Register accessors, each an alias for the matching `Reg<…_SPEC>` type: `CH0_CTRL` through `CH11_CTRL`, `CTRL`, `DMAREQ0`, `DMAREQ1`, `PEEK`, `ROUTELOC0`, `ROUTELOC1`, `ROUTELOC2`, `ROUTEPEN`, `SWLEVEL`, `SWPULSE`.

Module efm32jg1b100_pac::rmu
===
RMU

Modules
---
`cmd` Command Register, `ctrl` Control Register, `lock` Configuration Lock Register, `rst` Reset Control Register, `rstcause` Reset Cause Register

Structs
---
`RegisterBlock` Register block

Type Definitions
---
Register accessors, each an alias for the matching `Reg<…_SPEC>` type: `CMD`, `CTRL`, `LOCK`, `RST`, `RSTCAUSE`.
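A common boot-time use of the RMU is to record why the chip reset and then clear the sticky flags. The sketch below reads `rstcause` and issues the clear command; that the clear command lives at bit 0 of `cmd` is an assumption carried over from other EFM32 parts, not something this listing states.

```
use efm32jg1b100_pac as pac;

// Read the Reset Cause Register, then clear it for the next boot.
fn boot_reset_cause(dp: &pac::Peripherals) -> u32 {
    let cause = dp.RMU.rstcause.read().bits();
    // SAFETY: raw command write; bit 0 is assumed to be the RCCLR command.
    dp.RMU.cmd.write(|w| unsafe { w.bits(1) });
    cause
}
```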
Module efm32jg1b100_pac::rtcc
===
RTCC

Modules
---
`cc0_ccv`/`cc1_ccv`/`cc2_ccv` Capture/Compare Value Register, `cc0_ctrl`/`cc1_ctrl`/`cc2_ctrl` CC Channel Control Register, `cc0_date`/`cc1_date`/`cc2_date` Capture/Compare Date Register, `cc0_time`/`cc1_time`/`cc2_time` Capture/Compare Time Register, `cmd` Command Register, `cnt` Counter Value Register, `combcnt` Combined Pre-Counter and Counter Value Register, `ctrl` Control Register, `date` Date Register, `em4wuen` Wake Up Enable, `ien` Interrupt Enable Register, `if_` RTCC Interrupt Flags, `ifc` Interrupt Flag Clear Register, `ifs` Interrupt Flag Set Register, `lock` Configuration Lock Register, `powerdown` Retention RAM Power-down Register, `precnt` Pre-Counter Value Register, `ret0_reg` through `ret31_reg` Retention Register (32 of them), `status` Status Register, `syncbusy` Synchronization Busy Register, `time` Time of Day Register

Structs
---
`RegisterBlock` Register block

Type Definitions
---
Register accessors, each an alias for the matching `Reg<…_SPEC>` type: `CC0_CCV`/`CC0_CTRL`/`CC0_DATE`/`CC0_TIME` (likewise for `CC1_*` and `CC2_*`), `CMD`, `CNT`, `COMBCNT`, `CTRL`, `DATE`, `EM4WUEN`, `IEN`, `IF`, `IFC`, `IFS`, `LOCK`, `POWERDOWN`, `PRECNT`, `RET0_REG` through `RET31_REG`, `STATUS`, `SYNCBUSY`, `TIME`.
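The 32 retention registers are plain read/write words whose point is persistence across the low-energy modes (their power state is governed by the `powerdown` register above). A sketch that keeps a boot counter in `ret0_reg`:

```
use efm32jg1b100_pac as pac;

// Increment and store a boot counter in the first RTCC retention register.
fn bump_boot_counter(dp: &pac::Peripherals) -> u32 {
    let boots = dp.RTCC.ret0_reg.read().bits().wrapping_add(1);
    // SAFETY: retention registers hold arbitrary user data, so a raw
    // write is the intended use.
    dp.RTCC.ret0_reg.write(|w| unsafe { w.bits(boots) });
    boots
}
```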
Module efm32jg1b100_pac::timer0
===
TIMER0

Modules
---
`cc0_ccv`…`cc3_ccv` CC Channel Value Register, `cc0_ccvb`…`cc3_ccvb` CC Channel Buffer Register, `cc0_ccvp`…`cc3_ccvp` CC Channel Value Peek Register, `cc0_ctrl`…`cc3_ctrl` CC Channel Control Register, `cmd` Command Register, `cnt` Counter Value Register, `ctrl` Control Register, `dtctrl` DTI Control Register, `dtfault` DTI Fault Register, `dtfaultc` DTI Fault Clear Register, `dtfc` DTI Fault Configuration Register, `dtlock` DTI Configuration Lock Register, `dtogen` DTI Output Generation Enable Register, `dttime` DTI Time Control Register, `ien` Interrupt Enable Register, `if_` Interrupt Flag Register, `ifc` Interrupt Flag Clear Register, `ifs` Interrupt Flag Set Register, `lock` TIMER Configuration Lock Register, `routeloc0` and `routeloc2` I/O Routing Location Register, `routepen` I/O Routing Pin Enable Register, `status` Status Register, `top` Counter Top Value Register, `topb` Counter Top Value Buffer Register

Structs
---
`RegisterBlock` Register block

Type Definitions
---
Register accessors, each an alias for the matching `Reg<…_SPEC>` type: `CC0_CCV`…`CC3_CCV`, `CC0_CCVB`…`CC3_CCVB`, `CC0_CCVP`…`CC3_CCVP`, `CC0_CTRL`…`CC3_CTRL`, `CMD`, `CNT`, `CTRL`, `DTCTRL`, `DTFAULT`, `DTFAULTC`, `DTFC`, `DTLOCK`, `DTOGEN`, `DTTIME`, `IEN`, `IF`, `IFC`, `IFS`, `LOCK`, `ROUTELOC0`, `ROUTELOC2`, `ROUTEPEN`, `STATUS`, `TOP`, `TOPB`.
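Tying the timer registers together: program `top`, kick the counter through `cmd`, then sample `cnt`. That START is bit 0 of the Command Register is an assumption from other EFM32 timers and should be replaced with the generated field accessor in real code.

```
use efm32jg1b100_pac as pac;

fn timer0_free_run(dp: &pac::Peripherals) -> u32 {
    // SAFETY: raw writes with assumed values; see the lead-in.
    dp.TIMER0.top.write(|w| unsafe { w.bits(0xFFFF) }); // count up to 0xFFFF
    dp.TIMER0.cmd.write(|w| unsafe { w.bits(1) });      // assumed CMD.START bit
    dp.TIMER0.cnt.read().bits()                         // current counter value
}
```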
Module efm32jg1b100_pac::timer1
===
TIMER1

The modules, `RegisterBlock` struct, and register-accessor type definitions are identical to `timer0` above, register for register.
Module efm32jg1b100_pac::usart0
===
USART0

Modules
---
`clkdiv` Clock Control Register, `cmd` Command Register, `ctrl` Control Register, `ctrlx` Control Register Extended, `frame` USART Frame Format Register, `i2sctrl` I2S Control Register, `ien` Interrupt Enable Register, `if_` Interrupt Flag Register, `ifc` Interrupt Flag Clear Register, `ifs` Interrupt Flag Set Register, `input` USART Input Register, `irctrl` IrDA Control Register, `routeloc0`/`routeloc1` I/O Routing Location Register, `routepen` I/O Routing Pin Enable Register, `rxdata` RX Buffer Data Register, `rxdatax` RX Buffer Data Extended Register, `rxdataxp` RX Buffer Data Extended Peek Register, `rxdouble` RX FIFO Double Data Register, `rxdoublex` RX Buffer Double Data Extended Register, `rxdoublexp` RX Buffer Double Data Extended Peek Register, `status` USART Status Register, `timecmp0`/`timecmp1`/`timecmp2` Used to Generate Interrupts and Various Delays, `timing` Timing Register, `trigctrl` USART Trigger Control Register, `txdata` TX Buffer Data Register, `txdatax` TX Buffer Data Extended Register, `txdouble` TX Buffer Double Data Register, `txdoublex` TX Buffer Double Data Extended Register

Structs
---
`RegisterBlock` Register block

Type Definitions
---
Register accessors, each an alias for the matching `Reg<…_SPEC>` type: `CLKDIV`, `CMD`, `CTRL`, `CTRLX`, `FRAME`, `I2SCTRL`, `IEN`, `IF`, `IFC`, `IFS`, `INPUT`, `IRCTRL`, `ROUTELOC0`, `ROUTELOC1`, `ROUTEPEN`, `RXDATA`, `RXDATAX`, `RXDATAXP`, `RXDOUBLE`, `RXDOUBLEX`, `RXDOUBLEXP`, `STATUS`, `TIMECMP0`, `TIMECMP1`, `TIMECMP2`, `TIMING`, `TRIGCTRL`, `TXDATA`, `TXDATAX`, `TXDOUBLE`, `TXDOUBLEX`.
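The classic blocking-transmit loop over these registers: spin on `status` until the TX buffer has room, then write `txdata`. The TXBL bit position used below (bit 6) is an assumption from the EFM32 family documentation; the generated field accessor is the safe route.

```
use efm32jg1b100_pac as pac;

fn usart0_putc(dp: &pac::Peripherals, byte: u8) {
    // Wait until the TX Buffer Level flag signals free space (assumed bit 6).
    while dp.USART0.status.read().bits() & (1 << 6) == 0 {}
    // SAFETY: raw write of the byte into the TX Buffer Data Register.
    dp.USART0.txdata.write(|w| unsafe { w.bits(byte as u32) });
}
```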
Module efm32jg1b100_pac::usart1
===
USART1

The modules, `RegisterBlock` struct, and register-accessor type definitions are identical to `usart0` above, register for register.

Module efm32jg1b100_pac::wdog0
===
WDOG0

Modules
---
`cmd` Command Register, `ctrl` Control Register, `ien` Interrupt Enable Register, `if_` Watchdog Interrupt Flags, `ifc` Interrupt Flag Clear Register, `ifs` Interrupt Flag Set Register, `pch0_prsctrl` and `pch1_prsctrl` PRS Control Register, `syncbusy` Synchronization Busy Register

Structs
---
`RegisterBlock` Register block

Type Definitions
---
Register accessors, each an alias for the matching `Reg<…_SPEC>` type: `CMD`, `CTRL`, `IEN`, `IF`, `IFC`, `IFS`, `PCH0_PRSCTRL`, `PCH1_PRSCTRL`, `SYNCBUSY`.

Struct efm32jg1b100_pac::ACMP0
===

```
pub struct ACMP0 { /* private fields */ }
```

ACMP0

Implementations
---
impl ACMP0

`pub const PTR: *const RegisterBlock = 0x4000_0000 as *const acmp0::RegisterBlock`
Pointer to the register block.

`pub const fn ptr() -> *const RegisterBlock`
Returns the pointer to the register block.

Trait Implementations
---
impl Debug for ACMP0: `fn fmt(&self, f: &mut Formatter<'_>) -> Result` formats the value using the given formatter.

impl Deref for ACMP0: `type Target = RegisterBlock`; `fn deref(&self) -> &Self::Target` dereferences the value.

impl Send for ACMP0

Auto Trait Implementations
---
`RefUnwindSafe`, `Unpin`, and `UnwindSafe` are implemented; `Sync` is not.

Blanket Implementations
---
impl<T> Any for T where T: 'static + ?Sized: `fn type_id(&self) -> TypeId` gets the `TypeId` of `self`.

impl<T> Borrow<T> for T where T: ?Sized: `fn borrow(&self) -> &T` immutably borrows from an owned value.

impl<T> BorrowMut<T> for T where T: ?Sized: `fn borrow_mut(&mut self) -> &mut T` mutably borrows from an owned value.

impl<T> From<T> for T: `fn from(t: T) -> T` returns the argument unchanged.

impl<T, U> Into<U> for T where U: From<T>: `fn into(self) -> U` calls `U::from(self)`; that is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

impl<T, U> TryFrom<U> for T where U: Into<T>: `type Error = Infallible`; `fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>` performs the conversion.

impl<T, U> TryInto<U> for T where U: TryFrom<T>: `type Error = <U as TryFrom<T>>::Error`; `fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>` performs the conversion.
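The struct pages that begin here all follow the pattern just shown for `ACMP0`: an owned, `Send`, non-`Sync` handle that Derefs to its `RegisterBlock`, plus a `PTR` constant carrying the block's address. A short sketch of how the pieces relate, again assuming the standard svd2rust `Peripherals::take()`:

```
use efm32jg1b100_pac as pac;

fn ownership_demo() {
    let dp = pac::Peripherals::take().unwrap();
    // Register access goes through Deref<Target = RegisterBlock>.
    let _busy = dp.WDOG0.syncbusy.read().bits();
    // The same block is reachable through the associated constant.
    assert_eq!(pac::ACMP0::PTR as usize, 0x4000_0000);
}
```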
Struct efm32jg1b100_pac::ACMP1
===

```
pub struct ACMP1 { /* private fields */ }
```

ACMP1

Implementations
---
`pub const PTR: *const RegisterBlock = 0x4000_0400 as *const acmp1::RegisterBlock` (pointer to the register block) and `pub const fn ptr() -> *const RegisterBlock` (returns it). Trait, auto trait, and blanket implementations are the same as for `ACMP0`: `Debug`, `Deref<Target = RegisterBlock>`, and `Send`, but not `Sync`.

Struct efm32jg1b100_pac::ADC0
===

```
pub struct ADC0 { /* private fields */ }
```

ADC0

Implementations
---
`pub const PTR: *const RegisterBlock = 0x4000_2000 as *const adc0::RegisterBlock` (pointer to the register block) and `pub const fn ptr() -> *const RegisterBlock` (returns it). Trait, auto trait, and blanket implementations as for `ACMP0`.
Struct efm32jg1b100_pac::CBP
===

```
pub struct CBP { /* private fields */ }
```

Cache and branch predictor maintenance operations

Implementations
---
impl CBP

`pub fn iciallu(&mut self)` I-cache invalidate all to PoU.
`pub fn icimvau(&mut self, mva: u32)` I-cache invalidate by MVA to PoU.
`pub unsafe fn dcimvac(&mut self, mva: u32)` D-cache invalidate by MVA to PoC.
`pub unsafe fn dcisw(&mut self, set: u16, way: u16)` D-cache invalidate by set-way; `set` is masked to be between 0 and 3, and `way` between 0 and 511.
`pub fn dccmvau(&mut self, mva: u32)` D-cache clean by MVA to PoU.
`pub fn dccmvac(&mut self, mva: u32)` D-cache clean by MVA to PoC.
`pub fn dccsw(&mut self, set: u16, way: u16)` D-cache clean by set-way; `set` is masked to be between 0 and 3, and `way` between 0 and 511.
`pub fn dccimvac(&mut self, mva: u32)` D-cache clean and invalidate by MVA to PoC.
`pub fn dccisw(&mut self, set: u16, way: u16)` D-cache clean and invalidate by set-way; same masking as above.
`pub fn bpiall(&mut self)` Branch predictor invalidate all.

`pub const PTR: *const RegisterBlock = 0xe000_ef50 as *const cortex_m::peripheral::cbp::RegisterBlock` Pointer to the register block.
`pub const fn ptr() -> *const RegisterBlock` Deprecated since 0.7.5: use the associated constant `PTR` instead.

Trait Implementations
---
`Deref<Target = RegisterBlock>` and `Send`; auto trait and blanket implementations as for `ACMP0`.

Struct efm32jg1b100_pac::CMU
===

```
pub struct CMU { /* private fields */ }
```

CMU

Implementations
---
`pub const PTR: *const RegisterBlock = 0x400e_4000 as *const cmu::RegisterBlock` (pointer to the register block) and `pub const fn ptr() -> *const RegisterBlock` (returns it). Trait, auto trait, and blanket implementations as for `ACMP0`.

Struct efm32jg1b100_pac::CPUID
===

```
pub struct CPUID { /* private fields */ }
```

CPUID

Implementations
---
impl CPUID

`pub fn select_cache(&mut self, level: u8, ind: CsselrCacheType)` Selects the current CCSIDR. `level` is the required cache level minus 1 (e.g. 0 for L1, 1 for L2) and is masked to be between 0 and 7; `ind` selects the instruction cache or the data/unified cache.
`pub fn cache_num_sets_ways(&mut self, level: u8, ind: CsselrCacheType) -> (u16, u16)` Returns the number of sets and ways in the selected cache.
`pub fn cache_dminline() -> u32` Returns log2 of the number of words in the smallest cache line of all the data caches and unified caches that are controlled by the processor; this is the `DminLine` field of the CTR register.
`pub fn cache_iminline() -> u32` Returns log2 of the number of words in the smallest cache line of all the instruction caches that are controlled by the processor; this is the `IminLine` field of the CTR register.

`pub const PTR: *const RegisterBlock = 0xe000_ed00 as *const cortex_m::peripheral::cpuid::RegisterBlock` Pointer to the register block.
`pub const fn ptr() -> *const RegisterBlock` Deprecated since 0.7.5: use the associated constant `PTR` instead.

Trait Implementations
---
`Deref<Target = RegisterBlock>` and `Send`; auto trait and blanket implementations as for `ACMP0`.
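A sketch of the CPUID cache queries in use, assuming the application also depends on the `cortex_m` crate for the `CsselrCacheType` enum. On a core without caches the geometry simply comes back as zero, which is the expected answer on this part.

```
use efm32jg1b100_pac as pac;
use cortex_m::peripheral::cpuid::CsselrCacheType;

// Number of (sets, ways) in the L1 data/unified cache; level 0 selects L1,
// per the select_cache documentation above.
fn l1_data_geometry(cp: &mut pac::CorePeripherals) -> (u16, u16) {
    cp.CPUID.cache_num_sets_ways(0, CsselrCacheType::DataOrUnified)
}
```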
Struct efm32jg1b100_pac::CRYOTIMER
===

```
pub struct CRYOTIMER { /* private fields */ }
```

CRYOTIMER

Implementations
---
`pub const PTR: *const RegisterBlock = 0x4001_e000 as *const cryotimer::RegisterBlock` (pointer to the register block) and `pub const fn ptr() -> *const RegisterBlock` (returns it). Trait, auto trait, and blanket implementations as for `ACMP0`.

Struct efm32jg1b100_pac::CRYPTO
===

```
pub struct CRYPTO { /* private fields */ }
```

CRYPTO

Implementations
---
`pub const PTR: *const RegisterBlock = 0x400f_0000 as *const crypto::RegisterBlock` (pointer to the register block) and `pub const fn ptr() -> *const RegisterBlock` (returns it). Trait, auto trait, and blanket implementations as for `ACMP0`.
Struct efm32jg1b100_pac::CorePeripherals
===

```
pub struct CorePeripherals {
    pub CBP: CBP,
    pub CPUID: CPUID,
    pub DCB: DCB,
    pub DWT: DWT,
    pub FPB: FPB,
    pub FPU: FPU,
    pub ICB: ICB,
    pub ITM: ITM,
    pub MPU: MPU,
    pub NVIC: NVIC,
    pub SAU: SAU,
    pub SCB: SCB,
    pub SYST: SYST,
    pub TPIU: TPIU,
    /* private fields */
}
```

Core peripherals

Fields
---
`CBP` Cache and branch predictor maintenance operations; not available on Armv6-M. `CPUID` CPUID. `DCB` Debug Control Block. `DWT` Data Watchpoint and Trace unit. `FPB` Flash Patch and Breakpoint unit; not available on Armv6-M. `FPU` Floating Point Unit. `ICB` Implementation Control Block; the name is from the v8-M spec, but the block existed in earlier revisions, without a name. `ITM` Instrumentation Trace Macrocell; not available on Armv6-M and Armv8-M Baseline. `MPU` Memory Protection Unit. `NVIC` Nested Vector Interrupt Controller. `SAU` Security Attribution Unit. `SCB` System Control Block. `SYST` SysTick: System Timer. `TPIU` Trace Port Interface Unit; not available on Armv6-M.

Implementations
---
impl Peripherals

`pub fn take() -> Option<Peripherals>` Returns all the core peripherals *once*.
`pub unsafe fn steal() -> Peripherals` Unchecked version of `Peripherals::take`.

Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Unpin`, and `UnwindSafe` are implemented; `Sync` is not. Blanket implementations as for `ACMP0`.
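The take-once contract is easy to demonstrate: the first call hands the core peripherals out, every later call returns `None`, and `steal` remains as the unchecked escape hatch.

```
use efm32jg1b100_pac as pac;

fn core_singleton_demo() {
    let cp = pac::CorePeripherals::take().unwrap();
    // The singleton has been claimed, so a second take fails.
    assert!(pac::CorePeripherals::take().is_none());
    // Fields move out individually; SYST is the SysTick timer.
    let _syst = cp.SYST;
}
```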
Struct efm32jg1b100_pac::DCB
===

```
pub struct DCB { /* private fields */ }
```

Debug Control Block

Implementations
---
impl DCB

`pub fn enable_trace(&mut self)` Enables TRACE. This is required, for example, by the `peripheral::DWT` cycle counter. As noted in ST's documentation, this flag is not reset on soft-reset, only on power reset.
`pub fn disable_trace(&mut self)` Disables TRACE; see `DCB::enable_trace()` for more details.
`pub fn is_debugger_attached() -> bool` Is there a debugger attached? Note: this function is reported not to work on Cortex-M0 devices. Per the ARM v6-M Architecture Reference Manual, "Access to the DHCSR from software running on the processor is IMPLEMENTATION DEFINED". Indeed, the Cortex-M0+ r0p1 Technical Reference Manual states: "Note Software cannot access the debug registers."

`pub const PTR: *const RegisterBlock = 0xe000_edf0 as *const cortex_m::peripheral::dcb::RegisterBlock` Pointer to the register block.
`pub const fn ptr() -> *const RegisterBlock` Deprecated since 0.7.5: use the associated constant `PTR` instead.

Trait Implementations
---
`Deref<Target = RegisterBlock>` and `Send`; auto trait and blanket implementations as for `ACMP0`.

Struct efm32jg1b100_pac::DWT
===

```
pub struct DWT { /* private fields */ }
```

Data Watchpoint and Trace unit

Implementations
---
impl DWT

`pub fn num_comp() -> u8` Number of comparators implemented; a value of zero indicates no comparator support.
`pub fn has_exception_trace() -> bool` Returns `true` if the implementation supports sampling and exception tracing.
`pub fn has_external_match() -> bool` Returns `true` if the implementation includes external match signals.
`pub fn has_cycle_counter() -> bool` Returns `true` if the implementation supports a cycle counter.
`pub fn has_profiling_counter() -> bool` Returns `true` if the implementation supports the profiling counters.
`pub fn enable_cycle_counter(&mut self)` Enables the cycle counter. The global trace enable (`DCB::enable_trace`) should be set before enabling the cycle counter; the processor may ignore writes to the cycle counter enable if the global trace is disabled (implementation-defined behaviour).
`pub fn disable_cycle_counter(&mut self)` Disables the cycle counter.
`pub fn cycle_counter_enabled() -> bool` Returns `true` if the cycle counter is enabled.
`pub fn get_cycle_count() -> u32` Deprecated since 0.7.4: use `cycle_count`, which follows the C-GETTER convention. Returns the current clock cycle count.
`pub fn cycle_count() -> u32` Returns the current clock cycle count.
`pub fn set_cycle_count(&mut self, count: u32)` Sets the cycle count.
`pub fn unlock()` Removes the software lock on the DWT. Some devices, like the STM32F7, software-lock the DWT after a power cycle.
`pub fn cpi_count() -> u8` Gets the CPI count, which counts additional cycles required to execute multi-cycle instructions (except those recorded by `lsu_count`) and any instruction fetch stalls.
`pub fn set_cpi_count(&mut self, count: u8)` Sets the CPI count.
`pub fn exception_count() -> u8` Gets the total cycles spent in exception processing.
`pub fn set_exception_count(&mut self, count: u8)` Sets the exception count.
`pub fn sleep_count() -> u8` Gets the total number of cycles that the processor is sleeping. ARM recommends that this counter counts all cycles when the processor is sleeping, regardless of whether a WFI or WFE instruction, or the sleep-on-exit functionality, caused the entry to sleep mode. However, all sleep features are implementation defined, and therefore when this counter counts is implementation defined as well.
`pub fn set_sleep_count(&mut self, count: u8)` Sets the sleep count.
`pub fn lsu_count() -> u8` Gets the additional cycles required to execute all load or store instructions.
`pub fn set_lsu_count(&mut self, count: u8)` Sets the LSU count.
`pub fn fold_count() -> u8` Gets the folded instruction count, which increments on each instruction that takes 0 cycles.
`pub fn set_fold_count(&mut self, count: u8)` Sets the folded instruction count.

`pub const PTR: *const RegisterBlock = 0xe000_1000 as *const cortex_m::peripheral::dwt::RegisterBlock` Pointer to the register block.
`pub const fn ptr() -> *const RegisterBlock` Deprecated since 0.7.5: use the associated constant `PTR` instead.

Trait Implementations
---
`Deref<Target = RegisterBlock>` and `Send`; auto trait and blanket implementations as for `ACMP0`.
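The ordering requirement called out under `enable_cycle_counter` is the one detail that trips people up, so here it is end to end: trace on in the DCB first, then the counter, then differences of `cycle_count`.

```
use efm32jg1b100_pac as pac;

// Count the cycles a closure takes, using the DCB/DWT pairing above.
fn cycles_of<F: FnOnce()>(cp: &mut pac::CorePeripherals, f: F) -> u32 {
    cp.DCB.enable_trace();          // must precede the cycle counter enable
    cp.DWT.enable_cycle_counter();
    let start = pac::DWT::cycle_count();
    f();
    pac::DWT::cycle_count().wrapping_sub(start)
}
```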
Struct efm32jg1b100_pac::EMU
===

```
pub struct EMU { /* private fields */ }
```

EMU

Implementations
---
`pub const PTR: *const RegisterBlock = 0x400e_3000 as *const emu::RegisterBlock` (pointer to the register block) and `pub const fn ptr() -> *const RegisterBlock` (returns it). Trait, auto trait, and blanket implementations as for `ACMP0`.

Struct efm32jg1b100_pac::FPB
===

```
pub struct FPB { /* private fields */ }
```

Flash Patch and Breakpoint unit

Implementations
---
`pub const PTR: *const RegisterBlock = 0xe000_2000 as *const cortex_m::peripheral::fpb::RegisterBlock` Pointer to the register block. `pub const fn ptr() -> *const RegisterBlock` Deprecated since 0.7.5: use the associated constant `PTR` instead.

Trait Implementations
---
`Deref<Target = RegisterBlock>` and `Send`; auto trait and blanket implementations as for `ACMP0`.
Struct efm32jg1b100_pac::GPCRC
===

```
pub struct GPCRC { /* private fields */ }
```

GPCRC (General Purpose CRC)

Implementations
---

### impl GPCRC

#### pub const PTR: *const RegisterBlock = 0x4001c000 as *const gpcrc::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Return the pointer to the register block

Trait Implementations: `Debug`, `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.
Struct efm32jg1b100_pac::GPIO
===

```
pub struct GPIO { /* private fields */ }
```

GPIO (General Purpose Input/Output)

Implementations
---

### impl GPIO

#### pub const PTR: *const RegisterBlock = 0x4000a000 as *const gpio::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Return the pointer to the register block

Trait Implementations: `Debug`, `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.
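The `PTR` constants above give raw, zero-cost access to a register block when the owned singleton is unavailable, for example in a panic or fault handler. A hedged sketch, using `GPIO` as the example:

```
// Sketch: raw access through the associated `PTR` constant. This bypasses
// the ownership guarantees of `Peripherals::take`, so all aliasing rules
// must be upheld manually; hence the `unsafe`.
use efm32jg1b100_pac::GPIO;

fn raw_gpio_access() {
    // SAFETY: assumption for this sketch, namely that no other code is
    // concurrently mutating the GPIO registers we touch.
    let gpio = unsafe { &*GPIO::PTR };
    let _ = gpio; // registers are then reached through the block's fields
}
```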
Struct efm32jg1b100_pac::I2C0
===

```
pub struct I2C0 { /* private fields */ }
```

I2C0 (Inter-Integrated Circuit interface 0)

Implementations
---

### impl I2C0

#### pub const PTR: *const RegisterBlock = 0x4000c000 as *const i2c0::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Return the pointer to the register block

Trait Implementations: `Debug`, `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.

Struct efm32jg1b100_pac::IDAC0
===

```
pub struct IDAC0 { /* private fields */ }
```

IDAC0 (current DAC 0)

Implementations
---

### impl IDAC0

#### pub const PTR: *const RegisterBlock = 0x40006000 as *const idac0::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Return the pointer to the register block

Trait Implementations: `Debug`, `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.
Struct efm32jg1b100_pac::ITM
===

```
pub struct ITM { /* private fields */ }
```

Instrumentation Trace Macrocell

Implementations
---

### impl ITM

#### pub const PTR: *mut RegisterBlock = 0xe0000000 as *mut cortex_m::peripheral::itm::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *mut RegisterBlock

Deprecated since 0.7.5: Use the associated constant `PTR` instead

Returns a pointer to the register block

Trait Implementations: `Deref<Target = RegisterBlock>`, `DerefMut` (mutably dereferences the value), `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.
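Because `ITM` also implements `DerefMut`, its stimulus ports can be written through an owned handle. A minimal sketch, assuming the `iprintln!` macro and the register block's `stim` field from the re-exported `cortex-m` crate, plus a debug probe configured to capture SWO output:

```
// Sketch: logging through ITM stimulus port 0. Output is only visible
// when the trace infrastructure (SWO pin, probe) is set up, which is
// outside the scope of this excerpt.
use cortex_m::{iprintln, peripheral::ITM};

fn itm_log(itm: &mut ITM) {
    iprintln!(&mut itm.stim[0], "hello from the ITM");
}
```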
Struct efm32jg1b100_pac::LDMA
===

```
pub struct LDMA { /* private fields */ }
```

LDMA (Linked DMA Controller)

Implementations
---

### impl LDMA

#### pub const PTR: *const RegisterBlock = 0x400e2000 as *const ldma::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Return the pointer to the register block

Trait Implementations: `Debug`, `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.

Struct efm32jg1b100_pac::LETIMER0
===

```
pub struct LETIMER0 { /* private fields */ }
```

LETIMER0 (Low Energy Timer 0)

Implementations
---

### impl LETIMER0

#### pub const PTR: *const RegisterBlock = 0x40046000 as *const letimer0::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Return the pointer to the register block

Trait Implementations: `Debug`, `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.
Struct efm32jg1b100_pac::LEUART0
===

```
pub struct LEUART0 { /* private fields */ }
```

LEUART0 (Low Energy UART 0)

Implementations
---

### impl LEUART0

#### pub const PTR: *const RegisterBlock = 0x4004a000 as *const leuart0::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Return the pointer to the register block

Trait Implementations: `Debug`, `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.

Struct efm32jg1b100_pac::MPU
===

```
pub struct MPU { /* private fields */ }
```

Memory Protection Unit

Implementations
---

### impl MPU

#### pub const PTR: *const RegisterBlock = 0xe000ed90 as *const cortex_m::peripheral::mpu::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Deprecated since 0.7.5: Use the associated constant `PTR` instead

Returns a pointer to the register block

Trait Implementations: `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.
Struct efm32jg1b100_pac::MSC
===

```
pub struct MSC { /* private fields */ }
```

MSC (Memory System Controller)

Implementations
---

### impl MSC

#### pub const PTR: *const RegisterBlock = 0x400e0000 as *const msc::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Return the pointer to the register block

Trait Implementations: `Debug`, `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.

Struct efm32jg1b100_pac::NVIC
===

```
pub struct NVIC { /* private fields */ }
```

Nested Vector Interrupt Controller

Implementations
---

### impl NVIC

#### pub fn request<I>(&mut self, interrupt: I) where I: InterruptNumber

Request an IRQ in software

Writing a value to the INTID field is the same as manually pending an interrupt by setting the corresponding interrupt bit in an Interrupt Set Pending Register. This is similar to `NVIC::pend`.

This method is not available on ARMv6-M chips.
#### pub fn mask<I>(interrupt: I) where I: InterruptNumber

Disables `interrupt`

#### pub unsafe fn unmask<I>(interrupt: I) where I: InterruptNumber

Enables `interrupt`

This function is `unsafe` because it can break mask-based critical sections

#### pub fn get_priority<I>(interrupt: I) -> u8 where I: InterruptNumber

Returns the NVIC priority of `interrupt`

*NOTE* NVIC encodes priority in the highest bits of a byte, so values like `1` and `2` map to the same priority. Also, for NVIC priorities a lower value (e.g. `16`) has higher priority (urgency) than a larger value (e.g. `32`).

#### pub fn is_active<I>(interrupt: I) -> bool where I: InterruptNumber

Is `interrupt` active or pre-empted and stacked

#### pub fn is_enabled<I>(interrupt: I) -> bool where I: InterruptNumber

Checks if `interrupt` is enabled

#### pub fn is_pending<I>(interrupt: I) -> bool where I: InterruptNumber

Checks if `interrupt` is pending

#### pub fn pend<I>(interrupt: I) where I: InterruptNumber

Forces `interrupt` into pending state

#### pub unsafe fn set_priority<I>(&mut self, interrupt: I, prio: u8) where I: InterruptNumber

Sets the "priority" of `interrupt` to `prio`

*NOTE* See the `get_priority` method for an explanation of how NVIC priorities work.

On ARMv6-M, updating an interrupt priority requires a read-modify-write operation. On ARMv7-M, the operation is performed in a single atomic write operation.

##### Unsafety

Changing priority levels can break priority-based critical sections (see `register::basepri`) and compromise memory safety.

#### pub fn unpend<I>(interrupt: I) where I: InterruptNumber

Clears `interrupt`'s pending state

### impl NVIC

#### pub const PTR: *const RegisterBlock = 0xe000e100 as *const cortex_m::peripheral::nvic::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Deprecated since 0.7.5: Use the associated constant `PTR` instead

Returns a pointer to the register block

Trait Implementations: `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.
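Taken together, these methods cover the usual interrupt bring-up. A hedged sketch; the `Interrupt::TIMER0` variant is illustrative and assumes this PAC's generated `Interrupt` enum (which implements `InterruptNumber`):

```
// Sketch: configure, enable, and software-pend one device interrupt.
use efm32jg1b100_pac::{Interrupt, NVIC};

fn bring_up_irq(nvic: &mut NVIC) {
    unsafe {
        // Lower numeric value = higher urgency; only the top bits of the
        // byte are significant, per `get_priority` above.
        nvic.set_priority(Interrupt::TIMER0, 32);
        // `unsafe` because unmasking can break mask-based critical sections.
        NVIC::unmask(Interrupt::TIMER0);
    }
    NVIC::pend(Interrupt::TIMER0); // force it pending, e.g. to test the handler
}
```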
Struct efm32jg1b100_pac::PCNT0
===

```
pub struct PCNT0 { /* private fields */ }
```

PCNT0 (Pulse Counter 0)

Implementations
---

### impl PCNT0

#### pub const PTR: *const RegisterBlock = 0x4004e000 as *const pcnt0::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Return the pointer to the register block

Trait Implementations: `Debug`, `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.

Struct efm32jg1b100_pac::PRS
===

```
pub struct PRS { /* private fields */ }
```

PRS (Peripheral Reflex System)

Implementations
---

### impl PRS

#### pub const PTR: *const RegisterBlock = 0x400e6000 as *const prs::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Return the pointer to the register block

Trait Implementations: `Debug`, `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.
Struct efm32jg1b100_pac::Peripherals
===

```
pub struct Peripherals {
    pub MSC: MSC,
    pub EMU: EMU,
    pub RMU: RMU,
    pub CMU: CMU,
    pub CRYPTO: CRYPTO,
    pub GPIO: GPIO,
    pub PRS: PRS,
    pub LDMA: LDMA,
    pub GPCRC: GPCRC,
    pub TIMER0: TIMER0,
    pub TIMER1: TIMER1,
    pub USART0: USART0,
    pub USART1: USART1,
    pub LEUART0: LEUART0,
    pub LETIMER0: LETIMER0,
    pub CRYOTIMER: CRYOTIMER,
    pub PCNT0: PCNT0,
    pub I2C0: I2C0,
    pub ADC0: ADC0,
    pub ACMP0: ACMP0,
    pub ACMP1: ACMP1,
    pub IDAC0: IDAC0,
    pub RTCC: RTCC,
    pub WDOG0: WDOG0,
}
```

All the peripherals

Fields
---

Each peripheral is exposed as a `pub` field of the same name: `MSC`, `EMU`, `RMU`, `CMU`, `CRYPTO`, `GPIO`, `PRS`, `LDMA`, `GPCRC`, `TIMER0`, `TIMER1`, `USART0`, `USART1`, `LEUART0`, `LETIMER0`, `CRYOTIMER`, `PCNT0`, `I2C0`, `ADC0`, `ACMP0`, `ACMP1`, `IDAC0`, `RTCC`, and `WDOG0`.

Implementations
---

### impl Peripherals

#### pub fn take() -> Option<Self>

Returns all the peripherals *once*

#### pub unsafe fn steal() -> Self

Unchecked version of `Peripherals::take`

Auto Trait Implementations: `RefUnwindSafe`, `Send`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.
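The intended entry point is `take`, which enforces single ownership of all peripherals at runtime. A minimal sketch:

```
// Sketch: the singleton pattern enforced by `Peripherals::take`.
use efm32jg1b100_pac as pac;

fn init() {
    // `take` returns `Some` exactly once; later calls yield `None`.
    let dp = pac::Peripherals::take().unwrap();
    // Individual peripherals are plain fields; each derefs to its register block.
    let _timer = &dp.TIMER0;

    // `steal` (unsafe) creates a second handle without that guarantee and is
    // meant for exceptional contexts such as fault handlers.
}
```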
Struct efm32jg1b100_pac::RMU
===

```
pub struct RMU { /* private fields */ }
```

RMU (Reset Management Unit)

Implementations
---

### impl RMU

#### pub const PTR: *const RegisterBlock = 0x400e5000 as *const rmu::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Return the pointer to the register block

Trait Implementations: `Debug`, `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.

Struct efm32jg1b100_pac::RTCC
===

```
pub struct RTCC { /* private fields */ }
```

RTCC (Real Time Counter and Calendar)

Implementations
---

### impl RTCC

#### pub const PTR: *const RegisterBlock = 0x40042000 as *const rtcc::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Return the pointer to the register block

Trait Implementations: `Debug`, `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.
Struct efm32jg1b100_pac::SCB
===

```
pub struct SCB { /* private fields */ }
```

System Control Block

Implementations
---

### impl SCB

#### pub fn vect_active() -> VectActive

Returns the active exception number

### impl SCB

#### pub fn enable_icache(&mut self)

Enables I-cache if currently disabled.

This operation first invalidates the entire I-cache.

#### pub fn disable_icache(&mut self)

Disables I-cache if currently enabled.

This operation invalidates the entire I-cache after disabling.

#### pub fn icache_enabled() -> bool

Returns whether the I-cache is currently enabled.

#### pub fn invalidate_icache(&mut self)

Invalidates the entire I-cache.

#### pub fn enable_dcache(&mut self, cpuid: &mut CPUID)

Enables D-cache if currently disabled.

This operation first invalidates the entire D-cache, ensuring it does not contain stale values before being enabled.

#### pub fn disable_dcache(&mut self, cpuid: &mut CPUID)

Disables D-cache if currently enabled.

This operation subsequently cleans and invalidates the entire D-cache, ensuring all contents are safely written back to main memory after disabling.

#### pub fn dcache_enabled() -> bool

Returns whether the D-cache is currently enabled.

#### pub fn clean_dcache(&mut self, cpuid: &mut CPUID)

Cleans the entire D-cache.

This function causes everything in the D-cache to be written back to main memory, overwriting whatever is already there.

#### pub fn clean_invalidate_dcache(&mut self, cpuid: &mut CPUID)

Cleans and invalidates the entire D-cache.

This function causes everything in the D-cache to be written back to main memory, and then marks the entire D-cache as invalid, causing future reads to first fetch from main memory.

#### pub unsafe fn invalidate_dcache_by_address(&mut self, addr: usize, size: usize)

Invalidates D-cache by address.

* `addr`: The address to invalidate, which must be cache-line aligned.
* `size`: Number of bytes to invalidate, which must be a multiple of the cache line size.

Invalidates D-cache cache lines, starting from the first line containing `addr`, finishing once at least `size` bytes have been invalidated. Invalidation causes the next read access to memory to be fetched from main memory instead of the cache.

##### Cache Line Sizes

Cache line sizes vary by core. For all Cortex-M7 cores, the cache line size is fixed to 32 bytes, which means `addr` must be 32-byte aligned and `size` must be a multiple of 32. At the time of writing, no other Cortex-M cores have data caches.

If `addr` is not cache-line aligned, or `size` is not a multiple of the cache line size, other data before or after the desired memory would also be invalidated, which can very easily cause memory corruption and undefined behaviour.

##### Safety

After invalidating, the next read of invalidated data will be from main memory. This may cause recent writes to be lost, potentially including writes that initialized objects. Therefore, this method may cause uninitialized memory or invalid values to be read, resulting in undefined behaviour. You must ensure that main memory contains valid and initialized values before invalidating.

`addr` **must** be aligned to the size of the cache lines, and `size` **must** be a multiple of the cache line size, otherwise this function will invalidate other memory, easily leading to memory corruption and undefined behaviour. This precondition is checked in debug builds using a `debug_assert!()`, but not checked in release builds to avoid a runtime-dependent `panic!()` call.
#### pub unsafe fn invalidate_dcache_by_ref<T>(&mut self, obj: &mut T)

Invalidates an object from the D-cache.

* `obj`: The object to invalidate.

Invalidates D-cache starting from the first cache line containing `obj`, continuing to invalidate cache lines until all of `obj` has been invalidated. Invalidation causes the next read access to memory to be fetched from main memory instead of the cache.

##### Cache Line Sizes

Cache line sizes vary by core. For all Cortex-M7 cores, the cache line size is fixed to 32 bytes, which means `obj` must be 32-byte aligned, and its size must be a multiple of 32 bytes. At the time of writing, no other Cortex-M cores have data caches.

If `obj` is not cache-line aligned, or its size is not a multiple of the cache line size, other data before or after the desired memory would also be invalidated, which can very easily cause memory corruption and undefined behaviour.

##### Safety

After invalidating, `obj` will be read from main memory on next access. This may cause recent writes to `obj` to be lost, potentially including the write that initialized it. Therefore, this method may cause uninitialized memory or invalid values to be read, resulting in undefined behaviour. You must ensure that main memory contains a valid and initialized value for T before invalidating `obj`.

`obj` **must** be aligned to the size of the cache lines, and its size **must** be a multiple of the cache line size, otherwise this function will invalidate other memory, easily leading to memory corruption and undefined behaviour. This precondition is checked in debug builds using a `debug_assert!()`, but not checked in release builds to avoid a runtime-dependent `panic!()` call.

#### pub unsafe fn invalidate_dcache_by_slice<T>(&mut self, slice: &mut [T])

Invalidates a slice from the D-cache.

* `slice`: The slice to invalidate.

Invalidates D-cache starting from the first cache line containing members of `slice`, continuing to invalidate cache lines until all of `slice` has been invalidated. Invalidation causes the next read access to memory to be fetched from main memory instead of the cache.

##### Cache Line Sizes

Cache line sizes vary by core. For all Cortex-M7 cores, the cache line size is fixed to 32 bytes, which means `slice` must be 32-byte aligned, and its size must be a multiple of 32 bytes. At the time of writing, no other Cortex-M cores have data caches.

If `slice` is not cache-line aligned, or its size is not a multiple of the cache line size, other data before or after the desired memory would also be invalidated, which can very easily cause memory corruption and undefined behaviour.

##### Safety

After invalidating, `slice` will be read from main memory on next access. This may cause recent writes to `slice` to be lost, potentially including the write that initialized it. Therefore, this method may cause uninitialized memory or invalid values to be read, resulting in undefined behaviour. You must ensure that main memory contains valid and initialized values for T before invalidating `slice`.

`slice` **must** be aligned to the size of the cache lines, and its size **must** be a multiple of the cache line size, otherwise this function will invalidate other memory, easily leading to memory corruption and undefined behaviour. This precondition is checked in debug builds using a `debug_assert!()`, but not checked in release builds to avoid a runtime-dependent `panic!()` call.
#### pub fn clean_dcache_by_address(&mut self, addr: usize, size: usize)

Cleans D-cache by address.

* `addr`: The address to start cleaning at.
* `size`: The number of bytes to clean.

Cleans D-cache cache lines, starting from the first line containing `addr`, finishing once at least `size` bytes have been cleaned. Cleaning the cache causes whatever data is present in the cache to be immediately written to main memory, overwriting whatever was in main memory.

##### Cache Line Sizes

Cache line sizes vary by core. For all Cortex-M7 cores, the cache line size is fixed to 32 bytes, which means `addr` should generally be 32-byte aligned and `size` should be a multiple of 32. At the time of writing, no other Cortex-M cores have data caches.

If `addr` is not cache-line aligned, or `size` is not a multiple of the cache line size, other data before or after the desired memory will also be cleaned. From the point of view of the core executing this function, memory remains consistent, so this is not unsound, but is worth knowing about.

#### pub fn clean_dcache_by_ref<T>(&mut self, obj: &T)

Cleans an object from the D-cache.

* `obj`: The object to clean.

Cleans D-cache starting from the first cache line containing `obj`, continuing to clean cache lines until all of `obj` has been cleaned. It is recommended that `obj` is both aligned to the cache line size and a multiple of the cache line size long, otherwise surrounding data will also be cleaned.

Cleaning the cache causes whatever data is present in the cache to be immediately written to main memory, overwriting whatever was in main memory.

#### pub fn clean_dcache_by_slice<T>(&mut self, slice: &[T])

Cleans a slice from D-cache.

* `slice`: The slice to clean.

Cleans D-cache starting from the first cache line containing members of `slice`, continuing to clean cache lines until all of `slice` has been cleaned. It is recommended that `slice` is both aligned to the cache line size and a multiple of the cache line size long, otherwise surrounding data will also be cleaned.

Cleaning the cache causes whatever data is present in the cache to be immediately written to main memory, overwriting whatever was in main memory.
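Cleaning by slice is the typical pattern before handing a buffer to a DMA engine on a cached core. A hedged sketch; note that the Cortex-M3 in this device family has no D-cache, so this mainly matters when the same code also targets a cached core such as the Cortex-M7:

```
// Sketch: write a buffer back to main memory before a DMA engine reads it.
// Assumption: running on a core with a D-cache; on cache-less cores there
// is nothing to clean.
use cortex_m::peripheral::SCB;

fn flush_before_dma(scb: &mut SCB, buf: &[u8]) {
    scb.clean_dcache_by_slice(buf);
    // ... start the DMA transfer that reads from `buf` here ...
}
```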
#### pub fn clean_invalidate_dcache_by_address(&mut self, addr: usize, size: usize)

Cleans and invalidates D-cache by address.

* `addr`: The address to clean and invalidate.
* `size`: The number of bytes to clean and invalidate.

Cleans and invalidates D-cache starting from the first cache line containing `addr`, finishing once at least `size` bytes have been cleaned and invalidated. It is recommended that `addr` is aligned to the cache line size and `size` is a multiple of the cache line size, otherwise surrounding data will also be cleaned.

Cleaning and invalidating causes data in the D-cache to be written back to main memory, and then marks that data in the D-cache as invalid, causing future reads to first fetch from main memory.

### impl SCB

#### pub fn set_sleepdeep(&mut self)

Set the SLEEPDEEP bit in the SCR register

#### pub fn clear_sleepdeep(&mut self)

Clear the SLEEPDEEP bit in the SCR register

### impl SCB

#### pub fn set_sleeponexit(&mut self)

Set the SLEEPONEXIT bit in the SCR register

#### pub fn clear_sleeponexit(&mut self)

Clear the SLEEPONEXIT bit in the SCR register

### impl SCB

#### pub fn sys_reset() -> !

Initiate a system reset request to reset the MCU

### impl SCB

#### pub fn set_pendsv()

Set the PENDSVSET bit in the ICSR register, which will pend the PendSV interrupt

#### pub fn is_pendsv_pending() -> bool

Check if the PENDSVSET bit in the ICSR register is set, meaning a PendSV interrupt is pending

#### pub fn clear_pendsv()

Set the PENDSVCLR bit in the ICSR register, which will clear a pending PendSV interrupt

#### pub fn set_pendst()

Set the PENDSTSET bit in the ICSR register, which will pend a SysTick interrupt

#### pub fn is_pendst_pending() -> bool

Check if the PENDSTSET bit in the ICSR register is set, meaning a SysTick interrupt is pending

#### pub fn clear_pendst()

Set the PENDSTCLR bit in the ICSR register, which will clear a pending SysTick interrupt

### impl SCB

#### pub fn get_priority(system_handler: SystemHandler) -> u8

Returns the hardware priority of `system_handler`

*NOTE*: Hardware priority does not exactly match logical priority levels. See `NVIC.get_priority` for more details.

#### pub unsafe fn set_priority(&mut self, system_handler: SystemHandler, prio: u8)

Sets the hardware priority of `system_handler` to `prio`

*NOTE*: Hardware priority does not exactly match logical priority levels. See `NVIC.get_priority` for more details.

On ARMv6-M, updating a system handler priority requires a read-modify-write operation. On ARMv7-M, the operation is performed in a single, atomic write operation.

##### Unsafety

Changing priority levels can break priority-based critical sections (see `register::basepri`) and compromise memory safety.

#### pub fn enable(&mut self, exception: Exception)

Enable the exception

If the exception is enabled, when the exception is triggered, the exception handler will be executed instead of the HardFault handler. This function is only allowed on the following exceptions:

* `MemoryManagement`
* `BusFault`
* `UsageFault`
* `SecureFault` (can only be enabled from Secure state)

Calling this function with any other exception will do nothing.

#### pub fn disable(&mut self, exception: Exception)

Disable the exception

If the exception is disabled, when the exception is triggered, the HardFault handler will be executed instead of the exception handler. This function is only allowed on the following exceptions:

* `MemoryManagement`
* `BusFault`
* `UsageFault`
* `SecureFault` (cannot be changed from Non-secure state)

Calling this function with any other exception will do nothing.

#### pub fn is_enabled(&self, exception: Exception) -> bool

Check if an exception is enabled

This function is only allowed on the following exceptions:

* `MemoryManagement`
* `BusFault`
* `UsageFault`
* `SecureFault` (cannot be read from Non-secure state)

Calling this function with any other exception will read `false`.

### impl SCB

#### pub const PTR: *const RegisterBlock = 0xe000ed04 as *const cortex_m::peripheral::scb::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Deprecated since 0.7.5: Use the associated constant `PTR` instead

Returns a pointer to the register block

Trait Implementations: `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.
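The SCR helpers above pair naturally with `wfi`, and `sys_reset` gives a portable reboot. A minimal sketch using the re-exported `cortex-m` API:

```
// Sketch: enter deep sleep on the next WFI, then (elsewhere) reboot.
use cortex_m::{asm, peripheral::SCB};

fn deep_sleep_once(scb: &mut SCB) {
    scb.set_sleepdeep();   // the next WFI enters the deep-sleep state
    asm::wfi();            // sleep until an interrupt wakes the core
    scb.clear_sleepdeep(); // back to normal sleep for later WFIs
}

fn reboot() -> ! {
    SCB::sys_reset() // requests a system reset; never returns
}
```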
Struct efm32jg1b100_pac::SYST
===

```
pub struct SYST { /* private fields */ }
```

SysTick: System Timer

Implementations
---

### impl SYST

#### pub fn clear_current(&mut self)

Clears current value to 0

After calling `clear_current()`, the next call to `has_wrapped()` will return `false`.
#### pub fn disable_counter(&mut self)

Disables counter

#### pub fn disable_interrupt(&mut self)

Disables SysTick interrupt

#### pub fn enable_counter(&mut self)

Enables counter

*NOTE* The reference manual indicates that: "The SysTick counter reload and current value are undefined at reset, the correct initialization sequence for the SysTick counter is: Program reload value, Clear current value, Program Control and Status register."

The sequence translates to `self.set_reload(x); self.clear_current(); self.enable_counter()`.

#### pub fn enable_interrupt(&mut self)

Enables SysTick interrupt

#### pub fn get_clock_source(&mut self) -> SystClkSource

Gets clock source

*NOTE* This takes `&mut self` because the read operation is side effectful and can clear the bit that indicates that the timer has wrapped (cf. `SYST.has_wrapped`)

#### pub fn get_current() -> u32

Gets current value

#### pub fn get_reload() -> u32

Gets reload value

#### pub fn get_ticks_per_10ms() -> u32

Returns the reload value with which the counter would wrap once per 10 ms

Returns `0` if the value is not known (e.g. because the clock can change dynamically).

#### pub fn has_reference_clock() -> bool

Checks if an external reference clock is available

#### pub fn has_wrapped(&mut self) -> bool

Checks if the counter wrapped (underflowed) since the last check

*NOTE* This takes `&mut self` because the read operation is side effectful and will clear the bit of the read register.

#### pub fn is_counter_enabled(&mut self) -> bool

Checks if counter is enabled

*NOTE* This takes `&mut self` because the read operation is side effectful and can clear the bit that indicates that the timer has wrapped (cf. `SYST.has_wrapped`)

#### pub fn is_interrupt_enabled(&mut self) -> bool

Checks if SysTick interrupt is enabled

*NOTE* This takes `&mut self` because the read operation is side effectful and can clear the bit that indicates that the timer has wrapped (cf. `SYST.has_wrapped`)

#### pub fn is_precise() -> bool

Checks if the calibration value is precise

Returns `false` if using the reload value returned by `get_ticks_per_10ms()` may result in a period significantly deviating from 10 ms.

#### pub fn set_clock_source(&mut self, clk_source: SystClkSource)

Sets clock source

#### pub fn set_reload(&mut self, value: u32)

Sets reload value

Valid values are between `1` and `0x00ffffff`.

*NOTE* To make the timer wrap every `N` ticks, set the reload value to `N - 1`

### impl SYST

#### pub const PTR: *const RegisterBlock = 0xe000e010 as *const cortex_m::peripheral::syst::RegisterBlock

Pointer to the register block

#### pub const fn ptr() -> *const RegisterBlock

Deprecated since 0.7.5: Use the associated constant `PTR` instead

Returns a pointer to the register block

Trait Implementations: `Deref<Target = RegisterBlock>`, `Send`. Auto Trait Implementations: `RefUnwindSafe`, `!Sync`, `Unpin`, `UnwindSafe`. Blanket Implementations: the standard set listed in full under `DWT` above.
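The note under `enable_counter` spells out the required ordering; here is a sketch of that sequence, assuming the core clock as the tick source:

```
// Sketch: the documented init sequence, i.e. reload, clear, enable.
use cortex_m::peripheral::{syst::SystClkSource, SYST};

fn start_systick(syst: &mut SYST, period_ticks: u32) {
    syst.set_clock_source(SystClkSource::Core);
    syst.set_reload(period_ticks - 1); // wrap every `period_ticks`; valid range 1..=0x00ff_ffff
    syst.clear_current();
    syst.enable_counter();

    while !syst.has_wrapped() {} // busy-wait one period; the read clears the flag
}
```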
Struct efm32jg1b100_pac::TIMER0
===

```
pub struct TIMER0 { /* private fields */ }
```

TIMER0

Implementations
---

### impl TIMER0

#### pub const PTR: *const RegisterBlock = 0x40018000 as *const timer0::RegisterBlock

Pointer to the register block.

#### pub const fn ptr() -> *const RegisterBlock

Returns a pointer to the register block.

Trait Implementations
---

### impl Debug for TIMER0

`fmt` formats the value using the given formatter.

### impl Deref for TIMER0

`type Target = RegisterBlock`; `deref` returns a reference to the register block.

### impl Send for TIMER0

Auto Trait Implementations
---

`RefUnwindSafe`, `!Sync`, `Unpin`, and `UnwindSafe` are implemented for `TIMER0`.

Blanket Implementations
---

The standard library blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>` (with `Error = Infallible`), and `TryInto<U>`.
Struct efm32jg1b100_pac::TIMER1
===

```
pub struct TIMER1 { /* private fields */ }
```

TIMER1

Implementations
---

### impl TIMER1

#### pub const PTR: *const RegisterBlock = 0x40018400 as *const timer1::RegisterBlock

Pointer to the register block.

#### pub const fn ptr() -> *const RegisterBlock

Returns a pointer to the register block.

Trait Implementations
---

### impl Debug for TIMER1

`fmt` formats the value using the given formatter.

### impl Deref for TIMER1

`type Target = RegisterBlock`; `deref` returns a reference to the register block.

### impl Send for TIMER1

Auto Trait Implementations
---

`RefUnwindSafe`, `!Sync`, `Unpin`, and `UnwindSafe` are implemented for `TIMER1`.

Blanket Implementations
---

The standard library blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>` (with `Error = Infallible`), and `TryInto<U>`.

Struct efm32jg1b100_pac::TPIU
===

```
pub struct TPIU { /* private fields */ }
```

Trace Port Interface Unit

Implementations
---

### impl TPIU

#### pub const PTR: *const RegisterBlock = 0xe0040000 as *const cortex_m::peripheral::tpiu::RegisterBlock

Pointer to the register block.

#### pub const fn ptr() -> *const RegisterBlock

Deprecated since 0.7.5: use the associated constant `PTR` instead. Returns a pointer to the register block.

Trait Implementations
---

### impl Deref for TPIU

`type Target = RegisterBlock`; `deref` returns a reference to the register block.

### impl Send for TPIU

Auto Trait Implementations
---

`RefUnwindSafe`, `!Sync`, `Unpin`, and `UnwindSafe` are implemented for `TPIU`.

Blanket Implementations
---

The standard library blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>` (with `Error = Infallible`), and `TryInto<U>`.
Struct efm32jg1b100_pac::USART0
===

```
pub struct USART0 { /* private fields */ }
```

USART0

Implementations
---

### impl USART0

#### pub const PTR: *const RegisterBlock = 0x40010000 as *const usart0::RegisterBlock

Pointer to the register block.

#### pub const fn ptr() -> *const RegisterBlock

Returns a pointer to the register block.

Trait Implementations
---

### impl Debug for USART0

`fmt` formats the value using the given formatter.

### impl Deref for USART0

`type Target = RegisterBlock`; `deref` returns a reference to the register block.

### impl Send for USART0

Auto Trait Implementations
---

`RefUnwindSafe`, `!Sync`, `Unpin`, and `UnwindSafe` are implemented for `USART0`.

Blanket Implementations
---

The standard library blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>` (with `Error = Infallible`), and `TryInto<U>`.
Struct efm32jg1b100_pac::USART1
===

```
pub struct USART1 { /* private fields */ }
```

USART1

Implementations
---

### impl USART1

#### pub const PTR: *const RegisterBlock = 0x40010400 as *const usart1::RegisterBlock

Pointer to the register block.

#### pub const fn ptr() -> *const RegisterBlock

Returns a pointer to the register block.

Trait Implementations
---

### impl Debug for USART1

`fmt` formats the value using the given formatter.

### impl Deref for USART1

`type Target = RegisterBlock`; `deref` returns a reference to the register block.

### impl Send for USART1

Auto Trait Implementations
---

`RefUnwindSafe`, `!Sync`, `Unpin`, and `UnwindSafe` are implemented for `USART1`.

Blanket Implementations
---

The standard library blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>` (with `Error = Infallible`), and `TryInto<U>`.

Struct efm32jg1b100_pac::WDOG0
===

```
pub struct WDOG0 { /* private fields */ }
```

WDOG0

Implementations
---

### impl WDOG0

#### pub const PTR: *const RegisterBlock = 0x40052000 as *const wdog0::RegisterBlock

Pointer to the register block.

#### pub const fn ptr() -> *const RegisterBlock

Returns a pointer to the register block.

Trait Implementations
---

### impl Debug for WDOG0

`fmt` formats the value using the given formatter.

### impl Deref for WDOG0

`type Target = RegisterBlock`; `deref` returns a reference to the register block.

### impl Send for WDOG0

Auto Trait Implementations
---

`RefUnwindSafe`, `!Sync`, `Unpin`, and `UnwindSafe` are implemented for `WDOG0`.

Blanket Implementations
---

The standard library blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>` (with `Error = Infallible`), and `TryInto<U>`.
Enum efm32jg1b100_pac::Interrupt
===

```
#[repr(u16)]
pub enum Interrupt {
    EMU,
    WDOG0,
    LDMA,
    GPIO_EVEN,
    TIMER0,
    USART0_RX,
    USART0_TX,
    ACMP0,
    ADC0,
    IDAC0,
    I2C0,
    GPIO_ODD,
    TIMER1,
    USART1_RX,
    USART1_TX,
    LEUART0,
    PCNT0,
    CMU,
    MSC,
    CRYPTO,
    LETIMER0,
    RTCC,
    CRYOTIMER,
}
```

Enumeration of all the interrupts.

Variants
---

* `EMU` = 0
* `WDOG0` = 2
* `LDMA` = 8
* `GPIO_EVEN` = 9
* `TIMER0` = 10
* `USART0_RX` = 11
* `USART0_TX` = 12
* `ACMP0` = 13
* `ADC0` = 14
* `IDAC0` = 15
* `I2C0` = 16
* `GPIO_ODD` = 17
* `TIMER1` = 18
* `USART1_RX` = 19
* `USART1_TX` = 20
* `LEUART0` = 21
* `PCNT0` = 22
* `CMU` = 23
* `MSC` = 24
* `CRYPTO` = 25
* `LETIMER0` = 26
* `RTCC` = 29
* `CRYOTIMER` = 31

Trait Implementations
---

### impl InterruptNumber for Interrupt

#### fn number(self) -> u16

Returns the interrupt number associated with this variant.

`Interrupt` also implements `Clone`, `Copy`, `Debug`, `PartialEq`, `Eq`, `StructuralEq`, and `StructuralPartialEq`.

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe` are implemented for `Interrupt`.

Blanket Implementations
---

The standard library blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>` (with `Error = Infallible`), and `TryInto<U>`.
Constant efm32jg1b100_pac::NVIC_PRIO_BITS
===

```
pub const NVIC_PRIO_BITS: u8 = 3;
```

Number of priority bits available in the NVIC for configuring interrupt priority.
# Chapter 1: Introduction

## Introduction

web2py[web2py] is a free, open-source framework for agile development of secure, database-driven web applications; it is written in Python[python] and programmable in Python. web2py is a full-stack framework, meaning that it contains all the components you need to build fully functional web applications.

web2py is designed to guide the web developer to follow good software engineering practices, such as using the Model View Controller (MVC) pattern. web2py separates the data representation (the model) from the data presentation (the view) and from the algorithms and workflow (the controller). web2py provides libraries to help the developer design, implement, and test each of these parts, and manages them so that they work together.

web2py is built with security in mind. This means it automatically addresses many of the issues that can lead to security vulnerabilities, by following well-established practices recommended by authoritative sources. For example, web2py validates all input (to prevent code injection), escapes all output (to prevent cross-site scripting), and renames uploaded files (to prevent directory traversal, or "dot dot slash", attacks). web2py takes care of the main security issues, so that developers have fewer chances of introducing vulnerabilities.

web2py includes a Database Abstraction Layer (DAL) that writes SQL[sql-w] dynamically so that you, the developer, do not have to. The DAL knows how to generate SQL transparently for SQLite[sqlite], MySQL[mysql], PostgreSQL[postgres], MSSQL[mssql], FireBird[firebird], Oracle[oracle], IBM DB2[db2], Informix[informix], and Ingres[ingresdb]. The DAL can also generate function calls for the Google Datastore when running on the Google App Engine (GAE)[gae]. Experimentally, more databases are supported and new ones are added all the time; switching engines only requires changing one connection string, as sketched at the end of this section. If you need an adapter for a database that is not listed, you can look for information about new adapters on the web2py site or on the mailing list. Once one or more database tables are defined, web2py automatically generates a fully functional web-based database administration interface to query the stored data.

web2py differs from other frameworks in that it is the only one to fully embrace the Web 2.0 paradigm, where the web is the computer. In fact, web2py requires no installation or configuration; it runs on any architecture that can run Python (Windows, Windows CE, Mac OS X, iOS, and Unix/Linux), and the development, deployment, and maintenance phases of an application can be done via a local or remote web interface. web2py runs with CPython (the C implementation) and PyPy (Python written in Python), for Python versions 2.5, 2.6, and 2.7.

web2py provides a ticketing system for error reporting. If an error occurs, a ticket is issued to the user, and the error is logged for the administrator. web2py is open source, released under the terms of the LGPL version 3 license.
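As a taste of the back-end portability described above, here is a minimal sketch of that connection-string switch (the PostgreSQL credentials `usuario:clave` and the database name `midb` are placeholders, not values from this book):

```
from gluon.dal import DAL  # not needed inside web2py, where DAL is already global

db = DAL('sqlite://storage.sqlite')                    # development
# db = DAL('postgres://usuario:clave@localhost/midb')  # production: same application code
```

Only the `DAL(...)` line changes; the table definitions and queries stay the same.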
Another important feature of web2py is that we, its developers, have committed to maintaining backward compatibility in future versions. We have done so since the first release of web2py in October 2007. New features have been added and bugs have been fixed, but if a program worked with web2py 1.0, that program still works the same way today, or even better.

Here are some examples of web2py statements that illustrate its power and simplicity. The following code:

```
db.define_table('persona', Field('nombre'), Field('imagen', 'upload'))
```

creates a database table called "persona" with two fields: "nombre", a string; and "imagen", something that needs to be uploaded (the actual image). If the table already exists but does not match this definition, it is altered as needed.

Given the table defined above, the following code:

```
formulario = SQLFORM(db.persona).process()
```

creates an insert form for that table that allows users to upload images. It also validates a submitted form, renames the uploaded image in a secure way, stores the image in a file, inserts the corresponding record into the database, prevents duplicate form submission, and, if the data submitted by the user does not pass validation, modifies the form itself by adding error messages.

web2py can also embed a complete, ready-to-use wiki, with tags, a corresponding tag cloud, the ability to embed different kinds of media, and oembed support.

Moreover, the following code:

```
@auth.requires_permission('read', 'persona')
def f():
    ....
```

prevents visitors from accessing the function `f` unless the visitor is a member of a group whose members have permission to "read" records of the table "persona". If the visitor is not logged in, they are redirected to the login page (provided by default by web2py).

web2py also supports components, i.e. actions that can be embedded in a view and interact with the visitor via Ajax without re-loading the whole page. This is done via a `LOAD` helper, which allows a truly modular application design; it is discussed in chapter 3 in the context of the wiki, and in some detail in the last chapter of this book.

This 5th edition of the book covers `web2py` 2.4.1 and later versions.

### Principles

Python programming typically follows these basic principles:

* Don't repeat yourself (DRY).
* There should be only one way of doing things.
* Explicit is better than implicit.

web2py strictly enforces the first two principles by forcing the developer to use sound software engineering practices that discourage the repetition of code. web2py guides the developer through almost all the tasks common in web application development (building and processing forms, managing sessions, cookies, errors, etc.).

web2py differs from other frameworks with regard to the third principle, which sometimes conflicts with the other two. In particular, web2py does not import user applications, but executes them in a predefined context. This context exposes the Python keywords as well as the web2py keywords. To some this might appear as magic, but it should not.
In practice, some modules are simply imported for you, without you having to do so. web2py tries to avoid the annoying characteristic of other frameworks that require the developer to import the same modules at the top of every model and every controller. By importing its own modules, web2py saves time and prevents mistakes and confusion, thus following the spirit of "don't repeat yourself" and of "there should be only one way of doing things". If the developer wishes to use other Python modules or third-party modules, those modules must be imported explicitly, as in any other Python program.

### Web Frameworks

At its most fundamental level, a web application consists of a set of programs (or functions) that are executed when a given URL is visited. The output of the program is returned to the visitor and rendered by the browser.

The purpose of web frameworks is to allow developers to build new applications quickly, easily, and without mistakes. This is done by providing APIs and tools that reduce and simplify the amount of code that is required.

The two classic approaches for developing web applications are:

* Generating HTML[html-w] [html-o] programmatically, on the fly.
* Embedding code in HTML pages.

The first technique is the one that was used, for example, in early CGI scripts. The second is the model used, for example, by PHP[php] (where the code is written in PHP, a C-like language), ASP (with code in Visual Basic), and JSP (which uses Java).

Here is an example of a PHP program that, when executed, retrieves data from a database and returns an HTML page showing the retrieved records:

```
<html><body><h1>Registros</h1><?
mysql_connect(localhost, usuario, clave);
@mysql_select_db(database) or die("Imposible recuperar datos");
$consulta = "SELECT * FROM contactos";
$resultado = mysql_query($consulta);
mysql_close();
$i = 0;
while ($i < mysql_numrows($resultado)) {
  $nombre = mysql_result($resultado, $i, "nombre");
  $telefono = mysql_result($resultado, $i, "telefono");
  echo "<b>$nombre</b><br>Teléfono:$telefono<br /><br /><hr /><br />";
  $i++;
}
?></body></html>
```

The problem with this approach is that the code is embedded in the HTML, but that same code also needs to generate additional HTML as well as the SQL statements to query the database, entangling multiple layers of the application and making it difficult to read and maintain. The situation is even worse for Ajax applications, and the complexity grows with the number of pages (files) that make up the application.

The functionality implemented by the code above can be expressed in web2py with two lines of Python code:

```
def index():
    return HTML(BODY(H1('Registros'), db().select(db.contactos.ALL)))
```

In this simple example, the HTML page structure is represented programmatically by the `HTML`, `BODY`, and `H1` objects; the database `db` is queried by the `select` command; finally, everything is serialized into HTML. Notice that `db` is not a keyword but a user-defined variable. We will use this name as a convention to refer to a database connection in general.
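To make the idea of programmatic HTML a bit more tangible, here is a small sketch of how such helper objects serialize (assuming a web2py shell where the `gluon` package is importable; `fragmento` is just a throwaway variable name):

```
from gluon.html import DIV, H1

# Helpers are nested Python objects; they only become HTML when serialized.
fragmento = DIV(H1('Registros'))
print fragmento.xml()
# -> <div><h1>Registros</h1></div>
```

Because the page is an object tree rather than a string, it can be built, inspected, and modified programmatically before being rendered.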
Web frameworks are typically categorized as one of two types: a "glued" framework is built by assembling (gluing together) several third-party components; a "full-stack" framework is built by creating components designed specifically to be tightly coupled and work together.

web2py is a full-stack framework. Almost all of its components are built from scratch and are designed to work together, but they also function just as well outside of web2py, as third-party components in other applications. For example, the Database Abstraction Layer (DAL) or the template language can be used independently of web2py by importing `gluon.dal` or `gluon.template` into your own applications. `gluon` is the name of the package that contains the system libraries. Some web2py libraries, such as those building and processing forms from database tables, have dependencies on other portions of web2py. web2py can also work with third-party Python libraries, including other template languages and database abstraction layers, but they will not be as tightly integrated as the original components.

### Model-View-Controller

web2py encourages the developer to separate data representation (the model), data presentation (the view), and the application workflow (the controller). Let's consider again the previous example and see how to build a web2py application around it. Here is an example of the MVC edit interface:

The typical workflow of a request in web2py is described by the following diagram:

In the diagram:

* The server can be the built-in web2py web server or a third-party server, such as Apache. The server handles multiple requests in multiple threads.
* "main" is the main WSGI application. It performs all the common tasks and wraps user applications. It deals with cookies, sessions, transactions, URL routing and reverse routing, and dispatching. It can serve and stream static files if the web server is not configured to do so.
* The Model, View, and Controller components make up the user application.
* Multiple applications can be hosted in the same web2py instance.
* The dashed arrows represent communication with the database engine (or engines). Database queries can be written in raw SQL (not recommended) or by using the Database Abstraction Layer (the recommended approach), so that application code does not depend on a specific database engine.
* The dispatcher maps the requested URL to a function call in the controller. The output of the function can be a string or a dictionary of name-value pairs (a hash table). The data in the dictionary is rendered into an HTML page. If the visitor requests the same page in XML, web2py looks for a view that can render the dictionary as XML. The developer can create views to render pages in any of the supported protocols (HTML, XML, JSON, RSS, CSV, and RTF) or in additional custom protocols.
* All calls are wrapped in a transaction, and any uncaught exception causes the transaction to be rolled back. If the request succeeds, the transaction is committed.
* web2py also handles sessions and session cookies automatically, and when a transaction is committed, the session is also stored, unless specified otherwise.
* It is possible to register recurrent tasks (via cron) to run at scheduled times and/or after the completion of certain actions. In this way it is possible to run long, compute-intensive tasks in the background without slowing down navigation.

Here is a minimal but complete MVC application, consisting of three files. A sketch of exercising this model interactively follows the listing.

"db.py" is the model:

```
db = DAL('sqlite://storage.sqlite')
db.define_table('contacto',
    Field('nombre'),
    Field('telefono'))
```

It connects to the database (in this example an SQLite database stored in the `storage.sqlite` file) and defines a table called `contacto`. If the table does not exist, web2py creates it and, transparently, generates the SQL in the appropriate dialect for the specific database engine used. The developer can see the generated SQL but does not need to change the code when the database back-end, which defaults to SQLite, is replaced by MySQL, PostgreSQL, MSSQL, FireBird, Oracle, DB2, Informix, Interbase, Ingres, or Google App Engine (both SQL and NoSQL).

Once a table is defined and created, web2py also generates a fully functional web-based database administration interface, called appadmin, to access the database and its tables.

"default.py" is the controller:

```
def contactos():
    grid = SQLFORM.grid(db.contacto, user_signature=False)
    return locals()
```

In web2py, URLs are mapped to Python modules and function calls. In this case, the controller contains a single function (or "action") called `contactos`. An action may return a string (the returned web page), a Python dictionary (a set of `key:value` pairs), or the set of local variables (as in this example). If the function returns a dictionary, it is passed to a view with the same name as the controller and function, which in turn renders the page. In this example, the function `contactos` generates a select/search/create/update/delete grid for table `db.contacto` and returns the grid to the view.

"default/contactos.html" is the view:

```
{{extend 'layout.html'}}
<h1>Administración de contactos</h1>
{{=grid}}
```

This view is invoked automatically by web2py after the associated controller function (action) is executed. The purpose of this view is to render the variables in the returned dictionary (in our case `grid`) into HTML. The view file is written in HTML, but it embeds Python code delimited by the special `{{` and `}}` delimiters. This is quite different from the PHP code example, because the only code embedded in the HTML is "presentation layer" code. The "layout.html" file referenced at the top of the view is provided by web2py and constitutes the basic page layout for all web2py applications. The layout file can easily be modified or replaced.
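Here is the promised sketch of exercising this model interactively; it assumes the application is called `app` (a placeholder name) and has been loaded in a web2py shell with `python web2py.py -S app -M`, where `-M` also executes the model files:

```
# Insert a record, query it back, and commit; all the SQL is generated
# by the DAL behind the scenes.
id_contacto = db.contacto.insert(nombre='Ana', telefono='555-0100')
fila = db(db.contacto.id == id_contacto).select().first()
print fila.nombre, fila.telefono
db.commit()  # unlike a web request, the shell does not auto-commit
```

The values `'Ana'` and `'555-0100'` are of course just placeholder data for the example.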
### Why web2py

web2py is one of many web application frameworks, but it has some compelling features that set it apart. web2py was originally developed as a teaching tool, with the following primary motivations:

* Make it easy for users to learn server-side web development without compromising on functionality. For this reason, web2py requires no installation and no configuration, has no dependencies (except for the source code distribution, which requires Python 2.5 and its standard library modules), and exposes most of its functionality via a web browser interface, including an integrated development environment with a debugger and a database interface.
* web2py has been stable from day one because it follows a top-down design; i.e. its API was designed before it was implemented. Even as new functionality has been added, web2py has never broken backward compatibility, and it will not break compatibility when additional functionality is added in the future.
* web2py proactively addresses the most important security issues that plague many modern web applications, as determined by OWASP[owasp], discussed below.
* web2py is lightweight. Its core libraries, including the Database Abstraction Layer, the template language, and the complete set of helpers, amount to 1.4MB. The entire source code, including sample applications and images, amounts to 10.4MB.
* web2py has a small footprint and is very fast. It uses the Rocket[rocket] WSGI web server developed by <NAME>. It is as fast as Apache with mod_wsgi, and supports SSL and IPv6.
* web2py uses Python syntax for models, controllers, and views, but it does not import models and controllers (as the other frameworks do); instead, it executes them. This means that apps can be installed, uninstalled, and modified without having to restart the web server (even in production), and different apps can coexist without their modules interfering with one another.
* web2py uses a Database Abstraction Layer (DAL) instead of an Object-Relational Mapper (ORM). From a conceptual point of view, this means that different database tables are mapped to different instances of one `Table` class and not to different classes, while records are mapped to instances of one `Row` class, not to instances of the corresponding table class. From a practical point of view, it means that SQL syntax maps one-to-one into DAL syntax, and there is no "black box" of complex metaclass programming, as is common in the most popular ORMs, which would add latency.

WSGI [wsgi-w, wsgi-o] (Web Server Gateway Interface) is the modern Python standard for communication between a web server and Python applications.

Here is a screenshot of the web2py administrative interface, admin:

### Security

The Open Web Application Security Project[owasp] (OWASP) is a free and open community dedicated to improving the security of application software. OWASP has listed the top security issues that put web applications at risk.
That list is reproduced below, along with a description of how each issue is addressed in web2py:

* Cross Site Scripting (XSS): XSS flaws occur whenever an application takes user-supplied data and sends it to a web browser without first validating or encoding that content. XSS allows attackers to execute scripts in the victim's browser, which can hijack user sessions, deface web sites, possibly introduce worms, etc. web2py, by default, escapes all variables rendered in the view, preventing XSS.
* Injection Flaws: Injection flaws, particularly SQL injection, are common in web applications. Injection occurs when user-supplied data is sent to an interpreter as part of a command or query. The attacker's hostile data tricks the interpreter into executing unintended commands or changing data. web2py includes a Database Abstraction Layer that makes SQL injection impossible. Normally, SQL statements are not written by the developer. Instead, SQL is generated dynamically by the DAL, ensuring that all inserted data is properly escaped.
* Malicious File Execution: Code vulnerable to remote file inclusion (RFI) allows attackers to include hostile code and data, resulting in devastating attacks, such as total server compromise. web2py allows only exposed functions to be executed, preventing malicious file execution. Imported functions are never exposed; only actions are exposed. web2py uses a web-based administrative interface that makes it very easy to keep track of which actions are exposed.
* Insecure Direct Object Reference: A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, database record, or key, as a URL or form parameter. Attackers can manipulate those references to access other objects without authorization. web2py does not expose any internal objects; moreover, web2py validates all URLs, thus preventing directory traversal attacks. web2py also provides a simple mechanism to create forms that automatically validate all input values.
* Cross Site Request Forgery (CSRF): A CSRF attack forces a logged-on victim's browser to send a pre-authenticated request to a vulnerable web application, which then forces the victim's browser to perform a hostile action to the benefit of the attacker. CSRF can be as powerful as the web application that it attacks. web2py prevents CSRF, as well as accidental double submission of forms, by assigning a unique random token to each form. Moreover, web2py uses UUIDs for session cookies.
* Information Leakage and Improper Error Handling: Applications can unintentionally leak information about their configuration or internal workings, or violate privacy through a variety of application problems. Attackers use these weaknesses to steal sensitive data or to conduct more serious attacks. web2py includes a ticketing system. No error can result in code being exposed to the user. All errors are logged and a ticket is issued to the user to allow error tracking. But error details are accessible only to the administrator.
* Broken Authentication and Session Management: Account credentials and session tokens are often not properly protected. Attackers compromise passwords, keys, or authentication tokens to assume other users' identities (for example, logging in as a registered user). web2py provides a built-in mechanism for administrator authentication, and it manages sessions independently for each application. The administrative interface also forces the use of secure session cookies when the client is not "localhost". For applications, it includes a powerful Role-Based Access Control (`RBAC`) API.
* Insecure Cryptographic Storage: Web applications rarely use cryptographic functions properly to protect data and credentials. Attackers use weakly protected data to conduct identity theft and other crimes, such as credit card fraud. web2py uses the MD5 or the HMAC+SHA-512 hash algorithms to protect stored passwords. Other algorithms are also available.
* Insecure Communications: Applications frequently fail to encrypt network traffic when it is necessary to protect sensitive communications. web2py includes the Rocket WSGI server with SSL support[ssl], but it can also use Apache or Lighttpd with mod_ssl to serve communications over SSL.
* Failure to Restrict URL Access: Frequently, an application only protects sensitive functionality by preventing the display of links or URLs to unauthorized users. Attackers can use this weakness to access and perform unauthorized operations by accessing those URLs directly. web2py maps URL requests to Python modules and functions, and provides a mechanism for declaring which functions are public and which require authentication and authorization. The built-in Role-Based Access Control API allows developers to restrict access to any function based on login, group membership, or group-based permissions. The permissions are fine-grained and can be combined with database filters to allow, for example, access to specific tables and/or records. web2py also allows digitally signed URLs and provides an API to digitally sign Ajax requests.

web2py has been reviewed for security, and the results can be found in ref.[pythonsecurity].
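To make the default XSS escaping mentioned above concrete, here is a small sketch (assuming a web2py shell; `malicioso` is just a throwaway variable name). Helpers escape plain strings when serializing; wrapping a string in `XML(...)` marks it as trusted and skips the escaping:

```
from gluon.html import DIV, XML

malicioso = '<script>alert("xss")</script>'
print DIV(malicioso).xml()       # escaped: &lt;script&gt;...&lt;/script&gt;
print DIV(XML(malicioso)).xml()  # raw output; only for content you trust
```

The same rule applies in views: `{{=variable}}` escapes by default, so untrusted input cannot inject markup unless the developer explicitly opts out with `XML`.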
### The package and its contents

You can download web2py from the official web site:

```
http://www.web2py.com
```

web2py is composed of the following components:

* libraries: provide the core functionality of web2py and can be accessed programmatically.
* web server: the Rocket WSGI web server.
* the admin application: used to create, design, and manage the other applications. admin provides a complete web-based Integrated Development Environment (IDE) for building web2py applications. It also includes other functionality, such as web-based testing and a web-based shell.
* the examples application: contains documentation and interactive examples. examples is a clone of the official web site, and includes the epydoc documentation.
* the welcome application: the basic scaffolding template for any other application. By default it includes a cascading CSS drop-down menu and an authentication system (discussed in Chapter 9).

web2py is distributed as source code, and in binary form for Microsoft Windows and for Mac OS X. The source code distribution can be used on any platform where Python runs and includes the above-mentioned components. To run the source code, you need Python 2.5 or 2.7 pre-installed on the system. You also need one of the supported database engines installed. For testing and light-demand applications, you can use the SQLite database, included with Python 2.7 installations.

The binary versions of web2py (for Windows and Mac OS X) include a Python 2.7 interpreter and the SQLite database. Technically speaking, these two are not components of web2py. Including them in the binary distributions allows you to run web2py out of the box.

The following image depicts the approximate structure of web2py:

At the bottom we find the interpreter. Moving up we find the web server (rocket), the libraries, and the applications. Each application consists of its own MVC design (models, controllers, views, languages, databases, and static files). Each application includes its own database administration interface (appadmin). Every web2py instance ships with three applications by default: welcome (the scaffolding app), admin (the browser-based IDE), and examples (the copy of the web site and the examples).
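For reference, a typical way to start the bundled server from the source distribution looks like the following sketch (the admin password `mi-clave` and the port are placeholders you choose yourself; `-a` sets the admin password, `-i` the IP address, and `-p` the port):

```
python web2py.py -a 'mi-clave' -i 127.0.0.1 -p 8000
```

Once started, pointing a browser at http://127.0.0.1:8000/ serves the welcome application, and the admin interface is reachable from the same address.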
Mientras lees este capítulo, puedes consultar el Capítulo 2 como referencia para la sintaxis general de Python o el resto de los capítulos para una referencia más detallada sobre las características que se usan. * Capítulo 4: describe en una forma más sistemática la estructura del núcleo y sus librerías: traducción de URL, solicitudes, respuestas, caché, planificador, tareas en segundo plano, traducción automática y el flujo de trabajo general o workflow. * Capítulo 5: es una guía de referencia para el lenguaje de plantillas usado para la creación de vistas. Muestra cómo embeber código de Python en HTML, e ilustra el uso de los ayudantes (objetos que pueden crear HTML). * Capítulo 6: describe la Capa de Abtracción de la Base de Datos, o DAL. La sintaxis de DAL se expone por medio de una serie de ejemplos. * Capítulo 7: describe los formularios, su validación y procesamiento. FORM es el ayudante de bajo nivel para creación de formularios. SQLFORM es el creador de formularios de alto nivel. En el Capítulo 7 además se describe la API para Crear/Leer/Modificar/Borrar (CRUD). * Capítulo 8: describe las funcionalidades de comunicaciones como el envío de correo y SMS y la lectura de bandejas de correo. * Capítulo 9: describe la autenticación y autorización y el mecanismo ampliable para Control de Acceso disponible en web2py. Además incluye la configuración de Mail (para correo electrónico) y CAPTCHA, ya que se usan en conjunto con el sistema de autenticación. En la tercera edición del libro hemos agregado un detalle pormenorizado de la integración con sistemas de autenticación de terceros como OpenID, OAuth, Google, Facebook, LinkedIn, etc. * Capítulo 10: trata sobre la creación de webservices con web2py. Daremos ejemplos de integración con el Google Web Toolkit a través de Pyjamas, y también para Adebe Flash y PyAMF. * Capítulo 11: incluye recetas para jQuery y web2py. web2py se diseñó principalmene para la programación del lado del servidor, pero incluye jQuery, ya que hemos comprobado que es la mejor librería de JavaScript de código abierto disponible para la creación de efectos y el uso de Ajax. En este capítulo, se describe el uso eficiente de jQuery en combinación con web2py. * Capítulo 12: describe los componentes y plugin de web2py como herramientas para la creación de aplicaciones modulares. Crearemos un plugin como ejemplo que implementa funcionalidades de uso común, como la generación de gráficas, comentarios y el uso de etiquetas. * Capítulo 13: trata sobre la implementación en producción de aplicaciones web2py. Se trata en especial sobre la implementación en servidores web LAMP (considerada la alternativa de más peso). Se describen además servicios alternativos y la configuración de la base de datos PostgreSQL. También se detalla cómo correr web2py como un servicio en un entorno Microsoft Windows, y la implementación en algunas plataformas específicas incluyendo Google App Engine, Heroku, y PythonAnywhere. Además, en este capítulo, hablaremos sobre cuestiones de escalabilidad y seguridad. * Capítulo 14: contiene varias recetas para realizar otras tareas específicas, incluyendo los upgrade (actualizaciones del núcleo), geocoding (aplicaciones geográficas), paginación, la API de Twitter, y más. Este libro sólo cubre las funcionalidades básicas y la API incorporada de web2py. Por otra parte, este libro no cubre las appliances (es decir, aplicaciones llave en mano). Puedes descargar esas aplicaciones desde el sitio web [appliances]. 
Se puede encontrar información complementaria en los hilos del grupo de usuarios[usergroup] (grupo de usuarios de web2py). También está disponible como referencia AlterEgo[alterego], el antiguo blog y listado de preguntas frecuentes de web2py. Este libro fue escrito utilizando la sintaxis markmin y fue automáticamente convertido a los formatos HTML, LaTex y PDF. ### Soporte¶ El canal principal para obtener soporte es el grupo de usuarios[usergroup], con docenas de publicaciones diarias. Incluso si eres un novato, no dudes en consultar - será un placer ayudarte. También disponemos de un sistema para el reporte de problemas (issue tracker) en https://github.com/web2py/web2py/issues . Por último pero no menos importante, puedes obtener soporte profesional (consulta la página web para más detalles). ### Cómo contribuir¶ Toda ayuda es realmente apreciada. Puedes ayudar a otros usuarios en el grupo de usuarios, o directamente enviando parches del programa (en el sitio de GitHub https://github.com/web2py/web2py). Incluso si encuentras un error de edición en este libro, o quieres proponer una mejora, la mejor forma de ayudar es haciendo un parche del libro en sí (que está en https://github.com/mdipierro/web2py-book). ### Normas de estilo¶ PEP8 [style] contiene un compendio de buenas prácticas de estilo para programar en Python. Encontrarás que web2py no siempre sigue estas reglas. Esto no se debe a omisiones o negligencia; nosotros creemos que los usuarios de web2py deberían seguir estas reglas y los alentamos a hacerlo. Hemos preferido no seguir algunas de estas reglas cuando definimos objetos ayudantes con el propósito de minimizar la posibilidad de conflictos con los nombres de objetos creados por el usuario. Por ejemplo, la clase que representa un `<div>` se llama `DIV` , mientras que según las reglas de estilo de Python se la debería haber llamado `Div` . Creemos que, para el ejemplo específico dado, el uso de mayúsculas en toda la palabra "DIV" es una elección más natural. Más aún, esta técnica le da libertad a los programadores para que puedan crear una clase llamada "Div", si así lo quisieran. Nuestra sintaxis además, coincide naturalmente con la notación DOM de la mayoría de los navegadores (incluyendo, por ejemplo, Firefox). Según la guía de estilo de Pyhton, todas las cadenas mayúsculas se deberían usar para constantes, no para variables. Siguiendo con nuestro ejemplo, incluso considerando que `DIV` es una clase, se trata de una clase especial que nunca debería ser modificada por el usuario porque el hacerlo implicaría una incompatibilidad con otras aplicaciones de web2py. Por lo tanto, creemos que esto autoriza el uso de `DIV` porque esa clase debería tratarse como constante, justificandose de esta forma la notación elegida. En resumen, se siguen las siguientes convenciones: * Los ayudantes HTML se notan con mayúsculas para toda la palabra como se describió anteriormente (por ejemplo `DIV` , `A` , `FORM` , `URL` ). * El objeto de traducción `T` se escribe con mayúscula aunque se trate de una instancia de la clase, no la clase en sí. Lógicamente, el objeto traductor realiza una acción similar a la de los ayudantes de HTML; este objeto opera en parte de la conversión (rendering) de la información presentada. Además, `T` debe ser fácil de identificar en el código fuente y debe tener un nombre corto. * Las clases de la DAL siguen el estándar de estilo de Python (mayúscula en la primera letra), por ejemplo `Table` , `Field` , `Query` , `Row` , `Rows` , etc. 
In all other cases, we believe we have followed, as much as possible, the Python Style Guide (PEP8). For example, all instance objects are lower case (request, response, session, cache), and all internal classes are capitalized.

In all the examples of this book, web2py keywords are shown in bold, while strings and comments are shown in italic.

### License

web2py is released under the terms of the LGPL version 3 license. The full text of the license is available in ref.[lgpl3].

In accordance with the LGPL you may:

* redistribute web2py with your apps (including official binary versions)
* release your applications which use the official web2py libraries under any license you choose.

Yet you must:

* make clear in the documentation that your application uses web2py
* release any modification of the web2py libraries under the LGPLv3 license.

The license includes the usual disclaimer:

THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING, THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. IN NO EVENT, UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING, WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM, INCLUDING BUT NOT LIMITED TO LOSS OR CORRUPTION OF DATA, LOSSES SUSTAINED BY YOU OR THIRD PARTIES, OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS, EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

Earlier versions

The earlier versions of web2py, 1.0.*-1.90.*, were released under the GPL2 license with a commercial exception which, for practical purposes, was very similar to the current LGPLv3.

Third-party software distributed with web2py

web2py contains third-party software in the gluon/contrib/ folder, as well as various JavaScript and CSS files. These files are distributed with web2py under the terms of their original licenses, as stated in the files themselves.

### Acknowledgments

web2py was originally developed by and copyrighted by <NAME>. The first version (1.0) was released in October 2007. Since then it has been adopted by many users, some of whom have also contributed bug reports, testing, debugging, patches, and proofreading of this book.
Some of the most important developers and contributors are, in alphabetical order by first name: <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Benjamin, <NAME>, <NAME>, Blomqvist, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Iceberg, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Limodou, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Pai, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Zahariash. I am sure I am forgetting somebody, so I apologize.

I particularly thank Anthony, Jonathan, Mariano, Bruno, Vladyslav, Martin, Nathan, Simone, Thadeus, Tim, Iceberg, Denes, Hans, Christian, Fran and Patrick for their major contributions to web2py, and Anthony, Alvaro, Brian, Bruno, Denes, <NAME>, Erwin, Felipe, Graham, Jonathan, Hans, Kyle, Mark, Margaret, Michele, Nico, Richard, Roberto, Robin, Roman, Scott, Shane, Sharriff, Sriram, Sterling, Stuart, Thadeus, Wen (and others) for proofreading various versions of this book. Their contribution is invaluable. If you find any errors in this book, they are exclusively my fault, probably introduced by a last-minute edit. I also thank <NAME> of Wiley Custom Learning Solutions for helping with the first edition of this book.

web2py contains code from the following authors, whom I would also like to thank: <NAME> for Python[python], <NAME>, <NAME>, <NAME> for the Rocket web server[rocket], <NAME> for EditArea[editarea], <NAME> for simplejson[simplejson], <NAME> and <NAME> for pyRTF[pyrtf], Dalke Scientific Software for pyRSS2Gen[pyrss2gen], <NAME> for feedparser[feedparser], <NAME> for markdown2[markdown2], <NAME> for fcgi.py, <NAME> for the Python memcache module[memcache], <NAME> for jQuery[jquery].
I want to thank <NAME> (Rector of DePaul University), <NAME> (Dean of the College of Computing and Digital Media of DePaul University), and <NAME> (member of MetaCryption LLC), for their continued trust and support.

Finally, I wish to thank my wife, Claudia, and my son, Marco, for putting up with me during the many hours I have spent developing web2py, exchanging emails with users and collaborators, and writing this book. This book is dedicated to them.

# Chapter 2: The Python language¶

### About Python¶

Python is a general-purpose, high-level programming language. Its design philosophy emphasizes programmer productivity and code readability. It has a minimalist core syntax with very few basic commands and simple semantics, but it also has a large and comprehensive standard library, including an Application Programming Interface (API) to many of the underlying operating system (OS) functions. Python code, while minimalist, defines built-in objects such as linked lists (`list`), tuples (`tuple`), hash tables (`dict`), and arbitrarily long integers (`long`).

Python supports multiple programming paradigms, including object-oriented (`class`), imperative (`def`), and functional (`lambda`) programming. Python has a dynamic type system and automatic memory management using reference counting (similar to Perl, Ruby, and Scheme).

Python was first released by <NAME> in 1991. The language has an open, community-based development model managed by the non-profit Python Software Foundation. There are many interpreters and compilers that implement the Python language, including one in Java (Jython) but, in this brief review, we focus on the reference C implementation created by Guido.

You can find many tutorials, the official documentation, and library references of the language on the official Python website. [python]

For additional Python references, we can recommend the books in ref. [guido] and ref. [lutz].

You may skip this chapter if you are already familiar with the Python language.

### Getting started¶

The binary distributions of web2py for Microsoft Windows or Apple OS X come packaged with the Python interpreter built into the distribution file itself.

You can start it on Windows with the following command (type at the DOS prompt):

```
web2py.exe -S welcome
```

On Apple OS X, enter the following command in a Terminal window (assuming you are in the same folder as web2py.app):

```
./web2py.app/Contents/MacOS/web2py -S welcome
```

On a Linux or other Unix box, chances are that you already have Python installed. If so, at the shell prompt type:

```
python web2py.py -S welcome
```

If you do not have Python 2.5 (or later 2.x) already installed, you will have to download and install it before running web2py.

The `-S welcome` command line option instructs web2py to run the interactive shell as if the commands were executed in a controller for the welcome application, the web2py scaffolding application. This makes almost all web2py classes, objects, and functions available to you. This is the only difference between the web2py interactive command line and the normal Python command line.
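For instance, once the shell is up you can poke at web2py's objects directly. A small illustrative session (the output shown is indicative of a default welcome app, not guaranteed verbatim):

```
>>> print request.application       # the mocked request targets the welcome app
welcome
>>> print URL('default', 'index')   # helpers such as URL are already in scope
/welcome/default/index
```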
The administrative interface also provides a web-based shell for each application. You can access the one for the "welcome" application at:

```
http://127.0.0.1:8000/admin/shell/index/welcome
```

You can try all the examples in this chapter using either the normal shell or the web-based shell.

### help, dir¶

The Python language provides two commands to obtain documentation about objects defined in the current scope, both built-in and user-defined.

We can ask for `help` about an object, for example "1":

```
>>> help(1)
Help on int object:

class int(object)
 |  int(x[, base]) -> integer
 |
 |  Convert a string or number to an integer, if possible. A floating point
 |  argument will be truncated towards zero (this does not include a string
 |  representation of a floating point number!) When converting a string, use
 |  the optional base. It is an error to supply a base when converting a
 |  non-string. If the argument is outside the integer range a long object
 |  will be returned instead.
 |
 |  Methods defined here:
 |
 |  __abs__(...)
 |      x.__abs__() <==> abs(x)
 ...
```

and, since "1" is an integer, we get a description of the `int` class and all its methods. Here the output has been truncated because it is very long and detailed.

Similarly, we can obtain a list of methods of the object "1" with the command `dir`:

```
>>> dir(1)
['__abs__', ..., '__xor__']
```

### Types¶

Python is a dynamically typed language, meaning that variables do not have a type and therefore do not have to be declared. Values, on the other hand, do have a type. You can query a variable for the type of value it contains:

```
>>> a = 3
>>> print type(a)
<type 'int'>
>>> a = 3.14
>>> print type(a)
<type 'float'>
>>> a = 'hola Python'
>>> print type(a)
<type 'str'>
```

Python also includes, natively, data structures such as lists and dictionaries.

`str`¶

Python supports two different types of strings: ASCII strings and Unicode strings. ASCII strings are delimited by '...', "...", '''...''' or """...""". Triple quotes delimit multiline strings. Unicode strings start with a `u` followed by the string containing Unicode characters. A Unicode string can be converted into an ASCII string by choosing an encoding, for example:

```
>>> a = 'esta es una cadena ASCII'
>>> b = u'esta es una cadena Unicode'
>>> a = b.encode('utf8')
```

After executing these three commands, the resulting `a` is an ASCII string storing UTF8-encoded characters. By design, web2py uses UTF8-encoded strings internally.

It is also possible to use variables in strings in different ways:

```
>>> print 'el número es ' + str(3)
el número es 3
>>> print 'el número es %s' % (3)
el número es 3
>>> print 'el número es %(numero)s' % dict(numero=3)
el número es 3
```

The last notation is more explicit and less error-prone, and it is to be preferred.

Many Python objects, for example numbers, can be serialized into strings using `str` or `repr`. These two commands are very similar but produce slightly different output. For example:

```
>>> for i in [3, 'hola']:
        print str(i), repr(i)
3 3
hola 'hola'
```

For user-defined classes, `str` and `repr` can be defined/redefined using the special operators `__str__` and `__repr__`. These are briefly described later on; for more, refer to the official Python documentation [pydocs]. `repr` always has a default value.
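As a minimal illustration of those two special operators (the class and its fields are invented for this example):

```
>>> class Punto(object):
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __str__(self):
            # what str() and print show; aimed at end users
            return '(%s, %s)' % (self.x, self.y)
        def __repr__(self):
            # what repr() and the interactive shell show; aimed at developers
            return 'Punto(%r, %r)' % (self.x, self.y)
>>> p = Punto(1, 2)
>>> print str(p), repr(p)
(1, 2) Punto(1, 2)
```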
Another important characteristic of a Python string is that, like a list, it is an iterable object:

```
>>> for i in 'hola':
        print i
h
o
l
a
```

`list`¶

The main methods of a Python list are append, insert, and delete:

```
>>> a = [1, 2, 3]
>>> print type(a)
<type 'list'>
>>> a.append(8)
>>> a.insert(2, 7)
>>> del a[0]
>>> print a
[2, 7, 3, 8]
>>> print len(a)
4
```

Lists can be sliced:

```
>>> print a[:3]
[2, 7, 3]
>>> print a[1:]
[7, 3, 8]
>>> print a[-2:]
[3, 8]
```

and concatenated:

```
>>> a = [2, 3]
>>> b = [5, 6]
>>> print a + b
[2, 3, 5, 6]
```

A list is iterable; you can loop over it:

```
>>> a = [1, 2, 3]
>>> for i in a:
        print i
1
2
3
```

The elements of a list do not have to be of the same type; they can be any type of Python object.

There is a very common situation in which a `list comprehension` can be used. Consider the following code:

```
>>> a = [1, 2, 3, 4, 5]
>>> b = []
>>> for x in a:
        if x % 2 == 0:
            b.append(x * 3)
>>> b
[6, 12]
```

This code clearly processes a list of items, selects and modifies a subset of the input list, and creates a new result list. It can be entirely replaced with the following list comprehension:

```
>>> a = [1, 2, 3, 4, 5]
>>> b = [x * 3 for x in a if x % 2 == 0]
>>> b
[6, 12]
```

`tuple`¶

A tuple is like a list, but its size and elements are immutable, while in a list they are mutable. If a tuple element is an object, the object's attributes are still mutable. A tuple is delimited by round brackets.

```
>>> a = (1, 2, 3)
```

So while this works for a list:

```
>>> a = [1, 2, 3]
>>> a[1] = 5
>>> print a
[1, 5, 3]
```

the element assignment does not work for a tuple:

```
>>> a = (1, 2, 3)
>>> print a[1]
2
>>> a[1] = 5
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
```

A tuple, like a list, is an iterable object. Notice that a tuple consisting of a single element must include a trailing comma, as shown below:

```
>>> a = (1)
>>> print type(a)
<type 'int'>
>>> a = (1,)
>>> print type(a)
<type 'tuple'>
```

Tuples are very useful for efficiently packing objects into groups because of their immutability, and the brackets are often optional:

```
>>> a = 2, 3, 'hola'
>>> x, y, z = a
>>> print x
2
>>> print z
hola
```

`dict`¶

A Python `dict` (dictionary) is a hash table that maps a key object to a value object. For example:

```
>>> a = {'k':'v', 'k2':3}
>>> a['k']
'v'
>>> a['k2']
3
>>> a.has_key('k')
True
>>> a.has_key('v')
False
```

Keys can be of any hashable type (int, string, or any object whose class implements the `__hash__` method). Values can be of any type. Different keys and values in the same dictionary do not have to be of the same type. If the keys are alphanumeric characters, a dictionary can also be declared with the alternative syntax:

```
>>> a = dict(k='v', h2=3)
>>> a['k']
'v'
>>> print a
{'k':'v', 'h2':3}
```

`has_key`, `keys`, `values`, and `items` are useful methods:

```
>>> a = dict(k='v', k2=3)
>>> print a.keys()
['k', 'k2']
>>> print a.values()
['v', 3]
>>> print a.items()
[('k', 'v'), ('k2', 3)]
```

The `items` method produces a list of tuples, each containing a key and its associated value.
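A convenience worth mentioning next to the methods above (and a common companion to the Python 2 `has_key` pattern, whose modern spelling is `in`): the `get` method looks a key up and falls back to a default instead of raising a `KeyError`. A short sketch:

```
>>> a = dict(k='v', k2=3)
>>> print a.get('k', 'ausente')   # key present: returns the value
v
>>> print a.get('x', 'ausente')   # key missing: returns the default
ausente
>>> print 'k' in a                # equivalent to a.has_key('k')
True
```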
List elements and dictionary entries can be deleted with `del`:

```
>>> a = [1, 2, 3]
>>> del a[1]
>>> print a
[1, 3]
>>> a = dict(k='v', h2=3)
>>> del a['h2']
>>> print a
{'k':'v'}
```

Internally, Python uses the `hash` operator to convert objects into integers, and uses that integer to determine where to store the value.

```
>>> hash("hola mundo")
-1500746465
```

### About indentation¶

Python uses indentation to delimit blocks of code. A block starts with a line ending in a colon, and continues for all lines that have the same or higher indentation as the next line. For example:

```
>>> i = 0
>>> while i < 3:
        print i
        i = i + 1
0
1
2
```

It is common to use four spaces for each level of indentation. It is a good policy not to mix tabs with spaces, which can result in (invisible) confusion.

`for...in`¶

In Python, you can loop over iterable objects:

```
>>> a = [0, 1, 'hola', 'python']
>>> for i in a:
        print i
0
1
hola
python
```

One common shortcut is `xrange`, which generates an iterable range without storing the entire list of elements:

```
>>> for i in xrange(0, 4):
        print i
0
1
2
3
```

This is equivalent to the C/C++/C#/Java syntax:

```
for(int i=0; i<4; i=i+1) { print(i); }
```

Another useful command is `enumerate`, which counts while looping:

```
>>> a = [0, 1, 'hola', 'python']
>>> for i, j in enumerate(a):
        print i, j
0 0
1 1
2 hola
3 python
```

There is also a function `range(a, b, c)` that returns a list of integers starting with the value `a`, incrementing by `c`, and ending with the last value smaller than `b`. `a` defaults to 0 and `c` defaults to 1. `xrange` is similar, but it does not actually generate the list, only an iterator over it; it is therefore better for building loops.

You can break out of a loop using `break`:

```
>>> for i in [1, 2, 3]:
        print i
        break
1
```

You can jump to the next loop iteration without executing the rest of the code block with `continue`:

```
>>> for i in [1, 2, 3]:
        print i
        continue
        print 'test'
1
2
3
```

`while`¶

The `while` loop in Python works much as it does in many other programming languages, by looping an indefinite number of times and testing a condition before each iteration. If the condition is `False`, the loop ends.

```
>>> i = 0
>>> while i < 10:
        i = i + 1
>>> print i
10
```

There is no `loop...until` construct in Python.

`if...elif...else`¶

```
>>> for i in range(3):
        if i == 0:
            print 'cero'
        elif i == 1:
            print 'uno'
        else:
            print 'otro'
cero
uno
otro
```

"elif" means "else if". Both the `elif` and `else` clauses are optional. There can be more than one `elif` but only one `else` statement. Complex conditions can be created using the `not`, `and`, and `or` operators:

```
>>> for i in range(3):
        if i == 0 or (i == 1 and i + 1 == 2):
            print '0 or 1'
```

`try...except...else...finally`¶

```
>>> try:
        a = 1 / 0
    except Exception, e:
        print 'epa: %s' % e
    else:
        print 'sin problemas aquí'
    finally:
        print 'listo'
epa: integer division or modulo by zero
listo
```

If an exception is raised, it is caught by the `except` clause, which is executed, while the `else` clause is not. If no exception is raised, the `except` clause is not executed, but the `else` clause is. The `finally` clause is always executed.
There can be multiple `except` clauses for different possible exceptions:

```
>>> try:
        raise SyntaxError
    except ValueError:
        print 'error en el valor'
    except SyntaxError:
        print 'error sintáctico'
error sintáctico
```

The `else` and `finally` clauses are optional.

Here is the list of built-in Python exceptions, plus `HTTP`, which is defined by web2py:

```
BaseException
 +-- HTTP (defined by web2py)
 +-- SystemExit
 +-- KeyboardInterrupt
 +-- Exception
      +-- GeneratorExit
      +-- StopIteration
      +-- StandardError
      |    +-- ArithmeticError
      |    |    +-- FloatingPointError
      |    |    +-- OverflowError
      |    |    +-- ZeroDivisionError
      |    +-- AssertionError
      |    +-- AttributeError
      |    +-- EnvironmentError
      |    |    +-- IOError
      |    |    +-- OSError
      |    |         +-- WindowsError (Windows)
      |    |         +-- VMSError (VMS)
      |    +-- EOFError
      |    +-- ImportError
      |    +-- LookupError
      |    |    +-- IndexError
      |    |    +-- KeyError
      |    +-- MemoryError
      |    +-- NameError
      |    |    +-- UnboundLocalError
      |    +-- ReferenceError
      |    +-- RuntimeError
      |    |    +-- NotImplementedError
      |    +-- SyntaxError
      |    |    +-- IndentationError
      |    |         +-- TabError
      |    +-- SystemError
      |    +-- TypeError
      |    +-- ValueError
      |         +-- UnicodeError
      |              +-- UnicodeDecodeError
      |              +-- UnicodeEncodeError
      |              +-- UnicodeTranslateError
      +-- Warning
           +-- DeprecationWarning
           +-- PendingDeprecationWarning
           +-- RuntimeWarning
           +-- SyntaxWarning
           +-- UserWarning
           +-- FutureWarning
           +-- ImportWarning
           +-- UnicodeWarning
```

For a detailed description of each of them, refer to the official Python documentation.

web2py exposes only one new exception, called `HTTP`. When raised, it causes the program to return an HTTP error page (for more on this, refer to Chapter 4).

Any object can be raised as an exception, but it is good practice to raise exceptions with objects that extend one of the built-in exception classes.

`def...return`¶

Functions are declared using `def`. Here is a typical Python function:

```
>>> def f(a, b):
        return a + b
>>> print f(4, 2)
6
```

There is no need (or way) to specify the types of the arguments or the return type(s). In this example, a function `f` is defined that takes two arguments.

Functions are the first syntactical element described in this chapter that introduces the concept of scope, or namespace. In the example above, the identifiers `a` and `b` are undefined outside the scope of function `f`:

```
>>> def f(a):
        return a + 1
>>> print f(1)
2
>>> print a
Traceback (most recent call last):
  File "<pyshell#22>", line 1, in <module>
    print a
NameError: name 'a' is not defined
```

Identifiers defined outside the function scope are accessible inside the function; notice how the identifier `a` is handled in the following code:

```
>>> a = 1
>>> def f(b):
        return a + b
>>> print f(1)
2
>>> a = 2
>>> print f(1)  # the new value of a is used
3
>>> a = 1  # redefine a
>>> def g(b):
        a = 2  # creates a new local a
        return a + b
>>> print g(2)
4
>>> print a  # the global a has not changed
1
```

If `a` is modified, subsequent function calls will use the new value of the global `a` because the function definition binds the storage location of the identifier `a`, not the value of `a` itself at the time of the function declaration; however, if `a` is assigned-to inside function `g`, the global `a` is unaffected because the new local `a` hides the global value.
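The example above shows that an assignment inside a function creates a new local name. For completeness, here is a small sketch of standard Python's `global` keyword, which lets a function rebind the outer name instead (not something the examples in this book rely on):

```
>>> a = 1
>>> def h():
        global a  # rebind the module-level a instead of creating a local one
        a = 2
>>> h()
>>> print a
2
```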
The external-scope reference can be used in the creation of closures:

```
>>> def f(x):
        def g(y):
            return x * y
        return g
>>> duplicador = f(2)  # duplicador is a new function
>>> triplicador = f(3)  # triplicador is a new function
>>> cuadruplicador = f(4)  # cuadruplicador is a new function
>>> print duplicador(5)
10
>>> print triplicador(5)
15
>>> print cuadruplicador(5)
20
```

Function `f` creates new functions; note that the scope of the name `g` is entirely internal to `f`. Closures are extremely powerful.

Function arguments can have default values, functions can return multiple results, and arguments can be passed explicitly by name, which means that the order of arguments specified in the call can differ from the order of arguments with which the function was defined (a short sketch of these features follows the `lambda` discussion below). Functions can also accept a variable number of run-time arguments:

```
>>> def f(*a, **b):
        return a, b
>>> x, y = f(3, 'hola', c=4, test='mundo')
>>> print x
(3, 'hola')
>>> print y
{'c':4, 'test':'mundo'}
```

Here arguments not passed by name (3, 'hola') are stored in the tuple `a`, and arguments passed by name (`c` and `test`) are stored in the dictionary `b`.

In the opposite case, a list or tuple can be passed to a function that requires individual ordered arguments by unpacking it:

```
>>> def f(a, b):
        return a + b
>>> c = (1, 2)
>>> print f(*c)
3
```

and a dictionary can be unpacked to deliver keyword arguments:

```
>>> def f(a, b):
        return a + b
>>> c = {'a':1, 'b':2}
>>> print f(**c)
3
```

`lambda`¶

`lambda` provides a way to create short unnamed functions with very little effort:

```
>>> a = lambda b: b + 2
>>> print a(3)
5
```

The expression "`lambda` [a]: [b]" literally reads as "a function with arguments [a] that returns [b]". The `lambda` expression is itself unnamed, but the function acquires a name by being assigned to the identifier `a`. The scoping rules for `def` apply to `lambda` equally, and in fact the code above, with respect to `a`, is identical to the function declaration using `def`:

```
>>> def a(b):
        return b + 2
>>> print a(3)
5
```

The only benefit of `lambda` is brevity; however, brevity can be very convenient in certain situations. Consider a function called `map` that applies a function to all items in a list, creating a new list:

```
>>> a = [1, 7, 2, 5, 4, 8]
>>> map(lambda x: x + 2, a)
[3, 9, 4, 7, 6, 10]
```

This code would have doubled in size had `def` been used instead of `lambda`. The main drawback of `lambda` is that (in the Python implementation) the syntax allows only a single expression; for longer functions, `def` can be used, and the extra cost of providing a function name decreases as the length of the function grows.

Just like `def`, `lambda` can be used to curry a function: new functions can be created by wrapping existing functions in such a way that the new function carries a different set of arguments:

```
>>> def f(a, b):
        return a + b
>>> g = lambda a: f(a, 3)
>>> g(2)
5
```

There are many situations where currying is useful, but one of them is particularly useful in web2py: caching.
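Before turning to the caching example, here is a small sketch rounding out the function-argument features mentioned above (default values, multiple results, named arguments) together with the currying idiom. The function names are invented for illustration, and `functools.partial` (in the standard library since Python 2.5) is an alternative spelling of the `lambda` wrapping just shown:

```
import functools

def f(a, b=2):
    return (a + b, a - b)   # b has a default value; two results are returned

x, y = f(5)                 # b takes its default: x is 7, y is 3
x, y = f(b=5, a=2)          # named arguments: call order differs from definition

def sumar(a, b):
    return a + b

g = functools.partial(sumar, b=3)  # same effect as: g = lambda a: sumar(a, 3)
print g(2)                         # prints 5
```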
Suppose we have an expensive function that checks whether its argument is a prime number:

```
def esprimo(numero):
    for p in range(2, numero):
        if (numero % p) == 0:
            return False
    return True
```

This function is obviously time-consuming.

Suppose we have a caching function `cache.ram` that takes three arguments: a key, a function, and a number of seconds.

```
valor = cache.ram('clave', f, 60)
```

The first time it is called, it calls the function `f()`, stores the output in a dictionary in memory (let's say "d"), and returns it, so that the value is:

```
valor = d['clave'] = f()
```

The second time it is called, if the key is in the dictionary and not older than the number of seconds specified (60), it returns the corresponding value without performing the function call again:

```
valor = d['clave']
```

How would we cache the output of the function esprimo for any input value? Like this:

```
>>> numero = 7
>>> segundos = 60
>>> print cache.ram(str(numero), lambda: esprimo(numero), segundos)
True
>>> print cache.ram(str(numero), lambda: esprimo(numero), segundos)
True
```

The output is always the same, but the first time `cache.ram` is called, `esprimo` is called; the second time, it is not.

Python functions, created with either `def` or `lambda`, allow re-factoring existing functions in terms of a different set of arguments. `cache.ram` and `cache.disk` are web2py caching functions.

`class`¶

Because Python is dynamically typed, Python classes and objects may seem odd. In fact, you do not need to define the member variables (attributes) when declaring a class, and different instances of the same class can have different attributes. Attributes are generally associated with the instance, not the class (except when declared as "class attributes", which are the same as the "static member variables" of C++/Java).

```
>>> class MiClase(object):
        pass
>>> miinstancia = MiClase()
>>> miinstancia.mivariable = 3
>>> print miinstancia.mivariable
3
```

Notice that `pass` is a do-nothing command. In this case it is used to define a class `MiClase` that contains nothing. `MiClase()` calls the constructor of the class (in this case the default constructor) and returns an object, an instance of the class. The `(object)` in the class definition indicates that our class extends the built-in `object` class. This is not required, but it is good practice.

Here is a more involved class:

```
>>> class MiClase(object):
        z = 2
        def __init__(self, a, b):
            self.x = a
            self.y = b
        def sumar(self):
            return self.x + self.y + self.z
>>> miinstancia = MiClase(3, 4)
>>> print miinstancia.sumar()
9
```

Functions declared inside the class are methods. Some methods have special reserved names. For example, `__init__` is the constructor. All variables are local variables of the method, except variables declared outside methods. For example, `z` is a "class variable", equivalent to a C++ "static member variable" that holds the same value for all instances of the class.

Notice that `__init__` takes 3 arguments and `sumar` takes one, and yet we call them with 2 and 0 arguments respectively.
The first argument represents, by convention, the local name used inside the method to refer to the current object, but we could have used any other name. `self` plays the same role as `*this` in C++ or `this` in Java, but `self` is not a reserved keyword. This syntax is necessary to avoid ambiguity when declaring nested classes, such as a class that is local to a method inside another class.

### Special attributes, methods and operators¶

Class attributes, methods, and operators starting with a double underscore (`__`) are usually intended to be private (i.e., to be used internally but not exposed outside the class), although this is a convention that is not enforced by the interpreter.

Some of them are reserved keywords and have a special meaning. Here, as an example, are three of them:

* `__len__`
* `__getitem__`
* `__setitem__`

They can be used, for example, to create a container object that acts like a list:

```
>>> class MiLista(object):
        def __init__(self, *a):
            self.a = list(a)
        def __len__(self):
            return len(self.a)
        def __getitem__(self, i):
            return self.a[i]
        def __setitem__(self, i, j):
            self.a[i] = j
>>> b = MiLista(3, 4, 5)
>>> print b[1]
4
>>> b[1] = 7
>>> print b.a
[3, 7, 5]
```

Other special operators include `__getattr__` and `__setattr__`, which define the get and set behavior for attributes of the class, and `__add__` and `__sub__`, which overload the arithmetic operators. For the use of these operators we refer the reader to more advanced texts on this topic. We have already mentioned the special operators `__str__` and `__repr__`.

### File input/output¶

In Python you can open and write to a file with:

```
>>> archivo = open('miarchivo.txt', 'w')
>>> archivo.write('hola mundo')
>>> archivo.close()
```

Similarly, you can read back from the file with:

```
>>> archivo = open('miarchivo.txt', 'r')
>>> print archivo.read()
hola mundo
```

Alternatively, you can read in binary mode with "rb", write in binary mode with "wb", and open the file in append mode with "a", using standard C notation.

The `read` command takes an optional argument, which is the number of bytes to read. You can also jump to any location in a file using `seek`, and read back from there with `read`:

```
>>> archivo.seek(5)
>>> print archivo.read()
mundo
```

and you can close the file with:

```
>>> archivo.close()
```

In the standard distribution of Python, known as CPython, variables are reference-counted, including those holding file handles, so CPython knows that when the reference count of an open file handle drops to zero, the file may be closed and the variable disposed of. However, in other implementations of Python such as PyPy, garbage collection is used instead of reference counting, and this means that it is possible for too many open file handles to accumulate at one time, resulting in an error before the garbage collector has a chance to close and dispose of them all. Therefore it is best to explicitly close file handles when they are no longer needed. web2py provides two helper functions, `read_file()` and `write_file()`, inside the `gluon.fileutils` namespace, that encapsulate file access and ensure that the file handles being used are properly closed.
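A standard-library complement to the explicit `close()` calls recommended above: since Python 2.6, the `with` statement closes a file automatically, even if an exception occurs, making it a handy alternative outside web2py's own helpers. A minimal sketch (the file name is just the one from the example above):

```
>>> with open('miarchivo.txt', 'r') as archivo:
        datos = archivo.read()
>>> print datos  # archivo is guaranteed to be closed at this point
hola mundo
```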
When using web2py, you do not know where the current directory is, because it depends on how the framework is configured. The variable `request.folder` contains the path to the current application. Paths can be concatenated with the `os.path.join` command, discussed below.

`exec`, `eval`¶

Unlike Java, Python is a truly interpreted language. This means it has the ability to execute Python statements stored in strings. For example:

```
>>> a = "print 'hola mundo'"
>>> exec(a)
'hola mundo'
```

What happened? The function `exec` tells the interpreter to call itself and execute the content of the string passed as argument. It is also possible to execute the content of a string within a context defined by the symbols of a dictionary:

```
>>> a = "print b"
>>> c = dict(b=3)
>>> exec(a, {}, c)
3
```

Here the interpreter, when executing the string `a`, sees the symbols defined in `c` (`b` in the example), but does not see `c` or `a` themselves. This is different from a restricted environment, since `exec` does not limit what the inner code can do; it just defines the set of variables visible to the code.

A related function is `eval`, which works very much like `exec` except that it expects the argument to evaluate to a value, and it returns that value:

```
>>> a = "3*4"
>>> b = eval(a)
>>> print b
12
```

`import`¶

Modules are loaded with `import`. For example, if we need to use a random number, we can do:

```
>>> import random
>>> print random.randint(0, 9)
5
```

This prints a random integer between 0 and 9 (including 9), 5 in the example. The function `randint` is defined in the module `random`. It is also possible to import an individual object from a module into the current namespace (`from random import randint`), to import all objects from a module into the current namespace (`from random import *`), or to import everything under a newly defined namespace:

```
>>> import random as myrand
>>> print myrand.randint(0, 9)
```

In the rest of this book, we will mainly use objects defined in the `os`, `sys`, `datetime`, `time`, and `cPickle` modules. All of the web2py objects are accessible via a module called `gluon`, and that is the subject of later chapters. Internally, web2py uses many Python modules (for example `thread`), but you rarely need to access them directly.

In the following subsections we describe those modules that are most useful.

`os`¶

This module provides an interface to the operating system API. For example:

```
>>> import os
>>> os.chdir('..')
>>> os.unlink('archivo_a_borrar')
```

Some of the `os` functions, such as `chdir`, MUST NOT be used in web2py because they are not thread-safe.

`os.path.join` is very useful; it allows the concatenation of paths to folders and files in an OS-independent way:

```
>>> import os
>>> a = os.path.join('ruta', 'sub_ruta')
>>> print a
ruta/sub_ruta
```

System environment variables can be accessed via:

```
>>> print os.environ
```

which is a read-only dictionary.

`sys`¶

The `sys` module contains many variables and functions, but the one we use the most is `sys.path`. It contains a list of paths where Python searches for modules. When we try to import a module, Python looks for it in all the folders listed in `sys.path`. If you install additional modules in some location and want Python to find them, you need to append the path to that location to `sys.path`:
```
>>> import sys
>>> sys.path.append('ruta/a/mis/módulos')
```

When running web2py, Python stays resident in memory, and there is only one `sys.path`, while there are many threads servicing the HTTP requests. To avoid a memory leak, it is best to check whether a path is already present before appending it:

```
>>> ruta = 'ruta/a/mis/módulos'
>>> if not ruta in sys.path:
        sys.path.append(ruta)
```

`datetime`¶

The use of the `datetime` module is best illustrated by some examples:

```
>>> import datetime
>>> print datetime.datetime.today()
2008-07-04 14:03:09
>>> print datetime.date.today()
2008-07-04
```

Occasionally you may need to time-stamp data based on the UTC time as opposed to the local time. In that case you can use the following function:

```
>>> import datetime
>>> print datetime.datetime.utcnow()
2008-07-04 14:03:09
```

The datetime module contains various classes: date, datetime, time, and timedelta. The difference between two date or two datetime or two time objects is a timedelta:

```
>>> a = datetime.datetime(2008, 1, 1, 20, 30)
>>> b = datetime.datetime(2008, 1, 2, 20, 30)
>>> c = b - a
>>> print c.days
1
```

In web2py, date and datetime are used to store the corresponding SQL types when passed to or returned from the database.

`time`¶

The time module differs from `date` and `datetime` because it represents time as seconds from the epoch (beginning of 1970):

```
>>> import time
>>> t = time.time()
>>> print t
1215138737.571
```

Refer to the Python documentation for conversion functions between time in seconds and time as a `datetime`.

`cPickle`¶

This is a very powerful module. It provides functions that can serialize almost any Python object, including self-referential objects. For example, let's build a weird object:

```
>>> class MiClase(object):
        pass
>>> miinstancia = MiClase()
>>> miinstancia.x = 'algo'
>>> a = [1, 2, {'hola':'mundo'}, [3, 4, [miinstancia]]]
```

and now:

```
>>> import cPickle
>>> b = cPickle.dumps(a)
>>> c = cPickle.loads(b)
```

In this example, `b` is a string representation of `a`, and `c` is a copy of `a` generated by de-serializing `b`. cPickle can also serialize to and de-serialize from a file:

```
>>> cPickle.dump(a, open('myfile.pickle', 'wb'))
>>> c = cPickle.load(open('myfile.pickle', 'rb'))
```

# Chapter 3: Overview¶

* Overview
* Getting started
* Simple examples
* An image blog
* A simple wiki
* The built-in web2py wiki
* MARKMIN basics
* The oembed protocol
* Creating links to wiki content
* Wiki menus
* Service functions
* Extending the auth.wiki feature
* Components
* More on admin
* Site
* About
* Edit
* Errors
* Mercurial
* The admin Wizard (experimental)
* Configuring admin
* Mobile admin
* More about appadmin

## Overview¶

### Getting started¶

web2py comes in binary packages for Windows and Mac OS X. These packages include the Python interpreter so you do not need to have it pre-installed. There is also a source code version that runs on Mac, Linux, and other Unix systems. The source code package assumes that the Python interpreter is already installed on the computer.

web2py requires no installation. To get started, unzip the downloaded zip file for your specific operating system and execute the corresponding `web2py` file.
On Unix and Linux (source code distribution), run the following command:

```
python web2py.py
```

On OS X (binary distribution), run:

```
open web2py.app
```

On Windows (binary distribution), run the following command:

```
web2py.exe
```

On Windows (source code distribution), run:

```
c:/Python27/python.exe web2py.py
```

Attention: to run the source code distribution of web2py on Windows you must first install Mark Hammond's Python for Windows extensions from

```
http://sourceforge.net/projects/pywin32/
```

The web2py program accepts various command line options which are discussed later.

By default, at startup, web2py displays a startup window and then a GUI widget that asks you to choose a one-time administrator password, the IP address of the network interface to be used for the web server, and a port number from which to serve requests. By default, web2py runs its web server on 127.0.0.1:8000 (port 8000 on localhost), but you can run it on any available IP address and port. You can query the IP address of your network interface by opening a command line and typing `ipconfig` on Windows or `ifconfig` on OS X and Linux. From now on we assume web2py is running on localhost (127.0.0.1:8000). Use 0.0.0.0:80 to run web2py publicly on any of your network interfaces.

If you do not provide an administrator password, the administration interface is disabled. This is a security measure to prevent publicly exposing the admin interface.

The administrative interface, admin, is only accessible from localhost unless you run web2py behind Apache with mod_proxy. If admin detects a proxy, the session cookie is set to secure and admin login does not work unless the communication between the client and the proxy goes over HTTPS; this is a security measure. All communications between the client and admin must always be local or encrypted; otherwise an attacker would be able to perform a man-in-the-middle attack or a replay attack and execute arbitrary code on the server.

After the administrator password has been set, web2py starts the web browser at the page:

```
http://127.0.0.1:8000/
```

If the computer does not have a default browser, open a web browser and enter the URL.

Clicking on "administrative interface" takes you to the login page for the administration interface. The administrator password is the password you chose at startup. Notice that there is only one administrator, and therefore only one administrator password. For security reasons, the developer is asked to choose a new password every time web2py starts unless the <recycle> option is specified. This is distinct from how passwords are handled in web2py applications.

After the administrator logs into web2py, the browser is redirected to the "site" page. This page lists all installed web2py applications and allows the administrator to manage them. web2py comes with three applications:

* An admin application, the one you are using right now.
* An examples application, with the online interactive documentation and a replica of the web2py official website.
* A welcome application. This is the basic template for any other web2py application. It is referred to as the scaffolding application.
This is also the application that welcomes a user at startup.

Ready-to-use web2py applications are referred to as "appliances". There are many freely available appliances at [appliances]. web2py users are encouraged to submit new appliances, either in open-source or closed-source (compiled and packed) form.

From the "site" page of the admin application, you can perform the following operations:

* install an application by completing the form on the bottom right of the page. Give a name to the application, select the file containing a packaged application or the URL where the application is located, and click "submit".
* uninstall an application by clicking the corresponding button. There is a confirmation page.
* create a new application by choosing a name and clicking "create".
* package an application for distribution by clicking on the corresponding button. A downloaded application is a tar file containing everything, including the database. You should not untar this file; it is automatically unpacked by web2py when installed with admin.
* clean up an application's temporary files, such as sessions, errors, and cache files.
* enable/disable each application. When an application is disabled it cannot be called remotely, but it is not disabled locally (for calls from the same system). This means disabled applications can still be accessed behind a proxy server. An application is disabled by creating a file called "DISABLED" in the application folder. Users who try to access a disabled application will receive an HTTP 503 error. You can use routes_onerror to customize the error pages.
* EDIT an application.

When you create a new application using admin, it starts as a clone of the "welcome" scaffolding app, with a "models/db.py" that creates a SQLite database, connects to it, instantiates Auth, Crud, and Service, and configures them. It also provides a "controllers/default.py" which exposes the actions "index", "download", "user" for user management, and "call" for services. In what follows, we assume that these files have been removed; we will be creating apps from scratch.

web2py also comes with a wizard, described later in this chapter, that can write alternative scaffolding code for you automatically, based on layouts and plugins available on the web and on high-level descriptions of the models.

### Simple examples¶

# Say hello¶

Here, as an example, we create a simple web app that displays the message "Hola desde MiApp" to the user. We will call this application "miapp". We will also add a counter that counts how many times the same user visits the page.

You can create a new application simply by typing its name in the form on the top right of the site page in admin.

After you press the [create] button, the application is created as a copy of the built-in welcome application.

To run the new application, visit:

```
http://127.0.0.1:8000/miapp
```

Now you have a copy of the welcome application.

To edit an application, click on the edit button for the newly created application. The edit page tells you what is inside the application.
Every web2py application consists of certain files, most of which fall into one of the following categories:

* models: describe the data representation.
* controllers: describe the application logic and workflow.
* views: describe the data presentation.
* languages: describe how to translate the application presentation into other languages.
* modules: Python modules that belong to the application.
* static files: static images, CSS files [css-w,css-o,css-school], JavaScript files [js-w,js-b], etc.
* plugins: groups of files designed to work together.

Everything is neatly organized following the Model-View-Controller design pattern. Each section in the edit page corresponds to a subfolder in the application folder. Notice that clicking on a section heading collapses or expands the section. Folder names under static files are also collapsible. Each file listed in a section corresponds to a file physically located in the subfolder. Any operation performed on a file via the admin interface (create, edit, delete) can also be performed directly from the shell using your favorite editor.

The application contains other types of files (databases, session files, error files, etc.), but they are not listed on the edit page because they are not created or modified by the administrator; they are created and modified by the application itself.

The controllers contain the logic and workflow of the application. Every URL is mapped into a call to one of the functions in the controllers (actions). There are two default controllers: "appadmin.py" and "default.py". appadmin provides the database administrative interface; we do not need it now. "default.py" is the controller that you need to edit, the one that is called by default when no controller is specified in the URL. Edit the "index" function as follows:

```
def index():
    return "Hola desde MiApp"
```

Here is what the online editor looks like:

Save it and go back to the edit page. Click on the index link to visit the newly created page. When you visit the URL

```
http://127.0.0.1:8000/miapp/default/index
```

the index action in the default controller is called. It returns a string that the browser displays for us. It should look like this:

Now, modify the "index" function as follows:

```
def index():
    return dict(mensaje="Hola desde MiApp")
```

Also from the edit page, edit the view "default/index.html" (the view file associated with the action) and completely replace the existing contents of that file with the following:

```
<html>
  <head></head>
  <body>
    <h1>{{=mensaje}}</h1>
  </body>
</html>
```

Now the action returns a dictionary defining a `mensaje`. When an action returns a dictionary, web2py looks for a view with the name

```
[controller]/[function].[extension]
```

and executes it. Here `[extension]` is the requested extension. If no extension is specified, it defaults to "html", and that is what we will assume here. Under this assumption, the view is an HTML file that embeds Python code using special {{ }} tags. In particular, in the example, `{{=mensaje}}` instructs web2py to replace the tagged code with the value of `mensaje` returned by the action.
Notice that `mensaje` here is not a web2py keyword but is defined in the action. So far we have not used any web2py keywords.

If web2py does not find the requested view, it uses the "generic.html" view that comes with every application.

If an extension other than "html" is specified (for example "json"), and the view file "[controlador]/[función].json" is not found, web2py looks for the view "generic.json". web2py comes with generic.html, generic.json, generic.jsonp, generic.xml, generic.rss, generic.ics (for Mac Mail Calendar), generic.map (for embedding Google Maps), and generic.pdf (based on fpdf). These generic views can be modified for each application individually, and additional views can be added easily.

Generic views are a development tool. In production, every action should have its own view. In fact, by default, generic views are only enabled from localhost.

You can also specify a view with:

```
response.view = 'default/something.html'
```

More on this topic in Chapter 10.

If you go back to EDIT and click on index, you will now see the new HTML page:

# The debugging toolbar¶

For debugging purposes you can insert `{{=response.toolbar()}}` into the code of a view, and it will show you some useful information, including the request, response, and session objects, and a list of all database queries with their timing.

# Let's count¶

Now let's add a counter to this page that counts how many times the same visitor displays the page.

web2py automatically and transparently tracks visitors using sessions and cookies. For each new visitor, it creates a session and assigns a unique "session_id". The session object is a container for variables that are stored server-side. The unique id is sent to the browser via a cookie. When the visitor requests another page from the same application, the browser sends the cookie back, it is retrieved by web2py, and the corresponding session is restored.

To use the session, modify the default controller:

```
def index():
    if not session.contador:
        session.contador = 1
    else:
        session.contador += 1
    return dict(mensaje="Hola desde MiApp", contador=session.contador)
```

Notice that `contador` is not a web2py keyword but `session` is. We are asking web2py to check whether there is a contador variable in the session and, if not, to create one and set it to 1. If the counter is there, we ask web2py to increase it by 1. Finally we pass the value of the counter to the view.

A more compact way to code the same function is this:

```
def index():
    session.contador = (session.contador or 0) + 1
    return dict(mensaje="Hola desde MiApp", contador=session.contador)
```

Now modify the view to add a line that displays the value of the counter:

```
<html>
  <head></head>
  <body>
    <h1>{{=mensaje}}</h1>
    <h2>Cantidad de visitas: {{=contador}}</h2>
  </body>
</html>
```

When you visit the index page again (and again), you should get the following HTML page:

The counter is associated with each visitor, and is incremented each time the visitor reloads the page. Different visitors see different counters.
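As an aside that ties back to the generic views discussed earlier in this section: because `index` returns a dictionary, you can already see an alternative rendering without writing any view. Visiting (assuming generic views are enabled, which by default they are for localhost only)

```
http://127.0.0.1:8000/miapp/default/index.json
```

should return the same dictionary rendered through "generic.json", i.e. something like:

```
{"mensaje": "Hola desde MiApp", "contador": 3}
```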
# Say my name¶

Now let's create two pages (primera and segunda), where the first page builds a form, asks for the visitor's name, and redirects to the second page, which greets the visitor by name.

Write the corresponding actions in the default controller:

```
def primera():
    return dict()

def segunda():
    return dict()
```

Then create a view "default/primera.html" for the first action, and enter:

```
{{extend 'layout.html'}}
<h1>¿Cuál es tu nombre?</h1>
<form action="segunda">
  <input name="nombre_del_visitante" />
  <input type="submit" />
</form>
```

Finally, create a view "default/segunda.html" for the second action:

```
{{extend 'layout.html'}}
<h1>Hola {{=request.vars.nombre_del_visitante}}</h1>
```

In both views we have extended the basic "layout.html" view that comes with web2py. The layout view keeps the look and feel of the two pages consistent. The layout file can be easily edited and replaced, since it mainly contains HTML code.

If you now visit the first page and type your name:

and submit the form, you will receive a greeting:

# Postback¶

The mechanism for form submission that we used above is very common, but it is not good programming practice. All input should be validated and, in the above example, the burden of validation would fall on the second action. Thus the action that performs the validation is different from the action that generated the form. This tends to cause redundancy in the code.

A better pattern for form submission is to submit forms to the same action that generated them, in our example "primera". The "primera" action should receive the input variables, process them, store them server-side, and redirect the visitor to the "segunda" page, which retrieves the values. This mechanism is called a "postback".

Modify the default controller to implement self-submission:

```
def primera():
    if request.vars.nombre_del_visitante:
        session.visitor_name = request.vars.nombre_del_visitante
        redirect(URL('segunda'))
    return dict()
```

Then modify the view "default/primera.html":

```
{{extend 'layout.html'}}
¿Cuál es tu nombre?
<form>
  <input name="nombre_del_visitante" />
  <input type="submit" />
</form>
```

and the view "default/segunda.html" needs to retrieve the data from the `session` instead of from "request.vars":

```
{{extend 'layout.html'}}
<h1>Hola {{=session.visitor_name or "anónimo"}}</h1>
```

From the visitor's point of view, the self-submission behaves exactly the same as the previous implementation. We have not added validation yet, but it is now clear that validation should be performed by the first action.

This approach is also better because the name of the visitor stays in the session, and can be accessed by all actions and views in the application without having to pass it around explicitly.

Note that if the "segunda" action were ever called before the visitor's name is set, it would display "Hola anónimo" because `session.visitor_name` returns `None`. Alternatively, we could have added the following code to the controller (at the top, outside of any function, so that it runs for every action):

```
if not request.function=='primera' and not session.visitor_name:
    redirect(URL('primera'))
```

This is an ad hoc mechanism that you can use to enforce authorization in controllers, though see Chapter 9 for a more powerful method.

With web2py we can go one step further and ask web2py to generate the forms for us, including validation.
web2py provides helpers (FORM, INPUT, TEXTAREA, and SELECT/OPTION) with names matching the equivalent HTML tags. These helpers can be used to build forms either in the controller or in the view.

For example, here is one possible way to rewrite the first action:

```
def primera():
    formulario = FORM(INPUT(_name='nombre_del_visitante', requires=IS_NOT_EMPTY()),
                      INPUT(_type='submit'))
    if formulario.process().accepted:
        session.visitor_name = formulario.vars.nombre_del_visitante
        redirect(URL('segunda'))
    return dict(formulario=formulario)
```

where we are saying that the FORM tag contains two INPUT tags. The attributes of the tags are specified as named arguments starting with an underscore ("_"). The `requires` argument is not a tag attribute (because it does not start with an underscore), but it sets a validator for the value of nombre_del_visitante.

Here is yet another, better way to create the same form:

```
def primera():
    formulario = SQLFORM.factory(Field('nombre_del_visitante',
                                       label='¿Cuál es tu nombre?',
                                       requires=IS_NOT_EMPTY()))
    if formulario.process().accepted:
        session.visitor_name = formulario.vars.nombre_del_visitante
        redirect(URL('segunda'))
    return dict(formulario=formulario)
```

The `formulario` object can be easily serialized into HTML by embedding it in the "default/primera.html" view. The `formulario.process()` method applies the validators and returns the form itself. The `formulario.accepted` variable is set to True if the form was processed and passed validation. If the self-submitted form passes validation, it stores the variables in the session and redirects as before. If the form does not pass validation, error messages are inserted into it and shown to the user, as below:

In the next section we will show how forms can be generated automatically from a model.

In all our examples we have used the session to pass the user's name between the first and the second action. We could have used a different mechanism and passed the data as part of the redirect URL:

```
def primera():
    formulario = SQLFORM.factory(Field('nombre_del_visitante',
                                       requires=IS_NOT_EMPTY()))
    if formulario.process().accepted:
        nombre = formulario.vars.nombre_del_visitante
        redirect(URL('segunda', vars=dict(nombre=nombre)))
    return dict(formulario=formulario)

def segunda():
    nombre = request.vars.nombre or redirect(URL('primera'))
    return dict(nombre=nombre)
```

Mind that, in general, it is not a good idea to pass data between actions via the URL. It makes it harder to secure the data. It is safer to store the data in the session.

# Internationalization¶

Your code probably includes hardcoded strings, such as "¿Cuál es tu nombre?". You should be able to customize those strings without editing the source code, and in particular add translations in different languages. That way, if a visitor has the language preference of the browser set to Italian, web2py would use the Italian translation of the strings, if available. This feature of web2py is called "internationalization" and is described in more detail in the next chapter.

Here we just note that, in order to use this feature, you should mark the strings that need translation. This is done by wrapping a quoted string such as

```
"¿Cuál es tu nombre?"
```
# Internationalization¶

Your code probably includes hardcoded strings, such as "¿Cuál es tu nombre?". You should be able to customize such strings without editing the source code, and in particular to add translations in different languages. That way, if a visitor's browser is set to prefer, say, Italian, web2py will use the Italian translation of the strings, if available. This web2py feature is called internationalization and is described in more detail in the next chapter.

Here we only point out that, in order to use this feature, you must mark the strings that need translation. A quoted string such as

```
"¿Cuál es tu nombre?"
```

is marked by wrapping it with the `T` operator:

```
T("¿Cuál es tu nombre?")
```

You can also mark fixed strings in views for translation. For example,

```
<h1>¿Cuál es tu nombre?</h1>
```

becomes

```
<h1>{{=T("¿Cuál es tu nombre?")}}</h1>
```

It is good practice to do this for every string in the code (field labels, flash messages, etc.) except for the names of database tables and fields.

Once the strings have been identified and marked for translation, web2py takes care of almost everything else. The admin interface also provides a page where you can translate each string into every language you intend to support.

web2py includes a powerful pluralization engine, described in the next chapter, which is integrated with both the internationalization engine and the markmin renderer.

### An image blog¶

As a new example, we now want to create a web application that allows the administrator to post images and give them names, and that allows visitors of the web site to view the named images and to submit comments (posts).

As before, from the site page in admin, create a new application called `imagenes`, and navigate to its "edit" page:

We start by creating a model, a representation of the application's persistent data (the images to be uploaded, their names, and the comments). First you need to create/edit a model file which, for lack of imagination, we will call "db.py". We assume the code below will replace any existing code in "db.py". Models and controllers must have `.py` as their extension since they are Python source code. If the extension is not provided, web2py appends it automatically. Views instead have `.html` as their extension since they mainly contain HTML code.

Edit the "db.py" file by clicking the corresponding "edit" button:

and enter the following:

```
db = DAL("sqlite://storage.sqlite")

db.define_table('imagen',
                Field('titulo', unique=True),
                Field('archivo', 'upload'),
                format = '%(titulo)s')

db.define_table('publicacion',
                Field('imagen_id', 'reference imagen'),
                Field('autor'),
                Field('email'),
                Field('cuerpo', 'text'))

db.imagen.titulo.requires = IS_NOT_IN_DB(db, db.imagen.titulo)
db.publicacion.imagen_id.requires = IS_IN_DB(db, db.imagen.id, '%(titulo)s')
db.publicacion.autor.requires = IS_NOT_EMPTY()
db.publicacion.email.requires = IS_EMAIL()
db.publicacion.cuerpo.requires = IS_NOT_EMPTY()

db.publicacion.imagen_id.writable = db.publicacion.imagen_id.readable = False
```

Let us analyze it line by line.

Line 1 defines a global variable called `db` that represents the database connection. Here it is a connection to a SQLite database stored in the file "applications/imagenes/databases/storage.sqlite". If the SQLite database file does not exist, it is created. You can change the name of the file, as well as the name of the global variable `db`, but it is convenient to give them the same name, so they are easier to remember.

Lines 3-6 define an "imagen" table. `define_table` is a method of the `db` object. The first argument, "imagen", is the name of the table we are defining. The other arguments are the fields belonging to the table. This table has a field called "titulo", a field called "archivo", and a field called "id" that serves as primary key ("id" is not declared explicitly because all tables have an id field by default).
El campo "titulo" es una cadena, y el campo "archivo" es de tipo "upload". "upload" es un tipo de campo especial usado por la Capa de Abstracción de Datos (DAL) de web2py para almacenar los nombres de los archivos subidos. web2py sabe como subir archivos (por medio de streaming si son grandes), cambiarles el nombre por seguridad, y almacenarlos. Cuando se define una tabla, web2py realiza una de muchas acciones posibles: * Si la tabla no existe, la tabla es creada; * Si la tabla existe y no se corresponde con la definición, la tabla se modifica apropiadamente, y si un campo tiene un tipo distinto, web2py intenta la conversión de sus contenidos. * Si la tabla existe y se corresponde con la definición, web2py no realiza ninguna acción. Este comportamiento se llama "migración" (migration). En web2py las migraciones son automáticas, pero se pueden deshabilitar por tabla pasando `migrate=False` como último argumento de `define_table` . La línea 6 define una cadena de formato para la tabla. Esto determina cómo un registro debería representarse como cadena. Nótese que el argumento `format` también puede ser una función que toma un registro y devuelve una cadena. Por ejemplo: ``` format=lambda registro: registro.titulo ``` Las líneas 8-12 definen otra tabla llamada "publicacion". Una publicación tiene un "autor", un "email" (vamos a guardar la dirección de correo electrónico del autor de la publicación), un "cuerpo" de tipo "text" (queremos utilizarlo para guardar el texto en sí publicado por el autor), y un campo "imagen_id" de tipo reference que apunta a `db.imagen` por medio del campo "id". En la línea 14, `db.imagen.titulo` representa el campo "titulo" de la tabla "imagen". El atributo `requires` te permite configurar requerimientos/restricciones que se controlarán por medio de los formularios de web2py. Aquí requerimos que "titulo" no se repita: ``` IS_NOT_IN_DB(db, db.imagen.titulo) ``` Ten en cuenta que esto es opcional porque se configura automáticamente siempre que se especifique ``` Field('titulo', unique=True) ``` . Los objetos que representan estas restricciones se llaman validadores (validators). Se pueden agrupar múltiples validadores en una lista. Los validadores se ejecutan en el orden en que se especifican. `IS_NOT_IN_DB(a, b)` es un validador especial que comprueba que el valor de un campo `b` para un nuevo registro no exista previamente en `a` . La línea 15 requiere que el campo "imagen_id" de la tabla "publicacion" esté en `db.imagen.id` . En lo que respecta a la base de datos, ya lo habíamos declarado al definir la tabla "publicacion". Ahora estamos diciendo explícitamente al modelo que esta condición debería ser controlada por web2py también en el nivel del procesamiento de los formularios cuando se publica un comentario, para que los datos inválidos no se propaguen desde los formularios de ingreso a la base de datos. Además requerimos que la "imagen_id" se represente por el "titulo", `'%(titulo)s'` , del registro correspondiente. La línea 20 indica que el campo "imagen_id" de la tabla "publicacion" no debería mostrarse en formularios, `writable=False` y tampoco en formularios de sólo-lectura, `readable=False` . El significado de los validadores en las líneas 17-18 debería de ser obvio. 
Note that the validator

```
db.publicacion.imagen_id.requires = IS_IN_DB(db, db.imagen.id, '%(titulo)s')
```

can be omitted (it would be set automatically) given that we specify a format for the referenced table:

```
db.define_table('imagen', ..., format='%(titulo)s')
```

where the format can be a string or a function that takes a record and returns a string.

Once a model is defined, if there are no errors, web2py creates an application interface to manage the database. You can access it via the "database administration" link on the "edit" page, or directly:

```
http://127.0.0.1:8000/imagenes/appadmin
```

Here is a screenshot of the appadmin interface:

This interface is implemented in the controller called "appadmin.py" and the corresponding "appadmin.html" view. From now on, we will refer to this interface simply as appadmin. It allows the administrator to insert new database records, edit and delete existing records, browse tables, and perform database joins.

The first time appadmin is accessed, the model is executed and the tables are created. The web2py DAL translates Python code into SQL statements specific to the database back end in use (SQLite in this example). You can see the generated SQL from the "edit" page by clicking on the "sql.log" link under "models". Note that the link is not present until the tables have been created.

If you edit the model and access appadmin again, web2py generates SQL to alter the existing tables. The generated SQL is logged in "sql.log".

Now go back to appadmin and try to insert a new record:

web2py has translated the "upload"-type field `db.imagen.archivo` into an upload form for the file. When the form is accepted and an image is uploaded, the file is renamed for security, preserving its extension, saved under the new name in the application's "uploads" folder, and the new name is stored in the `db.imagen.archivo` field. This process is designed to prevent directory traversal attacks.

Note that each field type is rendered by a "widget". Default widgets can be overridden.

When you click on a table name in appadmin, web2py performs a select of all records on the chosen table, identified by the DAL query `db.imagen.id > 0`, and renders the result. You can select a different set of records by editing the DAL query and pressing [Submit].

To edit or delete a single record, click on its id number.

Because of the `IS_IN_DB` validator, the reference field "imagen_id" is rendered as a drop-down menu. The items in the drop-down are keys (`db.imagen.id`), but each is represented by its `db.imagen.titulo` field, as specified when creating the validator. Validators are powerful objects that know how to represent fields, filter field values, generate errors, and format values extracted from the field.

The following figure shows what happens when you submit a form that does not pass validation:

The same forms that are automatically generated in appadmin can also be generated programmatically via the `SQLFORM` helper and embedded in user applications. These forms are CSS-friendly and can be customized.
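A minimal sketch of such a programmatically generated form (the action name `nueva_imagen` and the flash message are hypothetical, but the pattern is the same one this tutorial uses later):

```python
def nueva_imagen():
    # SQLFORM builds an insert form for db.imagen, enforcing its validators
    formulario = SQLFORM(db.imagen)
    if formulario.process().accepted:
        response.flash = 'imagen registrada'
    return dict(formulario=formulario)
```

A "default/nueva_imagen.html" view would only need `{{=formulario}}` to render it.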
Every application has its own appadmin; therefore appadmin itself can be modified without affecting other applications.

So far, the application knows how to store data, and we have seen how to access the database via appadmin. Access to appadmin is restricted to the administrator, and it is not intended as a production web interface for the application; hence the next part of this walk-through. Specifically, we want to create:

* An "index" page that lists all available images sorted by title and links to detail pages for the images.
* A "mostrar/[id]" page that shows the visitor the requested image and lets the visitor view and post comments.
* A "download/[nombre]" action to download uploaded images.

This is represented schematically here:

Go back to the "edit" page and edit the "default.py" controller, replacing its contents with the following:

```
def index():
    imagenes = db().select(db.imagen.ALL, orderby=db.imagen.titulo)
    return dict(imagenes=imagenes)
```

This action returns a dictionary. The keys of the items in the dictionary are interpreted as variables passed to the view associated with the action. During development, if there is no view, the action is rendered by the "generic.html" view that is provided with every web2py application.

The index action performs a select of all fields (`db.imagen.ALL`) from the imagen table, ordered by `db.imagen.titulo`. The result of the select is a `Rows` object containing the records. It is assigned to a local variable called `imagenes`, which the action returns to the view. `imagenes` is iterable and its elements are the selected rows. For each row, the columns can be accessed as in a dictionary:

```
imagenes[0]['titulo']
```

or, equivalently, as `imagenes[0].titulo`.

If you do not write a view, the dictionary is rendered by "views/generic.html", and a call to the index action would look like this:

Since you have not yet created a view for this action, web2py renders the set of records in a plain tabular form.

Now create a view for the index action. Return to admin, edit "default/index.html", and replace its content with the following:

```
{{extend 'layout.html'}}
<h1>Imágenes registradas</h1>
<ul>
{{for imagen in imagenes:}}
{{=LI(A(imagen.titulo, _href=URL("mostrar", args=imagen.id)))}}
{{pass}}
</ul>
```

The first thing to notice is that a view is pure HTML with special {{...}} tags. The code embedded in {{...}} is pure Python code with one caveat: indentation is irrelevant. Blocks of code start with lines ending in a colon (:) and end with lines beginning with the keyword `pass`. In some cases the end of a block is obvious from context and the use of `pass` is not required.

Lines 4-6 loop over the image rows and, for each row imagen, display:

```
LI(A(imagen.titulo, _href=URL('mostrar', args=imagen.id)))
```

This is a `<li>...</li>` tag that contains an `<a href="...">...</a>` tag, which contains the `imagen.titulo` field. The value of the hypertext reference (the href attribute) is `URL('mostrar', args=imagen.id)`, that is, the URL, within the same application and controller as the current request, of a function called "mostrar", called with a single argument, `args=imagen.id`.
`LI`, `A`, and the rest are web2py helpers mapped to the corresponding HTML tags. Their unnamed arguments are interpreted as objects to be serialized and inserted in the tag's inner HTML. Named arguments starting with an underscore (for example `_href`) are interpreted as tag attributes, without the underscore. For instance `_href` is the `href` attribute, `_class` is the `class` attribute, and so on.

As an example, the following statement:

```
{{=LI(A('algo', _href=URL('mostrar', args=123)))}}
```

is rendered as:

```
<li><a href="/imagenes/default/mostrar/123">algo</a></li>
```

A few helpers (`INPUT`, `TEXTAREA`, `OPTION`, and `SELECT`) also accept some special attributes that do not start with an underscore (`value` and `requires`). These matter when building forms and will be discussed later.

Go back to the "edit" page. It now indicates that "default.py" exposes "index". By clicking on "index", you can visit the newly created page:

```
http://127.0.0.1:8000/imagenes/default/index
```

which looks like this:

If you click on an image name link, the browser opens the address:

```
http://127.0.0.1:8000/imagenes/default/mostrar/1
```

and this results in an error, since you have not yet created an action called "mostrar" in the "default.py" controller. Let us edit the "default.py" controller again, replacing its content with:

```
def index():
    imagenes = db().select(db.imagen.ALL, orderby=db.imagen.titulo)
    return dict(imagenes=imagenes)

def mostrar():
    imagen = db.imagen(request.args(0, cast=int)) or redirect(URL('index'))
    db.publicacion.imagen_id.default = imagen.id
    formulario = SQLFORM(db.publicacion)
    if formulario.process().accepted:
        response.flash = 'tu comentario se ha publicado'
    comentarios = db(db.publicacion.imagen_id==imagen.id).select()
    return dict(imagen=imagen, comentarios=comentarios, formulario=formulario)
```

The controller contains two actions: "mostrar" and "download". The "mostrar" action selects the image with the `id` parsed from the request args and all the comments associated with that image. "mostrar" then passes everything to the view "default/mostrar.html".

The image id referenced by `URL('mostrar', args=imagen.id)` in "default/index.html" can be accessed as

```
request.args(0, cast=int)
```

from the "mostrar" action. The `cast=int` argument is optional but very important. It attempts to cast the string passed in the PATH_INFO to an int. On failure, it raises the appropriate exception instead of causing a ticket. One can also specify a redirect for when the cast fails:

```
request.args(0, cast=int, otherwise=URL('error'))
```

Also note that `db.imagen(...)` is a shortcut for

```
db(db.imagen.id==...).select().first()
```

The "download" action expects a file name in `request.args(0)`, builds the path to the location where that file is supposed to be stored, and sends it back to the client. If the file is too large, it is streamed without incurring any memory overhead.

Notice the following statements:

* Line 7 sets the default value of the reference field "imagen_id", which is not part of the input form because it was made not readable/writable in the model.
* Line 8 creates an insert form SQLFORM for the `db.publicacion` table; only the readable/writable fields appear in it.
* Line 9 processes the submitted form (the submitted form variables are in `request.vars`) within the current session (the session is used to prevent duplicate submissions and to enforce navigation). If the submitted form variables are validated, the new comment is inserted in the `db.publicacion` table; otherwise the form is modified to include error messages (for example, if the author's email address is invalid). This is all done in line 9!
* Line 10 is only executed when the form is accepted, after the record has been inserted into the database table. `response.flash` is a variable that is displayed in the views and is used to notify the visitor that something happened.
* Line 11 selects all comments that reference the current image.

The "download" action is already defined in the "default.py" controller of the scaffolding application. The "download" action does not return a dictionary, so it does not need a view. The "mostrar" action, though, should have one, so return to admin and create a new view called "default/mostrar.html".

Edit this file and replace its content with the following:

```
{{extend 'layout.html'}}
<h1>Imagen: {{=imagen.titulo}}</h1>
<center>
<img width="200px" src="{{=URL('download', args=imagen.archivo)}}" />
</center>
{{if len(comentarios):}}
  <h2>Comentarios</h2><br />
  {{for publicacion in comentarios:}}
    <p>{{=publicacion.autor}} dice <i>{{=publicacion.cuerpo}}</i></p>
  {{pass}}
{{else:}}
  <h2>Todavía no se han publicado comentarios</h2>
{{pass}}
<h2>Publica un comentario</h2>
{{=formulario}}
```

This view displays the imagen.archivo by calling the "download" action inside an `<img ... />` tag. If there are comments, it loops over them and displays each one.

Here is how everything will appear to a visitor:

When a visitor submits a comment via this page, the comment is stored in the database and appended at the bottom of the page.

# Adding authentication¶

The web2py API for Role-Based Access Control is quite sophisticated, but for now we limit ourselves to restricting access to the mostrar action to authenticated users, deferring a more detailed discussion to Chapter 9.

To restrict access to authenticated users, we need to complete three steps. In a model, for example "db.py", we need to add the following (the scaffolding application already includes these lines):

```
from gluon.tools import Auth
auth = Auth(db)
auth.define_tables()
```

In our controller, we need to add one action:

```
def user():
    return dict(formulario=auth())
```

This is enough to enable login, logout, etc. The default layout will also show the corresponding options in the top-right corner of the page.

We can now decorate the functions we want to restrict, for example:

```
@auth.requires_login()
def mostrar():
    ...
```

Any attempt to access

```
http://127.0.0.1:8000/imagenes/default/mostrar/[imagen_id]
```

will now require login.
If the user is not logged in, he or she is redirected to

```
http://127.0.0.1:8000/imagenes/default/user/login
```

The `user` function also exposes, among others, the following actions:

```
http://127.0.0.1:8000/imagenes/default/user/logout
http://127.0.0.1:8000/imagenes/default/user/register
http://127.0.0.1:8000/imagenes/default/user/profile
http://127.0.0.1:8000/imagenes/default/user/change_password
http://127.0.0.1:8000/imagenes/default/user/request_reset_password
http://127.0.0.1:8000/imagenes/default/user/retrieve_username
http://127.0.0.1:8000/imagenes/default/user/retrieve_password
http://127.0.0.1:8000/imagenes/default/user/verify_email
http://127.0.0.1:8000/imagenes/default/user/impersonate
http://127.0.0.1:8000/imagenes/default/user/not_authorized
```

Now, a first-time user needs to register in order to log in, read, and post comments.

Both the `auth` object and the `user` function are defined by default in the scaffolding application. The `auth` object is highly customizable and can deal with email verification, registration approval, CAPTCHA, and alternative login methods via plugins.

# Adding grids¶

We can improve on what we have built so far by using the `SQLFORM.grid` and `SQLFORM.smartgrid` gadgets to create a management interface for our application:

```
@auth.requires_membership('administrador')
def administrar():
    grid = SQLFORM.smartgrid(db.imagen, linked_tables=['publicacion'])
    return dict(grid=grid)
```

together with its associated "views/default/administrar.html":

```
{{extend 'layout.html'}}
<h2>Interfaz de administración</h2>
{{=grid}}
```

Using appadmin, create a group "administrador" and add some members to the group. They will be able to access the management interface at

```
http://127.0.0.1:8000/imagenes/default/administrar
```

with browse and search capabilities:

and options to create, update, and delete images and their comments:

# Configuring the layout¶

You can configure the default layout by editing "views/layout.html", but you can also configure it without editing the HTML at all. In fact, the "static/base.css" stylesheet is well documented and is described in Chapter 5. You can change colors, columns, sizes, borders, and backgrounds without editing the HTML. If you want to modify the menu, the title, or the subtitle, you can do so in any model file. The scaffolding application sets default values for these parameters in the file "models/menu.py":

```
response.title = request.application
response.subtitle = '¡Modifícame!'
response.meta.author = 'tú'
response.meta.description = 'describe tu app'
response.meta.keywords = 'bla bla bla'
response.menu = [ [ 'Inicio', False, URL('index') ] ]
```

### A simple wiki¶

In this section, we build a simple wiki from scratch, using only low-level APIs (as opposed to the built-in wiki capability of web2py described in the next section). Visitors will be able to create pages, search them by title, and edit them. Visitors will also be able to post comments (exactly as in the previous application), and to upload documents (attached to pages) and link them from pages. As a convention, we adopt the markmin syntax for our wiki content.
We will also implement an Ajax search page, an RSS feed for the pages, and a service to search the pages via XML-RPC [xmlrpc]. The following diagram lists the actions we need to implement and the links we intend to build between them.

Start by creating a new scaffolding app named "miwiki".

The model must contain three tables: pagina, comentario, and documento. Both comentario and documento reference pagina because they belong to a page. A documento contains an archivo field of type upload, as in the previous imagenes application. Here is the complete model:

```
from gluon.tools import *
auth = Auth(db)
auth.define_tables()
crud = Crud(db)

db.define_table('pagina',
                Field('titulo'),
                Field('cuerpo', 'text'),
                Field('creada_en', 'datetime', default=request.now),
                Field('creada_por', 'reference auth_user', default=auth.user_id),
                format='%(titulo)s')

db.define_table('comentario',
                Field('pagina_id', 'reference pagina'),
                Field('cuerpo', 'text'),
                Field('creado_en', 'datetime', default=request.now),
                Field('creado_por', 'reference auth_user', default=auth.user_id))

db.define_table('documento',
                Field('pagina_id', 'reference pagina'),
                Field('nombre'),
                Field('archivo', 'upload'),
                Field('creado_en', 'datetime', default=request.now),
                Field('creado_por', 'reference auth_user', default=auth.user_id),
                format='%(nombre)s')

db.pagina.titulo.requires = IS_NOT_IN_DB(db, 'pagina.titulo')
db.pagina.cuerpo.requires = IS_NOT_EMPTY()
db.pagina.creada_por.readable = db.pagina.creada_por.writable = False
db.pagina.creada_en.readable = db.pagina.creada_en.writable = False

db.comentario.cuerpo.requires = IS_NOT_EMPTY()
db.comentario.pagina_id.readable = db.comentario.pagina_id.writable = False
db.comentario.creado_por.readable = db.comentario.creado_por.writable = False
db.comentario.creado_en.readable = db.comentario.creado_en.writable = False

db.documento.nombre.requires = IS_NOT_IN_DB(db, 'documento.nombre')
db.documento.pagina_id.readable = db.documento.pagina_id.writable = False
db.documento.creado_por.readable = db.documento.creado_por.writable = False
db.documento.creado_en.readable = db.documento.creado_en.writable = False
```

Edit the "default.py" controller and create the following actions:

* index: list all wiki pages
* crear: post a new wiki page
* mostrar: show a wiki page, list its comments, and append new comments
* editar: edit an existing page
* documentos: manage the documents attached to a page
* download: download a document (as in the imagenes example)
* buscar: display a search box and, via an Ajax callback, return matching titles as the visitor types
* callback: the Ajax callback function; it returns the HTML that is embedded in the search page while the visitor types.
Here is the "default.py" controller:

```
def index():
    """ Este controlador devuelve un diccionario convertido por la vista
    Lista todas las páginas wiki
    >>> index().has_key('paginas')
    True
    """
    paginas = db().select(db.pagina.id, db.pagina.titulo,
                          orderby=db.pagina.titulo)
    return dict(paginas=paginas)

@auth.requires_login()
def crear():
    """crea una nueva página wiki en blanco"""
    formulario = SQLFORM(db.pagina).process(next=URL('index'))
    return dict(formulario=formulario)

def mostrar():
    """muestra una página wiki"""
    esta_pagina = db.pagina(request.args(0, cast=int)) or redirect(URL('index'))
    db.comentario.pagina_id.default = esta_pagina.id
    formulario = SQLFORM(db.comentario).process() if auth.user else None
    comentariospagina = db(db.comentario.pagina_id==esta_pagina.id).select()
    return dict(pagina=esta_pagina, comentarios=comentariospagina,
                formulario=formulario)

@auth.requires_login()
def editar():
    """edita una página existente"""
    esta_pagina = db.pagina(request.args(0, cast=int)) or redirect(URL('index'))
    formulario = SQLFORM(db.pagina, esta_pagina).process(
        next=URL('mostrar', args=request.args))
    return dict(formulario=formulario)

@auth.requires_login()
def documentos():
    """lista y edita los documentos asociados a una página determinada"""
    pagina = db.pagina(request.args(0, cast=int)) or redirect(URL('index'))
    db.documento.pagina_id.default = pagina.id
    db.documento.pagina_id.writable = False
    grid = SQLFORM.grid(db.documento.pagina_id==pagina.id, args=[pagina.id])
    return dict(pagina=pagina, grid=grid)

def user():
    return dict(formulario=auth())

def download():
    """permite la descarga de documentos"""
    return response.download(request, db)

def buscar():
    """una página de búsqueda de wikis via ajax"""
    return dict(formulario=FORM(INPUT(_id='palabra', _name='palabra',
                    _onkeyup="ajax('callback', ['palabra'], 'target');")),
                target_div=DIV(_id='target'))

def callback():
    """un callback de ajax que devuelve un <ul> de link a páginas wiki"""
    consulta = db.pagina.titulo.contains(request.vars.palabra)
    paginas = db(consulta).select(orderby=db.pagina.titulo)
    direcciones = [A(p.titulo, _href=URL('mostrar', args=p.id))
                   for p in paginas]
    return UL(*direcciones)
```

Lines 2-6 make up the docstring of the index action. Lines 4-5 within it are interpreted by Python as doctest code. Doctests can be run through the admin interface. In this case the test verifies that the index action runs without errors.

The first line of each of the "mostrar", "editar", and "documentos" actions tries to fetch the `pagina` record whose id is in `request.args(0)`. The "crear" and "mostrar" actions define and process a create form, for a new page and for a new comment respectively. The "editar" action defines and processes an update form for an existing wiki page. The "documentos" action creates a `grid` object that allows viewing, adding, and updating the documents attached to a given page.

Some magic happens in the "buscar" action: the `onkeyup` attribute of the "palabra" INPUT tag is set there. Every time the visitor releases a key, the JavaScript code inside the `onkeyup` attribute is executed, client-side. Here is that JavaScript code:

```
ajax('callback', ['palabra'], 'target');
```

`ajax` is a JavaScript function defined in the file "web2py.js", which is included by default in "layout.html". It takes three parameters: the URL of the action that performs the asynchronous callback, a list of the IDs of the variables to be sent to the callback (["palabra"]), and the ID of the element where the response must be inserted ("target").
As soon as you type something in the search box and release a key, the client calls the server and sends the content of the "palabra" field; when the server responds, the response is embedded in the very same page as the inner HTML of the target tag. The target tag is the DIV defined in the last line of the "buscar" action. It could have been defined in the view instead.

Here is the "default/crear.html" view:

```
{{extend 'layout.html'}}
<h1>Crear una nueva página wiki</h1>
{{=formulario}}
```

Assuming you have already registered and logged in, if you visit the crear page you see the following:

Here is the "default/index.html" view:

```
{{extend 'layout.html'}}
<h1>Páginas wiki disponibles</h1>
[ {{=A('buscar', _href=URL('buscar'))}} ]<br />
<ul>{{for pagina in paginas:}}
    {{=LI(A(pagina.titulo, _href=URL('mostrar', args=pagina.id)))}}
{{pass}}</ul>
[ {{=A('crear página', _href=URL('crear'))}} ]
```

It generates the following page:

Here is the code for the "default/mostrar.html" view:

```
{{extend 'layout.html'}}
<h1>{{=pagina.titulo}}</h1>
[ {{=A('editar', _href=URL('editar', args=request.args))}}
| {{=A('documentos', _href=URL('documentos', args=request.args))}} ]<br />
{{=MARKMIN(pagina.cuerpo)}}
<h2>Comentarios</h2>
{{for comentario in comentarios:}}
  <p>{{=db.auth_user[comentario.creado_por].first_name}} el
     {{=comentario.creado_en}} dice <i>{{=comentario.cuerpo}}</i></p>
{{pass}}
<h2>Publicar un comentario</h2>
{{=formulario}}
```

If you prefer the markdown syntax over markmin, add:

```
from gluon.contrib.markdown import WIKI as MARKDOWN
```

and use the `MARKDOWN` helper instead of `MARKMIN`. Alternatively, you can choose to accept raw HTML instead of the markmin syntax. In that case, replace:

```
{{=MARKMIN(pagina.cuerpo)}}
```

with:

```
{{=XML(pagina.cuerpo)}}
```

(so that the XML is not escaped, which web2py does by default for security reasons). This is better done with:

```
{{=XML(pagina.cuerpo, sanitize=True)}}
```

By setting `sanitize=True`, you tell web2py to escape unsafe XML tags such as "<script>", thereby preventing XSS vulnerabilities (see the sketch at the end of this subsection).

If, from the index page, you now click on a page title, you can see the page you created:

Here is the code for the "default/editar.html" view:

```
{{extend 'layout.html'}}
<h1>Edición de la página wiki</h1>
[ {{=A('mostrar', _href=URL('mostrar', args=request.args))}} ]<br />
{{=formulario}}
```

It generates a page that looks almost identical to the crear page.

Here is the code for the "default/documentos.html" view:

```
{{extend 'layout.html'}}
<h1>Documentos para la página: {{=pagina.titulo}}</h1>
[ {{=A('mostrar', _href=URL('mostrar', args=request.args))}} ]<br />
<h2>Documentos</h2>
{{=grid}}
```

If, from the "mostrar" page, you click on documentos, you can now manage the documents attached to the page.
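As for the `sanitize=True` option mentioned above, a minimal sketch of its effect (the sample string is illustrative; `XML` is the helper from `gluon.html` that views use implicitly):

```python
from gluon.html import XML

texto = '<b>hola</b><script>alert(1)</script>'
# without sanitize, the string would be embedded verbatim, script included;
# with sanitize=True, tags outside the permitted set (such as <script>)
# are escaped, while harmless markup like <b> is preserved
seguro = XML(texto, sanitize=True)
```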
Finally, here is the code for the "default/buscar.html" view:

```
{{extend 'layout.html'}}
<h1>Buscar páginas wiki</h1>
[ {{=A('listar todo', _href=URL('index'))}}]<br />
{{=formulario}}<br />{{=target_div}}
```

which generates the following Ajax search form:

You can also try calling the callback action directly, by visiting, for example, the following URL:

```
http://127.0.0.1:8000/miwiki/default/callback?palabra=wiki
```

If you look at the page source, you see the HTML returned by the callback:

```
<ul><li><a href="/miwiki/default/mostrar/4">He creado una wiki</a></li></ul>
```

Generating an RSS feed from the wiki pages is easy with web2py because web2py includes `gluon.contrib.rss2`. Just add the following action to the default controller:

```
def noticias():
    """genera una fuente de rss a partir de las páginas wiki"""
    response.generic_patterns = ['.rss']
    paginas = db().select(db.pagina.ALL, orderby=db.pagina.titulo)
    return dict(
        title = 'fuente rss de miwiki',
        link = 'http://127.0.0.1:8000/miwiki/default/index',
        description = 'noticias de miwiki',
        created_on = request.now,
        items = [dict(title = registro.titulo,
                      link = URL('mostrar', args=registro.id),
                      description = MARKMIN(registro.cuerpo).xml(),
                      created_on = registro.creada_en
                      ) for registro in paginas])
```

(Note that the dictionary keys — title, link, description, created_on, items — must keep these exact English names, since they are the names the RSS serializer expects.)

Now when you visit the page

```
http://127.0.0.1:8000/miwiki/default/noticias.rss
```

you see the feed (the exact output depends on the feed reader). Notice how the dict is automatically converted to RSS, thanks to the extension in the URL. web2py also includes a feed parser for reading third-party feeds.

Notice that the line:

```
response.generic_patterns = ['.rss']
```

tells web2py to use generic views (in our example "views/generic.rss") when the URL ends with an extension matching the ".rss" glob pattern. By default, generic views are only enabled for development from localhost.

Finally, let us add an XML-RPC handler that allows searching the wiki programmatically:

```
service = Service()

@service.xmlrpc
def buscar_por(palabra):
    """busca páginas que contengan la palabra para XML-RPC"""
    return db(db.pagina.titulo.contains(palabra)).select().as_list()

def call():
    """expone todos los servicios registrados, incluyendo XML-RPC"""
    return service()
```

Here, the handler action simply publishes (via XML-RPC) the functions registered with the service. In this case, `buscar_por` is not itself an action (because it takes an argument). It queries the database with `.select()`, extracts the records as a list with `.as_list()`, and returns the list.

Here is an example of how to access the XML-RPC handler from an external Python program:

```
>>> import xmlrpclib
>>> servidor = xmlrpclib.ServerProxy(
...     'http://127.0.0.1:8000/miwiki/default/call/xmlrpc')
>>> for item in servidor.buscar_por('wiki'):
...     print(item['creada_en'], item['titulo'])
```

The handler can be accessed from many programming languages that speak XML-RPC, including C, C++, C#, and Java.

# About the `date`, `datetime`, and `time` formats¶

There are three different representations for each of the `date`, `datetime`, and `time` types:

* the database representation
* the web2py internal representation
* the string representation in forms and tables

The database representation is an internal matter and does not affect your code.
Internally, at the web2py level, they are stored as `datetime.date`, `datetime.datetime`, and `datetime.time` objects respectively, and they can be manipulated as such:

```
for pagina in db(db.pagina).select():
    print pagina.titulo, pagina.creada_en.day, \
          pagina.creada_en.month, pagina.creada_en.year
```

When dates are converted to strings in forms, they are converted using the ISO representation

```
%Y-%m-%d %H:%M:%S
```

Nevertheless, this representation is internationalized, and you can use the translation page of the admin interface to change the format to an alternative one. For example:

```
%m/%b/%Y %H:%M:%S
```

Bear in mind that, by default, the English language is not translated, because web2py assumes applications are already written in English. If you want internationalization to work for English, you must create the translation file (using admin) and declare that the application's current language is something other than English, for example:

```
T.current_languages = ['null']
```

### web2py's built-in wiki¶

At this point you can forget the code we built in the previous section (though not what you learned about the web2py APIs, only the code specific to that example), because we are going to present an example based on web2py's built-in wiki.

In fact, web2py ships with wiki functionality, including support for media attachments, tags and tag clouds, page permissions, components (Chapter 14), and the oembed protocol [oembed]. This wiki can be used with any web2py application.

Note that the API of the built-in wiki is still considered experimental, so minor changes to it are still possible.

Here we assume we are starting from scratch with a simple clone of the "welcome" application, which we call "wikidemo". Edit the controller and replace the "index" action with:

```
def index():
    return auth.wiki()
```

Done! You have a fully functional wiki. At this point no pages have been created, and in order to create pages you must be logged in and be a member of a group called "wiki_editor" or "wiki_author". If you are logged in as the administrator, the "wiki_editor" group is created automatically and you are added as a member. The difference between editors and authors is that editors can create, edit, and delete any page, while authors can create and edit pages (with some optional restrictions) and can only edit or delete the pages they have created.

The `auth.wiki()` function returns a dictionary with a `content` key, which is automatically handled by the "default/index.html" view. You can write your own view for this action:

```
{{extend 'layout.html'}}
{{=content}}
```

and add extra HTML or code as needed. The action exposing the wiki does not have to be called "index"; you can use an action with any other name.

To try the built-in wiki, simply log into admin, then visit the page

```
http://127.0.0.1:8000/wikidemo/default/index
```

Then pick a slug (in publishing jargon, a slug is a short name given to an article while it is in production) and you will be redirected to an empty page where you can edit the content using markmin wiki syntax. A new menu item called "[wiki]" lets you create, search, and edit pages.
Wiki pages have URLs of this form:

```
http://127.0.0.1:8000/wikidemo/default/index/[slug]
```

Service pages have names that start with an underscore:

```
http://127.0.0.1:8000/wikidemo/default/index/_create
http://127.0.0.1:8000/wikidemo/default/index/_search
http://127.0.0.1:8000/wikidemo/default/index/_cloud
http://127.0.0.1:8000/wikidemo/default/index/_recent
http://127.0.0.1:8000/wikidemo/default/index/_edit/...
http://127.0.0.1:8000/wikidemo/default/index/_editmedia/...
http://127.0.0.1:8000/wikidemo/default/index/_preview/...
```

Try creating more pages such as "index", "quienes-somos", and "contactenos". Then try editing them.

The `wiki` method has the following signature:

```
def wiki(self, slug=None, env=None, render='markmin',
         manage_permissions=False, force_prefix='',
         restrict_search=False, resolve=True,
         extra=None, menu_groups=None)
```

It accepts the following arguments:

* `render`, which defaults to `'markmin'` but can be set to `'html'`. It determines the syntax of the wiki. More details on the markmin syntax follow below. If you switch this parameter to HTML, you may want to add a JavaScript WYSIWYG editor such as TinyMCE or NicEdit.
* `manage_permissions`. This defaults to `False`, and with that value only the "wiki_editor" and "wiki_author" permissions are recognized. If you change it to `True`, the create/edit page offers the option of naming the groups whose members have permission to read and edit the page. The group "everybody" can be used to grant permissions to every user.
* `force_prefix`. If set to something like `'%(id)s'`, it restricts authors (not editors) to creating only pages whose names carry the prefix "[user id]-[page name]". The prefix can contain the user id ("%(id)s") or the username ("%(username)s") or any other field of the auth_user table, as long as the corresponding column contains a string that passes URL validation.
* `restrict_search`. Defaults to `False`, meaning any logged-in user can search the whole wiki (though not necessarily read or edit every page). If set to `True`, authors can search only their own pages, editors can search any page, and other users cannot search at all.
* `menu_groups`. If set to `None` (the default), the wiki management menu (for searching, creating, editing, etc.) is always shown. You can set it to a list of the names of the groups to which the menu is shown, for example ``` ['wiki_editor', 'wiki_author'] ```. Note that even when the menu is exposed to all users, this does not imply that every user can run all the menu commands, since those permissions are regulated by the access control system.

The `wiki` method takes some additional parameters that are explained later: `slug`, `env`, and `extra`.

# MARKMIN basics¶

The MARKMIN syntax lets you mark text as bold using `**negrita**`, as italic using `''itálica''`, and `código` (source code) text must be delimited by double backticks (``). Titles are prefixed with a #, sections with ##, and subsections with ###.
Use a minus sign (-) as prefix for items of unordered lists and a plus sign (+) for items of ordered lists. URLs are turned into links automatically.

Here is a sample of markmin text:

```
# Este es un título
## Este es un título de sección
### Este es un título de una sección menor

El texto puede tener **negrita**, ''itálica'', ``código`` etc.
Puedes encontrar más información en: http://web2py.com
```

You can use the `extra` parameter of `auth.wiki` to pass extra rendering rules to the MARKMIN helper. You can find more about the MARKMIN syntax in Chapter 5. `auth.wiki` is more powerful than the MARKMIN helper alone since, for example, it supports oembed and components.

You can use the `env` parameter of `auth.wiki` to expose functions to your wiki. For example:

```
auth.wiki(env=dict(unir=lambda a, b, c: "%s-%s-%s" % (a, b, c)))
```

lets you use the following markup syntax:

```
@(unir:1,2,3)
```

This calls the `unir` function passed via the env parameter with arguments `a, b, c = 1, 2, 3`, and it is rendered as `1-2-3`.

# The oembed protocol¶

You can paste (or type) any URL into a wiki page and normally it is rendered as a link to that URL. There are some exceptions:

* If the URL has an image extension, the link is embedded as an image, `<img/>`.
* If the URL has an audio extension, the link is embedded as HTML5 audio, `<audio/>`.
* If the URL has an MS Office or PDF extension, a Google Doc Viewer is embedded, displaying the content of the document (this only works for public documents).
* If the URL points to a YouTube, Vimeo, or Flickr page, web2py contacts the corresponding service and queries it for the proper way to embed the content. This is done using the `oembed` protocol.

Here is the complete list of supported formats:

```
Image (.PNG, .GIF, .JPG, .JPEG)
Audio (.WAV, .OGG, .MP3)
Video (.MOV, .MPE, .MP4, .MPG, .MPG2, .MPEG, .MPEG4, .MOVIE)
```

Formats supported via the Google Doc Viewer service:

```
Microsoft Excel (.XLS and .XLSX)
Microsoft PowerPoint 2007 / 2010 (.PPTX)
Apple Pages (.PAGES)
Adobe PDF (.PDF)
Adobe Illustrator (.AI)
Adobe Photoshop (.PSD)
Autodesk AutoCad (.DXF)
Scalable Vector Graphics (.SVG)
PostScript (.EPS, .PS)
TrueType (.TTF)
XML Paper Specification (.XPS)
```

Supported via oembed:

```
flickr.com
youtube.com
hulu.com
vimeo.com
slideshare.net
qik.com
polleverywhere.com
wordpress.com
revision3.com
viddler.com
```

The implementation lives in the web2py file `gluon.contrib.autolinks`, specifically in the `expand_one` function. You can extend oembed support by registering more services. This is done by appending an entry to the `EMBED_MAPS` list:

```
import re
from gluon.contrib.autolinks import EMBED_MAPS
EMBED_MAPS.append((re.compile('http://vimeo.com/\S*'),
                   'http://vimeo.com/api/oembed.json'))
```

# Creating hyperlinks to wiki content¶

If you create a wiki page with the slug "contactenos", you can refer to it as

```
@////contactenos
```

Here `@////` stands for ``` @/app/controlador/funcion/ ``` but with "app", "controlador", and "funcion" omitted, so the defaults are used.

Similarly, you can use the wiki menu to upload media files (for example an image) attached to the page.
La página "manage media" (administración de archivos multimedia) mostrará todos los archivos que se hayan subido y mostrará la notación apropiada con el hipervínculo al archivo multimedia. Si, por ejemplo, subes un archivo "prueba.jpg", con el título "playa", la notación del hipervínculo será algo como: `@////15/playa.jpg` `@////` es el mismo prefijo que se describió anteriormente. `15` es el id del registro que almacena el archivo multimedia. `playa` es el título. `.jpg` es la extensión del archivo original. Si cortas y pegas `@////15/playa.jpg` en la página wiki incrustarás la imagen. Ten en cuenta que los archivos multimedia están asociados a las páginas y heredan sus permisos de acceso. # Menús de la wiki¶ Si creas una página con el slug "wiki-menu" será interpretado como la descripción del menú. He aquí un ejemplo: ``` - Inicio > @////index - Informacion > @////info - web2py > http://www.web2py.com - - Quiénes somos > @////quienes-somos - - Contáctenos > @////contactenos ``` Cada línea es un ítem del menú. Para los menús anidados se usa el guión doble. El signo `>` separa el título del ítem de menú del link. Ten en cuenta que el menú se agrega al objeto `response.menu` , no lo reemplaza. Los ítems del menú `[wiki]` que dan acceso a los servicios de la wiki se agregan automáticamente. # Las funciones de servicios¶ Si por ejemplo, quieres usar la wiki para crear una barra lateral editable, puedes crear una página con `slug="sidebar"` y luego embeberla en el layout.html con ``` {{=auth.wiki(slug='sidebar')}} ``` Ten en cuenta que la palabra "sidebar" aquí no tiene un significado especial. Toda página de wiki se puede recuperar y embeber en cualquier instancia de tu código fuente. Esto permite entrelazar funcionalidades de la wiki con las funcionalidades comunes de las aplicaciones de web2py. También debes tener en cuenta que es lo mismo queauth.wiki('sidebar'), ya que el argumento slug es el primero en la lista de argumentos del método. El primer comando provee de una sintaxis un tanto más simple.auth.wiki(slug='sidebar') Además puedes incrustar las funciones especiales de la wiki como la búsqueda por etiquetas: ``` {{=auth.wiki('_search')}} ``` o la nube de etiquetas: ``` {{=auth.wiki('_cloud')}} ``` # Ampliación de la funcionalidad auth.wiki¶ Cuando tu app con la wiki incorporada se torne más complicada, quizás quieras personalizar los registros de la base de datos para la wiki que son administrados por la interfaz Auth o exponer formularios personalizados para las altas, bajas y modificaciones (ABM o CRUD). Por ejemplo, podrías querer personalizar la representación de un registro de una tabla de la wiki o agregar un nuevo validador de campo. Esto no es posible por defecto, ya que el modelo de la wiki se define únicamente luego de que la interfaz wiki se inicie utilizando el método auth.wiki(). Para permitir el acceso a la configuración específica de la base de datos para la wiki en el modelo de la app, debes agregar el siguiente comando a tu modelo (por ejemplo, en db.py) ``` # Asegúrate de que la función se llame luego de crear el objeto auth # y antes de cualquier cambio en las tablas de la wiki auth.wiki(resolve=False) ``` Al utilizra la línea de arriba en tu modelo, podrás acceder a las tablas de la wiki (por ejemplo, `wiki_page` ) para operaciones personalizadas en la base de datos. 
Note that you must still call auth.wiki() in a controller or view in order to expose the wiki interface: the `resolve=False` parameter only tells the auth object to build the wiki model without performing any other wiki-specific setup. Also, with resolve set to `False` in the call, the wiki tables and records become accessible through the standard database interface at `<app>/appadmin`.

Another possible customization is adding extra fields to the standard wiki tables (in the same way as for the `auth_user` table, as described in Chapter 9). It is done as follows:

```
# place this code after the auth object is initialized
auth.settings.extra_fields["wiki_page"] = [Field("unblob", "blob"),]
```

The line above adds a `blob` field to the `wiki_page` table. In this case there is no need to call ``` auth.wiki(resolve=False) ```.

# Components¶

One of the most powerful features of web2py is the ability to embed an action inside another action. We call this feature components.

Consider, for example, the following model:

```
db.define_table('cosas', Field('nombre', requires=IS_NOT_EMPTY()))
```

and the following action:

```
@auth.requires_login()
def administrar_cosas():
    return SQLFORM.grid(db.cosas)
```

This action is special because it returns a widget/helper rather than a dictionary of objects. We can now embed the `administrar_cosas` action into a view with

```
{{=LOAD('default', 'administrar_cosas', ajax=True)}}
```

This allows a visitor to interact with the component via Ajax without reloading the host page that contains the widget. The action is called via Ajax, inherits the style of the host page, and captures all form submissions and flash messages so that they are handled within the current page. On top of that, the `SQLFORM.grid` widget uses digitally signed URLs to restrict access. More information about components can be found in Chapter 13.

Components like the one above can be embedded in wiki pages using the MARKMIN syntax:

```
@{component:default/administrar_cosas}
```

This simply tells web2py that we want to include the "administrar_cosas" action defined in the "default" controller, loaded via Ajax.

Many users will be able to build fairly complex applications simply by using `auth.wiki` to create pages and menus and by embedding custom components into wiki pages. The wiki can be thought of as a mechanism that lets the members of a group create pages, but it can also be understood as a modular methodology for building applications.

### More about admin¶

The administrative interface provides additional functionality, briefly reviewed in this section.

# Site¶

This page is the main administrative interface of web2py. It lists all installed applications on the left, while on the right side there are some special forms.

The first one shows the web2py version and proposes an upgrade if a newer version is available. Naturally, before upgrading, make sure you have a full working backup!

Then there are two forms allowing the creation of a new application (either in one step or using an online wizard) by specifying its name.
The next form allows uploading an existing application, either from the local file system or from a remote URL. When you upload an application, you need to specify a name for it (using different names lets you install multiple copies of the same application). You can try, for example, uploading the Movuca social networking application created by <NAME>:

```
https://github.com/rochacbruno/Movuca
```

or the Instant Press CMS created by <NAME>:

```
http://code.google.com/p/instant-press/
```

or one of the many examples available at:

```
http://web2py.com/appliances
```

web2py apps are packaged as `.w2p` files, which are gzip-compressed tar archives. web2py uses the `.w2p` extension instead of `.tgz` to prevent browsers from unzipping them on download. They can be unpacked manually with ``` tar zxvf [nombredearchivo] ```, although this is normally not necessary.

Upon successful upload, web2py displays the checksum of the uploaded file. You can use it to verify that the file was not corrupted during upload. The name of the uploaded application, for instance InstantPress, will appear in the list of installed applications.

If you run web2py from the source distribution and have `gitpython` installed (if needed, you can get it with 'easy_install gitpython'), you can install applications directly from git repositories by using a URL ending in `.git` in the upload form. In that case you will also be able to use the admin interface to push changes to the remote repository, but this is an experimental feature.

For example, you can install locally the application that serves this book on the web2py site, using the URL:

```
https://github.com/mdipierro/web2py-book.git
```

That repository hosts the current version of this book (which may differ from the stable version you see on the web site). Improvements, bug fixes, and corrections are welcome and can be submitted as git pull requests.

For each installed application, the site page lets you:

* Go directly to the application by clicking on its name.
* Uninstall the application.
* Jump to the "about" page (see below).
* Jump to the "edit" page (see below).
* Jump to the "errors" page (see below).
* Clean up temporary files (sessions, errors, and cache.disk files).
* Pack all. This returns a tar file containing a complete copy of the application. We suggest cleaning up temporary files before packing an application.
* Compile the application. If there are no errors, this option bytecode-compiles all models, controllers, and views. Because views can extend and include other views in a hierarchy, before compilation the view "tree" is collapsed into a single file. The net effect is that a compiled application is faster, because no template parsing or string substitution happens at runtime.
* Pack compiled. This option is only available for compiled applications. It packs the application without source code, for closed-source distribution. Note that Python (like any other programming language) can in practice be decompiled, so compilation does not guarantee the protection of source code. Nevertheless, decompilation can be difficult, and may even be illegal.
All the functionality available from the web2py admin site is also accessible programmatically via the API defined in the module `gluon/admin.py`. Simply open a Python shell and import this module.
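For instance, a tentative sketch of compiling and packing an app from a Python shell opened in the web2py root directory (the helper names `app_compile` and `app_pack`, as well as the `env` helper, are assumptions based on the module; check `gluon/admin.py` and `gluon/shell.py` for the exact signatures):

```python
# run from the web2py root directory, so that "gluon" is importable
from gluon.shell import env
from gluon.admin import app_compile, app_pack   # assumed helper names

# build a simulated request environment for an installed app
# (the app name "miapp" is hypothetical)
request = env('miapp', import_models=False)['request']
app_compile('miapp', request)   # bytecode-compile models, controllers and views
app_pack('miapp', request)      # pack the app into a .w2p archive
```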
If the Google App Engine SDK is installed, the admin site page shows a button to deploy your application to GAE. If `python-git` is installed, there is also a button to push your application to Open Shift. To deploy applications on `Heroku` or other hosting systems, look for the corresponding script in the "scripts" folder.

# About

The "about" tab allows editing the description of the application and its license. These are written, respectively, in the ABOUT and LICENSE files in the application folder. You can use `MARKMIN` or

```
gluon.contrib.markdown.WIKI
```

syntax for these files, as described in ref.[markdown2].

# Edit

You have already used the "edit" (design) page earlier in this chapter. Here we want to point out a few more of its features.

* If you click on any file name, you can view its contents with syntax highlighting.
* If you click on edit, you can edit the file through the web interface.
* If you click on delete, you can delete the file (permanently).
* If you click on test, web2py runs the tests. Tests are written by the developer using Python doctests, and each function should have its own tests.
* You can add language files, scan the application to discover all translatable strings, and edit their translations through the web interface.
* If the static files are organized in folders and subfolders, the folder hierarchy can be collapsed or expanded by clicking on a folder name.

The image below shows the output of the test page for the welcome application.

The next image shows the languages tab for the welcome application.

The next image shows how to edit a language file, in this case the "it" (Italian) language for the welcome application.

# The built-in debugger

(requires Python 2.6 or later)

The web2py admin app includes a web-based debugger. Using the online editor you can set breakpoints and, from the debug console associated with the editor, inspect variables at those breakpoints and resume execution. This process is illustrated in the following image:

The functionality is based on the Qdb debugger created by <NAME>. It uses the multiprocessing.connection module to communicate between backend and frontend, with a JSON-RPC-like message protocol. [qdb]

# Shell

If you click on the "shell" link under the controllers section in "edit", web2py opens a web-based Python shell and executes the models of the current application. This lets you talk to your application interactively.

Be careful when using the web-based shell, because different shell requests will be served by different threads. This can easily produce errors, particularly if you experiment with creating and connecting to databases. For that kind of activity (i.e., when you need persistent state) it is better to use the Python command line.

# Crontab

Also under the controllers section in "edit" there is a "crontab" link. By clicking on it you can edit the web2py crontab file. It follows the same syntax as a Unix crontab but does not require Unix. In fact, it only requires web2py, and it also works on Windows. It lets you register actions that must run in background at scheduled times. For more on this topic, see the next chapter.
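For a taste of the syntax, a hypothetical crontab entry might look like this (the task script name is illustrative; the full syntax is covered in the next chapter):

```
# web2py crontab -- a hypothetical entry:
# run applications/miapp/cron/backup.py every day at 3:30 am, inside
# the miapp environment (that is what the leading * before the path means)
30 3 * * * root *applications/miapp/cron/backup.py
```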
# Errors

When programming with web2py, you will inevitably make mistakes and introduce bugs. web2py helps in two ways: 1) it lets you create tests for every function that can be run in the browser from the "edit" page; and 2) when an error manifests itself, a ticket is issued to the visitor and the error is logged.

Intentionally introduce an error in the images application, as shown below:

```
def index():
    imagenes = db().select(db.imagen.ALL, orderby=db.imagen.titulo)
    1/0
    return dict(imagenes=imagenes)
```

When you access the index action, you get the following ticket:

Only the administrator can access the ticket details:

The ticket shows the traceback, the content of the file that caused the problem, and the full state of the system (variables, request, session, etc.). If the error occurs in a view, web2py shows the view converted from HTML into Python code. This makes it easier to identify the logical structure of the file.

By default, tickets are stored on the filesystem and are grouped by traceback. The administrative interface provides an aggregate view (type of traceback and number of occurrences) and a detailed view (all tickets listed by id). The administrator can switch between the two views.

Notice that admin shows syntax-highlighted code everywhere (for example, in error reports, web2py keywords are shown in orange). If you click on a web2py keyword, you are redirected to the documentation page about that keyword.

If you fix the division-by-zero bug in the index action and introduce another one in the index view:

```
<h1>Imágenes registradas</h1>
<ul>
{{for imagen in imagenes:}}
{{1/0}}
{{=LI(A(imagen.titulo, _href=URL("mostrar", args=imagen.id)))}}
{{pass}}
</ul>
```

you get the following ticket:

Note that web2py has converted the view from HTML into a Python file, and the error described in the ticket refers to the generated Python code, NOT to the original view:

This may seem confusing at first, but in practice it makes debugging easier, because Python indentation highlights the logical structure of the code you embedded in the views. The code is shown at the bottom of the same page.

All tickets are listed under admin in the "errors" page for each application:

# Mercurial

If you run the source code distribution, the administrative interface shows one more menu item called Versioning. Entering a comment and pressing the commit button on the associated page commits the current application. The first commit creates a local Mercurial repository for the application. Behind the scenes, Mercurial stores information about code changes in a hidden ".hg" folder inside your app subfolder. Every app gets its own ".hg" folder and its own ".hgignore" file (which tells Mercurial which files to ignore). In order to use this feature, you must have the Mercurial version-control libraries installed (version 1.9 or later):

```
easy_install mercurial
```

The Mercurial web interface does not let you browse previous commits and file diffs, so we recommend using Mercurial directly from the shell, or one of the many GUI-based Mercurial clients, since they are more powerful. For example, they let you sync your application with a remote repository:

You can read more about Mercurial here:

```
http://mercurial.selenic.com/
```

# The admin wizard (experimental)

The admin interface includes a wizard that can help you create new applications. You can reach the wizard from the "site" page, as shown in the image below.

The wizard walks you through a series of steps to create a new application:

* Choose a name for the application.
* Configure the application and choose the required plugins.
* Build the required models (it creates CRUD pages for each model).
* Edit the views of those pages using MARKMIN syntax.

The image below shows the second step of the process. You can see a drop-down menu to choose a layout plugin (from `web2py.com/layouts`), a multiple-choice menu to add extra plugins (from `web2py.com/plugins`), and a "login config" field where a Janrain "domain:key" can be entered.

The remaining steps are rather straightforward, so we skip their explanation.

The wizard works well for what it does, but it is considered an "experimental feature" for two reasons:

* Applications created with the wizard and then edited manually can no longer be modified by the wizard.
* The interface of the wizard will change over time to support more features and easier visual development.

In any case, the wizard is a handy tool for fast prototyping, and it can be used to bootstrap a new application with an alternate layout and an alternate set of plugins.

# Configuring admin

Normally there is no need to change anything in admin, although a few customizations are possible. After logging into admin, you can edit its configuration file via the URL:

```
http://127.0.0.1:8000/admin/default/edit/admin/models/0.py
```

Notice how admin can be used to edit itself. In fact, admin is an app like any other. The "0.py" file is reasonably well documented.
Anyway, here are the most important customizations you may need:

```
GAE_APPCFG = os.path.abspath(os.path.join('/usr/local/bin/appcfg.py'))
```

This should point to the location of the "appcfg.py" file that ships with the Google App Engine SDK. If you have the SDK, you may want to change this setting to the correct value. It enables deploying to GAE from the admin interface.

You can also set web2py admin in demo mode:

```
DEMO_MODE = True
FILTER_APPS = ['welcome']
```

Then only the applications listed in FILTER_APPS are accessible, and they are accessible in read-only mode.

If you are a teacher and want to expose the administrative interface to students so that they can share one administrative interface for their projects (think of a virtual lab), you can do so by setting:

```
MULTI_USER_MODE = True
```

This way students are required to log in, and each of them can only access his or her own applications via admin. You, as the first user/teacher, get access to all of them.

In multi-user mode, you can register students using the "bulk register" link and manage them using the "manage students" link. The system also keeps track of when students log in and of how many lines of code they add or remove from their source files. This data is made available to the administrator as charts on the application's "about" page.

Keep in mind that this mechanism still assumes that all users are trusted. All the applications created under admin run with the same credentials on the same filesystem. It is possible for an app created by one student to access the data and the source of an app created by another student. It is also possible for a student to create an app that locks up the server.

# Mobile admin

Note that the admin application includes "plugin_jqmobile", which bundles the jQuery Mobile library. When admin is accessed from a mobile device, web2py detects it and serves an interface that is friendly to mobile devices.

### More about appadmin

appadmin is not intended to be exposed to the public. It is designed to help you by providing easy access to the database. It consists of only two files: a controller, "appadmin.py", and a view, "appadmin.html", which is used by all the actions in the controller. The appadmin controller is relatively small and readable; it also serves as an example of how to design a database interface.

appadmin shows which databases are available and which tables exist in each database. You can insert records and list all records for each table individually. appadmin paginates the output, 100 records per page.

Once a set of records is selected, the header of the page changes, allowing you to update or delete the selected records. To update the records, enter an SQL assignment in the Query string field:

`title = 'prueba'`

where string values must be enclosed in single quotes. Multiple assignments can be separated by commas. To delete a record, click the corresponding checkbox to confirm that you are sure.

appadmin can also perform joins if the SQL FILTER contains an SQL condition that involves two or more tables. For example, try:

```
db.imagen.id == db.publicacion.imagen_id
```

web2py passes this along to the DAL, which understands that the query links two tables; accordingly, the two tables are selected with an INNER JOIN. Here is the output:

Records selected via a join cannot be updated or deleted, because that would involve modifying records in multiple tables, and this could be confusing.
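Behind the scenes, appadmin simply translates these filters into DAL calls; a rough equivalent of what it runs (a sketch, not appadmin's actual code) would be:

```python
# the join filter shown above becomes an inner join select
rows = db(db.imagen.id == db.publicacion.imagen_id).select()

# an update by query-string criterion, e.g.  titulo = 'prueba'
db(db.imagen.titulo == 'prueba').update(titulo='otro titulo')
db.commit()
```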
Besides its database-administration functions, appadmin also lets you inspect the detailed contents of the application's `cache` (at

```
/tuapp/appadmin/cache
```

) as well as the contents of the `request`, `response` and `session` objects (at

```
/tuapp/appadmin/state
```

).

appadmin replaces `response.menu` with its own menu, which provides links, for the current app, to the edit page in admin, the db (database administration) page, the state page and the cache page. If the layout of your application does not build a menu from `response.menu`, you will not see the appadmin menu. In that case, you can modify the appadmin.html file and add `{{=MENU(response.menu)}}` to display the menu.

# Chapter 4: The core

* The core
* Command line options
* Workflow
* Dispatching
* Libraries
* Applications
* API
* request
* response
* session
* cache
* URL
* HTTP and redirect
* Internationalization and pluralization with T
* Setting the language
* Translating variables
* Comments and multiple translations
* The pluralization engine
* Translations, pluralization and MARKMIN
* Cookies
* The init application
* URL rewriting
* Parameter-based system
* Pattern-based system
* Routing and errors
* Static asset management
* Running tasks in the background
* Third-party modules
* Execution environment
* Cooperation
* Logging
* WSGI

## The core

### Command line options

It is possible to skip the GUI and start web2py directly from the command line by typing something like:

```
python web2py.py -a 'tu contraseña' -i 127.0.0.1 -p 8000
```

When web2py starts, it creates a file called "parameters_8000.py" where it stores the hashed password. If you use "<ask>" as the password, web2py prompts you for it at startup.

For additional security, you can start web2py with:

```
python web2py.py -a '<recycle>' -i 127.0.0.1 -p 8000
```

In this case web2py reuses the previously stored hashed password. If no password is provided, or if the "parameters_8000.py" file is deleted, the web-based administrative interface is disabled.

On some Unix/Linux systems, if the password is

```
<pam_user:un_usuario>
```

web2py uses the PAM password of the operating-system account of the specified user to authenticate the administrator, unless the PAM configuration blocks it.

web2py normally runs with CPython (the C implementation of the Python interpreter created by <NAME>), but it can also run with PyPy and Jython. The latter possibility allows using web2py in the context of a J2EE infrastructure. To use Jython, simply replace "python web2py.py ..." with "jython web2py.py". Details about installing Jython and the zxJDBC modules required to access databases can be found in Chapter 14.
El script "web2py.py" puede tomar varios argumentos de la línea de comandos especificando el número máximo de hilos, el uso de SSL, etc. Para una lista completa escribe: ``` >>> python web2py.py -h Forma de uso: python web2py.py Script de inicio del marco de desarrollo web web2py ADVERTENCIA: si no se especifica una contraseña (-a 'contraseña'), web2py intentará ejecutar una GUI, en este caso las opciones de la línea de comandos se omitirán. Opciones: --version muestra la versión del programa y sale -h, --help muestra esta lista de ayuda y sale -i IP, --ip=IP la dirección IP del servidor (e.g., 127.0.0.1 or ::1); Nota: Este valor se ignora cuando se usa la opción 'interfaces'. -p PUERTO, --port=PUERTO puerto del servidor (8000) -a CONTRASEÑA, --password=CONTRASEÑA contraseña que se usará para la cuenta administrativa (usa -a "<recycle>" para reutilizar la última contraseña almacenada) -c CERTIFICADO_SSL, --ssl_certificate=CERTIFICADO_SSL archivo que contiene el certificado ssl -k CLAVE_PRIVADA_SSL, --ssl_private_key=CLAVE_PRIVADA_SSL archivo que contiene la clave privada ssl --ca-cert=CERTIFICADO_CA_SSL Usa este archivo conteniendo el certificado CA para validar los certificados X509 de los clientes -d ARCHIVO_PID, --pid_filename=ARCHIVO_PID archivo que almacena el pid del servidor -l ARCHIVO_LOG, --log_filename=ARCHIVO_LOG archivo para llevar un registro de las conexiones -n CANTHILOS, --numthreads=CANTHILOS cantidad de hilos (obsoleto) --minthreads=MÍNHILOS número mínimo de hilos del servidor --maxthreads=MAXHILOS número máximo de hilos del servidor -s NOMBRE_SERVIDOR, --server_name=NOMBRE_SERVIDOR nombre asignado al servidor web -q TAM_COLA_SOLICITUD, --request_queue_size=REQUEST_QUEUE_SIZE máximo número de solicitudes en la cola cuando el servidor no está disponible -o VENCIMIENTO, --timeout=VENCIMIENTO tiempo límite de espera para cada solicitud (10 segundos) -z VENC_CIERRE, --shutdown_timeout=VENC_CIERRE tiempo límite de espera para cerrar el servidor (5 segundos) --socket-timeout=VENCIMIENTO_SOCKET tiempo límite para el ''socket'' (5 segundos) -f CARPETA, --folder=CARPETA carpeta desde la cual correrá web2py -v, --verbose incremento de la salida de depuración de --test -Q, --quiet deshabilita toda salida -D NIVEL_DEPURACIÓN, --debug=NIVEL_DEPURACIÓN establece el nivel de la salida de depuración (0-100, 0 es todo, 100 es nada; por defecto es 30) -S NOMBRE_APP, --shell=NOMBRE_APP corre web2py en la consola shell interactiva de IPython (si está disponible) con el nombre especificado de la app (si la app no existe se creará). NOMBRE_APP tiene el formato a/c/f (c y f son opcionales) -B, --bpython corre web2py en la shell interactiva o en bpython (si se instaló) con el nombre especificado (si no existe la app se creará). Usa esta opción en combinación con --shell -P, --plain usar únicamente la shell de Python; se debería usar con la opción --shell -M, --import_models importar automáticamente los archivos del modelo; por defecto es False; se debería usar con la opción --shell -R ARCHIVO_PYTHON, --run=ARCHIVO_PYTHON correr el archivo de python en un entorno de web2py; se debería usar con la opción --shell -K PLANIFICADOR, --scheduler=PLANIFICADOR correr tareas planificadas para las app especificadas: lee una lista de nombres de apps del tipo -K app1,app2,app3 o una lista con grupos como -K app1:grupo1:grupo2,app2:grupo1 para sobrescribir nombres específicos de grupos. (solo cadenas, no se admiten los espacios. 
                        requires a scheduler to be defined in the models)
  -X, --with-scheduler  run schedulers alongside the web server
  -T TEST_PATH, --test=TEST_PATH
                        run doctests in the web2py environment; TEST_PATH has
                        the form a/c/f (c and f are optional)
  -W WINSERVICE, --winservice=WINSERVICE
                        Windows service control: -W install|start|stop
  -C, --cron            trigger a cron run manually; usually invoked from a
                        system crontab
  --softcron            triggers the use of softcron
  -Y, --run-cron        start the background cron process
  -J, --cronjob         identify a cron-initiated command
  -L CONFIG, --config=CONFIG
                        config file
  -F PROFILER_FILE, --profiler=PROFILER_FILE
                        profiler filename
  -t, --taskbar         use the web2py gui and run in the taskbar
                        (system tray)
  --nogui               text-only, no GUI
  -A ARGS, --args=ARGS  should be followed by a list of arguments to be
                        passed to the script; to be used with -S; -A must be
                        the last option
  --no-banner           do not print the header banner
  --interfaces=INTERFACES
                        listen on multiple addresses:
                        "ip1:port1:key1:cert1:ca_cert1;
                        ip2:port2:key2:cert2:ca_cert2;..."
                        (:key:cert:ca_cert is optional; no spaces allowed;
                        IPv6 addresses must be in square [] brackets)
  --run_system_tests    runs the web2py tests
```

Lower-case options are used to configure the web server. The `-L` option tells web2py to read configuration options from a file; `-W` installs web2py as a Windows service, while the `-S`, `-P` and `-M` options start an interactive Python shell session. The `-T` option finds and runs doctests in a web2py execution environment. For example, the following runs the doctests for all controllers in the "welcome" application:

```
python web2py.py -vT welcome
```

If you run web2py as a Windows service (`-W`), it is not convenient to pass configuration settings via command-line arguments. For this reason, the web2py folder ships with a sample configuration file for the built-in web server, "options_std.py":

```
import socket
import os

ip = '0.0.0.0'
port = 80
interfaces = [('0.0.0.0', 80)]
#,('0.0.0.0',443,'clave_privada_ssl.pem','certificado_ssl.pem')]
password = '<recycle>'  # <recycle> means reuse the previously stored password
pid_filename = 'servidorhttp.pid'
log_filename = 'servidorhttp.log'
profiler_filename = None
ssl_certificate = None  # 'certificado_ssl.pem'  # ## path to the certificate file
ssl_private_key = None  # 'clave_privada_ssl.pem'  # ## path to the private key file
#numthreads = 50  # ## deprecated; remove
minthreads = None
maxthreads = None
server_name = socket.gethostname()
request_queue_size = 5
timeout = 30
shutdown_timeout = 5
folder = os.getcwd()
extcron = None
nocron = None
```

This file contains the web2py defaults; you must import it explicitly with the `-L` command-line option. It only works if you run web2py as a Windows service.
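For instance, once edited to taste, the file can be passed to the server as follows (shown here with the stock file name):

```
python web2py.py -L options_std.py
```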
### Workflow

The web2py workflow is the following:

* The web server receives an HTTP request (the built-in Rocket server, or a different web server connected to web2py via WSGI or another adapter). The web server handles each request in its own thread, in parallel.
* The HTTP request header is parsed and passed to the dispatcher (described later in this chapter).
* The dispatcher decides which of the installed applications will handle the request and maps the PATH_INFO in the URL into a function call. Each URL corresponds to one function call.
* Requests for files in the static folder are served directly, and large files are automatically streamed to the client.
* Any request for anything but a static file is mapped into an action (i.e., a function in a controller file, in the requested application).
* Before calling the action, a few things happen: if the request header contains a session cookie for the app, the session object is retrieved; otherwise, a new session is created (but the session file is not saved yet); an execution environment for the request is created; the models are executed in that environment.
* Finally, the controller action is executed in the environment built above.
* If the action returns a string, this is returned to the client (or, if the action returns a web2py HTML helper object, the helper is serialized and returned to the client).
* If the action returns an iterable, this is used to loop over it and stream the data to the client.
* If the action returns a dictionary, web2py tries to locate a view to render the dictionary. The view must have the same name as the action (unless otherwise specified) and the same extension as the requested page (it defaults to .html); on failure, web2py may pick up a generic view (if available and enabled). The view has access to every variable defined in the models, as well as those in the dictionary returned by the action, but not to global variables defined in the controller.
* All user code is run in a single database transaction, unless otherwise specified.
* If the user code finishes successfully, the transaction is committed.
* If the user code fails, the traceback is stored in a ticket, and the ticket id is returned in the response to the client. Only the system administrator can search for and read the tracebacks in tickets.

There are some caveats to keep in mind:

* Models in the same folder are executed in alphabetical order.
* Any variable defined in a model is visible to the other models following alphabetically, to the controllers, and to the views.
* Models in subfolders are executed conditionally. For example, if the user requested "a/c/f", where "a" is the application, "c" the controller and "f" the function (action), then the following models are executed:

```
applications/a/models/*.py
applications/a/models/c/*.py
applications/a/models/c/f/*.py
```

* The requested controller is executed and the requested function is called. This means that the top-level code in the controller is also executed on every request for that controller.
* The view is called only if the action returns a dictionary.
* If the view is not found, web2py tries to fall back on a generic view.

By default, generic views are disabled, although the scaffolding app includes a line in /models/db.py that enables them, restricted to localhost requests only. They can be enabled per extension type and per action (using `response.generic_patterns`). In general, generic views are a development tool and normally should not be used in production. If you want some actions to use generic views, list those actions in `response.generic_patterns` (described in more detail in the chapter on services).
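The scaffolding line mentioned above looks like this; editing the patterns controls which actions may fall back on generic views (the '*.json' variant is just an illustration):

```python
# in models/db.py of the scaffolding app:
# enable generic views for local requests only (a safe default for development)
response.generic_patterns = ['*'] if request.is_local else []

# or, for example, allow only generic JSON rendering for every action:
# response.generic_patterns = ['*.json']
```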
The possible behaviors of an action are the following.

Return a string:

```
def index():
    return 'data'
```

Return a dictionary for a view:

```
def index():
    return dict(clave='valor')
```

Return all the local variables:

```
def index():
    return locals()
```

Redirect the user to another page:

```
def index():
    redirect(URL('otra_accion'))
```

Return an HTTP response other than "200 OK":

```
def index():
    raise HTTP(404)
```

Return a helper (for example, a FORM):

```
def index():
    return FORM(INPUT(_name='prueba'))
```

(this is used mostly for Ajax callbacks and components; see chapter 12 for more information)

When an action returns a dictionary, it may contain objects generated by helpers, including forms built from database tables or forms built with a form factory, for example:

```
def index():
    return dict(formulario=SQLFORM.factory(Field('nombre')).process())
```

(all forms generated by web2py use the postback mechanism, see chapter 3)

### Dispatching

web2py maps a URL of the form:

```
http://127.0.0.1:8000/a/c/f.html
```

to the function `f()` in controller "c.py" in application "a". If `f` is not present, web2py defaults to the `index` controller function. If `c` is not present, web2py defaults to the "default.py" controller, and if the application `a` is not present, web2py defaults to the `init` application. If there is no `init` application, web2py tries to run the `welcome` application. This is shown schematically in the image below:

By default, any new request also creates a new session. In addition, a session cookie is returned to the client browser to keep track of the session.

The `.html` extension is optional; `.html` is assumed by default. The extension determines the extension of the view that renders the output of the controller function `f()`. It allows the same content to be served in multiple formats (html, xml, json, rss, etc.).

Functions that take arguments or start with a double underscore are not publicly exposed and can only be called by other functions.

There is an exception made for URLs of the form:

```
http://127.0.0.1:8000/a/static/nombredearchivo
```

There is no controller called "static". web2py interprets this as a request for the file called "nombredearchivo" in the subfolder "static" of application "a". web2py also supports the IF_MODIFIED_SINCE protocol, and does not send the file again if it is already stored in the browser's cache and has not changed since that version.

When linking to an audio or video file in the static folder, if you want to force the browser to download the file instead of streaming it via a media player, add `?attachment` to the URL.
This tells web2py to set the `Content-Disposition` header of the HTTP response to "attachment". For example:

```
<a href="/app/static/mi_archivo_de_audio.mp3?attachment">Descargar</a>
```

When the link above is clicked, the browser offers to download the MP3 instead of streaming the audio. (As discussed below, you can also set HTTP response headers directly, by storing a dictionary of header names and their values in `response.headers`.)

web2py maps requests of the form:

```
http://127.0.0.1:8000/a/c/f.html/x/y/z?p=1&q=2
```

to function `f` in controller "c.py" in application `a`, and it stores the URL parameters in the `request` variable as follows:

```
request.args = ['x', 'y', 'z']
```

and:

```
request.vars = {'p':1, 'q':2}
```

and also:

```
request.application = 'a'
request.controller = 'c'
request.function = 'f'
```

In the example above, both `request.args[i]` and `request.args(i)` can be used to retrieve the i-th element of `request.args`; the difference is that the first notation raises an exception if the list does not have the requested index, while the second returns None in that case.

`request.url` stores the full URL of the current request (not including the GET variables). `request.ajax` defaults to False, but is set to True if web2py determines that the action was called via an Ajax request.

If the request is an Ajax request and it was initiated by a web2py component, the name of the component can be retrieved with:

`request.cid`

Components are discussed in more detail in Chapter 12.

If the HTTP request is a GET, `request.env.request_method` is set to "GET"; if it is a POST, it is set to "POST". The URL query variables are stored in the `request.vars` Storage dictionary; they are also stored in `request.get_vars` (containing only the variables passed in the query string) and `request.post_vars` (containing only the variables passed in the body of a POST request).

web2py stores the WSGI and its own environment variables in `request.env`, for example:

```
request.env.path_info = 'a/c/f'
```

and the HTTP headers in environment variables, for example:

```
request.env.http_host = '127.0.0.1:8000'
```

Note that web2py validates all URLs to prevent directory-traversal attacks. URLs are only allowed to contain alphanumeric characters, underscores and slashes; the `args` may contain non-consecutive dots. Spaces are replaced by underscores before validation. If the URL syntax is invalid, web2py returns an HTTP 400 error message [http-w] [http-o].

If the URL corresponds to a request for a static file, web2py simply reads and streams the requested file.

If the URL does not request a static file, web2py processes the request in the following order:

* Parses and retrieves the cookies.
* Creates an environment in which to execute the function.
* Initializes the `request`, `response` and `cache` objects.
* Opens the existing `session` object or creates a new one.
* Executes the models belonging to the requested application.
* Executes the controller action function corresponding to the requested action.
* If the function returns a dictionary, renders the associated view.
* On success, commits all pending transactions.
* Saves the session.
* Returns an HTTP response.
Note that the controller and the view are executed in different copies of the same environment; therefore, the view does not see the controller, but it does see the models and the variables returned by the corresponding controller action.

If an exception (other than HTTP) is raised, web2py does the following:

* Stores the traceback in an error file and assigns a ticket number to it.
* Rolls back all open database transactions.
* Returns an error page reporting the ticket number.

If the exception is an `HTTP` exception, this is interpreted as intended behavior (for example, an `HTTP` redirect), and all open transactions are committed. The behavior after that is specified by the `HTTP` exception itself. The `HTTP` exception class is not a standard Python exception; it is defined by web2py.

### Libraries

The web2py libraries are exposed to user applications as global objects, for example (`request`, `response`, `session`, `cache`), classes (helpers, validators, the DAL API), and functions (`T` and `redirect`). These objects are defined in the following files:

```
web2py.py
gluon/__init__.py
gluon/admin.py
gluon/cache.py
gluon/cfs.py
gluon/compileapp.py
gluon/contenttype.py
gluon/dal.py
gluon/decoder.py
gluon/fileutils.py
gluon/globals.py
gluon/highlight.py
gluon/html.py
gluon/http.py
gluon/import_all.py
gluon/languages.py
gluon/main.py
gluon/myregex.py
gluon/newcron.py
gluon/portalocker.py
gluon/reserved_sql_keywords.py
gluon/restricted.py
gluon/rewrite.py
gluon/rocket.py
gluon/sanitizer.py
gluon/serializers.py
gluon/settings.py
gluon/shell.py
gluon/sql.py
gluon/sqlhtml.py
gluon/storage.py
gluon/streamer.py
gluon/template.py
gluon/tools.py
gluon/utils.py
gluon/validators.py
gluon/widget.py
gluon/winservice.py
gluon/xmlrpc.py
```

Notice that many of these modules, specifically `dal` (the database abstraction layer), `template` (the template language), `rocket` (the web server), and `html` (the helpers), have no dependencies and can be used outside of web2py.

The tar-gzipped scaffolding app that ships with web2py is `welcome.w2p`. It is created upon installation and overwritten on upgrade.

The first time you start web2py, two new folders are created: deposit and applications. The deposit folder is used as temporary storage for installing and uninstalling applications. The first time you run web2py, and also after an upgrade, the "welcome" app is packed into the "welcome.w2p" file, to be used as the scaffolding app. When web2py is upgraded, the upgrade ships with a file called "NEWINSTALL". If web2py finds this file, it understands that an upgrade was performed, removes the file, and creates a new "welcome.w2p".

The current web2py version is stored in the "VERSION" file, and it follows standard semantic-versioning rules, where the build id is a timestamp.
The unit tests are in `gluon/tests/`.

There are handlers for connecting to various web servers:

```
cgihandler.py       # discouraged
gaehandler.py       # for Google App Engine
fcgihandler.py      # for FastCGI
wsgihandler.py      # for WSGI
isapiwsgihandler.py # for IIS
modpythonhandler.py # deprecated
```

("fcgihandler" calls "gluon/contrib/gateways/fcgi.py", developed by <NAME>) and `anyserver.py`, which is a script to interface with many different web servers, described in Chapter 13.

There are three example files:

```
options_std.py
routes.example.py
router.example.py
```

The first is a configuration file that can be passed to web2py.py with the `-L` option. The second is an example of a URL-mapping file. It is loaded automatically when renamed "routes.py". The third is an alternative syntax for URL mapping, and it can also be renamed (or copied to) "routes.py".

The files

```
app.example.yaml
queue.example.yaml
```

are example configuration files used for deployment on Google App Engine. You can read more about them in the deployment-recipes chapter and on the Google documentation pages.

There are also additional libraries, some of them developed by third parties:

feedparser[feedparser] by <NAME>, for reading RSS and Atom feeds:

```
gluon/contrib/__init__.py
gluon/contrib/feedparser.py
```

markdown2[markdown2] by <NAME>, for the wiki markup language:

```
gluon/contrib/markdown/__init__.py
gluon/contrib/markdown/markdown2.py
```

markmin markup:

```
gluon/contrib/markmin
```

fpdf, created by <NAME>, for generating PDF documents:

`gluon/contrib/fpdf`

This library is not documented in this book, but it is hosted and documented here:

```
http://code.google.com/p/pyfpdf/
```

pysimplesoap, a lightweight SOAP server implementation created by <NAME>:

```
gluon/contrib/pysimplesoap/
```

simplejsonrpc, a lightweight JSON-RPC client, also created by <NAME>:

```
gluon/contrib/simplejsonrpc.py
```

memcache[memcache] Python API by Evan Martin:

```
gluon/contrib/memcache/__init__.py
gluon/contrib/memcache/memcache.py
```

redis_cache:

```
gluon/contrib/redis_cache.py
```

gql, a port of the DAL to Google App Engine:

`gluon/contrib/gql.py`

memdb, a port of the DAL on top of memcache:

```
gluon/contrib/memdb.py
```

gae_memcache, an API to use memcache on Google App Engine:

```
gluon/contrib/gae_memcache.py
```

pyrtf[pyrtf], for generating Rich Text Format (RTF) documents, developed by <NAME> and revised by <NAME>:

`gluon/contrib/pyrtf/`

PyRSS2Gen[pyrss2gen], developed by Dalke Scientific Software, to generate RSS feeds:

```
gluon/contrib/rss2.py
```

simplejson[simplejson] by <NAME>, the standard library for parsing and writing JSON objects:

```
gluon/contrib/simplejson/
```

Google Wallet [googlewallet] provides "pay now" buttons linked to Google as payment processor:

```
gluon/contrib/google_wallet.py
```

Stripe.com [stripe] provides a simple API for accepting credit card payments:

```
gluon/contrib/stripe.py
```

AuthorizeNet [authorizenet] provides a simple API for accepting credit card payments through the Authorize.net network:

```
gluon/contrib/AuthorizeNet.py
```

Dowcommerce [dowcommerce] API for processing credit card payments:

```
gluon/contrib/DowCommerce.py
```

PaymentTech API for processing credit card payments:

```
gluon/contrib/paymentech.py
```
PAM[PAM] authentication API created by <NAME>:

`gluon/contrib/pam.py`

A Bayesian classifier to populate the database with dummy records, for testing purposes (see the sketch after this section):

```
gluon/contrib/populate.py
```

A file with an API for running applications on Heroku.com:

```
gluon/contrib/heroku.py
```

A file that allows interaction with the Windows taskbar when web2py runs as a service:

```
gluon/contrib/taskbar_widget.py
```

Optional login methods (login_methods) and login forms (login_form) for authentication:

```
gluon/contrib/login_methods/__init__.py
gluon/contrib/login_methods/basic_auth.py
gluon/contrib/login_methods/browserid_account.py
gluon/contrib/login_methods/cas_auth.py
gluon/contrib/login_methods/dropbox_account.py
gluon/contrib/login_methods/email_auth.py
gluon/contrib/login_methods/extended_login_form.py
gluon/contrib/login_methods/gae_google_account.py
gluon/contrib/login_methods/ldap_auth.py
gluon/contrib/login_methods/linkedin_account.py
gluon/contrib/login_methods/loginza.py
gluon/contrib/login_methods/oauth10a_account.py
gluon/contrib/login_methods/oauth20_account.py
gluon/contrib/login_methods/oneall_account.py
gluon/contrib/login_methods/openid_auth.py
gluon/contrib/login_methods/pam_auth.py
gluon/contrib/login_methods/rpx_account.py
gluon/contrib/login_methods/x509_auth.py
```

web2py also contains a folder with useful scripts, including:

```
scripts/setup-web2py-fedora.sh
scripts/setup-web2py-ubuntu.sh
scripts/setup-web2py-nginx-uwsgi-ubuntu.sh
scripts/setup-web2py-heroku.sh
scripts/update-web2py.sh
scripts/make_min_web2py.py
...
scripts/sessions2trash.py
scripts/sync_languages.py
scripts/tickets2db.py
scripts/tickets2email.py
...
scripts/extract_mysql_models.py
scripts/extract_pgsql_models.py
...
scripts/access.wsgi
scripts/cpdb.py
```

The `setup-web2py-*` scripts are especially useful because they attempt a complete installation and configuration of a production environment from scratch. Some of them are discussed in Chapter 14, but all of them contain internal documentation explaining their features and options.

Finally, web2py includes these files, required to build the binary distributions:

```
Makefile
setup_exe.py
setup_app.py
```

These are setup scripts for py2exe and py2app, respectively, and they are only required to build the binary distributions of web2py. YOU SHOULD NEVER NEED TO RUN THEM.

web2py applications contain additional files, particularly third-party JavaScript libraries such as jQuery, calendar and codemirror. Credits for each project are documented in their respective files.
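As an illustration of one of these bundled tools, here is a tentative use of the populate contrib mentioned above, run from a web2py shell where `db` is defined (the exact signature may differ; see gluon/contrib/populate.py):

```python
# fill a table with plausible fake records for testing (a sketch)
from gluon.contrib.populate import populate

populate(db.imagen, 100)   # insert about 100 generated rows
db.commit()
```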
### Applications

web2py applications are composed of the following parts:

* models describe a representation of the data as database tables and the relations among tables.
* controllers describe the application logic and workflow.
* views describe how the data should be presented to the user, using HTML and JavaScript.
* languages describe how to translate the application strings into the supported languages.
* static files are files that do not require processing (for example images, CSS stylesheets, etc).
* ABOUT and README are documents whose meaning and use are obvious.
* errors stores error reports generated by the application.
* sessions stores information related to each particular user.
* databases stores SQLite databases and additional table information.
* cache stores cached application items.
* modules are optional Python modules.
* private files are accessible by the controllers but not directly by the developer.
* uploads files are accessible by the models but not directly by the developer (for example, files uploaded by users of the application).
* tests is a directory for storing test scripts, fixtures and mocks.

Models, views, controllers, languages and static files are accessible via the web-based administrative interface [design]. ABOUT, README and the errors are also accessible through the administrative interface via the corresponding menu items. Sessions, cache, modules and private files are accessible to the application but not through the administrative interface.

Everything is neatly organized in a clear directory structure that is replicated for every installed application, although the user never needs to access the filesystem directly:

```
__init__.py
ABOUT
LICENSE
models
views
controllers
modules
private
tests
cron
cache
errors
upload
sessions
static
```

"__init__.py" is an empty file that is required so that Python (and web2py) can import the modules in the `modules` directory.

Notice that the admin application simply provides a web interface to the applications on the server filesystem. web2py applications can also be created and developed from the command line; you can develop them with your favorite text editor or IDE, and you are not forced to use the browser-based administrative interface. A new application can be created manually by replicating the directory structure above in a new subfolder, e.g. "applications/nuevaapp/" (or simply by un-tarring the `welcome.w2p` file into your new application directory). Application files can also be created and edited from the command line without using the admin interface.

Models, controllers and views are executed in an environment where the following objects have already been imported for us:

Global objects:

```
request, response, session, cache
```

Internationalization:

`T`

Navigation:

`redirect, HTTP`

Helpers:

```
XML, URL, BEAUTIFY,
A, B, BODY, BR, CENTER, CODE, COL, COLGROUP, DIV, EM, EMBED,
FIELDSET, FORM, H1, H2, H3, H4, H5, H6, HEAD, HR, HTML, I, IFRAME,
IMG, INPUT, LABEL, LEGEND, LI, LINK, OL, UL, META, OBJECT, OPTION,
P, PRE, SCRIPT, OPTGROUP, SELECT, SPAN, STYLE, TABLE, TAG, TD,
TEXTAREA, TH, THEAD, TBODY, TFOOT, TITLE, TR, TT, URL, XHTML,
xmlescape, embed64, CAT, MARKMIN, MENU, ON
```

Forms and tables:

```
SQLFORM (SQLFORM.factory, SQLFORM.grid, SQLFORM.smartgrid)
```

Validators:

```
CLEANUP, CRYPT, IS_ALPHANUMERIC, IS_DATE_IN_RANGE, IS_DATE,
IS_DATETIME_IN_RANGE, IS_DATETIME, IS_DECIMAL_IN_RANGE, IS_EMAIL,
IS_EMPTY_OR, IS_EXPR, IS_FLOAT_IN_RANGE, IS_IMAGE, IS_IN_DB,
IS_IN_SET, IS_INT_IN_RANGE, IS_IPV4, IS_LENGTH, IS_LIST_OF,
IS_LOWER, IS_MATCH, IS_EQUAL_TO, IS_NOT_EMPTY, IS_NOT_IN_DB,
IS_NULL_OR, IS_SLUG, IS_STRONG, IS_TIME, IS_UPLOAD_FILENAME,
IS_UPPER, IS_URL
```

Database:

`DAL, Field`

For backward compatibility, `SQLDB=DAL` and `SQLField=Field`. We encourage you to use the new `DAL` and `Field` syntax rather than the old one.
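As a minimal sketch of how these injected objects fit together in an ordinary app (table and action names are illustrative):

```python
# models/db_demo.py -- illustrative model file
db = DAL('sqlite://demo.sqlite')
db.define_table('nota', Field('titulo', requires=IS_NOT_EMPTY()))

# controllers/default.py
def nueva_nota():
    # SQLFORM, redirect, URL, T and response are all injected automatically
    form = SQLFORM(db.nota).process()
    if form.accepted:
        response.flash = T('record saved')
        redirect(URL('index'))
    return dict(form=form)
```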
Other objects and modules are defined in the libraries, but they are not automatically imported, since they are not used as often.

The core API objects in the web2py execution environment are `request`, `response`, `session`, `cache`, `URL`, `HTTP`, `redirect` and `T`, and they are discussed below. A few objects and functions, including Auth, Crud and Service, are defined in "gluon/tools.py" and need to be imported when required:

```
from gluon.tools import Auth, Crud, Service
```

# Accessing the API from Python modules

Your models or controllers may import Python modules, and these may need to use some of the web2py API. The way to do so is by importing those parts:

`from gluon import *`

In fact, any Python module, even one not imported inside a web2py execution environment, can import the web2py API as long as web2py is on the `sys.path`. There is one caveat, however. web2py defines some global objects (request, response, session, cache, T) that can only exist when an HTTP request is present (or is simulated). Therefore, modules can access them only if they are called from within an application. For this reason they are placed into a container called `current`, which is a thread-local object. Here is an example.

Create a module "/miapp/modules/prueba.py" that contains:

```
from gluon import *

def ip():
    return current.request.client
```

Now from a controller in "miapp" you can do:

```
import prueba

def index():
    return "Tu ip es " + prueba.ip()
```

Notice a few things:

* `import prueba` looks for the module first in the app's modules folder, then in the folders listed in `sys.path`. Therefore, app-level modules always take precedence over regular Python modules. This allows different apps to ship different versions of their modules, without conflicts.
* Different users may call the same `index` action concurrently, which calls the function in the module, and yet there is no conflict, because `current.request` is a different object in different threads. Just be careful not to access `current.request` outside of functions or classes (i.e., at the top level) in the module.
* `import prueba` is a shortcut for

```
from applications.nombreapp.modules import prueba
```

Using the longer syntax, it is possible to import modules from other applications.

For uniformity with standard Python behavior, by default web2py does not reload modules when they change. However, this can be changed. To enable automatic module reloading, use the `track_changes` function as follows (typically in a model file, before any imports):

```
from gluon.custom_import import track_changes; track_changes(True)
```

From now on, every time a module is imported, the import mechanism checks whether the Python source file (.py) has changed. If it has changed, the module is reloaded. Do not call track_changes in the modules themselves. track_changes only checks for changes in modules that are stored inside applications.

Modules that import `current` get access to:

* `current.request`
* `current.response`
* `current.session`
* `current.cache`
* `current.T`

and to any other variable your application chooses to store in current. For example, a model could do this:

```
auth = Auth(db)
from gluon import current
current.auth = auth
```

and now all imported modules have access to `current.auth`.
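A module can then consume whatever the application stored, always reading `current` at call time; for example (a sketch, assuming the model above has run and the module name is illustrative):

```python
# modules/saludo.py -- illustrative module
from gluon import current

def saludar():
    # fetch per-request objects inside the function, never at import time
    auth, T = current.auth, current.T
    if auth.user:
        return T('Hello %s') % auth.user.first_name
    return T('Hello guest')
```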
`current` and `import` provide a powerful mechanism for building extensible and reusable modules for your applications.

There is one important caveat, though. Given `from gluon import current`, it is correct to use `current.request` (or any of the other thread-local objects), but one should never assign them to module-level global variables, as in:

```
request = current.request # WRONG! DANGER!
```

nor assign them to class attributes:

```
class MyClass:
    request = current.request # WRONG! DANGER!
```

This is because thread-local objects must be extracted at runtime. Global variables, instead, are defined only once, when the module is first imported.

There is another caveat, related to the cache. You cannot use the `cache` object to decorate functions in modules, because it would not behave as expected. In order to cache a function `f` in a module, you must use `lazy_cache`:

```
from gluon.cache import lazy_cache

@lazy_cache('clave', time_expire=60, cache_model='ram')
def f(a, b, c):
    ....
```

Note that the key is user-defined but must be uniquely associated with the function. If omitted, web2py determines a key automatically.

# request

The `request` object is an instance of the ubiquitous `gluon.storage.Storage` class, which extends the Python `dict` class. It is basically a dictionary, but the item values can also be accessed as attributes:

`request.vars`

is the same as:

`request['vars']`

Unlike a dictionary, if an attribute (or key) does not exist, Storage does not raise an exception: it returns `None` instead.

It is sometimes useful to create your own Storage objects. You can do so as follows:

```
from gluon.storage import Storage
mi_storage = Storage()  # empty Storage object
mi_otro_storage = Storage(dict(a=1, b=2))  # convert a dictionary into a Storage
```

`request` has the following items/attributes, some of which are also instances of the `Storage` class:

* `request.cookies`: a `Cookie.SimpleCookie()` object containing the cookies passed with the HTTP request. It acts like a dictionary of cookies. Each cookie is a Morsel object [morsel].
* `request.env`: a `Storage` object containing the environment variables passed to the controller, including the HTTP request header variables and the standard WSGI parameters. The environment variables are all converted to lowercase, and dots are converted to underscores for easier memorization.
* `request.application`: the name of the requested application.
* `request.controller`: the name of the requested controller.
* `request.function`: the name of the requested function.
* `request.extension`: the extension of the requested action. It defaults to "html". If the controller function returns a dictionary and does not specify a view, this is used to determine the extension of the view file that will render the dictionary (parsed from `request.env.path_info`).
* `request.folder`: the application directory. For example, if the application is "welcome", `request.folder` is set to the absolute path "ruta/a/welcome".
In your programs, you should always use this variable and the `os.path.join` function to build paths to the files you need to manipulate. Although web2py always uses absolute paths, it is a good rule never to explicitly change the current working directory (whatever that may be), since this is not a thread-safe practice.

* `request.now`: a `datetime.datetime` object storing the date and time of the current request.
* `request.utcnow`: a `datetime.datetime` object storing the UTC date and time of the current request.
* `request.args`: a list of the URL path components following the controller function name; equivalent to

```
request.env.path_info.split('/')[3:]
```

* `request.vars`: an object containing the HTTP GET and HTTP POST query variables.
* `request.get_vars`: an object containing only the HTTP GET query variables.
* `request.post_vars`: an object containing only the HTTP POST query variables.
* `request.client`: the client IP address, as determined by `request.env.http_x_forwarded_for`, if present, or by `request.env.remote_addr` otherwise. While this is useful, it should not be trusted, because `http_x_forwarded_for` can be spoofed.
* `request.is_local`: `True` if the client is localhost, `False` otherwise. It should work behind a proxy if the proxy supports `http_x_forwarded_for`.
* `request.is_https`: `True` if the request uses the HTTPS protocol, `False` otherwise.
* `request.body`: a read-only file stream containing the body of the HTTP request. It is automatically parsed to build `request.post_vars` and then rewound. It can be read with `request.body.read()`.
* `request.ajax` is True if the function was called via an Ajax request.
* `request.cid` is the `id` of the component that generated the Ajax request (if any). You can read more about components in Chapter 12.
* `request.requires_https()` prevents any further code from running if the request was not made over HTTPS, and redirects the visitor to the current page over that protocol.
* `request.restful`: a new and very useful decorator that can be used to change the default behavior of web2py actions by separating requests according to GET/POST/PUT/DELETE. It is discussed in some detail in Chapter 10.
* `request.user_agent()` parses the user_agent field from the client and returns the information as a dictionary. It is useful for detecting mobile devices. It uses "gluon/contrib/user_agent_parser.py", created by <NAME>. To see how it works, try embedding the following code in a view:

```
{{=BEAUTIFY(request.user_agent())}}
```

* `request.global_settings` contains web2py system-wide settings. They are set automatically and you should not change them. For example, `request.global_settings.gluon_parent` contains the full path to the web2py folder, and `request.global_settings.is_pypy` determines whether web2py is running on PyPy.
* `request.wsgi` is a hook that allows you to call third-party WSGI applications from inside actions. It includes:
  * `request.wsgi.environ`
  * `request.wsgi.start_response`
  * `request.wsgi.middleware`

Its usage is discussed at the end of this chapter.
As an example, the following call on a typical system:

```
http://127.0.0.1:8000/examples/default/status/x/y/z?p=1&q=2
```

results in the following `request` object:

variable | value
--- | ---
request.application | examples
request.controller | default
request.function | index
request.extension | html
request.view | status
request.folder | applications/examples/
request.args | ['x', 'y', 'z']
request.vars | <Storage {'p': 1, 'q': 2}>
request.get_vars | <Storage {'p': 1, 'q': 2}>
request.post_vars | <Storage {}>
request.is_local | False
request.is_https | False
request.ajax | False
request.cid | None
request.wsgi | <hook>
request.env.content_length | 0
request.env.content_type | 
request.env.http_accept | text/xml,text/html;
request.env.http_accept_encoding | gzip, deflate
request.env.http_accept_language | en
request.env.http_cookie | session_id_examples=127.0.0.1.119725
request.env.http_host | 127.0.0.1:8000
request.env.http_referer | http://web2py.com/
request.env.http_user_agent | Mozilla/5.0
request.env.path_info | /examples/simple_examples/status
request.env.query_string | 
request.env.remote_addr | 127.0.0.1
request.env.request_method | GET
request.env.script_name | 
request.env.server_name | 127.0.0.1
request.env.server_port | 8000
request.env.server_protocol | HTTP/1.1
request.env.server_software | Rocket 1.2.6
request.env.web2py_path | /Users/mdipierro/web2py
request.env.web2py_version | Version 2.4.1
request.env.wsgi_errors | <open file, mode 'w' at >
request.env.wsgi_input | 
request.env.wsgi_url_scheme | http

Which environment variables are actually defined depends on the web server. Here we assume the built-in Rocket wsgi server. The set of variables is not much different when using the Apache web server.

The `request.env.http_*` variables are parsed from the HTTP request header.

The `request.env.web2py_*` variables are not parsed from the web server environment; they are created by web2py in case your applications need to know about the web2py version and location, and whether it is running on Google App Engine (because specific optimizations may be necessary). Also notice the `request.env.wsgi_*` variables, which are specific to the wsgi adaptor.

`response` ¶

`response` is another instance of the `Storage` class. It contains the following:

* `response.body`: a `StringIO` object into which web2py writes the output page body. NEVER CHANGE THIS VARIABLE.
* `response.cookies`: similar to `request.cookies`, but while the latter contains the cookies sent from the client to the server, the former contains cookies sent by the server to the client. The session cookie is handled automatically.
* `response.download(request, db)`: a method used to implement the controller function that allows downloading of uploaded files. `response.download` expects the last argument in `request.args` to be the encoded filename (i.e., the filename generated at upload time and stored in the upload field). It extracts the upload field name and table name, as well as the original filename, from the encoded filename. `response.download` takes two optional arguments: `chunk_size` sets the size in bytes for chunked streaming (defaults to 64K), and `attachments` determines whether the downloaded file should be treated as an attachment or not (defaults to `True`). Note that `response.download` is specifically for downloading files associated with database upload fields.
Use `response.stream` (see below) for other types of file downloads and streaming. Also, note that it is not necessary to use `response.download` to access files uploaded to the static folder -- static files can (and generally should) be accessed directly via their URL (e.g., /app/static/files/miarchivo.pdf).

* `response.files`: a list of .css, .js, .coffee, and .less files required by the page. They will automatically be linked in the head of the standard "layout.html" via the included "web2py_ajax.html" view. To include a new CSS, JS, COFFEE, or LESS file, just append it to this list. Duplicates are handled. The order is significant.
* `response.include_files()`: generates html head tags to include all the files in `response.files` (used by "views/web2py_ajax.html").
* `response.flash`: optional parameter that may be included in the views. Normally used to notify the user about something that happened.
* `response.headers`: a `dict` for the HTTP response headers. web2py sets some headers by default, including "Content-Length", "Content-Type", and "X-Powered-By" (set to web2py). web2py also sets the "Cache-Control", "Expires", and "Pragma" headers to prevent client-side caching, except for static file requests, for which client-side caching is enabled. The headers that web2py sets can be overwritten or removed, and new headers can be added (e.g., `response.headers['Cache-Control'] = 'private'`). You can remove a header by deleting its key from the response.headers dict, e.g. `del response.headers['Custom-Header']`; however, web2py's default headers will be re-added just before returning the response. To avoid this behavior, instead set the header value to None; for example, to remove the default Content-Type header, use

```
response.headers['Content-Type'] = None
```

* `response.menu`: optional parameter that may be included in the views, normally used to pass a navigation menu tree to the view. It can be rendered by the MENU helper.
* `response.meta`: a Storage (dict-like) object containing optional `<meta>` information like `response.meta.author`, `.description`, and/or `.keywords`. The content of each meta variable is automatically placed in the proper `META` tag by the code in "views/web2py_ajax.html", which is included by "views/layout.html".
* `response.include_meta()`: generates a string that includes all the `response.meta` headers serialized (used by "views/web2py_ajax.html").
* `response.postprocessing`: this is a list of functions, empty by default. These functions are used to filter the response object at the output of an action, before the output is rendered by the view. It can be used to implement support for other template languages.
* `response.render(vista, variables)`: a method used to call the view explicitly inside the controller. `vista` is an optional parameter which is the name of the view file, `variables` is a dictionary of named values passed to the view.
* `response.session_file`: file stream containing the session.
* `response.session_file_name`: name of the file where the session will be saved.
* `response.session_id`: the id of the current session. It is determined automatically. NEVER CHANGE THIS VARIABLE.
* `response.session_id_name`: the name of the session cookie for the current application. NEVER CHANGE THIS VARIABLE.
* `response.status`: the HTTP status code integer passed to the response. Default is 200 (OK).
* `response.stream(archivo, chunk_size, request=request, attachment=False, filename=None, headers=None)`: when a controller returns it, web2py streams the file content back to the client in blocks of size `chunk_size`. The `request` parameter is required in order to use the chunk start in the HTTP header. As noted above, `response.download` should be used to retrieve files stored via an upload field. `response.stream` can be used in other cases, such as returning a temporary file or a StringIO object created in the controller. If `attachment` is True, the Content-Disposition header will be set to "attachment", and if a filename is also passed, it will be added to that header as well (but only when `attachment` is True). If not already included in `response.headers`, the following response headers will be set automatically: Content-Type, Content-Length, Cache-Control, Pragma, and Last-Modified (the latter three are set to allow browser caching of the file). To override any of these automatic headers, simply set them in `response.headers` before calling `response.stream`.
* `response.subtitle`: optional parameter that may be included in the views. It should contain the subtitle of the page.
* `response.title`: optional parameter that may be included in the views. It should contain the title of the page and should be rendered by the HTML title TAG in the header.
* `response.toolbar`: a function that allows you to embed a toolbar into the page for debugging purposes. The toolbar displays the request, response, session variables and the database access time for each query.
* `response._vars`: this variable is accessible only in a view, not in the action. It contains the values returned by the action to the view.
* `response._caller`: this is a function that wraps all action calls. It defaults to the identity function, but it can be modified in order to catch special types of exception and to do extra logging:

```
response._caller = lambda f: f()
```

* `response.optimize_css`: can be set to "concat,minify,inline" to concatenate, minify and inline the CSS files included by web2py.
* `response.optimize_js`: can be set to "concat,minify,inline" to concatenate, minify and inline the JavaScript files included by web2py.
* `response.view`: the name of the view template that must render the page. This is set by default to:

```
"%s/%s.%s" % (request.controller, request.function, request.extension)
```

or, if the above file cannot be located, to:

```
"generic.%s" % (request.extension)
```

Change the value of this variable to modify the view file associated with a particular action.
* `response.delimiters`: defaults to `('{{','}}')`.
It allows you to change the delimiters of code embedded in views.

* `response.xmlrpc(request, methods)`: when a controller returns it, this function exposes the methods via XML-RPC[xmlrpc]. This function is deprecated since a better mechanism is available; it is described in Chapter 10.
* `response.write(text)`: a method to write text into the output page body.
* `response.js` can contain JavaScript code. This code will be executed if and only if the response is received by a web2py component, as discussed in Chapter 12.

Since `response` is a `Storage` object, it can be used to store other attributes that you may want to pass to the view. While there is no technical restriction, our recommendation is to store only variables that are to be rendered by all pages in the overall layout ("layout.html"). In any case, we strongly suggest sticking to the variables listed here:

```
response.title
response.subtitle
response.flash
response.menu
response.meta.author
response.meta.description
response.meta.keywords
response.meta.*
```

because this will make it much easier to replace the "layout.html" file that comes with web2py with another layout file, one that uses the same set of variables.

Old versions of web2py used `response.author` instead of `response.meta.author`, and likewise for the other meta attributes.

`session` ¶

`session` is another instance of the `Storage` class. Whatever is stored into it, for example:

```
session.myvariable = "hola"
```

can be retrieved later:

```
a = session.myvariable
```

as long as the code is executed within the same session by the same user (provided the user has not deleted the session cookies and the session did not expire). Because `session` is a `Storage` object, trying to access an attribute/key that has not been set does not raise an exception: it returns `None` instead.

The session object has three important methods. One is `forget`, which tells web2py not to save the session. It should be used in controllers whose actions are called often and do not need to track user activity. `session.forget()` prevents the session file from being written, regardless of whether it has been modified or not; additionally it unlocks and closes the session file. You will rarely need to call this method, since sessions are not saved when they have not changed. However, if the page makes multiple simultaneous Ajax requests, it is a good idea for the actions called via Ajax to call `session.forget()` (provided the session is not needed by the action). Otherwise each Ajax action would have to wait for the previous one to complete (and unlock the session file) before proceeding, slowing down the page load. Note that sessions are not locked when stored in the database.

Another method is:

`session.secure()`

which tells web2py to set the session cookie as a secure cookie. It should be set if the app is running over https. By setting the session cookie as secure, the server is asking the browser not to send the cookie back to the server unless the connection is over https.

The other method is `connect`. By default sessions are stored on the filesystem and the session cookie is used to store and retrieve the `session.id`.
Using the `connect` method it is possible to tell web2py to store sessions in the database or in cookies instead, thus removing the need for filesystem access for session handling. For example, to store sessions in the database:

```
session.connect(request, response, db, masterapp=None)
```

where `db` is the name of an open database connection (as returned by the DAL). It tells web2py that you want to store the sessions in the database and not on the filesystem. `session.connect` must come after `db=DAL(...)`, but before any other logic that requires the session, for example, setting up `Auth`.

web2py creates a table:

```
db.define_table('web2py_session',
    Field('locked', 'boolean', default=False),
    Field('client_ip'),
    Field('created_datetime', 'datetime', default=now),
    Field('modified_datetime', 'datetime'),
    Field('unique_key'),
    Field('session_data', 'text'))
```

and stores cPickled sessions in the `session_data` field.

The option `masterapp=None`, by default, tells web2py to try to retrieve an existing session for the application with the name in `request.application`, in the running application. If you want two or more applications to share sessions, set `masterapp` to the name of the master application.

To store sessions in cookies instead you can do:

```
session.connect(request, response, cookie_key='yoursecret', compression_level=None)
```

Here `cookie_key` is a symmetric encryption key. `compression_level` is an optional `zlib` compression level. While cookie-based sessions are often recommended for scalability reasons, they are limited in size. Large sessions will result in broken cookies.

You can check the state of your application at any time by printing the `request`, `session` and `response` system variables. One way to do it is to create a dedicated action:

```
def status():
    return dict(request=request, session=session, response=response)
```

In the "generic.html" view this is done using `{{=BEAUTIFY(response._vars)}}`.

# Separating sessions¶

If you store sessions on the filesystem and you have lots of them, the filesystem access may become a bottleneck. One solution is the following:

```
session.connect(request, response, separate=True)
```

By setting `separate=True`, web2py will store sessions not in the `sessions/` folder but in subfolders of that folder. Each subfolder is created automatically. Sessions with the same prefix are placed in the same subfolder. Again, note that this must be executed before any other logic that requires the session.

`cache` ¶

`cache` is a global object also available in the web2py execution environment. It has two attributes:

* `cache.ram`: the application cache in main memory.
* `cache.disk`: the application cache on disk.

`cache` is callable; this allows it to be used as a decorator for caching actions and views.

The following example caches the `time.ctime()` function in RAM:

```
def cache_en_ram():
    import time
    t = cache.ram('tiempo', lambda: time.ctime(), time_expire=5)
    return dict(tiempo=t, link=A('clic aquí', _href=request.url))
```

The output of `lambda: time.ctime()` is cached in RAM for 5 seconds. The string `'tiempo'` is used as the cache key.
The following example caches the `time.ctime()` function on disk:

```
def cache_en_disco():
    import time
    t = cache.disk('tiempo', lambda: time.ctime(), time_expire=5)
    return dict(tiempo=t, link=A('clic aquí', _href=request.url))
```

The output of `lambda: time.ctime()` is cached on disk (using the shelve module) for 5 seconds.

Note that the second argument to `cache.ram` and `cache.disk` must be a function or callable object. If you want to cache an existing object rather than the output of a function, you can simply return it via a lambda function:

```
cache.ram('miobjeto', lambda: miobjeto, time_expire=60*60*24)
```

The next example caches the `time.ctime()` function in both RAM and disk:

```
def cache_en_ram_y_disco():
    import time
    t = cache.ram('tiempo', lambda: cache.disk('tiempo',
                  lambda: time.ctime(), time_expire=5),
                  time_expire=5)
    return dict(tiempo=t, link=A('clic aquí', _href=request.url))
```

The output of `lambda: time.ctime()` is cached on disk (using the shelve module) and then in RAM for 5 seconds. web2py looks in RAM first and, if it is not there, it looks on disk. If it is neither in RAM nor on disk, `lambda: time.ctime()` is executed and the cache is updated. This technique is useful in a multiprocess environment. The two times do not have to be the same.

The following example caches the output of the controller function in RAM (but not the view):

```
@cache(request.env.path_info, time_expire=5, cache_model=cache.ram)
def cache_del_controlador_en_ram():
    import time
    t = time.ctime()
    return dict(tiempo=t, link=A('clic aquí', _href=request.url))
```

The output of `cache_del_controlador_en_ram` is cached for 5 seconds. Note that the result of a database select cannot be cached without first being serialized. A better way is to cache the database select directly using the `cache` argument of the `select` method.

The following example caches the output of the controller function on disk (but not the view):

```
@cache(request.env.path_info, time_expire=5, cache_model=cache.disk)
def cache_del_controlador_en_disco():
    import time
    t = time.ctime()
    return dict(tiempo=t, link=A('clic para refrescar', _href=request.url))
```

The output of `cache_del_controlador_en_disco` is cached on disk for 5 seconds. Remember that web2py cannot cache a dictionary that contains unpickleable objects.

It is also possible to cache the view. The trick is to render the view in the controller function, so that the controller returns a string. This is done by returning `response.render(d)`, where `d` is the dictionary we intended to pass to the view. The following example caches the output of the controller function in RAM (including the rendered view):

```
@cache(request.env.path_info, time_expire=5, cache_model=cache.ram)
def cache_de_controlador_y_vista():
    import time
    t = time.ctime()
    d = dict(time=t, link=A('Clic para refrescar', _href=request.url))
    return response.render(d)
```

`response.render(d)` returns the rendered view as a string, which is now cached for 5 seconds. This is the best and fastest way of caching.

Note that `time_expire` is used to compare the current time with the time the requested object was last saved in the cache.
It does not affect future requests. This allows `time_expire` to be set dynamically when an object is requested, rather than being fixed when the object is saved. For example:

```
mensaje = cache.ram('mensaje', lambda: 'Hola', time_expire=5)
```

Now, suppose the following call is made 10 seconds after the above call:

```
mensaje = cache.ram('mensaje', lambda: 'Adiós', time_expire=20)
```

Because `time_expire` is set to 20 seconds in the second call and only 10 seconds have elapsed since the message was first saved, the value "Hola" will be retrieved from the cache, and it will not be updated with "Adiós". The `time_expire` value of 5 seconds in the first call has no impact on the second call.

Setting `time_expire=0` (or a negative value) forces the cached item to be refreshed (because the elapsed time since the last save will always be > 0), and setting `time_expire=None` forces retrieval of the cached value, regardless of the time elapsed since it was saved (if `time_expire` is always `None`, the cached item effectively never expires).

You can clear one or more cache variables with

```
cache.ram.clear(regex='...')
```

where `regex` is a regular expression matching all the keys you want removed from the cache. You can also clear a single item with:

```
cache.ram(clave, None)
```

where `clave` is the key of the cached item.

It is also possible to define other caching mechanisms such as memcache. Memcache is available via `gluon.contrib.memcache` and is discussed in more detail in Chapter 14.

Be careful with caching, because it usually works at the application level, not at the user level. If you need, for example, to cache user-specific content, use a key that includes the user id.

`URL` ¶

The `URL` function is one of the most important functions in web2py. It generates internal URL paths for actions and static files. Here is an example:

`URL('f')`

is mapped into

```
/[application]/[controller]/f
```

Notice that the output of the `URL` function depends on the name of the current application, the calling controller and other parameters. web2py supports URL mapping and reverse URL mapping. URL mapping allows you to redefine the format of external URLs. If you use the `URL` function to generate all the internal URLs, then additions or changes to the URL mappings will not produce broken links within the application.

You can pass additional parameters to the `URL` function, i.e., extra terms in the URL path (args) and URL query variables (vars):

```
URL('f', args=['x', 'y'], vars=dict(z='t'))
```

is mapped into

```
/[application]/[controller]/f/x/y?z=t
```

The `args` attributes are automatically parsed, decoded, and finally stored in `request.args` by web2py. Similarly, the query variables are stored in `request.vars`. `args` and `vars` provide the basic mechanism by which web2py exchanges information with the client browser. If args contains only one element, there is no need to pass it in a list.
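To make that round trip concrete, here is a minimal sketch (the action names `index` and `echo` are hypothetical; `A` is the standard web2py link helper) that builds a link with the `URL` function and reads the values back on the other side:

```
# in a controller: build a link to the 'echo' action below
def index():
    return dict(link=A('try me',
                       _href=URL('echo', args=['x', 'y'], vars=dict(z='t'))))

# visiting /app/default/echo/x/y?z=t yields
# request.args == ['x', 'y'] and request.vars.z == 't'
def echo():
    return dict(args=request.args, vars=request.vars)
```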
You can also use the `URL` function to generate URLs to actions in other controllers and other applications:

```
URL('a', 'c', 'f', args=['x', 'y'], vars=dict(z='t'))
```

is mapped into

`/a/c/f/x/y?z=t`

It is also possible to specify the application, controller and function using named arguments:

```
URL(a='a', c='c', f='f')
```

If the application name is missing, the current app is assumed.

`URL('c', 'f')`

If the controller name is missing, the current one is assumed.

`URL('f')`

Instead of passing the name of a controller function, it is also possible to pass the function itself:

`URL(f)`

For the reasons mentioned above, you should always use the `URL` function to generate URLs of static files for your applications. Static files are stored in the application's `static` subfolder (that is where they go when uploaded via the administrative interface). web2py provides a virtual 'static' controller whose job is to retrieve files from the `static` subfolder, determine their content type, and stream the file to the client. The following example generates the URL for the static image "imagen.png":

```
URL('static', 'imagen.png')
```

is mapped into

```
/[application]/static/imagen.png
```

If the static image is in a subfolder within the `static` folder, you can include the subfolder(s) as part of the filename. For example, to generate:

```
/[application]/static/imagenes/iconos/flecha.png
```

one should use:

```
URL('static', 'imagenes/iconos/flecha.png')
```

You do not need to encode/escape the `args` and `vars` arguments; this is done automatically for you.

By default, the extension corresponding to the current request (which can be found in `request.extension`) is appended to the function, unless request.extension is html, the default. This behavior can be overridden by explicitly including an extension as part of the function name, `URL(f='nombre.ext')`, or with the extension argument:

```
URL(..., extension='css')
```

The current extension can be explicitly suppressed:

```
URL(..., extension=False)
```

# Absolute URLs¶

By default, `URL` generates relative URLs. However, you can also generate absolute URLs by specifying the `scheme` and `host` arguments (this is useful, for example, when inserting URLs in email messages):

```
URL(..., scheme='http', host='www.misitio.com')
```

You can automatically include the scheme and host of the current request by simply setting the arguments to `True`:

```
URL(..., scheme=True, host=True)
```

The `URL` function also accepts a `port` argument to specify the server port if necessary.

# Digitally signed URLs¶

When generating a URL, you have the option to digitally sign it. This will append a `_signature` GET variable that can be verified by the server. This can be done in two different ways.

You can pass the following arguments to the URL function:

* `hmac_key`: the key for signing the URL (a string)
* `salt`: an optional string to salt the data before signing
* `hash_vars`: an optional list of names of variables from the URL query string (i.e., GET variables) to be included in the signature. It can also be set to `True` (the default) to include all variables, or `False` to include none of the variables.
Here is an example of usage:

```
KEY = 'miclave'

def uno():
    return dict(link=URL('dos', vars=dict(a=123), hmac_key=KEY))

def dos():
    if not URL.verify(request, hmac_key=KEY):
        raise HTTP(403)
    # do something
    return locals()
```

This makes the action `dos` accessible only via a digitally signed URL. A digitally signed URL looks like this:

```
'/welcome/default/dos?a=123&_signature=4981bc70e13866bb60e52a09073560ae822224e9'
```

Note that the digital signature is verified via the `URL.verify` function. `URL.verify` also takes the `hmac_key`, `salt`, and `hash_vars` arguments described above, and their values must match the values that were passed to the `URL` function when the digital signature was created in order to verify the URL.

A second and more sophisticated but more common use of digitally signed URLs is in conjunction with Auth. This is best explained with an example:

```
@auth.requires_login()
def uno():
    return dict(link=URL('dos', vars=dict(a=123), user_signature=True))

@auth.requires_signature()
def dos():
    # do something
    return locals()
```

In this case the `hmac_key` is automatically generated and shared within the session. This allows action `dos` to delegate all access control to action `uno`. If the link is generated and signed, it is valid; otherwise it is not. If the link is stolen by another user, it will be invalid.

It is good practice to digitally sign all Ajax callbacks. If you use the `LOAD` function, it also has a `user_signature` argument that can be used for this purpose:

```
{{=LOAD('default', 'dos', vars=dict(a=123), ajax=True, user_signature=True)}}
```

`HTTP` and `redirect` ¶

web2py defines only one exception, called `HTTP`. This exception can be raised anywhere in a model, a controller, or a view with the command:

```
raise HTTP(400, "mi mensaje")
```

It causes the control flow to jump away from the user's code, back to web2py, and to return an HTTP response with the given status code and message. The first argument of `HTTP` is the HTTP status code. The second argument is the string that will be returned as the body of the response. Additional optional named arguments are used to build the HTTP response header. For example:

```
raise HTTP(400, 'mi mensaje', test='hola')
```

generates a response whose headers also include `test: hola`.

If you do not want to commit the open database transaction, you can roll it back before raising the exception. Any exception other than `HTTP` causes web2py to roll back any open database transaction, log the error, issue a ticket to the visitor, and return a standard error page. This means that control flow across pages is only possible via `HTTP`. Other exceptions must be caught by the application; otherwise, web2py issues a ticket.

The command:

```
redirect('http://www.web2py.com')
```

is simply a shortcut for:

```
raise HTTP(303, 'You are being redirected to this <a href="%s">web page</a>' % ubicacion,
           Location='http://www.web2py.com')
```

The named arguments of the `HTTP` initializer method are translated into HTTP header directives, in this case, the target location of the redirection. `redirect` takes an optional second argument, which is the HTTP status code for the redirection (303 by default). Change this number to 307 for a temporary redirect, or to 301 for a permanent redirect.
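For example, a page that has moved permanently could issue a 301 instead of the default 303; a minimal sketch, where the target action name `new_home` is hypothetical:

```
# permanent redirect: the second argument is the HTTP status code
redirect(URL('new_home'), 301)
```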
The most common way to redirect is to redirect to other pages in the same app and (optionally) pass parameters:

```
redirect(URL('index', args=(1,2,3), vars=dict(a='b')))
```

In Chapter 12 we discuss web2py components. They make Ajax requests to web2py actions. If the called action performs a redirect, you may want the Ajax request to follow the redirect, or you may want the entire page to be redirected. In the latter case you can set:

```
redirect(..., type='auto')
```

### Internationalization and Pluralization with `T` ¶

The object `T` is the language translator. It constitutes a single global instance of the web2py class `gluon.language.translator`. All string constants (and only string constants) should be marked by `T`, for example:

`a = T("hola mundo")`

Strings that are marked with `T` are identified by web2py as needing language translation and they will be translated when the code (in the model, controller, or view) is executed. If the string to be translated is not a constant but a variable, it will be added to the translation file at runtime (except on GAE) for later translation.

The `T` object also supports variable interpolation and multiple equivalent syntaxes:

```
a = T("hola %s", ('Timoteo',))
a = T("hola %(nombre)s", dict(nombre='Timoteo'))
a = T("hola %s") % ('Tim',)
a = T("hola %(nombre)s") % dict(nombre='Timoteo')
```

The latter syntax is recommended because it makes translation easier. The first string is translated according to the requested language file, and the `nombre` variable is replaced independently of the language.

You can concatenate translated strings and normal strings:

```
T("bla ") + nombre + T("bla")
```

The following code is also allowed and often preferable:

```
T("bla %(nombre)s bla", dict(nombre='Timoteo'))
```

or the alternative syntax:

```
T("bla %(nombre)s bla") % dict(nombre='Timoteo')
```

In both cases the translation occurs before the nombre variable is substituted in the "%(nombre)s" slot. The following alternative should NOT be used:

```
T("bla %(nombre)s bla" % dict(nombre='Timoteo'))
```

because translation would occur after substitution.

# Setting the language¶

The requested language is determined by the "Accept-Language" field in the HTTP header, but this selection can be overwritten programmatically by requesting a specific language file, for example:

`T.force('it-it')`

which reads the "languages/it-it.py" language file. Language files can be created and edited via the administrative interface.

You can also force a per-string language:

```
T("<NAME>", language="it-it")
```

In the case where multiple languages are requested, for example "it-it, fr-fr", web2py tries to locate the "it-it.py" and "fr-fr.py" translation files. If none of them is present, it falls back on "it.py" and "fr.py". If these files are not found either, it falls back on "default.py". If that file is not found, it defaults to no translation. The more general rule is that web2py tries "xx-xy-yy.py", then "xx-xy.py", then "xx.py", then "default.py" for each of the accepted "xx-xy-yy" languages, looking for the closest match to the visitor's preferences.
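As a sketch of how a per-visitor language choice is usually wired up (the action name `set_language` and the `session.lang` key are assumptions, not part of web2py itself):

```
# in a model: honor a language previously chosen by the visitor
if session.lang:
    T.force(session.lang)

# in a controller: a hypothetical action such as /default/set_language/it-it,
# assuming a matching languages/it-it.py file exists in the application
def set_language():
    session.lang = request.args(0)
    redirect(URL('index'))
```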
You can disable translations altogether with

`T.force(None)`

Normally, string translation is evaluated lazily when the view is rendered; hence, `force` should not be called inside a view. It is possible to disable lazy evaluation with

`T.lazy = False`

In this way, strings are translated immediately by the `T` operator based on the currently set or forced language. Lazy evaluation can also be disabled for individual strings:

```
T("<NAME>", lazy=False)
```

A common issue is the following. The original application is in English. Suppose there is a translation file (for example Italian, "it-it.py") and the HTTP client declares that it accepts both English and Italian, in that order. The following unwanted situation occurs: web2py does not know that the app was written in English (en) by default. Therefore, it prefers translating everything into Italian (it-it) because it only found the Italian translation file. Had it not found the "it-it.py" file, it would have used the default language strings (English).

There are two possible solutions to this problem: create a translation file for English, which would be redundant and unnecessary, or better, tell web2py which languages should use the default language strings (the strings coded into the application). This can be done with:

```
T.set_current_languages('en', 'en-en')
```

It stores in `T.current_languages` a list of languages that do not require translation and forces a reload of the language files.

Note that "it" and "it-it" are different languages from the point of view of web2py. To support both of them, one would need two translation files, always lowercase. The same holds for all other languages.

The currently accepted language is stored in

`T.accepted_language`

# Translating variables¶

T(...) does not only translate strings; it can also translate values stored in variables:

```
>>> a = "test"
>>> print T(a)
```

In this case the word "test" is translated but, if it is not found and the filesystem is writable, it is added to the list of words to translate in the language file. Notice that this can result in lots of file IO, and you may want to disable it:

```
T.is_writable = False
```

prevents T from dynamically updating language files.

# Comments and multiple translations¶

It is possible that the same string appears in different contexts in the application and needs different translations depending on the context. In order to solve this problem, one can add comments to the original string. The comments are not rendered; web2py uses them to determine the most appropriate translation. For example:

```
T("hola mundo ## primer caso")
T("hola mundo ## segundo caso")
```

The text following the `##`, including the double `##`, is a comment.

# The pluralization engine¶

Since version 2.0, web2py includes a powerful pluralization system (PS). This means that when text marked for translation depends on a numeric variable, it may be translated differently depending on the numeric value. For example, in English we may render:

`x book(s)`

as

```
a book (x==1)
5 books (x==5)
```

English has one singular form and one plural form. The plural form is constructed by adding "-s" or "-es" or by using a special form.
web2py provides a way to define pluralization rules for each language, as well as exceptions to those rules. In fact, web2py already handles the rules for many languages, plus exceptions to those rules. It knows, for example, that Slovenian has one singular form and 3 plural forms (for x==1, x==3 or x==4, and x>4). These rules are encoded in the "gluon/contrib/plural_rules/*.py" files and new files can be created. Explicit pluralizations for words can be created by editing the pluralization files via the administrative interface.

By default, the PS is not activated. It is activated by the `symbols` argument of the `T` function. For example:

```
T("Tienes %s %%{libro}", symbols=10)
```

Now the PS is activated for the word "libro" and the number 10. The result in Spanish will be: "Tienes 10 libros". Notice that "libro" has been pluralized into "libros".

The PS consists of 3 parts:

* `%%{}` placeholders to mark words in text processed by `T`
* rules to determine which form is to be used ("rules/plural_rules/*.py")
* a dictionary with the plural forms of words ("app/languages/plural-*.py")

The value of symbols can be a single variable, a list or tuple of variables, or a dictionary.

The `%%{}` placeholder consists of 3 parts:

```
%%{[<modifier>]<word>[<parameter>]},
```

where:

```
<modifier>::= ! | !! | !!!
<word> ::= any word or phrase, singular and lowercase (!)
<parameter> ::= [index] | (key) | (number)
```

* `%%{word}` is equivalent to `%%{word[0]}` (if no modifier is used).
* `%%{word[index]}` is used when symbols is a tuple. symbols[index] gives the number used to decide which form of the word to use.
* `%%{word(key)}` is used to get the numeric parameter from symbols[key]
* `%%{word(number)}` allows setting a `number` directly (e.g.: `%%{word(%i)}`)
* `%%{?word?number}` returns "word" if `number==1`, otherwise it returns `number`
* `%%{?number}` or `%%{??number}` returns `number` if `number!=1`, otherwise it returns nothing

```
T("blabla %s %%{palabra}", symbols=var)
```

Here `%%{palabra}` by default means `%%{palabra[0]}`, where `[0]` is an item index in the symbols tuple.

```
T("blabla %s %s %%{palabra[1]}", (var1, var2))
```

uses the PS for "palabra" and var2 respectively. You can use several `%%{}` placeholders with indexing:

```
T("%%{este} %%{es} %%{el} %s %%{libro}", var)
```

or equivalently

```
T("%%{este[0]} %%{es[0]} %%{el[0]} %s %%{libro[0]}", var)
```

These expressions produce the following output:

```
var output
------------------
1 este es el 1 libro
2 estos son los 2 libros
3 estos son los 3 libros
```

Similarly, you can pass a dictionary in symbols:

```
T("blabla %(var1)s %(ctapalabras)s %%{palabra(ctapalabras)}",
  dict(var1="tututu", ctapalabras=20))
```

which produces

```
blabla tututu 20 palabras
```

You can replace "1" with any word you wish by using the `%%{?palabra?numero}` placeholder. For example:

```
T("%%{este} %%{es} %%{?un?%s} %%{libro}", var)
```

produces:

```
var output
------------------
1 este es un libro
2 estos son 2 libros
3 estos son 3 libros
...
```

Inside `%%{...}` you can also use the following modifiers:

* `!` to capitalize the first letter (equivalent to `string.capitalize`)
* `!!` to capitalize the first letter of each word (equivalent to `string.title`)
* `!!!` to capitalize every letter (equivalent to `string.upper`)

Note that you can use \ to escape `!` and `?`.

# Translations, pluralization and MARKMIN¶

You can also use the powerful MARKMIN syntax inside translated strings by replacing

`T("hola mundo")`

with

`T.M("hola mundo")`

Now the string accepts MARKMIN markup, as described later in this book. You can also use the pluralization system inside MARKMIN.

### Cookies¶

web2py uses the Python cookies modules for handling cookies. Cookies from the browser are in `request.cookies` and cookies sent by the server are in `response.cookies`. You can set cookie values as follows:

```
response.cookies['micookie'] = 'unvalor'
response.cookies['micookie']['expires'] = 24 * 3600
response.cookies['micookie']['path'] = '/'
```

The second line tells the browser to keep the cookie for 24 hours. The third line tells the browser to send the cookie back to any application (URL path) of the current domain. Note that if you do not specify a path for the cookie, the browser will assume the path of the URL that was requested, so the cookie will only be returned to the server when that same URL path is requested.

The cookie can be made secure with:

```
response.cookies['micookie']['secure'] = True
```

This tells the browser to only send the cookie back over HTTPS, not over HTTP.

The cookie can be retrieved with:

```
if request.cookies.has_key('micookie'):
    valor = request.cookies['micookie'].value
```

Unless sessions are disabled, web2py, under the hood, sets the following cookie, which it uses to handle sessions:

```
response.cookies[response.session_id_name] = response.session_id
response.cookies[response.session_id_name]['path'] = "/"
```

Note that if a single application includes multiple subdomains, and you want to share the session across those subdomains (e.g., sub1.tudominio.com, sub2.tudominio.com, etc.), you must explicitly set the domain of the session cookie as follows:

```
if not request.env.remote_addr in ['127.0.0.1', 'localhost']:
    response.cookies[response.session_id_name]['domain'] = ".tudominio.com"
```

The above can be useful if, for example, you want to allow the user to stay logged in across subdomains.

### The init application¶

When you deploy web2py, you will want to set a default application, i.e., the application that starts when there is an empty path in the URL, as in:

```
http://127.0.0.1:8000
```

By default, when confronted with an empty path, web2py looks for an application called init. If there is no init application, it looks for an application called welcome. The name of the default application can be changed from init to another name by setting `default_application` in routes.py:

```
default_application = "miapp"
```

Note that `default_application` first appeared in web2py version 1.83. Here are four ways to set the default application:

* Set the name of your default application to "init".
* Set `default_application` to your application's name in routes.py.
* Make a symlink from "applications/init" to your application's folder.
* Use URL rewriting, as discussed in the next section.

### URL rewriting¶

web2py has the ability to rewrite the URL path of incoming requests prior to calling the controller action (URL mapping), and conversely, web2py can rewrite the URL path generated by the `URL` function (reverse URL mapping). One reason to do this is to handle legacy URLs; another is to simplify paths and make them shorter.

web2py includes two distinct URL rewrite systems: an easy-to-use parameter-based system for most use cases, and a flexible pattern-based system for more complicated cases. To specify the URL rewrite rules, create a new file in the "web2py" folder called `routes.py` (the contents of `routes.py` will depend on which of the two rewrite systems you choose, as described in the following sections). The two systems cannot be mixed.

Notice that if you edit routes.py, you must reload it. This can be done in two ways: by restarting the web server, or by clicking the routes reload button in admin. If there is a bug in routes, it will not reload.

The parameter-based (parametric) router provides easy access to several "canned" methods. Its capabilities include:

* Omitting the default application, controller and function names from externally visible URLs (those created by the URL() function)
* Mapping domains (and/or ports) to application controllers
* Embedding a language selector in the URL
* Removing a fixed prefix from incoming URLs and adding it back to outgoing URLs
* Mapping root files such as /robots.txt to an application's static directory

The parametric router also provides somewhat more flexible validation of incoming URLs.

Suppose you have written an application called `miapp` and you wish to make it the default, so that the application name is no longer part of the URL as seen by the user. Your default controller is still `default`, and you also want to remove its name from user-visible URLs. Here is what you put in `routes.py`:

```
routers = dict(
    BASE = dict(default_application='miapp'),
)
```

That's it. The parametric router is smart enough to do the right thing with URLs such as:

```
http://dominio.com/myapp/default/miapp
```

or

```
http://dominio.com/myapp/miapp/index
```

where normal shortening would be ambiguous. If you have two applications, `miapp` and `miapp2`, you will get the same effect, and additionally `miapp2`'s default controller will be stripped from the URL whenever it is safe (which is most of the time).

Here is another case: suppose you want to support URL-based languages, where your URLs look like this:

```
http://miapp/es/una/ruta
```

or (rewritten)

`http://es/una/ruta`

Here is how:

```
routers = dict(
    BASE = dict(default_application='miapp'),
    miapp = dict(languages=['en', 'it', 'jp'], default_language='es'),
)
```

Now an incoming URL like this:

```
http://dominio.com/it/una/ruta
```

will be routed to `/miapp/una/ruta`, and request.uri_language will be set to 'it', so you can force the translation.
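A one-liner sketch of honoring that selection in a model (assuming the `languages` router configuration shown above):

```
# in a model: force the translation language chosen by the parametric router
if request.uri_language:
    T.force(request.uri_language)
```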
You can also have language-specific static files.

```
http://dominio.com/it/static/archivo
```

maps to:

```
applications/miapp/static/it/archivo
```

if that file exists. If it does not, then URLs like:

```
http://dominio.com/it/static/base.css
```

map to:

```
applications/miapp/static/base.css
```

(because there is no `static/it/base.css`). So you can now have language-specific static files, including images, if you need them.

Domain mapping is also supported:

```
routers = dict(
    BASE = dict(
        domains = {
            'dominio1.com' : 'app1',
            'dominio2.com' : 'app2',
        }
    ),
)
```

which does what you would expect.

```
routers = dict(
    BASE = dict(
        domains = {
            'dominio.com:80' : 'app/insegura',
            'dominio.com:443' : 'app/segura',
        }
    ),
)
```

maps `http://dominio.com` accesses to the controller named `insegura` of the application `app`, while `HTTPS` accesses go to the `segura` controller. Alternatively, you can map different ports to different apps, with a notation analogous to the examples above.

For more information, consult the file `router.example.py` provided in the root folder of the standard web2py distribution.

Note: the parameter-based system is available since web2py version 1.92.1.

Although the parameter-based system just described should be sufficient for most use cases, the alternative pattern-based system provides additional flexibility for more complex cases. To use the pattern-based system, instead of defining routers as dictionaries of routing parameters, you define two lists (or tuples) of 2-tuples, `routes_in` and `routes_out`. Each tuple contains two elements: the pattern to be replaced and the string that replaces it. For example:

```
routes_in = (
    ('/pruebame', '/ejemplos/default/index'),
)
routes_out = (
    ('/ejemplos/default/index', '/pruebame'),
)
```

With these routes, the URL:

```
http://127.0.0.1:8000/pruebame
```

is mapped into:

```
http://127.0.0.1:8000/ejemplos/default/index
```

To the visitor, all links to the page URL look like `/pruebame`.

The patterns have the same syntax as Python regular expressions. For example:

```
('.*.php', '/init/default/index'),
```

maps all the URLs ending in ".php" to the index page. The second term of a rule can also be a redirection to another page:

```
('.*.php', '303->http://ejemplo.com/nuevapagina'),
```

Here 303 is the HTTP status code for the redirect response.

Sometimes you want to get rid of the application prefix from the URLs because you plan to expose only one application. This can be achieved with:

```
routes_in = (
    ('/(?P<any>.*)', '/init/\g<any>'),
)
routes_out = (
    ('/init/(?P<any>.*)', '/\g<any>'),
)
```

There is also an alternative syntax that can be mixed with the regular expression notation above. It consists of using `$nombre` instead of `(?P<nombre>\w+)` or `\g<nombre>`. For example:

```
routes_in = (
    ('/$c/$f', '/init/$c/$f'),
)
routes_out = (
    ('/init/$c/$f', '/$c/$f'),
)
```

would also eliminate the application prefix from all URLs.

Using the `$nombre` notation, you can automatically map `routes_in` to `routes_out`, provided you don't use regular expressions. For example:

```
routes_out = [(x, y) for (y, x) in routes_in]
```

If there are multiple routes, the first route that matches the URL is executed; if no pattern matches, the path is left unchanged.
You can use `$anything` to match anything (`.*`) up to the end of the line.

Here is a minimal "routes.py" for handling favicon and robots requests:

```
routes_in = (
    ('/favicon.ico', '/ejemplos/static/favicon.ico'),
    ('/robots.txt', '/ejemplos/static/robots.txt'),
)
routes_out = ()
```

Here is a more complex example that exposes a single app, "miapp", without unnecessary prefixes, but also exposes admin, appadmin and static:

```
routes_in = (
    ('/admin/$anything', '/admin/$anything'),
    ('/static/$anything', '/miapp/static/$anything'),
    ('/appadmin/$anything', '/miapp/appadmin/$anything'),
    ('/favicon.ico', '/miapp/static/favicon.ico'),
    ('/robots.txt', '/miapp/static/robots.txt'),
)
routes_out = [(x, y) for (y, x) in routes_in[:-2]]
```

The general syntax for routes is more complex than the simple examples we have seen so far. Here is a more general and representative example:

```
routes_in = (
    ('140.191.\d+.\d+:https?://www.web2py.com:post /(?P<any>.*).php',
     '/prueba/default/index?vars=\g<any>'),
)
```

It maps `http` or `https` `POST` requests (note the lowercase "post") to the host `www.web2py.com` from a remote IP matching the regular expression `'140.191.\d+.\d+'`, requesting a page matching the regular expression `'/(?P<any>.*).php'`, into `'/prueba/default/index?vars=\g<any>'`, where `\g<any>` is replaced by the matching part of the regular expression.

The general syntax is:

```
'[remote address]:[protocol]://[host]:[method] [path]'
```

If the first section of the pattern (everything except `[path]`) is missing, web2py provides a default:

```
'.*?:https?://[^:/]+:[a-z]+'
```

The entire expression is matched as a regular expression, so "." should be escaped, and any matching subexpression can be captured using `(?P<...>...)` according to Python regex notation. The request method (typically GET or POST) must be lowercase. Any `%xx`-quoted sequences in the URL being matched are unquoted beforehand.

This allows rerouting requests based on the client IP or domain, on the type of request, on the method, and on the path. It also allows web2py to map different virtual hosts to different applications. Any matching subexpression can be used to build the target URL and, eventually, passed as a GET variable.

All major web servers, such as Apache and lighttpd, also have the ability to rewrite URLs. In a production environment that may be an option instead of `routes.py`. Whatever you decide to do, we strongly suggest that you do not hardcode internal URLs in your app, and that you use the URL function to generate them. This will make your application more portable in case routes change.

# Application-specific URL rewriting¶

When using the pattern-based system, an application can set its own routes in an application-specific routes.py file located in the application's base folder. This is enabled by configuring `routes_app` in the base routes.py to determine, from an incoming URL, the name of the application to be selected. When this happens, the application-specific routes.py is used in place of the base routes.py. The format of `routes_app` is identical to `routes_in`, except that the replacement pattern is simply the application name.
If applying `routes_app` to the incoming URL does not yield an application name, or if the application-specific routes.py is not found, the base routes.py is used as usual.

Note: `routes_app` was added in web2py version 1.83.

# Default application, controller, and function¶

When using the pattern-based system, the names of the default application, controller, and function can be changed from init, default, and index respectively to other names by setting the appropriate value in routes.py:

```
default_application = "miapp"
default_controller = "admin"
default_function = "comienzo"
```

Note: these items first appeared in version 1.83.

# Routing on error¶

You can also use `routes.py` to reroute requests to special actions in case there is an error on the server. You can specify the mapping globally, per application, per error code, or per application and error code. Here is an example:

```
routes_onerror = [
    ('init/400', '/init/default/login'),
    ('init/*', '/init/static/falla.html'),
    ('*/404', '/init/static/noseencuentra.html'),
    ('*/*', '/init/error/index')
]
```

For each tuple, the first string is matched against "[app name]/[error code]". If a match is found, the failed request is rerouted to the URL in the second string of the matching tuple. If the error-handling URL is not a static file, the following GET variables will be passed to the error action:

* `code`: the HTTP status code (e.g., 404, 500)
* `ticket`: in the form "[app name]/[ticket number]" (or "None" if there is no ticket)
* `requested_uri`: the equivalent of `request.env.request_uri`
* `request_url`: the equivalent of `request.url`

These variables are accessible to the error-handling action via `request.vars` and can be used to generate the error response. In particular, it is a good idea for the error action to return the original HTTP error code instead of the default 200 (OK) status code. This can be done by setting

```
response.status = request.vars.code
```

It is also possible to have the error action send (or queue) an email to an administrator, including a link to the ticket in `admin`.

Unmatched errors display a default error page. This default error page can also be customized here (see `router.example.py` and `routes.example.py` in the web2py root folder):

```
error_message = '<html><body><h1>%s</h1></body></html>'
error_message_ticket = '''<html><body><h1>Error interno</h1>
Ticket creado: <a href="/admin/default/ticket/%(ticket)s"
target="_blank">%(ticket)s</a></body></html>'''
```

The first variable contains the error message shown when an invalid application or function is requested. The second variable contains the error message shown when a ticket is issued.

`routes_onerror` works with both routing mechanisms.

In "routes.py" you can also specify an action in charge of error handling:

```
error_handler = dict(application='error',
                     controller='default',
                     function='index')
```

If the `error_handler` is specified, the action is called without redirecting the user, and the handler action will be responsible for dealing with the error. In case the error-handling page itself returns an error, web2py will fall back to its original behavior of returning static responses.

# Static asset management¶

Since version 2.1.0, web2py has the ability to manage static assets.
# Static asset management¶

Since version 2.1.0, web2py has the ability to manage static assets.

When an application is under development, static files often change, so web2py serves static files without cache headers. This has the side effect of "forcing" the browser to request static files at every page load. The result is poor page-load performance.

On a "production" site, you may wish to serve static files with `cache` headers to prevent unnecessary downloads, since static files do not change often. The `cache` headers allow the browser to fetch each file only once, saving bandwidth and reducing loading time.

There is a problem, though: what should the cache headers declare? When should the files expire so the download is skipped? When files are served for the first time, the server cannot foresee when they will change.

A manual approach consists of creating subfolders for the different versions of static files. For example, an early version of "layout.css" can be made available at the URL "/miapp/static/css/1.2.3/layout.css". When you change the file, you create a new subfolder and link it as "/miapp/static/css/1.2.4/layout.css".

This procedure works, but it is tedious: every time you update a css file you must remember to copy it to another folder, change the file's URL in your layout.html, and deploy.

Static asset management solves the problem by allowing the developer to declare a version for a group of static files; they will be requested again only when the version number changes. The version number becomes part of the static file's path, as in the previous example. The difference from the previous method is that the version number appears only in the URL, not in the file system.

If you want to serve "/miapp/static/layout.css" with cache headers, you just need to include the file with a different URL that embeds the version number:

```
/miapp/static/_1.2.3/layout.css
```

(Note that the version number exists only in the URL; it does not appear anywhere else.)

Observe that the URL starts with "/miapp/static/", followed by a version number made up of an underscore and 3 integers separated by dots (as in SemVer), followed by the file name. Also note that you do NOT have to create a "_1.2.3/" folder.

Every time the static file is requested with a version in the URL, it is served with a far-in-the-future cache header, specifically:

```
Cache-Control : max-age=315360000
Expires: Thu, 31 Dec 2037 23:59:59 GMT
```

This means that the browser will fetch those files only once, and they will be stored (practically forever) in the browser's cache. Whenever you change the version number in the URL, the browser is tricked into thinking it is requesting a different file, and the file is downloaded again. You can use "_1.2.3", "_0.0.0", "_999.888.888", as long as the version starts with an underscore followed by three numbers separated by dots.

During development, you can use

```
response.files.append(...)
```

to link static file URLs. In that case you can include the "_1.2.3/" part by hand, or you can take advantage of a parameter of the response object:

```
response.static_version
```
Just include the static files in the usual way, e.g.

```
{{response.files.append(URL('static','layout.css'))}}
```

and in a model set

```
response.static_version = '1.2.3'
```

This automatically translates every "/miapp/static/layout.css" URL into "/miapp/static/_1.2.3/layout.css", for every file included in `response.files` .

In production you often choose to serve static files via the web server (apache, nginx, etc.). You must adjust the configuration so that the "_1.2.3/" part is stripped out. For example, for Apache, change this:

```
AliasMatch ^/([^/]+)/static/(.*) /home/www-data/web2py/applications/$1/static/$2
```

to this:

```
AliasMatch ^/([^/]+)/static/(?:/_[\d]+.[\d]+.[\d]+)?(.*) /home/www-data/web2py/applications/$1/static/$2
```

Similarly, for Nginx, change this:

```
location ~* /(\w+)/static/ {
    root /home/www-data/web2py/applications/;
    expires max;
}
```

to this:

```
location ~* /(\w+)/static(?:/_[\d]+.[\d]+.[\d]+)?/(.*)$ {
    alias /home/www-data/web2py/applications/$1/static/$2;
    expires max;
}
```

### Running tasks in the background¶

In web2py, every http request is served in its own thread. Threads are recycled for efficiency and managed by the web server. For safety, the web server sets a time limit on each request. This means that actions should not run long tasks, should not create new threads, and should not fork other processes (it is possible but not recommended).

The proper way to run time-consuming tasks is in the background. There is no single way of doing so, but here we describe three mechanisms that are built into web2py: cron, homemade task queues, and the scheduler.

By cron we refer to a web2py functionality, not the Unix Cron mechanism. web2py cron works on Windows too.

web2py cron is the way to go if you need background tasks at scheduled times and these tasks take a relatively short time compared to the interval between two calls. Each task runs in its own process, and multiple tasks can run concurrently, but you have no control over how many tasks run. If by accident a task overlaps with itself, it can cause a database deadlock and a spike in memory usage.

The web2py scheduler takes a different approach. The number of running processes is fixed, and they can run on different machines. Each process is called a worker. Each worker picks up a task when one is available and executes it as soon as possible after the scheduled time, but not necessarily at that exact time. There can be no more processes running than the number of scheduled tasks, and therefore there are no memory spikes. Scheduler tasks can be defined in models and are stored in the database. The web2py scheduler does not implement a distributed queue, since it assumes that the time to distribute tasks is negligible compared to the time to run them. Workers pick up tasks from the database.

Homemade task queues can be a simpler alternative to the scheduler in some cases.
# Cron¶

web2py cron gives applications the ability to run tasks at preset times, in a platform-independent way. For each application, cron functionality is defined by a crontab file: `app/cron/crontab`

It follows the syntax defined in ref. [cron] (with some web2py-specific extensions).

Before web2py 2.1.1, cron was enabled by default and could be disabled with a command line option. Since 2.1.1 it is disabled by default and can be enabled with the `-Y` option. This change was motivated by the desire to push users toward the new scheduler (whose mechanism is more advanced than cron's) and also because cron may impact performance.

This means that every application can have a separate cron configuration, and that this configuration can be changed from within web2py without affecting the host operating system itself.

Here is an example:

```
0-59/1 * * * * root python /path/to/python/script.py
30 3 * * * root *applications/admin/cron/limpieza_db.py
*/30 * * * * root **applications/admin/cron/algo.py
@reboot root *mycontroller/mifuncion
@hourly root *applications/admin/cron/expire_sessions.py
```

The last two lines in this example use extensions to the regular cron syntax to provide additional web2py functionality.

The file "applications/admin/cron/expire_sessions.py" actually exists and ships with the admin app. It looks for expired sessions and deletes them. "applications/admin/cron/crontab" runs this task hourly.

If the task/script is prefixed with an asterisk ( `*` ) and ends with `.py` , it is executed in the web2py environment. This means you have all controllers and models at your disposal. If you use two asterisks ( `**` ), the models are not executed. This is the recommended way of calling, since it has less overhead and avoids potential locking problems.

Note that scripts or functions executed in the web2py environment require a manual `db.commit()` at the end of the function, or the transaction is rolled back.

web2py does not generate tickets or meaningful tracebacks in shell mode, which is how cron runs, so make sure that your web2py code runs without errors before setting it up as a cron task, since you will likely not be able to see those errors when they occur under cron. Moreover, be careful how you use models: while the execution happens in a separate process, database locks have to be taken into account, to avoid pages waiting for cron tasks that may be locking the database. Use the `**` syntax if you don't need database access in your cron task.

You can also call a controller function from cron, in which case there is no need to specify a path. The controller and function are those of the invoking application. Take special care regarding the caveats listed above. Example:

```
*/30 * * * * root *micontrolador/mifuncion
```
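As a hedged sketch of such a `*`-prefixed cron script (the path, table, and field names are made up for illustration), note the mandatory `db.commit()` at the end:

```python
# applications/miapp/cron/limpiar.py (illustrative path)
# run via crontab with the '*' prefix, so db and the models are available
import datetime

limite = datetime.datetime.now() - datetime.timedelta(days=30)
db(db.entrada_vieja.creado < limite).delete()
db.commit()  # required: without it the transaction is rolled back
```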
If you specify `@reboot` in the first field of the crontab file, the given task is executed only once, at web2py startup. You can use this feature to pre-cache, check, or initialize data for an application at startup. Note that cron tasks run in parallel with the application --- if the application is not ready to serve requests until the cron task has finished, you should implement the appropriate checks. Example:

```
@reboot * * * * root *mycontroller/mifuncion
```

Depending on how you are running web2py, there are four modes of operation for web2py cron:

* "soft cron": available in all execution modes
* "hard cron": available when using the built-in web server (either directly or via Apache mod_proxy)
* "external cron": available if you have access to the system's own cron service
* No cron

The default is hard cron if you are using the built-in web server; in all other cases the default is soft cron. Soft cron is the default under CGI, FASTCGI or WSGI (but note that soft cron is not enabled by default in the `wsgihandler.py` file shipped with web2py).

With soft cron your tasks are executed on the first call (page load) to web2py after the time specified in the crontab, but only after the page has been processed, so the user observes no delay. Obviously, there is some uncertainty about exactly when the task gets executed, depending on the traffic the site receives. Also, a cron task may get interrupted if the web server has a page-load timeout. If these limitations are not acceptable, use external cron. Soft cron is a reasonable last resort, but if your web server allows other cron methods, they should be preferred.

Hard cron is the default when using the built-in web server (either directly or via Apache mod_proxy). Hard cron runs in a parallel thread, so unlike soft cron there are no limitations with regard to timing precision or task duration.

External cron is not the default in any scenario, and requires you to have access to the system's cron services. It runs in a parallel process, so none of the soft cron limitations apply. This is the recommended way of using cron under WSGI or FASTCGI.

Here is an example of a line to add to the system crontab (usually /etc/crontab):

```
0-59/1 * * * * web2py cd /var/www/web2py/ && python web2py.py -J -C -D 1 >> /tmp/cron.output 2>&1
```

If you run external `cron` , make sure you add `-J` (or `--cronjob` , which is the same) as indicated above, so that web2py knows the task is being executed by cron. web2py sets this internally with soft and hard `cron` .

# Homemade task queues¶

While cron is useful to run tasks at regular time intervals, it is not always the best solution for running a background task. For this purpose web2py provides the ability to run any Python script as if it were inside a controller:

```
python web2py.py -S app -M -R applications/app/private/myscript.py -A a b c
```

where `-S app` tells web2py to run "myscript.py" as "app", `-M` tells web2py to execute the models, and `-A a b c` passes optional command line arguments

```
sys.args=['a', 'b', 'c']
```

to "myscript.py".

This type of background process should not be run via cron (except perhaps for cron @reboot) because you need to be sure that no more than one instance is running at the same time. With cron it is possible that a process starts at iteration 1 and is not complete by iteration 2, so cron starts it again, and again, and again - thus jamming the server.
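A minimal sketch of such a background script follows (the `queue` table and the `procesar` function are illustrative assumptions, not part of web2py):

```python
# applications/app/private/myscript.py
# run with: python web2py.py -S app -M -R applications/app/private/myscript.py
import time

while True:
    # pick up pending tasks from a hypothetical queue table
    for row in db(db.queue.status == 'pending').select():
        procesar(row)  # assumed application-defined function
        row.update_record(status='completed')
    db.commit()     # commit explicitly, since we are outside a normal request
    time.sleep(60)  # poll for new tasks once a minute
```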
Chapter 8 gives an example of how to use the above method to send emails.

# Scheduler (experimental)¶

The web2py scheduler works very much like the task queue described in the previous subsection, with some differences:

* It provides a standard mechanism for creating, scheduling, and monitoring tasks.
* There is no single background process but rather a set of worker processes.
* A worker's job can be monitored because its state, as well as the state of each task, is stored in the database.
* It works without web2py, although that is not documented here.

The scheduler does not use cron, although one can use cron @reboot to start the worker nodes. Instructions for deploying the scheduler under Linux or Windows can be found in the deployment recipes chapter.

In the scheduler, a task is simply a function defined in a model (or in a module and imported into a model). For example:

```
def tarea_sumar(a,b):
    return a+b
```

Tasks are always called in the same environment set up for controllers, so they see all the global variables defined in models, including database connections ( `db` ). Tasks differ from controller actions in that they are not associated with an HTTP request, so there is no `request.env` . Remember to call `db.commit()` at the end of every task if it modifies the database. web2py commits by default at the end of an action, but scheduler tasks are not actions.

To enable the scheduler you must instantiate the Scheduler class in a model. The recommended way to enable the scheduler for your application is to create a model file named `scheduler.py` and define your functions there. After the functions, you can put the following code in the model:

```
from gluon.scheduler import Scheduler
planificador = Scheduler(db)
```

If your tasks are defined in a module (as opposed to a model), you may need to restart the workers.

A task is scheduled with

```
planificador.queue_task(tarea_sumar, pvars=dict(a=1, b=2))
```
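Putting the pieces above together, a minimal `models/scheduler.py` might look like the following sketch (the task body is illustrative):

```python
# models/scheduler.py
from gluon.scheduler import Scheduler

def tarea_sumar(a, b):
    # tasks see everything defined in the models, including db
    return a + b

planificador = Scheduler(db)

# queue one execution; in practice you would normally do this from a
# controller or from appadmin, since model files run at every request:
# planificador.queue_task(tarea_sumar, pvars=dict(a=1, b=2))
```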
# Parameters¶

The first argument of the `Scheduler` class must be the database the scheduler uses to communicate with the workers. This can be the app's `db` or another dedicated `db` , perhaps one shared by multiple applications. If you use SQLite, it is recommended that you use a database separate from the one used by your app, so that the app stays responsive.

Once the tasks have been defined and the `Scheduler` instantiated, all that remains is to start the workers. You can do so in several ways:

```
python web2py.py -K miapp
```

starts a worker for the app `miapp` . If you want to start multiple workers for the same app, you can do so just by passing `miapp,miapp` as arguments. You can also pass the `group_names` argument (overriding the one set in your model) with

```
python web2py.py -K miapp:grupo1:grupo2,miotraapp:grupo1
```

If you have a model called `scheduler.py` , you can start or stop the workers from the default web2py window (the one you use to set the ip address and the port).

One more nice addition: if you use the built-in web server, you can start it together with the scheduler with just one line of code (this assumes you don't want the web2py window popping up, otherwise you can use the "Schedulers" menu):

```
python web2py.py -a contraseña -K miapp -X
```

You can pass the usual parameters (-i, -p, here -a prevents the window from showing up), pass whatever app in the -K parameter, and append a -X. The scheduler will run alongside the web server!

The complete list of arguments accepted by the scheduler is:

```
Scheduler(
    db,
    tasks=None,
    migrate=True,
    worker_name=None,
    group_names=None,
    heartbeat=HEARTBEAT,
    max_empty_runs=0,
    discard_results=False,
    utc_time=False
)
```

Let's see them in order:

* `db` is the DAL database instance where the scheduler tables will be created.
* `tasks` is a dictionary that maps names to functions. If you do not pass this parameter, functions are looked up in the application environment.
* `worker_name` is None by default. As soon as a worker is started, a worker name is generated in the form hostname-uuid. If you want to specify it yourself, make sure it is unique.
* `group_names` defaults to [main]. All tasks have a `group_name` parameter, which defaults to main. Workers can only pick up tasks from their assigned groups. Note: this is useful if you have different worker instances (e.g. on different machines) and you want to assign tasks to a specific worker. Another note: it is possible to assign a worker more groups, and they can even all be the same, as in `['migrupo', 'migrupo']` . Tasks are distributed taking into account that a worker with groups `['migrupo', 'migrupo']` can process twice as many tasks as a worker with groups `['migrupo']` .
* `heartbeat` defaults to 3 seconds. This parameter controls how often a worker checks in on the `scheduler_worker` table and looks for tasks ASSIGNED to it that are pending processing.
* `max_empty_runs` defaults to 0, meaning that the worker keeps processing tasks as long as they get ASSIGNED. If you set it to, say, 10, a worker terminates itself instantly if it is ACTIVE and there have been no ASSIGNED tasks for it for 10 loops. A loop is one task-lookup cycle of a worker, which happens every 3 seconds (or whatever `heartbeat` is set to).
* `discard_results` defaults to False. If set to True, no scheduler_run records are created. Note: scheduler_run records are still created, as before, for FAILED, TIMEOUT, and STOPPED tasks.
* `utc_time` defaults to False. If you need to coordinate workers living in different timezones, or have issues with solar/DST time or with datetimes coming from different countries, etc., you can set it to True. The scheduler honors UTC time and works leaving local time aside.
There is one caveat: you must schedule tasks with UTC times (for the `start_time` , `stop_time` , and similar parameters).

Now we have the infrastructure in place: we defined the tasks, told the scheduler about them, and started the worker(s). What remains is to actually schedule the tasks.

# Tasks¶

Tasks can be scheduled programmatically or via appadmin. In fact, a task is scheduled simply by adding an entry in the "scheduler_task" table, which you can access via appadmin:

```
http://127.0.0.1:8000/miapp/appadmin/insert/db/scheduler_task
```

The meaning of the fields in this table is obvious. The "args" and "vars" fields are the values to be passed to the task, in JSON format. In the case of "tarea_sumar" above, examples of "args" and "vars" could be:

```
args = [3, 4]
vars = {}
```

or

```
args = []
vars = {'a':3, 'b':4}
```

The `scheduler_task` table is where tasks are organized. All tasks follow a lifecycle. By default, when you send a task to the scheduler, it is in the QUEUED status. If you need it to be executed later, use the `start_time` parameter (default = now). If for some reason you need to be sure that the task is not executed after a certain point in time (perhaps a request to a web service that shuts down at 1AM, an email that must not be sent after working hours, etc...), you can set a `stop_time` (default = None) for it. If your task is NOT picked up by a worker before `stop_time` , it is set to EXPIRED. Tasks with no `stop_time` set, or picked up before `stop_time` , are ASSIGNED to a worker. When a worker picks up a task, its status is set to RUNNING.

Running tasks can end up in one of the following statuses:

* TIMEOUT when more than `n` seconds passed, as set by the `timeout` parameter (default 60 seconds).
* FAILED when an exception is raised.
* COMPLETED when they complete successfully.

Values for `start_time` and `stop_time` should be datetime objects. To schedule "mitarea" to run 30 seconds from the current time, for example, you would do the following:

```
from datetime import timedelta as timed
planificador.queue_task('mitarea',
    start_time=request.now + timed(seconds=30))
```

Additionally, you can control how many times a task should be repeated (e.g. you may need to aggregate some data at specified intervals). To do so, set the `repeats` parameter (default = 1, i.e. run once; set it to 0 for unlimited repeats). You can specify how many seconds should pass between executions with the `period` parameter (default = 60 seconds).

The period is not calculated between the END of the first run and the START of the next; it is calculated from the START time of the first run to the START time of the next cycle.

You can also set how many times a task can raise an exception (e.g. when fetching data from a slow web service) and be re-queued, instead of stopping in FAILED status, using the `retry_failed` parameter (default = 0; use -1 to retry forever).
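Putting these parameters together, a periodic task that tolerates a couple of transient failures might be queued like this (a sketch; `recolectar_datos` is an assumed task name):

```python
planificador.queue_task('recolectar_datos',
    start_time=request.now,  # first run as soon as possible
    repeats=0,               # repeat forever
    period=600,              # start-to-start interval: 10 minutes
    timeout=120,             # each run must finish within 2 minutes
    retry_failed=2)          # re-queue up to 2 times on exception
```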
Summing up, you have:

* `period` and `repeats` to automatically reschedule a function
* `timeout` to make sure a function does not exceed a certain amount of running time
* `retry_failed` to control how many times a task may fail
* `start_time` and `stop_time` to schedule a function within a restricted time frame

`queue_task` and `task_status` ¶

The method:

```
scheduler.queue_task(function, pargs=[], pvars={}, **kwargs)
```

lets you queue tasks to be executed by workers. It accepts the following parameters:

* `function` (required): it can be a task name or a reference to the function itself.
* `pargs` : the positional arguments to be passed to the task, stored as a Python list.
* `pvars` : the named arguments to be passed to the task, stored as a Python dictionary.
* `kwargs` : any other scheduler_task column can be passed as a named argument (e.g. repeats, period, timeout).

For example:

```
scheduler.queue_task('demo1', [1, 2])
```

does exactly the same as

```
scheduler.queue_task('demo1', pvars={'a':1, 'b':2})
```

and as

```
st.validate_and_insert(function_name='demo1', args=json.dumps([1, 2]))
```

and as:

```
st.validate_and_insert(function_name='demo1', vars=json.dumps({'a':1, 'b':2}))
```

Here is a more complex, complete example:

```
def tarea_sumar(a, b):
    return a + b

planificador = Scheduler(db, tasks=dict(demo1=tarea_sumar))

planificador.queue_task('demo1', pvars=dict(a=1, b=2), repeats=0, period=180)
```

Since version 2.4.1, passing the additional parameter `immediate=True` makes the main worker reassign tasks right away. Before 2.4.1, workers checked for new tasks every 5 cycles (i.e. every `5*heartbeat` seconds). If you had an app that needed to check frequently for new tasks, the only way to get snappy behavior was to lower the `heartbeat` parameter, putting the database under pressure for no good reason. With `immediate=True` you can force the check for new tasks: it happens at most `heartbeat` seconds later.

A call to

```
planificador.queue_task
```

returns the id and the `uuid` of the task you queued (either the one you passed or the auto-generated one), plus any possible `errors` :

```
<Row {'errors': {}, 'id': 1, 'uuid': '08e6433a-cf07-4cea-a4cb-01f16ae5f414'}>
```

If there are errors (usually syntax errors or input validation errors), you get the result of the validation, and id and uuid are None:

```
<Row {'errors': {'period': 'ingresa un entero mayor o igual a 0'}, 'id': None, 'uuid': None}>
```
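The heading above also mentions `task_status` , which the text does not describe; as a hedged sketch based on the scheduler API of recent web2py versions, checking a queued task later could look roughly like this:

```python
resultado = planificador.queue_task('demo1', pvars=dict(a=1, b=2))
if resultado.errors:
    print resultado.errors  # validation failed: id and uuid are None
else:
    # look the task up again by id (a uuid or a query also works);
    # output=True fetches the matching scheduler_run record too
    estado = planificador.task_status(resultado.id, output=True)
    print estado.scheduler_task.status
```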
# Output and results¶

The "scheduler_run" table stores the status of all running tasks. Each record references a task that has been picked up by a worker. One task can have many runs. For example, a task scheduled to repeat 10 times an hour will probably have 10 runs (unless one fails or they take longer than 1 hour in total). Note that if the task has no return values, it is removed from the scheduler_run table as soon as it finishes.

The possible run statuses are:

```
RUNNING, COMPLETED, FAILED, TIMEOUT
```

If the run completed, no exceptions were raised, and the task did not time out, the run is marked `COMPLETED` and the task is marked `QUEUED` or `COMPLETED` , depending on whether it is supposed to run again later.

The task's output is serialized as JSON and stored in the run record. When a `RUNNING` task raises an exception, the run and the task are both marked `FAILED` . The traceback is stored in the run record. Similarly, when a run exceeds the timeout, it is stopped, and both the run and the task are marked `TIMEOUT` . In any case, stdout is captured and also stored in the run record.

Using appadmin, one can check all `RUNNING` tasks, the output of `COMPLETED` tasks, the errors of `FAILED` tasks, and so on.

The scheduler also creates one more table, "scheduler_worker", which stores the workers' heartbeats and their statuses.

# Managing processes¶

Fine-grained worker management is hard. This module tries not to leave any platform (Mac, Win, Linux) behind. When you start a worker, you may later need to:

* kill it "no matter what it's doing"
* kill it only if it is not processing tasks
* put it to sleep

Maybe you still have some tasks queued, and you want to save resources. You know you want them processed every hour, so you'll want to:

* process all the queued tasks and then die automatically

All these things are possible by managing `Scheduler` parameters or the `scheduler_worker` table. To be more precise, for started workers you can change the status value of any of them to influence its behavior. As with tasks, workers can be in one of the following statuses: ACTIVE, DISABLED, TERMINATE, or KILL. ACTIVE and DISABLED are "persistent", while TERMINATE and KILL, as the names suggest, are more "commands" than statuses. Hitting ctrl+c is equivalent to setting a worker to KILL.

Since version 2.4.1 there are a few convenience functions (which need no further explanation):

```
scheduler.disable()    # deshabilitar
scheduler.resume()     # continuar
scheduler.terminate()  # finalizar
scheduler.kill()       # matar
```

Each function takes an optional parameter, a string or a list, to manage workers based on their `group_names` . It defaults to the `group_names` defined when the scheduler instance was created.

An example is better than a hundred words: `scheduler.terminate('alta_prioridad')` will TERMINATE all the workers processing `alta_prioridad` tasks, while

```
scheduler.terminate(['alta_prioridad', 'baja_prioridad'])
```

will terminate all `alta_prioridad` and `baja_prioridad` workers. Watch out: if you have a worker processing both `alta_prioridad` and `baja_prioridad` , terminating it terminates the worker altogether, even if you did not want to terminate the `baja_prioridad` tasks.

Everything that can be done via appadmin can also be done by inserting or updating records in these tables. That said, one should not update records relating to `RUNNING` tasks, as this may create unexpected behavior. The best practice is to queue tasks using the "queue_task" method.
For example:

```
scheduler.queue_task('tarea_sumar',
    pargs=[],
    pvars={'a':3, 'b':4},
    repeats = 10,  # correr 10 veces
    period = 3600, # cada 1 hora
    timeout = 120, # debería tomar menos de 120 segundos
)
```

Note that the "times_run", "last_run_time" and "assigned_worker_name" fields are not specified at schedule time; they are filled in automatically by the workers.

You can also retrieve the output of completed tasks:

```
ejecuciones_finalizadas = db(db.scheduler_run.run_status == 'COMPLETED').select()
```

The scheduler is considered experimental because it needs more extensive testing and because the table structure may change as more features are added.

# Reporting percentages¶

There is a special "word" for print statements in your functions that clears all previous output. That word is `!clear!` . This, combined with the `sync_output` parameter, allows you to report percentages. Here is an example:

```
def informe_de_porcentajes():
    time.sleep(5)
    print '50%'
    time.sleep(5)
    print '!clear!100%'
    return 1
```

The `informe_de_porcentajes` function sleeps for 5 seconds, then outputs `50%` . It then sleeps for another 5 seconds and outputs `100%` . Note that the output in the scheduler_run table is synchronized every 2 seconds, and that the second print statement, which contains `!clear!100%` , makes the `50%` get cleared and replaced by `100%` :

```
scheduler.queue_task(informe_de_porcentajes, sync_output=2)
```

### Third party modules¶

web2py is written in Python, so it can import and use any Python module, including third party modules. It just needs to be able to find them. As with any Python application, modules can be installed in the official Python "site-packages" directory, and they can then be imported from anywhere in your code.

Modules in the "site-packages" directory are, as the name suggests, system-level packages. Applications requiring such packages are not portable unless the modules are installed separately. The advantage of using "site-packages" modules is that multiple applications can share them. Let's consider, for example, the plotting package called "matplotlib". You can install it from the shell using the PEAK `easy_install` command [easy-install] (or its more modern replacement `pip` [PIP]):

```
easy_install py-matplotlib
```

and then you can import it into any model/controller/view with:

`import matplotlib`

The web2py source distribution and the Windows binary distribution have a site-packages folder in the root folder. The Mac binary distribution has a site-packages folder at the path:

```
web2py.app/Contents/Resources/site-packages
```

The problem with using site-packages is that it becomes difficult to use different versions of a single module at the same time, e.g. two applications might use different versions of the same file. In this example, `sys.path` cannot be altered because doing so would affect both applications.

For this kind of situation, web2py provides another way to import modules without altering the global `sys.path` : placing them in the "modules" folder of an application. One side benefit of this technique is that the module is automatically copied and distributed with the application.
Once a module "mimodulo.py" is placed in the "modules/" folder of an app, it can be imported from anywhere inside a web2py application (without altering `sys.path` ) with `import mimodulo` .

### Execution environment¶

Although everything discussed here works fine, we recommend instead building your application using components, as described in chapter 12.

web2py model and controller files are not Python modules in that they cannot be imported using the Python `import` statement. The reason is that models and controllers are designed to be executed in a prepared environment that has been pre-populated with the web2py global objects (request, response, session, cache and T) and helper functions. This is necessary because Python is a statically (lexically) scoped language, whereas the web2py environment is created dynamically.

web2py provides the `exec_environment` function to let you access models and controllers directly. `exec_environment` creates a web2py execution environment, loads the file into it, and then returns a Storage object containing the environment. The Storage object also serves as a namespacing mechanism. Any Python file designed to run in the execution environment can be loaded using `exec_environment` . Uses for `exec_environment` include:

* Accessing data (models) from other applications.
* Accessing global objects from other models or controllers.
* Executing functions from other controllers.
* Loading site-wide helper libraries.

This example reads rows from the `user` table in the `cas` application:

```
from gluon.shell import exec_environment
cas = exec_environment('applications/cas/models/db.py')
registros = cas.db().select(cas.db.user.ALL)
```

Another example: suppose you have a controller "otro.py" that contains:

```
def una_accion():
    return dict(direccion_remota=request.env.remote_addr)
```

Here is how you can call this action from another action (or from the web2py shell):

```
from gluon.shell import exec_environment
otro = exec_environment('applications/app/controllers/otro.py', request=request)
resultado = otro.una_accion()
```

In line 2, `request=request` is optional. It has the effect of passing the current request to the environment of "otro". Without this argument, the environment would contain a new and empty request object (apart from `request.folder` ). It is also possible to pass a response and a session object to `exec_environment` . Be careful when passing request, response and session objects --- modifications by the called action, or coding dependencies in the called action, could lead to unexpected side effects.

The function call in line 3 does not execute the view; it simply returns the dictionary, unless `response.render` is called explicitly by "una_accion".

One final caution: don't use `exec_environment` inappropriately. If you want the results of actions in another application, you should probably implement an XML-RPC API (implementing an XML-RPC API with web2py is almost trivial). Don't use `exec_environment` as a redirection mechanism; use the `redirect` helper instead.

### Cooperation¶

There are many ways applications can cooperate:

* Applications can connect to the same database and thus share tables.
Not all tables in the database need to be defined by every application, only by the applications that use them. All but one of the applications that use the same table must define it with `migrate=False` .

* Applications can embed components from other applications using the LOAD helper (described in chapter 12).
* Applications can share sessions.
* Applications can call each other's actions remotely via XML-RPC.
* Applications can access each other's files via the filesystem (assuming they share the filesystem).
* Applications can call each other's actions locally using `exec_environment` as discussed above.
* Applications can import each other's modules using the syntax:

```
from applications.nombreapp.modules import mimodulo
```

* Applications can import any module on the `PYTHONPATH` and `sys.path` search paths.
* One app can load the session of another app using the command:

```
session.connect(request, response, masterapp='nombreapp', db=db)
```

Here "nombreapp" is the name of the master application, the one that sets the initial session_id in the cookie. `db` is a database connection to the database that contains the session table ( `web2py_session` ). All apps that share sessions must use the same database for session storage.

* An application can load a module from another app using:

```
import applications.otraapp.modules.otromodulo
```

### Logging¶

Python provides various logging APIs. web2py provides a mechanism to configure logging so that apps can use it.

In your application, you can create a logger, for example in a model:

```
import logging
logger = logging.getLogger("web2py.app.miapp")
logger.setLevel(logging.DEBUG)
```

and you can use it to log messages of varying importance:

```
logger.debug("Sólo comprobando que %s" % detalles)
logger.info("Deberías saber que %s" % detalles)
logger.warn("Cuidado que %s" % detalles)
logger.error("Epa, algo malo ha ocurrido %s" % detalles)
```

`logging` is a standard Python module, described here:

```
http://docs.python.org/library/logging.html
```

The string "web2py.app.miapp" defines an app-level logger. For this to work properly, you need a configuration file for the logger. One is provided with web2py in the root folder, "logging.example.conf". You need to rename the file "logging.conf" and customize it as necessary. The file contains its own documentation, so you should open and read it.

To create a configurable logger for the application "miapp", you must add miapp to the [loggers] keys list:

```
[loggers]
keys=root,rocket,markdown,web2py,rewrite,app,welcome,miapp
```

and you must add a [logger_miapp] section, using [logger_welcome] as a starting point:

```
[logger_miapp]
level=WARNING
qualname=web2py.app.miapp
handlers=consoleHandler
propagate=0
```

The "handlers" directive specifies the type of logging; in this example, logging output for miapp goes to the console.

### WSGI¶

web2py and WSGI have a love-hate relationship.
Our position is that WSGI was developed as a protocol to connect web servers to web applications in a portable way, and we use it for that purpose. web2py at its core is a WSGI application: `gluon.main.wsgibase` . Some developers have pushed WSGI to its limits as a protocol for middleware communications and develop web applications like an onion, with many layers (each layer being a middleware developed independently of the whole framework). web2py does not adopt this structure internally. This is because we feel the core functionality of a framework (handling cookies, sessions, errors, transactions, URL dispatching) can be better optimized for speed and security if handled by a single comprehensive layer.

Still, web2py allows you to use third party WSGI applications and middleware in three ways (and their combinations):

* You can edit the file "wsgihandler.py" and include any third party WSGI middleware.
* You can connect third party WSGI middleware to any specific action in your apps.
* You can call a third party WSGI app from your actions.

The only limitation is that you cannot use third party middleware to replace core web2py functions.

# External middleware¶

Consider the file "wsgihandler.py":

```
#...
LOGGING = False
#...
if LOGGING:
    aplicacion = gluon.main.appfactory(wsgiapp=gluon.main.wsgibase,
                                       logfilename='httpserver.log',
                                       profilerfilename=None)
else:
    aplicacion = gluon.main.wsgibase
```

When `LOGGING` is set to `True` , `gluon.main.wsgibase` is wrapped by the middleware function `gluon.main.appfactory` . It provides logging to the "httpserver.log" file. In a similar fashion you can add any third party middleware. More on this in the official WSGI documentation.

# Internal middleware¶

Given any action in your controllers (for example `index` ) and any third party middleware application (for example `MiMiddleware` , which converts output to upper case), you can use a web2py decorator to apply the middleware to that action. Here is an example:

```
class MiMiddleware:
    """Convertir la salida a mayúsculas"""
    def __init__(self, app):
        self.app = app
    def __call__(self, entorno, iniciar_respuesta):
        elementos = self.app(entorno, iniciar_respuesta)
        return [item.upper() for item in elementos]

@request.wsgi.middleware(MiMiddleware)
def index():
    return 'hola mundo'
```

We cannot promise that all third party middleware will work with this mechanism.

# Calling WSGI applications¶

It is easy to call a WSGI app from a web2py action. Here is an example:

```
def probar_app_wsgi(entorno, iniciar_respuesta):
    """Esta es una app WSGI para prueba"""
    estado = '200 OK'
    encabezados_respuesta = [('Content-type', 'text/plain'),
                             ('Content-Length', '13')]
    iniciar_respuesta(estado, encabezados_respuesta)
    return ['¡hola mundo!\n']

def index():
    """Una acción para prueba que llama a la app previa"""
    elementos = probar_app_wsgi(request.wsgi.environ,
                                request.wsgi.start_response)
    for item in elementos:
        response.write(item, escape=False)
    return response.body.getvalue()
```

In this case, the `index` action calls `probar_app_wsgi` and writes the returned value to the response without escaping it. Note that `index` is not itself a WSGI app, and it must use the normal web2py API (such as `response.write` to write to the socket).
# The views¶

# Chapter 5: The views

* The views
* Basic syntax
* HTML helpers
* BEAUTIFY
* Server-side DOM and parsing
* Page layout
* Default page layout
* Customizing the default layout
* Mobile development
* Functions in views
* Blocks in views

## The views¶

web2py uses Python for its models, controllers, and views, although it uses a slightly modified Python syntax in the views to allow more readable code without imposing any restrictions on proper Python usage.

The purpose of a view is to embed (Python) code in an HTML document. In general, this poses some problems:

* How should embedded code be escaped?
* Should indenting be based on Python or HTML rules?

web2py uses `{{ ... }}` to escape Python code embedded in HTML. The advantage of using curly braces instead of angle brackets is that they are transparent to all common editors. This allows the developer to use those editors to create web2py views.

The delimiters can be changed, for example with

```
response.delimiters = ('<?', '?>')
```

If this line is in a model, it applies everywhere in the application; if in a controller, only to the views of that controller's actions; if in an action, only to the view for that action.

Since the developer is embedding Python code into HTML, the document should be indented according to HTML rules, and not Python rules. Therefore, we allow unindented Python inside the `{{ ... }}` tags. Since Python normally uses indentation to delimit blocks of code, we need a different way to delimit them; this is why the web2py template language makes use of the Python `pass` keyword.

A code block starts with a line ending in a colon and ends with a line beginning with `pass` . The keyword `pass` is not necessary when the end of the block is obvious from the context.

Here is an example:

```
{{
if i == 0:
    response.write('i es 0')
else:
    response.write('i no es 0')
pass
}}
```

Note that `pass` is a Python keyword, not a web2py keyword. Some Python editors, such as Emacs, use the keyword `pass` to signify the division of blocks and use it to re-indent code automatically. The web2py template language does exactly the same. When it finds something like:

```
<html><body>
{{for x in range(10):}}{{=x}}hola<br />{{pass}}
</body></html>
```

it translates it into a program:

```
response.write("""<html><body>""", escape=False)
for x in range(10):
    response.write(x)
    response.write("""hola<br />""", escape=False)
response.write("""</body></html>""", escape=False)
```

`response.write` writes to `response.body` .

When there is an error in a web2py view, the error report shows the generated view code, not the actual view as written by the developer. This helps the developer debug the code by highlighting the actual code that is executed (which is something that can be debugged with an HTML editor or the DOM inspector of the browser).

Also note that:

`{{=x}}` generates `response.write(x)`

Variables injected into the HTML in this way are escaped by default. The escaping is ignored if `x` is an `XML` object, even if escape is set to `True` .
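For instance, given the rules just stated, the same string renders differently with and without `XML` :

```
{{x = '<b>negrita</b>'}}
{{=x}}       <!-- renders as &lt;b&gt;negrita&lt;/b&gt; -->
{{=XML(x)}}  <!-- renders as <b>negrita</b> -->
```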
Here is an example that introduces the `H1` helper:

`{{=H1(i)}}`

which is translated to:

```
response.write(H1(i))
```

Upon evaluation, the `H1` object and its components are recursively serialized, escaped, and written to the response body. The tags generated by `H1` and the HTML they contain are not escaped. This mechanism guarantees that all text --- and only text --- displayed on the web page is always escaped, thus preventing XSS vulnerabilities. At the same time, the code is simple and easy to debug.

The method

```
response.write(obj, escape=True)
```

takes two arguments: the object to be written, and whether it should be escaped (set to `True` by default). If `obj` has a `.xml()` method, it is called and the result is written to the response body (the `escape` argument is ignored). Otherwise it uses the object's `__str__` method to serialize it and, if the escape argument is `True` , escapes it. All built-in helpers ( `H1` in the example) are objects that know how to serialize themselves via the `.xml()` method.

This is all done transparently. You never need to (and never should) call the `response.write` method explicitly.

### Basic syntax¶

The web2py template language supports all Python control structures. Here we provide some examples of each. They can be nested according to usual programming practice.

`for...in` ¶

In templates you can loop over any iterable object:

```
{{items = ['a', 'b', 'c']}}
<ul>
{{for item in items:}}<li>{{=item}}</li>{{pass}}
</ul>
```

which produces:

```
<ul>
<li>a</li>
<li>b</li>
<li>c</li>
</ul>
```

Here `items` is any iterable object, such as a Python list, a Python tuple, or a Rows object, or any object implemented as an iterator. The elements displayed are first serialized and escaped.

`while` ¶

You can create a loop using the while keyword:

```
{{k = 3}}
<ul>
{{while k > 0:}}<li>{{=k}}{{k = k - 1}}</li>{{pass}}
</ul>
```

which produces:

```
<ul>
<li>3</li>
<li>2</li>
<li>1</li>
</ul>
```

`if...elif...else` ¶

You can use conditional clauses:

```
{{
import random
k = random.randint(0, 100)
}}
<h2>
{{=k}}
{{if k % 2:}}es impar{{else:}}es par{{pass}}
</h2>
```

which produces, for example:

```
<h2>
45 es impar
</h2>
```

Since it is obvious that `else` closes the first `if` block, there is no need for a `pass` statement there, and using one would be incorrect. However, you must explicitly close the `else` block with a `pass` .

Recall that in Python "else if" is written `elif` , as in the following example:

```
{{
import random
k = random.randint(0, 100)
}}
<h2>
{{=k}}
{{if k % 4 == 0:}}es divisible por 4
{{elif k % 2 == 0:}}es par
{{else:}}es impar
{{pass}}
</h2>
```

which produces, for example:

```
<h2>
64 es divisible por 4
</h2>
```

`try...except` ¶

It is possible to use `try...except` statements in views, with one caveat. Consider the following example:

```
{{try:}}
Hola {{= 1 / 0}}
{{except:}}
división por cero
{{else:}}
no hay división por cero
{{finally:}}
<br />
{{pass}}
```

It will produce the following output:

```
Hola división por cero
<br />
```

This example shows that output generated inside the try block before the exception occurs is rendered: "Hola" is written because it precedes the exception.

`def...return` ¶

The web2py template language allows the developer to define and implement functions that can return any Python object or a text/html string.
Here we consider two examples:

```
{{def itemize1(link): return LI(A(link, _href="http://" + link))}}
<ul>
{{=itemize1('www.google.com')}}
</ul>
```

produces the following output:

```
<ul>
<li><a href="http://www.google.com">www.google.com</a></li>
</ul>
```

The function `itemize1` returns a helper object that is inserted at the point where the function is called.

Consider now the following example:

```
{{def itemize2(link):}}
<li><a href="http://{{=link}}">{{=link}}</a></li>
{{return}}
<ul>
{{itemize2('www.google.com')}}
</ul>
```

It produces exactly the same output as above. In this case, the function `itemize2` represents a piece of HTML that replaces the web2py tag where the function is called. Notice that there is no '=' in front of the call to `itemize2` , since the function does not return the text; it writes it directly into the response.

One caveat: functions defined inside a view must terminate with a return statement, or the automatic indentation will fail.

### HTML helpers¶

Consider the following code in a view:

```
{{=DIV('esta', 'es', 'una', 'prueba', _id='123', _class='miclase')}}
```

It is rendered as:

```
<div id="123" class="miclase">estaesunaprueba</div>
```

`DIV` is a helper class, i.e., something that can be used to build HTML programmatically. It corresponds to the HTML `<div>` tag.

Positional arguments are interpreted as objects contained between the opening and closing tags. Named arguments that start with an underscore are interpreted as HTML tag attributes (without the underscore). Some helpers also have named arguments that do not start with an underscore; these arguments are tag-specific.

Instead of a set of unnamed arguments, a helper can also take a single list or tuple as its set of components, using the `*` notation, and it can take a single dictionary as its set of attributes, using `**` , for example:

```
{{
contenido = ['este', 'es', 'un', 'prueba']
atributos = {'_id':'123', '_class':'miclase'}
=DIV(*contenido, **atributos)
}}
```

(which produces the same output as before).

The following set of helpers:

`A` , `B` , `BEAUTIFY` , `BODY` , `BR` , `CAT` , `CENTER` , `CODE` , `COL` , `COLGROUP` , `DIV` , `EM` , `EMBED` , `FIELDSET` , `FORM` , `H1` , `H2` , `H3` , `H4` , `H5` , `H6` , `HEAD` , `HR` , `HTML` , `I` , `IFRAME` , `IMG` , `INPUT` , `LABEL` , `LEGEND` , `LI` , `LINK` , `MARKMIN` , `MENU` , `META` , `OBJECT` , `ON` , `OL` , `OPTGROUP` , `OPTION` , `P` , `PRE` , `SCRIPT` , `SELECT` , `SPAN` , `STYLE` , `TABLE` , `TAG` , `TBODY` , `TD` , `TEXTAREA` , `TFOOT` , `TH` , `THEAD` , `TITLE` , `TR` , `TT` , `UL` , `URL` , `XHTML` , `XML` , `embed64` , `xmlescape`

can be used to build complex expressions that can then be serialized to XML [xml-w] [xml-o]. For example:

```
{{=DIV(B(I("hola ", "<mundo>")), _class="miclase")}}
```

is rendered:

```
<div class="miclase"><b><i>hola &lt;mundo&gt;</i></b></div>
```

Helpers can also be serialized into strings, equivalently, with the `__str__` and `xml` methods:

```
>>> print str(DIV("hola mundo"))
<div>hola mundo</div>
>>> print DIV("hola mundo").xml()
<div>hola mundo</div>
```

The helper mechanism in web2py is more than a system to generate HTML without string concatenation. It provides a server-side representation of the Document Object Model (DOM).
Components of helpers can be referenced via their position, and helpers act as lists with respect to their components:

```
>>> a = DIV(SPAN('a', 'b'), 'c')
>>> print a
<div><span>ab</span>c</div>
>>> del a[1]
>>> a.append(B('x'))
>>> a[0][0] = 'y'
>>> print a
<div><span>yb</span><b>x</b></div>
```

Attributes of helpers can be referenced by name, and helpers act as dictionaries with respect to their attributes:

```
>>> a = DIV(SPAN('a', 'b'), 'c')
>>> a['_class'] = 's'
>>> a[0]['_class'] = 't'
>>> print a
<div class="s"><span class="t">ab</span>c</div>
```

Note that the full set of components can be accessed via a list called `a.components` , and the full set of attributes can be accessed via a dictionary called `a.attributes` . So, `a[i]` is equivalent to `a.components[i]` when `i` is an integer, and `a[s]` is equivalent to `a.attributes[s]` when `s` is a string.

Notice that helper attributes are passed as keyword arguments to the helper. In some cases, however, attribute names include special characters that are not allowed in Python identifiers (e.g., hyphens) and therefore cannot be used as keyword argument names. For example:

```
DIV('text', _data-role='collapsible')
```

will not work, because "_data-role" includes a hyphen, which produces a Python syntax error. In such cases you can pass the attributes as a dictionary and make use of Python's `**` function argument notation, which maps a dictionary of (name:value) pairs into a set of keyword arguments:

```
>>> print DIV('text', **{'_data-role': 'collapsible'})
<div data-role="collapsible">text</div>
```

You can also create special TAGs dynamically:

```
>>> print TAG['soap:Body']('algo', **{'_xmlns:m': 'http://www.example.org'})
<soap:Body xmlns:m="http://www.example.org">algo</soap:Body>
```

`XML` ¶

`XML` is an object used to encapsulate text that should not be escaped. The text may or may not contain valid XML. For example, it could contain JavaScript.

The text in this example is escaped:

```
>>> print DIV("<b>hola</b>")
&lt;b&gt;hola&lt;/b&gt;
```

By using `XML` you can prevent the escaping:

```
>>> print DIV(XML("<b>hola</b>"))
<b>hola</b>
```

Sometimes you want to render HTML stored in a variable, but the HTML may contain unsafe tags such as scripts:

```
>>> print XML('<script>alert("no es seguro!")</script>')
<script>alert("no es seguro!")</script>
```

Unescaped executable input like this (for example, entered in the body of a blog comment) is unsafe, because it can be used to mount Cross Site Scripting (XSS) attacks against other visitors to the page.

The web2py `XML` helper can sanitize our text to prevent injections, escaping all tags except those you explicitly allow. Here is an example:

```
>>> print XML('<script>alert("no es seguro!")</script>', sanitize=True)
&lt;script&gt;alert(&quot;no es seguro!&quot;)&lt;/script&gt;
```

The `XML` constructor, by default, considers the content of some tags and some of their attributes safe. You can override the defaults using the optional `permitted_tags` and `allowed_attributes` arguments. Here are the default values of the optional arguments of the `XML` helper:
```
XML(text, sanitize=False,
    permitted_tags=['a', 'b', 'blockquote', 'br/', 'i', 'li',
        'ol', 'ul', 'p', 'cite', 'code', 'pre', 'img/'],
    allowed_attributes={'a':['href', 'title'],
        'img':['src', 'alt'], 'blockquote':['type']})
```

`A`

This helper is used to build links.

```
>>> print A('<haz clic>', XML('<b>aquí</b>'), _href='http://www.web2py.com')
<a href="http://www.web2py.com">&lt;haz clic&gt;<b>aquí</b></a>
```

Instead of `_href` you can pass the URL using the `callback` argument. For example, in a view:

```
{{=A('haz clic aquí', callback=URL('miaccion'))}}
```

and the effect of clicking the link will be an ajax call to "miaccion" instead of a redirection. In this case, two more arguments can optionally be specified: `target` and `delete`:

```
{{=A('clic aquí', callback=URL('miaccion'), target="t")}}
<div id="t"></div>
```

and the response of the ajax callback will be stored in the DIV with id equal to "t".

```
<div id="b">{{=A('clic aquí', callback=URL('miaccion'), delete='div#b')}}</div>
```

and upon response, the closest tag matching "div#b" will be deleted. In this case, the button itself will be deleted. A typical application is:

```
{{=A('clic aquí', callback=URL('miaccion'), delete='tr')}}
```

in a table. Clicking the button performs the callback and removes the table row. `callback` and `delete` can be combined.

The A helper takes a special argument called `cid`. It works as follows:

```
{{=A('página del link', _href='http://example.com', cid='miid')}}
<div id="miid"></div>
```

and clicking the link causes the content to be loaded into the div. This is similar to, but more powerful than, the above syntax, since it is designed to refresh page components. Applications of `cid` are discussed in more detail in Chapter 12, in the context of components.

These ajax features require jQuery and "static/js/web2py.js", which are included automatically by placing

```
{{include 'web2py_ajax.html'}}
```

in the head section of the overall layout. "views/web2py_ajax.html" defines some variables based on `request` and includes all necessary js and css files.

`B`

This helper makes its contents bold.

```
>>> print B('<hola>', XML('<i>mundo</i>'), _class='prueba', _id=0)
<b id="0" class="prueba">&lt;hola&gt;<i>mundo</i></b>
```

`BODY`

This helper makes the body of a page.

```
>>> print BODY('<hola>', XML('<b>mundo</b>'), _bgcolor='red')
<body bgcolor="red">&lt;hola&gt;<b>mundo</b></body>
```

`BR`

This helper creates a line break.

```
>>> print BR()
<br />
```

Note that helpers can be repeated using the multiplication operator:

```
>>> print BR()*5
<br /><br /><br /><br /><br />
```

`CAT`

This helper concatenates other helpers, analogous to `TAG['']`.

```
>>> print CAT('Aquí tenemos ', A('link', _href=URL()), ', y aquí hay un poco de ', B('texto en negrita'), '.')
Aquí tenemos <a href="/app/default/index">link</a>, y aquí hay un poco de <b>texto en negrita</b>.
```

`CENTER`

This helper centers its content.

```
>>> print CENTER('<hola>', XML('<b>mundo</b>'),
>>>        _class='prueba', _id=0)
<center id="0" class="prueba">&lt;hola&gt;<b>mundo</b></center>
```

`CODE`

This helper performs syntax highlighting for Python, C, C++, HTML and web2py code, and it is preferable to `PRE` for code listings. `CODE` also has the ability to create links to the web2py API documentation.
Here is an example of highlighting a section of Python source code:

```
>>> print CODE('print "hola"', language='python').xml()
<table><tr valign="top"><td style="width:40px; text-align: right;"><pre style="
font-size: 11px;
font-family: Bitstream Vera Sans Mono,monospace;
background-color: transparent;
margin: 0;
padding: 5px;
border: none;
background-color: #E0E0E0;
color: #A0A0A0;
">1.</pre></td><td><pre style="
font-size: 11px;
font-family: Bitstream Vera Sans Mono,monospace;
background-color: transparent;
margin: 0;
padding: 5px;
border: none;
overflow: auto;
"><span style="color:#185369; font-weight: bold">print </span>
<span style="color: #FF9966">"hola"</span></pre></td></tr>
</table>
```

Here is a similar example for HTML:

```
>>> print CODE(
>>>     '<html><body>{{=request.env.remote_add}}</body></html>',
>>>     language='html')
```

```
<table>...<code>...
<html><body>{{=request.env.remote_add}}</body></html>
...</code>...</table>
```

These are the default arguments for the `CODE` helper:

```
CODE("print 'hola mundo'", language='python', link=None, counter=1, styles={})
```

Supported values for the `language` argument are "python", "html_plain", "c", "cpp", "web2py", and "html". The "html" language interprets {{ and }} tags as "web2py" code, while "html_plain" does not.

If a `link` value is specified, for example "/examples/global/vars/", web2py API references in the code are linked to documentation at the link URL. For example, "request" would be linked to "/examples/global/vars/request". In the above example, the link URL is handled by the "vars" action in the "global.py" controller, which is distributed as part of the web2py "examples" application.

The `counter` argument is used for line numbering. It can be set to any of three different values. It can be `None` for no line numbers, a numerical value specifying the start number, or a string. If the counter is set to a string, it is interpreted as a prompt, and no line numbers are created.

The `styles` argument is a bit tricky. If you look at the generated HTML above, it contains a table with two columns, and each column has its own style declared inline using CSS. The `styles` attribute gives you the ability to override those CSS styles. For example:

```
{{=CODE(...,styles={'CODE':'margin: 0;padding: 5px;border: none;'})}}
```

The `styles` attribute must be a dictionary, and it allows two possible keys: `CODE` for the style of the actual code, and `LINENUMBERS` for the style of the left column, which contains the line numbers. Mind that these styles completely replace the existing styles; they are not added to them.

`COL`

```
>>> print COL('a','b')
<col>ab</col>
```

`COLGROUP`

```
>>> print COLGROUP('a','b')
<colgroup>ab</colgroup>
```

`DIV`

All helpers apart from `XML` are derived from `DIV` and inherit its basic methods.

```
>>> print DIV('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<div id="0" class="prueba">&lt;hola&gt;<b>mundo</b></div>
```

`EM`

Emphasizes its content.

```
>>> print EM('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<em id="0" class="prueba">&lt;hola&gt;<b>mundo</b></em>
```

`FIELDSET`

This is used to create an input field together with its label.
```
>>> print FIELDSET('Altura:', INPUT(_name='altura'), _class='prueba')
<fieldset class="prueba">Altura:<input name="altura" /></fieldset>
```

`FORM`

This is one of the most important helpers. In its simple form it just makes a `<form>...</form>` tag, but because helpers are objects and have knowledge of what they contain, they can process submitted forms (for example, perform validation of the fields). This is discussed in detail in Chapter 7.

```
>>> print FORM(INPUT(_type='submit'), _action='', _method='post')
<form enctype="multipart/form-data" action="" method="post">
<input type="submit" /></form>
```

The "enctype" is "multipart/form-data" by default.

The constructor of a `FORM`, and of a `SQLFORM`, can also take a special argument called `hidden`. When a dictionary is passed as `hidden`, its items are translated into "hidden" INPUT fields. For example:

```
>>> print FORM(hidden=dict(a='b'))
<form enctype="multipart/form-data" action="" method="post">
<input value="b" type="hidden" name="a" /></form>
```

`H1`, `H2`, `H3`, `H4`, `H5`, `H6`

These helpers are for paragraph headings and subheadings:

```
>>> print H1('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<h1 id="0" class="prueba">&lt;hola&gt;<b>mundo</b></h1>
```

`HEAD`

For tagging the HEAD of an HTML page.

```
>>> print HEAD(TITLE('<hola>', XML('<b>mundo</b>')))
<head><title>&lt;hola&gt;<b>mundo</b></title></head>
```

`HTML`

This helper is a little different. In addition to making the `<html>` tags, it prepends the tag with a doctype string [xhtml-w,xhtml-o,xhtml-school].

```
>>> print HTML(BODY('<hola>', XML('<b>mundo</b>')))
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html><body>&lt;hola&gt;<b>mundo</b></body></html>
```

The HTML helper also takes some additional optional arguments that have the following defaults:

```
HTML(..., lang='en', doctype='transitional')
```

`XHTML`

XHTML is similar to HTML but it creates an XHTML doctype instead.

```
XHTML(..., lang='en', doctype='transitional', xmlns='http://www.w3.org/1999/xhtml')
```

`HR`

This helper creates a horizontal line in the page.

```
>>> print HR()
<hr />
```

`I`

This helper makes its contents italic.

```
>>> print I('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<i id="0" class="prueba">&lt;hola&gt;<b>mundo</b></i>
```

`INPUT`

Creates an `<input.../>` tag. An input tag may not contain other tags, and it is closed by `/>` instead of `>`. The input tag has an optional attribute `_type` that can be set to "text" (the default), "submit", "checkbox", or "radio".

```
>>> print INPUT(_name='prueba', _value='a')
<input value="a" name="prueba" />
```

It also takes an optional special argument called "value", distinct from "_value". The latter sets the default value for the input field; the former sets its current value.
For an input of type "text", the former overrides the latter:

```
>>> print INPUT(_name='prueba', _value='a', value='b')
<input value="b" name="prueba" />
```

For radio buttons, `INPUT` selectively sets the "checked" attribute:

```
>>> for v in ['a', 'b', 'c']:
>>>     print INPUT(_type='radio', _name='prueba', _value=v, value='b'), v
<input value="a" type="radio" name="prueba" /> a
<input value="b" type="radio" checked="checked" name="prueba" /> b
<input value="c" type="radio" name="prueba" /> c
```

and similarly for checkboxes:

```
>>> print INPUT(_type='checkbox', _name='prueba', _value='a', value=True)
<input value="a" type="checkbox" checked="checked" name="prueba" />
>>> print INPUT(_type='checkbox', _name='prueba', _value='a', value=False)
<input value="a" type="checkbox" name="prueba" />
```

`IFRAME`

This helper includes another web page in the current page. The url of the other page is specified via the "_src" attribute.

```
>>> print IFRAME(_src='http://www.web2py.com')
<iframe src="http://www.web2py.com"></iframe>
```

`IMG`

It can be used to embed images into HTML:

```
>>> IMG(_src='http://example.com/image.png',_alt='prueba')
<img src="http://example.com/image.png" alt="prueba" />
```

Here is a combination of the A, IMG, and URL helpers for including a static image with a link:

```
>>> A(IMG(_src=URL('static','logo.png'), _alt="Mi Logo"), _href=URL('default','index'))
<a href="/myapp/default/index">
<img src="/myapp/static/logo.png" alt="Mi Logo" />
</a>
```

`LABEL`

It is used to create a LABEL tag for an INPUT field.

```
>>> print LABEL('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<label id="0" class="prueba">&lt;hola&gt;<b>mundo</b></label>
```

`LEGEND`

It is used to create a legend tag for a field in a form.

```
>>> print LEGEND('Name', _for='micampo')
<legend for="micampo">Name</legend>
```

`LI`

It makes a list item and should be contained in a `UL` or `OL` tag.

```
>>> print LI('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<li id="0" class="prueba">&lt;hola&gt;<b>mundo</b></li>
```

`META`

To be used for building `META` tags in the HTML head. For example:

```
>>> print META(_name='seguridad', _content='alta')
<meta name="seguridad" content="alta" />
```

`MARKMIN`

Implements the markmin wiki syntax.
It converts the input text into html output according to the markmin rules described in the following example:

```
>>> print MARKMIN("esto es en **negrita** o ''inclinada'' y este es [[un link http://web2py.com]]")
<p>esto es en <b>negrita</b> o <i>inclinada</i> y este es <a href="http://web2py.com">un link</a></p>
```

The markmin syntax is described in this file that ships with web2py:

```
http://127.0.0.1:8000/examples/static/markmin.html
```

You can use markmin to generate HTML, LaTeX and PDF documents:

```
m = "hola **mundo** [[link http://web2py.com]]"
from gluon.contrib.markmin.markmin2html import markmin2html
print markmin2html(m)
from gluon.contrib.markmin.markmin2latex import markmin2latex
print markmin2latex(m)
from gluon.contrib.markmin.markmin2pdf import markmin2pdf
print markmin2pdf(m) # requires pdflatex
```

(the `MARKMIN` helper is a shortcut for `markmin2html`)

Here is a brief introduction to the syntax:

| CODE | OUTPUT |
| --- | --- |
| `# título` | a level-1 heading ("título") |
| `## sección` | a level-2 heading ("sección") |
| `### subsección` | a level-3 heading ("subsección") |
| `**negrita**` | bold text |
| `''inclinada''` | italic text |
| `http://google.com` | a link to the URL |
| `[[clic aquí http://google.com]]` | a link labeled "clic aquí" |

Simply including a link to an image, a video or an audio file without markup results in the corresponding image, video or audio being embedded automatically (for audio and video it uses the html <audio> and <video> tags).

Adding a link with the `qr:` prefix, such as `qr:http://web2py.com`, results in the corresponding QR code being embedded, linking to the URL.

Adding a link with the `embed:` prefix, like this:

```
embed:http://www.youtube.com/embed/x1w8hKTJ2Co
```

results in the page being embedded; in this case a youtube video is embedded.

Images can also be embedded with the following syntax:

```
[[descripción-de-la-imagen http://.../imagen.png right 200px]]
```

Unordered lists with:

```
- one
- two
- three
```

Ordered lists with:

```
+ one
+ two
+ three
```

and tables with:

```
----------
X | 0 | 0
0 | X | 0
0 | 0 | 1
----------
```

The MARKMIN syntax also supports blockquotes, HTML5 audio and video tags, image alignment, custom css, and it can be extended:

```
MARKMIN("``abab``:custom", extra=dict(custom=lambda text: text.replace('a','c')))
```

produces `'cbcb'`.

Custom blocks are delimited with "``...``:<clave>" and they are rendered by the function passed as the value of the corresponding key in the `extra` dictionary argument of MARKMIN. Mind that the function may need to escape the output to prevent XSS.

`OBJECT`

Used to embed objects (for example, a flash player) in the HTML.

```
>>> print OBJECT('<hola>', XML('<b>mundo</b>'),
>>>              _src='http://www.web2py.com')
<object src="http://www.web2py.com">&lt;hola&gt;<b>mundo</b></object>
```

`OL`

It stands for Ordered List. The list should contain LI tags. `OL` arguments that are not `LI` objects are automatically enclosed in `<li>...</li>` tags.

```
>>> print OL('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<ol id="0" class="prueba"><li>&lt;hola&gt;</li><li><b>mundo</b></li></ol>
```

`ON`

This is here for backward compatibility and it is simply an alias for `True`. It is used exclusively for checkboxes and is deprecated since `True` is more Pythonic.

```
>>> print INPUT(_type='checkbox', _name='prueba', _checked=ON)
<input checked="checked" type="checkbox" name="prueba" />
```

`OPTGROUP`

Allows you to group multiple options in a SELECT, and it is useful to customize the fields using CSS.
```
>>> print SELECT('a', OPTGROUP('b', 'c'))
<select>
<option value="a">a</option>
<optgroup>
<option value="b">b</option>
<option value="c">c</option>
</optgroup>
</select>
```

`OPTION`

This should only be used as part of a SELECT/OPTION combination.

```
>>> print OPTION('<hola>', XML('<b>mundo</b>'), _value='a')
<option value="a">&lt;hola&gt;<b>mundo</b></option>
```

As in the case of `INPUT`, web2py makes a distinction between "_value" (the value of the OPTION) and "value" (the current value of the enclosing SELECT). If they are equal, the option is "selected".

```
>>> print SELECT('a', 'b', value='b')
<select>
<option value="a">a</option>
<option value="b" selected="selected">b</option>
</select>
```

`P`

This is for tagging a paragraph.

```
>>> print P('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<p id="0" class="prueba">&lt;hola&gt;<b>mundo</b></p>
```

`PRE`

Generates a `<pre>...</pre>` tag for displaying pre-formatted text. The `CODE` helper is generally preferable for code listings.

```
>>> print PRE('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<pre id="0" class="prueba">&lt;hola&gt;<b>mundo</b></pre>
```

`SCRIPT`

This includes or links a script, such as JavaScript. The content between the tags is rendered as an HTML comment, for the benefit of really old browsers.

```
>>> print SCRIPT('alert("hola mundo");', _type='text/javascript')
<script type="text/javascript"><!--
alert("hola mundo");
//--></script>
```

`SELECT`

Makes a `<select>...</select>` tag. This is used with the `OPTION` helper. `SELECT` arguments that are not `OPTION` objects are automatically converted to options.

```
>>> print SELECT('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<select id="0" class="prueba">
<option value="&lt;hola&gt;">&lt;hola&gt;</option>
<option value="&lt;b&gt;mundo&lt;/b&gt;"><b>mundo</b></option>
</select>
```

`SPAN`

Similar to `DIV` but used to tag inline (rather than block) content.

```
>>> print SPAN('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<span id="0" class="prueba">&lt;hola&gt;<b>mundo</b></span>
```

`STYLE`

Similar to script, but used to either include or link CSS code. Here the CSS is included:

```
>>> print STYLE(XML('body {color: white}'))
<style><!--
body { color: white }
//--></style>
```

and here it is linked:

```
>>> print STYLE(_src='style.css')
<style src="style.css"><!--
//--></style>
```

`TABLE`, `TR`, `TD`

These tags (along with the optional `THEAD`, `TBODY` and `TFOOT` helpers) are used to build HTML tables.

```
>>> print TABLE(TR(TD('a'), TD('b')), TR(TD('c'), TD('d')))
<table><tr><td>a</td><td>b</td></tr><tr><td>c</td><td>d</td></tr></table>
```

`TR` expects `TD` content; arguments that are not `TD` objects are converted automatically.

```
>>> print TABLE(TR('a', 'b'), TR('c', 'd'))
<table><tr><td>a</td><td>b</td></tr><tr><td>c</td><td>d</td></tr></table>
```

It is easy to convert a Python list into an HTML table using the `*` function-arguments notation, which maps list elements to positional function arguments.
Here, we do it one row at a time:

```
>>> table = [['a', 'b'], ['c', 'd']]
>>> print TABLE(TR(*table[0]), TR(*table[1]))
<table><tr><td>a</td><td>b</td></tr><tr><td>c</td><td>d</td></tr></table>
```

Here instead we do all rows at once:

```
>>> table = [['a', 'b'], ['c', 'd']]
>>> print TABLE(*[TR(*rows) for rows in table])
<table><tr><td>a</td><td>b</td></tr><tr><td>c</td><td>d</td></tr></table>
```

`TBODY`

This is used to tag rows contained in the table body, as opposed to header or footer rows. It is optional.

```
>>> print TBODY(TR('<hola>'), _class='prueba', _id=0)
<tbody id="0" class="prueba"><tr><td>&lt;hola&gt;</td></tr></tbody>
```

`TEXTAREA`

This helper makes a `<textarea>...</textarea>` tag.

```
>>> print TEXTAREA('<hola>', XML('<b>mundo</b>'), _class='prueba')
<textarea class="prueba" cols="40" rows="10">&lt;hola&gt;<b>mundo</b></textarea>
```

The only caveat is that its optional "value" attribute overrides its content (inner HTML):

```
>>> print TEXTAREA(value="<hola mundo>", _class="prueba")
<textarea class="prueba" cols="40" rows="10">&lt;hola mundo&gt;</textarea>
```

`TFOOT`

This is used to tag table footer rows.

```
>>> print TFOOT(TR(TD('<hola>')), _class='prueba', _id=0)
<tfoot id="0" class="prueba"><tr><td>&lt;hola&gt;</td></tr></tfoot>
```

`TH`

This is used instead of `TD` in table headers.

```
>>> print TH('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<th id="0" class="prueba">&lt;hola&gt;<b>mundo</b></th>
```

`THEAD`

This is used to tag table header rows.

```
>>> print THEAD(TR(TH('<hola>')), _class='prueba', _id=0)
<thead id="0" class="prueba"><tr><th>&lt;hola&gt;</th></tr></thead>
```

`TITLE`

This is used to tag the title of a page in an HTML header.

```
>>> print TITLE('<hola>', XML('<b>mundo</b>'))
<title>&lt;hola&gt;<b>mundo</b></title>
```

`TR`

Tags a table row. It should be rendered inside a table and contain `<td>...</td>` tags. `TR` arguments that are not `TD` objects are converted automatically.

```
>>> print TR('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<tr id="0" class="prueba"><td>&lt;hola&gt;</td><td><b>mundo</b></td></tr>
```

`TT`

Tags text as typewriter (monospaced) text.

```
>>> print TT('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<tt id="0" class="prueba">&lt;hola&gt;<b>mundo</b></tt>
```

`UL`

Signifies an unordered list and should contain LI items. If its content is not tagged as LI, UL does it automatically.

```
>>> print UL('<hola>', XML('<b>mundo</b>'), _class='prueba', _id=0)
<ul id="0" class="prueba"><li>&lt;hola&gt;</li><li><b>mundo</b></li></ul>
```

`embed64`

```
embed64(filename=None, file=None, data=None, extension='image/gif')
```

encodes the provided (binary) data into base64. filename: if provided, opens and reads this file in 'rb' mode. file: if provided, reads this file. data: if provided, uses the provided data.

`xmlescape`

```
xmlescape(data, quote=True)
```

returns an escaped string of the provided data.

```
>>> print xmlescape('<hola>')
&lt;hola&gt;
```

`TAG`

Sometimes you need to generate custom XML tags. web2py provides `TAG`, a universal tag generator.
```
{{=TAG.name('a', 'b', _c='d')}}
```

generates the following XML:

```
<name c="d">ab</name>
```

Arguments "a", "b", and "d" are automatically escaped; use the `XML` helper to suppress this behavior. Using `TAG` you can generate HTML/XML tags not already provided by the API. TAGs can be nested, and they are serialized with `str()`. An equivalent syntax is:

```
{{=TAG['name']('a', 'b', _c='d')}}
```

If the TAG object is created with an empty name, it can be used to concatenate multiple strings and HTML helpers without inserting them into a surrounding tag, but this technique is deprecated. Use the `CAT` helper instead.

Note that `TAG` is an object, and `TAG.name` or `TAG['name']` is a function that returns a temporary helper class.

`MENU`

The MENU helper takes a list of lists or of tuples in the format of `response.menu` (as described in Chapter 4) and generates a tree-like structure using unordered lists representing the menu. For example:

```
>>> print MENU([['Uno', False, 'link1'], ['Dos', False, 'link2']])
<ul class="web2py-menu web2py-menu-vertical">
<li><a href="link1">Uno</a></li>
<li><a href="link2">Dos</a></li>
</ul>
```

The third item in each list/tuple can be an HTML helper (which could include nested helpers), and the `MENU` helper will simply render that helper rather than creating its own `<a>` tag.

Each menu item can have a fourth argument that is a nested submenu (and so on recursively for every item):

```
>>> print MENU([['Uno', False, 'link1', [['Dos', False, 'link2']]]])
<ul class="web2py-menu web2py-menu-vertical">
<li class="web2py-menu-expand">
<a href="link1">Uno</a>
<ul class="web2py-menu-vertical">
<li><a href="link2">Dos</a></li>
</ul>
</li>
</ul>
```

A menu item can also have an optional fifth element, which is a boolean. When false, the menu item is ignored by the MENU helper.

The MENU helper takes the following optional arguments:

* `_class`: defaults to "web2py-menu web2py-menu-vertical" and sets the class of the outer UL elements.
* `ul_class`: defaults to "web2py-menu-vertical" and sets the class of the inner UL elements.
* `li_class`: defaults to "web2py-menu-expand" and sets the class of the inner LI elements.
* `li_first`: allows adding a class to the first list element.
* `li_last`: allows adding a class to the last list element.

`MENU` takes an optional `mobile` argument. When set to `True`, instead of building a recursive `UL` menu structure it returns a `SELECT` dropdown with all the menu options and an `onchange` attribute that redirects to the page corresponding to the selected option. This is designed as an alternative menu representation that increases usability on small mobile devices such as phones.

Normally the menu is used in a layout with the following syntax:

```
{{=MENU(response.menu, mobile=request.user_agent().is_mobile)}}
```

In this way a mobile device is automatically detected and a compatible menu is rendered accordingly.

`BEAUTIFY`

`BEAUTIFY` is used to build HTML representations of compound objects, including lists, tuples and dictionaries:

```
{{=BEAUTIFY({"a": ["hola", XML("mundo")], "b": (1, 2)})}}
```

`BEAUTIFY` returns an XML-like object serializable to XML, with a nice-looking representation of its constructor argument.
In this case, the XML representation of:

```
{"a": ["hola", XML("mundo")], "b": (1, 2)}
```

will render as:

```
<table>
<tr><td>a</td><td>:</td><td>hola<br />mundo</td></tr>
<tr><td>b</td><td>:</td><td>1<br />2</td></tr>
</table>
```

### Server-side DOM and parsing

`elements`

The DIV helper and all derived helpers provide the search methods `element` and `elements`.

`element` returns the first child element matching a specified condition (or None if no match).

`elements` returns a list of all matching children.

element and elements use the same syntax to specify the matching condition, which allows three possibilities that can be mixed and matched: jQuery-like expressions, match by exact attribute value, and match using regular expressions.

Here is a simple example:

```
>>> a = DIV(DIV(DIV('a', _id='referencia',_class='abc')))
>>> d = a.elements('div#referencia')
>>> d[0][0] = 'changed'
>>> print a
<div><div><div id="referencia" class="abc">changed</div></div></div>
```

The unnamed argument of `elements` is a string, which may contain: the name of a tag, the id of a tag preceded by a pound symbol (#), the class preceded by a dot (.), or the explicit value of an attribute in square brackets ([]).

Here are 4 equivalent ways to search the previous tag by id:

```
>>> d = a.elements('#referencia')
>>> d = a.elements('div#referencia')
>>> d = a.elements('div[id=referencia]')
>>> d = a.elements('div',_id='referencia')
```

Here are 4 equivalent ways to search the previous tag by class:

```
>>> d = a.elements('.abc')
>>> d = a.elements('div.abc')
>>> d = a.elements('div[class=abc]')
>>> d = a.elements('div',_class='abc')
```

Any attribute can be used to locate an element (not just `id` and `class`), including multiple attributes (the element method can take multiple named arguments), but only the first matching element will be returned.

Using the jQuery syntax "div#referencia" it is possible to specify multiple search criteria separated by a space:

```
>>> a = DIV(SPAN('a', _id='t1'), DIV('b', _class='c2'))
>>> d = a.elements('span#t1, div.c2')
```

or equivalently:

```
>>> a = DIV(SPAN('a', _id='t1'), DIV('b', _class='c2'))
>>> d = a.elements('span#t1', 'div.c2')
```

If the value of an attribute is specified using a name argument, it can be a string or a regular expression:

```
>>> a = DIV(SPAN('a', _id='prueba123'), DIV('b', _class='c2'))
>>> d = a.elements('span', _id=re.compile('prueba\d{3}'))
```

A special named argument of the DIV (and derived) helpers is `find`. It can be used to specify a search value or a search regular expression for the text content of the tag. For example:

```
>>> a = DIV(SPAN('abcde'), DIV('fghij'))
>>> d = a.elements(find='bcd')
>>> print d[0]
<span>abcde</span>
```

or

```
>>> a = DIV(SPAN('abcde'), DIV('fghij'))
>>> d = a.elements(find=re.compile('fg\w{3}'))
>>> print d[0]
<div>fghij</div>
```

`components`

Here is an example of listing all elements in an html string:

```
html = TAG('<a>xxx</a><b>yyy</b>')
for item in html.components:
    print item
```

`parent` and `siblings`

`parent` returns the parent of the current element.
```
>>> a = DIV(SPAN('a'),DIV('b'))
>>> s = a.element('span')
>>> d = s.parent
>>> d['_class']='abc'
>>> print a
<div class="abc"><span>a</span><div>b</div></div>
>>> for e in s.siblings(): print e
<div>b</div>
```

# Replacing elements

Matching elements can also be replaced or removed by specifying the `replace` argument. Note that a list of the matching elements is still returned.

```
>>> a = DIV(SPAN('x'), DIV(SPAN('y')))
>>> b = a.elements('span', replace=P('z'))
>>> print a
<div><p>z</p><div><p>z</p></div></div>
```

`replace` can be a callable. In this case it will be passed the original element and it is expected to return the replacement element:

```
>>> a = DIV(SPAN('x'), DIV(SPAN('y')))
>>> b = a.elements('span', replace=lambda t: P(t[0]))
>>> print a
<div><p>x</p><div><p>y</p></div></div>
```

If `replace=None`, matching elements are removed completely.

```
>>> a = DIV(SPAN('x'), DIV(SPAN('y')))
>>> b = a.elements('span', replace=None)
>>> print a
<div><div></div></div>
```

`flatten`

The flatten method recursively serializes the content of the children of a given element into regular text (without tags):

```
>>> a = DIV(SPAN('esta', DIV('es', B('una'))), SPAN('prueba'))
>>> print a.flatten()
estaesunaprueba
```

Flatten can be passed an optional argument, `render`, i.e. a function that renders/flattens the content using a different protocol. Here is an example to serialize some tags into markmin wiki syntax:

```
>>> a = DIV(H1('título'), P('ejemplo de un ', A('link', _href='#prueba')))
>>> from gluon.html import markmin_serializer
>>> print a.flatten(render=markmin_serializer)
# título

ejemplo de un [[link #prueba]]
```

At the time of writing, `markmin_serializer` and `markdown_serializer` are provided.

# Parsing

The TAG object is also an XML/HTML parser. It can read text and convert it into a tree structure of helpers. This allows manipulation using the API described above:

```
>>> html = '<h1>Título</h1><p>esta es una <span>prueba</span></p>'
>>> parsed_html = TAG(html)
>>> parsed_html.element('span')[0]='PRUEBA'
>>> print parsed_html
<h1>Título</h1><p>esta es una <span>PRUEBA</span></p>
```

### Page layout

Views can extend and include other views in a tree-like structure. For example, we can think of a view "index.html" that extends "layout.html" and includes "body.html". At the same time, "layout.html" may include "header.html" and "footer.html".

The root of the tree is what we call a layout view. Just like any other HTML template file, you can edit it using the web2py administrative interface. The file name "layout.html" is just a convention.

Here is a minimalist page that extends the "layout.html" view and includes the "pagina.html" view:

```
{{extend 'layout.html'}}
<h1>hola mundo</h1>
{{include 'pagina.html'}}
```

The extended layout file must contain an `{{include}}` directive, something like:

```
<html>
  <head>
    <title>Título de la página</title>
  </head>
  <body>
    {{include}}
  </body>
</html>
```

When the view is called, the extended (layout) view is loaded, and the calling view replaces the `{{include}}` directive inside the layout. Processing continues recursively until all `extend` and `include` directives have been processed. The resulting template is then translated into Python code.
Note that when an application is bytecode compiled, it is this Python code that is compiled, not the original view files. So, the bytecode compiled version of a given view is a single .pyc file that includes not only the Python code for the original view file, but the entire tree of extended and included views.

`extend`, `include`, `block` and `super` are special template directives, not Python commands.

Any content or code that precedes the `{{extend ...}}` directive will be inserted (and therefore executed) before the beginning of the extended view's content and code. Although this is not typically used to insert actual HTML content before the extended view's content, it can be useful as a means to define variables or functions that you want to make available to the extended view. For example, consider a view "index.html":

```
{{sidebar_enabled=True}}
{{extend 'layout.html'}}
<h1>Página de inicio</h1>
```

and an excerpt from "layout.html":

```
{{if sidebar_enabled:}}
<div id="barralateral">
Contenido de la barra lateral
</div>
{{pass}}
```

Because the `sidebar_enabled` assignment in "index.html" comes before the `extend`, that line gets inserted before the beginning of "layout.html", making `sidebar_enabled` available anywhere within the "layout.html" code (a somewhat more sophisticated version of this is used in the welcome app).

It is also worth pointing out that the variables returned by the controller function are available not only in the function's main view, but in all extended and included views as well.

The argument of an `extend` or `include` (i.e., the name of the extended or included view) can be a Python variable (though not a Python expression). However, this imposes a limitation -- views that use variables in `extend` or `include` directives cannot be bytecode compiled. As noted above, bytecode-compiled views include the entire tree of extended and included views, so the specific extended and included views must be known at compile time, which is not possible if the view names are variables (whose values are not determined until run time). Because bytecode compiling views can provide a significant speed boost, using variables in `extend` and `include` should be avoided where feasible.

In some cases, an alternative to using a variable in an `include` is simply to put regular `{{include ...}}` directives inside an `if...else` block:

```
{{if una_condicion:}}
{{include 'esta_vista.html'}}
{{else:}}
{{include 'esa_vista.html'}}
{{pass}}
```

The above code presents no problem for bytecode compilation because no variables are involved. Note, however, that the bytecode compiled view will actually include the Python code for both "esta_vista.html" and "esa_vista.html", though only the code for one of those views will be executed, depending on the value of `una_condicion`.

Keep in mind that this only works for `include` -- you cannot place `{{extend ...}}` directives inside `if...else` blocks.

Layouts are used to encapsulate page commonalities (headers, footers, menus), and though they are not mandatory, they will make your application easier to maintain.
In particular, we suggest writing layouts that take advantage of the following variables that can be set in the controller. Using these well-known variables will help make your layouts interchangeable:

```
response.title
response.subtitle
response.meta.author
response.meta.keywords
response.meta.description
response.flash
response.menu
response.files
```

Except for `menu` and `files`, these are all strings and their meaning should be obvious.

The `response.menu` menu is a list of three- or four-element tuples. The three elements are: the link name, a boolean representing whether the link is active (i.e., it is the current link), and the URL of the linked page. For example:

```
response.menu = [('Google', False, 'http://www.google.com', []),
                 ('Inicio', True, URL('inicio'), [])]
```

The fourth tuple element is an optional sub-menu.

`response.files` is a list of CSS and JS files that are needed by your page.

We also recommend that you use:

```
{{include 'web2py_ajax.html'}}
```

in the HTML head, since this will include the jQuery libraries and define some backward-compatible JavaScript functions for special effects and Ajax. "web2py_ajax.html" includes the `response.meta` tags in the view, jQuery base, the calendar datepicker, and all required CSS and JS files declared in `response.files`.

# Default page layout

The "views/layout.html" that ships with the web2py scaffolding application welcome (stripped down of some optional parts) is quite complex, but it has the following structure:

```
<!DOCTYPE html>
<head>
  <meta charset="utf-8" />
  <title>{{=response.title or request.application}}</title>
  ...
  <script src="{{=URL('static','js/modernizr.custom.js')}}"></script>
  {{
  response.files.append(URL('static','css/web2py.css'))
  response.files.append(URL('static','css/bootstrap.min.css'))
  response.files.append(URL('static','css/bootstrap-responsive.min.css'))
  response.files.append(URL('static','css/web2py_bootstrap.css'))
  }}
  {{include 'web2py_ajax.html'}}
  {{
  # using sidebars requires you to specify which one(s) you use
  left_sidebar_enabled = globals().get('left_sidebar_enabled',False)
  right_sidebar_enabled = globals().get('right_sidebar_enabled',False)
  middle_columns = {0:'span12',1:'span9',2:'span6'}[
      (left_sidebar_enabled and 1 or 0)+(right_sidebar_enabled and 1 or 0)]
  }}
  {{block head}}{{end}}
</head>
<body>
  <!-- Navbar ================================================== -->
  <div class="navbar navbar-inverse navbar-fixed-top">
    <div class="flash">{{=response.flash or ''}}</div>
    <div class="navbar-inner">
      <div class="container">
        {{=response.logo or ''}}
        <ul id="navbar" class="nav pull-right">
          {{='auth' in globals() and auth.navbar(mode="dropdown") or ''}}
        </ul>
        <div class="nav-collapse">
          {{if response.menu:}}
          {{=MENU(response.menu)}}
          {{pass}}
        </div><!--/.nav-collapse -->
      </div>
    </div>
  </div><!--/top navbar -->
  <div class="container">
    <!-- Masthead ================================================ -->
    <header class="mastheader row" id="header">
      <div class="span12">
        <div class="page-header">
          <h1>
            {{=response.title or request.application}}
            <small>{{=response.subtitle or ''}}</small>
          </h1>
        </div>
      </div>
    </header>
    <section id="main" class="main row">
      {{if left_sidebar_enabled:}}
      <div class="span3 left-sidebar">
        {{block left_sidebar}}
        <h3>Barra lateral izquierda</h3>
        <p></p>
        {{end}}
      </div>
      {{pass}}
      <div class="{{=middle_columns}}">
        {{block center}}
        {{include}}
        {{end}}
      </div>
      {{if right_sidebar_enabled:}}
      <div class="span3">
        {{block right_sidebar}}
        <h3>Barra lateral derecha</h3>
        <p></p>
        {{end}}
      </div>
      {{pass}}
    </section><!--/main-->
    <!-- Footer ================================================== -->
    <div class="row">
      <footer class="footer span12" id="footer">
        <div class="footer-content">
          {{block footer}} <!-- this is the default footer -->
          ...
          {{end}}
        </div>
      </footer>
    </div>
  </div> <!-- /container -->
  <!-- The javascript =============================================
       (placed at the end of the document so the page loads faster) -->
  <script src="{{=URL('static','js/bootstrap.min.js')}}"></script>
  <script src="{{=URL('static','js/web2py_bootstrap.js')}}"></script>
  {{if response.google_analytics_id:}}
  <script src="{{=URL('static','js/analytics.js')}}"></script>
  <script type="text/javascript">
  analytics.initialize({
    'Google Analytics':{trackingId:'{{=response.google_analytics_id}}'}
  });</script>
  {{pass}}
</body>
</html>
```

There are a few features of this default layout that make it very easy to use and customize:

* It is written in HTML5 and uses the "modernizr" [modernizr] library for backward compatibility. The actual layout includes some extra conditional statements required by IE; they were omitted here for brevity.
* It displays both `response.title` and `response.subtitle`, which can be set in a model. If they are not set, it adopts the application name as title.
* It includes the `web2py_ajax.html` file in the head, which generates all the link and script import statements.
* It uses a modified version of Twitter Bootstrap for flexible layouts. It is mobile-friendly and renders the columns properly on small screens.
* It uses "analytics.js" to connect to the Google Analytics service.
* The `{{=auth.navbar(...)}}` displays a welcome to the current user and links to the default auth functions such as login, logout, register, change password, etc., depending on context. It is a helper factory and its output can be manipulated as any other helper. It is placed inside a `{{try:}}...{{except:pass}}` in case auth has not been enabled.
* `{{=MENU(response.menu)}}` displays the menu structure as `<ul>...</ul>`.
* `{{include}}` is replaced with the content of the extending view when the page is rendered.
* By default it uses a conditional three-column layout (the left and right sidebars can be disabled by the extending views).
* It uses the following classes: header, main, footer.
* It contains the following blocks: statusbar, left_sidebar, center, right_sidebar, footer.

In views, you can enable and customize the sidebars as follows:

```
{{left_sidebar_enabled=True}}
{{extend 'layout.html'}}
Este texto va en el centro
{{block left_sidebar}}
Este texto va en la barra lateral
{{end}}
```

# Customizing the default layout

It is easy to customize the default page layout because the welcome application is based on Twitter Bootstrap, which is well documented and supports interchangeable themes.
In web2py, four files are relevant to style:

* "css/web2py.css" contains the web2py-specific stylesheet
* "css/bootstrap.min.css" contains the Bootstrap CSS stylesheet [bootstrap]
* "css/web2py_bootstrap.css" contains, with modifications, some Bootstrap style parameters to conform to web2py's needs.
* "js/bootstrap.min.js" which ships with the libraries for menu effects, modals, and panels.

To change colors and background images, try appending the following code to the layout.html header:

```
<style>
body { background: url('images/background.png') repeat-x #3A3A3A; }
a { color: #349C01; }
.header h1 { color: #349C01; }
.header h2 { color: white; font-style: italic; font-size: 14px;}
.statusbar { background: #333333; border-bottom: 5px #349C01 solid; }
.statusbar a { color: white; }
.footer { border-top: 5px #349C01 solid; }
</style>
```

Of course, you can also completely replace the "layout.html" and "web2py.css" files with your own.

# Mobile development

The default layout.html is designed to be mobile-friendly, but this is not enough. One may need different views when a page is visited by a mobile device.

To make developing for desktop and mobile devices easier, web2py includes the `@mobilize` decorator. This decorator is applied to actions that should have separate normal and mobile views. Here is how:

```
from gluon.contrib.user_agent_parser import mobilize
@mobilize
def index():
    return dict()
```

Notice that the decorator must be imported before it is used in a controller. When the "index" function is called from a regular browser (a desktop computer), web2py renders the returned dictionary using the view "[controlador]/index.html". However, when it is called by a mobile device, the dictionary is rendered by "[controlador]/index.mobile.html". Notice that mobile views have the "mobile.html" extension.

Alternatively you can apply the following logic to make all views mobile-friendly:

```
if request.user_agent().is_mobile:
    response.view = response.view.replace('.html', '.mobile.html')
```

The task of creating the "*.mobile.html" views is left to the developer, but we strongly suggest using the "jQuery Mobile" plugin, which makes it really easy.

### Functions in views

Given this "layout.html":

```
<html>
  <body>
    {{include}}
    <div class="sidebar">
      {{if 'mysidebar' in globals():}}{{mysidebar()}}{{else:}}
      mi barra lateral por defecto
      {{pass}}
    </div>
  </body>
</html>
```

and this extending view:

```
{{def mysidebar():}}
¡¡¡Mi nueva barra lateral!!!
{{return}}
{{extend 'layout.html'}}
¡¡¡Hola mundo!!!
```

Notice that the function is defined before the `{{extend...}}` directive -- this results in the function being created before the "layout.html" code is executed, so the function can be called anywhere within "layout.html", even before the `{{include}}`. Also notice that the function is included in the extended view without the `=` prefix.

The code generates the following output:

```
<html>
  <body>
    ¡¡¡Hola mundo!!!
    <div class="sidebar">
      ¡¡¡Mi nueva barra lateral!!!
    </div>
  </body>
</html>
```

Notice that the function is defined in HTML (although it could also contain Python code), so that `response.write` is used to write its content (the function does not return the content).
This is why the layout calls the function using `{{mysidebar()}}` rather than `{{=mysidebar()}}`. Functions defined this way can take arguments.

### Blocks in views

The main way to make a view more modular is by using `{{block...}}`s; this mechanism is an alternative to the mechanism discussed in the previous section.

Given the following "layout.html":

```
<html>
  <body>
    {{include}}
    <div class="sidebar">
      {{block mysidebar}}
      mi barra lateral por defecto
      {{end}}
    </div>
  </body>
</html>
```

and an extending view that redefines the block:

```
{{extend 'layout.html'}}
¡¡¡Hola mundo!!!
{{block mysidebar}}
¡¡¡Mi nueva barra lateral!!!
{{end}}
```

it generates the following output:

```
<html>
  <body>
    ¡¡¡Hola mundo!!!
    <div class="sidebar">
      ¡¡¡Mi nueva barra lateral!!!
    </div>
  </body>
</html>
```

You can have multiple blocks, and if a block is present in the extended view but not in the extending view, the content of the extended view is used. Also, notice that unlike with functions, it is not necessary to define blocks before the `{{extend ...}}` -- even if defined after the `extend`, they can be used to make substitutions anywhere in the extended view.

Inside a block, you can use the expression `{{super}}` to include the content of the parent view. For example, if we replace the above extending view with:

```
{{extend 'layout.html'}}
¡¡¡Hola mundo!!!
{{block mysidebar}}
{{super}}
¡¡¡Mi nueva barra lateral!!!
{{end}}
```

we get:

```
<html>
  <body>
    ¡¡¡Hola mundo!!!
    <div class="sidebar">
      mi barra lateral por defecto
      ¡¡¡Mi nueva barra lateral!!!
    </div>
  </body>
</html>
```

# The database abstraction layer

# Chapter 6: The database abstraction layer

* The database abstraction layer
* Dependencies
* Connection strings
* Reserved keywords
* DAL, Table, Field
* Record representation
* Migrations
* Fixing broken migrations
* insert
* commit and rollback
* Raw SQL
* drop
* Indexes
* Legacy databases and keyed tables
* Distributed transactions
* More on uploads
* Query, Set, Rows
* select
* Shortcuts
* Fetching a Row
* Recursive selects
* Serializing Rows in views
* orderby, groupby, limitby, distinct, having
* Logical operators
* count, isempty, delete, update
* Expressions
* case
* update_record
* Inserting and updating from a dictionary
* first and last
* as_dict and as_list
* Combining rows
* find, exclude, sort
* Other methods
* Computed fields
* Virtual fields
* One to many relation
* Many to many
* list:<type> and contains
* Other operators
* like, regexp, startswith, contains, upper, lower
* year, month, day, hour, minutes, seconds
* belongs
* sum, avg, min, max and len
* Substrings
* Default values with coalesce and coalesce_zero
* Generating raw SQL
* Importing and exporting data
* CSV (one Table at a time)
* CSV (all Tables at once)
* CSV and remote database synchronization
* HTML and XML (one Table at a time)
* Data representation
* Caching selects
* Self-references and aliases
* Advanced features
* Table inheritance
* filter_in and filter_out
* Callbacks before and after I/O
* Record versioning
* Common fields and multi-tenancy
* Common filters
* Custom Field types (experimental)
* Using DAL without defining tables
* PostGIS, SpatiaLite, and MS Geo (experimental)
* Copying data between databases
* Note on new DAL and adapters
* Gotchas

## The database abstraction layer

### Dependencies

web2py comes with a Database Abstraction Layer (DAL), an API that maps Python objects into
database objects such as queries, tables, and records. The DAL dynamically generates the SQL in real time using the dialect specific to the database in use, so that you do not have to write SQL code or learn different SQL dialects (we use the term SQL generically here), and so that the application is portable among different types of databases. A partial list of supported databases is shown in the table below. Please check the web2py web site and mailing list if you need a different adapter. The Google NoSQL database is a particular case and is discussed in Chapter 13.

The Windows binary distribution works out of the box with SQLite and MySQL. The Mac binary distribution works out of the box with SQLite. To use any other database back end, run from the source distribution and install the appropriate driver for your database. Once the proper driver is installed, start web2py from source, and it will find the new driver. Here is a list of the drivers:

| database | drivers (source) |
| --- | --- |
| SQLite | sqlite3 or pysqlite2 or zxJDBC [zxjdbc] (on Jython) |
| PostgreSQL | psycopg2 [psycopg2] or pg8000 [pg8000] or zxJDBC [zxjdbc] (on Jython) |
| MySQL | pymysql [pymysql] or MySQLdb [mysqldb] |
| Oracle | cx_Oracle [cxoracle] |
| MSSQL | pyodbc [pyodbc] |
| FireBird | kinterbasdb [kinterbasdb] or fdb or pyodbc |
| DB2 | pyodbc [pyodbc] |
| Informix | informixdb [informixdb] |
| Ingres | ingresdbi [ingresdbi] |
| Cubrid | cubriddb [cubridb] |
| Sybase | Sybase [Sybase] |
| Teradata | pyodbc [Teradata] |
| SAPDB | sapdb [SAPDB] |
| MongoDB | pymongo [pymongo] |
| IMAP | imaplib [IMAP] |

`sqlite3`, `pymysql`, `pg8000`, and `imaplib` ship with web2py. Support of MongoDB is experimental. The IMAP option allows you to use the DAL to access IMAP.

web2py defines the following classes that make up the DAL:

The DAL object represents a database connection. For example:

```
db = DAL('sqlite://storage.db')
```

Table represents a database table. Instances of Table are not created directly; `DAL.define_table` creates them.

```
db.define_table('mitabla', Field('micampo'))
```

The most important methods of a Table are `.insert`, `.truncate`, `.drop`, and `.import_from_csv_file`.

Field represents a database field. Instances of the Field class can be created and passed as arguments to `DAL.define_table`.

Rows is the object returned by a database select; it can be thought of as a list of `Row` records:

```
registros = db(db.mitabla.micampo!=None).select()
```

Row contains the field values of a record:

```
for registro in registros:
    print registro.micampo
```

Query is an object that represents an SQL "where" clause:

```
miconsulta = (db.mitabla.micampo != None) | (db.mitabla.micampo > 'A')
```

Set is an object that represents a set of records. Its most important methods are `count`, `select`, `update`, and `delete`. For example:

```
miset = db(miconsulta)
registros = miset.select()
miset.update(micampo='unvalor')
miset.delete()
```

Expression is something like an `orderby` or `groupby` expression. The Field class is derived from Expression. Here is an example.
```
miorden = db.mitabla.micampo.upper() | db.mitabla.id
db().select(db.mitabla.ALL, orderby=miorden)
```

### Connection strings

A connection with the database is established by creating an instance of the DAL object:

```
>>> db = DAL('sqlite://storage.db', pool_size=0)
```

`db` is not a keyword; it is a local variable that stores the `DAL` connection object. You are free to give it a different name. The constructor of `DAL` requires a single argument, the connection string. The connection string is the only web2py code that is specific to the database engine in use. Here are examples of connection strings for specific types of supported back-end databases (in all cases, we assume the database is running on localhost on its default port and is named "prueba"):

| database | connection string |
| --- | --- |
| SQLite | `sqlite://storage.db` |
| MySQL | `mysql://usuario:contraseña@localhost/prueba` |
| PostgreSQL | `postgres://usuario:contraseña@localhost/prueba` |
| MSSQL | `mssql://usuario:contraseña@localhost/prueba` |
| FireBird | `firebird://usuario:contraseña@localhost/prueba` |
| Oracle | `oracle://usuario/contraseña@prueba` |
| DB2 | `db2://usuario:contraseña@prueba` |
| Ingres | `ingres://usuario:contraseña@localhost/prueba` |
| Sybase | `sybase://usuario:contraseña@localhost/prueba` |
| Informix | `informix://usuario:contraseña@prueba` |
| Teradata | `teradata://DSN=dsn;UID=usuario;PWD=contraseña;DATABASE=prueba` |
| Cubrid | `cubrid://usuario:contraseña@localhost/prueba` |
| SAPDB | `sapdb://usuario:contraseña@localhost/prueba` |
| IMAP | `imap://usuario:contraseña@server:port` |
| MongoDB | `mongodb://usuario:contraseña@localhost/prueba` |
| Google/SQL | `google:sql` |
| Google/NoSQL | `google:datastore` |

Note that in SQLite the database consists of a single file. If it does not exist, it is created. This file is locked every time it is accessed. In the case of MySQL, PostgreSQL, MSSQL, FireBird, Oracle, DB2, Ingres and Informix, the "prueba" database must be created outside web2py. Once the connection is established, web2py will create, alter, and drop tables as needed.

It is also possible to set the connection string to `None`. In this case DAL will not connect to any back-end database, but the API can still be accessed for testing. Examples of this particular case are discussed in Chapter 7.

Sometimes you may need to generate SQL as if you had a database connection but without actually connecting to the database. This can be done with

```
db = DAL('...', do_connect=False)
```

In this case you will be able to call `_select`, `_insert`, `_update`, and `_delete` to generate the SQL without calling `select`, `insert`, `update`, or `delete`. In most cases you can use `do_connect=False` even without having the database drivers installed.

Notice that by default web2py uses utf8 character encoding. If you work with databases that behave differently, you have to change the optional parameter `db_codec`, for example with this command:

```
db = DAL('...', db_codec='latin1')
```

otherwise you will get UnicodeDecodeError tickets.

# Connection pooling

The second argument of the DAL constructor is `pool_size`; it defaults to zero.

Since establishing a new database connection for each request is rather slow, web2py implements a mechanism for connection pooling. Once a connection has been established, a page has been served and the transaction has completed, rather than closing the connection, it is recycled into a pool.
When the next request arrives, web2py tries to recycle a connection from the pool and use it for the new transaction. If there are no available connections in the pool, a new connection is established.

When web2py starts, the pool is always empty. The pool grows up to the minimum between the value of `pool_size` and the maximum number of concurrent requests. This means that if `pool_size=10` but your server never receives more than 5 concurrent requests, then the actual pool size will only grow to 5. If `pool_size=0`, then connection pooling is not used.

Connections in the pool are shared sequentially among threads, in the sense that they may be used by two consecutive, but not simultaneous, threads. There is only one pool per web2py process.

The `pool_size` parameter is ignored by SQLite and Google App Engine. For SQLite, connection pooling is skipped, since it would not yield any benefit.

# Connection failures

If web2py fails to connect to the database, it waits 1 second and tries again, up to 5 times, before declaring a failure. In the case of connection pooling it is possible that a pooled connection that stays open but unused for some time is closed by the database end. Thanks to the connection-recovery feature, web2py tries to reestablish these closed connections.

# Database replicas

The first argument of `DAL(...)` can be a list of different URIs. In this case web2py tries to connect to each of them. The main purpose of this is to deal with multiple database servers and distribute the workload among them. Here is a typical use case:

```
db = DAL(['mysql://...1', 'mysql://...2', 'mysql://...3'])
```

In this case the DAL tries to connect to the first and, on failure, it tries the second and then the third. This can also be used to distribute the load in a database master-slave configuration. We will talk more about this in Chapter 13 in the context of scalability.

# Reserved keywords

`check_reserved` is yet another argument that can be passed to the DAL constructor. It tells it to check table names and column names against reserved SQL keywords for the target database engines. The `check_reserved` argument defaults to None.

check_reserved is a list of strings that contain the database back-end adapter names. The adapter name is the same as the one used in the DAL connection string. So, if you want to check against PostgreSQL and MSSQL, your DAL constructor arguments would look like this:

```
db = DAL('sqlite://storage.db', check_reserved=['postgres', 'mssql'])
```

The DAL will scan the keywords in the same order as specified in the list.

There are two extra options, "all" and "common". If you specify all, it will check against all known SQL keywords. If you specify common, it will only check against common SQL keywords such as `SELECT`, `INSERT`, `UPDATE`, etc.

For a given database engine, you can also specify whether you want to check against non-reserved SQL keywords as well. In this case you append `_nonreserved` to the name. For example:

```
check_reserved=['postgres', 'postgres_nonreserved']
```

The following database back ends support reserved-word checking:
| back-end | check_reserved name |
| --- | --- |
| PostgreSQL | `postgres(_nonreserved)` |
| MySQL | `mysql` |
| FireBird | `firebird(_nonreserved)` |
| MSSQL | `mssql` |
| Oracle | `oracle` |

`DAL`, `Table`, `Field`¶

You can experiment with the DAL API using the web2py console. Start by creating a connection. To follow these examples, you can use SQLite. Nothing in this discussion changes when you switch the database engine.

```
>>> db = DAL('sqlite://storage.db')
```

The database connection is now established and stored in the global variable `db`. At any time you can retrieve the connection string:

```
>>> print db._uri
sqlite://storage.db
```

and the database name:

```
>>> print db._dbname
sqlite
```

The connection string is called `_uri` because it is an instance of a Uniform Resource Identifier.

The DAL allows multiple connections to the same database or to different databases, even databases of different types. For now, we assume the presence of a single database connection, since this is the usual situation.

The most important method of a DAL is `define_table`:

```
>>> db.define_table('persona', Field('nombre'))
```

It defines, stores, and returns a `Table` object called "persona" containing a field (column) "nombre". This object can also be accessed via `db.persona`, so you do not strictly need the value returned by the method.

You must not declare a field called "id", because web2py creates one automatically. Every table has a field called "id" by default. It is an auto-increment integer field (starting at 1) used for cross-references and to make every record unique, so "id" is a primary key. (Note: whether the counter starts at 1 is back-end specific. For example, it does not apply to the Google App Engine NoSQL.)

Optionally, you can define a field with `type='id'` and web2py will use this field as the auto-increment id field. This is not recommended except when accessing legacy database tables. With some limitations, you can also use different primary keys; this is discussed in the section "Legacy databases and keyed tables".

Tables can be defined only once, but you can force web2py to redefine an existing table:

```
db.define_table('persona', Field('nombre'))
db.define_table('persona', Field('nombre'), redefine=True)
```

The redefinition may trigger a migration if the field definitions have changed.

Because in web2py models are usually executed before controllers, it is possible for some tables to be defined even when they are not needed. Therefore, to speed up code execution, table definitions may need to be lazy. This is achieved by specifying the ``` DAL(..., lazy_tables=True) ``` attribute. Tables will actually be created only when accessed.
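To make that last point concrete, here is a minimal sketch of lazy tables (the table and field names are illustrative, not from the book):

```python
# With lazy_tables=True, define_table() only records the definition;
# no CREATE/ALTER TABLE is issued yet.
db = DAL('sqlite://storage.db', lazy_tables=True)
db.define_table('persona', Field('nombre'))

# The first access to db.persona triggers the actual creation/migration.
print db.persona.fields
```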
### Record representation¶

Although it is optional, it is recommended to specify a format representation for records:

```
>>> db.define_table('persona', Field('nombre'), format='%(nombre)s')
```

or

```
>>> db.define_table('persona', Field('nombre'), format='%(nombre)s %(id)s')
```

or even more complex ones using a function:

```
>>> db.define_table('persona', Field('nombre'),
       format=lambda r: r.nombre or 'anónimo')
```

The format attribute will be used for two purposes:

* To represent referenced records in drop-down menus.
* To set the ``` db.otratabla.persona.represent ``` attribute for all fields referencing this table. This means that SQLTABLE will not show references by id but will use the format representation instead.

These are the default values of a Field constructor:

```
Field(nombre, 'string', length=None, default=None,
      required=False, requires='<default>',
      ondelete='CASCADE', notnull=False, unique=False,
      uploadfield=True, widget=None, label=None, comment=None,
      writable=True, readable=True, update=None, authorize=None,
      autodelete=False, represent=None, compute=None,
      uploadfolder=os.path.join(request.folder, 'uploads'),
      uploadseparate=None, uploadfs=None)
```

Not all of these attributes are relevant to every field type. "length" is relevant only for fields of type "string". "uploadfield" and "authorize" are relevant only for fields of type "upload". "ondelete" is relevant only for fields of type "reference" and "upload".

* `length` sets the maximum length of a "string", "password", or "upload" field. If `length` is not specified, a default value is used, but the default value is not guaranteed to be backward compatible. To avoid unwanted migrations on upgrades, we recommend that you always specify the length of string, password, and upload fields.
* `default` sets the default value of the field. The default value is used when performing an insert if a value for the field is not explicitly specified. It is also used to pre-populate forms built with SQLFORM. Note that, rather than being a fixed value, the default can instead be a function (including a lambda) that returns a value of the appropriate type for the field. In that case, the function is called once for each inserted record, even when multiple records are inserted in a single transaction.
* `required` tells the DAL that no insert should be allowed on this table if a value for this field is not explicitly specified.
* `requires` is a validator or a list of validators. This is not used by the DAL, but it is used by SQLFORM. The default validators for the given field types are shown in the following table:

| field type | default field validators |
| --- | --- |
| string | IS_LENGTH(length), default length is 512 |
| text | IS_LENGTH(65536) |
| blob | None |
| boolean | None |
| integer | IS_INT_IN_RANGE(-1e100, 1e100) |
| double | IS_FLOAT_IN_RANGE(-1e100, 1e100) |
| decimal(n,m) | IS_DECIMAL_IN_RANGE(-1e100, 1e100) |
| date | IS_DATE() |
| time | IS_TIME() |
| datetime | IS_DATETIME() |
| password | None |
| upload | None |
| reference <table> | IS_IN_DB(db, tabla.campo, format) |
| list:string | None |
| list:integer | None |
| list:reference <table> | IS_IN_DB(db, tabla.campo, format, multiple=True) |
| json | IS_JSON() |
| bigint | None |
| big-id | None |
| big-reference | None |

Decimal requires and returns values as `Decimal` objects, as defined in Python's `decimal` module.
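As a brief illustrative sketch (the `producto`/`precio` names are made up here), a decimal field declares its precision in the type string and stores `Decimal` values:

```python
from decimal import Decimal

# 'decimal(10,2)': at most 10 digits in total, 2 after the decimal point.
db.define_table('producto', Field('precio', 'decimal(10,2)'))
db.producto.insert(precio=Decimal('19.99'))
```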
SQLite does not handle the `decimal` type, so internally it is treated as a `double`. The (n, m) are the total number of digits and the number of digits after the decimal point, respectively.

`big-id` and `big-reference` are only supported by some database engines and are experimental. They are not normally used as field types unless required by legacy databases. However, the DAL constructor has a `bigint_id` argument that, when set to `True`, turns the `id` and `reference` fields into `big-id` and `big-reference` respectively.

The `list:<tipo>` field types are special because they are designed to take advantage of certain denormalization features on NoSQL back-ends (in the case of Google App Engine NoSQL, the field types `ListProperty` and `StringListProperty`) while remaining compatible with relational databases. On relational databases, lists are stored as `text` fields. The items are separated by a `|` and each `|` in a string item is escaped as `||`. These fields are discussed in a dedicated section.

The `json` field type is self-explanatory. It can store any JSON-serializable object. It is designed to work specifically with MongoDB, and it is made compatible and portable across the other supported database engines.

Notice that `requires=...` is enforced at the level of forms, `required=True` is enforced at the level of the DAL (insert), while `notnull`, `unique`, and `ondelete` are enforced at the level of the database. While they may sometimes seem redundant, it is important to keep this distinction in mind when programming with the DAL.

* `ondelete` translates into the SQL "ON DELETE" statement. By default it is set to "CASCADE". This tells the database that when it deletes a record, it should also delete the records that refer to it. To disable this feature, set `ondelete` to "NO ACTION" or "SET NULL".
* `notnull=True` translates into the SQL "NOT NULL" statement. It prevents the database from inserting null values for this field.
* `unique=True` translates into the SQL "UNIQUE" statement and makes sure that the values of this field are unique within the table. It is enforced at the database level.
* `uploadfield` applies only to fields of type "upload". A field of type "upload" stores the name of a file saved somewhere else, by default on the filesystem under the application's "uploads/" folder. If `uploadfield` is set, the file is stored in a blob field within the same table and the value of `uploadfield` must be the name of that blob field. This is discussed in more detail in the context of SQLFORM.
* `uploadfolder` defaults to the application's "uploads/" folder. If set to a different path, files will be uploaded to that folder instead. For example, ``` Field(..., uploadfolder=os.path.join(request.folder, 'static/temp')) ``` will upload files to the "web2py/applications/miapp/static/temp" folder.
* `uploadseparate`, if set to True, will upload files into different subfolders of the uploadfolder. This optimizes file storage by avoiding too many files piling up in the same directory. WARNING: you cannot change the value of `uploadseparate` from True to False without breaking links to existing uploads. web2py either uses the separate subfolders or it does not.
Changing the behavior after files have been uploaded prevents web2py from being able to retrieve them. If this happens, it is possible to move the files and fix the problem, but the procedure is not described here.

* `uploadfs` lets you specify a different filesystem where files are uploaded, including Amazon S3 storage or a remote SFTP server. This option requires PyFileSystem to be installed, and `uploadfs` must point to a `PyFileSystem` object.
* `widget` must be one of the available widget objects, including custom widgets. A list of the built-in widgets is given in another section. Each field type has a default widget.
* `label` is a string (or a helper or anything that can be serialized to a string) containing the label to be used for this field in auto-generated forms.
* `comment` is a string (or a helper or anything that can be serialized to a string) containing a comment associated with this field, to be displayed to the right of the input field in auto-generated forms.
* `writable` declares whether a field can be edited in forms.
* `readable` declares whether a field can be viewed in forms. If a field is neither readable nor writable, it will not be displayed in create and update forms.
* `update` contains the default value for this field when the record is updated.
* `compute` is an optional function. When a record is inserted or updated, the compute function is executed and the field is populated with the function result. The record is passed to the compute function as a `dict`, and the dict does not include the current value of the computed field, nor of any other compute field.
* `authorize` can be used to require access control on the corresponding field; for "upload" fields only. It is discussed in more detail in the context of Access Control.
* `autodelete` determines whether the corresponding uploaded file should also be deleted when the record referencing it is deleted. For "upload" fields only.
* `represent` can be None or a function; the function takes the current value of the field as argument and returns an alternative representation for it. Examples:

```
db.mitabla.nombre.represent = lambda nombre, registro: nombre.capitalize()
db.mitabla.otro_id.represent = lambda id, registro: registro.micampo
db.mitabla.un_uploadfield.represent = lambda valor, registro: \
    A('Descárgalo haciendo clic aquí', _href=URL('download', args=valor))
```

"blob" fields are also special. By default, binary data is encoded in base64 before being stored in the actual database field, and decoded when extracted. This has the downside of using 25% more storage than necessary in blob fields, but it has two benefits. On average it reduces the amount of data communicated between web2py and the database server, and it makes the communication independent of back-end-specific escaping conventions.
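Here is a short sketch tying several of these attributes together (the `usuario` table and its fields are illustrative, and it assumes the code runs inside a web2py model where `request` is available):

```python
db.define_table('usuario',
    # an explicit length avoids surprise migrations; notnull/unique
    # are enforced by the database itself
    Field('correo', length=128, notnull=True, unique=True),
    # a callable default is evaluated once per inserted record
    Field('creado', 'datetime', default=request.now,
          writable=False, readable=True),
    # update= sets the value automatically on every update
    Field('modificado', 'datetime', update=request.now))
```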
Most attributes of fields and tables can be modified after they are defined:

```
db.define_table('persona', Field('nombre', default=''), format='%(nombre)s')
db.persona._format = '%(nombre)s/%(id)s'
db.persona.nombre.default = 'anónimo'
```

(notice that attributes of tables are usually prefixed by an underscore to avoid conflicts with possible field names).

You can list the tables that have been defined for a given database connection:

```
>>> print db.tables
['persona']
```

You can also list the fields that have been defined for a given table:

```
>>> print db.persona.fields
['id', 'nombre']
```

You can query the type of a table:

```
>>> print type(db.persona)
<class 'pydal.objects.Table'>
```

and you can access a table from the database connection using:

```
>>> print type(db['persona'])
<class 'pydal.objects.Table'>
```

Similarly, you can access fields by their name, in several equivalent ways:

```
>>> print type(db.persona.nombre)
<class 'pydal.objects.Field'>
>>> print type(db.persona['nombre'])
<class 'pydal.objects.Field'>
>>> print type(db['persona']['nombre'])
<class 'pydal.objects.Field'>
```

Given a field, you can read the attributes set in its definition:

```
>>> print db.persona.nombre.type
string
>>> print db.persona.nombre.unique
False
>>> print db.persona.nombre.notnull
False
>>> print db.persona.nombre.length
32
```

including its parent table, table name, and parent connection:

```
>>> db.persona.nombre._table == db.persona
True
>>> db.persona.nombre._tablename == 'persona'
True
>>> db.persona.nombre._db == db
True
```

A field also has methods. Some of them are used to build queries, and we will see them later. `validate` is a special method of the field object; it calls the validators for the field:

```
print db.persona.nombre.validate('Juan')
```

It returns a tuple `(value, error)`. `error` is `None` if the input passes validation.

### Migrations¶

`define_table` checks whether the defined table exists. If it does not, it generates the SQL to create it and executes it. If the table does exist but differs from the definition, it generates the SQL to alter the table and executes it. If a field changed its type but not its name, it will try to convert the data (if you do not want this, you need to redefine the table twice: the first time letting web2py drop the field by removing it, and the second time adding the newly defined field so that web2py can create it). If the table exists and matches the current definition, it is left untouched. In all cases it creates the `db.persona` object that represents the table.

We refer to this behavior as a "migration". web2py logs all migration attempts in the file "databases/sql.log".

The first argument of `define_table` is always the table name. The other unnamed (positional) arguments are the fields (Field).
The function also takes an optional keyword argument called "migrate", which must be specified explicitly by name, as in:

```
>>> db.define_table('persona', Field('nombre'), migrate='persona.tabla')
```

The value of migrate is the filename (in the application's "databases" folder) where web2py stores internal migration information for this table. These files are very important and should never be removed while the corresponding tables exist. In cases where a table has been dropped and the corresponding file still exists, it can be removed manually.

By default, migrate is set to True. This causes web2py to generate the filename from a hash of the connection string. If migrate is set to False, the migration is not performed, and web2py assumes that the table exists in the database and that it contains (at least) the fields listed in `define_table`. The best practice is to give the migrate files explicit names. There should not be two tables in the same application with the same migrate filename.

The DAL class also takes a "migrate" argument, which determines the default value of migrate for every call to `define_table`. For example,

```
>>> db = DAL('sqlite://storage.db', migrate=False)
```

will set the default value of migrate to False whenever `db.define_table` is called without a migrate argument.

Notice that web2py only migrates new columns, removed columns, and changes in column type (except in sqlite). web2py does not migrate changes in attributes such as the values of `default`, `unique`, `notnull`, and `ondelete`.

Migrations can be disabled for all tables with a single command:

```
db = DAL(..., migrate_enabled=False)
```

This is the recommended behavior when two apps share the same database. Only one of the two apps should perform migrations; in the other they should be disabled.

### Fixing broken migrations¶

There are two common migration problems, and there are ways to recover from them.

One problem is specific to SQLite. SQLite does not enforce column types and it cannot drop columns. This means that if you have a column of type string and you remove it, it is not really removed. If you add the column again with a different type (for example datetime), you end up with a datetime column that contains illegal strings (i.e., data that is garbage for that field). web2py does not raise an error in this case, because it does not know what the database contains, but it will fail when it tries to retrieve the records.

If web2py raises an error in the gluon.sql.parse function when selecting records, this is the problem: corrupted data in a column because of the case described above.

The solution consists in updating all records of the table, setting the values of the column in question to None.

The other problem is more general, but typical of MySQL. MySQL does not allow more than one ALTER TABLE within a transaction. This means that web2py must break complex transactions into smaller ones (one ALTER TABLE at a time) and commit them one piece at a time. It is therefore possible that part of a complex transaction gets committed while another part fails, leaving web2py in an inconsistent state. Why would part of a transaction fail?
Because, for example, it involves altering a table and converting a text column into a datetime one; web2py tries to convert the data, but the data cannot be converted. What happens in web2py? There is a conflict, because web2py cannot determine the structure of the table as it is actually stored in the database.

The solution consists in enabling fake migrations:

```
db.define_table(...., migrate=True, fake_migrate=True)
```

This will rebuild web2py's metadata about the table according to the table definition. Try multiple table definitions to see which one works (the one before the failed migration and the one after). Once successful, remove the `fake_migrate=True` attribute.

Before attempting to fix migration problems, it is prudent to make a copy of the "applications/tuapp/databases/*.table" files.

Migration problems can also be fixed for all tables at once:

```
db = DAL(..., fake_migrate_all=True)
```

This also fails if the model describes tables that do not exist in the database, but it can help as a partial fix.

`insert`¶

Given a table, you can insert records:

```
>>> db.persona.insert(nombre="Alejandro")
1
>>> db.persona.insert(nombre="Roberto")
2
```

Insert returns the unique "id" value of each inserted record.

You can truncate the table, i.e., delete all its records and reset the id counter:

```
>>> db.persona.truncate()
```

Now, if you insert a record again, the counter starts again at 1 (this is back-end specific and does not apply to Google NoSQL):

```
>>> db.persona.insert(nombre="Alejandro")
1
```

Notice that you can pass parameters to `truncate`; for example, you can tell SQLITE to restart the id counter:

```
db.persona.truncate('RESTART IDENTITY CASCADE')
```

The argument is raw SQL and therefore engine specific.

web2py also provides a bulk_insert method for multiple inserts:

```
>>> db.persona.bulk_insert([{'nombre':'Alejandro'}, {'nombre':'Juan'}, {'nombre':'Timoteo'}])
[3,4,5]
```

It takes a list of dictionaries of fields to be inserted and performs multiple inserts in a single command. It returns the IDs of the inserted records. On the supported relational databases there is no benefit in using this function rather than looping over individual inserts, but on Google App Engine NoSQL there is a major speed advantage.

`commit` and `rollback`¶

No create, drop, truncate, or modify operation is actually applied until the commit command is issued:

`>>> db.commit()`

To check this, let's insert a new record (a short sketch follows this passage) and then roll back, i.e., ignore all operations since the last commit:

`>>> db.rollback()`

If we now insert again, the counter will again be set to 2, since the previous inserts have been rolled back.
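Here is a minimal sketch of that sequence, reusing the `db.persona` table from the previous examples (the id values assume a freshly truncated table):

```python
>>> db.persona.insert(nombre="Alejandro")
1
>>> db.commit()                        # the first insert is now permanent
>>> db.persona.insert(nombre="Roberto")
2
>>> db.rollback()                      # discard everything since the last commit
>>> db.persona.insert(nombre="Carlos")
2                                      # the counter is back at 2
```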
The code in models, views, and controllers is enclosed in web2py code that looks like this:

```
try:
    execute the model, the controller function, and the view
except:
    rollback all connections
    log the traceback
    send a ticket to the visitor
else:
    commit all connections
    save cookies, session, and return the page
```

There is no need to ever call `commit` or `rollback` explicitly in web2py, unless you need more granular control.

### Raw SQL¶

# Timing queries¶

All queries are automatically timed by web2py. The variable `db._timings` is a list of tuples. Each tuple contains the raw SQL query as it was passed to the database driver and the time it took to execute, in seconds. This variable can be displayed in views using the toolbar: `{{=response.toolbar()}}`.

`executesql`¶

The DAL allows you to explicitly issue SQL statements:

```
>>> print db.executesql('SELECT * FROM persona;')
[(1, u'Massimo'), (2, u'Massimo')]
```

In this case, the return values are not parsed or transformed by the DAL, and the format depends on the specific database driver. This usage with selects is normally not needed, but it is more common with indexes.

`executesql` takes four optional arguments: `placeholders`, `as_dict`, `fields`, and `colnames`.

`placeholders` is an optional sequence of values to be substituted in, or, if supported by the database driver, a dictionary with keys matching named placeholders in your SQL.

If `as_dict` is set to True, the results cursor returned by the driver will be converted to a sequence of dictionaries keyed by the database field names. Results returned with `as_dict=True` are the same as those returned when applying .as_list() to a normal select:

```
[{campo1: valor1, campo2: valor2}, {campo1: valor1b, campo2: valor2b}]
```

The `fields` argument is a list of DAL Field objects that match the fields returned from the database. The Field objects should be part of one or more Table objects defined on the DAL object. The `fields` list can include one or more DAL Table objects in addition to or instead of Field objects, or it can be just a single table (not in a list). In the latter case, the Field objects will be extracted from the table(s).

Alternatively, you can specify both `fields` and the associated `colnames`. In this case, `fields` can also include DAL Expression objects in addition to Field objects. For Field objects in "fields", the associated `colnames` can take arbitrary names.

Notice that the DAL Table objects referred to by `fields` or `colnames` can be dummy tables and do not have to represent real tables in the database. Also, the `fields` and `colnames` must be in the same order as the fields in the results cursor returned from the database.

`_lastsql`¶

Whether SQL was executed manually using executesql or was generated by the DAL, you can always find the SQL code in `db._lastsql`. This is useful for debugging purposes:

```
>>> registros = db().select(db.persona.ALL)
>>> print db._lastsql
SELECT persona.id, persona.nombre FROM persona;
```

web2py never generates queries using the "*" operator.
Field selections are always explicit.

`drop`¶

Finally, you can drop a table, and all its stored data is deleted:

```
>>> db.persona.drop()
```

### Indexes¶

Currently the DAL API does not provide a command to create indexes on tables, but this can be done using the `executesql` command. This is because the existence of indexes can make migrations complex, and it is better to deal with them explicitly. Indexes may be needed for those fields that are used in recurrent queries.

Here is an example of how to create an index using SQL in SQLite:

```
>>> db = DAL('sqlite://storage.db')
>>> db.define_table('persona', Field('nombre'))
>>> db.executesql('CREATE INDEX IF NOT EXISTS miidx ON persona (nombre);')
```

Other database dialects have very similar syntaxes but may not support the "IF NOT EXISTS" directive.

### Legacy databases and keyed tables¶

web2py can connect to legacy databases under some conditions. The easiest way is when these conditions are met:

* Each table must have a unique auto-increment integer field called "id".
* Records must be referenced exclusively using the "id" field.

When accessing an existing table, i.e., a table not created by web2py in the current application, always set `migrate=False`.

If the legacy table has an auto-increment integer field but it is not called "id", web2py can still access it, but the table definition must declare the auto-increment field explicitly as `Field('...', 'id')`, where ... is the name of the auto-increment integer field.

Finally, if the legacy table uses a primary key that is not an auto-increment id field, it is possible to use a keyed table, for example:

```
db.define_table('cuenta',
    Field('numero', 'integer'),
    Field('tipo'),
    Field('descripcion'),
    primarykey=['numero', 'tipo'],
    migrate=False)
```

* `primarykey` is a list of the field names that make up the primary key.
* All primary-key fields have `NOT NULL` set even if it is not specified.
* Keyed tables can only reference other keyed tables.
* Referencing fields must use the ``` reference nombredetabla.nombredecampo ``` format.
* The `update_record` function is not available for Rows of keyed tables.

Currently keyed tables are only supported for DB2, MS-SQL, Ingres, and Informix, but support for other engines will be added.

At the time of writing, we cannot guarantee that the `primarykey` attribute works with every existing legacy table and every supported database back-end. For simplicity, we recommend, if possible, creating a database view that has an auto-increment id field.

### Distributed transactions¶

At the time of writing, this feature is only supported by PostgreSQL, MySQL, and Firebird, since these engines expose an API for two-phase commits.
Assuming you have two (or more) connections to distinct PostgreSQL databases, for example:

```
db_a = DAL('postgres://...')
db_b = DAL('postgres://...')
```

In your models or controllers, you can commit them simultaneously with:

```
DAL.distributed_transaction_commit(db_a, db_b)
```

On failure, this function rolls back and raises an `Exception`.

In controllers, when an action returns, if you have two distinct connections and you do not call this function, web2py commits them separately. This means there is a possibility that one of the commits succeeds and the other fails. The distributed transaction prevents this from happening.

### More on uploads¶

Consider the following model:

```
>>> db.define_table('miarchivo',
       Field('imagen', 'upload', default='ruta/'))
```

In the case of an 'upload' field, the default value can optionally be a path (an absolute path or one relative to the current app folder), and the default image will be set to a copy of the file at that path. A new copy is made for every new record that does not specify an image.

Normally an insert is handled automatically via a SQLFORM or a crud form (which is a SQLFORM), but occasionally you already have the file on the filesystem and want to upload it programmatically. This can be done as follows:

```
>>> stream = open(nombredearchivo, 'rb')
>>> db.miarchivo.insert(imagen=db.miarchivo.imagen.store(stream, nombredearchivo))
```

It is also possible to insert a file in a simpler way and have the insert method call store automatically:

```
>>> stream = open(nombredearchivo, 'rb')
>>> db.miarchivo.insert(imagen=stream)
```

In this case the filename is obtained from the stream object, if available.

The `store` method of the upload field object takes a file stream and a filename. It uses the filename to determine the extension (type) of the file, creates a new temporary name for the file (according to the web2py upload mechanism), and loads the file contents into this new temporary file (under the uploads folder unless specified otherwise). It returns the new temporary filename, which is then stored in the `imagen` field of the `db.miarchivo` table.
Note that if the file is to be stored in an associated blob field rather than on the filesystem, the `store()` method will not insert the file into the blob field (because `store()` is called before the insert), so the file must be explicitly inserted into the blob field:

```
>>> db.define_table('miarchivo',
       Field('imagen', 'upload', uploadfield='archivo'),
       Field('archivo', 'blob'))
>>> stream = open(nombredearchivo, 'rb')
>>> db.miarchivo.insert(imagen=db.miarchivo.imagen.store(stream, nombredearchivo),
       archivo=stream.read())
```

`.retrieve` is the opposite of `.store`:

```
>>> registro = db(db.miarchivo).select().first()
>>> (nombredearchivo, stream) = db.miarchivo.imagen.retrieve(registro.imagen)
>>> import shutil
>>> shutil.copyfileobj(stream, open(nombredearchivo, 'wb'))
```

`Query`, `Set`, `Rows`¶

Let's consider again the table defined (and dropped) previously and insert three records:

```
>>> db.define_table('persona', Field('nombre'))
>>> db.persona.insert(nombre="Alejandro")
1
>>> db.persona.insert(nombre="Roberto")
2
>>> db.persona.insert(nombre="Carlos")
3
```

You can store the table in a variable. For example, with the variable `persona`, you can do:

```
>>> persona = db.persona
```

You can also store a field in a variable such as `nombre`. For example, you can also do:

```
>>> nombre = persona.nombre
```

You can even build a query (using operators such as ==, !=, <, >, <=, >=, like, belongs) and store the query in a variable `q`, as in:

```
>>> q = nombre=='Alejandro'
```

When you call `db` with a query, you define a set of records. You can store it in a variable `s` and write:

`>>> s = db(q)`

Notice that no database query has been issued so far. DAL + Query simply define a set of records in this database that match the query. web2py determines from the query which table (or tables) are involved and, in fact, there is no need to specify them.

`select`¶

Given a Set, `s`, you can fetch its records with the `select` command:

```
>>> registros = s.select()
```

It returns an iterable object of class `pydal.objects.Rows` whose elements are Row objects. `pydal.objects.Row` objects act like dictionaries, but their elements can also be accessed as attributes, like `gluon.storage.Storage`. The difference between Row and Storage is that the attributes of a Row are read-only.

The Rows object allows looping over the result of the select and fetching the values of the fields of each record:

```
>>> for registro in registros:
        print registro.id, registro.nombre
1 Alejandro
```

You can do all the steps in one statement:

```
>>> for registro in db(db.persona.nombre=='Alejandro').select():
        print registro.nombre
Alejandro
```

The select command can take arguments. All positional arguments are interpreted as the names of the fields that you want to fetch. For example, you can explicitly fetch the "id" and "nombre" fields with `db().select(db.persona.id, db.persona.nombre)`.

The table attribute ALL allows you to specify all fields:

```
>>> for registro in db().select(db.persona.ALL):
        print registro.nombre
Alejandro
Roberto
Carlos
```

Notice that no query string is passed to db. web2py understands that if you want all fields of the table persona without additional information, then you want all records of the table persona.
An equivalent alternative syntax is the following:

```
>>> for registro in db(db.persona.id > 0).select():
        print registro.nombre
Alejandro
Roberto
Carlos
```

and web2py understands that if you ask for all records of the table persona (id > 0) without additional information, then you want all the fields of the table persona.

Given a record

```
registro = registros[0]
```

you can extract its values using multiple equivalent expressions:

```
>>> registro.nombre
Alejandro
>>> registro['nombre']
Alejandro
>>> registro('persona.nombre')
Alejandro
```

The last syntax is particularly handy when selecting expressions instead of columns. We will discuss this further later.

You can also do:

```
registros.compact = False
```

to disable the `registro[i].nombre` notation and enable, instead, the less compact notation:

```
registro[i].persona.nombre
```

This is unusual and rarely needed.

# Shortcuts¶

The DAL supports various code-simplifying shortcuts. In particular:

```
miregistro = db.mitabla[id]
```

returns the record with the given `id`, if it exists. If the `id` does not exist, it returns `None`. The statement is equivalent to

```
miregistro = db(db.mitabla.id==id).select().first()
```

You can delete records by id:

`del db.mitabla[id]`

and this is equivalent to

```
db(db.mitabla.id==id).delete()
```

and deletes the record with the given `id`, if it exists.

You can insert records:

```
db.mitabla[0] = dict(micampo='unvalor')
```

It is equivalent to

```
db.mitabla.insert(micampo='unvalor')
```

and it creates a new record with the field values specified by the dictionary on the right-hand side.

You can update records:

```
db.mitabla[id] = dict(micampo='unvalor')
```

which is equivalent to

```
db(db.mitabla.id==id).update(micampo='unvalor')
```

and it updates an existing record with the field values specified by the dictionary on the right-hand side.

# Fetching a `Row`¶

Yet another convenient syntax is the following:

```
registro = db.mitabla(id)
registro = db.mitabla(db.mitabla.id==id)
registro = db.mitabla(id, micampo='unvalor')
```

While apparently similar to `db.mitabla[id]`, this syntax is more flexible and safer. First of all, it checks whether `id` is an int (or `str(id)` is an int) and returns `None` if not (it never raises an exception). It also allows specifying multiple conditions that the record must satisfy. If they are not met, it also returns `None`.

Recursive `select`s¶

Consider the previous table persona and a new table "cosa" referencing a "persona":

```
>>> db.define_table('cosa',
        Field('nombre'),
        Field('id_propietario', 'reference persona'))
```

and a simple select from this table:

```
>>> cosas = db(db.cosa).select()
```

which is equivalent to

```
>>> cosas = db(db.cosa._id>0).select()
```

where `._id` is a reference to the primary key of the table. Normally `db.cosa._id` is the same as `db.cosa.id`, and we will assume so in most of this book.

For each Row of cosas, it is possible to fetch not just fields from the selected table (cosa) but also from linked tables (recursively):

```
>>> for cosa in cosas:
        print cosa.nombre, cosa.id_propietario.nombre
```

Here ``` cosa.id_propietario.nombre ``` requires one database select for each cosa in cosas, and it is therefore inefficient. We suggest using joins whenever possible instead of recursive selects; nevertheless, this is convenient and practical when accessing individual records.
You can also do it the other way around, i.e., select the cosas referenced by a persona:

```
persona = db.persona(id)
for cosa in persona.cosa.select(orderby=db.cosa.nombre):
    print persona.nombre, 'es dueño de', cosa.nombre
```

In this last expression `persona.cosa` is a shortcut for

```
db(db.cosa.id_propietario==persona.id)
```

i.e., the Set of `cosa`s referencing the current persona. This syntax breaks down if the referencing table has multiple references to the referenced table. In that case, one needs to be more explicit and use a full query.

# Serializing `Rows` in views¶

Given the following action containing a query:

```
def index():
    return dict(registros=db(consulta).select())
```

The result of a select can be displayed in a view with the following syntax:

```
{{extend 'layout.html'}}
<h1>Registros</h1>
{{=registros}}
```

which is equivalent to:

```
{{extend 'layout.html'}}
<h1>Registros</h1>
{{=SQLTABLE(registros)}}
```

`SQLTABLE` converts the records into an HTML table with a header containing the column names and one table row per record. The rows are marked alternately with class "even" and "odd". Transparently to the developer, the Rows object is first converted into a SQLTABLE object (not to be confused with the class Table) and then serialized. The values extracted from the database are also formatted by the validators associated with each field and then escaped.

It is nevertheless possible, and sometimes convenient, to call SQLTABLE explicitly.

The SQLTABLE constructor takes the following optional arguments:

* `linkto` is the URL or an action to be used to link reference fields (defaults to None).
* `upload` is the URL or the download action to allow downloading of uploaded files (defaults to None).
* `headers` is a dictionary mapping field names to the labels to be used as headers (defaults to `{}`). It can also be an instruction; currently ``` headers='nombredecampo:capitalize' ``` is supported.
* `truncate` is the number of characters for truncating long values in the table (defaults to 16).
* `columns` is the list of fieldnames to be shown as columns (in nombredetabla.nombredecampo format). Those not listed are not displayed (defaults to all fields).
* `**attributes` are generic HTML helper attributes to be passed to the outermost TABLE object.

Here is an example:

```
{{extend 'layout.html'}}
<h1>Registros</h1>
{{=SQLTABLE(registros,
     headers='nombredecampo:capitalize',
     truncate=100,
     upload=URL('download'))
}}
```

`SQLTABLE` is very useful, but sometimes you may need something more advanced. `SQLFORM.grid` is an extension of SQLTABLE that creates a table with search tools and pagination, as well as the ability to view details and create, edit, and delete records. `SQLFORM.smartgrid` is a further level of abstraction that provides all of the above but also creates buttons to access referencing records.
Here is an example of usage of `SQLFORM.grid`:

```
def index():
    return dict(grid=SQLFORM.grid(consulta))
```

and the corresponding view:

```
{{extend 'layout.html'}}
{{=grid}}
```

`SQLFORM.grid` and `SQLFORM.smartgrid` should be preferred to `SQLTABLE` because they are more powerful, although higher level and therefore more constrained. These helpers are discussed in more detail in Chapter 7.

`orderby`, `groupby`, `limitby`, `distinct`, `having`¶

The `select` command takes five optional arguments: orderby, groupby, limitby, left, and cache. Here we discuss the first three.

You can fetch the records sorted by name, for example with `db().select(db.persona.ALL, orderby=db.persona.nombre)`.

You can fetch the records sorted by name in reverse order (notice the use of the tilde): `orderby=~db.persona.nombre`.

You can have the fetched records appear in random order: `orderby='<random>'`.

The use of `orderby='<random>'` is not supported on Google NoSQL. However, in this situation, and likewise in many others where the built-in features are not enough, other tools can be imported:

```
import random
rows = db(...).select().sort(lambda row: random.random())
```

You can sort the records according to multiple fields by concatenating them with a "|": `orderby=db.persona.nombre|db.persona.id`.

Using `groupby` together with `orderby`, you can group records with the same value for the given field (this is back-end specific and is not supported on Google NoSQL).

You can use `having` in conjunction with `groupby` to group conditionally (only those matching the condition are grouped):

```
>>> print db(consulta1).select(db.persona.ALL, groupby=db.persona.nombre, having=consulta2)
```

Notice that consulta1 filters the records to be displayed, while consulta2 filters the records to be grouped.

With the argument `distinct=True`, you can specify that you only want to select distinct (non-repeated) records. It has the same effect as grouping by all the selected fields, except that it does not require sorting. When using distinct it is important not to select ALL fields, and in particular not the "id" field; otherwise every record would always be distinct. Here is an example: `db().select(db.persona.nombre, distinct=True)`.

Notice that `distinct` can also be an expression, for example: `distinct=db.persona.nombre`.

With limitby=(min, max), you can select a subset of records from offset=min up to, but not including, offset=max (in this case, the first two, starting at zero):

```
>>> for registro in db().select(db.persona.ALL, limitby=(0, 2)):
        print registro.nombre
Alejandro
Roberto
```

# Logical operators¶

Queries can be combined with the binary AND operator "`&`" and the binary OR operator "`|`". You can negate a query (or sub-query) with the binary "`!=`" comparison, or by explicit negation with the unary "`~`" operator. (A sketch with all four forms follows this passage.)

Because of Python restrictions on overloading the "`and`" and "`or`" operators, these cannot be used to form queries; the binary operators "`&`" and "`|`" must be used instead. Note that these operators (unlike "`and`" and "`or`") have higher precedence than comparison operators, so the extra parentheses in such expressions are mandatory. Similarly, the unary operator "`~`" has higher precedence than comparison operators, so `~`-negated comparisons must also be parenthesized.
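A minimal sketch of those four forms, reusing the `db.persona` table from earlier (the specific values are illustrative):

```python
# AND: name is Alejandro and id is greater than 3
registros = db((db.persona.nombre=='Alejandro') & (db.persona.id > 3)).select()

# OR: name is Alejandro or id is greater than 3
registros = db((db.persona.nombre=='Alejandro') | (db.persona.id > 3)).select()

# negation via the != comparison
registros = db((db.persona.nombre!='Alejandro') | (db.persona.id > 3)).select()

# explicit negation with the unary ~ operator
registros = db(~(db.persona.nombre=='Alejandro') | (db.persona.id > 3)).select()
```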
It is also possible to build queries using in-place logical operators:

```
>>> consulta = db.persona.nombre!='Alejandro'
>>> consulta &= db.persona.id>3
>>> consulta |= db.persona.nombre=='Juan'
```

`count`, `isempty`, `delete`, `update`¶

You can count the records in a Set:

```
>>> print db(db.persona.id > 0).count()
3
```

Notice that `count` takes an optional `distinct` argument, which defaults to False, and works much like the same argument of `select`. `count` also has a `cache` argument that works much like the equivalent argument of the `select` method.

Sometimes you may need to check whether a table is empty. A more efficient way than counting is using the `isempty` method:

```
>>> print db(db.persona.id > 0).isempty()
False
```

or equivalently:

```
>>> print db(db.persona).isempty()
False
```

You can delete the records in a Set:

```
>>> db(db.persona.id > 3).delete()
```

And you can update all records in a Set by passing named arguments corresponding to the fields to be updated:

```
>>> db(db.persona.id > 3).update(nombre='Ken')
```

# Expressions¶

The value assigned in an update statement can be an expression. For example, consider this model:

```
>>> db.define_table('persona',
        Field('nombre'),
        Field('visitas', 'integer', default=0))
>>> db(db.persona.nombre == 'Máximo').update(
        visitas = db.persona.visitas + 1)
```

The values used in queries can also be expressions:

```
>>> db.define_table('persona',
        Field('nombre'),
        Field('visitas', 'integer', default=0),
        Field('clic', 'integer', default=0))
>>> db(db.persona.visitas == db.persona.clic + 1).delete()
```

`case`¶

An expression can contain a case clause, for example:

```
>>> db.define_table('persona', Field('nombre'))
>>> condicion = db.persona.nombre.startswith('M')
>>> si_o_no = condicion.case('Yes', 'No')
>>> for registro in db().select(db.persona.nombre, si_o_no):
...     print registro.persona.nombre, registro(si_o_no)
Máximo Yes
Juan No
```

`update_record`¶

Another feature of web2py is the ability to update a single record that is already in memory using `update_record`:

```
>>> registro = db(db.persona.id==2).select().first()
>>> registro.update_record(nombre='Curt')
```

`update_record` should not be confused with

```
>>> registro.update(nombre='Curt')
```

because, for a single record, the method `update` updates the Row object of the record but not the record itself in the database, as `update_record` does.

It is also possible to change the attributes of a Row (one at a time) and then call `update_record()` without arguments to save the changes:

```
>>> registro = db(db.persona.id > 2).select().first()
>>> registro.nombre = 'Curt'
>>> registro.update_record() # saves the changes made above
```

The `update_record` method is available only if the `id` field of the table is included in the select and `cacheable` is not enabled.

# Inserting and updating with dictionaries¶

A common issue is the need to insert or update records in a table where the table name, the field to update, and the field value are all stored in variables, for example: `nombredetabla`, `nombredecampo`, and `valor`.
The insert can be done using the following syntax:

```
db[nombredetabla].insert(**{nombredecampo:valor})
```

The update of a record with a given id can be done with:

```
db(db[nombredetabla]._id==id).update(**{nombredecampo:valor})
```

Notice that we used `tabla._id` instead of `tabla.id`. In this way the query works even for tables whose field of type "id" has a name other than "id".

`first` and `last`¶

Given a Rows object containing records:

```
>>> registros = db(consulta).select()
>>> primero = registros.first()
>>> ultimo = registros.last()
```

are equivalent to

```
>>> primero = registros[0] if len(registros)>0 else None
>>> ultimo = registros[-1] if len(registros)>0 else None
```

`as_dict` and `as_list`¶

A Row object can be serialized into a regular dictionary using the `as_dict()` method, and a Rows object can be serialized into a list of dictionaries using the `as_list()` method. Here are some examples:

```
>>> registros = db(consulta).select()
>>> lista = registros.as_list()
>>> primer_diccionario = registros.first().as_dict()
```

These methods are convenient for passing Rows objects to generic views and for storing Rows objects in sessions (since Rows objects themselves cannot be serialized, because they contain a reference to an open database connection):

```
>>> registros = db(consulta).select()
>>> sesion.registros = registros # forbidden!
>>> sesion.registros = registros.as_list() # allowed!
```

# Combining rows¶

Rows objects can be combined at the Python level. Here we assume:

```
>>> print registros1
persona.nombre
<NAME>
>>> print registros2
persona.nombre
<NAME>
```

You can do a union of the records in two sets of rows: `registros3 = registros1 & registros2`.

You can also do a union removing duplicates: `registros3 = registros1 | registros2`.

`find`, `exclude`, `sort`¶

Sometimes you need to perform two selects, and one contains a subset of the other. In this case it is pointless to access the database again. The `find`, `exclude`, and `sort` methods let you manipulate a Rows object and generate another one without accessing the database. More specifically:

* `find` returns a new Rows filtered by a condition, without modifying the original.
* `exclude` returns a new Rows filtered by a condition, and removes the matching rows from the original Rows.
* `sort` returns a new Rows sorted by a condition, leaving the original unchanged.

All these methods take a single argument, a function that acts on each individual row.

Here is an example of usage:

```
>>> db.define_table('persona', Field('nombre'))
>>> db.persona.insert(nombre='Juan')
>>> db.persona.insert(nombre='Max')
>>> db.persona.insert(nombre='Alejandro')
>>> registros = db(db.persona).select()
>>> for registro in registros.find(lambda registro: registro.nombre[0]=='M'):
        print registro.nombre
Max
>>> print len(registros)
3
>>> for registro in registros.exclude(lambda registro: registro.nombre[0]=='M'):
        print registro.nombre
Max
>>> print len(registros)
2
>>> for registro in registros.sort(lambda registro: registro.nombre):
        print registro.nombre
Alejandro
Juan
```

These methods can also be chained:

```
>>> registros = db(db.persona).select()
>>> registros = registros.find(
        lambda registro: 'x' in registro.nombre).sort(
        lambda registro: registro.nombre)
>>> for registro in registros:
        print registro.nombre
Max
```

Sort takes an optional argument `reverse=True`, with obvious meaning.
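For instance, here is a one-line sketch of that option, reusing the `registros` from the example above:

```python
# sort by name in descending order, without touching the database
for registro in registros.sort(lambda registro: registro.nombre, reverse=True):
    print registro.nombre
```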
The `find` method has an optional limitby argument with the same syntax and functionality as the Set `select` method.

### Other methods¶

`update_or_insert`¶

Sometimes you need to perform an insert only if there is no record with the same values as those being inserted. This can be done with:

```
db.define_table('persona',
    Field('nombre'),
    Field('lugardenacimiento'))
db.persona.update_or_insert(nombre='Juan', lugardenacimiento='Chicago')
```

The record will be inserted only if there is no other user called Juan born in Chicago.

You can specify which values to use as a key to determine whether the record exists. For example:

```
db.persona.update_or_insert(db.persona.nombre=='Juan',
     nombre='Juan',
     lugardenacimiento='Chicago')
```

and if there is such a Juan, his lugardenacimiento will be updated; otherwise a new record will be created.

`validate_and_insert`, `validate_and_update`¶

The function

```
resultado = db.mitabla.validate_and_insert(campo='valor')
```

works much like

```
id = db.mitabla.insert(campo='valor')
```

except that it calls the validators for the fields before performing the insert, and bails out if the validation does not pass. If validation does not pass, the errors can be found in `resultado.errors`. If it succeeds, the id of the new record is in `resultado.id`. Keep in mind that validation is normally performed by the form-processing logic, so this function is rarely needed in ordinary situations.

Similarly,

```
resultado = db(consulta).validate_and_update(campo='valor')
```

works much like

```
numero = db(consulta).update(campo='valor')
```

except that it calls the validators for the fields before performing the update. Notice that it only works if the query involves a single table. The number of updated records can be found in `resultado.updated`, and errors will be in `resultado.errors`.

`smart_query` (experimental)¶

There are times when you need to parse a query using natural language, such as:

```
nombre contains m and edad greater than 18
```

The DAL provides a method to parse this type of query:

```
busqueda = 'nombre contains m and edad greater than 18'
registros = db.smart_query([db.persona], busqueda).select()
```

The first argument must be a list of tables or fields that should be allowed in the search. It raises a `RuntimeError` if the search string is invalid. This functionality can be used to build RESTful interfaces (see Chapter 10), and it is used internally by `SQLFORM.grid` and `SQLFORM.smartgrid`.

In the smartquery search string, a field can be identified by fieldname alone or by tablename.fieldname notation. Strings should be delimited by double quotes if they contain spaces.

### Computed fields¶

DAL fields may have a `compute` attribute. This must be a function (or lambda) that takes a Row object and returns a new value for the field. When a record is inserted or updated, if a value for the field is not provided, web2py tries to compute it from the other field values using the `compute` function.
Here is an example:

```
>>> db.define_table('item',
        Field('precio_unitario', 'double'),
        Field('cantidad', 'integer'),
        Field('precio_total',
            compute=lambda r: r['precio_unitario']*r['cantidad']))
>>> r = db.item.insert(precio_unitario=1.99, cantidad=5)
>>> print r.precio_total
9.95
```

Notice that the computed value is stored in the database and is not recomputed on retrieval, unlike virtual fields, which are described next. Two typical applications of computed fields are:

* in wiki applications, to store the processed input wiki text as HTML, to avoid re-processing it on every request;
* when you want to compute normalized values for a field, to optimize searching.

### Virtual fields¶

Virtual fields are also computed fields (as in the previous section), but they differ from those in that they are virtual in the sense that they are not stored in the database and are computed every time records are extracted by a query. They can be useful to simplify the user's code without using additional storage, but they cannot be used in searches.

# New-style virtual fields¶

web2py provides a new and easier way to define virtual fields, including lazy virtual fields. This section is marked experimental because the API of this feature may still change slightly from what is described here.

Here we will consider the same example as in the previous section. In particular, we assume the following model:

```
>>> db.define_table('item',
        Field('precio_unitario', 'double'),
        Field('cantidad', 'integer'))
```

A virtual field `precio_total` can be defined like this:

```
>>> db.item.precio_total = Field.Virtual(
        lambda registro: registro.item.precio_unitario*registro.item.cantidad)
```

i.e., simply define a new field `precio_total` of type `Field.Virtual`. The only argument of the constructor is a function that takes a record and returns the computed value.

A virtual field defined as above is automatically computed for every record that results from a select, e.g. `for registro in db(db.item).select(): print registro.precio_total`.

It is also possible to define method fields that are computed only on demand, when called. For example:

```
>>> db.item.total_descuento = Field.Method(lambda registro, descuento=0.0: \
        registro.item.precio_unitario*registro.item.cantidad*(1.0-descuento/100))
```

In this case ``` registro.total_descuento ``` is not a value but a function. The function takes the same arguments as the function passed to the `Method` constructor, except that `registro` is implicit (think of it as the `self` of Row objects).
The lazy field in the example above allows one to compute the total price for each `item` with `for registro in db(db.item).select(): print registro.total_descuento()`, and it also allows passing an optional `descuento` percentage (here 15%):

```
>>> for registro in db(db.item).select(): print registro.total_descuento(15)
```

Virtual and Method fields can also be defined in place when a table is defined:

```
>>> db.define_table('item',
        Field('precio_unitario','double'),
        Field('cantidad','integer'),
        Field.Virtual('precio_total', lambda registro: ...),
        Field.Method('total_descuento', lambda registro, descuento=0.0: ...))
```

Mind that virtual fields do not have the same attributes as the other fields (default, readable, requires, etc.), they are not listed in the set stored in `db.tabla.fields`, and they are not displayed by default in tables (TABLE) and grids (SQLFORM.grid, SQLFORM.smartgrid).

# Old-style virtual fields¶

In order to define one or more virtual fields, you can also define a container class, instantiate it, and link it to a table or to a select. For example, consider the following table:

```
>>> db.define_table('item',
        Field('precio_unitario','double'),
        Field('cantidad','integer'))
```

One can define a virtual field `precio_total` like this:

```
>>> class MisCamposVirtuales(object):
        def precio_total(self):
            return self.item.precio_unitario*self.item.cantidad
>>> db.item.virtualfields.append(MisCamposVirtuales())
```

Notice that each method of the class that takes a single argument (self) is a new virtual field. `self` refers to each Row object of the select. Field values are referred to by their full path, as in `self.item.precio_unitario`. The table is linked to the virtual fields by appending an instance of the class to the table's `virtualfields` attribute.

Virtual fields can also access recursive fields, as in the following example:

```
>>> db.define_table('item',
        Field('precio_unitario','double'))
>>> db.define_table('item_compra',
        Field('item','reference item'),
        Field('cantidad','integer'))
>>> class MisCamposVirtuales(object):
        def precio_total(self):
            return self.item_compra.item.precio_unitario \
                * self.item_compra.cantidad
>>> db.item_compra.virtualfields.append(MisCamposVirtuales())
```

Notice the recursive field access in `self.item_compra.item.precio_unitario`, where `self` is each Row record looped over by the query.

They can also act on the result of a JOIN:

```
>>> db.define_table('item',
        Field('precio_unitario','double'))
>>> db.define_table('item_compra',
        Field('item','reference item'),
        Field('cantidad','integer'))
>>> registros = db(db.item_compra.item==db.item.id).select()
>>> class MisCamposVirtuales(object):
        def precio_total(self):
            return self.item.precio_unitario \
                * self.item_compra.cantidad
>>> registros.setvirtualfields(item_compra=MisCamposVirtuales())
>>> for registro in registros: print registro.item_compra.precio_total
```

Notice how in this case the syntax is different. The virtual field accesses both `self.item.precio_unitario` and `self.item_compra.cantidad`, which belong to the join select. The virtual field is attached to the Rows of the table by means of the `setvirtualfields` method of the Rows object. This method takes an arbitrary number of named arguments and can be used to set multiple virtual fields, defined in multiple classes, and attach them to multiple tables:
```
>>> class MisCamposVirtuales1(object):
        def precio_rebajado(self):
            return self.item.precio_unitario*0.90
>>> class MisCamposVirtuales2(object):
        def precio_total(self):
            return self.item.precio_unitario \
                * self.item_compra.cantidad
        def precio_total_rebajado(self):
            return self.item.precio_rebajado \
                * self.item_compra.cantidad
>>> registros.setvirtualfields(
        item=MisCamposVirtuales1(),
        item_compra=MisCamposVirtuales2())
>>> for registro in registros: print registro.item_compra.precio_total_rebajado
```

Virtual fields can be lazy; all you need to do is return a function and call that function to retrieve them:

```
>>> db.define_table('item',
        Field('precio_unitario','double'),
        Field('cantidad','integer'))
>>> class MisCamposVirtuales(object):
        def precio_total_perezoso(self):
            def perezoso(self=self):
                return self.item.precio_unitario \
                    * self.item.cantidad
            return perezoso
>>> db.item.virtualfields.append(MisCamposVirtuales())
>>> for item in db(db.item).select(): print item.precio_total_perezoso()
```

or, shorter, using a lambda:

```
>>> class MisCamposVirtuales(object):
        def precio_total_perezoso(self):
            return lambda self=self: self.item.precio_unitario \
                * self.item.cantidad
```

### One-to-many relation¶

To illustrate how to implement a one-to-many relation with the web2py DAL, we define a table "cosa" linked to a table "persona", modifying the previous examples as follows:

```
>>> db.define_table('persona',
        Field('nombre'),
        format='%(nombre)s')
>>> db.define_table('cosa',
        Field('nombre'),
        Field('id_propietario', 'reference persona'),
        format='%(nombre)s')
```

Table "cosa" has two fields, the name of the thing and the owner of the thing. The "id_propietario" field is a reference field. A reference type can be specified in two equivalent ways:

```
Field('id_propietario', 'reference persona')
Field('id_propietario', db.persona)
```

The latter is always converted to the former. They are equivalent except in the case of lazy tables, self references, and other circular references, where the former notation is the only one allowed.

When the field type is another table, it is expected that the field links to the other table by its id value. In fact, you can print the actual type value and get:

```
>>> print db.cosa.id_propietario.type
reference persona
```

Now insert three things, two owned by Alejandro and one by Roberto:

```
>>> db.cosa.insert(nombre='Bote', id_propietario=1)
1
>>> db.cosa.insert(nombre='Silla', id_propietario=1)
2
>>> db.cosa.insert(nombre='Zapatos', id_propietario=2)
3
```

You can select records as you would for any other table:

```
>>> for registro in db(db.cosa.id_propietario==1).select():
        print registro.nombre
Bote
Silla
```

Because a thing has a reference to a person, a person can have many things, so a record of table persona now acquires a new attribute cosa, which is a Set that defines the things of that person. This makes it possible to loop over the list of persons and fetch their things easily:

```
>>> for persona in db().select(db.persona.ALL):
        print persona.nombre
        for cosa in persona.cosa.select():
            print '    ', cosa.nombre
Alejandro
     Bote
     Silla
Roberto
     Zapatos
Carlos
```
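The `persona.cosa` attribute used in the loop above is an ordinary Set, so everything you can do with a Set works here too. A minimal sketch of the equivalence (reusing the tables above; the printed Set representation is indicative):

```
>>> persona = db.persona(1)            # fetch the record with id 1
>>> print persona.cosa                 # a Set, not yet a Rows
<Set (cosa.id_propietario = 1)>
>>> registros = persona.cosa.select()  # same rows as db(db.cosa.id_propietario==1).select()
```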
# Inner join¶

Another way to achieve similar results is by using a join, specifically an INNER JOIN. web2py performs joins automatically and transparently when a query links two or more tables, as in the following example: `registros = db(db.persona.id==db.cosa.id_propietario).select()`.

Observe that web2py performed a join, so each resulting row now contains two records, one from each table, linked together. Because the two records may have fields with conflicting names, you need to specify the table when extracting a field value from a row. This means that while before you could do

`registro.nombre`

and it was obvious whether it was the name of a thing or of a person (depending on what was specified in the query), with the result of a join you must be more explicit and say:

```
registro.persona.nombre
```

or:

`registro.cosa.nombre`

There is an alternative syntax for INNER JOINs, using the `join` argument: `db(db.persona).select(join=db.cosa.on(db.persona.id==db.cosa.id_propietario))`. While the output is the same, the generated SQL in the two cases can be different. The latter syntax removes possible ambiguities when the same table is joined twice and aliased, as in the following example:

```
>>> db.define_table('cosa',
        Field('nombre'),
        Field('id_propietario1','reference persona'),
        Field('id_propietario2','reference persona'))
>>> registros = db(db.persona).select(
        join=[db.persona.with_alias('id_propietario1').on(db.persona.id==db.cosa.id_propietario1),
              db.persona.with_alias('id_propietario2').on(db.persona.id==db.cosa.id_propietario2)])
```

The value of `join` can be a list of `db.tabla.on(...)` to join.

# Left outer join¶

Notice that Carlos did not appear in the list above because he has no things. If you intend to select on persons (whether they have things or not) and their things (if they have any), then you need to perform a LEFT OUTER JOIN. This is done using the "left" argument of the select command. Here is an example:

```
>>> registros = db().select(
        db.persona.ALL, db.cosa.ALL,
        left=db.cosa.on(db.persona.id==db.cosa.id_propietario))
>>> for registro in registros:
        print registro.persona.nombre, 'tiene', registro.cosa.nombre
Alejandro tiene Bote
Alejandro tiene Silla
Roberto tiene Zapatos
Carlos tiene None
```

Here `left = db.cosa.on(...)` does the left join. The argument of `db.cosa.on` is the condition required for the join (the same one used above for the inner join). In the case of a left join, it is necessary to be explicit about which fields to select.

Multiple left joins can be combined by passing a list or a tuple of `db.mitabla.on(...)` to the `left` attribute.

# Grouping and counting¶

When doing joins, sometimes you want to group rows according to certain criteria and count them, for example, to count the number of things owned by every person. web2py allows this as well. First, you need a count operator. Second, you want to join the persona table with the cosa table by owner. Third, you want to select all rows (persona + cosa), group them by person, and count them while grouping:

```
>>> conteo = db.persona.id.count()
>>> for registro in db(db.persona.id==db.cosa.id_propietario).select(
        db.persona.nombre, conteo, groupby=db.persona.nombre):
        print registro.persona.nombre, registro[conteo]
Alejandro 2
Roberto 1
```

Notice the `count` operator (which is built in) is used as a field. The only issue here is in how to retrieve the information. Each row clearly contains a person and the count, but the count itself is not a field of a person, nor is it a table. So where does it go?
It goes into the storage object representing the record, with a key equal to the query expression itself. The `count` method of the Field object has an optional `distinct` argument. When set to `True`, it specifies that only distinct values of the field are to be counted.

### Many to many¶

In the previous examples, we allowed a thing to have one owner, but one person could have many things. What if the Bote was owned by both Alejandro and Curt? This requires a many-to-many relation, and it is realized via an intermediate table that links a person to a thing via an ownership relation.

```
>>> db.define_table('persona',
        Field('nombre'))
>>> db.define_table('cosa',
        Field('nombre'))
>>> db.define_table('pertenencia',
        Field('persona', 'reference persona'),
        Field('cosa', 'reference cosa'))
```

The previous ownership relations can now be rewritten as:

```
>>> db.pertenencia.insert(persona=1, cosa=1) # Alejandro owns Bote
>>> db.pertenencia.insert(persona=1, cosa=2) # Alejandro owns Silla
>>> db.pertenencia.insert(persona=2, cosa=3) # Roberto owns Zapatos
```

Now you can add the new relation that Curt co-owns the Bote:

```
>>> db.pertenencia.insert(persona=3, cosa=1) # Curt also owns Bote
```

Because you now have a three-way relation between tables, it may be convenient to define a new Set on which to perform operations:

```
>>> personas_y_cosas = db(
        (db.persona.id==db.pertenencia.persona) \
        & (db.cosa.id==db.pertenencia.cosa))
```

Now it is easy to select all persons and their things from the new Set:

```
>>> for registro in personas_y_cosas.select():
        print registro.persona.nombre, registro.cosa.nombre
Alejandro Bote
Alejandro Silla
Roberto Zapatos
Curt Bote
```

Similarly, you can search for all things owned by Alejandro:

```
>>> for registro in personas_y_cosas(db.persona.nombre=='Alejandro').select():
        print registro.cosa.nombre
Bote
Silla
```

and all owners of the Bote:

```
>>> for registro in personas_y_cosas(db.cosa.nombre=='Bote').select():
        print registro.persona.nombre
Alejandro
Curt
```

A lighter alternative to many-to-many relations is tagging, i.e. multiple selections. Multiple selections are discussed in the section on the `IS_IN_DB` validator. Tagging works even on database backends that do not support JOINs, such as the Google App Engine NoSQL.

`list:<type>` and `contains`¶

web2py provides the following special field types:

```
list:string
list:integer
list:reference <table>
```

They can contain lists of strings, of integers, and of references respectively. On Google App Engine NoSQL `list:string` is mapped into a `StringListProperty` object; the other two are mapped into `ListProperty(int)` objects. On relational databases they are mapped into text fields that contain the list of items separated by `|`. For example `[1, 2, 3]` is stored as `|1|2|3|`. For lists of strings the items are escaped so that any `|` in an item is replaced by `||`. Anyway, this is an internal representation and is transparent to the user.
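To make the storage format concrete, here is a minimal plain-Python illustration of the internal encoding just described (`codificar_lista` is a hypothetical helper for exposition only, not part of the DAL API):

```
def codificar_lista(items):
    # sketch of the internal "|" encoding: escape any "|" in an item
    # as "||", then join the items with "|" delimiters
    return '|' + '|'.join(str(i).replace('|', '||') for i in items) + '|'

print codificar_lista([1, 2, 3])     # prints |1|2|3|
print codificar_lista(['a|b', 'c'])  # prints |a||b|c|
```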
You can use `list:string`, for example, as follows:

```
>>> db.define_table('producto',
        Field('nombre'),
        Field('colores', 'list:string'))
>>> db.producto.colores.requires=IS_IN_SET(('rojo', 'azul', 'verde'))
>>> db.producto.insert(nombre='Auto de juguete', colores=['rojo','verde'])
>>> productos = db(db.producto.colores.contains('rojo')).select()
>>> for item in productos:
        print item.nombre, item.colores
Auto de juguete ['rojo', 'verde']
```

`list:integer` works in the same way, but the items must be integers. As always, the requirements are enforced at the level of forms, not at the level of `insert`.

For `list:<type>` fields the `contains(valor)` operator maps into a nontrivial query that checks for lists containing `valor`. The `contains` operator also works on regular `string` and `text` fields, where it maps into `LIKE '%valor%'`.

`list:reference` fields and the `contains(valor)` operator are particularly useful to denormalize many-to-many relations. Here is an example:

```
>>> db.define_table('etiqueta', Field('nombre'), format='%(nombre)s')
>>> db.define_table('producto',
        Field('nombre'),
        Field('etiquetas','list:reference etiqueta'))
>>> a = db.etiqueta.insert(nombre='rojo')
>>> b = db.etiqueta.insert(nombre='verde')
>>> c = db.etiqueta.insert(nombre='azul')
>>> db.producto.insert(nombre='Auto de juguete', etiquetas=[a, b, c])
>>> productos = db(db.producto.etiquetas.contains(b)).select()
>>> for item in productos:
        print item.nombre, item.etiquetas
Auto de juguete [1, 2, 3]
>>> for item in productos:
        print item.nombre, db.producto.etiquetas.represent(item.etiquetas)
Auto de juguete rojo, verde, azul
```

Notice that a `list:reference etiqueta` field gets a default constraint

```
requires = IS_IN_DB(db, 'etiqueta.id', db.etiqueta._format, multiple=True)
```

which produces a multiple-value `SELECT/OPTION` menu in forms. The field also gets a default `represent` attribute that renders the list of references as a comma-separated list of formatted references. This is used in read forms and in `SQLTABLE` tables.

While `list:reference` has a default validator and a default representation, `list:integer` and `list:string` do not. So these two need an `IS_IN_SET` or `IS_IN_DB` validator if you want to use them in forms.

### Other operators¶

web2py has other operators that provide an API to access equivalent SQL operators. Let us define another table, "log", to store security events, the time they were logged, and their severity level, where severity is an integer.

```
>>> db.define_table('log', Field('evento'),
                           Field('registrado', 'datetime'),
                           Field('severidad', 'integer'))
```

As in other examples, we insert a few events: a port scanner ('escáner de puertos'), a cross-site scripting injection ('secuencia de comandos en sitios cruzados'), and an unauthorized login ('acceso sin autenticación'). For the sake of the example, the events share the same timestamp but have different severities (1, 2 and 3 respectively).
```
>>> import datetime
>>> ahora = datetime.datetime.now()
>>> print db.log.insert(
        evento='escáner de puertos', registrado=ahora, severidad=1)
1
>>> print db.log.insert(
        evento='secuencia de comandos en sitios cruzados', registrado=ahora, severidad=2)
2
>>> print db.log.insert(
        evento='acceso sin autenticación', registrado=ahora, severidad=3)
3
```

`like`, `regexp`, `startswith`, `contains`, `upper`, `lower`¶

Field objects have a like operator that you can use to match strings:

```
>>> for registro in db(db.log.evento.like('escáner%')).select():
        print registro.evento
escáner de puertos
```

Here "escáner%" denotes a string starting with "escáner". The percent sign, "%", is a wild-card meaning "any sequence of characters".

The like operator is case-insensitive, but it can be made case-sensitive with

```
db.mitabla.micampo.like('valor', case_sensitive=True)
```

web2py also provides some shortcuts:

```
db.mitabla.micampo.startswith('valor')
db.mitabla.micampo.contains('valor')
```

which are respectively equivalent to

```
db.mitabla.micampo.like('valor%')
db.mitabla.micampo.like('%valor%')
```

Notice that `contains` has a special meaning for `list:<type>` fields, as discussed in a previous section.

The `contains` method also accepts a list of values and an optional boolean argument `all`, to search for records containing all the values in the list:

```
db.mitabla.micampo.contains(['valor1','valor2'], all=True)
```

or at least one of the values in the list:

```
db.mitabla.micampo.contains(['valor1','valor2'], all=False)
```

There is also a `regexp` method that works like the `like` method, but allows regular-expression syntax for the lookup expression. It is only supported by the PostgreSQL and SQLite databases.

The `upper` and `lower` methods allow you to convert the value of a field to upper or lower case, and they can be combined with the like operator:

```
>>> for registro in db(db.log.evento.upper().like('ESCÁNER%')).select():
        print registro.evento
escáner de puertos
```

`year`, `month`, `day`, `hour`, `minutes`, `seconds`¶

The date and datetime field types have day, month and year methods. The datetime and time field types have hour, minutes and seconds methods. Here is an example:

```
>>> for registro in db(db.log.registrado.year()==2013).select():
        print registro.evento
escáner de puertos
secuencia de comandos en sitios cruzados
acceso sin autenticación
```

`belongs`¶

The SQL IN operator is realized via the belongs method, which returns true when the field value belongs to the specified set (a list or tuple):

```
>>> for registro in db(db.log.severidad.belongs((1, 2))).select():
        print registro.evento
escáner de puertos
secuencia de comandos en sitios cruzados
```

The DAL also allows a nested select as an argument of the belongs operator. The only caveat is that the nested select has to be a `_select`, not a `select`, and exactly one field has to be selected explicitly, the one that defines the Set.
```
>>> dias_problematicos = db(db.log.severidad==3)._select(db.log.registrado)
>>> for registro in db(db.log.registrado.belongs(dias_problematicos)).select():
        print registro.evento
escáner de puertos
secuencia de comandos en sitios cruzados
acceso sin autenticación
```

In those cases where a nested select is required and the lookup field is a reference, we can also use a query as the argument. For example:

```
db.define_table('persona', Field('nombre'))
db.define_table('cosa',
    Field('nombre'),
    Field('id_propietario','reference persona'))
db(db.cosa.id_propietario.belongs(db.persona.nombre=='Jonatan')).select()
```

In this case it is obvious that the nested select only needs the field referenced by `db.cosa.id_propietario`, so we do not need the more verbose `_select` notation.

A nested select can also be used as an insert/update value, but in this case the syntax is different:

```
perezoso = db(db.persona.nombre=='Jonatan').nested_select(db.persona.id)
db(db.cosa.id==1).update(id_propietario = perezoso)
```

In this case `perezoso` is a nested expression that computes the `id` of person "Jonatan". The two lines result in one single SQL query.

`sum`, `avg`, `min`, `max` and `len`¶

Previously, we used the count operator to count records. Similarly, you can use the sum operator to add (sum) the values of a specific field over a set of records. As in the case of count, the result of a sum is retrieved via the store object:

```
>>> suma = db.log.severidad.sum()
>>> print db().select(suma).first()[suma]
6
```

You can also use `avg`, `min` and `max` for the average, minimum and maximum value respectively of the selected records. For example:

```
>>> maximo = db.log.severidad.max()
>>> print db().select(maximo).first()[maximo]
3
```

`.len()` computes the length of a string, text or boolean field.

Expressions can be combined to form more complex expressions. For example, here we compute the sum of the lengths of all the severity strings in the logs, increased by one:

```
>>> suma = (db.log.severidad.len()+1).sum()
>>> print db().select(suma).first()[suma]
```

# Substrings¶

One can build an expression that refers to a substring. For example, we can group things whose initial three letters are the same and select only one from each group:

```
db(db.cosa).select(distinct = db.cosa.nombre[:3])
```

# Default values with `coalesce` and `coalesce_zero`¶

There are times when you need to pull a value from the database but also need default values in case a record contains NULL. In SQL there is a keyword for that purpose, `COALESCE`, and web2py has an equivalent `coalesce` method:

```
>>> db.define_table('usuariodelsistema', Field('nombre'), Field('nombre_completo'))
>>> db.usuariodelsistema.insert(nombre='max', nombre_completo='Máxima Potencia')
>>> db.usuariodelsistema.insert(nombre='tim', nombre_completo=None)
>>> print db(db.usuariodelsistema).select(db.usuariodelsistema.nombre_completo.coalesce(db.usuariodelsistema.nombre))
"COALESCE(usuariodelsistema.nombre_completo, usuariodelsistema.nombre)"
Máxima Potencia
tim
```

At other times, you need to compute a mathematical expression, but some fields contain null values that should be treated as zero.
`coalesce_zero` comes to the rescue, by telling the query to treat null database values as zero:

```
>>> db.define_table('usuariodelsistema', Field('nombre'), Field('puntos'))
>>> db.usuariodelsistema.insert(nombre='max', puntos=10)
>>> db.usuariodelsistema.insert(nombre='tim', puntos=None)
>>> print db(db.usuariodelsistema).select(db.usuariodelsistema.puntos.coalesce_zero().sum())
"SUM(COALESCE(usuariodelsistema.puntos, 0))"
10
```

### Generating raw SQL¶

Sometimes you only need to generate the SQL instead of executing it. This is easy to do with web2py since every command that performs database IO has an equivalent command that does not, and simply returns the SQL expression that would have been executed. These commands have the same names and syntax as the functional ones, but they start with an underscore.

This is the insert command, `_insert`:

```
>>> print db.persona._insert(nombre='Alejandro')
INSERT INTO persona(nombre) VALUES ('Alejandro');
```

This is the count command, `_count`:

```
>>> print db(db.persona.nombre=='Alejandro')._count()
SELECT count(*) FROM persona WHERE persona.nombre='Alejandro';
```

And this is the select command, `_select`:

```
>>> print db(db.persona.nombre=='Alejandro')._select()
SELECT persona.id, persona.nombre FROM persona WHERE persona.nombre='Alejandro';
```

The delete command, `_delete`:

```
>>> print db(db.persona.nombre=='Alejandro')._delete()
DELETE FROM persona WHERE persona.nombre='Alejandro';
```

And finally, the update command, `_update`:

```
>>> print db(db.persona.nombre=='Alejandro')._update(nombre='Máximo')
UPDATE persona SET nombre='Máximo' WHERE persona.nombre='Alejandro';
```

In any case, you can always use `db._lastsql` to retrieve the most recent SQL expression, whether it was executed manually or generated automatically by the DAL.

### Exporting and importing data¶

# CSV (one Table at a time)¶

When a Rows object is converted to a string, it is automatically serialized as CSV:

```
>>> registros = db(db.persona.id==db.cosa.id_propietario).select()
>>> print registros
persona.id,persona.nombre,cosa.id,cosa.nombre,cosa.id_propietario
1,Alejandro,1,Bote,1
1,Alejandro,2,Silla,1
2,Roberto,3,Zapatos,2
```

You can serialize a single table to CSV and store it in a file "prueba.csv":

```
>>> open('prueba.csv', 'wb').write(str(db(db.persona.id).select()))
```

This is equivalent to

```
>>> registros = db(db.persona.id).select()
>>> registros.export_to_csv_file(open('prueba.csv', 'wb'))
```

You can read the CSV file back with:

```
>>> db.persona.import_from_csv_file(open('prueba.csv', 'r'))
```

When importing, web2py looks for the field names in the CSV header. In this example, it finds two columns: "persona.id" and "persona.nombre". The "persona." prefix is ignored, and the "id" field is ignored as well. Then all the records are appended and assigned new id values. Both of these operations can be performed via the appadmin web interface.
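To visualize what the importer sees, the contents of "prueba.csv" produced above would look roughly like this (a hypothetical rendering based on the data inserted earlier; it is the header line that drives the import):

```
persona.id,persona.nombre
1,Alejandro
2,Roberto
3,Carlos
```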
# CSV (all Tables at once)¶

In web2py, you can back up and restore an entire database with two commands:

To export:

```
>>> db.export_to_csv_file(open('unarchivo.csv', 'wb'))
```

To import:

```
>>> db.import_from_csv_file(open('unarchivo.csv', 'rb'))
```

This mechanism can be used even if the importing database is of a different type than the exporting database. The data is stored in "unarchivo.csv" as a CSV file where each table starts with one line that indicates the table name, and another line with the field names:

```
TABLE nombredetabla
campo1, campo2, campo3, ...
```

Two tables are separated by `\r\n\r\n`. The file ends with the line: `END`

The file does not include uploaded files unless these are stored in the database. In any case, a simple enough solution is to zip the "uploads" folder separately.

When importing, the new records are appended to the database as long as they are not empty. In general, the imported records will not have the same id value as in the original (exported) database, but web2py restores the references (the links between records across tables) so they are not broken, even if the id values change.

If a table contains a field called "uuid", that field is used to identify duplicates. Also, if an imported record has the same "uuid" as an existing record, the existing record is updated with the new values.

# CSV and remote database synchronization¶

Consider the following model:

```
db = DAL('sqlite:memory:')
db.define_table('persona',
    Field('nombre'),
    format='%(nombre)s')
db.define_table('cosa',
    Field('id_propietario', 'reference persona'),
    Field('nombre'),
    format='%(nombre)s')
if not db(db.persona).count():
    id = db.persona.insert(nombre="Máximo")
    db.cosa.insert(id_propietario=id, nombre="Silla")
```

Each record is identified by an ID and referenced by that ID. If you have two copies of the same database used by two distinct web2py installations, the ID is unique only within each given database, not across the databases. This is a problem when merging records that come from different databases.

In order for records to keep their uniqueness across databases, they must:

* have a unique id (UUID),
* have an event_time field (to figure out which one is more recent among multiple copies),
* reference the UUID instead of the id.

This can be achieved without modifying web2py.
Here is what to do:

Change the previous model into:

```
db.define_table('persona',
    Field('uuid', length=64, default=lambda:str(uuid.uuid4())),
    Field('modified_on', 'datetime', default=request.now),
    Field('nombre'),
    format='%(nombre)s')
db.define_table('cosa',
    Field('uuid', length=64, default=lambda:str(uuid.uuid4())),
    Field('modified_on', 'datetime', default=request.now),
    Field('id_propietario', length=64),
    Field('nombre'),
    format='%(nombre)s')
db.cosa.id_propietario.requires = IS_IN_DB(db,'persona.uuid','%(nombre)s')
if not db(db.persona.id).count():
    id = uuid.uuid4()
    db.persona.insert(nombre="Máximo", uuid=id)
    db.cosa.insert(id_propietario=id, nombre="Silla")
```

Notice that in the table definitions above, the default value of the two `uuid` fields is set to a lambda function that returns a UUID (converted to a string). The lambda function is called once for each record inserted, ensuring that each record gets a unique UUID, even if multiple records are inserted in a single transaction.

Create a controller action to export the database:

```
def exportar():
    s = StringIO.StringIO()
    db.export_to_csv_file(s)
    response.headers['Content-Type'] = 'text/csv'
    return s.getvalue()
```

Create a controller action to import a previously saved copy of the other database and sync the records:

```
def importar_y_sincronizar():
    formulario = FORM(INPUT(_type='file', _name='datos'),
                      INPUT(_type='submit'))
    if formulario.process().accepted:
        db.import_from_csv_file(formulario.vars.datos.file, unique=False)
        # for every table
        for tabla in db.tables:
            # for every uuid, delete all records but the most recently updated one
            registros = db(db[tabla]).select(db[tabla].id, db[tabla].uuid,
                                             orderby=db[tabla].modified_on,
                                             groupby=db[tabla].uuid)
            for registro in registros:
                db((db[tabla].uuid==registro.uuid)&\
                   (db[tabla].id!=registro.id)).delete()
    return dict(formulario=formulario)
```

Optionally, you should create an index manually to make the search by uuid faster. Alternatively, you can use XML-RPC to export and import the file.

If the records reference uploaded files, you also need to export and import the content of the uploads folder. Notice that the files therein are already labeled by UUIDs, so you do not need to worry about naming conflicts or broken links.

# HTML and XML (one Table at a time)¶

Rows objects also have an `xml` method (like that of helpers) that serializes them to XML/HTML:

```
>>> registros = db(db.persona.id > 0).select()
>>> print registros.xml()
<table>
<thead>
<tr>
<th>persona.id</th>
<th>persona.nombre</th>
<th>cosa.id</th>
<th>cosa.nombre</th>
<th>cosa.id_propietario</th>
</tr>
</thead>
<tbody>
<tr class="even">
<td>1</td>
<td>Alejandro</td>
<td>1</td>
<td>Bote</td>
<td>1</td>
</tr>
...
</tbody>
</table>
```

If you need to serialize the Rows in any other XML format with custom tags, you can easily do so using the universal TAG helper and the * notation:

```
>>> registros = db(db.persona.id > 0).select()
>>> print TAG.resultado(*[TAG.registro(*[TAG.campo(r[f], _nombre=f) \
        for f in db.persona.fields]) for r in registros])
<resultado>
<registro>
<campo nombre="id">1</campo>
<campo nombre="nombre">Alejandro</campo>
</registro>
...
</resultado>
```
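Rows objects can also be converted into plain Python structures, which is convenient before serializing them to other formats such as JSON. A minimal sketch using `as_list`, a standard method of the Rows API:

```
>>> registros = db(db.persona.id > 0).select()
>>> lista = registros.as_list()  # a list of dictionaries, one per record
>>> print lista[0]['nombre']
Alejandro
```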
# Data representation¶

The `export_to_csv_file` function accepts a keyword argument named `represent`. When set to `True`, it uses the columns' `represent` function while exporting the data, instead of the stored field values. The function also accepts a keyword argument named `colnames` that should contain a list of names of the columns you want to export. It defaults to all columns.

Both `export_to_csv_file` and `import_from_csv_file` accept keyword arguments that tell the CSV parser which format to save/load the files in:

* `delimiter`: the string used to separate values (default ',')
* `quotechar`: the character used to quote string values (default is double quotes)
* `quoting`: the quoting system, i.e. how strings are delimited (default `csv.QUOTE_MINIMAL`)

Here is a possible usage:

```
>>> import csv
>>> registros = db(consulta).select()
>>> registros.export_to_csv_file(open('/tmp/prueba.txt', 'w'),
        delimiter='|',
        quotechar='"',
        quoting=csv.QUOTE_NONNUMERIC)
```

Which would render something like:

```
"hola"|35|"este es el texto de la descripción"|"2009-03-03"
```

For more details, consult the official Python documentation [quoteall].

### Caching selects¶

The select method also takes a cache argument, which defaults to None. For caching purposes, it should be set to a tuple where the first element is the cache model (cache.ram, cache.disk, etc.) and the second element is the expiration time in seconds.

In the following example, you see a controller that caches a select on the previously defined log table. The actual select fetches data from the back-end database no more often than once every 60 seconds, and stores the result in cache.ram. If the next call to this controller occurs in less than 60 seconds since the last database IO, it simply fetches the data stored in cache.ram.

```
def cache_de_select():
    registros = db().select(db.log.ALL, cache=(cache.ram, 60))
    return dict(registros=registros)
```

The `select` method also takes an optional argument called `cacheable`, normally `False`. When `cacheable=True`, the resulting `Rows` object is serializable, though the `Row` objects lack the `update_record` and `delete_record` methods. If you do not need those methods, you can speed up selects considerably by setting the cacheable flag:

```
registros = db(consulta).select(cacheable=True)
```

When `cacheable=False` (the default), only the database results are cached, not the actual `Rows` object.
When the `cache` argument is used in combination with `cacheable=True`, the entire Rows object is cached, and this results in much faster caching:

```
registros = db(consulta).select(cache=(cache.ram, 3600), cacheable=True)
```

### Self-reference and aliases¶

It is possible to define tables whose fields refer to the table that contains them. Here is an example:

```
db.define_table('persona',
    Field('nombre'),
    Field('id_padre', 'reference persona'),
    Field('id_madre', 'reference persona'))
```

Notice that the alternative notation of using a Table object as field type would fail in this case, because it would use the variable before it is defined:

```
db.define_table('persona',
    Field('nombre'),
    Field('id_padre', db.persona), # invalid!
    Field('id_madre', db.persona)) # invalid!
```

In general, `db.nombredetabla` and `"reference nombredetabla"` are equivalent field types, but the latter is the only one allowed for self references.

If the table refers to itself, then it is not possible to perform a JOIN to select a person and their parents without using the SQL "AS" keyword. This is achieved in web2py using `with_alias`. Here is an example:

```
>>> Padre = db.persona.with_alias('padre')
>>> Madre = db.persona.with_alias('madre')
>>> db.persona.insert(nombre='Máximo')
1
>>> db.persona.insert(nombre='Claudia')
2
>>> db.persona.insert(nombre='Marcos', id_padre=1, id_madre=2)
3
>>> registros = db().select(db.persona.nombre, Padre.nombre, Madre.nombre,
        left=(Padre.on(Padre.id==db.persona.id_padre),
              Madre.on(Madre.id==db.persona.id_madre)))
>>> for registro in registros:
        print registro.persona.nombre, registro.padre.nombre, registro.madre.nombre
Máximo None None
Claudia None None
Marcos Máximo Claudia
```

Notice that we have chosen to distinguish between:

* "id_padre": the field name used in the table "persona";
* "padre": the alias we want to use for the table referenced by the above field; this is communicated to the database;
* "Padre": the variable used by web2py to refer to that alias.

The difference is subtle, and there is nothing wrong in using the same name for the three of them:

```
db.define_table('persona',
    Field('nombre'),
    Field('padre', 'reference persona'),
    Field('madre', 'reference persona'))
>>> padre = db.persona.with_alias('padre')
>>> madre = db.persona.with_alias('madre')
>>> db.persona.insert(nombre='Máximo')
1
>>> db.persona.insert(nombre='Claudia')
2
>>> db.persona.insert(nombre='Marco', padre=1, madre=2)
3
>>> registros = db().select(db.persona.nombre, padre.nombre, madre.nombre,
        left=(padre.on(padre.id==db.persona.padre),
              madre.on(madre.id==db.persona.madre)))
>>> for registro in registros:
        print registro.persona.nombre, registro.padre.nombre, registro.madre.nombre
Máximo None None
Claudia None None
Marco Máximo Claudia
```

But it is important to have a clear picture of these details in order to build correct queries.

### Advanced features¶

# Table inheritance¶

It is possible to create a table that contains all the fields of another table. It is enough to pass a Table object instead of a Field object to `define_table`. For example:

```
db.define_table('persona', Field('nombre'))
db.define_table('doctor', db.persona, Field('especialidad'))
```

It is also possible to define a dummy table that is not stored in a database, just so that it can be reused in multiple other places.
For example:

```
firma = db.Table(db, 'firma',
    Field('created_on', 'datetime', default=request.now),
    Field('created_by', db.auth_user, default=auth.user_id),
    Field('modified_on', 'datetime', update=request.now),
    Field('modified_by', db.auth_user, update=auth.user_id))

db.define_table('pago', Field('monto', 'double'), firma)
```

This example assumes that standard web2py authentication is enabled. Notice that if you use `Auth`, web2py already creates one such table for you:

```
auth = Auth(db)
db.define_table('pago', Field('monto', 'double'), auth.signature)
```

When using table inheritance, if you want the inheriting table to also inherit the validators, be sure to define the validators of the inherited table before defining the inheriting table.

`filter_in` and `filter_out`¶

It is possible to define a filter for each field, to be called before a value is inserted into the database for that field, as well as after a value is retrieved by a query. Imagine, for example, that you want to store a serializable Python data structure in a field in json format. This can be done as follows:

```
>>> from simplejson import loads, dumps
>>> db.define_table('cualquierobjeto', Field('nombre'), Field('datos','text'))
>>> db.cualquierobjeto.datos.filter_in = lambda obj, dumps=dumps: dumps(obj)
>>> db.cualquierobjeto.datos.filter_out = lambda txt, loads=loads: loads(txt)
>>> miobjeto = ['hola', 'mundo', 1, {2: 3}]
>>> id = db.cualquierobjeto.insert(nombre='minombredeobjeto', datos=miobjeto)
>>> registro = db.cualquierobjeto(id)
>>> registro.datos
['hola', 'mundo', 1, {2: 3}]
```

Another way to accomplish the same is by using a field of type `SQLCustomType`, as discussed later.

# Before and after IO callbacks¶

web2py provides a mechanism to register callbacks to be called before and/or after record creation, update and deletion. Each table stores six lists of callbacks:

```
db.mitabla._before_insert
db.mitabla._after_insert
db.mitabla._before_update
db.mitabla._after_update
db.mitabla._before_delete
db.mitabla._after_delete
```

You can register a callback function by appending it to the corresponding list. The caveat is that, depending on the operation, the callbacks take different input arguments. This is best explained through examples.

```
>>> db.define_table('persona', Field('nombre'))
>>> def miprint(*args): print args
>>> db.persona._before_insert.append(lambda f: miprint(f))
>>> db.persona._after_insert.append(lambda f,id: miprint(f,id))
>>> db.persona._before_update.append(lambda s,f: miprint(s,f))
>>> db.persona._after_update.append(lambda s,f: miprint(s,f))
>>> db.persona._before_delete.append(lambda s: miprint(s))
>>> db.persona._after_delete.append(lambda s: miprint(s))
```

Here `f` is a dictionary of the fields passed to insert or update, `id` is the id of the newly created record, and `s` is the Set object used for update or delete.

```
>>> db.persona.insert(nombre='Juan')
({'nombre': 'Juan'},)
({'nombre': 'Juan'}, 1)
>>> db(db.persona.id==1).update(nombre='Timoteo')
(<Set (persona.id = 1)>, {'nombre': 'Timoteo'})
(<Set (persona.id = 1)>, {'nombre': 'Timoteo'})
>>> db(db.persona.id==1).delete()
(<Set (persona.id = 1)>,)
(<Set (persona.id = 1)>,)
```
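A more realistic callback is sketched below, under the assumption that you want a simple audit trail (the `auditoria` table is hypothetical, introduced here for illustration only):

```
>>> db.define_table('auditoria', Field('accion'), Field('detalle', 'text'))
>>> def registrar_borrado(s):
...     # s is the Set about to be deleted; log each affected record
...     for registro in s.select():
...         db.auditoria.insert(accion='delete',
...                             detalle=str(registro.as_dict()))
...     # returning None lets the delete proceed
>>> db.persona._before_delete.append(registrar_borrado)
```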
The values returned by these callbacks should be `None` or `False`. If any of the `_before_*` callbacks returns `True`, it aborts the insert/update/delete operation in progress.

Sometimes a callback may need to perform an update on the same table, or even on a different one, and we want to avoid callbacks calling themselves recursively. To avoid this problem, Set objects have an `update_naive` method that works like `update`, but skips the before and after IO callbacks.

web2py also makes it possible to save a copy of a record every time the record is individually modified. There are different ways to do it; it can be done for all tables at once using the syntax:

```
auth.enable_record_versioning(db)
```

This requires Auth to be enabled, and it is discussed in the chapter on authentication. It can also be done for each individual table, as shown below.

Consider the following table:

```
db.define_table('item',
    Field('nombre'),
    Field('cantidad','integer'),
    Field('is_active','boolean',
          writable=False, readable=False, default=True))
```

Notice the boolean field called `is_active`, defaulting to True. We can tell web2py to create a new table (in the same or in a different database) and to store every previous version of each record in it, whenever a record is modified:

```
db.item._enable_record_versioning()
```

or in a more explicit form:

```
db.item._enable_record_versioning(
    archive_db = db,
    archive_name = 'item_archive',
    current_record = 'current_record',
    is_active = 'is_active')
```

The `archive_db=db` option tells web2py to store the special archive table in the same database as the `item` table. The `archive_name` option sets the name of the archive table. The archive table has the same fields as the original `item` table, except that unique fields are no longer unique (because it needs to store multiple versions), and it has an extra field named `current_record`, which is a reference to the current record in the `item` table.

When records are deleted, they are not actually deleted. A deleted record is copied into the `item_archive` table (the same as when it is modified) and its `is_active` field is set to False. When record versioning is enabled, web2py sets the `custom_filter` parameter on this table, which hides all records in the `item` table where the `is_active` field is set to False. The `is_active` parameter of the `_enable_record_versioning` method allows specifying the name of the field used by `custom_filter` to determine whether a record has been deleted or not.

`custom_filter` filters are ignored when the appadmin interface is used.

# Common fields and multi-tenancy¶

`db._common_fields` is a list of fields that should belong to every table. This list can also contain tables, which is interpreted as the list of all their fields. For example, occasionally you may want to add a digital signature to every table except the `auth` tables. In that case, after `auth.define_tables()` but before defining any other table, insert:

```
db._common_fields.append(auth.signature)
```

One field is special: "request_tenant", i.e. the tenant of the request.
This field does not exist, but you can create it and add it to any of your tables (or to all of them):

```
db._common_fields.append(
    Field('request_tenant',
          default=request.env.http_host,
          writable=False))
```

For every table with a field named `request_tenant`, all records for all queries are always automatically filtered by:

```
db.tabla.request_tenant == db.tabla.request_tenant.default
```

and for every record inserted, this field is set to its default value. In the example above, we have chosen

```
default = request.env.http_host
```

i.e. we have configured our app to filter all tables in all queries with

```
db.tabla.request_tenant == request.env.http_host
```

This simple trick allows us to turn any application into a multi-tenant application. That is, even though we run one single instance of the app and use one single database, when the app is accessed under two or more domains (in the example, the domain name is retrieved from `request.env.http_host`), visitors see different data depending on the domain. Think, for example, of running multiple online stores under different domains with a single app and a single database.

You can turn off multi-tenancy filters using:

```
registros = db(consulta, ignore_common_filters=True).select()
```

# Common filters¶

A common filter is a generalization of the multi-tenancy feature described above. It provides an easy way to avoid repeating the same query in every case. Consider, for example, the following table:

```
db.define_table('articulo',
    Field('asunto'),
    Field('texto', 'text'),
    Field('publico', 'boolean'),
    common_filter = lambda consulta: db.articulo.publico==True
)
```

Any select, update or delete on that table will include only public articles. The attribute can also be changed in controllers:

```
db.articulo._common_filter = lambda consulta: db.articulo.publico == True
```

This serves both as a way to avoid repeating the "db.articulo.publico==True" phrase in every article search, and as a security improvement that prevents you from forgetting to disallow viewing of private articles.

In case you do want some items excluded from the common filter (for example, allowing the admin to see private articles), you can either remove the filter:

```
db.articulo._common_filter = None
```

or ignore it:

```
db(consulta, ignore_common_filters=True).select(...)
```

# Custom `Field` types (experimental)¶

Apart from using `filter_in` and `filter_out`, it is possible to define new, custom field types. For example, consider here a field that contains binary data in compressed form:

```
from gluon.dal import SQLCustomType
import zlib

comprimido = SQLCustomType(
    type ='text',
    native='text',
    encoder =(lambda x: zlib.compress(x or '')),
    decoder = (lambda x: zlib.decompress(x))
)

db.define_table('ejemplo', Field('datos', type=comprimido))
```

`SQLCustomType` is a field type factory. Its `type` argument must be one of the standard web2py field types. It tells web2py how to treat the field values at the web2py level. `native` is the name of the field as far as the database engine is concerned. The allowed names depend on the database engine.
`encoder` is an optional transformation function applied when the data is stored, and `decoder` is the optional transformation function operating in the opposite direction, when the data is retrieved.

This feature is marked as experimental. In practice, it has been part of web2py core for a long time and it works, but it can make the code non-portable, for example when the native type is database-specific. It does not work on Google App Engine NoSQL.

# Using the DAL without defining tables¶

The DAL can be used from any Python program simply by doing this:

```
from gluon import DAL, Field
db = DAL('sqlite://storage.sqlite', folder='ruta/a/carpeta/databases')
```

i.e. import DAL and Field, and connect to the database, specifying the folder that contains the .table files (the app/databases folder).

To access the data and its attributes, we still have to define every table we intend to access with `db.define_table(...)`.

If we just need access to the data but not to the table attributes, then instead of redefining the tables explicitly, it is enough to tell web2py to read the necessary info from the metadata in the .table files:

```
from gluon import DAL, Field
db = DAL('sqlite://storage.sqlite', folder='ruta/a/carpeta/databases',
         auto_import=True)
```

This allows us to access any table `db.tabla` without having to redefine it.

# PostGIS, SpatiaLite, and MS Geo (experimental)¶

The DAL supports geographic APIs using PostGIS (for PostgreSQL), SpatiaLite (for SQLite), and MSSQL with the Spatial extensions. This functionality was sponsored by the Sahana project and implemented by Denes Lengyel.

The DAL provides geometry and geography field types and the following functions:

```
st_asgeojson (PostGIS only)
st_astext
st_contains
st_distance
st_equals
st_intersects
st_overlaps
st_simplify (PostGIS only)
st_touches
st_within
st_x
st_y
```

Here is an example of usage:

```
from gluon.dal import DAL, Field, geoPoint, geoLine, geoPolygon
db = DAL("mssql://usuario:contraseña@servidor:db")
espacial = db.define_table('espacial', Field('ubicacion','geometry()'))
```

Below, we insert a point, a line, and a polygon:

```
espacial.insert(ubicacion=geoPoint(1,2))
espacial.insert(ubicacion=geoLine((100,100),(20,180),(180,180)))
espacial.insert(ubicacion=geoPolygon((0,0),(150,0),(150,150),(0,150),(0,0)))
```

Notice that

```
registros = db(espacial.id>0).select()
```

always returns the geometry data serialized as text. You can also do the same more explicitly using `st_astext()`:

```
print db(espacial.id>0).select(espacial.id, espacial.ubicacion.st_astext())
espacial.id, espacial.ubicacion.STAsText()
1, "POINT (1 2)"
2, "LINESTRING (100 100, 20 180, 180 180)"
3, "POLYGON ((0 0, 150 0, 150 150, 0 150, 0 0))"
```

You can ask for the native representation using `st_asgeojson()` (PostGIS only):

```
print db(espacial.id>0).select(espacial.id, espacial.ubicacion.st_asgeojson().with_alias('ubicacion'))
espacial.id, ubicacion
1, [1, 2]
2, [[100, 100], [20, 180], [180, 180]]
3, [[[0, 0], [150, 0], [150, 150], [0, 150], [0, 0]]]
```

(notice that an array is a point, an array of arrays is a line, and an array of arrays of arrays is a polygon).
Here are examples of how to use the geographic functions:

```
consulta = espacial.ubicacion.st_intersects(geoLine((20,120),(60,160)))
consulta = espacial.ubicacion.st_overlaps(geoPolygon((1,1),(11,1),(11,11),(11,1),(1,1)))
consulta = espacial.ubicacion.st_contains(geoPoint(1,1))
print db(consulta).select(espacial.id, espacial.ubicacion)
espacial.id, espacial.ubicacion
3, "POLYGON ((0 0, 150 0, 150 150, 0 150, 0 0))"
```

Computed distances can also be retrieved as floating point numbers:

```
distancia = espacial.ubicacion.st_distance(geoPoint(-1,2)).with_alias('distancia')
print db(espacial.id>0).select(espacial.id, distancia)
espacial.id, distancia
1 2.0
2 140.714249456
3 1.0
```

# Copying data from one database into another¶

Consider the situation in which you have been using a database with connection string `sqlite://storage.sqlite`, and you want to move the data to another database, using a different connection string:

```
db = DAL('postgres://usuario:contraseña@localhost/midb')
```

Before switching, we want to move the data and rebuild all the metadata for the new database. We assume the new database exists and is empty. web2py provides a script that does this work for you:

```
cd web2py
python scripts/cpdb.py \
   -f applications/app/databases \
   -y 'sqlite://storage.sqlite' \
   -Y 'postgres://usuario:contraseña@localhost/midb'
```

After running the script, you can simply switch the connection string in the model and everything should work out of the box, with the data transferred to the new database.

This script provides various command line options that allow you to move data from one application to another, move all tables or only some, and clear the data. For more details, try:

```
python scripts/cpdb.py -h
```

# Note on the new DAL and adapters¶

The source code of the Database Abstraction Layer (DAL) was completely rewritten in 2010. While it stays backward compatible, the rewrite made it more modular and easier to extend. Here we explain its main logic.

The file "gluon/dal.py" defines, among others, the following classes:

```
ConnectionPool
BaseAdapter (extends ConnectionPool)
Row
DAL
Reference
Table
Expression
Field
Query
Set
Rows
```

Their use has been explained in the previous sections, except for `BaseAdapter`. When the methods of a `Table` or `Set` object need to communicate with the database, they delegate to methods of the adapter the task of generating the SQL and/or the function calls. For example:

```
db.mitabla.insert(micampo='mivalor')
```

calls

```
Table.insert(micampo='mivalor')
```

which delegates to the adapter by returning:

```
db._adapter.insert(db.mitabla, db.mitabla._listify(dict(micampo='mivalor')))
```

Here `db.mitabla._listify` converts the dict of arguments into a list of `(campo, valor)` pairs and calls the `insert` method of the `adapter`. `db._adapter` does something like the following:

```
consulta = db._adapter._insert(db.mitabla, lista_de_campos)
db._adapter.execute(consulta)
```

where the first line builds the query and the second one executes it.

`BaseAdapter` defines the interface for all adapters.
"gluon/dal.py", al tiempo de esta edición, contiene los siguientes adaptadores: ``` SQLiteAdapter extensión de BaseAdapter JDBCSQLiteAdapter extensión de SQLiteAdapter MySQLAdapter extensión de BaseAdapter PostgreSQLAdapter extensión de BaseAdapter JDBCPostgreSQLAdapter extensión de PostgreSQLAdapter OracleAdapter extensión de BaseAdapter MSSQLAdapter extensión de BaseAdapter MSSQL2Adapter extensión de MSSQLAdapter FireBirdAdapter extensión de BaseAdapter FireBirdEmbeddedAdapter extensión de FireBirdAdapter InformixAdapter extensión de BaseAdapter DB2Adapter extensión de BaseAdapter IngresAdapter extensión de BaseAdapter IngresUnicodeAdapter extensión de IngresAdapter GoogleSQLAdapter extensión de MySQLAdapter NoSQLAdapter extensión de BaseAdapter GoogleDatastoreAdapter extensión de NoSQLAdapter CubridAdapter extensión de MySQLAdapter (experimental) TeradataAdapter extensión de DB2Adapter (experimental) SAPDBAdapter extensión de BaseAdapter (experimental) CouchDBAdapter extensión de NoSQLAdapter (experimental) MongoDBAdapter extensión de NoSQLAdapter (experimental) IMAPAdapter extensión de NoSQLAdapter (experimental) ``` que sobreescriben el comportamiento de `BaseAdapter` . Cada adaptador tiene más o menos esta estructura: ``` class MySQLAdapter(BaseAdapter): # especifica qué controlador usa driver = globals().get('pymysql', None) # traduce tipos de campo de web2py a # los tipos de campo de la base de datos types = { 'boolean': 'CHAR(1)', 'string': 'VARCHAR(%(length)s)', 'text': 'LONGTEXT', ... } # conectar a la base de datos usando el controlador def __init__(self, db, uri, pool_size=0, folder=None, db_codec ='UTF-8', credential_decoder=lambda x:x, driver_args={}, adapter_args={}): # analiza la cadena de conexión uri y almacena los # parámetros en driver_args ... # define una función para conectar a la base de datos def connect(driver_args=driver_args): return self.driver.connect(**driver_args) # la agrega al caché de conexiones self.pool_connection(connect) # configura parámetros opcionales al establecerse la conexión self.execute('SET FOREIGN_KEY_CHECKS=1;') self.execute("SET sql_mode='NO_BACKSLASH_ESCAPES';") # sobreescribe los métodos básicos de BaseAdapter según # sea necesesario def lastrowid(self, table): self.execute('select last_insert_id();') return int(self.cursor.fetchone()[0]) ``` Debería ser fácil agregar nuevos adaptadores si se toman como ejemplo los adaptadores incorporados. Cuando se crea la instancia `db` : ``` db = DAL('mysql://...') ``` el prefijo en la cadena uri define el adaptador a utilizar. Los adaptadores asociados se define en el siguiente diccionario, que también se encuentra en "gluon/dal.py": ``` ADAPTERS = { 'sqlite': SQLiteAdapter, 'sqlite:memory': SQLiteAdapter, 'mysql': MySQLAdapter, 'postgres': PostgreSQLAdapter, 'oracle': OracleAdapter, 'mssql': MSSQLAdapter, 'mssql2': MSSQL2Adapter, 'db2': DB2Adapter, 'teradata': TeradataAdapter, 'informix': InformixAdapter, 'firebird': FireBirdAdapter, 'firebird_embedded': FireBirdAdapter, 'ingres': IngresAdapter, 'ingresu': IngresUnicodeAdapter, 'sapdb': SAPDBAdapter, 'cubrid': CubridAdapter, 'jdbc:sqlite': JDBCSQLiteAdapter, 'jdbc:sqlite:memory': JDBCSQLiteAdapter, 'jdbc:postgres': JDBCPostgreSQLAdapter, 'gae': GoogleDatastoreAdapter, # discouraged, for backward compatibility 'google:datastore': GoogleDatastoreAdapter, 'google:sql': GoogleSQLAdapter, 'couchdb': CouchDBAdapter, 'mongodb': MongoDBAdapter, 'imap': IMAPAdapter } ``` luego, la cadena uri es analizada con más detalle por el adaptador. 
For any adapter you can replace the driver:

```
import MySQLdb as mysqldb
from gluon.dal import MySQLAdapter
MySQLAdapter.driver = mysqldb
```

where `mysqldb` has to be a module implementing a .connect() method. You can also specify optional driver arguments and adapter arguments:

```
db = DAL(..., driver_args={}, adapter_args={})
```

# Gotchas¶

SQLite does not support dropping or altering columns. This means that web2py migrations will work only up to a point. If you delete a field from a table, the column will remain in the database but will be invisible to web2py. If you later decide to reinstate the column, web2py will try to create it again and fail. In this case you must set `fake_migrate=True` so that the metadata is rebuilt without attempting to add the column again (a sketch of this recovery procedure is shown at the end of this section). Also, and for the same reason, SQLite does not detect any change of column type. If you insert a number in a string field, it is stored as a string. If you later change the model and replace the "string" type with "integer", SQLite will keep treating the number as a string, and this may cause problems when you try to extract the data.

MySQL does not support multiple ALTER TABLE statements within a single transaction. This means that every migration process is broken into multiple commits that are applied one after another. If something happens that causes a failure, the migration will probably break (web2py's metadata will no longer be in sync with the actual table structures in the database). This is unfortunate, but it can be prevented (by migrating one table at a time) and it can be repaired afterwards (revert the web2py model to the one matching the actual table structure in the database, set `fake_migrate=True`, and after the metadata has been rebuilt, set `fake_migrate=False` and migrate the table again).

Google SQL has the same problems as MySQL, and more. In particular, the table metadata itself must be stored in the database, in a table that is not migrated by web2py. This is because Google App Engine has a read-only file system. web2py migrations on Google:SQL, combined with the MySQL issue described above, can result in metadata corruption. Again, this can be prevented (by migrating the table and then immediately setting migrate=False so that the metadata table is no longer touched), or it can be fixed after a failure (by accessing the database with the Google dashboard and deleting any corrupt entry from the table called `web2py_filesystem`).

MSSQL does not support the SQL OFFSET keyword, so the database cannot paginate on its own. When doing a `limitby=(a, b)`, web2py fetches the first `b` rows and discards the first `a`. This may result in considerable overhead when compared with other database engines.

Oracle also does not support pagination; it supports neither the OFFSET nor the LIMIT keywords. web2py achieves pagination by translating a

```
db(...).select(limitby=(a, b))
```

into a complex three-way nested select (as suggested by the official Oracle documentation). This works for simple selects but may break for complex selects involving aliased fields and joins.
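Here is a sketch of the recovery procedure mentioned above for SQLite and MySQL (the table and field names are placeholders):

```
# the column "campo_restaurado" already exists in the database,
# but web2py's metadata has lost track of it:
db.define_table('mitabla',
                Field('micampo'),
                Field('campo_restaurado'),
                fake_migrate=True)   # rebuild metadata, run no ALTER TABLE

# once the metadata files have been regenerated, remove fake_migrate
# (or set it to False) to restore normal migrations
```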
MSSQL has problems with circular references among tables that have ONDELETE CASCADE. This is an MSSQL bug and can be worked around by setting the ondelete attribute of all reference fields to "NO ACTION". You can also do it once and for all, with a single statement before defining the tables:

```
db = DAL('mssql://....')
for clave in ['reference','reference FK']:
    db._adapter.types[clave]=db._adapter.types[clave].replace(
        '%(on_delete_action)s','NO ACTION')
```

MSSQL also has problems with arguments passed to the DISTINCT keyword, so while this works,

```
db(consulta).select(distinct=True)
```

this does not:

```
db(consulta).select(distinct=db.mitabla.micampo)
```

Google NoSQL (Datastore) does not allow joins, left joins, aggregates, expressions, the OR operator involving more than one table, or the 'like' operator in searches on "text" fields. Transactions are limited and are not started automatically by web2py (you must use the Google API `run_in_transaction`, which is described in the Google App Engine online documentation). Google also limits the number of records you can retrieve in each query (1000 records at the time of writing). In the Google Datastore, record IDs are integers but they are not sequential. In SQL the "list:string" type is mapped to a "text" type, but in the Google Datastore it is mapped to a `ListStringProperty` object. Similarly, "list:integer" and "list:reference" are mapped to `ListProperty`. This makes searches for content inside these field types more efficient on Google NoSQL than on SQL databases.

# Forms and Validators¶

# Chapter 7: Forms and validators

* Forms and Validators
* FORM
* The process and validate methods
* Hidden fields
* keepvalues
* onvalidation
* Detecting record changes
* Forms and redirection
* Multiple forms per page
* Sharing forms
* Adding buttons to FORMs
* More about manipulation of FORMs
* SQLFORM
* SQLFORM and insert/update/delete
* SQLFORM in HTML
* SQLFORM and uploads
* Storing the original filename
* autodelete
* Links to referencing records
* Prepopulating the form
* Adding extra form elements to SQLFORM
* SQLFORM without database IO
* Other types of forms
* SQLFORM.factory
* One form for multiple tables
* Confirmation forms
* Form to edit a dictionary
* CRUD
* Custom forms
* Validators
* Widgets
* SQLFORM.grid and SQLFORM.smartgrid

## Forms and Validators¶

There are four distinct ways to build forms in web2py:

* `FORM` provides a low-level implementation in terms of HTML helpers. A `FORM` object can be serialized into HTML and is aware of the fields it contains. `FORM` objects know how to validate submitted form values.
* `SQLFORM` provides a high-level API for building create, update and delete forms from an existing database table.
* `SQLFORM.factory` is an abstraction layer on top of `SQLFORM`, designed to take advantage of the form-generation features even when there is no database involved. It generates a form very similar to `SQLFORM` from the description of a table, without the need to create that table in the database.
* `CRUD` methods.
These are functionally equivalent to SQLFORM and are built on top of it, but they provide a more compact notation. All these forms are self-aware and, if input does not pass validation, they can modify themselves and add error messages. The forms can be queried for the validated values and for the error messages that have been generated during processing. Arbitrary HTML can be inserted into or extracted from the form programmatically, using helpers. `FORM` and `SQLFORM` are helpers and can be manipulated in a similar way as `DIV` objects. For example, you can set a form style:

```
formulario = SQLFORM(...)
formulario['_style']='border:1px solid black'
```

`FORM` ¶

Consider as an example a test application with the following "default.py" controller:

```
def mostrar_formulario():
    return dict()
```

and the associated "default/mostrar_formulario.html" view:

```
{{extend 'layout.html'}}
<h2>Formulario de ingreso de datos</h2>
<form enctype="multipart/form-data" action="{{=URL()}}" method="post">
Tu nombre: <input name="nombre" />
<input type="submit" />
</form>
<h2>Variables enviadas</h2>
{{=BEAUTIFY(request.vars)}}
```

This is a regular HTML form that asks for the user's name. When you fill in the form and click the submit button, the form self-submits, and the variable `request.vars.nombre`, together with the submitted value, is displayed at the bottom. You can generate the same form using helpers. This can be done in the view or in the action. Since web2py processes the form in the action, it is better to define the form there. Here is the new controller:

```
def mostrar_formulario():
    formulario=FORM('Tu nombre:', INPUT(_name='nombre'), INPUT(_type='submit'))
    return dict(formulario=formulario)
```

and the view:

```
{{extend 'layout.html'}}
<h2>Formulario de ingreso de datos</h2>
{{=formulario}}
<h2>Variables enviadas</h2>
{{=BEAUTIFY(request.vars)}}
```

The code so far is equivalent to the previous code, but the form is generated by the statement `{{=formulario}}`, which serializes the `FORM` object. Now we add one level of complexity to the example: validation and processing of the form. Change the controller so that the "nombre" INPUT carries the `IS_NOT_EMPTY()` validator and the action processes the submission (a reconstructed sketch of this controller follows below), and change the view as follows:

```
{{extend 'layout.html'}}
<h2>Formulario de ingreso</h2>
{{=formulario}}
<h2>Variables enviadas</h2>
{{=BEAUTIFY(request.vars)}}
<h2>Variables aceptadas</h2>
{{=BEAUTIFY(formulario.vars)}}
<h2>Errores en el formulario</h2>
{{=BEAUTIFY(formulario.errors)}}
```

* In the action, we added the `requires=IS_NOT_EMPTY()` validator for the "nombre" input field.
* In the action, we added a call to `formulario.accepts(..)`.
* In the view, we display `formulario.vars` and `formulario.errors` as well as the form and `request.vars`.

All the work is done by the `accepts` method of the `formulario` object. It filters the data in `request.vars` according to the declared requirements (expressed by validators). `accepts` stores the variables that pass validation in `formulario.vars`. If a field value does not meet a requirement, the failing validator returns an error and the error is stored in `formulario.errors`. Both `formulario.vars` and `formulario.errors` are objects similar to `request.vars`.
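For reference, here is a reconstruction of the modified controller described by the bullets above:

```
def mostrar_formulario():
    formulario = FORM('Tu nombre:',
                      INPUT(_name='nombre', requires=IS_NOT_EMPTY()),
                      INPUT(_type='submit'))
    if formulario.accepts(request, session):
        response.flash = 'formulario aceptado'
    elif formulario.errors:
        response.flash = 'el formulario tiene errores'
    return dict(formulario=formulario)
```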
`formulario.vars` contains the values that passed validation, for example:

```
formulario.vars.nombre = "Maximiliano"
```

`formulario.errors` contains the errors, for example:

```
formulario.errors.nombre = "¡No puede estar vacío!"
```

The full signature of the `accepts` method is the following:

```
formulario.accepts(vars, session=None, formname='default',
                   keepvalues=False, onvalidation=None,
                   dbio=True, hideerror=False)
```

The meaning of the optional parameters is explained in the next sections. The first argument can be `request.vars` or `request.get_vars` or `request.post_vars` or simply `request`. The latter is equivalent to accepting `request.post_vars` as input. The `accepts` function returns `True` if the form was accepted and `False` otherwise. A form is not accepted if it has errors or when it has not been submitted (for example, the first time it is shown). When first rendered, the page shows the empty form; when invalid data is submitted, the error message is displayed next to the offending field; when valid data is submitted, the accepted variables are displayed below the form.

# The `process` and `validate` methods¶

The statement

```
formulario.accepts(request.post_vars, session, ...)
```

can be abbreviated with the following shortcut:

```
formulario.process(...).accepted
```

The latter does not need the `request` and `session` arguments (although you can specify them optionally). It also differs from `accepts` in that it returns the form itself. Internally `process` calls `accepts` and passes its parameters on to it. The value returned by `accepts` is stored in `formulario.accepted`. The `process` function takes some extra parameters that `accepts` does not:

* `message_onsuccess`
* `onsuccess`: when equal to 'flash' (the default) and the form is accepted, it flashes a message with the value of `message_onsuccess`
* `message_onfailure`
* `onfailure`: when equal to 'flash' (the default) and the form does not pass validation, it flashes the value of `message_onfailure`
* `next`: specifies where to redirect once the form is accepted.

`onsuccess` and `onfailure` can also be functions, such as

```
lambda formulario: hacer_algo(formulario)
```

and

```
formulario.validate(...)
```

is a shortcut for

```
formulario.process(...,dbio=False).accepted
```

# Hidden fields¶

When the form above is serialized by `{{=form}}`, and after the call to the `accepts` method, it looks like this:

```
<form enctype="multipart/form-data" action="" method="post">
tu nombre: <input name="nombre" />
<input type="submit" />
<input value="783531473471" type="hidden" name="_formkey" />
<input value="default" type="hidden" name="_formname" />
</form>
```

Notice the presence of two hidden fields: "_formkey" and "_formname". They are generated by the call to `accepts` and play two important roles:

* The hidden field called "_formkey" is a one-time key per form, used by web2py to prevent double submission of forms. The value of this key or token is generated when the form is serialized and is stored in the `session` object. When the form is submitted, this value must match, or else `accepts` returns `False` without errors, as if the form had not been submitted at all. This is because web2py cannot determine whether the form was submitted correctly.
* The field called "_formname" is generated by web2py to give the form a specific name, but the name can be overridden.
This field is necessary to allow pages that contain and process multiple forms; web2py distinguishes the different submitted forms by their names.
* Additional hidden fields can be specified as

```
FORM(.., hidden=dict(...))
```

The role of these hidden fields and their use in pages with one or more custom forms is discussed in more detail elsewhere in this chapter. If the form above is submitted with an empty "nombre" field, the form does not pass validation. When the form is serialized again it looks like this:

```
<form enctype="multipart/form-data" action="" method="post">
your name: <input value="" name="nombre" />
<div class="error">¡No puede estar vacío!</div>
<input type="submit" />
<input value="783531473471" type="hidden" name="_formkey" />
<input value="default" type="hidden" name="_formname" />
</form>
```

Notice the presence of a DIV of class "error" in the serialized form. web2py inserts the error messages into the form to notify the visitor about the fields that failed validation. Upon submission, the `accepts` method determines whether the form has in fact been submitted, checks whether the "nombre" field is empty and whether it is required, and if so, inserts the error message from the validator into the form.

The base "layout.html" template is expected to handle DIVs of class "error". The default layout uses jQuery effects to make errors appear and slide down with a red background. See Chapter 11 for more details.

`keepvalues` ¶

The optional argument `keepvalues` tells web2py what to do when a form is accepted and there is no redirection, so the same form is displayed again. By default the form is cleared. If `keepvalues` is set to `True`, the form is pre-populated with the previously inserted values. This is useful when you have a form that is meant to be used repeatedly to insert several similar records. If the `dbio` argument is set to `False`, web2py will not perform any database insert or update after accepting the form. If `hideerror` is set to `True` and the form contains errors, these are not displayed when the form is returned to the client (it will be up to you to display them from `formulario.errors` in some other way, depending on the customizations and code of your app). The `onvalidation` argument is explained below.

`onvalidation` ¶

The `onvalidation` argument can be `None`, or it can be a function that takes the form and returns nothing. Such a function is called, receiving the form as its argument, right after validation passes (i.e. the form validates) and before anything else happens. This technique has multiple purposes. It can be used, for example, to perform additional checks on the form and possibly add error messages. It can also be used to compute the values of some fields based on the values of other fields. It can be used to trigger some side effect (such as sending an email) before the record is created or updated.
Here is an example:

```
db.define_table('numeros',
    Field('a', 'integer'),
    Field('b', 'integer'),
    Field('c', 'integer', readable=False, writable=False))

def procesar_formulario(formulario):
    c = formulario.vars.a * formulario.vars.b
    if c < 0:
        formulario.errors.b = 'a*b no puede ser negativo'
    else:
        formulario.vars.c = c

def insertar_numeros():
    formulario = SQLFORM(db.numeros)
    if formulario.process(onvalidation=procesar_formulario).accepted:
        session.flash = 'registro insertado'
        redirect(URL())
    return dict(formulario=formulario)
```

# Detecting record changes¶

When you fill a form to edit an existing database record, there is a chance that another user may be editing the same record at the same time. So when we save the record we want to check for possible conflicts. This can be done as follows:

```
db.define_table('perro', Field('nombre'))

def modificar_perro():
    perro = db.perro(request.args(0)) or redirect(URL('error'))
    formulario = SQLFORM(db.perro, perro)
    formulario.process(detect_record_change=True)
    if formulario.record_changed:
        # do something here
        pass
    elif formulario.accepted:
        # do something else
        pass
    else:
        # do nothing
        pass
    return dict(formulario=formulario)
```

# Forms and redirection¶

A common way to use forms is via self-submission, so that the submitted variables are processed by the same action that generated the form. Once the form is accepted, the same page is usually displayed again (which is what we have done here only to keep the examples simple). It is more common, however, to redirect the visitor to a "next" page. In the new version of the controller example, the form action sets `session.flash = 'formulario aceptado'` and calls `redirect(URL('next'))` once the form is accepted, while the target action simply renders the next page:

```
def next():
    return dict()
```

To set a flash message on the next page instead of the current one you must use `session.flash` instead of `response.flash`; web2py turns the former into the latter after the redirection. Note that using `session.flash` requires that you do not use `session.forget()`.

# Multiple forms per page¶

The content of this section applies to both `FORM` and `SQLFORM` objects. It is possible to have multiple forms per page, but you must allow web2py to tell them apart. If they are derived by `SQLFORM` from different tables, web2py gives them different names automatically; otherwise you must give each of them an explicit form name. Here is an example:

```
def dos_formularios():
    formulario1 = FORM(INPUT(_name='nombre', requires=IS_NOT_EMPTY()),
                       INPUT(_type='submit'))
    formulario2 = FORM(INPUT(_name='nombre', requires=IS_NOT_EMPTY()),
                       INPUT(_type='submit'))
    if formulario1.process(formname='formulario_uno').accepted:
        response.flash = 'formulario uno aceptado'
    if formulario2.process(formname='formulario_dos').accepted:
        response.flash = 'formulario dos aceptado'
    return dict(formulario1=formulario1, formulario2=formulario2)
```

When the visitor submits an empty formulario1, only that form displays an error message; if the visitor submits an empty formulario2, only formulario2 displays one.

# Sharing forms¶

The content of this section applies to both `FORM` and `SQLFORM` objects. What we discuss here is possible but not recommended, since it is always good practice to use self-submitting forms.
Sometimes, however, you have no choice, because the action that sends the form and the action that receives it belong to different applications. It is possible to generate a form that submits to a different action. This is done by specifying the URL of the processing action in the attributes of the `FORM` or `SQLFORM` object. For example:

```
formulario = FORM(INPUT(_name='nombre', requires=IS_NOT_EMPTY()),
                  INPUT(_type='submit'), _action=URL('pagina_dos'))

def pagina_uno():
    return dict(formulario=formulario)

def pagina_dos():
    if formulario.process(session=None, formname=None).accepted:
        response.flash = 'formulario aceptado'
    else:
        response.flash = 'hubo un error en el formulario'
    return dict()
```

Notice that since both actions, "pagina_uno" and "pagina_dos", use the same form, we have defined it only once, outside of any action, in order not to repeat ourselves. The common portion of code at the beginning of a controller is executed every time, before control is handed to the invoked action. Since "pagina_uno" does not call `process` (nor `accepts`), the form has no name and no key, so you must pass `session=None` and set `formname=None` in `process`, or the form will not validate when "pagina_dos" receives it.

# Adding buttons to FORMs¶

Usually a form comes with a single submit button. It is common to want to add a "back" button that, instead of submitting the form, directs the visitor to a different page. This can be done with the `add_button` method:

```
formulario.add_button('Volver', URL('otra_pagina'))
```

You can add more than one button to a form. The arguments of `add_button` are the value of the button (i.e. the text displayed on it) and the URL it links to.

# More about manipulation of FORMs¶

As discussed in the chapter on Views, a FORM is an HTML helper. Helpers can be manipulated as Python lists and as dictionaries; this allows building up and modifying forms on the fly.

`SQLFORM` ¶

We now move to the next level by providing the application with a model:

```
db = DAL('sqlite://storage.sqlite')
db.define_table('persona', Field('nombre', requires=IS_NOT_EMPTY()))
```

Modify the controller as follows:

```
def mostrar_formulario():
    formulario = SQLFORM(db.persona)
    if formulario.process().accepted:
        response.flash = 'formulario aceptado'
    elif formulario.errors:
        response.flash = 'el formulario tiene errores'
    else:
        response.flash = 'por favor complete el formulario'
    return dict(formulario=formulario)
```

The view does not need to be changed. In the new controller you do not need to build a `FORM`, since the `SQLFORM` constructor built one from the table `db.persona` defined in the model. This new form, when serialized, looks like this:

```
<form enctype="multipart/form-data" action="" method="post">
  <table>
    <tr id="persona_nombre__row">
      <td><label id="persona_nombre__label" for="persona_nombre">Tu nombre: </label></td>
      <td><input type="text" class="string" name="nombre" value="" id="persona_nombre" /></td>
      <td></td>
    </tr>
    <tr id="submit_record__row">
      <td></td>
      <td><input value="Submit" type="submit" /></td>
      <td></td>
    </tr>
  </table>
  <input value="9038845529" type="hidden" name="_formkey" />
  <input value="persona" type="hidden" name="_formname" />
</form>
```

The automatically generated form is more complex than the previous low-level form.
First of all, it contains a table of html rows, and each row has three columns. The first column contains the field labels (as defined in `db.persona`), the second column contains the input fields (and, in case of errors, the error messages), and the third column is optional and therefore empty (it can be populated via the `col3` argument of the `SQLFORM` constructor). All tags in the form have names derived from the table and field names. This makes it easy to customize the form using CSS and JavaScript; this is covered in more detail in Chapter 11.

More important is the fact that now the `accepts` method does much more work for you. As in the previous case, it performs validation of the input, but additionally, if the input passes validation, it also performs the database insert of the new record and stores in `formulario.vars.id` the unique id of the new record.

A `SQLFORM` object also deals automatically with "upload" fields: it saves uploaded files in the "uploads" folder (after having renamed them safely to avoid conflicts and prevent directory traversal attacks) and stores their (new) names in the corresponding database field. Once the form is processed, the new filename is available in

```
formulario.vars.filename
```

so it can easily be retrieved after it has been stored on the server.

A `SQLFORM` performs conversions by default, such as displaying "boolean" values as checkboxes and "text" fields as textareas. Fields whose values are restricted to a specific set (via a list or a set of database records) are rendered as drop-down lists, and "upload" fields include links that allow downloading the uploaded files. "blob" fields are hidden by default, since they are normally handled by special means, as we will see later. Consider, for example, the following model:

```
db.define_table('persona',
    Field('nombre', requires=IS_NOT_EMPTY()),
    Field('casado', 'boolean'),
    Field('sexo', requires=IS_IN_SET(['Masculino', 'Femenino', 'Otro'])),
    Field('perfil', 'text'),
    Field('imagen', 'upload'))
```

In this case, `SQLFORM(db.persona)` generates a form with a checkbox for "casado", a drop-down list for "sexo", a textarea for "perfil" and a file input for "imagen". The `SQLFORM` constructor allows various customizations, such as displaying only a subset of the fields, changing the labels, adding values to the optional third column, or creating UPDATE and DELETE forms, as opposed to INSERT forms. `SQLFORM` is the single biggest time-saver object in the web2py API. The class `SQLFORM` is defined in "gluon/sqlhtml.py". It can easily be extended by overriding its `xml` method, the method that serializes the object, to change its output; a sketch follows below.
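For example, here is a minimal sketch of such an extension (the subclass and the wrapping DIV are hypothetical, for illustration only):

```
from gluon.html import DIV, XML
from gluon.sqlhtml import SQLFORM

class MiSQLFORM(SQLFORM):
    # reuse the standard serialization and wrap it in an extra DIV
    def xml(self):
        return DIV(XML(SQLFORM.xml(self)), _class='mi-formulario').xml()

# used exactly like SQLFORM:
# formulario = MiSQLFORM(db.persona)
```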
The complete signature of the `SQLFORM` constructor is the following:

```
SQLFORM(table, record=None, deletable=False,
        linkto=None, upload=None, fields=None, labels=None,
        col3={}, submit_button='Submit', delete_label='Check to delete:',
        showid=True, readonly=False, comments=True, keepopts=[],
        ignore_rw=False, record_id=None, formstyle='table3cols',
        buttons=['submit'], separator=': ', **attributes)
```

* The optional second argument turns the INSERT form into an UPDATE form for the specified record (see the next section).
* If `deletable` is set to `True`, the UPDATE form displays a "Check to delete" checkbox. The label for this field is set via the `delete_label` argument.
* `submit_button` sets the text of the submit button.
* `id_label` sets the label of the record "id".
* The "id" of the record is not shown if `showid` is set to `False`.
* `fields` is an optional list of field names that you want to display. If a list is provided, only the fields in the list are displayed. For example: `fields = ['nombre']`
* `labels` is a dictionary of field labels. The dictionary keys are field names and the corresponding values are what gets displayed as labels. If a label is not given, web2py derives it from the field name (it capitalizes the first letters of the field name and replaces underscores with spaces). For example:

```
labels = {'nombre':'Tu nombre completo:'}
```

* `col3` is a dictionary of values for the third column. For example:

```
col3 = {'nombre':A('¿Qué es esto?', _href='http://www.google.com/search?q=define:name')}
```

* `linkto` and `upload` are optional URLs to user-defined controllers that allow the form to deal with reference fields. This is discussed in more detail later in this chapter.
* `readonly`. If set to True, displays the form as read-only.
* `comments`. If set to False, does not display the col3 comments.
* `ignore_rw`. Normally, for a create or update form, only the fields marked as writable=True are shown (and for read-only forms, only those marked readable=True). Setting `ignore_rw=True` causes those constraints to be ignored and all fields to be displayed. This is mostly used in the appadmin interface to display all fields for each table, overriding what the model says.
* `formstyle` determines the style to be used when serializing the form in html. It can be "table3cols" (the default), "table2cols" (one row for the label and one row for each input field), "ul" (makes an unordered list of input fields), or "divs" (renders the form using css-friendly divs, for advanced customization). `formstyle` can also be a function that takes the parameters id_registro, etiqueta_campo, widget_campo and comentario_campo and returns a TR() object.
* `buttons` is a list of `INPUT`s or `TAG.BUTTON`s (though technically it could be any combination of helpers) that will be added to a DIV where the submit button would go.
* `separator` sets the string that separates the form labels from the form input fields.
* Optional `attributes` are arguments starting with an underscore that you want to pass to the `FORM` tag that the `SQLFORM` object renders. Examples are:

```
_action = '.'
_method = 'POST'
```
There is a special `hidden` attribute. When a dictionary is passed as `hidden`, its items are translated into "hidden" INPUT fields (see the examples for the `FORM` helper in Chapter 5).

```
formulario = SQLFORM(....,hidden=...)
```

causes, as expected, the hidden fields to be passed with the submission. By default,

```
formulario.accepts(...)
```

will not read the hidden fields and will not place them into formulario.vars. The reason is security: hidden fields can be tampered with in unexpected ways. So you have to explicitly pass hidden field values from the request to the form:

```
formulario.vars.a = request.vars.a
formulario = SQLFORM(..., hidden=dict(a='b'))
```

`SQLFORM` and `insert` / `update` / `delete` ¶

`SQLFORM` creates a new database record when the form is accepted. Assuming `formulario = SQLFORM(db.prueba)`, the id of the newly created record is accessible in `formulario.vars.id`. If you pass a record as the optional second argument to the `SQLFORM` constructor, the form becomes an UPDATE form for that record. This means that when the form is submitted, the existing record is updated and no new record is inserted. If you set the argument `deletable=True`, the UPDATE form displays a "Check to delete" checkbox. If it is checked, the record is deleted. If the form has been submitted with the checkbox checked, the attribute `formulario.deleted` is set to `True`.

You can modify the controller of the previous example so that when an integer is passed as an extra argument in the URL path, as in:

```
/prueba/default/mostrar_formulario/2
```

and a record with that id exists, the `SQLFORM` becomes an update/delete form for the record. In the reconstructed action sketched at the end of this section, the second line finds the record, the third line builds the update/delete form, and the `process` call performs the usual validation and processing. An update form is very similar to a create form, except that it comes pre-populated with the current record and it previews images. By default `deletable = True`, which means the update form will display a "delete record" option.

Edit forms also contain a hidden INPUT field with `name="id"` that is used to identify the record. This id is also stored server-side for additional security and, if the visitor tampers with the value of this field, the UPDATE is not performed and web2py raises a SyntaxError, "user is tampering with form".

When a Field is marked with `writable=False`, the field is not shown in create forms, and it is shown read-only in update forms. When a field is marked as `writable=False` and `readable=False`, it is not shown at all, not even in update forms. Forms created with

```
formulario = SQLFORM(...,ignore_rw=True)
```

ignore the `readable` and `writable` attributes and always show all fields. Forms in `appadmin` also ignore them by default.
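Putting the pieces together, here is a reconstruction of the update/delete action referred to above:

```
def mostrar_formulario():
    registro = db.persona(request.args(0)) or redirect(URL('index'))
    formulario = SQLFORM(db.persona, registro, deletable=True)
    if formulario.process().accepted:
        response.flash = 'formulario aceptado'
    elif formulario.errors:
        response.flash = 'el formulario tiene errores'
    return dict(formulario=formulario)
```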
Forms created with

```
formulario = SQLFORM(tabla, id_registro, readonly=True)
```

always show all fields in read-only mode, and they cannot be submitted and processed.

`SQLFORM` in HTML¶

There are times when you want to use `SQLFORM` to take advantage of its form generation and processing, but you need a level of customization of the form in HTML that you cannot achieve with the parameters of the `SQLFORM` object, so you have to design the form using HTML. Edit the previous controller and add a new action:

```
def mostrar_formulario_manual():
    formulario = SQLFORM(db.persona)
    if formulario.process(session=None, formname='prueba').accepted:
        response.flash = 'formulario aceptado'
    elif formulario.errors:
        response.flash = 'el formulario tiene errores'
    else:
        response.flash = 'por favor completa el formulario'
    # note that the form instance is not passed to the view
    return dict()
```

and insert the form in the associated view "default/mostrar_formulario_manual.html":

```
{{extend 'layout.html'}}
<form>
  <ul>
    <li>Tu nombre es <input name="nombre" /></li>
  </ul>
  <input type="submit" />
  <input type="hidden" name="_formname" value="prueba" />
</form>
```

Notice that the action does not return the form, because it does not need to pass it to the view. The view contains a form created manually in HTML. The form contains a hidden "_formname" field whose value must match the `formname` argument of `process` (or of `accepts`) in the action. web2py uses the form name, in case there are multiple forms on the same page, to determine which one was submitted. If the page contains a single form, you can set `formname=None` and omit the hidden field in the view. `formulario.accepts` will look inside `request.vars` for values that match fields in the database table `db.persona`. These fields are declared in the HTML in the format:

```
<input name="el_nombre_del_campo_va_aquí" />
```

Note that in the example given, the form variables will be passed in the URL as a query string, since the form does not specify a method. If you do not want this to happen, you have to specify the `POST` method. Moreover, if the form includes an upload field, you have to configure the form to allow it. Here, both options are set at once:

```
<form enctype="multipart/form-data" method="post">
```

`SQLFORM` and uploads¶

Fields of type "upload" are special. They are rendered as INPUT fields with the attribute `type="file"`. Unless otherwise specified, the uploaded file is streamed in using a buffer and stored under the "uploads" folder of the application, using a new safe name assigned automatically. The name of this file is then saved into the field of type upload. Consider, as an example, the following model:

```
db.define_table('persona',
    Field('nombre', requires=IS_NOT_EMPTY()),
    Field('imagen', 'upload'))
```

You can use the same "mostrar_formulario" controller action shown above. When you insert a new record, the form lets you browse for a file on your system. Choose, for example, a jpg image. The file is uploaded and stored on the server as:

```
applications/prueba/uploads/persona.imagen.XXXXX.jpg
```

where "XXXXX" is a random identifier for the file assigned by web2py. Notice that, by default, the original filename of an uploaded file is Base16-encoded and used to build the new name for the file.
This name is retrieved by default by the "download" action and used to set the content disposition header to the original filename. Only its extension is preserved. This is a security requirement, since the filename could contain special characters that would allow a visitor to perform directory traversal attacks or other malicious operations. The new filename is stored in

```
formulario.vars.imagen
```

When editing the record using an UPDATE form, it would be nice to display a link to the existing uploaded file, and web2py provides a way to do it. If you pass a URL to the `SQLFORM` constructor via the upload argument, web2py uses the action at that URL to download the file. Consider the following actions:

```
def mostrar_formulario():
    registro = db.persona(request.args(0)) or redirect(URL('index'))
    formulario = SQLFORM(db.persona, registro, deletable=True,
                         upload=URL('download'))
    if formulario.process().accepted:
        response.flash = 'formulario aceptado'
    elif formulario.errors:
        response.flash = 'el formulario tiene errores'
    return dict(formulario=formulario)
```

(the "download" action itself is sketched at the end of this section). Now, insert a new record at the URL:

```
http://127.0.0.1:8000/prueba/default/mostrar_formulario
```

Upload an image, submit the form, and then edit the newly created record by visiting:

```
http://127.0.0.1:8000/prueba/default/mostrar_formulario/3
```

(here we assume the latest record has id=3). The form will display a preview of the image. When serialized, this form generates the following HTML:

```
<td><label id="persona_imagen__label" for="persona_imagen">Imagen: </label></td>
<td><div><input type="file" id="persona_imagen" class="upload" name="imagen"
/>[<a href="/prueba/default/download/persona.imagen.0246683463831.jpg">archivo</a>|
<input type="checkbox" name="imagen__delete" />delete]</div></td><td></td></tr>
<tr id="delete_record__row"><td><label id="delete_record__label" for="delete_record"
>Marca aquí para eliminar:</label></td><td><input type="checkbox" id="delete_record"
class="delete" name="delete_this_record" /></td>
```

which contains a link to allow downloading the uploaded file, and a checkbox to remove the file from the database record, thereby setting the "imagen" field to NULL.

Why is this mechanism exposed? Why would you want to write the download function yourself? Because you may want to enforce some access control in that function. See Chapter 9 for an example.

Normally uploaded files are stored in the application's "uploads/" folder, but you can specify an alternative location:

```
Field('imagen', 'upload', uploadfolder='...')
```

In most operating systems, filesystem access can become slow when there are many files in the same folder. If you plan to upload more than 1000 files, you can ask web2py to organize the uploads into subfolders:

```
Field('imagen', 'upload', uploadseparate=True)
```

# Storing the original filename¶

web2py automatically stores the original filename inside the new UUID filename and retrieves it when the file is downloaded. Upon download, the original filename is stored in the Content-Disposition header of the HTTP response. This is all done transparently, without the need for any extra programming.
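The `upload=URL('download')` argument used above presumes that the application exposes a download action. The standard one, as provided by web2py's scaffolding application, is:

```
def download():
    # streams a previously uploaded file back to the client, restoring
    # the original filename in the Content-Disposition header
    return response.download(request, db)
```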
Sometimes, however, you may want to store the original filename in a database field. In that case, you need to modify the model and add a field in which to store it:

```
db.define_table('persona',
    Field('nombre', requires=IS_NOT_EMPTY()),
    Field('nombre_archivo'),
    Field('imagen', 'upload'))
```

then you need to modify the controller to handle it:

```
def mostrar_formulario():
    registro = db.persona(request.args(0)) or redirect(URL('index'))
    url = URL('download')
    formulario = SQLFORM(db.persona, registro, deletable=True,
                         upload=url, fields=['nombre', 'imagen'])
    if request.vars.imagen!=None:
        formulario.vars.nombre_archivo = request.vars.imagen.filename
    if formulario.process().accepted:
        response.flash = 'formulario aceptado'
    elif formulario.errors:
        response.flash = 'el formulario tiene errores'
    return dict(formulario=formulario)
```

Notice that the `SQLFORM` does not display the "nombre_archivo" field. The "mostrar_formulario" action moves the filename of the uploaded file from `request.vars.imagen` into

```
formulario.vars.nombre_archivo
```

so that it gets processed by `accepts` and stored in the database. The download function, before serving the file, checks in the database for the original filename and uses it in the Content-Disposition header.

`autodelete` ¶

Upon deleting a record with `SQLFORM`, the file(s) associated with the record are not physically deleted. The reason is that web2py does not know whether the same file is being used by another table or for another purpose. If you know it is safe to delete the actual file on the filesystem when the corresponding record is deleted, you can do the following:

```
db.define_table('imagen',
    Field('nombre', requires=IS_NOT_EMPTY()),
    Field('origen','upload',autodelete=True))
```

The `autodelete` attribute is `False` by default. When set to `True`, it makes sure web2py also deletes the file when the record is deleted.

# Links to referencing records¶

Now consider the case of two tables linked by a reference field. For example:

```
db.define_table('persona',
    Field('nombre', requires=IS_NOT_EMPTY()))
db.define_table('perro',
    Field('propietario', 'reference persona'),
    Field('nombre', requires=IS_NOT_EMPTY()))
db.perro.propietario.requires = IS_IN_DB(db, db.persona.id, '%(nombre)s')
```

A person has dogs, and each dog belongs to an owner, which is a person. The dog owner is required to reference a valid `db.persona.id`, displayed according to the format '%(nombre)s'. Let's use the appadmin interface for this application to add a few persons and their dogs. When editing an existing person, the appadmin UPDATE form shows a link to a page that lists the dogs belonging to that person. This behavior can be replicated using the `linkto` argument of the `SQLFORM` object. `linkto` has to point to the URL of a new action that receives a query string from the `SQLFORM` and lists the corresponding records, for example `SQLFORM(db.persona, registro, linkto=URL('listar_registros'))`. On the resulting page, the link is labeled "perro.propietario".
The name of this link can be changed via the `labels` argument of the `SQLFORM` constructor, for example:

```
labels = {'perro.propietario':"El perro de esta persona"}
```

If you click the link, it takes you to:

```
/prueba/default/listar_registros/perro?query=db.perro.propietario%3D%3D5
```

"listar_registros" is the specified action, with the name of the referencing table in `request.args(0)` and the SQL query string in `request.vars.query`. The query string in the URL contains the value "db.perro.propietario==5", conveniently url-encoded (web2py decodes it automatically when the URL is parsed). You can easily implement a very general "list records" action as follows:

```
import re

def listar_registros():
    REGEX = re.compile('^(\w+)\.(\w+)\.(\w+)\=\=(\d+)$')
    match = REGEX.match(request.vars.query)
    if not match:
        redirect(URL('error'))
    tabla, campo, id = match.group(2), match.group(3), match.group(4)
    registros = db(db[tabla][campo]==id).select()
    return dict(registros=registros)
```

with the associated "default/listar_registros.html" view:

```
{{extend 'layout.html'}}
{{=registros}}
```

When a set of records is returned by a select and serialized in a view, it is first converted into a SQLTABLE object (not the same as a Table) and then serialized into an HTML table, where each field corresponds to a column of the table.

# Prepopulating the form¶

It is always possible to prepopulate a form using the syntax:

```
formulario.vars.nombre = 'valorcampo'
```

Statements like this one must be inserted after the form declaration and before the form is accepted, whether or not the field (in this example "nombre") is explicitly displayed in the form.

# Adding extra form elements to `SQLFORM` ¶

Sometimes you may wish to add an extra element to your form after it has been created. For example, you may wish to add a checkbox which confirms that the user agrees with the terms and conditions of your website:

```
formulario = SQLFORM(db.tutabla)
mi_elemento_adicional = TR(LABEL('Estoy de acuerdo con el reglamento y las condiciones de uso del sitio'),
                           INPUT(_name='deacuerdo',value=True,_type='checkbox'))
formulario[0].insert(-1, mi_elemento_adicional)
```

The variable `mi_elemento_adicional` should be adapted to the formstyle. In this example, `formstyle='table3cols'` has been assumed. After submission, `formulario.vars.deacuerdo` will contain the status of the checkbox, which can then be used, for example, in an `onvalidation` function.

`SQLFORM` without database IO¶

There are times when you want to generate a form from a database table using `SQLFORM` and then validate it, as usual, but you do not want any automatic INSERT/UPDATE/DELETE in the database. This is the case, for example, when one of the fields needs to be computed from the values of other input fields. It is also the case when you need to perform additional validation on the inserted data that cannot be achieved via standard validators.
This can be done easily by breaking the following code:

```
formulario = SQLFORM(db.persona)
if formulario.process().accepted:
    response.flash = 'registro insertado'
```

into:

```
formulario = SQLFORM(db.persona)
if formulario.validate():
    ### deal with uploads explicitly
    formulario.vars.id = db.persona.insert(**dict(formulario.vars))
    response.flash = 'registro insertado'
```

The same can be done for UPDATE/DELETE forms by breaking:

```
formulario = SQLFORM(db.persona, registro)
if formulario.process().accepted:
    response.flash = 'registro actualizado'
```

into:

```
formulario = SQLFORM(db.persona, registro)
if formulario.validate():
    if formulario.deleted:
        db(db.persona.id==registro.id).delete()
    else:
        registro.update_record(**dict(formulario.vars))
    response.flash = 'registro actualizado'
```

In the case of a table including an "upload"-type field, say "nombredelcampo", both `process(dbio=False)` and `validate()` deal with the storage of the uploaded file as if `process(dbio=True)`, the default behavior, had been used. The name assigned by web2py to the uploaded file can be retrieved from:

```
formulario.vars.nombredelcampo
```

### Other types of forms¶

`SQLFORM.factory` ¶

There are cases when you want to generate forms as if you had a database table, but you do not want to touch any given table. You simply want to take advantage of the `SQLFORM` capability to generate a nice-looking CSS-friendly form, and perhaps perform file uploads and renaming. This can be done via a form `factory`. Here is an example where you generate the form, perform validation, upload a file and store everything in the `session`:

```
def formulario_con_factory():
    formulario = SQLFORM.factory(
        Field('tu_nombre', requires=IS_NOT_EMPTY()),
        Field('tu_imagen', 'upload'))
    if formulario.process().accepted:
        response.flash = 'formulario aceptado'
        session.your_name = formulario.vars.tu_nombre
        session.your_image = formulario.vars.tu_imagen
    elif formulario.errors:
        response.flash = 'el formulario tiene errores'
    return dict(formulario=formulario)
```

The Field objects used in the SQLFORM.factory() constructor are fully documented in the DAL chapter. A SQLFORM.factory() form can also be built from a list of fields assembled on the fly:

```
campos = []
campos.append(Field(...))
formulario=SQLFORM.factory(*campos)
```

The view "default/formulario_con_factory.html" simply extends the layout and renders the form with `{{=formulario}}`. You have to use an underscore instead of a space in field labels, or explicitly pass a dictionary of `labels` to `factory`, as you would for a `SQLFORM`. By default, `SQLFORM.factory` builds the form using html "id" attributes as if the form had been generated from a table called "no_table". To change this dummy table name, use the `table_name` parameter of factory:

```
formulario = SQLFORM.factory(...,table_name='otro_nombre_ficticio')
```

Changing the `table_name` is useful when you need to place two factory forms for the same dummy table on the same page; it prevents CSS conflicts.

# Uploading files with SQLFORM.factory¶

# One form for multiple tables¶

It often happens that you have two tables (for example 'cliente' and 'direccion') linked by a reference field, and you want to create a single form that allows inserting info about one client and its default address.
Here is how. The model:

```
db.define_table('cliente',
    Field('nombre'))
db.define_table('direccion',
    Field('cliente','reference cliente',
          writable=False,readable=False),
    Field('calle'),Field('ciudad'))
```

The controller:

```
def registrarse():
    formulario=SQLFORM.factory(db.cliente,db.direccion)
    if formulario.process().accepted:
        id = db.cliente.insert(**db.cliente._filter_fields(formulario.vars))
        formulario.vars.cliente=id
        id = db.direccion.insert(**db.direccion._filter_fields(formulario.vars))
        response.flash='Gracias por completar el formulario'
    return dict(formulario=formulario)
```

Notice the SQLFORM.factory (it makes ONE form using the fields of both tables, and it also inherits their validators). On form acceptance it performs two database inserts, some of the data going into one table and the rest into the other. This only works when the tables don't have field names in common.

# Confirmation forms¶

Often you need a form with a confirmation checkbox, and the form should be accepted only if the checkbox is checked. The form may have additional options linking to other web pages. web2py provides a simple way to do this with `FORM.confirm`, for example `formulario = FORM.confirm('¿Estás seguro?')`; the form is accepted when `formulario.accepted` is `True`. Notice that the confirmation form does not need, and must not call, `.accepts` or `.process`, because this is done internally. You can add buttons with links to the confirmation form by passing a dictionary of the form `{'valor': 'link'}`, for example `formulario = FORM.confirm('¿Estás seguro?', {'Volver': URL('otra_pagina')})`.

# Form to edit a dictionary¶

Imagine a system that stores configuration options in a dictionary,

```
configuracion = dict(color='negro', idioma='Español')
```

and you need a form to let the visitor modify this dictionary. This can be done as follows:

```
formulario = SQLFORM.dictform(configuracion)
if formulario.process().accepted:
    configuracion.update(formulario.vars)
```

The form displays one INPUT field for each item in the dictionary. It uses the dictionary keys as the INPUT names and labels, and the current values to infer the types (string, integer, float, datetime, boolean). This works great, but it leaves to you the logic of making the edited configuration persistent. For example, you may want to store `configuracion` in a session:

```
session.configuracion = session.configuracion or dict(color='negro', idioma='Español')
formulario = SQLFORM.dictform(session.configuracion)
if formulario.process().accepted:
    session.configuracion.update(formulario.vars)
```

### CRUD¶

One of the more recent additions to web2py is the Create/Read/Update/Delete (CRUD) API, built on top of SQLFORM. CRUD creates a SQLFORM, but it simplifies the coding because it incorporates the creation of the form, the processing of the submission, the notification, and the redirection, all in a single function. The first thing to notice is that CRUD differs from the other web2py APIs we have used so far because it is not already exposed: it must be imported explicitly. It also has to be linked to a specific database. For example:

```
from gluon.tools import Crud
crud = Crud(db)
```

The `crud` object defined above provides the following API:

* `crud.tables()` returns a list of tables defined in the database.
* `crud.create(db.nombredelatabla)` returns a create form for the table nombredelatabla.
* `crud.read(db.nombredelatabla, id)` returns a read-only form for the table and record id.
* `crud.update(db.nombredelatabla, id)` returns an update form for the table and record id.
* `crud.delete(db.nombredelatabla, id)` deletes the record.
* `crud.select(db.nombredelatabla, consulta)` returns a list of records selected from the table.
* `crud.search(db.nombredelatabla)` returns a tuple (formulario, registros) where formulario is a search form and registros is a list of records based on the submitted search form.
* `crud()` returns one of the above based on `request.args()`.

For example, the following action:

```
def data():
    return dict(formulario=crud())
```

exposes the following URLs:

```
http://.../[app]/[controlador]/data/tables
http://.../[app]/[controlador]/data/create/[nombredelatabla]
http://.../[app]/[controlador]/data/read/[nombredelatabla]/[id]
http://.../[app]/[controlador]/data/update/[nombredelatabla]/[id]
http://.../[app]/[controlador]/data/delete/[nombredelatabla]/[id]
http://.../[app]/[controlador]/data/select/[nombredelatabla]
http://.../[app]/[controlador]/data/search/[nombredelatabla]
```

However, the following action:

```
def crear_nombredelatabla():
    return dict(formulario=crud.create(db.nombredelatabla))
```

only exposes the create functionality:

```
http://.../[app]/[controlador]/crear_nombredelatabla
```

while the following action:

```
def actualizar_nombredelatabla():
    return dict(formulario=crud.update(db.nombredelatabla, request.args(0)))
```

only exposes the update functionality:

```
http://.../[app]/[controlador]/actualizar_nombredelatabla/[id]
```

and so on. The behavior of CRUD can be customized in two ways: by setting attributes of the `crud` object, or by passing extra parameters to its various methods.

# Configuration¶

Here is a complete list of current CRUD attributes, their default values, and what they do. To enforce authentication on all crud forms, set `crud.settings.auth = auth`; its use is explained in Chapter 9. To specify the controller that defines the `data` function which returns the `crud` object:

```
crud.settings.controller = 'default'
```

To specify the URL to redirect to after a successful "create":

```
crud.settings.create_next = URL('index')
```

To specify the URL to redirect to after a successful "update":

```
crud.settings.update_next = URL('index')
```

To specify the URL to redirect to after a successful "delete":

```
crud.settings.delete_next = URL('index')
```

To specify the URL to be used for linking uploaded files:

```
crud.settings.download_url = URL('download')
```

To specify extra functions to be executed after the standard validation procedures for `crud.create` forms:

```
crud.settings.create_onvalidation = StorageList()
```

`StorageList` is the same as the `Storage` object, both defined in "gluon/storage.py", except that the former defaults to `[]` instead of `None`. It allows the following syntax:

```
crud.settings.create_onvalidation.minombredetabla.append(lambda formulario:....)
```
# Messages¶

Here is a list of customizable messages:

```
crud.messages.submit_button = 'Enviar'
```

sets the text of the "submit" button for both create and update forms.

```
crud.messages.delete_label = 'Marca para eliminar:'
```

sets the label of the "delete" button in "update" forms.

```
crud.messages.record_created = 'Registro creado'
```

sets the flash message on successful record creation.

```
crud.messages.record_updated = 'Registro actualizado'
```

sets the flash message on successful record update.

```
crud.messages.record_deleted = 'Registro eliminado'
```

sets the flash message on successful record deletion.

```
crud.messages.update_log = 'Registro %(id)s actualizado'
```

sets the log message on successful record update.

```
crud.messages.create_log = 'Registro %(id)s creado'
```

sets the log message on successful record creation.

```
crud.messages.read_log = 'Registro %(id)s leído'
```

sets the log message on successful record access.

```
crud.messages.delete_log = 'Registro %(id)s borrado'
```

sets the log message on successful record deletion.

Notice that `crud.messages` belongs to the class `gluon.storage.Message`, which is similar to `gluon.storage.Storage`; the difference is that the former automatically translates its values, without the need for the `T` operator.

Log messages are used if and only if CRUD is connected to Auth, as discussed in Chapter 9. The events are logged in the Auth table "auth_event".
# Methods¶

The behavior of CRUD methods can also be customized on a per-call basis. Here are their signatures:

```
crud.tables()
crud.create(tabla, next, onvalidate, onaccept, log, message)
crud.read(tabla, registro)
crud.update(tabla, registro, next, onvalidate, onaccept, ondelete, log, message, deletable)
crud.delete(tabla, id_registro, next, message)
crud.select(tabla, query, fields, orderby, limitby, headers, **attr)
crud.search(tabla, query, queries, query_labels, fields, field_labels, zero, showall, chkall)
```

* `tabla` is a DAL table or a tablename the method should act on.
* `registro` and `id_registro` are the id of the record the method should act on.
* `next` is the URL to redirect to after the form is processed. If the URL contains the substring "[id]", it will be replaced by the id of the record just processed by the form.
* `onvalidate` has the same function as SQLFORM(..., onvalidation).
* `onaccept` is a function to be called after the form submission is accepted and acted upon, but before the redirection.
* `log` is the log message. Log messages in CRUD see the variables in the `formulario.vars` dictionary, for example "%(id)s".
* `message` is the flash message upon form acceptance.
* `ondelete` is called in place of `onaccept` when a record is deleted via an "update" form.
* `deletable` determines whether the "update" form should have a delete option.
* `query` is the query to be used to select records.
* `fields` is a list of fields to be selected.
* `orderby` determines the order in which records should be selected (see Chapter 6 for details).
* `limitby` determines the range of selected records that should be displayed (see Chapter 6 for details).
* `headers` is a dictionary with the table header names.
* `queries` is a list like `['equals', 'not equal', 'contains']` containing the allowed methods in the search form.
* `query_labels` is a dictionary like `query_labels=dict(equals='Igual')` giving names to the search methods.
* `fields` (again, for `crud.search`) is a list of fields to be listed in the search widget.
* `field_labels` is a dictionary mapping field names to labels.
* `zero` defaults to "elige uno"; it is used as the default option of the drop-down in the search widget.
* `showall` set it to True if you want the rows of the query to be shown the first time the action is called (available since 1.98.2).
* `chkall` set it to True if you want all the checkbox options of the search form enabled by default (available since 1.98.2).
Here is an example of usage in a single controller function:

```
## assuming a table defined with db.define_table('persona', Field('nombre'))
def gente():
    formulario = crud.create(db.persona, next=URL('index'),
                             message=T("registro creado"))
    personas = crud.select(db.persona, fields=['nombre'],
                           headers={'persona.nombre': 'Nombre'})
    return dict(formulario=formulario, personas=personas)
```

Here is another fairly generic controller function that lets you search, create and edit any record from any table, where the tablename is a parameter passed as request.args(0):

```
def administrar():
    tabla = db[request.args(0)]
    formulario = crud.update(tabla, request.args(1))
    tabla.id.represent = lambda id, registro: \
        A('Editar:', id, _href=URL(args=(request.args(0), id)))
    busqueda, registros = crud.search(tabla)
    return dict(formulario=formulario, busqueda=busqueda, registros=registros)
```

Notice the line `tabla.id.represent=...`, which tells web2py to change the representation of the id field and display a link to the page itself instead, passing the id as request.args(1), which turns the create page into an update page.

Both SQLFORM and CRUD provide a utility for record versioning, i.e. for keeping old versions of database records. If you have a table (db.mitabla) that needs full revision history, you can just do:

```
formulario = SQLFORM(db.mitabla, miregistro).process(onsuccess=auth.archive)
```

or

```
formulario = crud.update(db.mitabla, miregistro, onaccept=auth.archive)
```

`auth.archive` defines a new table called db.mitabla_archive (the name is built from the name of the table it refers to) and, on update, stores a copy of the record (with its pre-update state) in the archive table, including a reference to the current record.

Because the record actually gets updated (only its previous state is archived), references are never broken. This is all done transparently. If, for example, you want to access the archive table, you should define it in the model:

```
db.define_table('mitabla_archive',
    Field('current_record', 'reference mitabla'),
    db.mitabla)
```

Notice that the table extends `db.mitabla` (including all its fields), and adds a `current_record` reference to the current record.

`auth.archive` does not timestamp the stored record unless your original table has timestamp fields, for example:

```
db.define_table('mitabla',
    Field('creado_el', 'datetime',
          default=request.now, update=request.now, writable=False),
    Field('creado_por', 'reference auth_user',
          default=auth.user_id, update=auth.user_id, writable=False))
```

There is nothing special about these fields and you may give them any name you like. They are filled in before the record is archived, and are archived with each copy of the record.

The archive table name and/or the reference field can be changed like this:

```
db.define_table('mihistoria',
    Field('parent_record', 'reference mitabla'),
    db.mitabla)
## ...
formulario = SQLFORM(db.mitabla, miregistro)
formulario.process(onsuccess=lambda formulario: auth.archive(
    formulario, archive_table=db.mihistoria, current_record='parent_record'))
```
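Once the archive table is defined in the model as above, old versions can be queried like any other table. A minimal sketch (record id 5 is a hypothetical example, and `creado_el` is the timestamp field defined above):

```python
# fetch all archived versions of record 5 of db.mitabla, oldest first
versiones = db(db.mitabla_archive.current_record == 5).select(
    orderby=db.mitabla_archive.creado_el)
for version in versiones:
    print version.id, version.creado_el
```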
### Custom forms¶

If a form is created with SQLFORM, SQLFORM.factory or CRUD, there are multiple ways it can be embedded in a view, allowing multiple degrees of customization.

Consider for example the following model:

```
db.define_table('imagen',
    Field('nombre', requires=IS_NOT_EMPTY()),
    Field('datos', 'upload'))
```

and an upload action:

```
def subir_imagen():
    return dict(formulario=SQLFORM(db.imagen).process())
```

The simplest way to embed the form in the view for `subir_imagen` is

`{{=formulario}}`

This results in a standard table layout. If you wish to use a different layout, you can break the form into components:

```
{{=formulario.custom.begin}}
Nombre de la imagen: <div>{{=formulario.custom.widget.nombre}}</div>
Archivo de la imagen: <div>{{=formulario.custom.widget.datos}}</div>
Clic aquí para subir: {{=formulario.custom.submit}}
{{=formulario.custom.end}}
```

where `formulario.custom.widget[nombredelcampo]` gets serialized into the proper widget for the field. If the form is submitted and contains errors, they are appended below the widgets, as usual.

The form of the example above is shown in the image below.

We could have achieved a similar effect without a custom form, using:

```
SQLFORM(..., formstyle='table2cols')
```

or, in the case of CRUD forms, with the following parameter:

```
crud.settings.formstyle = 'table2cols'
```

Other possible `formstyle` values are "table3cols" (the default), "divs" and "ul".

If you do not wish to use the widgets serialized by web2py, you can replace them with HTML. There are some variables that are useful for this purpose:

* `form.custom.label[nombredelcampo]` contains the label for the field.
* `form.custom.comment[nombredelcampo]` contains the comment for the field.
* `form.custom.dspval[nombredelcampo]` form-type and field-type dependent display value of the field.
* `form.custom.inpval[nombredelcampo]` form-type and field-type dependent values of the field, to be used in processing.

If your form has the `deletable=True` option, you should also insert

```
{{=form.custom.delete}}
```

to display the delete checkbox. It is important that you follow the conventions described below.

# CSS conventions¶

Tags in forms generated by SQLFORM, SQLFORM.factory and CRUD follow a strict CSS naming convention that can be used for further customization of the forms.

Given a table "mitabla" and a field "micampo" of type "string", it is rendered by default by a `SQLFORM.widgets.string.widget` that looks like this:

```
<input type="text" name="micampo" id="mitabla_micampo" class="string" />
```

Notice that:

* the class of the INPUT tag is the same as the type of the field. This is very important for the jQuery code in "web2py_ajax.html" to work. That code makes sure you can only type numbers in "integer" or "double" fields, and that "date" and "datetime" fields display a pop-up calendar.
* the id is the name of the class plus the name of the field, joined by an underscore. This allows you to uniquely refer to the field via, for example, `jQuery('#mitabla_micampo')`, and manipulate the field's stylesheet or bind actions to the field's events (focus, blur, keyup, etc.).
* the name is, as you would expect, the field name.

# Hiding errors¶

Occasionally, you may want to disable the automatic error reporting and display form errors in some place other than the default. That can be done easily.
* In the case of FORM and SQLFORM, pass `hideerror=True` to the `accepts` method.
* In the case of CRUD, set `crud.settings.hideerror=True`

You may also want to modify the views to display the errors (since they will no longer be displayed automatically). Here is an example where the errors are displayed above the form and not in it:

```
{{if formulario.errors:}}
  Tu envío de formulario contiene los siguientes errores:
  <ul>
  {{for nombredelcampo in formulario.errors:}}
    <li>{{=nombredelcampo}} error: {{=formulario.errors[nombredelcampo]}}</li>
  {{pass}}
  </ul>
  {{formulario.errors.clear()}}
{{pass}}
{{=formulario}}
```

The errors will be displayed as in the image below. This mechanism also works for custom forms.

### Validators¶

Validators are classes used to validate input fields (including forms generated from database tables).

Here is an example of using a validator with a `FORM`:

```
INPUT(_name='a', requires=IS_INT_IN_RANGE(0, 10))
```

Here is an example of how to require validation for a table field:

```
db.define_table('persona', Field('nombre'))
db.persona.nombre.requires = IS_NOT_EMPTY()
```

Validators are always assigned using the `requires` attribute of a field. A field can have a single validator or multiple validators; multiple validators must be placed in a list:

```
db.persona.nombre.requires = [IS_NOT_EMPTY(),
                              IS_NOT_IN_DB(db, 'persona.nombre')]
```

Normally validators are called automatically by the `accepts` and `process` functions of a `FORM` or other HTML helper object that contains a form. They are called in the order in which they are listed.

One can also call the validators of a field explicitly:

```
db.persona.nombre.validate(valor)
```

which returns a tuple `(valor, error)`, where `error` is `None` if the value passes validation (see the sketch at the end of this section).

Built-in validators have constructors that take an optional argument:

```
IS_NOT_EMPTY(error_message='no puede estar vacío')
```

`error_message` allows you to override the default error message of any validator.

Here is an example of a validator on a database table:

```
db.persona.nombre.requires = IS_NOT_EMPTY(error_message=T('¡Completa este campo!'))
```

where we have used the translation operator `T` to allow for internationalization of the message. Notice that default error messages are not translated.

Mind that the only validators that can be used with `list:` type fields are:

* `IS_IN_DB(..., multiple=True)`
* `IS_IN_SET(..., multiple=True)`
* `IS_NOT_EMPTY()`
* `IS_LIST_OF(...)`

The latter can be used to apply any validator to the individual items of the list.
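Here is the promised sketch of calling a field's validators explicitly; it assumes the `persona` table defined above, with its IS_NOT_EMPTY requirement:

```python
# validate() runs all validators attached to the field and
# returns (value, error); error is None when validation passes
valor, error = db.persona.nombre.validate('Max')
# valor == 'Max', error is None

valor, error = db.persona.nombre.validate('')
# error now contains the IS_NOT_EMPTY error message
if error:
    print error
```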
# Validators¶

`IS_ALPHANUMERIC`¶

This validator checks that a field value contains only characters in the ranges a-z, A-Z, or 0-9.

```
requires = IS_ALPHANUMERIC(error_message='¡Debe ser alfanumérico!')
```

`IS_DATE`¶

This validator checks that a field value contains a valid date in the specified format. It is good practice to specify the format using the translation operator, in order to support different formats in different locales.

```
requires = IS_DATE(format=T('%Y-%m-%d'),
                   error_message='¡Debe ser YYYY-MM-DD!')
```

`IS_DATE_IN_RANGE`¶

Works very much like the previous validator, but allows you to specify a range:

```
requires = IS_DATE_IN_RANGE(format=T('%Y-%m-%d'),
                            minimum=datetime.date(2008,1,1),
                            maximum=datetime.date(2009,12,31),
                            error_message='¡Debe ser YYYY-MM-DD!')
```

`IS_DATETIME`¶

This validator checks that a field value contains a valid datetime in the specified format. It is good practice to specify the format using the translation operator, in order to support different formats in different locales.

```
requires = IS_DATETIME(format=T('%Y-%m-%d %H:%M:%S'),
                       error_message='¡Debe ser YYYY-MM-DD HH:MM:SS!')
```

The following symbols can be used for the format string (this shows the symbol and an example string):

```
%Y  '1963'
%y  '63'
%d  '28'
%m  '08'
%b  'Aug'
%B  'August'
%H  '14'
%I  '02'
%p  'PM'
%M  '30'
%S  '59'
```

`IS_DATETIME_IN_RANGE`¶

Works very much like the previous validator, but allows you to specify a range:

```
requires = IS_DATETIME_IN_RANGE(format=T('%Y-%m-%d %H:%M:%S'),
                                minimum=datetime.datetime(2008,1,1,10,30),
                                maximum=datetime.datetime(2009,12,31,11,45),
                                error_message='¡Debe ser YYYY-MM-DD HH:MM:SS!')
```

For a full description of the % directives, see the IS_DATETIME validator above.

`IS_DECIMAL_IN_RANGE`¶

```
INPUT(_type='text', _name='name', requires=IS_DECIMAL_IN_RANGE(0, 10, dot="."))
```

It converts the input into a Python Decimal object, or generates an error if the decimal does not fall within the specified inclusive range. The comparison is made with Decimal arithmetic. The minimum and maximum limits can be None, meaning no lower or upper limit, respectively.

The `dot` argument is optional and allows you to localize the symbol used to separate the decimals.

`IS_EMAIL`¶

It checks that the field value looks like an email address. It does not try to verify the account by sending a message.

```
requires = IS_EMAIL(error_message='¡El mail no es válido!')
```

`IS_EQUAL_TO`¶

Checks whether the validated value is equal to a given value (which can also be a variable):

```
requires = IS_EQUAL_TO(request.vars.password,
                       error_message='Las contraseñas no coinciden')
```

`IS_EXPR`¶

Its first argument is a string containing a logical expression in terms of a variable value. It validates the field if the expression evaluates to `True`. For example:

```
requires = IS_EXPR('int(value)%3==0',
                   error_message='No es divisible por 3')
```

One should first check that the value is an integer so that an exception does not occur:

```
requires = [IS_INT_IN_RANGE(0, 100), IS_EXPR('value%3==0')]
```

`IS_FLOAT_IN_RANGE`¶

Checks that the field value is a floating point number within a definite range, `0 <= valor <= 100` in the following example:

```
requires = IS_FLOAT_IN_RANGE(0, 100, dot=".",
                             error_message='¡Demasiado pequeño o demasiado grande!')
```

The `dot` argument is optional and allows you to localize the symbol used to separate the decimals.
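Since validators are plain callables returning `(value, error)` tuples, the range validators above can be exercised directly from a web2py shell. A minimal sketch (the error text shown assumes the error_message from the examples above):

```python
>>> IS_FLOAT_IN_RANGE(0, 100)('26.5')
(26.5, None)        # accepted: the string is converted to a float, no error
>>> IS_INT_IN_RANGE(0, 100)('200')
('200', '¡Demasiado pequeño o demasiado grande!')  # rejected: original value plus error
>>> IS_DECIMAL_IN_RANGE(0, 10)('4.5')[0]
Decimal('4.5')      # converted to a Python Decimal object
```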
`IS_INT_IN_RANGE`¶

Checks that the field value is an integer within a definite range, `0 <= valor < 100` in the following example:

```
requires = IS_INT_IN_RANGE(0, 100,
                           error_message='¡Demasiado pequeño o demasiado grande!')
```

`IS_IN_SET`¶

Checks that the field values are in a set:

```
requires = IS_IN_SET(['a', 'b', 'c'], zero=T('Elige uno'),
                     error_message='Debe ser a, b o c')
```

The zero argument is optional and determines the text of the option selected by default, an option which is not accepted by the IS_IN_SET validator itself. If you do not want a default text, set `zero=None`.

The `zero` option was introduced in revision (1.67.1). It did not break backward compatibility in the sense that it did not break existing applications, but it did change their behavior, since previously there was no such option.

The elements of the set must always be strings unless this validator is preceded by `IS_INT_IN_RANGE` (which converts the value to int) or `IS_FLOAT_IN_RANGE` (which converts the value to float). For example:

```
requires = [IS_INT_IN_RANGE(0, 8),
            IS_IN_SET([2, 3, 5, 7],
                      error_message='Debe ser un número primo menor a 10')]
```

You may also use a dictionary or a list of tuples to make the drop-down list more descriptive:

```
#### Dictionary example:
requires = IS_IN_SET({'A':'Manzana','B':'Banana','C':'Cereza'}, zero=None)
#### List of tuples example:
requires = IS_IN_SET([('A','Manzana'),('B','Banana'),('C','Cereza')])
```

`IS_IN_SET` and multiple selections¶

The `IS_IN_SET` validator has an optional attribute `multiple=False`. If set to True, multiple values can be stored in one field. The field should be of type `list:integer` or `list:string`. Multiple references are handled automatically in create and update forms, but they are transparent to the DAL. We strongly suggest using the jQuery multiselect plugin to render multiple fields.

Note that when `multiple=True`, `IS_IN_SET` will accept `zero` or more values, i.e. it will accept the field when nothing has been selected. `multiple` can also be a tuple of the form `(a, b)`, where `a` and `b` are the minimum and (exclusive) maximum number of items that can be selected, respectively. A sketch follows this section.

`IS_LENGTH`¶

Checks if the length of a field's value fits between given boundaries. Works for both text and file inputs.

Its arguments are:

* maxsize: the maximum allowed length / size (defaults to 255)
* minsize: the minimum allowed length / size

Checking that a text string is shorter than 33 characters:

```
INPUT(_type='text', _name='nombre', requires=IS_LENGTH(32))
```

Checking that a password string is longer than 5 characters:

```
INPUT(_type='password', _name='nombre', requires=IS_LENGTH(minsize=6))
```

Checking that an uploaded file is between 1KB and 1MB:

```
INPUT(_type='file', _name='nombre', requires=IS_LENGTH(1048576, 1024))
```

For all field types except files, it checks the length of the value. In the case of files, the value is a `cookie.FieldStorage`, so it validates the length of the data in the file, which is the behavior one would intuitively expect.
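Here is the promised sketch of `multiple` on a `list:string` field; the `post` table and its tag set are hypothetical:

```python
# hypothetical table: each post stores its tags in a single list:string field
db.define_table('post',
    Field('titulo'),
    Field('etiquetas', 'list:string',
          requires=IS_IN_SET(['web2py', 'python', 'sql'],
                             multiple=(1, 3))))  # at least 1, at most 2 tags
```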
`IS_LIST_OF`¶

This is not properly a validator. Its intended use is to allow validation of fields that return multiple values. It is used in those rare cases when a form contains multiple fields with the same name or a multiple selection box. Its only argument is another validator, and all it does is apply the other validator to each element of the list. For example, the following expression checks that every item in a list is an integer in the range 0 to 10:

```
requires = IS_LIST_OF(IS_INT_IN_RANGE(0, 10))
```

It never returns an error and does not contain an error message. The inner validator controls the error generation.

`IS_LOWER`¶

This validator never returns an error. It just converts the input value to lower case.

```
requires = IS_LOWER()
```

`IS_MATCH`¶

This validator matches the value against a regular expression and returns an error if it does not match. Here is an example of usage to validate a US zip code:

```
requires = IS_MATCH('^\d{5}(-\d{4})?$',
                    error_message='No es un código postal válido')
```

Here is an example of usage to validate an IPv4 address (note: the IS_IPV4 validator is more appropriate for this purpose):

```
requires = IS_MATCH('^\d{1,3}(.\d{1,3}){3}$',
                    error_message='No es dirección IPv4')
```

Here is an example of usage to validate a US phone number:

```
requires = IS_MATCH('^1?((-)\d{3}-?|\(\d{3}\))\d{3}-?\d{4}$',
                    error_message='No es un número de teléfono')
```

For more information on Python regular expressions, refer to the official Python documentation.

`IS_MATCH` takes an optional argument `strict` which defaults to `False`. When set to `True` it only matches the beginning of the string:

```
>>> IS_MATCH('a')('ba')
('ba', <lazyT 'Expresión inválida'>) # not accepted
>>> IS_MATCH('a', strict=False)('ab')
('a', None)                          # accepted!
```

`IS_MATCH` takes another optional argument `search` which defaults to `False`. When set to `True`, it uses the regex method `search` instead of `match` to validate the string.

`IS_MATCH('...', extract=True)` filters and extracts only the first matching substring rather than returning the original value.

`IS_NOT_EMPTY`¶

This validator checks that the content of the field value is not an empty string.

```
requires = IS_NOT_EMPTY(error_message='¡No puede estar vacío!')
```

`IS_TIME`¶

This validator checks that a field value contains a valid time in the specified format.

```
requires = IS_TIME(error_message='¡Debe ser HH:MM:SS!')
```

`IS_URL`¶

Rejects a URL string if any of the following is true:

* The string is empty or None
* The string uses characters that are not allowed in a URL
* The string breaks any of the HTTP syntactic rules
* The URL scheme specified (if one is specified) is not 'http' or 'https'
* The top-level domain (if a host name is specified) does not exist

(These rules are based on RFC 2616[RFC2616])

This function only checks the URL's syntax. It does not check, for example, that the URL points to a real document, or that it otherwise makes semantic sense. This function also automatically prepends 'http://' to a URL in the case of an abbreviated URL (e.g. 'google.ca').

If the parameter mode='generic' is used, the function's behavior changes.
In that case it rejects a URL string if any of the following is true:

* The string is empty or None
* The string uses characters that are not allowed in a URL
* The URL scheme specified (if one is specified) is not valid

(These rules are based on RFC 2396[RFC2396])

The list of allowed schemes is customizable with the allowed_schemes parameter. If you exclude None from the list, then abbreviated URLs (lacking a scheme such as 'http') will be rejected.

The default prepended scheme is customizable with the prepend_scheme parameter. If you set prepend_scheme to None, then prepending is disabled. URLs that require prepending to parse will still be accepted, but the return value will not be modified.

IS_URL is compatible with the Internationalized Domain Name (IDN) standard specified in RFC 3490[RFC3490]. As a result, URLs can be regular strings or unicode strings. If the URL's domain component (e.g. google.ca) contains non-US-ASCII letters, then the domain will be converted into Punycode (defined in RFC 3492[RFC3492]). IS_URL goes a bit beyond the standards, and allows non-US-ASCII characters to be present in the path and query components of the URL as well. These characters are percent-encoded. For example, space is encoded as '%20'. The unicode character with hex code 0x4e86 becomes '%4e%86'.

Some examples:

```
requires = IS_URL()
requires = IS_URL(mode='generic')
requires = IS_URL(allowed_schemes=['https'])
requires = IS_URL(prepend_scheme='https')
requires = IS_URL(mode='generic',
                  allowed_schemes=['ftps', 'https'],
                  prepend_scheme='https')
```

`IS_SLUG`¶

```
requires = IS_SLUG(maxlen=80, check=False,
                   error_message='Debe ser un título compacto')
```

If `check` is set to `True`, it checks whether the validated value is a slug (allowing only alphanumeric characters and non-repeated dashes). If `check` is set to `False` (the default), it converts the input value to a slug.

`IS_STRONG`¶

Enforces complexity requirements on a field value (usually a password). Example:

```
requires = IS_STRONG(min=10, special=2, upper=2)
```

where:

* min is the minimum allowed length of the value
* special is the minimum number of special characters the string must contain; a special character is any character in `!@#$%^&*(){}[]-+`
* upper is the minimum number of required upper case characters
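A quick sketch of the two `check` modes of IS_SLUG, called directly as in the IS_MATCH examples above (the default error text is abbreviated here):

```python
>>> IS_SLUG()('Título del Artículo')      # check=False: converts to a slug
('titulo-del-articulo', None)
>>> IS_SLUG(check=True)('no es un slug')  # spaces are not allowed in a slug
('no es un slug', <lazyT 'must be slug'>)
>>> IS_SLUG(check=True)('ya-es-un-slug')
('ya-es-un-slug', None)
```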
`IS_IMAGE`¶

This validator checks if a file uploaded through the file input was saved in one of the selected image formats and has dimensions (width and height) within given limits. It does not check the maximum file size (use IS_LENGTH for that). It returns a validation failure if no data was uploaded. It supports the BMP, GIF, JPEG and PNG file formats, and it does not require the Python Imaging Library. Parts of its source code were taken from [source1].

It takes the following arguments:

* extensions: an iterable containing allowed lowercase image file extensions
* maxsize: an iterable containing the maximum width and height of the image
* minsize: an iterable containing the minimum width and height of the image

You can use (-1, -1) as minsize to bypass the image-size check.

* Check if the uploaded file is in any of the supported image formats:

```
requires = IS_IMAGE()
```

* Check if the uploaded file is either JPEG or PNG:

```
requires = IS_IMAGE(extensions=('jpeg', 'png'))
```

* Check if the uploaded file is a PNG with a maximum size of 200x200 pixels:

```
requires = IS_IMAGE(extensions=('png',), maxsize=(200, 200))
```

* Note: when displaying an edit form for a table that includes `requires=IS_IMAGE()`, the `delete` checkbox will NOT be displayed, because deleting the file would cause the validation to fail. To display the `delete` checkbox, use this validation instead:

```
requires = IS_EMPTY_OR(IS_IMAGE())
```

`IS_UPLOAD_FILENAME`¶

This validator checks if the name and extension of a file uploaded through the file input match the given criteria. It does not verify the file type in any way. It returns a validation failure if no data was uploaded.

Its arguments are:

* filename: a regular expression to match the filename (before the dot).
* extension: a regular expression to match the extension (after the dot).
* lastdot: which dot should be used as the filename / extension separator: `True` indicates the last dot (e.g., "archivo.tar.gz" will be broken into "archivo.tar" + "gz"), while `False` indicates the first dot (e.g., "archivo.tar.gz" will be broken into "archivo" + "tar.gz").
* case: 0 means keep the case; 1 means transform the string into lowercase (the default); 2 means transform the string into uppercase.

If the value contains no dot, extension checks will be done against an empty string and filename checks against the whole value.

Check if a file has a pdf extension (case insensitive, per the default case=1):

```
requires = IS_UPLOAD_FILENAME(extension='pdf')
```

Check if a file has a tar.gz extension and a name starting with backup:

```
requires = IS_UPLOAD_FILENAME(filename='backup.*',
                              extension='tar.gz', lastdot=False)
```

Check if a file has no extension and a name matching README (case sensitive):

```
requires = IS_UPLOAD_FILENAME(filename='^README$',
                              extension='^$', case=0)
```

`IS_IPV4`¶

This validator checks if a field's value is an IP version 4 address in decimal form. It can be set to force addresses from a specific range. The IPv4 regex pattern is taken from ref.[regexlib].

Its arguments are:

* `minip` is the lowest allowed address; accepts: str, e.g. 192.168.0.1; an iterable of numbers, e.g. [192, 168, 0, 1]; an int, e.g. 3232235521
* `maxip` is the highest allowed address; same as above

The three example values above are equal, since addresses are converted to integers for the inclusion check with the following function:

```
numero = 16777216 * IP[0] + 65536 * IP[1] + 256 * IP[2] + IP[3]
```

Check for a valid IPv4 address:

```
requires = IS_IPV4()
```

Check for a valid private-network IPv4 address:

```
requires = IS_IPV4(minip='192.168.0.1', maxip='192.168.255.255')
```

`IS_UPPER`¶

This validator never returns an error. It converts the value to upper case.

```
requires = IS_UPPER()
```
`IS_NULL_OR`¶

Deprecated; an alias for `IS_EMPTY_OR`, described below.

`IS_EMPTY_OR`¶

Sometimes you need to allow empty values on a field along with other requirements. For example, a field may be a date but it may also be empty. The `IS_EMPTY_OR` validator allows this:

```
requires = IS_EMPTY_OR(IS_DATE())
```

`CLEANUP`¶

This is a filter. It never returns an error. It just removes all characters whose decimal ASCII codes are not in the list [10, 13, 32-127].

```
requires = CLEANUP()
```

`CRYPT`¶

This is also a filter. It performs a secure hash on the input value, and it is used to prevent passwords from being passed in the clear to the database.

```
requires = CRYPT()
```

By default, CRYPT uses 1000 iterations of the pbkdf2 algorithm combined with SHA512 to produce a 20-byte-long hash. Older versions of web2py used "md5" or HMAC+SHA512, depending on whether a key was specified or not.

If a key is specified, CRYPT uses the HMAC algorithm. The key may contain a prefix that determines the algorithm to use with HMAC, for example SHA512:

```
requires = CRYPT(key='sha512:estaeslaclave')
```

This is the recommended syntax. The key must be a unique string associated with the database used. The key can never be changed once set. If you lose the key, the previously hashed values become useless.

By default, CRYPT uses random salt, so that each result is different. To use a constant salt value, specify it:

```
requires = CRYPT(salt='mivalorsalt')
```

Or, to use no salt:

```
requires = CRYPT(salt=False)
```

The CRYPT validator hashes its input, and this makes it somewhat special. If you need to validate a password field before it is hashed, you can use CRYPT in a list of validators, but you must make sure it is the last item of the list, so that it is executed last. For example:

```
requires = [IS_STRONG(), CRYPT(key='sha512:estaeslaclave')]
```

`CRYPT` also takes a `min_length` argument, which defaults to zero.

The resulting hash takes the form `alg$salt$hash`, where `alg` is the algorithm used, `salt` is the salt string (which can be empty), and `hash` is the algorithm's output. Consequently, the hash is self-identifying, allowing, for example, the algorithm to be changed without invalidating previous hashes. The key, however, must be preserved.
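A short sketch of what the CRYPT filter returns when called directly; the key is the hypothetical one from the examples above, and the exact digest differs on every run because of the random salt:

```python
>>> hash_, error = CRYPT(key='sha512:estaeslaclave')('secreto123')
>>> error is None
True
>>> str(hash_)      # serializes to the alg$salt$hash form described above
'sha512$...$...'    # digest abbreviated here
```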
# Database validators¶

`IS_NOT_IN_DB`¶

Consider the following example:

```
db.define_table('persona', Field('nombre'))
db.persona.nombre.requires = IS_NOT_IN_DB(db, 'persona.nombre')
```

It requires that when we insert a new person, his or her name is not already in the database `db`, in the field `persona.nombre`. As with all other validators, this requirement is enforced at the form processing level, not at the database level. This means there is a small possibility that, if two visitors try to concurrently insert records with the same persona.nombre, this results in a race condition and both records are accepted. It is therefore safer to also tell the database that this field should have a unique value:

```
db.define_table('persona', Field('nombre', unique=True))
db.persona.nombre.requires = IS_NOT_IN_DB(db, 'persona.nombre')
```

Now, if a race condition occurs, the database raises an OperationalError and one of the two inserts is rejected.

The first argument of `IS_NOT_IN_DB` can be a database connection or a DAL Set. In the latter case, you would be checking only the set of values matched by the Set. The following code, for example, does not allow the registration of two people with the same name within 10 days of each other:

```
import datetime
now = datetime.datetime.today()
db.define_table('persona',
    Field('nombre'),
    Field('fechahora_registro', 'datetime', default=now))
ultimos = db(db.persona.fechahora_registro > now - datetime.timedelta(10))
db.persona.nombre.requires = IS_NOT_IN_DB(ultimos, 'persona.nombre')
```

`IS_IN_DB`¶

Consider the following tables and requirement:

```
db.define_table('persona', Field('nombre', unique=True))
db.define_table('perro', Field('nombre'), Field('propietario', db.persona))
db.perro.propietario.requires = IS_IN_DB(db, 'persona.id', '%(nombre)s',
                                         zero=T('Elige uno'))
```

It is enforced at the level of perro insert, update and delete forms. It requires that the value of `perro.propietario` be a valid id in the field `persona.id` in the database `db`. Because of this validator, the `perro.propietario` field is rendered as a drop-down list. The third argument of the validator is a string that describes the elements in the drop-down list. In the example we want to see the person's name `%(nombre)s` instead of the person's id `%(id)s`. `%(...)s` is replaced by the value of the field in brackets for each record.

The `zero` option works very much like for the `IS_IN_SET` validator.

The first argument of the validator can be a database connection or a DAL Set, as in `IS_NOT_IN_DB`. This can be useful, for example, when you wish to limit the records in the drop-down list. In this example, we use `IS_IN_DB` in a controller to limit the records dynamically each time the controller is called:

```
def index():
    (...)
    consulta = (db.tabla.campo == 'xyz')  # in practice 'xyz' would be a variable
    db.tabla.campo.requires = IS_IN_DB(db(consulta), ....)
    formulario = SQLFORM(...)
    if formulario.process().accepted: ...
    (...)
```

If you want the field validated but you do not want a drop-down list, you must place the validator in a list:

```
db.perro.propietario.requires = [IS_IN_DB(db, 'persona.id', '%(nombre)s')]
```

Occasionally you want the drop-down list (so you do not want to use the list syntax above), yet you also want to apply additional validators. For this purpose the `IS_IN_DB` validator takes an extra argument `_and` that can point to a list of other validators, applied only if the validated value passes the `IS_IN_DB` validation. For example, to validate all dog owners in the database that are not in a subset:

```
subconjunto = db(db.persona.id > 100)
db.perro.propietario.requires = IS_IN_DB(db, 'persona.id', '%(nombre)s',
                                         _and=IS_NOT_IN_DB(subconjunto, 'persona.id'))
```

`IS_IN_DB` has a boolean `distinct` argument which defaults to `False`. When set to `True` it prevents repeated values in the drop-down list.

`IS_IN_DB` also takes a `cache` argument that works like the `cache` argument of a select.

`IS_IN_DB` and multiple selections¶

The `IS_IN_DB` validator has an optional attribute `multiple=False`. If set to `True`, multiple values can be stored in one field. The field should be of type `list:reference`, as described in Chapter 6, where an explicit example of multiple selections or `tagging` is also discussed. Multiple references are handled automatically in create and update forms, but they are transparent to the DAL. We strongly suggest using the jQuery multiselect plugin to render multiple fields.
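A minimal sketch of `multiple=True` with a `list:reference` field; the `etiqueta` and `articulo` tables are hypothetical:

```python
# hypothetical tagging schema: an article references many tags
db.define_table('etiqueta', Field('nombre'))
db.define_table('articulo',
    Field('titulo'),
    Field('etiquetas', 'list:reference etiqueta'))
# create/update forms will render a multiple-selection widget for this field
db.articulo.etiquetas.requires = IS_IN_DB(db, 'etiqueta.id', '%(nombre)s',
                                          multiple=True)
```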
# Custom validators¶

All validators follow the prototype below:

```
class Validador:
    def __init__(self, *a, error_message='Error'):
        self.a = a
        self.e = error_message
    def __call__(self, valor):
        if validacion(valor):
            return (procesar(valor), None)
        return (valor, self.e)
    def formatter(self, valor):
        return formato(valor)
```

i.e., when called to validate a value, a validator returns a tuple `(x, y)`. If `y` is `None`, then the value passed validation and `x` contains the parsed value. For example, if the validator requires the value to be an integer, `x` is converted to `int(valor)`. If the value did not pass validation, then `x` contains the input value and `y` contains an error message that explains the failed validation. This error message is used to report the error in forms that do not validate.

The validator may also contain a `formatter` method. It must perform the opposite conversion to the one `__call__` does. For example, consider the source code of `IS_DATE`:

```
class IS_DATE(object):
    def __init__(self, format='%Y-%m-%d',
                 error_message='¡Debe ser YYYY-MM-DD!'):
        self.format = format
        self.error_message = error_message
    def __call__(self, valor):
        try:
            y, m, d, hh, mm, ss, t0, t1, t2 = \
                time.strptime(valor, str(self.format))
            valor = datetime.date(y, m, d)
            return (valor, None)
        except:
            return (valor, self.error_message)
    def formatter(self, valor):
        return valor.strftime(str(self.format))
```

On success, the `__call__` method reads a date string from the form and converts it into a datetime.date object using the format string specified in the constructor. The `formatter` method takes a datetime.date object and converts it back to a string using the same format. The `formatter` is called automatically in forms, but you can also call it explicitly to convert objects into their proper representation. For example:

```
>>> db = DAL()
>>> db.define_table('unatabla',
        Field('nacimiento', 'date', requires=IS_DATE('%m/%d/%Y')))
>>> id = db.unatabla.insert(nacimiento=datetime.date(2008, 1, 1))
>>> registro = db.unatabla[id]
>>> print db.unatabla.nacimiento.formatter(registro.nacimiento)
01/01/2008
```

When multiple validators are required (and stored in a list), they are executed in order and the output of one is passed as input to the next. The chain breaks when one of the validators fails.

Conversely, when we call the `formatter` method of a field, the formatters of the associated validators are also chained, but in reverse order.

Notice that, as an alternative to custom validators, you can also use the `onvalidate` argument of `form.accepts(...)`, `form.process(...)` and `form.validate(...)`.
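Following the prototype above, here is a minimal sketch of a complete custom validator; `IS_PAR` (accepts only even integers) is a hypothetical name, not part of web2py:

```python
class IS_PAR(object):
    """Hypothetical validator: accepts only even integers."""
    def __init__(self, error_message='¡Debe ser un número par!'):
        self.error_message = error_message
    def __call__(self, valor):
        try:
            v = int(valor)
            if v % 2 == 0:
                return (v, None)              # parsed value, no error
        except (ValueError, TypeError):
            pass
        return (valor, self.error_message)    # original value plus error
    def formatter(self, valor):
        return str(valor)                     # opposite conversion of __call__

# usage: db.mitabla.micampo.requires = IS_PAR()
```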
# Validators with dependencies¶

Usually validators are set once and for all in the models. Occasionally, though, you need to validate a field with a validator that depends on the value of another field. This can be done in various ways, in the model or in the controller.

For example, here is a page that generates a registration form asking for a username and a password entered twice. None of the fields can be empty, and both passwords must match:

```
def index():
    formulario = SQLFORM.factory(
        Field('nombre', requires=IS_NOT_EMPTY()),
        Field('password', requires=IS_NOT_EMPTY()),
        Field('verificacion_password',
              requires=IS_EQUAL_TO(request.vars.password)))
    if formulario.process().accepted:
        pass  # or take some additional action
    return dict(formulario=formulario)
```

The same mechanism can be applied to FORM and SQLFORM objects.

### Widgets¶

Here is a list of the widgets built into web2py:

```
SQLFORM.widgets.string.widget
SQLFORM.widgets.text.widget
SQLFORM.widgets.password.widget
SQLFORM.widgets.integer.widget
SQLFORM.widgets.double.widget
SQLFORM.widgets.time.widget
SQLFORM.widgets.date.widget
SQLFORM.widgets.datetime.widget
SQLFORM.widgets.upload.widget
SQLFORM.widgets.boolean.widget
SQLFORM.widgets.options.widget
SQLFORM.widgets.multiple.widget
SQLFORM.widgets.radio.widget
SQLFORM.widgets.checkboxes.widget
SQLFORM.widgets.autocomplete
```

The first ten of them are the defaults for the corresponding field types. The "options" widget is used when a field's validator is `IS_IN_SET` or `IS_IN_DB` with `multiple=False` (the default behavior). The "multiple" widget is used when a field's validator is `IS_IN_SET` or `IS_IN_DB` with `multiple=True`. The "radio" and "checkboxes" widgets are never used by default, but can be set manually. The autocomplete widget is special and is discussed in its own section.

For example, to have a "string" field rendered as a textarea, pass `widget=SQLFORM.widgets.text.widget` to the Field constructor.

Widgets can also be assigned to fields a posteriori:

```
db.mitabla.micampo.widget = SQLFORM.widgets.string.widget
```

Sometimes widgets take additional arguments and one needs to specify their values. In this case one can use a `lambda`:

```
db.mitabla.micampo.widget = lambda campo, valor: \
    SQLFORM.widgets.string.widget(campo, valor, _style='color:blue')
```

Widgets are helper factories and their first two arguments are always `campo` and `valor`. The other arguments can include normal helper attributes such as `_style`, `_class`, etc. Some widgets also take special arguments. In particular, `SQLFORM.widgets.radio` and `SQLFORM.widgets.checkboxes` take a `style` argument (not to be confused with `_style`), which can be set to "table", "ul" or "divs" so that they match the `formstyle` of the containing form; a sketch follows this section.

You can create new widgets or extend existing ones. `SQLFORM.widgets[tipo]` is a class and `SQLFORM.widgets[tipo].widget` is a static member function of the corresponding class. Each widget function takes two arguments: the field object and the current value of that field. It returns a representation of the widget. As an example, the string widget could be recoded as follows:

```
def mi_widget_string(campo, valor):
    return INPUT(_name=campo.name,
                 _id="%s_%s" % (campo._tablename, campo.name),
                 _class=campo.type,
                 _value=valor,
                 requires=campo.requires)
```

The id and class values must follow the conventions described earlier in this chapter. A widget may contain its own validators, but it is good practice to associate the validators to the "requires" attribute of the field and have the widget get them from there.
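Here is the promised sketch of the `style` argument, using a lambda as before; the table `mitabla` and field `micampo` are placeholder names:

```python
# render an IS_IN_SET field as radio buttons laid out with divs,
# to match a form created with formstyle='divs'
db.mitabla.micampo.widget = lambda campo, valor: \
    SQLFORM.widgets.radio.widget(campo, valor, style='divs')
```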
# The autocomplete widget¶

There are two possible uses for the autocomplete widget: to autocomplete a field that takes a value from a list, or to autocomplete a reference field (where the string to be autocompleted is a representation of the reference, implemented in terms of an id).

The first case is easy:

```
db.define_table('categoria', Field('nombre'))
db.define_table('producto', Field('nombre'), Field('categoria'))
db.producto.categoria.widget = SQLFORM.widgets.autocomplete(
    request, db.categoria.nombre, limitby=(0,10), min_length=2)
```

where `limitby` tells the widget to display no more than 10 suggestions at a time, and `min_length` tells the widget to fire an Ajax callback to fetch suggestions only after the user has typed at least 2 characters in the search box.

The second case is more complex:

```
db.define_table('categoria', Field('nombre'))
db.define_table('producto', Field('nombre'), Field('categoria'))
db.producto.categoria.widget = SQLFORM.widgets.autocomplete(
    request, db.categoria.nombre, id_field=db.categoria.id)
```

In this case the value of `id_field` tells the widget that even if the value to be autocompleted is a `db.categoria.nombre`, the value to be stored is the corresponding `db.categoria.id`. An optional parameter is `orderby`, which tells the widget how to sort the suggestions (alphabetical by default).

This widget works via Ajax. Where is the Ajax callback? Some magic is going on in this widget. The callback is a method of the widget object itself. How is it exposed? In web2py any piece of code can generate a response by raising an HTTP exception. This widget exploits this possibility as follows: the widget sends an Ajax call to the same URL that generated the widget in the first place, adding a special token to the request variables. All of this happens transparently and requires no intervention by the developer.

`SQLFORM.grid` and `SQLFORM.smartgrid`¶

Caution: grid and smartgrid were experimental prior to web2py version 2.0 and were subject to information-leakage vulnerabilities. grid and smartgrid are no longer experimental, but we still cannot guarantee backward compatibility of the grid's presentation layer, only of its API.

These are two gadgets for building advanced CRUD controls. They provide pagination and the ability to browse, search, sort, update and delete records from a single gadget.

The simpler of the two is `SQLFORM.grid`. Here is an example of usage:

```
@auth.requires_login()
def administrar_usuarios():
    grid = SQLFORM.grid(db.auth_user)
    return locals()
```

which produces the following page:

The first argument of `SQLFORM.grid` can be a table or a query. The grid gadget provides access to the records matching the query.

Before we delve into the long list of arguments of the grid gadget, we need to understand how it works. The gadget looks at `request.args` in order to decide what to do (list, search, create, update, delete, etc.). Each button created by the gadget links to the same function (`administrar_usuarios` in the case above), but passes different values in `request.args`. By default, all the URLs generated by the grid are digitally signed and verified. This means certain actions (create, update, delete) cannot be performed without being logged in.
These restrictions can be relaxed:

```
def administrar_usuarios():
    grid = SQLFORM.grid(db.auth_user, user_signature=False)
    return locals()
```

but we do not recommend it.

Because of the way grid works, one can only have one grid per controller function, unless they are embedded as components via `LOAD`. To make the default search grid work in more than one grid embedded with LOAD, please use a different `formname` for each one.

Because the function that contains the grid may itself manipulate the command arguments, the grid needs to know which `request.args` it should handle and which it should ignore. Here is an example of code that allows the management of multiple tables:

```
@auth.requires_login()
def administrar():
    tabla = request.args(0)
    if not tabla in db.tables():
        redirect(URL('error'))
    grid = SQLFORM.grid(db[tabla], args=request.args[:1])
    return locals()
```

The `args` argument of the `grid` specifies which `request.args` should be passed along by the grid and which it should ignore. In our case, `request.args[:1]` is the name of the table we want to manage, and it is handled by the `administrar` function itself, not by the gadget.

The complete list of arguments accepted by the grid is the following:

```
SQLFORM.grid(
    consulta,
    fields=None,
    field_id=None,
    left=None,
    headers={},
    orderby=None,
    groupby=None,
    searchable=True,
    sortable=True,
    paginate=20,
    deletable=True,
    editable=True,
    details=True,
    selectable=None,
    create=True,
    csv=True,
    links=None,
    links_in_grid=True,
    upload='<default>',
    args=[],
    user_signature=True,
    maxtextlengths={},
    maxtextlength=20,
    onvalidation=None,
    oncreate=None,
    onupdate=None,
    ondelete=None,
    sorter_icons=(XML('&#x2191;'), XML('&#x2193;')),
    ui='web2py',
    showbuttontext=True,
    _class="web2py_grid",
    formname='web2py_grid',
    search_widget='default',
    ignore_rw=False,
    formstyle='table3cols',
    exportclasses=None,
    formargs={},
    createargs={},
    editargs={},
    viewargs={},
    buttons_placement='right',
    links_placement='right')
```

* `fields` is a list of fields to be fetched from the database. It is also used to determine which fields are shown in the grid view.
* `field_id` must be the field of the table to be used as ID, for example `db.mitabla.id`.
* `left` is an optional left join expression used to build `...select(left=...)`.
* `headers` is a dictionary that maps `nombredelatabla.nombredelcampo` to the corresponding header label, e.g. `{'auth_user.email': 'Correo electrónico'}`.
* `orderby` is used as the default ordering of the rows.
* `groupby` is used to group the query. It uses the same syntax as `select(groupby=...)`.
* `searchable`, `sortable`, `deletable`, `editable`, `details`, `create` determine whether records can be searched, sorted, deleted, edited, viewed in detail, and created, respectively.
* `selectable` can be used to call a custom function on multiple records (a checkbox will be inserted for each record), e.g.

```
selectable = lambda ids: redirect(URL('default', 'asociar_multiples', vars=dict(id=ids)))
```

* `paginate` sets the maximum number of rows per page.
* `csv` if set to true allows downloading the records in multiple formats (more on this below).
* `links` is used to display additional columns, which may be links to other pages.
The `links` argument must be a list of `dict(header='nombre', body=lambda row: A(...))`, where `header` is the header of the new column and `body` is a function that takes a record and returns a value. In the example, the value is an `A(...)` helper. A sketch follows this list.

* `links_in_grid` if set to False, links will only be displayed on the "details" and "edit" pages (and therefore not on the main grid page).
* `upload` works the same as in SQLFORM. web2py uses the action at that URL to download the file.
* `maxtextlength` specifies the maximum length of the text displayed for each field value in the grid view. This value can be overridden per field using `maxtextlengths`, a dictionary of 'nombredelatabla.nombredelcampo': length items, e.g. `{'auth_user.email': 50}`.
* `onvalidation`, `oncreate`, `onupdate` and `ondelete` are callback functions. All but `ondelete` take a form object as argument.
* `sorter_icons` is a list of two strings (or helpers) that will be used to render the ascending and descending sorting options for each field.
* `ui` when set to 'web2py' will generate class names that follow the web2py convention; when set to `jquery-ui` it will generate jQuery UI class names; but it can also be set to a custom set of class names for the various grid components:

```
ui = dict(
    widget='',
    header='',
    content='',
    default='',
    cornerall='',
    cornertop='',
    cornerbottom='',
    button='button',
    buttontext='buttontext button',
    buttonadd='icon plus',
    buttonback='icon leftarrow',
    buttonexport='icon downarrow',
    buttondelete='icon trash',
    buttonedit='icon pen',
    buttontable='icon rightarrow',
    buttonview='icon magnifier')
```

* `search_widget` allows overriding the default search widget; for details we recommend reading the source code in "gluon/sqlhtml.py".
* `showbuttontext` allows buttons without text (only icons will be displayed).
* `_class` is the class of the grid container element.
* `showbutton` allows disabling the buttons.
* `exportclasses` takes a dictionary of tuples. By default it is defined as follows:

```
csv_with_hidden_cols=(ExporterCSV, 'CSV (columnas ocultas)'),
csv=(ExporterCSV, 'CSV'),
xml=(ExporterXML, 'XML'),
html=(ExporterHTML, 'HTML'),
tsv_with_hidden_cols=(ExporterTSV, 'TSV (Compatible con Excel, columnas ocultas)'),
tsv=(ExporterTSV, 'TSV (Compatible con Excel)'))
```

* ExporterCSV, ExporterXML, ExporterHTML and ExporterTSV are defined in gluon/sqlhtml.py. You can use them as examples to create your own Exporter. If you pass a dictionary like `dict(xml=False, html=False)` you will disable the xml and html export formats.
* `formargs` is passed to all SQLFORM objects used by the grid, while `createargs`, `editargs` and `viewargs` are passed only to the specific create, edit and details SQLFORMs.
* `formname`, `ignore_rw` and `formstyle` are passed to the SQLFORM objects used by the grid for the create and update forms.
* `buttons_placement` and `links_placement` take a parameter in ('right', 'left', 'both') that specifies where on the row the buttons (or the links) are placed.

`deletable`, `editable` and `details` are usually boolean values, but they can be functions that take a Row object and decide whether to show the corresponding option or not.
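Here is the promised sketch of the `links` argument; the `db.persona` table and the `perfil` action are hypothetical:

```python
# add an extra column whose cells link to a (hypothetical) profile page
enlaces = [dict(header='Perfil',
                body=lambda registro: A('ver perfil',
                                        _href=URL('default', 'perfil',
                                                  args=registro.id)))]
grid = SQLFORM.grid(db.persona, links=enlaces)
```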
A `SQLFORM.smartgrid` looks a lot like a `grid`; in fact it contains a grid, but it is designed to take a table, not a query, as its argument, and to browse that table together with a set of referencing tables. For example, consider the following table structure:

```
db.define_table('padre', Field('nombre'))
db.define_table('hijo', Field('nombre'), Field('padre', 'reference padre'))
```

With SQLFORM.grid you can list all parents:

```
SQLFORM.grid(db.padre)
```

all children:

```
SQLFORM.grid(db.hijo)
```

and all parents and children in a single table:

```
SQLFORM.grid(db.padre, left=db.hijo.on(db.hijo.padre == db.padre.id))
```

With SQLFORM.smartgrid you can put all this information in one gadget that combines both tables:

```
@auth.requires_login()
def administrar():
    grid = SQLFORM.smartgrid(db.padre, linked_tables=['hijo'])
    return locals()
```

which renders as follows:

Notice the extra "hijos" links. We could create the extra `links` with a regular `grid`, but they would point to a different action. With a `smartgrid` these links are created automatically and handled by the same gadget. Also notice that when you click the "hijos" link for a given parent, you only get the list of children of that parent (obviously), and that if you then try to add a child, the parent value for the new child is automatically set to the selected parent (shown in the breadcrumbs associated with the gadget). The value of this field can be overwritten. We can prevent that by making it read-only:

```
@auth.requires_login()
def administrar():
    db.hijo.padre.writable = False
    grid = SQLFORM.smartgrid(db.padre, linked_tables=['hijo'])
    return locals()
```

If the `linked_tables` argument is not specified, all referencing tables are linked. In any case, to avoid accidentally exposing data, it is recommended to list explicitly the tables that should be linked. The following code creates a very powerful management interface for all the tables in the system:

```
@auth.requires_membership('managers')
def administrar():
    tabla = request.args(0) or 'auth_user'
    if not tabla in db.tables():
        redirect(URL('error'))
    grid = SQLFORM.smartgrid(db[tabla], args=request.args[:1])
    return locals()
```

The `smartgrid` takes the same arguments as a `grid` plus a few more, with some caveats:

* The first argument must be a table, not a query.
* There is an extra argument `constraints`, a dictionary of 'tablename': query items that can be used to restrict access to the records shown in the grid for that tablename (a sketch follows this section).
* There is an extra argument `linked_tables`, a list of table names that can be accessed through the smartgrid.
* `divider` lets you specify a character used in the breadcrumb navigator, and `breadcrumb_class` sets the class of the breadcrumb element.
* All arguments except the table, `args`, `linked_tables` and `user_signature` accept a dictionary, as explained below.

Consider the previous grid:

```
grid = SQLFORM.smartgrid(db.padre, linked_tables=['hijo'])
```

It lets us access both `db.padre` and `db.hijo`.
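Returning to the `constraints` argument mentioned in the list above, here is a minimal sketch; the filter queries are hypothetical, and each key names a table while each value is a DAL query that restricts the rows shown in that table's grid:

```
grid = SQLFORM.smartgrid(db.padre,
    linked_tables=['hijo'],
    constraints=dict(padre=db.padre.nombre != '',           # hide unnamed parents
                     hijo=db.hijo.nombre.startswith('A')))  # an arbitrary child filter
```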
Apart from the navigation controls, for each individual table a smartgrid is nothing more than a grid. This means that, in this case, one smartgrid creates one grid for the parent table and one for the child table. We may want to pass different sets of parameters to each grid, for example different `searchable` settings. Where for a grid we would pass a boolean:

```
grid = SQLFORM.grid(db.padre, searchable=True)
```

for a smartgrid we pass a dictionary of booleans:

```
grid = SQLFORM.smartgrid(db.padre, linked_tables=['hijo'],
                         searchable=dict(padre=True, hijo=False))
```

This way we made parents searchable, but the children of a given parent not searchable (there should not be so many of them as to require a search widget).

The grid and smartgrid gadgets are a permanent part of the core, but they are marked as experimental because the page layout they generate and the exact set of parameters they accept may change as new features are added.

`grid` and `smartgrid` do not enforce permissions automatically the way crud does, but you can integrate them with `auth` via explicit checks:

```
grid = SQLFORM.grid(db.auth_user,
                    editable=auth.has_membership('managers'),
                    deletable=auth.has_membership('managers'))
```

or

```
grid = SQLFORM.grid(db.auth_user,
                    editable=auth.has_permission('edit', 'auth_user'),
                    deletable=auth.has_permission('delete', 'auth_user'))
```

The `smartgrid` is the only web2py gadget that displays the table name, and it needs both the singular and the plural form. For example, a parent can have one "Hijo" and many "Hijos". Therefore a table object needs to know its singular and plural names. web2py normally infers them, but you can set them explicitly:

```
db.define_table('hijo', ..., singular="Hijo", plural="Hijos")
```

or with:

```
db.define_table('hijo', ...)
db.hijo._singular = "Hijo"
db.hijo._plural = "Hijos"
```

They should also be translated, using the `T` operator. The singular and plural values are then used by `smartgrid` to provide proper headers and links.

# Email and SMS¶

# Chapter 8: Email and SMS

* Email and SMS
* Setting up email
* Sending emails
* Simple text emails
* HTML emails
* Combining text and HTML emails
* `cc` and `bcc` emails
* Attachments
* Multiple attachments
* Sending SMS messages
* Using the template system to generate messages
* Sending messages using a background task
* Reading and managing email boxes (Experimental)

## Email and SMS¶

### Setting up email¶

web2py includes the class `gluon.tools.Mail`, which makes it easy to send email from web2py. One can define a mailer with it, as in the sketch below. Note that if your application uses `Auth` (covered in the next chapter), the `auth` object includes its own mailer in `auth.settings.mailer`, so you can use the latter instead: `mail = auth.settings.mailer`. You must replace the `mail.settings` values with the correct parameters for your SMTP server. Set `mail.settings.login = None` if the SMTP server does not require authentication.
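A minimal configuration sketch of the settings just mentioned; the server, sender and credentials below are placeholders:

```
from gluon.tools import Mail

mail = Mail()
mail.settings.server = 'smtp.example.com:587'  # your SMTP server and port
mail.settings.sender = '[email protected]'        # the From: address
mail.settings.login = 'username:password'      # or None if no authentication is required
```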
If you do not want to use TLS, set `mail.settings.tls = False`. For debugging purposes you can set `mail.settings.server = 'logging'`, and emails will not be sent but logged to the console instead.

# Setting up email for Google App Engine¶

To send email from a Google App Engine account:

```
mail.settings.server = 'gae'
```

At the time of writing, web2py does not support attachments or encrypted emails on Google App Engine. Also note that cron and the scheduler do not work on GAE.

# x509 and PGP encryption¶

It is possible to send x509 (SMIME) encrypted emails using the following configuration:

```
mail.settings.cipher_type = 'x509'
mail.settings.sign = True
mail.settings.sign_passphrase = 'your passphrase'
mail.settings.encrypt = True
mail.settings.x509_sign_keyfile = 'filename.key'
mail.settings.x509_sign_certfile = 'filename.cert'
mail.settings.x509_crypt_certfiles = 'filename.cert'
```

It is also possible to send PGP encrypted emails. First you need to install the python-pyme package. Then you can use GnuPG (GPG) to create the key files for the sender (use the email address from mail.settings.sender) and put the files pubring.gpg and secring.gpg in a directory (e.g. "/home/www-data/.gnupg"). Use the following settings:

```
mail.settings.gpg_home = '/home/www-data/.gnupg/'
mail.settings.cipher_type = 'gpg'
mail.settings.sign = True
mail.settings.sign_passphrase = 'your passphrase'
mail.settings.encrypt = True
```

### Sending emails¶

Once `mail` is defined, it can be used to send email with:

```
mail.send(to=['<EMAIL>'],
          subject='hello',
          # If reply_to is omitted, mail.settings.sender is used
          reply_to='<EMAIL>',
          message='hi there')
```

Mail returns `True` if it succeeds in sending the email, `False` otherwise. The complete argument list of `mail.send()` is:

```
send(self, to, subject='None', message='None', attachments=None,
     cc=None, bcc=None, reply_to=None, encoding='utf-8',
     headers={}, sender=None)
```

Note that `to`, `cc` and `bcc` each take a list of email addresses. `headers` is a dictionary of headers used to refine the headers just before the email is sent. For example:

```
headers = {'Return-Path': '<EMAIL>'}
```

`sender` defaults to `None`, in which case the sender is set to `mail.settings.sender`. Here are some more examples demonstrating the use of `mail.send()`.

# Simple text emails¶

```
mail.send('<EMAIL>',
          'Message subject',
          'Plain text body of the message')
```

# HTML emails¶

```
mail.send('<EMAIL>',
          'Message subject',
          '<html>html body</html>')
```

If the email body starts with `<html>` and ends with `</html>`, it is sent as an HTML email.
# Combining text and HTML emails¶

The email message can be a tuple (text, html):

```
mail.send('<EMAIL>',
          'Message subject',
          ('Plain text body', '<html>html body</html>'))
```

# `cc` and `bcc` emails¶

```
mail.send('<EMAIL>',
          'Message subject',
          'Plain text body',
          cc=['<EMAIL>', '<EMAIL>'],
          bcc=['<EMAIL>', '<EMAIL>'])
```

# Attachments¶

```
mail.send('<EMAIL>',
          'Message subject',
          '<html><img src="cid:foto" /></html>',
          attachments=mail.Attachment('/path/to/foto.jpg', content_id='foto'))
```

# Multiple attachments¶

```
mail.send('<EMAIL>',
          'Message subject',
          'Message body',
          attachments=[mail.Attachment('/ruta/al/primer.archivo'),
                       mail.Attachment('/ruta/al/segundo.archivo')])
```

### Sending SMS messages¶

Sending SMS messages from a web2py application requires a third-party service that can relay the message to the receiver. Usually this is not a free service, and it varies from country to country. We have tried a few of these services with little success. Phone companies block emails originating from these services, since they can be used as a source of spam.

A better alternative is to use the phone companies themselves to relay the SMS messages. Each phone company has an email address uniquely associated with every cell phone number, so SMS messages can be sent as emails to that phone number. web2py ships with a helper module for this process:

```
from gluon.contrib.sms_utils import SMSCODES, sms_email
email = sms_email('1 (111) 111-1111', 'T-Mobile USA (tmail)')
mail.send(to=email, subject='test', message='test')
```

SMSCODES is a dictionary that maps the names of the major phone companies to the email address postfix. The `sms_email` function takes a phone number (as a string) and the name of a phone company and returns the email address of the phone.

### Using the template system to generate messages¶

The template system can be used to generate emails. For example, consider a database table such as `db.define_table('persona', Field('name'))`, and suppose we want to send every person in the database the following message, stored in a view file "mensaje.html":

```
Dear {{=persona.name}},
You have won the second prize, a set of steak knives.
```

You can achieve this as follows:

```
for persona in db(db.persona).select():
    context = dict(persona=persona)
    mensaje = response.render('mensaje.html', context)
    mail.send(to=['<EMAIL>'],
              subject='None',
              message=mensaje)
```

Most of the work is done in the statement

```
response.render('mensaje.html', context)
```

It renders the view "mensaje.html" with the variables defined in the dictionary "context" and returns a string with the rendered email text. context is a dictionary containing the variables that are visible to the template.

If the message starts with `<html>` and ends with `</html>`, the email will be an HTML message.

Note that if you want to include a link back to your site in the HTML message, you can use the `URL` function. By default, however, the `URL` function generates a relative URL, which will not work from an email. To build an absolute URL you must specify the `scheme` and `host` arguments in the URL function.
For example:

```
<a href="{{=URL(..., scheme=True, host=True)}}">Click here</a>
```

or

```
<a href="{{=URL(..., scheme='http', host='www.site.com')}}">Click here</a>
```

The same mechanism used to generate email text can also be used to generate SMS messages or any other kind of template-based message.

### Sending messages using a background task¶

Sending an email can take several seconds because of the need to authenticate against, and communicate with, a possibly remote SMTP server. To keep the user from having to wait for the send operation to complete, it is sometimes desirable to queue the email to be sent later via a background task. As described in Chapter 4, this can be done by setting up a homemade task queue or by using the scheduler (a scheduler-based sketch closes this section). Here we give an example using a homemade task queue.

First, in a model file in our application, we define a database model to store our email queue:

```
db.define_table('cola',
                Field('estado'),
                Field('direccion'),
                Field('asunto'),
                Field('mensaje'))
```

From a controller we can enqueue messages to be sent with:

```
db.cola.insert(estado='pendiente',
               direccion='<EMAIL>',
               asunto='prueba',
               mensaje='prueba')
```

Next, we need a processing script that reads the queue and sends the emails:

```
## in file /app/private/cola_mails.py
import time
while True:
    registros = db(db.cola.estado == 'pendiente').select()
    for registro in registros:
        if mail.send(to=registro.direccion,
                     subject=registro.asunto,
                     message=registro.mensaje):
            registro.update_record(estado='enviado')
        else:
            registro.update_record(estado='falla')
        db.commit()
    time.sleep(60)  # check every minute
```

Finally, as described in Chapter 4, we need to run the script cola_mails.py as if it were inside a controller in our app:

```
python web2py.py -S app -M -N -R applications/app/private/cola_mails.py
```

where `-S app` tells web2py to run "cola_mails.py" as "app", `-M` tells it to execute the models, and `-N` tells it not to run cron. Here we assume that the `mail` object is defined in a model, which is why we pass `-M`. Also note that it is important to commit each change as soon as possible, to avoid locking the database for other concurrent processes.

As noted in Chapter 4, this kind of background process should not be run via cron (except perhaps for cron @reboot), because you need to make sure that no more than one instance is running at the same time.

One drawback of sending email via a background process is that it makes it hard to inform the user when the send fails. If the email is sent directly from the controller action, you can catch any error and immediately return an error message to the user. With a background task, however, the email is sent asynchronously, after the controller action has already returned its response, so it becomes harder to report a failure back to the user.
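For comparison, here is a minimal sketch of the scheduler-based alternative mentioned above; the task and variable names are illustrative, and Chapter 4 covers how to start the workers:

```
from gluon.scheduler import Scheduler

def enviar_correo(direccion, asunto, mensaje):
    # executed by a background worker started with: python web2py.py -K app
    return mail.send(to=direccion, subject=asunto, message=mensaje)

scheduler = Scheduler(db)

# from a controller action, queue the task instead of sending inline:
# scheduler.queue_task(enviar_correo,
#                      pvars=dict(direccion='<EMAIL>',
#                                 asunto='prueba', mensaje='prueba'))
```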
### Reading and managing email boxes (Experimental)¶

The `IMAP` adapter is intended as an interface to IMAP servers, allowing simple queries with the `DAL` database query syntax, so that services such as reading, searching and other IMAP-related tasks offered by providers like Google(r) and Yahoo(r) can be managed from web2py applications.

The adapter creates its tables and fields "statically", meaning that the developer should delegate the definition of the tables and fields to the DAL instance by calling the `.define_tables()` method. The tables are defined from the list of mailboxes (folders) reported by the mail server.

# Connection¶

For a single email account, this is the recommended code to start IMAP support in your app's model:

```
# Replace the user, password, server and port in the
# connection string
# Set the port to 993 for SSL support
imapdb = DAL("imap://usuario:contraseña@servidor:puerto", pool_size=1)
imapdb.define_tables()
```

Note that `<imapdb>.define_tables()` returns a dictionary of strings mapping DAL table names to server mailbox names, with the structure `{<tablename>: <server mailbox name>, ...}`, so you can get the actual mailbox name on the IMAP server.

If you want to set your own table/mailbox name configuration and skip the automatic naming, you can pass a custom dictionary to the adapter like this:

```
imapdb.define_tables({"inbox": "BANDEJA", "basura": "SPAM"})
```

To handle the different native mailbox names in the user interface, the following attributes give access to the names mapped automatically by the adapter (which mailbox name has which table name, and vice versa):

Attribute | Type | Format
--- | --- | ---
imapdb.mailboxes | dict | `{<tablename>: <server mailbox name>, ...}`
imapdb.<table>.mailbox | string | the native mailbox name on the server

The first attribute can be useful to retrieve Set instances using the server's native mailbox name:

```
# mailbox is a string holding the actual mailbox (folder) name
nombresdetabla = dict([(v, k) for k, v in imapdb.mailboxes.items()])
miset = imapdb(imapdb[nombresdetabla[mailbox]])
```

# Fetching messages and updating flags¶

Here is a list of IMAP commands you can use in a controller. The examples assume that your service has a mailbox named `INBOX`, as is the case with Gmail(r) accounts.

To count today's unseen messages smaller than 6000 octets in the inbox, you can do:

```
import datetime

q = imapdb.INBOX.seen == False
q &= imapdb.INBOX.created == datetime.date.today()
q &= imapdb.INBOX.size < 6000
nuevos = imapdb(q).count()
```

You can fetch the messages from the previous query with:

```
mensajes = imapdb(q).select()
```

The adapter implements the usual database query operators, including belongs:

```
mensajes = imapdb(imapdb.INBOX.uid.belongs(<secuencia de uid>)).select()
```

Note: it is strongly advised that you keep the number of query results below a given threshold, to avoid swamping the server with overly long select commands.
For more efficient message queries, it is recommended to pass a filtered set of fields:

```
fields = ["INBOX.uid", "INBOX.sender", "INBOX.subject", "INBOX.created"]
mensajes = imapdb(q).select(*fields)
```

The adapter knows how to fetch partial sections of messages (some fields, such as `content`, `size` and `attachments`, require downloading the complete message data).

You can filter select results with the limitby parameter and sequences of mailbox fields:

```
# Replace the arguments with your own values
miset.select(<secuencia de campos>, limitby=(<int>, <int>))
```

Suppose, for example, that you want an action in your app to display one message from a mailbox. First we fetch the message (if it is supported by your IMAP service, fetch messages by the `uid` field, to avoid relying on possibly wrong sequence numbers):

```
mimensaje = imapdb(imapdb.INBOX.uid == <uid>).select().first()
```

Otherwise, you can use the message's `id`:

```
mimensaje = imapdb.INBOX[<id>]
```

Note that using the message id as a reference is not recommended, because sequence numbers can change when maintenance operations such as deleting messages are performed. If you nevertheless want to record message reference values (for example in a field of another database record), the solution is to use the uid field as the reference, as long as it is supported, and fetch each message by the recorded value.

Finally, add something like the following to display the message content in a view:

```
{{=P(T("Message from"), " ", mimensaje.sender)}}
{{=P(T("Received on"), " ", mimensaje.created)}}
{{=H5(mimensaje.subject)}}
{{for texto in mimensaje.content:}}
  {{=DIV(texto)}}
  {{=HR()}}
{{pass}}
```

Naturally, we can take advantage of the `SQLTABLE` helper to build message lists in views:

```
{{=SQLTABLE(miset.select(), linkto=URL(...))}}
```

And of course, a message's sequence id can be passed to a form helper:

```
{{=SQLFORM(imapdb.INBOX, <id del mensaje>, fields=[...])}}
```

The fields currently supported by the adapter are the following:

Field | Type | Description
--- | --- | ---
uid | string | message unique identifier
answered | boolean | flag (used to mark messages)
created | date | date
content | list:string | a list of the plain-text or html parts of the message
to | string | recipient
cc | string | carbon copy
bcc | string | blind carbon copy
size | integer | the number of octets of the message*
deleted | boolean | flag
draft | boolean | flag
flagged | boolean | flag
sender | string | sender
recent | boolean | flag
seen | boolean | flag
subject | string | message subject
mime | string | the mime header declaration
email | string | the complete message per RFC822**
attachments | list | each non-plain-text part of the message, as a dictionary
encoding | string | the main character encoding detected

*On the application side it is measured as the length of the string containing the RFC822 message.

**WARNING: since record ids are mapped to sequence numbers, you must make sure that your web2py IMAP client app does not delete messages while performing actions involving select or update queries, to prevent updating or deleting the wrong messages.
Typical `CRUD` operations are not supported. There is no way to define custom fields or tables and do inserts with different data types, because updating mailboxes with IMAP services is usually limited to setting flags on the server. Nevertheless, those flag-update commands are available through the IMAP DAL interface.

To mark the messages from the previous query as seen:

```
revisados = imapdb(q).update(seen=True)
```

Here we delete the messages in the IMAP database whose sender is Mr. Gumby:

```
eliminados = 0
for nombredetabla in imapdb.tables():
    eliminados += imapdb(imapdb[nombredetabla].sender.contains("gumby")).delete()
```

It is also possible to mark messages for deletion instead of deleting them directly, with:

```
miset.update(deleted=True)
```

# Chapter 9: Access Control

* Access Control
* Authentication
* Restrictions on registration
* Integration with OpenID, Facebook, etc.
* CAPTCHA and reCAPTCHA
* Customizing Auth
* Customizing Auth table names
* Other login methods and login forms
* Record versioning
* Mail and Auth
* Authorization
* Decorators
* Combining requirements
* CRUD and Authorization
* Authorization and downloads
* Access control and Basic authentication
* Manual authentication
* Settings and messages
* Central Authentication Service

## Access Control¶

web2py includes a powerful and customizable Role Based Access Control (RBAC) mechanism. Here is a definition from Wikipedia:

"... role-based access control (RBAC) is an approach to restricting system access to authorized users. It is a newer alternative to mandatory access control (MAC) and discretionary access control (DAC). RBAC is sometimes referred to as role-based security.

RBAC is a policy-neutral and flexible access control technology, sufficiently powerful to simulate DAC and MAC. Conversely, MAC can simulate RBAC if the role graph is restricted to a tree rather than a partially ordered set. Prior to the development of RBAC, MAC and DAC were considered to be the only known models for access control: if a model was not MAC, it was considered to be a DAC model, and vice versa. Research in the late 1990s demonstrated that RBAC falls in neither category.

Within an organization, roles are created for various job functions. The permissions to perform certain operations are assigned to specific roles. Members of staff (or other system users) are assigned particular roles, and through those role assignments acquire the permissions to perform particular system functions. Unlike context-based access control (CBAC), RBAC does not look at the message context (such as a connection's source address).

Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of simply assigning appropriate roles to the user; this simplifies common operations, such as adding a user or changing a user's department.
RBAC differs from access control lists (ACLs), used in traditional discretionary access-control systems, in that it assigns permissions to specific operations with meaning in the organization, rather than to low-level data objects. For example, an access control list could be used to grant or deny write access to a particular system file, but it would not dictate how that file could be changed ..."

The web2py class that implements RBAC is called Auth. Auth needs (and defines) the following tables:

* `auth_user` stores the users' name, email address, password and status (registration pending, accepted, blocked).
* `auth_group` stores groups or roles for users in a many-to-many structure. By default, each user is in its own group, but a user can be in multiple groups, and each group can contain multiple users. A group is identified by a role and a description.
* `auth_membership` links users and groups in a many-to-many structure.
* `auth_permission` links groups and permissions. A permission is identified by a name and, optionally, a table and a record. For example, members of a certain group can have "update" permission on a specific record of a specific table.
* `auth_event` logs changes in the other tables and successful access, via CRUD, to objects controlled by RBAC.
* `auth_cas` is used for the Central Authentication Service (CAS). Every web2py application is a CAS provider and can optionally be a CAS consumer.

The access scheme is shown graphically in the image below:

In principle, there is no restriction on the names of roles or permissions; the developer can create them to match the naming requirements of the organization. Once they have been created, web2py provides an API to check whether a user is logged in, whether a user is a member of a given group, and/or whether the user is a member of any group that has a given permission.

web2py also provides decorators to restrict access to any function based on login, membership and permissions. Moreover, web2py understands some special permissions, namely those associated with the CRUD methods (create, read, update, delete), and can enforce them automatically without the need for decorators.

In this chapter, we will discuss the various parts of RBAC one by one.

### Authentication¶

To use RBAC, users need to be identified. This means that they must register (or be registered) and log in. Auth provides multiple login methods. The default one consists of identifying users against the local `auth_user` table. Alternatively, users can be registered via third-party authentication systems and single sign on providers such as Google, PAM, LDAP, Facebook, LinkedIn, Dropbox, OpenID, OAuth, etc.

To start using `Auth`, you need at least the following code in a model file; it ships by default with web2py's "welcome" application and assumes a connection object called `db`. The standard setup is sketched below. Auth also takes an optional argument `secure=True`, which forces authentication to go over HTTPS.
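A minimal sketch, matching the model code of the scaffolding application:

```
from gluon.tools import Auth
auth = Auth(db)
auth.define_tables(username=False, signature=False)
```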
The `password` field of the `db.auth_user` table defaults to a `CRYPT` validator, which requires an `hmac_key`. In legacy web2py applications you may see an extra argument passed to the Auth constructor: `hmac_key = Auth.get_or_create_key()`. The latter is a function that reads an HMAC key from "private/auth.key" in the application folder. If the file does not exist, it creates a random `hmac_key`. If the same auth database is shared by multiple applications, make sure they also use the same `hmac_key`. This is no longer necessary for new applications, since passwords are salted with an individual random salt.

By default, web2py uses the email address as the username for login. If you want to log users in by username instead, set `auth.define_tables(username=True)`. When the auth database is shared by multiple applications, you should disable migrations: `auth.define_tables(migrate=False)`.

To expose Auth, you also need the following function in a controller (for example in "default.py"):

```
def user():
    return dict(form=auth())
```

The `auth` object and the `user` action are already defined by default in the scaffolding application. web2py also includes a sample view "welcome/views/default/user.html" to render this function properly, similar to:

```
{{extend 'layout.html'}}
<h2>{{=T( request.args(0).replace('_',' ').capitalize() )}}</h2>
<div id="web2py_user_form">
  {{=form}}
  {{if request.args(0)=='login':}}
    {{if not 'register' in auth.settings.actions_disabled:}}
      <br/><a href="{{=URL(args='register')}}">register</a>
    {{pass}}
    {{if not 'request_reset_password' in auth.settings.actions_disabled:}}
      <br/>
      <a href="{{=URL(args='request_reset_password')}}">lost password</a>
    {{pass}}
  {{pass}}
</div>
```

Notice that this function simply displays a `form`, so it can be customized using the usual form notation. The one caveat is that the form produced by `form=auth()` depends on `request.args(0)`; therefore, if you replace the default `auth()` login form with a custom one, you may need an `if` statement in the view like this:

```
{{if request.args(0)=='login':}}...custom login form...{{pass}}
```

The controller above exposes multiple actions:

```
http://.../[app]/default/user/register
http://.../[app]/default/user/login
http://.../[app]/default/user/logout
http://.../[app]/default/user/profile
http://.../[app]/default/user/change_password
http://.../[app]/default/user/verify_email
http://.../[app]/default/user/retrieve_username
http://.../[app]/default/user/request_reset_password
http://.../[app]/default/user/reset_password
http://.../[app]/default/user/impersonate
http://.../[app]/default/user/groups
http://.../[app]/default/user/not_authorized
```

* register lets a visitor register. It integrates with CAPTCHA, although this is disabled by default. It also integrates with an entropy calculator defined in "web2py.js", which indicates the strength of the new password. You can use the `IS_STRONG` validator to prevent web2py from accepting weak passwords.
* login lets a registered user log in (if the registration has been verified or does not require verification, if it has been approved or does not require approval, and if the user has not been blocked).
* logout does what you would expect but, like the other methods, it logs the event and can be used to trigger additional actions on the event.
* profile lets users edit their profile, i.e. the contents of the `auth_user` table. Note that this table does not have a fixed structure and can be customized.
* change_password lets users change their password in a secure way.
* verify_email. If email verification is enabled, visitors, upon registration, receive an email with a link to verify their email address. The link points to this action.
* retrieve_username. By default, Auth uses email and password for login, but it can optionally use a username instead. In this latter case, if a user forgets their username, the `retrieve_username` method lets the user type their email address and have the username emailed to them.
* request_reset_password. Lets users who forgot their password request a new one. They receive a confirmation email linking to the reset_password action.
* impersonate lets a user temporarily adopt the credentials of, i.e. "impersonate", another user. This is useful for debugging purposes. `request.args[0]` is the id of the user to be impersonated. This is only allowed if the logged-in user passes `has_permission('impersonate', db.auth_user, user_id)`. You can use `auth.is_impersonating()` to check whether the current user is impersonating somebody else.
* groups lists the groups of which the current user is a member.
* not_authorized displays an error message when the visitor tried to do something they were not authorized to do.
* navbar is a helper that generates a bar with login/register/etc. links.

Logout, profile, change_password, impersonate and groups require a logged-in user. By default they are all exposed, but it is possible to restrict access to a subset of these actions.

All of the methods above can be extended or replaced by subclassing Auth.

All of the methods above can also be used in separate actions. For example:

```
def milogin():
    return dict(formulario=auth.login())
def miregistro():
    return dict(formulario=auth.register())
def miperfil():
    return dict(formulario=auth.profile())
...
```

To restrict access to a function to logged-in visitors only, decorate the function as in the following example:

```
@auth.requires_login()
def hola():
    return dict(message='hola %(first_name)s' % auth.user)
```

Any function can be decorated, not only exposed actions. Of course this is still only a very simple example of access control; more complicated examples are discussed later.

`auth.user` contains a copy of the `db.auth_user` record for the currently logged-in user, or `None` otherwise. There is also `auth.user_id`, which is the same as `auth.user.id` (i.e. the id of the currently logged-in user), or `None`. Similarly, `auth.user_groups` contains a dictionary whose keys are the ids of the groups the currently logged-in user is a member of, and whose values are the corresponding roles.

The `auth.requires_login()` decorator, as well as the other `auth.requires_*` decorators, take an optional `otherwise` argument.
It can be set to a string telling where to redirect the user if the login check fails (for example `otherwise=URL('default', 'index')`), or to a callable object, which is called when the check fails.

# Restrictions on registration¶

If you want to allow visitors to register but not to log in until their registration has been approved by the administrator:

```
auth.settings.registration_requires_approval = True
```

You can approve a registration via the appadmin interface. Look into the `auth_user` table. Pending registrations have a `registration_key` field set to "pending". A registration is approved when this field is set to the empty string.

Via the appadmin interface, you can also block a user from logging in. Locate the user in the `auth_user` table and set the `registration_key` to "blocked". "Blocked" users are not allowed to log in. Note that this will prevent a visitor from logging in, but it will not force a visitor who is already logged in to log out. The word "disabled" may be used instead of "blocked" if preferred, with the same effect.

You can also completely block access to the user registration page with the statement `auth.settings.actions_disabled.append('register')`.

If you want to allow everyone to register and automatically log them in after registration, but still want to send a verification email so that they cannot log in again after logout unless they complete the instructions in the email, you can do it as follows:

```
auth.settings.registration_requires_verification = True
auth.settings.login_after_registration = True
```

Other Auth methods can be restricted in the same way.

# Integration with OpenID, Facebook, etc.¶

You can use web2py's Role Based Access Control and authenticate against other services such as OpenID, Facebook, LinkedIn, Google, Dropbox, MySpace, Flickr, etc. The easiest way is to use Janrain Engage (formerly RPX) (Janrain.com).

Dropbox is discussed as a special case in Chapter 14, since it involves more than just login; it also provides storage services for logged-in users.

Janrain Engage is a service that provides middleware authentication. You can register at Janrain.com, register a domain (the name of your app) and the set of URLs you will be using, and it will provide you with an API key.

Now edit the model of your web2py application and place the following lines somewhere after the definition of the `auth` object:

```
from gluon.contrib.login_methods.rpx_account import RPXAccount
auth.settings.actions_disabled = ['register', 'change_password', 'request_reset_password']
auth.settings.login_form = RPXAccount(request,
    api_key='...',
    domain='...',
    url="http://your-external-address/%s/default/user/login" % request.application)
```

The first line imports the new login method, the second line disables local registration, and the third line tells web2py to use the RPX login method. You must insert your own `api_key` provided by Janrain.com, the domain you chose when registering the app, and the external `url` of your login page.

To obtain these values, log in to janrain.com and go to [Deployment][Application Settings]. On the right-hand side is the "Application Info"; the api_key is called "API Key (Secret)".
The domain is the "Application Domain" without the leading "https://" and without the trailing ".rpxnow.com/". For example: if you registered a website as "seguro.misitioweb.org", Janrain reports the Application Domain as "https://seguro.misitioweb.rpxnow.com".

When a new user logs in for the first time, web2py creates a new `db.auth_user` record for the user. It uses the `registration_id` field to store a unique id for the user. Almost every authentication method also provides a username, email, first name and last name, but that is not guaranteed. Which fields are provided depends on the login method chosen by the user. If the same user logs in twice using different authentication mechanisms (for example once with OpenID and then with Facebook), Janrain may not recognize them as the same person, since the same user may be assigned a different `registration_id`.

You can customize the mapping between the data provided by Janrain and the data stored in `db.auth_user`. Here is an example for Facebook:

```
auth.settings.login_form.mappings.Facebook = lambda profile: \
    dict(registration_id=profile["identifier"],
         username=profile["preferredUsername"],
         email=profile["email"],
         first_name=profile["name"]["givenName"],
         last_name=profile["name"]["familyName"])
```

The keys of the dictionary are fields in `db.auth_user` and the values are data entries in the profile object provided by Janrain. Look at the Janrain online documentation for details on the profile object.

Janrain also keeps statistics about your users' logins.

This login form is fully integrated with web2py's Role Based Access Control, so you can still create groups, assign memberships and permissions, block users, etc.

Janrain's free Basic service allows up to 2500 unique registered users per year. Supporting more users requires an upgrade to one of their paid service tiers.

If you prefer not to use Janrain and want to use a different login method (LDAP, PAM, Google, OpenID, OAuth/Facebook, LinkedIn, etc.), you can do so. The API for this is described later in this chapter.

# CAPTCHA and reCAPTCHA¶

To prevent spammers and bots from registering on your site, you may want to require a registration CAPTCHA. web2py supports reCAPTCHA[recaptcha] out of the box. This is because reCAPTCHA is very well designed, free, accessible (it can read the words to visitors), easy to set up, and does not require installing any third-party libraries.

This is what you need to do to use reCAPTCHA:

* Register with reCAPTCHA[recaptcha] and obtain the (PUBLIC_KEY, PRIVATE_KEY) pair for your account. These are just two strings.
* Append the following code to your model after the `auth` object is defined:

```
from gluon.tools import Recaptcha
auth.settings.captcha = Recaptcha(request, 'PUBLIC_KEY', 'PRIVATE_KEY')
```

reCAPTCHA may not work if you access the site from 'localhost' or '127.0.0.1', because it is restricted to working with publicly accessible sites only.

The `Recaptcha` constructor takes some optional arguments:

```
Recaptcha(..., use_ssl=True, error_message='invalid',
          label='Verify:', options='')
```

Note that `use_ssl` defaults to `False`.
`options` can be a configuration string, for example

```
options="theme:'white', lang:'es'"
```

More details: reCAPTCHA[recaptchagoogle] and customizing.

If you do not want to use reCAPTCHA, look into the definition of the `Recaptcha` class in "gluon/tools.py", since it is easy to hook into other CAPTCHA systems.

Note that `Recaptcha` is just a helper that extends `DIV`. It builds a dummy field that validates itself using the `reCaptcha` service, and therefore it can be used in any form, including user-defined FORMs:

```
formulario = FORM(INPUT(...), Recaptcha(...), INPUT(_type='submit'))
```

You can inject it into any kind of SQLFORM like this:

```
formulario = SQLFORM(...) or SQLFORM.factory(...)
formulario.element('table').insert(-1, TR('', Recaptcha(...), ''))
```

# Customizing `Auth`¶

The call to `auth.define_tables()` defines all the Auth tables that have not been defined already. This means that if you wish, you can define your own `auth_user` table.

There are a few different ways to customize auth. The simplest is to add extra fields:

```
## after auth = Auth(db)
auth.settings.extra_fields['auth_user'] = [
    Field('direccion'),
    Field('ciudad'),
    Field('codigo_postal'),
    Field('telefono')]
## before auth.define_tables(username=True)
```

You can declare extra fields not only for the "auth_user" table but for the other "auth_" tables as well. Using `extra_fields` is recommended, since it does not interfere with the internal mechanics.

Another way to do the same thing, though not recommended, consists of defining your own auth tables. If a table is declared before `auth.define_tables()`, it is used instead of the default one. This is done as follows:

```
## after auth = Auth(db)
db.define_table(
    auth.settings.table_user_name,
    Field('first_name', length=128, default=''),
    Field('last_name', length=128, default=''),
    Field('email', length=128, default='', unique=True),  # required
    Field('password', 'password', length=512,             # required
          readable=False, label='Password'),
    Field('address'),
    Field('city'),
    Field('zip'),
    Field('phone'),
    Field('registration_key', length=512,                 # required
          writable=False, readable=False, default=''),
    Field('reset_password_key', length=512,               # required
          writable=False, readable=False, default=''),
    Field('registration_id', length=512,                  # required
          writable=False, readable=False, default=''))

## do not forget the validators
auth_table_especial = db[auth.settings.table_user_name]  # get the custom auth table
auth_table_especial.first_name.requires = \
    IS_NOT_EMPTY(error_message=auth.messages.is_empty)
auth_table_especial.last_name.requires = \
    IS_NOT_EMPTY(error_message=auth.messages.is_empty)
auth_table_especial.password.requires = [IS_STRONG(), CRYPT()]
auth_table_especial.email.requires = [
    IS_EMAIL(error_message=auth.messages.invalid_email),
    IS_NOT_IN_DB(db, auth_table_especial.email)]
auth.settings.table_user = auth_table_especial  # tell auth to use the custom table
## before auth.define_tables()
```

You can add any field you wish, and you can change the validators, but you cannot remove the fields marked as "required" in this example. It is important that the "password", "registration_key", "reset_password_key" and "registration_id" fields have `readable=False` and `writable=False`, since visitors must not be allowed to tamper with them.
If you add a field called "username", it will be used in place of "email" for login. If you do, you will also need to add a validator:

```
auth_table.username.requires = IS_NOT_IN_DB(db, auth_table.username)
```

# Customizing `Auth` table names¶

The names of the `Auth` tables are stored in

```
auth.settings.table_user_name = 'auth_user'
auth.settings.table_group_name = 'auth_group'
auth.settings.table_membership_name = 'auth_membership'
auth.settings.table_permission_name = 'auth_permission'
auth.settings.table_event_name = 'auth_event'
```

The table names can be changed by reassigning the variables above after the `auth` object is defined and before its tables are defined. For example:

```
auth = Auth(db)
auth.settings.table_user_name = 'persona'
#...
auth.define_tables()
```

The tables can also be retrieved, independently of their actual names, with

```
auth.settings.table_user
auth.settings.table_group
auth.settings.table_membership
auth.settings.table_permission
auth.settings.table_event
```

# Other login methods and login forms¶

Auth provides multiple mechanisms and techniques for creating new login methods. Each supported login method has a corresponding file in the folder

```
gluon/contrib/login_methods/
```

Refer to the documentation in the files themselves for each login method, but here are some examples.

First of all, we need to make a distinction between two types of alternative login methods:

* login methods that use the web2py login form (even though the credentials are verified outside web2py). An example is LDAP.
* login methods that require an external single-sign-on form (examples are Google and Facebook).

In the latter case, web2py never gets the login credentials, only a login token issued by the service provider. The token is stored in `db.auth_user.registration_id`.

Let's consider examples of the first case:

# Basic¶

Say you have an authentication service, for example at the url

```
https://basico.example.com
```

that accepts basic access authentication. That means the server accepts HTTP requests with a header of the form:

```
GET /index.html HTTP/1.0
Host: basico.example.com
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```

where the latter string is the base64 encoding of the string username:password. The service responds 200 OK if the user is authorized and 400, 401, 402, 403 or 404 otherwise.

You want to collect the username and password using the standard `Auth` login form and verify the credentials against such a service. All you need to do is add the following code to your application:

```
from gluon.contrib.login_methods.basic_auth import basic_auth
auth.settings.login_methods.append(
    basic_auth('https://basico.example.com'))
```

`auth.settings.login_methods` is a list of authentication methods that are executed sequentially. By default it is set to

```
auth.settings.login_methods = [auth]
```

When an alternative method is appended, for example `basic_auth`, Auth first tries to log the visitor in based on the contents of `auth_user`, and, if that fails, it tries the next method in the list.
If a method succeeds in authenticating the visitor, and if `auth.settings.login_methods[0]==auth`, then `Auth` takes the following actions:

* if the user does not exist in `auth_user`, a new user is created, and the username/email and password are stored.
* if the user does exist in `auth_user` but the accepted password does not match the previously stored one, the old password is replaced with the new one (note that passwords are always stored hashed unless specified otherwise).

If you do not wish to store the new password in `auth_user`, it is sufficient to change the order of the login methods, or to remove `auth` from the list. For example:

```
from gluon.contrib.login_methods.basic_auth import basic_auth
auth.settings.login_methods = \
    [basic_auth('https://basico.example.com')]
```

The same applies to any other login method discussed here.

# SMTP and Gmail¶

You can verify login credentials against a remote SMTP server, for example Gmail; i.e., you log the user in if the email and password they provide are valid credentials for the Gmail SMTP server (`smtp.gmail.com:587`). All that is needed is the following code:

```
from gluon.contrib.login_methods.email_auth import email_auth
auth.settings.login_methods.append(
    email_auth("smtp.gmail.com:587", "@gmail.com"))
```

The first argument of `email_auth` is the address:port of the SMTP server. The second argument is the email domain. This works with any SMTP service that requires TLS authentication.

# PAM¶

Authentication using Pluggable Authentication Modules (PAM) works much like the previous cases. It allows web2py to authenticate users against operating system accounts:

```
from gluon.contrib.login_methods.pam_auth import pam_auth
auth.settings.login_methods.append(pam_auth())
```

# LDAP¶

Authentication using LDAP works very much like the previous cases.

To use LDAP login with MS Active Directory:

```
from gluon.contrib.login_methods.ldap_auth import ldap_auth
auth.settings.login_methods.append(ldap_auth(mode='ad',
    server='mi.dominio.controlador',
    base_dn='ou=Users,dc=domain,dc=com'))
```

To use LDAP login with Lotus Notes and Domino:

```
auth.settings.login_methods.append(ldap_auth(mode='domino',
    server='mi.servidor.domino'))
```

For OpenLDAP (with UID):

```
auth.settings.login_methods.append(ldap_auth(server='mi.servidor.ldap',
    base_dn='ou=Users,dc=domain,dc=com'))
```

For OpenLDAP (with CN):

```
auth.settings.login_methods.append(ldap_auth(mode='cn',
    server='mi.servidor.ldap', base_dn='ou=Users,dc=domain,dc=com'))
```

# Google App Engine¶

Authentication using Google when running on Google App Engine requires skipping the web2py login form, redirecting to the Google login page, and coming back upon success. Because the behavior is different than in the previous examples, the API is a little different:

```
from gluon.contrib.login_methods.gae_google_login import GaeGoogleAccount
auth.settings.login_form = GaeGoogleAccount()
```

# OpenID¶

We have already discussed integration with Janrain (which supports OpenID) and noted that it is the easiest way to use OpenID. Yet sometimes you do not want to rely on a third-party service, and you want to talk to the OpenID provider directly from the consumer, i.e. your application.
Here is an example:

```
from gluon.contrib.login_methods.openid_auth import OpenIDAuth
auth.settings.login_form = OpenIDAuth(auth)
```

`OpenIDAuth` requires the additional python-openid module to be installed. The method automagically defines the following table:

```
db.define_table('alt_logins',
    Field('username', length=512, default=''),
    Field('type', length=128, default='openid', readable=False),
    Field('user', self.table_user, readable=False))
```

which stores the OpenID usernames of each user. If you want to display the OpenIDs of the logged-in user:

```
{{=auth.settings.login_form.list_user_openids()}}
```

# OAuth2.0 and Facebook¶

We have previously discussed integration with Janrain (which supports Facebook), but sometimes you do not want to rely on a third-party service and want to access an OAuth2.0 provider directly; for example, Facebook. This is done as follows:

```
from gluon.contrib.login_methods.oauth20_account import OAuthAccount
auth.settings.login_form = OAuthAccount(YOUR_CLIENT_ID, YOUR_CLIENT_SECRET)
```

Things get a little more complicated when you want to use Facebook OAuth2.0 to log into a specific Facebook app and access its API, instead of logging into your own app. Here is an example of accessing the Facebook Graph API.

First of all, you must install the Facebook Python SDK. Second, you need the following code in your model:

```
# import the required modules
from facebook import GraphAPI, GraphAPIError
from gluon.contrib.login_methods.oauth20_account import OAuthAccount

# extend the OAuthAccount class
class FaceBookAccount(OAuthAccount):
    """OAuth impl for Facebook"""
    AUTH_URL = "https://graph.facebook.com/oauth/authorize"
    TOKEN_URL = "https://graph.facebook.com/oauth/access_token"

    def __init__(self, g):
        OAuthAccount.__init__(self, g,
                              YOUR_CLIENT_ID,
                              YOUR_CLIENT_SECRET,
                              self.AUTH_URL,
                              self.TOKEN_URL)
        self.graph = None

    # override the function that fetches the user info
    def get_user(self):
        "Returns the user from the Graph API"
        if not self.accessToken():
            return None
        if not self.graph:
            self.graph = GraphAPI((self.accessToken()))
        try:
            user = self.graph.get_object("me")
            return dict(first_name=user['first_name'],
                        last_name=user['last_name'],
                        username=user['id'])
        except GraphAPIError:
            self.session.token = None
            self.graph = None
            return None

# use the class defined above to build
# a new login form
auth.settings.login_form = FaceBookAccount()
```

# LinkedIn¶

We have previously discussed integration with Janrain (which supports LinkedIn), and that is the easiest way to use OAuth. Yet sometimes you do not want to rely on a third-party service, or you want to access LinkedIn directly to get more information than Janrain provides.

```
from gluon.contrib.login_methods.linkedin_account import LinkedInAccount
auth.settings.login_form = LinkedInAccount(request, KEY, SECRET, RETURN_URL)
```

`LinkedInAccount` requires the additional "python-linkedin" module to be installed.

# X509¶

You can also log in by passing the page an x509 certificate, and your credentials will be extracted from the certificate.
This requires `M2Crypto`, installed from:

```
http://chandlerproject.org/bin/view/Projects/MeTooCrypto
```

Once M2Crypto is installed, you can do:

```
from gluon.contrib.login_methods.x509_auth import X509Account
auth.settings.actions_disabled=['register','change_password','request_reset_password']
auth.settings.login_form = X509Account()
```

You can now authenticate to web2py by passing your x509 certificate. How to do so depends on the browser, but most likely you will use certificates for web services. In that case you can use, for example, `cURL` to test your login:

```
curl -d "firstName=John&lastName=Smith" -G -v --key private.key \
    --cert server.crt https://example/app/default/user/profile
```

This works out of the box with Rocket (the built-in web2py server), but you may need extra configuration on the server side if you use a different web server. In particular, you must tell your web server where the certificates are located on the host machine and that it needs to verify the certificates sent by clients. How to do this is web-server-specific, so we do not cover it here.

# Multiple login forms¶

Some login methods modify the login form, others do not. When they do, they may not be able to coexist. This can sometimes be solved by providing multiple login forms on the same page, and web2py provides a way to do so. Here is an example mixing the normal login (auth) and the RPX login (janrain.com):

```
from gluon.contrib.login_methods.extended_login_form import ExtendedLoginForm
otro_formulario = RPXAccount(request, api_key='...', domain='...', url='...')
auth.settings.login_form = ExtendedLoginForm(auth, otro_formulario, signals=['token'])
```

If signals are set and one of the parameters in the request matches any of them, the call is dispatched to `otro_formulario` instead. `otro_formulario` can handle some particular situations, for example, multiple login steps of an OpenID login inside `otro_formulario`. Otherwise it displays the normal login form together with `otro_formulario`.

# Record versioning¶

You can use Auth to enable record versioning:

```
auth.enable_record_versioning(db,
     archive_db=None,
     archive_names='%(tablename)s_archive',
     current_record='current_record')
```

This tells web2py to create an archive table for every table in `db` and to store a copy of each record whenever it is modified. What is stored is the old copy, not the new one.

The last three parameters are optional:

* `archive_db` lets you specify another database where the archive tables will be stored. Setting it to `None` is the same as specifying `db`.
* `archive_names` provides a naming pattern for each archive table.
* `current_record` specifies the name of the reference field used in the archive table to point to the original, unmodified record. Note that if `archive_db!=db`, then the reference field is just an integer field, since cross-database references are not supported.
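As a minimal sketch of the whole flow (the table name `pagina` and its field are assumptions), combining `auth.signature` with record versioning:

```
db.define_table('pagina', Field('cuerpo', 'text'), auth.signature)
auth.enable_record_versioning(db)

# on update, the old copy is stored in the automatically defined
# pagina_archive table, whose current_record field points to the live row
id_pagina = db.pagina.insert(cuerpo='first draft')
db(db.pagina.id == id_pagina).update(cuerpo='second draft')
```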
Record versioning is only performed for tables with `modified_by` and `modified_on` fields (such as those created, for example, by auth.signature).

When you enable record versioning, if records have an `is_active` field (also created by auth.signature), records are never deleted; instead, they are marked with `is_active=False`. In fact, enabling record versioning adds a `common_filter` to every versioned table that excludes records with `is_active=False`, so they are no longer visible.

If you enable record versioning, you should not use `auth.archive` or `crud.archive`, or you will end up with duplicate archived records. Those functions perform the same task explicitly and will be deprecated, while record versioning does it automatically.

`Mail` and `Auth`¶

One can define a mail sending engine with `mail = Mail()`, or simply use the mailer provided with `auth` (`mail = auth.settings.mailer`).

You must replace the `mail.settings` values with the proper parameters for your SMTP server. Set `mail.settings.login = None` if the SMTP server does not require authentication. If you do not want to use TLS, set `mail.settings.tls = False`.

You can read more about the email API and its configuration in Chapter 8. Here we limit the discussion to the interaction between `Mail` and `Auth`.

In `Auth`, by default, email verification is disabled. To enable it, add the following lines to the model where `auth` is defined:

```
auth.settings.registration_requires_verification = True
auth.settings.registration_requires_approval = False
auth.settings.reset_password_requires_verification = True
auth.messages.verify_email = 'Click on the link http://' + \
    request.env.http_host + \
    URL(r=request,c='default',f='user',args=['verify_email']) + \
    '/%(key)s to verify your email address'
auth.messages.reset_password = 'Click on the link http://' + \
    request.env.http_host + \
    URL(r=request,c='default',f='user',args=['reset_password']) + \
    '/%(key)s to reset your password'
```

In the two `auth.messages` above, you may need to replace the URL portion of the string with the proper, absolute URL of the action. This is because web2py may be installed behind a proxy, and it cannot determine its own URL with absolute certainty. The examples above (which are the default values) should nevertheless work in most cases.

### Authorization¶

Once a new user is registered, a new group is created to contain that user. The role of the new user is, by convention, "user_[id]" where [id] is the id of the newly created user. Creation of the group can be disabled with

```
auth.settings.create_user_groups = None
```

although we do not suggest doing so.

Note that `create_user_groups` is not a boolean (although it can be set to `False`); its default value is:

```
auth.settings.create_user_groups="user_%(id)s"
```

It stores a template for the name of the group created for a given user `id`.

Users have membership in groups. Each group is identified by a name/role. Groups have permissions. Users have permissions because of the groups they belong to. By default, each user is made a member of their own group. You can also set

```
auth.settings.everybody_group_id = 5
```

to make every user automatically a member of group number 5. Here 5 is used as an example, and we assume the group has been created beforehand.
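As a minimal sketch of that one-time setup (the role name "todos" is an assumption), using the `auth.add_group` API described next:

```
# one-time setup (for example from the web2py shell):
id_todos = auth.add_group('todos', 'group that contains every user')
# in the model, make every user an automatic member of that group
# (here we reuse the id returned above)
auth.settings.everybody_group_id = id_todos
```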
You can create groups, and assign memberships and permissions, via appadmin or programmatically using the following methods:

```
auth.add_group('rol', 'descripción')
```

returns the id of the newly created group.

```
auth.del_group(id_grupo)
```

deletes the group with id `id_grupo`.

```
auth.del_group(auth.id_group('user_7'))
```

deletes the group with role "user_7", i.e. the group uniquely associated with user number 7.

```
auth.user_group(id_usuario)
```

returns the id of the group uniquely associated with the user identified by `id_usuario`.

```
auth.add_membership(id_grupo, id_usuario)
```

makes the user `id_usuario` a member of the group `id_grupo`. If the user is not specified, web2py assumes the currently logged-in user.

```
auth.del_membership(id_grupo, id_usuario)
```

revokes the membership of user `id_usuario` in group `id_grupo`. If the user is not specified, web2py assumes the currently logged-in user.

```
auth.has_membership(id_grupo, id_usuario, rol)
```

checks whether user `id_usuario` is a member of the group `id_grupo` or of the group with the given role. Only `id_grupo` or `rol` should be passed to the function, not both. If the user is not specified, web2py assumes the currently logged-in user.

```
auth.add_permission(id_grupo, 'nombre', 'objeto', id_registro)
```

grants permission "nombre" (user-defined) on "objeto" (also user-defined) to members of the group `id_grupo`. If "objeto" is a table name, then the permission can refer to the entire table (by setting `id_registro` to zero) or to a specific record (by setting `id_registro` to a value greater than zero). When granting permissions on tables, it is common practice to use names from the set ('create', 'read', 'update', 'delete', 'select'); these permissions are treated specially and enforced out of the box by the CRUD APIs.

If `id_grupo` is zero, web2py uses the group uniquely associated with the currently logged-in user. You can also use

```
auth.id_group(role="...")
```

to get the id of a group from its name.

```
auth.del_permission(id_grupo, 'nombre', 'objeto', id_registro)
```

revokes the permission.

```
auth.has_permission('nombre', 'objeto', id_registro, id_usuario)
```

checks whether the user identified by `id_usuario` has membership in a group holding the requested permission.

```
registro = db(auth.accessible_query('read', db.mitabla, id_usuario))\
        .select(db.mitabla.ALL)
```

returns all records of table "mitabla" that the user `id_usuario` has "read" permission on. If the user is not specified, web2py assumes the currently logged-in user. `accessible_query` can be combined with other queries to build more complex ones. `accessible_query` is the only Auth method that requires a JOIN, so it is not usable on Google App Engine.
Assuming the following definitions:

```
>>> from gluon.tools import Auth
>>> auth = Auth(db)
>>> auth.define_tables()
>>> secretos = db.define_table('documento', Field('cuerpo'))
>>> james_bond = db.auth_user.insert(first_name='James',
                                     last_name='Bond')
```

```
>>> doc_id = db.documento.insert(cuerpo = 'confidencial')
>>> agentes = auth.add_group(role = 'Agente secreto')
>>> auth.add_membership(agentes, james_bond)
>>> auth.add_permission(agentes, 'read', secretos)
>>> print auth.has_permission('read', secretos, doc_id, james_bond)
True
>>> print auth.has_permission('update', secretos, doc_id, james_bond)
False
```

# Decorators¶

The most common way to check permissions is not via explicit calls to the methods above, but by decorating functions so that permissions are checked against the logged-in visitor. Here are some examples:

```
def funcion_uno():
    return 'this is a public function'

@auth.requires_login()
def funcion_dos():
    return 'this requires login'

@auth.requires_membership('agentes')
def funcion_tres():
    return 'you are a secret agent'

@auth.requires_permission('read', secretos)
def funcion_cuatro():
    return 'you have permission to read secret documents'

@auth.requires_permission('delete', 'archivos todos')
def funcion_cinco():
    import os
    for file in os.listdir('./'):
        os.unlink(file)
    return 'all files deleted'

@auth.requires(auth.user_id==1 or request.client=='127.0.0.1', requires_login=True)
def funcion_seis():
    return 'you can read secret documents'

@auth.requires_permission('sumar', 'número')
def sumar(a, b):
    return a + b

def funcion_siete():
    return sumar(3, 4)
```

The condition argument of `auth.requires(condición)` can be a callable and, unless the condition is simple, it is better to pass a callable than a plain condition, because it is faster: the condition is then only evaluated when needed. For example:

```
@auth.requires(lambda: comprobar_condicion())
def accion():
    ....
```

`@auth.requires` also takes an optional `requires_login` argument, which defaults to `True`. If set to False, it does not require login before evaluating the true/false condition. The condition can be a boolean value or a function evaluating to a boolean.

Note that access to all the functions, except the first and the last one, is restricted based on the permissions of the user who made the request. If no user is logged in, the permissions cannot be checked; the visitor is redirected to the login page and then back to the page that required the permissions.

# Combining requirements¶

Occasionally, it is necessary to combine requirements. This can be done via the generic `requires` decorator, which takes a single argument, a true-or-false condition. For example, to give access to agents, but only on Tuesdays, you can decorate with `@auth.requires(auth.has_membership(role='agentes') and request.now.weekday()==1)`, or with the equivalent condition that uses the group id instead of the role.

# CRUD and Authorization¶

Using decorators and/or explicit checks is one way to implement access control. Another way is to always use CRUD (as opposed to `SQLFORM`) to access the database, and to ask CRUD to enforce access control on database tables and records.
This is done by linking `Auth` and `CRUD` with the following statement:

```
crud.settings.auth = auth
```

This will prevent the visitor from performing any CRUD operation unless they are logged in and have the appropriate permission. For example, to allow a visitor to post comments, but only update their own comments (assuming crud, auth and db.comentario are defined):

```
def otorgar_permiso_crear(formulario):
    id_grupo = auth.id_group('user_%s' % auth.user.id)
    auth.add_permission(id_grupo, 'read', db.comentario)
    auth.add_permission(id_grupo, 'create', db.comentario)
    auth.add_permission(id_grupo, 'select', db.comentario)

def otorgar_permiso_actualizar(formulario):
    id_comentario = formulario.vars.id
    id_grupo = auth.id_group('user_%s' % auth.user.id)
    auth.add_permission(id_grupo, 'update', db.comentario, id_comentario)
    auth.add_permission(id_grupo, 'delete', db.comentario, id_comentario)

auth.settings.register_onaccept = otorgar_permiso_crear
crud.settings.auth = auth

def publicar_comentario():
    formulario = crud.create(db.comentario, onaccept=otorgar_permiso_actualizar)
    comentarios = db(db.comentario).select()
    return dict(formulario=formulario, comentarios=comentarios)

def actualizar_comentario():
    formulario = crud.update(db.comentario, request.args(0))
    return dict(formulario=formulario)
```

You can also select specific records (those for which 'read' access is granted):

```
def publicar_comentario():
    formulario = crud.create(db.comentario, onaccept=otorgar_permiso_actualizar)
    consulta = auth.accessible_query('read', db.comentario, auth.user.id)
    comentarios = db(consulta).select(db.comentario.ALL)
    return dict(formulario=formulario, comentarios=comentarios)
```

The permission names enforced by `crud.settings.auth` are: "read", "create", "update", "delete", "select", "impersonate".

# Authorization and downloads¶

The use of decorators and of `crud.settings.auth` does not enforce authorization on files downloaded via the usual download function. If needed, one must explicitly declare which "upload" fields contain files that require access control on download. For example:

```
db.define_table('perro',
   Field('miniatura', 'upload'),
   Field('imagen', 'upload'))

db.perro.imagen.authorization = lambda registro: \
    auth.is_logged_in() and \
    auth.has_permission('read', db.perro, registro.id, auth.user.id)
```

The `authorization` attribute of an upload field can be None (the default) or a function that checks credentials and/or permissions for the requested data. In this example, the function checks that the visitor is logged in and has 'read' permission on the current record. Also, in this particular case, there is no restriction on downloading images linked to the "miniatura" field, while access control is required for images linked to the "imagen" field.

# Access control and Basic authentication¶

Occasionally, it may be necessary to expose actions that have access-control decorators as services; for example, to call them from a script or program, while still using the authentication service to check the access credentials.
Auth supports Basic access authentication:

```
auth.settings.allow_basic_login = True
```

With that set, an action such as

```
@auth.requires_login()
def la_hora():
    import time
    return time.ctime()
```

can be called, for example, from a shell command:

```
wget --user=[username] --password=[password] http://.../[app]/[controller]/la_hora
```

It is also possible to log in by calling `auth.basic()` instead of using an `@auth` decorator:

```
def la_hora():
    import time
    auth.basic()
    if auth.user:
        return time.ctime()
    else:
        return 'Not authorized'
```

Basic login is often the only option for services (covered in the next chapter), but it is disabled by default.

# Manual authentication¶

Sometimes you need to implement your own algorithms and perform a "manual" login. This is also supported, by calling the following function:

```
user = auth.login_bare(usuario, contraseña)
```

`login_bare` returns the user object if the user exists and the password is valid, otherwise it returns False. `username` is the email address if the `auth_user` table has no `username` field.

# Settings and messages¶

Here is a list of all the parameters that can be customized for Auth.

For `auth` to be able to send emails, the following must point to a `gluon.tools.Mail` object:

```
auth.settings.mailer = None
```

The following must be the name of the controller that defines the `user` action:

```
auth.settings.controller = 'default'
```

The following is a very important parameter:

```
auth.settings.hmac_key = None
```

It must be set to something like "sha512:a-pass-phrase", and it is passed to the CRYPT validator for the "password" field of the `auth_user` table. It determines the algorithm and pass-phrase used to hash the passwords.

By default, auth also requires a minimum password length of 4 characters. This can be changed:

```
auth.settings.password_min_length = 4
```

To disable an action, append its name to the following list:

```
auth.settings.actions_disabled = []
```

For example, `auth.settings.actions_disabled.append('register')` will disable user registration.

If you want to receive an email to verify registration, set this to `True`:

```
auth.settings.registration_requires_verification = False
```

To log in new users automatically right after registration, even if they have not completed the email verification process, set the following to `True`:

```
auth.settings.login_after_registration = False
```

If newly registered users must wait for approval before being able to log in, set this to `True`:

```
auth.settings.registration_requires_approval = False
```

Approval consists of setting `registration_key==''` via appadmin or programmatically.

If you do not want a new group created for each user, disable it:

```
auth.settings.create_user_groups = None
```

The following settings select alternative login methods or login forms, as discussed above:

```
auth.settings.login_methods = [auth]
auth.settings.login_form = auth
```

Do you need to allow Basic login?
```
auth.settings.allow_basic_login = False
```

The following URL is the login action:

```
auth.settings.login_url = URL('user', args='login')
```

If a user tries to access the register page while already logged in, they are redirected to this URL:

```
auth.settings.logged_url = URL('user', args='profile')
```

This must point to the URL of the download action, in case the profile contains images:

```
auth.settings.download_url = URL('download')
```

These must point to the URLs you want to redirect your users to after each of the `auth` actions (in case no specific redirect or referrer parameter was set):

```
auth.settings.login_next = URL('index')
auth.settings.logout_next = URL('index')
auth.settings.profile_next = URL('index')
auth.settings.register_next = URL('user', args='login')
auth.settings.retrieve_username_next = URL('index')
auth.settings.retrieve_password_next = URL('index')
auth.settings.change_password_next = URL('index')
auth.settings.request_reset_password_next = URL('user', args='login')
auth.settings.reset_password_next = URL('user', args='login')
auth.settings.verify_email_next = URL('user', args='login')
```

If a visitor is not logged in and calls a function that requires login, the visitor is redirected to `auth.settings.login_url`, which defaults to `URL('default', 'user/login')`. One can change this behavior by redefining:

```
auth.settings.on_failed_authentication = lambda url: redirect(url)
```

This is the function called for the redirection. The `url` argument passed to this function is the URL of the login page.

If the visitor does not have permission to access a given function, they are redirected to the URL defined by

```
auth.settings.on_failed_authorization = \
    URL('user',args='on_failed_authorization')
```

You can change this variable and redirect the user elsewhere. Often you will want `on_failed_authorization` to be a URL, but it can also take a function that returns the URL, which is called on failed authorization.

There are lists of callbacks to be executed after form validation for each of the corresponding actions, and before any database I/O:

```
auth.settings.login_onvalidation = []
auth.settings.register_onvalidation = []
auth.settings.profile_onvalidation = []
auth.settings.retrieve_password_onvalidation = []
auth.settings.reset_password_onvalidation = []
```

Each callback can be a function that takes the `form` object, and it can modify the attributes of the form before the changes are written to the database.
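For example, here is a minimal sketch (the domain and function name are assumptions) that rejects registrations from a given mail domain before any database write:

```
def comprobar_dominio(form):
    # setting form.errors marks the form as invalid,
    # so the registration is never written to the database
    if form.vars.email and form.vars.email.endswith('@bloqueado.example.com'):
        form.errors.email = 'registrations from this domain are not accepted'

auth.settings.register_onvalidation.append(comprobar_dominio)
```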
There are lists of callbacks to be executed after the database I/O and before redirection:

```
auth.settings.login_onaccept = []
auth.settings.register_onaccept = []
auth.settings.profile_onaccept = []
auth.settings.verify_email_onaccept = []
```

For example:

```
auth.settings.register_onaccept.append(lambda formulario:\
   mail.send(to='<EMAIL>',subject='new user',
   message='new user email is %s'%formulario.vars.email))
```

You can enable captcha for any of the `auth` actions:

```
auth.settings.captcha = None
auth.settings.login_captcha = None
auth.settings.register_captcha = None
auth.settings.retrieve_username_captcha = None
auth.settings.retrieve_password_captcha = None
```

If `.captcha` points to a captcha object (such as `gluon.tools.Recaptcha`), all forms whose corresponding option (such as `.login_captcha`) is set to `None` will have a captcha, while those whose corresponding option is set to `False` will not. If, instead, `.captcha` is set to `None`, only forms whose corresponding option points to a captcha object will have one; the others will not.

This is the session expiration time:

```
auth.settings.expiration = 3600  # seconds
```

You can change the name of the password field (in Firebird, for example, "password" is a reserved word and cannot be used as a field name):

```
auth.settings.password_field = 'password'
```

Normally the login form tries to validate the email format. This can be disabled by changing the setting:

```
auth.settings.login_email_validate = True
```

Do you want to show the record id on the profile edit page?

```
auth.settings.showid = False
```

For custom forms, you may want the automatic error notifications on forms disabled:

```
auth.settings.hideerror = False
```

Also for custom forms, you can change the style:

```
auth.settings.formstyle = 'table3cols'
```

(it can be "table2cols", "divs" and "ul")

And you can set a label separator for the forms generated by auth:

```
auth.settings.label_separator = ':'
```

By default, the login form offers the option to extend the login session via a "remember me" option.
The expiration time can be changed, and the option can be disabled, with these parameters:

```
auth.settings.long_expiration = 3600*24*30 # one month
auth.settings.remember_me_form = True
```

You can also customize the following messages, whose use and context should be obvious:

```
auth.messages.submit_button = 'Submit'
auth.messages.verify_password = 'Verify Password'
auth.messages.delete_label = 'Check to delete:'
auth.messages.function_disabled = 'Function disabled'
auth.messages.access_denied = 'Insufficient privileges'
auth.messages.registration_verifying = 'Registration needs verification'
auth.messages.registration_pending = 'Registration is pending approval'
auth.messages.login_disabled = 'Login disabled by administrator'
auth.messages.logged_in = 'Logged in'
auth.messages.email_sent = 'Email sent'
auth.messages.unable_to_send_email = 'Unable to send email'
auth.messages.email_verified = 'Email verified'
auth.messages.logged_out = 'Logged out'
auth.messages.registration_successful = 'Registration successful'
auth.messages.invalid_email = 'Invalid email'
auth.messages.unable_send_email = 'Unable to send email'
auth.messages.invalid_login = 'Invalid login'
auth.messages.invalid_user = 'Invalid user'
auth.messages.is_empty = "Cannot be empty"
auth.messages.mismatched_password = "Password fields don't match"
auth.messages.verify_email = ...
auth.messages.verify_email_subject = 'Email verification'
auth.messages.username_sent = 'Your username was emailed to you'
auth.messages.new_password_sent = 'A new password was emailed to you'
auth.messages.password_changed = 'Password changed'
auth.messages.retrieve_username = 'Your username is: %(username)s'
auth.messages.retrieve_username_subject = 'Username retrieve'
auth.messages.retrieve_password = 'Your password is: %(password)s'
auth.messages.retrieve_password_subject = 'Password retrieve'
auth.messages.reset_password = ...
auth.messages.reset_password_subject = 'Password reset'
auth.messages.invalid_reset_password = 'Invalid reset password'
auth.messages.profile_updated = 'Profile updated'
auth.messages.new_password = 'New password'
auth.messages.old_password = 'Old password'
auth.messages.group_description = \
    'Group uniquely assigned to user %(id)s'
auth.messages.register_log = 'User %(id)s Registered'
auth.messages.login_log = 'User %(id)s Logged-in'
auth.messages.logout_log = 'User %(id)s Logged-out'
auth.messages.profile_log = 'User %(id)s Profile updated'
auth.messages.verify_email_log = 'User %(id)s Verification email sent'
auth.messages.retrieve_username_log = 'User %(id)s Username retrieved'
auth.messages.retrieve_password_log = 'User %(id)s Password retrieved'
auth.messages.reset_password_log = 'User %(id)s Password reset'
auth.messages.change_password_log = 'User %(id)s Password changed'
auth.messages.add_group_log = 'Group %(group_id)s created'
auth.messages.del_group_log = 'Group %(group_id)s deleted'
auth.messages.add_membership_log = None
auth.messages.del_membership_log = None
auth.messages.has_membership_log = None
auth.messages.add_permission_log = None
auth.messages.del_permission_log = None
auth.messages.has_permission_log = None
auth.messages.label_first_name = 'First name'
auth.messages.label_last_name = 'Last name'
auth.messages.label_username = 'Username'
auth.messages.label_email = 'E-mail'
auth.messages.label_password = 'Password'
auth.messages.label_registration_key = 'Registration key'
auth.messages.label_reset_password_key = 'Reset password key'
auth.messages.label_registration_id = 'Registration identifier'
auth.messages.label_role = 'Role'
auth.messages.label_description = 'Description'
auth.messages.label_user_id = 'User ID'
auth.messages.label_group_id = 'Group ID'
auth.messages.label_name = 'Name'
auth.messages.label_table_name = 'Table name'
auth.messages.label_record_id = 'Record ID'
auth.messages.label_time_stamp = 'Timestamp'
auth.messages.label_client_ip = 'Client IP'
auth.messages.label_origin = 'Origin'
auth.messages.label_remember_me = "Remember me (for 30 days)"
```

The `add|del|has` membership logs allow the use of "%(user_id)s" and "%(group_id)s". The `add|del|has` permission logs allow the use of "%(user_id)s", "%(name)s", "%(table_name)s", and "%(record_id)s".

### Central Authentication Service¶

web2py provides support for third-party authentication and single sign-on. Here we describe the Central Authentication Service (CAS), which is an industry standard; both client and server are built into web2py.

CAS is an open protocol for distributed authentication, and it works in the following way: when a visitor arrives at our web site, our application checks in the session whether the user has already been authenticated (for example, via a `session.token` object). If the user has not been authenticated, the controller redirects the visitor to the CAS appliance, where the user can log in, register, and manage their credentials (name, email and password). If the user registers, they receive an email, and registration is not complete until the user responds to the email.
Once the user has successfully registered and logged in, the CAS appliance redirects the user to our application together with a key. Our application uses the key to fetch the credentials of the user via an HTTP request in the background to the CAS server.

Using this mechanism, multiple applications can use single sign-on via the CAS service. The server providing the authentication is called a service provider. Applications that seek to authenticate their visitors are called service consumers.

CAS is similar to OpenID, with one essential difference: in the case of OpenID, the visitor chooses the service provider; in the case of CAS, our application makes this choice, which makes CAS more secure.

Running a web2py CAS provider is as easy as copying the scaffolding app. In fact, any web2py app that exposes the action

```
## login provider
def user():
    return dict(form=auth())
```

is a CAS 2.0 provider, and its services can be accessed via the URLs

```
http://.../proveedor/default/user/cas/login
http://.../proveedor/default/user/cas/validate
http://.../proveedor/default/user/cas/logout
```

(we assume the app is called "proveedor").

You can access this service from any other web application (the consumer) by simply delegating authentication to the provider:

```
## in the consumer app
auth = Auth(db,cas_provider = 'http://127.0.0.1:8000/proveedor/default/user/cas')
```

When you visit the login URL of the consumer app, it redirects you to the provider app, which performs the login and then redirects back to the consumer app. All the processes of registration, logout, password change and password recovery must be completed on the provider app. A record about the logged-in user is created on the consumer side, so that extra fields and a local profile can be added. Thanks to CAS 2.0, all fields readable on the provider that have a corresponding field in the `auth_user` table of the consumer are copied automatically.

`Auth(..., cas_provider='...')` works with third-party providers and supports CAS 1.0 and 2.0. The version is detected automatically. By default, it builds the provider URLs from a base (the `cas_provider` URL above) by appending

```
/login
/validate
/logout
```

These can be changed on both the consumer and the provider:

```
## in the consumer and provider apps (they must match)
auth.settings.cas_actions['login']='login'
auth.settings.cas_actions['validate']='validate'
auth.settings.cas_actions['logout']='logout'
```

If you wish to connect to a web2py CAS provider from a different domain, you must enable it by appending the domain to the list of allowed CAS domains:

```
## in the provider app
auth.settings.cas_domains.append('example.com')
```

# Using web2py to authenticate other applications¶

This is possible, but it depends on the web server. Here we assume two applications running under the same web server: Apache with `mod_wsgi`. One of the applications is web2py with an app that provides access control via Auth. The other application may be a CGI script, a PHP program, or anything else. We want to instruct the web server to ask the first application for permission when a client request accesses the second one.
First of all, we need to modify the web2py application and add the following controller:

```
def verificar_acceso():
    return 'true' if auth.is_logged_in() else 'false'
```

which returns `true` if the user is logged in and `false` otherwise. Now run web2py as a background process:

```
nohup python web2py.py -a '' -p 8002
```

Port 8002 is a must; there is no need to enable admin, so no password is specified.

Then we need to edit the Apache configuration (for example "/etc/apache2/sites-available/default"), so that when the non-web2py application receives a request, Apache calls the `verificar_acceso` function defined above instead and, if it returns `true`, Apache continues serving the request, otherwise it forbids access.

Because web2py and the other application run on the same domain, if the user is logged into the web2py app, the session cookie is sent to the Apache web server even when the request is for the other application, which allows verification of the credentials.

To make this possible we need a script, "web2py/scripts/access.wsgi", that knows how to handle this. The script ships with web2py. All we have to do is tell Apache to call this script, provide the URL of the application needing access control, and give the location of the script:

```
<VirtualHost *:80>
  WSGIDaemonProcess web2py user=www-data group=www-data
  WSGIProcessGroup web2py
  WSGIScriptAlias / /home/www-data/web2py/wsgihandler.py

  AliasMatch ^ruta/a/miapp/que/requiere/autenticacion/miarchivo /ruta/al/archivo
  <Directory /ruta/a/>
    WSGIAccessScript /ruta/a/web2py/scripts/access.wsgi
  </Directory>
</VirtualHost>
```

Here "^ruta/a/miapp/que/requiere/autenticacion/miarchivo" is the regular expression that should match the incoming request, and "/ruta/a" is the absolute location of the web2py folder.

The "access.wsgi" script contains the line:

```
URL_CHECK_ACCESS = 'http://127.0.0.1:8002/%(app)s/default/check_access'
```

which points to the requested application, but you can edit it so that it points to a specific application and action (such as the `verificar_acceso` action above), possibly running on a port other than 8002. You can also change the checking action and make its logic more complex. The action can retrieve the URL that was originally requested from the environment variable `request.env.request_uri`, and implement more complex rules:

```
def verificar_acceso():
    if not auth.is_logged_in():
        return 'false'
    elif not usuario_tiene_acceso(request.env.request_uri):
        return 'false'
    else:
        return 'true'
```

# Services¶

# Chapter 10: Services

* Services
* Rendering a dictionary
* Remote procedure calls (RPC)
* Low-level API and other recipes
* Restful web services
* Services and Authentication

## Services¶

The W3C defines a web service as "a software system designed to support interoperable machine-to-machine interaction over a network". This is a broad definition, and it encompasses a large number of protocols designed for machine-to-machine communication, not machine-to-human communication, such as XML, JSON, RSS, etc.

In this chapter we discuss how to expose services using web2py. If you are interested in examples of consuming third-party services (Twitter, Dropbox, etc.), you should refer to Chapters 9 and 14.
web2py provides, out of the box, support for several protocols, including XML, JSON, RSS, XMLRPC, AMFRPC, and SOAP. web2py can also be extended to support additional protocols. Each of these protocols is supported in multiple ways, and we make a distinction between:

* Rendering the output of a function in a given format (for example XML, JSON, RSS, CSV)
* Remote procedure calls (for example XMLRPC, JSONRPC, AMFRPC)

### Rendering a dictionary¶

# HTML, XML, and JSON¶

Consider the following action:

```
def conteo():
    session.conteo = (session.conteo or 0) + 1
    return dict(conteo=session.conteo, ahora=request.now)
```

This action returns a counter that is increased by one when a visitor reloads the page, and the timestamp of the current page request.

Normally this page would be requested via:

```
http://127.0.0.1:8000/app/default/conteo
```

and rendered in HTML. Without writing a single line of code, we can ask web2py to render the page using different protocols by simply adding an extension to the URL:

```
http://127.0.0.1:8000/app/default/conteo.html
http://127.0.0.1:8000/app/default/conteo.xml
http://127.0.0.1:8000/app/default/conteo.json
```

The dictionary returned by the action will be rendered in HTML, XML, and JSON, respectively.

Here is the XML output:

```
<document>
   <conteo>3</conteo>
   <ahora>2009-08-01 13:00:00</ahora>
</document>
```

Here is the JSON output:

```
{ 'conteo':3, 'ahora':'2009-08-01 13:00:00' }
```

Note that date, time and datetime objects are rendered as strings in ISO format. This is not part of the JSON standard, but rather a web2py convention.

# Generic views¶

When, for example, the ".xml" extension is requested, web2py looks for a template file called "default/conteo.xml", and if it does not find it, looks for a template called "generic.xml". The files "generic.html", "generic.xml", "generic.json" are provided with the current scaffolding application. Other extensions can easily be defined by the user.

For security reasons, the generic views are only accessible from localhost. In order to enable access from remote clients, you may need to set the response.generic_patterns variable.

Assuming you are using a copy of the scaffolding app, edit the following line in models/db.py:

* restrict access only to localhost

```
response.generic_patterns = ['*'] if request.is_local else []
```

* allow all generic views

```
response.generic_patterns = ['*']
```

* allow only .json

```
response.generic_patterns = ['*.json']
```

generic_patterns is a glob pattern, which means you can use any pattern that matches the actions of your app, or pass a list of patterns:

```
response.generic_patterns = ['*.json','*.xml']
```

To use this feature in an older web2py app, you may need to copy the "generic.*" files from a newer scaffolding app (from versions later than 1.60).
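As a quick check, here is a minimal client-side sketch (an assumption, not part of web2py) that fetches the `conteo` action above rendered as JSON, presuming a local server and that generic_patterns allows .json:

```
import urllib2
import json

# request the action with the .json extension and decode the reply
raw = urllib2.urlopen('http://127.0.0.1:8000/app/default/conteo.json').read()
data = json.loads(raw)
print data['conteo'], data['ahora']
```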
Here is the code of "generic.html":

```
{{=BEAUTIFY(response._vars)}}

<button onclick="document.location='{{=URL("admin","default","design",
args=request.application)}}'">admin</button>
<button onclick="jQuery('#request').slideToggle()">request</button>
<div class="hidden" id="request"><h2>request</h2>{{=BEAUTIFY(request)}}</div>
<button onclick="jQuery('#session').slideToggle()">session</button>
<div class="hidden" id="session"><h2>session</h2>{{=BEAUTIFY(session)}}</div>
<button onclick="jQuery('#response').slideToggle()">response</button>
<div class="hidden" id="response"><h2>response</h2>{{=BEAUTIFY(response)}}</div>
<script>jQuery('.hidden').hide();</script>
```

Here is the code of "generic.xml":

```
{{
try:
   from gluon.serializers import xml
   response.write(xml(response._vars),escape=False)
   response.headers['Content-Type']='text/xml'
except:
   raise HTTP(405,'no xml')
}}
```

And here is the code of "generic.json":

```
{{
try:
   from gluon.serializers import json
   response.write(json(response._vars),escape=False)
   response.headers['Content-Type']='text/json'
except:
   raise HTTP(405,'no json')
}}
```

Any dictionary can be rendered in HTML, XML and JSON as long as it only contains Python primitive types (int, float, string, list, tuple, dictionary). `response._vars` contains the dictionary returned by the action.

If the dictionary contains user-defined or web2py-specific objects, they must be rendered by a custom view.

# Rendering `Rows` objects¶

If you want to render a set of records returned by a select in XML, JSON or another format, first transform the Rows object into a list of dictionaries using the `as_list` method.

Consider, for example, the following model:

```
db.define_table('persona', Field('nombre'))
```

The following action can be rendered in HTML, but not in XML or JSON:

```
def todos():
    gente = db().select(db.persona.ALL)
    return dict(gente=gente)
```

while the following action can be rendered in XML and JSON:

```
def todos():
    gente = db().select(db.persona.ALL).as_list()
    return dict(gente=gente)
```

# Custom formats¶

If, for example, you want to render an action as a Python pickle:

```
http://127.0.0.1:8000/app/default/conteo.pickle
```

you just need to create a new view file "default/conteo.pickle" that contains:

```
{{
import cPickle
response.headers['Content-Type'] = 'application/python.pickle'
response.write(cPickle.dumps(response._vars),escape=False)
}}
```

If you want to be able to serve any action as a pickled file, you just need to save the file above with the name "generic.pickle".

Not all objects are pickleable, and not every pickled object can be un-pickled. It is safe to stick to primitive Python objects and combinations of them. Objects that do not contain references to streams or database connections are usually pickleable, but they can only be un-pickled in an environment where the classes of all pickled objects have already been defined.

# RSS¶

web2py includes a "generic.rss" view that can render the dictionary returned by the action as an RSS feed. Because RSS feeds have a fixed structure (title, link, description, items, etc.),
the dictionary returned by the action must follow that structure for this to work:

```
{'title'      : '',
 'link'       : '',
 'description': '',
 'created_on' : '',
 'entries'    : []}
```

and each entry in entries must follow a similar structure:

```
{'title'      : '',
 'link'       : '',
 'description': '',
 'created_on' : ''}
```

For example, given the following action:

```
def feed():
    return dict(title="my feed",
                link="http://feed.example.com",
                description="my first feed",
                entries=[
                  dict(title="my feed",
                  link="http://feed.example.com",
                  description="my first feed")
                ])
```

you just need to visit the URL:

```
http://127.0.0.1:8000/app/default/feed.rss
```

Alternatively, assuming the following model:

```
db.define_table('entrada_rss',
    Field('title'),
    Field('link'),
    Field('created_on','datetime'),
    Field('description'))
```

the feed action can be written as:

```
def feed():
    return dict(title="my feed",
                link="http://feed.example.com",
                description="my first feed",
                entries=db().select(db.entrada_rss.ALL).as_list())
```

The `as_list()` method of a Rows object converts the records into a list of dictionaries. Extra dictionary items whose keys are not explicitly expected are ignored.

Here is the "generic.rss" view provided by web2py:

```
{{
try:
   from gluon.serializers import rss
   response.write(rss(response._vars), escape=False)
   response.headers['Content-Type']='application/rss+xml'
except:
   raise HTTP(405,'no rss')
}}
```

As one more example of an RSS application, consider an RSS aggregator that collects data from the "slashdot" feed and returns a new web2py RSS feed:

```
def aggregator():
    import gluon.contrib.feedparser as feedparser
    d = feedparser.parse(
        "http://rss.slashdot.org/Slashdot/slashdot/to")
    return dict(title=d.channel.title,
                link = d.channel.link,
                description = d.channel.description,
                created_on = request.now,
                entries = [
                  dict(title = entry.title,
                  link = entry.link,
                  description = entry.description,
                  created_on = request.now) for entry in d.entries])
```

It can be accessed at:

```
http://127.0.0.1:8000/app/default/aggregator.rss
```

# CSV¶

The Comma Separated Values (CSV) format is a protocol for representing tabular data.

Consider the following model:

```
db.define_table('animal',
    Field('especie'),
    Field('genero'),
    Field('familia'))
```

and the following action:

```
def animales():
    animales = db().select(db.animal.ALL)
    return dict(animales=animales)
```

web2py does not provide a "generic.csv" view; you must define a "default/animales.csv" view that serializes the list of animals into CSV. Here is a possible implementation:

```
{{
import cStringIO
stream=cStringIO.StringIO()
animales.export_to_csv_file(stream)
response.headers['Content-Type']='application/vnd.ms-excel'
response.write(stream.getvalue(), escape=False)
}}
```

Note that one could also define a "generic.csv" file, but one would have to specify the name of the object being serialized ("animales" in this example). This is why web2py does not ship a "generic.csv" file.

### Remote procedure calls (RPC)¶

web2py provides a mechanism to turn any function into a web service.
The mechanism described here differs from the mechanism described before because:

* The function may take arguments
* The function may be defined in a model or a module instead of a controller
* You may want to specify in detail which RPC method should be supported
* It enforces a stricter URL naming convention
* It is smarter than the previous methods because it works for a fixed set of protocols. For the same reason, it is not as easily extensible.

To use this feature:

First, you must import and instantiate a service object:

```
from gluon.tools import Service
service = Service()
```

This is already done in the "db.py" model file in the scaffolding application.

Second, you must expose the service handler in the controller:

```
def call():
    session.forget()
    return service()
```

This is already done in the "default.py" controller of the scaffolding application. Remove `session.forget()` if you plan to use session cookies together with the services.

Third, you must decorate those functions you want to expose as services. Here is a list of the currently supported decorators:

```
@service.run
@service.xml
@service.json
@service.rss
@service.csv
@service.xmlrpc
@service.jsonrpc
@service.jsonrpc2
@service.amfrpc3('domain')
@service.soap('FunctionName',returns={'result':type},args={'param1':type,})
```

As an example, consider the following decorated function:

```
@service.run
def concat(a,b):
    return a+b
```

This function can be defined in a model or in the controller where the `call` action is defined. It can now be called remotely in two ways:

```
http://127.0.0.1:8000/app/default/call/run/concat?a=hola&b=mundo
http://127.0.0.1:8000/app/default/call/run/concat/hola/mundo
```

In both cases the http request returns:

`holamundo`

If the `@service.xml` decorator is used, the function can be called via:

```
http://127.0.0.1:8000/app/default/call/xml/concat?a=hola&b=mundo
http://127.0.0.1:8000/app/default/call/xml/concat/hola/mundo
```

and the output is returned as XML:

```
<document>
   <result>holamundo</result>
</document>
```

It can serialize the output of the function even if it is a DAL Rows object. In that case, in fact, `as_list()` is called automatically.

If the `@service.json` decorator is used, the function can be called via:

```
http://127.0.0.1:8000/app/default/call/json/concat?a=hola&b=mundo
http://127.0.0.1:8000/app/default/call/json/concat/hola/mundo
```

and the output is returned as JSON.

If the `@service.csv` decorator is used, the service handler requires, as the return value, an iterable object of iterable objects, such as a list of lists. Here is an example:

```
@service.csv
def tabla1(a, b):
    return [[a, b],[1, 2]]
```

This service can be consumed by visiting one of the following URLs:

```
http://127.0.0.1:8000/app/default/call/csv/tabla1?a=hola&b=mundo
http://127.0.0.1:8000/app/default/call/csv/tabla1/hola/mundo
```

and it returns:

```
hola,mundo
1,2
```

The `@service.rss` decorator expects a return value in the same format as the "generic.rss" view discussed in the previous section.

Multiple decorators are allowed for each function.

Everything discussed in this section so far is simply an alternative to the method described in the previous section. The real power of the service object comes with XMLRPC, JSONRPC and AMFRPC, as discussed below.
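Before moving on: as noted above, multiple decorators are allowed per function, so a single function can be exposed under several formats at once. A minimal sketch (the function name `eco` is an assumption):

```
@service.run
@service.json
@service.xml
def eco(texto):
    # one function, reachable at .../call/run/eco,
    # .../call/json/eco and .../call/xml/eco
    return texto
```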
# XMLRPC¶

Consider the following code, for example, in the "default.py" controller:

```
@service.xmlrpc
def sumar(a,b):
    return a+b

@service.xmlrpc
def div(a,b):
    return a/b
```

Now from a Python shell you can do:

```
>>> from xmlrpclib import ServerProxy
>>> server = ServerProxy(
       'http://127.0.0.1:8000/app/default/call/xmlrpc')
>>> print server.sumar(3,4)
7
>>> print server.sumar('hola','mundo')
'holamundo'
>>> print server.div(12,4)
3
>>> print server.div(1,0)
ZeroDivisionError: integer division or modulo by zero
```

The Python xmlrpclib module provides a client for the XMLRPC protocol. web2py acts as the server for this service. The client connects to the server via ServerProxy and can remotely call the decorated functions on the server. The data (a, b) is passed to the function(s) not via GET/POST variables, but properly encoded using the XMLRPC protocol, and it therefore carries type information (int, string, or other). The same is true for the return value(s). Moreover, any exception raised on the server propagates back to the client.

There are XMLRPC libraries for many programming languages (including C, C++, Java, C#, Ruby and Perl), and they can interoperate with each other. This is one of the best methods to build applications that talk to each other independently of the programming language.

The XMLRPC client can also be implemented inside a web2py action, so that one action can talk to another web2py application (even within the same installation) using XMLRPC. Beware of the session lock in this case: if an action calls, via XMLRPC, a function in the same app, the caller must first release the session lock:

```
session.forget(response)
```

# JSONRPC¶

In this section we are going to use the same example code as for XMLRPC, but we will expose the service using JSONRPC instead:

```
@service.jsonrpc
@service.jsonrpc2
def sumar(a,b):
    return a+b
```

JSONRPC is very similar to XMLRPC, but it uses JSON instead of XML as the data serialization protocol. Of course, we can call the service from any language, but here we will do it in Python. web2py ships with a module called "gluon/contrib/simplejsonrpc.py" created by <NAME>. Here is an example of how to call the service above:

```
>>> from gluon.contrib.simplejsonrpc import ServerProxy
>>> URL = "http://127.0.0.1:8000/app/default/call/jsonrpc"
>>> service = ServerProxy(URL, verbose=True)
>>> print service.sumar(1, 2)
```

Use "http://127.0.0.1:8000/app/default/call/jsonrpc2" for jsonrpc2.

# JSONRPC and Pyjamas¶

Here we describe the use of the JSONRPC protocol with Pyjamas through an example application. Pyjamas is a Python port of the Google Web Toolkit (originally written in Java). Pyjamas allows writing client applications in Python; the code is translated to JavaScript. web2py serves the JavaScript and communicates with it via AJAX requests originating from the client and triggered by user actions.

Here we describe how to make Pyjamas work with web2py. No additional libraries are required other than web2py and Pyjamas.
We are going to build a simple "todo" application with a Pyjamas client (pure JavaScript once compiled) that talks to the server exclusively via JSONRPC.

First, create a new application and call it "todo".

Second, in "models/db.py", enter the following code:

```
db=DAL('sqlite://storage.sqlite')
db.define_table('todo', Field('tarea'))

from gluon.tools import Service
service = Service()
```

(Note: the Service class comes from gluon.tools.)

Third, in "controllers/default.py", enter the following code:

```
def index():
    redirect(URL('todoApp'))

@service.jsonrpc
def obtenerTareas():
    todos = db(db.todo).select()
    return [(todo.tarea, todo.id) for todo in todos]

@service.jsonrpc
def agregarTarea(tareaJson):
    db.todo.insert(tarea=tareaJson)
    return obtenerTareas()

@service.jsonrpc
def borrarTarea(idJson):
    del db.todo[idJson]
    return obtenerTareas()

def todoApp():
    return dict()
```

The purpose of each function should be obvious.

Fourth, in "views/default/todoApp.html", enter the following code:

```
<html>
  <head>
    <meta name="pygwt:module"
          content="{{=URL('static','output/TodoApp')}}" />
    <title>
      simple todo application
    </title>
  </head>
  <body bgcolor="white">
    <h1>
      simple todo application
    </h1>
    <i>
      type a new task to insert it in the database,
      click on an existing task to delete it
    </i>
    <script language="javascript"
            src="{{=URL('static','output/pygwt.js')}}">
    </script>
  </body>
</html>
```

This view just executes the Pyjamas code in "static/output/todoapp": code that we have not yet created.

Fifth, in "static/TodoApp.py" (note the file name is TodoApp, not todoApp!), enter the following code:

```
from pyjamas.ui.RootPanel import RootPanel
from pyjamas.ui.Label import Label
from pyjamas.ui.VerticalPanel import VerticalPanel
from pyjamas.ui.TextBox import TextBox
from pyjamas.ui import KeyboardListener
from pyjamas.ui.ListBox import ListBox
from pyjamas.ui.HTML import HTML
from pyjamas.JSONService import JSONProxy

class TodoApp:
    def onModuleLoad(self):
        self.remote = DataService()
        panel = VerticalPanel()

        self.todoTextBox = TextBox()
        self.todoTextBox.addKeyboardListener(self)

        self.todoList = ListBox()
        self.todoList.setVisibleItemCount(7)
        self.todoList.setWidth("200px")
        self.todoList.addClickListener(self)

        self.Status = Label("")

        panel.add(Label("Add a new task:"))
        panel.add(self.todoTextBox)
        panel.add(Label("Click to remove:"))
        panel.add(self.todoList)
        panel.add(self.Status)
        self.remote.obtenerTareas(self)

        RootPanel().add(panel)

    def onKeyPress(self, sender, keyCode, modifiers):
        """
        This function handles the onKeyPress event, and will add the
        item in the text box to the list when the user presses the
        Enter key. Later, this method will also handle the
        autocomplete functionality.
""" if keyCode == KeyboardListener.KEY_ENTER and \ sender == self.todoTextBox: id = self.remote.agregarTarea(sender.getText(), self) sender.setText("") if id<0: RootPanel().add(HTML("Error del servidor o respuesta inválida")) def onClick(self, sender): id = self.remote.borrarTarea( sender.getValue(sender.getSelectedIndex()),self) if id<0: RootPanel().add( HTML("Error del servidor o respuesta inválida")) def onRemoteResponse(self, response, request_info): self.todoList.clear() for task in response: self.todoList.addItem(task[0]) self.todoList.setValue(self.todoList.getItemCount()-1, task[1]) def onRemoteError(self, code, message, request_info): self.Status.setText("Error del servidor o respuesta inválida: " \ + "ERROR " + code + " - " + message) class DataService(JSONProxy): def __init__(self): JSONProxy.__init__(self, "../../default/call/jsonrpc", ["obtenerTareas", "agregarTareas","borrarTareas"]) if __name__ == '__main__': app = TodoApp() app.onModuleLoad() ``` Sexto, corremos Pyjamas antes de servir las aplicaciones: ``` cd /ruta/a/todo/static/ python /python/pyjamas-0.5p1/bin/pyjsbuild TodoApp.py ``` Esto traducirá el código Python en JavaScript para que se pueda ejecutar en el navegador. Para acceder a la aplicación, visita el URL: ``` http://127.0.0.1:8000/todo/default/todoApp ``` Esta subsección fue creada por <NAME> con la ayuda de <NAME> (creadores de Pyjamas), actualizado por <NAME>. Fue probado con Pyjamas 0.5p1. El ejemplo está inspirado en esta página de Django en ref. [blogspot1]. # AMFRPC¶ AMFRPC es el protocolo para Llamadas a Procedimientos Remotos usado por los clientes de Flash para comunicación con un servidor. web2py soporta AMFRPC, pero requiere que corras web2py desde el código fuente y que previamente instales la librería PyAMF. Esto se puede instalar desde una consola de Linux o en una consola de Windows escribiendo: `easy_install pyamf` (puedes consultar la documentación de PyAMF para más detalles). En esta subsección asumimos que ya estás familiarizado con la programación en ActionScript. Crearemos un simple servicio que toma dos valores numéricos, los suma y devuelve esa suma. Llamaremos a nuestra nueva aplicación "prueba_pyamp" y al servicio `addNumbers` . Primero, usando Adobe Flash (con cualquier versión a partir de MX 2004), crea una aplicación cliente de Flash comenzando por un archivo de tipo FLA. En el primer cuadro o frame del archivo, agrega estas líneas: ``` import mx.remoting.Service; import mx.rpc.RelayResponder; import mx.rpc.FaultEvent; import mx.rpc.ResultEvent; import mx.remoting.PendingCall; var val1 = 23; var val2 = 86; service = new Service( "http://127.0.0.1:8000/prueba_pyamf/default/call/amfrpc3", null, "midominio", null, null); var pc:PendingCall = service.sumarNumeros(val1, val2); pc.responder = new RelayResponder(this, "onResult", "onFault"); function onResult(re:ResultEvent):Void { trace("Resultado : " + re.result); txt_result.text = re.result; } function onFault(fault:FaultEvent):Void { trace("Falla: " + fault.fault.faultstring); } stop(); ``` Este código le permite al cliente de Flash conectar con un servicio que corresponde a una función llamada "sumarNumeros" en el archivo "/prueba_pyamf/default/gateway". Además debes importar clases para remoting de ActionScript version 2 MX para poder habilitar el uso de llamadas a procedimientos remotos en Flash. Agrega la ruta a estas clases a las opciones de configuración en el IDE de Adobe Flash, o simplemente ubica la carpeta "mx" próxima al nuevo archivo creado. 
Notice the arguments of the Service constructor. The first argument is the URL of the service we want to call. The third argument is the domain of the service; we have chosen to call it "midominio" (matching the decorator argument below).

Second, create a dynamic text field called "txt_result" (the name used in the code above) and place it on the stage.

Third, you need to set up a web2py gateway that can communicate with the Flash client defined above.

Now create a new web2py app called `prueba_pyamf` that will host the new service and the AMF gateway for the Flash client. Edit the "default.py" controller and make sure it contains:

```
@service.amfrpc3('midominio')
def sumarNumeros(val1, val2):
    return val1 + val2
```

Fourth, compile and export/publish the client SWF file as `prueba_pyamf.swf`, and place the "prueba_pyamf.amf", "prueba_pyamf.html", "AC_RunActiveContent.js", and "crossdomain.xml" files in the "static" folder of the new application hosting the gateway, "prueba_pyamf".

You can test the client by visiting:

```
http://127.0.0.1:8000/prueba_pyamf/static/prueba_pyamf.html
```

The gateway is called in the background when the client connects to sumarNumeros.

If you are using AMF0 instead of AMF3, you can also use the decorator:

`@service.amfrpc`

instead of:

```
@service.amfrpc3('midominio')
```

In this case you must also change the service URL to:

```
http://127.0.0.1:8000/prueba_pyamf/default/call/amfrpc
```

# SOAP¶

web2py ships with a SOAP client and server created by <NAME>. It can be used very much like XML-RPC.

For example, with the following code in the "default.py" controller:

```
@service.soap('MiSuma', returns={'result': int}, args={'a': int, 'b': int})
def sumar(a, b):
    return a + b
```

you can do, from a Python shell:

```
>>> from gluon.contrib.pysimplesoap.client import SoapClient
>>> cliente = SoapClient(wsdl="http://localhost:8000/app/default/call/soap?WSDL")
>>> print cliente.MiSuma(a=1, b=2)
{'result': 3}
```

To get the proper encoding when returning text strings, specify the string as u'valid utf8 text'.

You can get the WSDL for the service at:

```
http://127.0.0.1:8000/app/default/call/soap?WSDL
```

And you can get documentation for any of the exposed methods at:

```
http://127.0.0.1:8000/app/default/call/soap
```

### Low-level API and other recipes¶

# simplejson¶

web2py includes gluon.contrib.simplejson, developed by <NAME>. This module provides the most standard JSON encoder/decoder.

simplejson implements two functions:

* `gluon.contrib.simplejson.dumps(a)`: encodes a Python object `a` into JSON.
* `gluon.contrib.simplejson.loads(b)`: decodes JavaScript data `b` into a Python object.

Serializable object types include primitive types, lists, and dictionaries. Compound objects can be serialized, with the exception of user-defined classes.
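For a quick sense of this API, here is a minimal round-trip sketch (assuming it is run in an environment where `gluon.contrib.simplejson` is importable, such as a web2py shell; outside web2py, the standard library `json` module behaves the same):

```
import gluon.contrib.simplejson as sj

registro = {'nombre': 'Timoteo', 'mascotas': ['Snoopy']}
cadena = sj.dumps(registro)   # a JSON string, e.g. '{"nombre": "Timoteo", "mascotas": ["Snoopy"]}'
copia = sj.loads(cadena)      # parsed back into a Python dictionary
print copia['mascotas'][0]    # prints: Snoopy
```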
Here is a sample action (for example in the controller "default.py") that serializes a Python list containing the days of the week, using this low-level API:

```
def diasdelasemana():
    nombres = ['Domingo', 'Lunes', 'Martes', 'Miércoles',
               'Jueves', 'Viernes', 'Sábado']
    import gluon.contrib.simplejson
    return gluon.contrib.simplejson.dumps(nombres)
```

Below is a sample HTML page that sends an Ajax request to the above action, receives the JSON message, and stores the list in a corresponding JavaScript variable:

```
{{extend 'layout.html'}}
<script>
$.getJSON('/application/default/diasdelasemana',
          function(data){ alert(data); });
</script>
```

The code uses the jQuery function `$.getJSON`, which performs the Ajax call and, on response, stores the days of the week in a local JavaScript variable `data` and passes the variable to the callback function. In the example, the callback function simply alerts the visitor that the data has been received.

# PyRTF¶

Another frequent need of websites is that of generating Word-compatible documents. The simplest way to do so is via the Rich Text Format (RTF). This format was invented by Microsoft and has since become a standard.

web2py includes gluon.contrib.pyrtf, developed by <NAME> and revised by <NAME>. This module lets you generate RTF documents programmatically, including colored and formatted text and images.

In the following example we instantiate the two basic RTF classes, Document and Section, append the latter to the former, and insert some dummy text into the latter:

```
def creartf():
    import gluon.contrib.pyrtf as q
    doc = q.Document()
    sec = q.Section()
    doc.Sections.append(sec)
    sec.append('Título de la sección')
    sec.append('web2py es genial. ' * 100)
    response.headers['Content-Type'] = 'text/rtf'
    return q.dumps(doc)
```

At the end, the Document object is serialized by `q.dumps(doc)`. Notice that before returning the RTF document it is necessary to specify the content type in the header, or else the browser does not know how to handle the file.

Depending on the configuration, the browser may ask you whether to save this file or open it in a text editor.

# ReportLab and PDF¶

web2py can also generate PDF documents, with an additional library called "ReportLab" [ReportLab].

If you are running web2py from source, it is sufficient to have ReportLab installed. If you are running the Windows binary distribution, you need to unzip ReportLab in the "web2py/" folder. If you are running the Mac binary distribution, you need to unzip ReportLab in the folder:

```
web2py.app/Contents/Resources/
```

From now on we assume that ReportLab is installed and that web2py can find it. We will create a simple action called "dame_un_pdf" that generates a PDF document.
```
from reportlab.platypus import *
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.rl_config import defaultPageSize
from reportlab.lib.units import inch, mm
from reportlab.lib.enums import TA_LEFT, TA_RIGHT, TA_CENTER, TA_JUSTIFY
from reportlab.lib import colors
from uuid import uuid4
from cgi import escape
import os

def dame_un_pdf():
    titulo = "Este es el título del documento"
    encabezado = "Primer párrafo"
    texto = 'bla ' * 10000

    styles = getSampleStyleSheet()
    archivotmp = os.path.join(request.folder, 'private', str(uuid4()))
    doc = SimpleDocTemplate(archivotmp)
    story = []
    story.append(Paragraph(escape(titulo), styles["Title"]))
    story.append(Paragraph(escape(encabezado), styles["Heading2"]))
    story.append(Paragraph(escape(texto), styles["Normal"]))
    story.append(Spacer(1, 2*inch))
    doc.build(story)
    data = open(archivotmp, "rb").read()
    os.unlink(archivotmp)
    response.headers['Content-Type'] = 'application/pdf'
    return data
```

Notice how we generate the PDF into a single temporary file, `archivotmp`, read the generated PDF back from the file, and then delete the file.

For more information about the ReportLab API, refer to the official ReportLab documentation. We strongly recommend using the Platypus API, i.e. `Paragraph`, `Spacer`, etc.

### RESTful web services¶

REST stands for Representational State Transfer; it is a type of web service architecture, not, like SOAP, a protocol. In fact, there is no REST standard.

Roughly speaking, REST says that a service can be thought of as a collection of resources. Each resource should be identified by a URL. There are four method actions on a resource, called POST (create), GET (read), PUT (update) and DELETE (delete), from which comes the acronym CRUD (create-read-update-delete). A client communicates with a resource by making an HTTP request to the URL that identifies the resource, using the HTTP methods PUT/POST/GET/DELETE to pass instructions to it. The URL may have an extension, for example `json`, specifying which protocol should be used to encode the data.

So, for example, a POST request to

```
http://127.0.0.1/miapp/default/api/persona
```

means that you want to create a new `persona`. In this case `persona` should correspond to a record in the `persona` table, but it could also be another kind of resource (a file, for instance).

Similarly, a GET request to

```
http://127.0.0.1/miapp/default/api/personas.json
```

asks for a list of personas (records of type `persona`) in json format.

A GET request to `http://127.0.0.1/miapp/default/api/persona/1.json` asks for the information associated with `persona/1` (the record with `id==1`) in json format.

In the case of web2py, every request can be split into three parts:

* A first part that identifies the location of the service, i.e. the action that exposes the service:

```
http://127.0.0.1/miapp/default/api/
```

* The name of the resource (`persona`, `personas`, `persona/1`, etc.)
* The communication protocol specified by the extension.

Note that we can always use the router to strip any unwanted prefix from the URL, for example to simplify `http://127.0.0.1/miapp/default/api/persona/1.json` by replacing it with:

```
http://127.0.0.1/api/persona/1.json
```

In any case, this is a matter of taste and it has already been discussed in detail in chapter 4 (a short router sketch follows below). In our example we used an action called `api`, but this is not a requirement.
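As mentioned above, the prefix stripping could be done with web2py's pattern-based rewrite system. A minimal `routes.py` sketch (the `miapp` paths come from the example; this is an assumption, not the only possible setup):

```
# routes.py (placed next to the web2py executable); a sketch only
routes_in = (
    # map the short public URL onto the app's api action
    ('/api/$anything', '/miapp/default/api/$anything'),
)
routes_out = (
    # rewrite generated URLs back into the short form
    ('/miapp/default/api/$anything', '/api/$anything'),
)
```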
We could in fact name the action that exposes the RESTful service any way we like, and we could even implement more than one. For the sake of notation, we assume our RESTful action is called `api`.

We also assume we have defined the following two tables:

```
db.define_table('persona', Field('nombre'), Field('informacion'))
db.define_table('mascota',
                Field('propietario', db.persona),
                Field('nombre'), Field('informacion'))
```

(the field name `informacion` is the one queried and returned in the examples below) and that they are the resources we want to expose.

The first thing we do is create the RESTful action:

```
def api():
    return locals()
```

Now we modify it so that the extension is stripped from the request args (so that `request.args` can be used to identify the resource) and so that it can handle the different methods separately:

```
@request.restful()
def api():
    def GET(*args, **vars):
        return dict()
    def POST(*args, **vars):
        return dict()
    def PUT(*args, **vars):
        return dict()
    def DELETE(*args, **vars):
        return dict()
    return locals()
```

Now, when we make an HTTP GET request to

```
http://127.0.0.1:8000/miapp/default/api/persona/1.json
```

it calls and returns the result of `GET('persona', '1')`, where GET is the function defined inside the action. Notice that:

* there is no need to define all four methods, only those we want to expose;
* the method function can take named arguments;
* the extension is stored in `request.extension` and the content type is set automatically.

The `@request.restful()` decorator makes sure that the extension in the path info is stored in `request.extension`, that the request method is mapped to the corresponding function in the action (POST, GET, PUT, DELETE), and that `request.args` and `request.vars` are passed to the selected function.

Now let's build a service that POSTs and GETs individual records:

```
@request.restful()
def api():
    response.view = 'generic.json'
    def GET(nombredetabla, id):
        if not nombredetabla == 'persona':
            raise HTTP(400)
        return dict(persona=db.persona(id))
    def POST(nombredetabla, **campos):
        if not nombredetabla == 'persona':
            raise HTTP(400)
        return db.persona.validate_and_insert(**campos)
    return locals()
```

Notice that:

* GET and POST are handled by different functions;
* each function receives the appropriate arguments (positional arguments parsed from `request.args`, named arguments from `request.vars`);
* the functions check that the input is properly specified and otherwise raise an exception;
* GET performs a select and returns the record, `db.persona(id)`. The output is automatically converted to JSON because the generic view is used;
* POST performs a `validate_and_insert(..)` and returns the `id` of the new record or, alternatively, the validation errors. The POST variables, `**campos`, are the post variables.

`parse_as_rest` (experimental)¶

The logic and code described so far are sufficient for creating any kind of RESTful web service, but web2py can make it even easier. In fact, web2py provides a syntax for describing which database tables we want to expose and how to map resources onto URLs and vice versa. This is done by means of URL patterns.

A pattern is a string that maps the positional arguments of the request URL to a database query. There are four types of atomic patterns:

* Constant strings, for example "amigo".
* Constant strings that map to a table.
Por ejemplo "amigo[persona]" asociará "amigos" en el URL a la tabla "persona". * Variables como parámetro de filtros. Por ejemplo "{persona.id}" creará un filtro ``` db.persona.nombre=={persona.id} ``` . * Nombres de campos, expresados en la forma ":campo" Los patrones atómicos se pueden combinar con patrones complejos de URL usando "/" como en ``` "/amigo[persona]/{persona.id}/:field" ``` que transforma un url del tipo ``` http://..../amigo/1/nombre ``` en una consulta relativa a una persona.id que devuelve el nombre de la persona. Aquí "amigo[persona]" busca "amigo" y filtra la tabla "persona". "{persona.id}" busca "1" y crea un filtro con "persona.id==1". ":campo" busca "nombre" y devuelve: ``` db(db.persona.id==1).select().first().nombre ``` Se pueden combinar múltiples patrones de URL en una lista para que una sola acción RESTful pueda servir diferentes tipos de solicitudes. La DAL tiene un método ``` parse_as_rest(patrón, args, vars) ``` que según una lista de patrones, los `request.args` y `request.vars` busca el patrón y devuelve una respuesta (solo GET). Ahora veamos un ejemplo más complicado: ``` @request.restful() def api(): response.view = 'generic.' + request.extension def GET(*args, **vars): patrones = [ "/amigos[persona]", "/amigos/{persona.nombre.startswith}", "/amigos/{persona.nombre}/:field", "/amigos/{persona.nombre}/pets[mascota.propietario]", "/amigos/{persona.nombre}/pet[mascota.propietario]/{mascota.name}", "/amigos/{persona.nombre}/pet[mascota.propietario]/{mascota.name}/:field" ] parser = db.parse_as_rest(patrones, args,vars) if parser.status == 200: return dict(contenido=parser.response) else: raise HTTP(parser.status, parser.error) def POST(nombre_tabla,**vars): if table_name == 'persona': return db.persona.validate_and_insert(**vars) elif table_name == 'mascota': return db.pet.validate_and_insert(**vars) else: raise HTTP(400) return locals() ``` Que entiende las siguientes URL que corresponden a los patrones listados: * GET de todas las personas ``` http://.../api/amigos ``` * GET de una persona cuyo nombre comience con "t" ``` http://.../api/amigo/t ``` * GET del valor del campo información de la primer persona con nombre igual a "Timoteo" ``` http://.../api/amigo/Timoteo/información ``` * GET de una lista de las mascotas de la persona (amigo) de arriba ``` http://.../api/amigo/Timoteo/mascotas ``` * GET de la mascota con nombre "Snoopy" de la persona con nombre "Timoteo" ``` http://.../api/amigo/Timoteo/mascota/Snoopy ``` * GET del valor del campo información para la mascota ``` http://.../api/amigo/Timoteo/mascota/Snoopy/información ``` La acción además expone dos url POST: * POST de un nuevo amigo * POST de una nueva mascota Si tienes instalada la utilidad "curl" puedes intentar: ``` $ curl -d "nombre=Timoteo" http://127.0.0.1:8000/miapp/default/api/amigo.json {"errors": {}, "id": 1} $ curl http://127.0.0.1:8000/miapp/default/api/amigos.json {"contenido": [{"informacion": null, "nombre": "Timoteo", "id": 1}]} $ curl -d "nombre=Snoopy&propietario=1" http://127.0.0.1:8000/miapp/default/api/mascota.json {"errors": {}, "id": 1} $ curl http://127.0.0.1:8000/miapp/default/api/amigo/Timoteo/mascota/Snoopy.json {"contenido": [{"info": null, "propietario": 1, "name": "Snoopy", "id": 1}]} ``` Es posible declarar consultas más complejas como cuando un valor en el URL se usa para generar una consulta que no implica una igualdad. 
For example

```
patrones = ['amigos/{persona.nombre.contains}']
```

maps the URL `http://..../amigos/i` to the query

```
db.persona.nombre.contains('i')
```

Similarly:

```
patrones = ['amigos/{persona.nombre.ge}/{persona.nombre.gt.not}']
```

maps

```
http://..../amigos/aa/uu
```

to the query

```
(db.persona.nombre >= 'aa') & (~(db.persona.nombre > 'uu'))
```

The valid attributes for a field in a pattern are: `contains`, `startswith`, `le`, `ge`, `lt`, `gt`, `eq` (equality, used by default), `ne` (inequality). There are also attributes specific to date and datetime fields: `day`, `month`, `year`, `hour`, `minute`, `second`.

Notice that this pattern syntax is not intended to be a complete solution: not every possible query can be expressed with a pattern, but a great many can. The syntax may be extended in future releases.

Often we want to expose some RESTful URLs but also restrict the possible queries. This can be done by passing an extra argument `queries` to the `parse_as_rest` method. `queries` is a dictionary of `(nombredetabla, consulta)` pairs, where each `consulta` is a DAL query that restricts access to the table `nombredetabla`.

We can also order the results using the GET variable order:

```
http://..../api/amigos?order=nombre|~informacion
```

which sorts alphabetically by `nombre` and then in reverse order by `informacion`. We can also limit the number of records by specifying the GET variables `limit` and `offset`:

```
http://..../api/amigos?offset=10&limit=1000
```

which will return up to 1000 amigos (personas), skipping the first 10. `limit` defaults to 1000 and `offset` defaults to 0.

Now let's consider an extreme case. We want to build all possible patterns for all tables (except the `auth_` tables). We want to be able to search by any text field, any integer field, any double field (within a range), and any date. We also want to be able to POST into any table.

In the general case this requires a huge number of patterns; web2py makes it simple:

```
@request.restful()
def api():
    response.view = 'generic.' + request.extension
    def GET(*args, **vars):
        patrones = 'auto'
        parser = db.parse_as_rest(patrones, args, vars)
        if parser.status == 200:
            return dict(contenido=parser.response)
        else:
            raise HTTP(parser.status, parser.error)
    def POST(nombre_tabla, **vars):
        return db[nombre_tabla].validate_and_insert(**vars)
    return locals()
```

Setting `patrones='auto'` makes web2py generate all possible patterns for every non-Auth table.
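Under the `persona`/`mascota` model above, a hypothetical session against this automatic API might look like this (the URLs follow the auto-generated patterns listed below; the exact output depends on your data):

```
$ curl http://127.0.0.1:8000/miapp/default/api/persona.json
$ curl http://127.0.0.1:8000/miapp/default/api/persona/id/1.json
$ curl "http://127.0.0.1:8000/miapp/default/api/persona.json?order=nombre&limit=10&offset=0"
```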
There is even a pattern to query the patterns themselves:

```
http://..../api/patterns.json
```

which, given our `persona` and `mascota` tables, produces:

```
{"contenido": [
   "/persona[persona]",
   "/persona/id/{persona.id}",
   "/persona/id/{persona.id}/:field",
   "/persona/id/{persona.id}/mascota[mascota.propietario]",
   "/persona/id/{persona.id}/mascota[mascota.propietario]/id/{mascota.id}",
   "/persona/id/{persona.id}/mascota[mascota.propietario]/id/{mascota.id}/:field",
   "/persona/id/{persona.id}/mascota[mascota.propietario]/propietario/{mascota.propietario}",
   "/persona/id/{persona.id}/mascota[mascota.propietario]/propietario/{mascota.propietario}/:field",
   "/persona/nombre/mascota[mascota.propietario]",
   "/persona/nombre/mascota[mascota.propietario]/id/{mascota.id}",
   "/persona/nombre/mascota[mascota.propietario]/id/{mascota.id}/:field",
   "/persona/nombre/mascota[mascota.propietario]/propietario/{mascota.propietario}",
   "/persona/nombre/mascota[mascota.propietario]/propietario/{mascota.propietario}/:field",
   "/persona/informacion/mascota[mascota.propietario]",
   "/persona/informacion/mascota[mascota.propietario]/id/{mascota.id}",
   "/persona/informacion/mascota[mascota.propietario]/id/{mascota.id}/:field",
   "/persona/informacion/mascota[mascota.propietario]/propietario/{mascota.propietario}",
   "/persona/informacion/mascota[mascota.propietario]/propietario/{mascota.propietario}/:field",
   "/mascota[mascota]",
   "/mascota/id/{mascota.id}",
   "/mascota/id/{mascota.id}/:field",
   "/mascota/propietario/{mascota.propietario}",
   "/mascota/propietario/{mascota.propietario}/:field"
]}
```

You can also specify automatic patterns for a subset of tables only:

```
patrones = [':auto[persona]', ':auto[mascota]']
```

`smart_query` (experimental)¶

There are times when you need more flexibility and you want to be able to pass an arbitrary query to a RESTful service, such as

```
http://.../api.json?search=persona.nombre starts with 'T' and persona.nombre contains 'm'
```

This is possible using:

```
@request.restful()
def api():
    response.view = 'generic.' + request.extension
    def GET(search):
        try:
            registros = db.smart_query([db.persona, db.mascota], search).select()
            return dict(result=registros)
        except RuntimeError:
            raise HTTP(400, "Cadena de búsqueda inválida")
    def POST(nombre_tabla, **vars):
        return db[nombre_tabla].validate_and_insert(**vars)
    return locals()
```

The method `db.smart_query` takes two arguments:

* a list of fields or tables that should be allowed in the query
* a string containing the query expressed in natural language

and it returns a `db.set` object with the records that match. Notice that the search string is parsed, not evaluated or executed, and therefore it poses no security risk.

Access to the API can be restricted as usual with decorators. So, for example:

```
@auth.requires_login()
@request.restful()
def api():
    def GET(s):
        return 'acceso concedido, has dicho %s' % s
    return locals()
```

can be consumed with:

```
$ curl --user name:password http://127.0.0.1:8000/miapp/default/api/hola
acceso concedido, has dicho hola
```

### Services and Authentication¶

In the previous chapter we discussed the use of the following decorators:

```
@auth.requires_login()
@auth.requires_membership(...)
@auth.requires_permission(...)
```

For normal actions (not those decorated as services), these decorators can be used even if the output is rendered in a format other than HTML.
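For instance, a minimal sketch of that last point (the action name and message are illustrative, and it assumes the generic views are enabled for the request):

```
@auth.requires_login()
def datos():
    # a plain (non-service) action: requesting /default/datos.json
    # renders this dictionary with the generic.json view, while the
    # login requirement above is still enforced
    return dict(mensaje='solo para usuarios autenticados')
```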
For functions defined as services and decorated with the `@service...` decorators, however, the `@auth...` decorators should not be used. The two kinds of decorators cannot be combined. If authentication is to be performed, it is the `call` action that must be decorated instead, e.g. by applying `@auth.requires_login()` to the `call` action that exposes the services.

Notice that it is also possible to instantiate multiple service objects, register the same set of actions with them, and expose some of them with authentication and some without:

```
servicio_publico = Service()
servicio_privado = Service()

@servicio_publico.jsonrpc
def f():
    return 'público'

@servicio_privado.jsonrpc
def g():
    return 'privado'

def llamada_publica():
    return servicio_publico()

@auth.requires_login()
def llamada_privada():
    return servicio_privado()
```

This example assumes that the client passes credentials in the HTTP header (a valid session cookie or basic authentication, as described in the previous section). The client must support this; not all clients do.

# Chapter 11: jQuery and Ajax¶

## jQuery and Ajax¶

While web2py is mainly for server-side development, the welcome scaffolding app comes with the base jQuery library [jquery], the jQuery calendars (a date picker field and a clock), the "superfish.js" menu, and some additional JavaScript functions based on jQuery.

Nothing in web2py prevents you from using other Ajax libraries such as Prototype, ExtJS or YUI, but we decided to package jQuery because we find it easier to use and more powerful than comparable libraries. We also find that jQuery shares with web2py the drive to combine functionality with conciseness.

### web2py_ajax.html¶

The web2py scaffolding application "welcome" includes a file called

```
views/web2py_ajax.html
```

which looks roughly like this:

```
{{
response.files.insert(0,URL('static','js/jquery.js'))
response.files.insert(1,URL('static','css/calendar.css'))
response.files.insert(2,URL('static','js/calendar.js'))
response.include_meta()
response.include_files()
}}
<script type="text/javascript"><!--
// These variables are used by the ajax initialization in
// web2py.js (loaded below).
var w2p_ajax_confirm_message = "{{=T('Are you sure you want to delete this object?')}}";
var w2p_ajax_date_format = "{{=T('%Y-%m-%d')}}";
var w2p_ajax_datetime_format = "{{=T('%Y-%m-%d %H:%M:%S')}}";
//--></script>
<script src="{{=URL('static','js/web2py.js')}}" type="text/javascript"></script>
```

This file is included in the HTML HEAD of the default "layout.html" and it provides the following services:

* Includes "static/jquery.js".
* Includes "static/calendar.js" and "static/calendar.css", which are used for the pop-up calendar.
* Includes all `response.meta` headers.
* Includes all `response.files` (they must be CSS or JS, as set in the source code).
* Sets the form variables and includes "static/js/web2py.js".

"web2py.js" does the following:

* Defines an `ajax` function (based on jQuery's $.ajax).
* Makes any DIV of class "error", or any html helper with class "flash", slide down.
* Prevents typing invalid integers into INPUT fields of class "integer".
* Prevents typing invalid floats into INPUT fields of class "double".
* Connects INPUT fields of type "date" with a pop-up date picker.
* Connects INPUT fields of type "datetime" with a datetime picker.
* Asocia los campos INPUT de tipo "time" con un date picker para la hora. * Define ``` web2py_ajax_component ``` , una herramienta muy importante que describiremos en el Capítulo 12. * Define `web2py_websocket` , una función que se puede usar con websockets de HTML5 (no tratado en este libro pero puedes leer los ejemplos en el código fuente de "gluon/contrib/websocket__messaging.py").websockets * Define funciones para el cálculo de entropía y validación de campos de contraseña. También incluye las funciones `popup` , `collapse` , y `fade` para compatibilidad hacia atrás. Aquí se muestra un ejemplo de cómo actúan los efectos combinados. Tomemos la app prueba con el siguiente modelo: ``` db = DAL("sqlite://db.db") db.define_table('hijo', Field('nombre'), Field('peso', 'double'), Field('fecha_nacimiento', 'date'), Field('hora_nacimiento', 'time')) db.hijo.nombre.requires=IS_NOT_EMPTY() db.hijo.peso.requires=IS_FLOAT_IN_RANGE(0,100) db.hijo.fecha_nacimiento.requires=IS_DATE() db.hijo.hora_nacimiento.requires=IS_TIME() ``` con este controlador "default.py": ``` def index(): formulario = SQLFORM(db.hijo) if formulario.process().accepted: response.flash = 'registro insertado' return dict(formulario=formulario) ``` ``` {{extend 'layout.html}} {{=formulario}} ``` La acción "index" genera el siguiente formulario: Si se envía un formulario con errores, el servidor devuelve la página con el formulario modificado conteniendo los mensajes de error. Los mensajes de error son DIV de la clase "error", y como se ejecuta el código de web2py.js, los errores se mostrarán con un efecto de desplazamiento hacia abajo o slide-down. El color de los errores está configurado en el código CSS en "layout.html". El código en web2py.js evita que escribas un valor ilegal en el campo de ingreso de texto. Esto tiene lugar antes y como un agregado, no como sustituto, de la validación del lado del servidor. El código de web2py.js muestra una herramienta para seleccionar fechas cuando ingresas a un campo INPUT de tipo"date", y muestra una herramienta para selección de fecha y hora cuando ingresas a un campo INPUT de tipo "datetime". He aquí un ejemplo: El código en web2py.js también muestra la siguiente herramienta para selección de hora cuando intentas editar un campo INPUT de tipo "time": Al enviarse los datos del formulario, la acción del controlador establece el mensaje de respuesta para el mensaje de inserción del registro. El diseño de página por defecto convierte ese mensaje en un DIV con id="flash". El código en web2py.js es responsable de hacer que el DIV se muestre y de que se oculte cuando haces clic en él: Estos y otros efectos se pueden modificar en forma programática en las vista y a través de los ayudantes en los controladores. ### Efectos de jQuery¶ Los efectos básicos descriptos aquí no requieren ningún archivo adicional; todo lo que necesitas ya está incluido en web2py_ajax.html. Los objetos HTML/XHTML se pueden identificar por tipo (por ejemplo un DIV), por clase, o por id. Por ejemplo: ``` <div class="uno" id="a">Hola</div> <div class="dos" id="b">Mundo</div> ``` Pertenecen a la clase "uno" y "dos" respectivamente. Además, tienen valores id "a" y "b" respectivamente. 
In jQuery you can address the first element with the following CSS-like equivalent notations:

```
jQuery('.uno')     // addresses element(s) of class "uno"
jQuery('#a')       // addresses element with id "a"
jQuery('DIV.uno')  // addresses element(s) of type "DIV" with class "uno"
jQuery('DIV #a')   // addresses element(s) of type "DIV" with id "a"
```

and you can address the second element with

```
jQuery('.dos')
jQuery('#b')
jQuery('DIV.dos')
jQuery('DIV #b')
```

or you can select both elements with

`jQuery('DIV')`

Tag elements are associated with events, such as "onclick". jQuery allows linking these events to effects, for example "slideToggle":

```
<div class="uno" id="a" onclick="jQuery('.dos').slideToggle()">Hola</div>
<div class="dos" id="b">Mundo</div>
```

Now, if you click on "Hola", "Mundo" disappears. If you click again, "Mundo" reappears. You can make an element hidden by default by giving it the hidden class:

```
<div class="uno" id="a" onclick="jQuery('.dos').slideToggle()">Hola</div>
<div class="dos hidden" id="b">Mundo</div>
```

You can also link actions to events outside the tag itself. The previous code can be rewritten as follows:

```
<div class="uno" id="a">Hola</div>
<div class="dos" id="b">Mundo</div>
<script>
jQuery('.uno').click(function(){ jQuery('.dos').slideToggle() });
</script>
```

Effects return the object they were called on, so they can be chained.

Here `click` sets the callback function to be executed when the element is clicked. Similarly, you can set up events such as `change`, `keyup`, `keydown`, `mouseover`, etc.

It is often necessary to execute JavaScript code only after the whole document has been loaded. This is usually done with the `onload` attribute of BODY, but jQuery offers an alternative way that does not require editing the layout:

```
<div class="uno" id="a">Hola</div>
<div class="dos" id="b">Mundo</div>
<script>
jQuery(document).ready(function(){
   jQuery('.uno').click(function(){ jQuery('.dos').slideToggle() });
});
</script>
```

The body of the unnamed function is executed only when the document is ready, once it has been fully loaded.
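Since effects return the calling object, as noted above, they can be chained; here is a small illustrative sketch (the class names are the ones from the example above):

```
<script>
jQuery(document).ready(function(){
  jQuery('.uno').click(function(){
    // the two animations are queued on the same selection:
    // "Mundo" first fades out slowly, then slides back into view
    jQuery('.dos').fadeOut('slow').slideDown('slow');
  });
});
</script>
```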
Here is a list of the most commonly used event names:

# Form events¶

* `onchange`: script to be run when the element changes
* `onsubmit`: script to be run when the form is submitted
* `onreset`: script to be run when the form is reset
* `onselect`: script to be run when the element is selected
* `onblur`: script to be run when the element loses focus
* `onfocus`: script to be run when the element gets focus

# Keyboard events¶

* `onkeydown`: script to be run when a key is pressed
* `onkeypress`: script to be run when a key is pressed and released
* `onkeyup`: script to be run when a key is released

# Mouse events¶

* `onclick`: script to be run on a mouse click on an element
* `ondblclick`: script to be run on a mouse double-click on an element
* `onmousedown`: script to be run when the mouse button is pressed
* `onmousemove`: script to be run when the mouse pointer moves
* `onmouseout`: script to be run when the mouse pointer moves out of an element
* `onmouseover`: script to be run when the mouse pointer moves over an element
* `onmouseup`: script to be run when the mouse button is released

Here is a list of frequently used effects defined by jQuery:

# Effects¶

* `jQuery(...).show()`: makes the element visible
* `jQuery(...).hide()`: hides the element
* `jQuery(...).slideToggle(velocidad, callback)`: slides the element up or down vertically
* `jQuery(...).slideUp(speed, callback)`: slides the element up
* `jQuery(...).slideDown(speed, callback)`: slides the element down
* `jQuery(...).fadeIn(speed, callback)`: makes a hidden element fade in
* `jQuery(...).fadeOut(speed, callback)`: makes a visible element fade out

The speed argument is usually "slow", "fast", or omitted (the default). The callback is an optional function that is called when the effect completes.

jQuery effects can also easily be embedded in helpers, for example, in a view:

```
{{=DIV('¡clic aquí!', _onclick="jQuery(this).fadeOut()")}}
```

Other useful methods for manipulating elements:

# Methods and attributes¶

* `jQuery(...).prop(nombre)`: returns the value of the named attribute
* `jQuery(...).prop(nombre, valor)`: sets the named attribute to a new value
* `jQuery(...).html()`: without arguments, returns the html contained in the selected elements; if passed a string as argument, replaces the html content of the elements with that string.
* `jQuery(...).text()`: without arguments, returns the text contained in the element (without tags); if passed a string, replaces the element's content with that text.
* `jQuery(...).css(nombre, valor)`: with one argument, returns the element's style definition for that attribute; with two arguments, sets the named style attribute to the value of the second argument.
* `jQuery(...).each(función)`: loops over the selected set of elements and executes the function for each iteration; the function receives the item from the list of elements as its argument.
* `jQuery(...).index()`: without arguments, returns the position of the first selected element relative to its siblings
(for example, the position of an LI among other LIs); if passed an element as argument, returns the position of that element relative to the previously selected set of elements.

* `jQuery(...).length`: an attribute of the selection that returns the number of elements found.

jQuery is a very compact and concise Ajax library; therefore web2py does not need an extra abstraction layer on top of jQuery (except for the `ajax` function discussed below). The jQuery APIs are available in their native form whenever needed.

Consult the documentation for more information about these effects and other jQuery interfaces.

jQuery can also be extended using plugins and User Interface Widgets. This topic is not covered here; see [jquery-ui] for details.

# Conditional fields in forms¶

A typical use case of jQuery effects is a form that changes its appearance based on the value of one of its fields. This is easy to do in web2py because the SQLFORM helper generates "CSS friendly" forms. The form contains a table with rows. Each row contains a label, an input field, and an optional third column. The items have id values derived strictly from the name of the table and the names of the fields. By convention, every INPUT field has an id `nombredetabla_nombredecampo` and lives inside a row with id `nombredetabla_nombredecampo__row` (note the double underscore).

As an example, create an insert form that asks for a taxpayer's name and for the name of the taxpayer's spouse, but only if he/she is married.

Use the following model:

```
db = DAL('sqlite://db.db')
db.define_table('contribuyente',
    Field('nombre'),
    Field('casado', 'boolean'),
    Field('nombre_esposa'))
```

the following "default.py" controller:

```
def index():
    formulario = SQLFORM(db.contribuyente)
    if formulario.process().accepted:
        response.flash = 'registro insertado'
    return dict(formulario=formulario)
```

and the following view:

```
{{extend 'layout.html'}}
{{=formulario}}
<script>
jQuery(document).ready(function(){
   jQuery('#contribuyente_nombre_esposa__row').hide();
   jQuery('#contribuyente_casado').change(function(){
       if(jQuery('#contribuyente_casado').prop('checked'))
           jQuery('#contribuyente_nombre_esposa__row').show();
       else
           jQuery('#contribuyente_nombre_esposa__row').hide();});
});
</script>
```

The script in the view hides the row containing the spouse's name. When the taxpayer checks the "casado" checkbox, the spouse-name field reappears.

Here, "contribuyente_casado" is the checkbox associated with the boolean field "casado" of the "contribuyente" table, and "contribuyente_nombre_esposa__row" is the row containing the input field for "nombre_esposa" of the same table.

# Confirmation on delete¶

Another useful application is asking for confirmation when an item is checked for deletion, for example the delete checkbox of an edit form.

Take the example above and add the following controller action:

```
def editar():
    registro = db.contribuyente[request.args(0)]
    formulario = SQLFORM(db.contribuyente, registro, deletable=True)
    if formulario.process().accepted:
        response.flash = 'registro actualizado'
    return dict(formulario=formulario)
```

The `deletable=True` argument in the SQLFORM constructor tells web2py to display a delete checkbox in the edit form. It is `False` by default.
El script "web2py.js" incluye el siguiente código: ``` jQuery(document).ready(function(){ jQuery('input.delete').prop('onclick', 'if(this.checked) if(!confirm( "{{=T('Sure you want to delete this object?')}}")) this.checked=false;'); }); ``` Por convención, este checkbox tiene asignada la clase "delete". El código de jQuery de arriba asocia el evento onclick de este checkbox con un diálogo de confirmación (JavaScript estándar) y desmarca el checkbox si el contribuyente no lo confirma: ### La función `ajax` ¶ En web2py.js, web2py define una función llamada `ajax` que se basa en, pero se debe confundir con la función de jQuery `$.ajax` . Esta última es mucho más potente que la primera, y para su método de uso, puedes consultar ref.[jquery] y ref.[jquery-b]. Sin embargo, la primera función es suficiente para muchas tareas de cierta complejidad, y más fácil de usar. La función `ajax` es una función de JavaScript y tiene la siguiente sintaxis: ``` ajax(url, [nombre1, nombre2, ...], destino) ``` Esta función llama en forma asíncrona al url (primer argumento), pasa los valores de los campos de ingreso de datos con nombres que coincidan con los nombres de la lista (segundo argumento) y luego almacena la respuesta dentro de la etiqueta HTML con id igual a destino o target (el tercer argumento) Aquí se muestra un ejemplo de un controlador `default` : def echo(): return request.vars.nombre ``` Cuando ingresas algo en el campo INPUT, tan pronto como sueltes la tecla (onkeyup), se llama a la función `ajax` , y el valor del campo `name="nombre"` se pasa a la acción "echo", que envía el texto de regreso a la vista. La función `ajax` recibe la respuesta y muestra la respuesta de echo en el DIV "destino". # Uso de eval en lugar de destino¶ El tercer argumento de la función `ajax` puede ser una cadena ":eval". Esto significa que la cadena devuelta por el servidor no se incrustará en el documento sino que en su lugar se evaluará. Aquí se puede ver un ejemplo de un controlador `default` : def echo(): return "jQuery('#destino').html(%s);" % repr(request.vars.nombre) ``` Esto permite devolver respuestas más complejas que pueden actualizar múltiples elementos del documento. # Autocompleción¶ web2py contiene un widget de autocompleción incorporado que se detalla en el capítulo sobre formularios. Aquí vamos a crear uno más sencillo desde cero. Otra aplicación de la función `ajax` descripta arriba es la autocompleción. En este caso queremos crear un campo de ingreso de datos que reciba un nombre de mes y que, cuando el visitante escriba un nombre incompleto, realice la autocompleción a través de una solicitud Ajax. Como respuesta, debe aparecer una lista desplegable de autocompleción debajo del campo de ingreso de datos. 
This can be achieved with the following `default` controller:

```
def seleccion_mes():
    if not request.vars.mes:
        return ''
    meses = ['Enero', 'Febrero', 'Marzo', 'Abril', 'Mayo', 'Junio',
             'Julio', 'Agosto', 'Septiembre', 'Octubre', 'Noviembre',
             'Diciembre']
    comienzo_mes = request.vars.mes.capitalize()
    seleccionados = [m for m in meses if m.startswith(comienzo_mes)]
    return DIV(*[DIV(k,
                     _onclick="jQuery('#mes').val('%s')" % k,
                     _onmouseover="this.style.backgroundColor='yellow'",
                     _onmouseout="this.style.backgroundColor='white'"
                     ) for k in seleccionados])
```

and the corresponding view:

```
{{extend 'layout.html'}}
<style>
#sugerencias { position: relative; }
.sugerencias { background: white; border: solid 1px #55A6C8; }
.sugerencias DIV { padding: 2px 4px 2px 4px; }
</style>
<form>
 <input type="text" id="mes" name="mes" style="width: 250px" /><br />
 <div style="position: absolute;" id="sugerencias" class="sugerencias"></div>
</form>
<script>
jQuery("#mes").keyup(function(){
    ajax('seleccion_mes', ['mes'], 'sugerencias')});
</script>
```

The jQuery script in the view triggers the Ajax request each time the user types something into the "mes" input field. The value of the field is submitted with the Ajax request to the "seleccion_mes" action. This action finds the list of month names that start with the submitted text (seleccionados), builds a list of DIVs (each containing a suggested month name), and returns a string with the serialized DIVs. The view displays the response HTML in the "sugerencias" DIV.

The "seleccion_mes" action generates both the suggestions and the JavaScript code embedded in the DIVs that is executed when the visitor clicks on each suggestion. For example, when the visitor types "M", the callback action returns:

```
<div>
  <div onclick="jQuery('#mes').val('Marzo')"
       onmouseout="this.style.backgroundColor='white'"
       onmouseover="this.style.backgroundColor='yellow'">Marzo</div>
  <div onclick="jQuery('#mes').val('Mayo')"
       onmouseout="this.style.backgroundColor='white'"
       onmouseover="this.style.backgroundColor='yellow'">Mayo</div>
</div>
```

The net effect is a drop-down list of suggestions below the input field.

If the month names are stored in a database table, for example:

```
db.define_table('mes', Field('nombre'))
```

then you simply replace the "seleccion_mes" action with:

```
def seleccion_mes():
    if not request.vars.mes:
        return ''
    patron = request.vars.mes.capitalize() + '%'
    seleccionados = [registro.nombre for registro in
                     db(db.mes.nombre.like(patron)).select()]
    return ''.join([DIV(k,
                        _onclick="jQuery('#mes').val('%s')" % k,
                        _onmouseover="this.style.backgroundColor='yellow'",
                        _onmouseout="this.style.backgroundColor='white'"
                        ).xml() for k in seleccionados])
```

jQuery offers an optional Auto-complete plugin with additional functionality, but it is not covered here.

# Ajax form submission¶

Consider as an example a page that lets the visitor submit messages using Ajax, without refreshing the whole page. Via the LOAD helper, web2py provides a better mechanism for this task than the one described in this example; it is covered in chapter 12. Here we want to show you how to do it using jQuery alone.

The page contains a form "miformulario" and a target DIV "destino". When the form is submitted, the server may accept it (performing a database insert) or reject it (because it failed validation). The corresponding notification is returned with the Ajax response and displayed in the "destino" DIV.
```
db = DAL('sqlite://db.db')
db.define_table('publicacion', Field('tu_mensaje', 'text'))
db.publicacion.tu_mensaje.requires = IS_NOT_EMPTY()
```

Note that each post has a single field, "tu_mensaje", which must not be empty.

Edit the `default.py` controller and add two actions:

```
def index():
    return dict()

def nueva_publicacion():
    formulario = SQLFORM(db.publicacion)
    if formulario.accepts(request, formname=None):
        return DIV("Mensaje registrado")
    elif formulario.errors:
        return TABLE(*[TR(k, v) for k, v in formulario.errors.items()])
```

The first action does nothing but return a view. The second action is the Ajax callback. It receives the form variables in `request.vars`, processes them, and returns `DIV("Mensaje registrado")` if the validation succeeds, or a `TABLE` of error messages if the form does not validate.

Now edit the "default/index.html" view:

```
{{extend 'layout.html'}}
<div id="destino"></div>
<form id="miformulario">
  <input name="tu_mensaje" id="tu_mensaje" />
  <input type="submit" />
</form>
<script>
jQuery('#miformulario').submit(function() {
  ajax('{{=URL('nueva_publicacion')}}',
       ['tu_mensaje'], 'destino');
  return false;
});
</script>
```

Notice how in this example the form is created manually using HTML, but it is processed by the SQLFORM in a different action than the one that displays the form. The SQLFORM object is never serialized to HTML. `SQLFORM.accepts` in this case does not take a session and sets `formname=None`, because we chose not to set the form name and the form key in the custom HTML form.

The script at the bottom of the view connects the "miformulario" submit button to an inline function that submits the INPUT field with `id="tu_mensaje"` using the `ajax` function, and displays the answer inside the DIV with `id="destino"`.

# Voting and rating¶

Another Ajax application is voting for, or rating, items on a page. Here we consider an application that lets visitors vote on posted images. The application consists of a single page that displays the images sorted according to their vote count. We will allow visitors to vote more than once, although it is easy to change this behavior if visitors are authenticated, by keeping track of the individual votes in the database and associating them with the voter's IP address.

Here is the model:

```
db = DAL('sqlite://images.db')
db.define_table('item',
    Field('imagen', 'upload'),
    Field('votos', 'integer', default=0))
```

And this is the `default` controller:

```
def listar_item():
    lista_item = db().select(db.item.ALL, orderby=db.item.votos)
    return dict(lista_item=lista_item)

def download():
    # serves the images stored in the "uploads" folder
    return response.download(request, db)

def votar():
    item = db.item[request.vars.id]
    nuevos_votos = item.votos + 1
    item.update_record(votos=nuevos_votos)
    return str(nuevos_votos)
```

The download action is necessary to let the item-list view download the images stored in the "uploads" folder. The votar action is used for the Ajax callback.
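One caveat worth noting before moving on to the view (this is an aside, not part of the original example): under concurrent votes, reading the counter and writing it back can lose updates. A minimal variation expresses the increment inside the database using a DAL field expression:

```
def votar():
    # increment atomically in SQL instead of read-modify-write
    db(db.item.id == request.vars.id).update(votos=db.item.votos + 1)
    return str(db.item[request.vars.id].votos)
```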
Esta es la vista "default/listar_item.html": <form><input type="hidden" id="id" name="id" value="" /></form> {{for item in lista_item:}} <p> <img src="{{=URL('download', args=item.imagen)}}" width="200px" /> <br /> Votes=<span id="item{{=item.id}}">{{=item.votos}}</span> [<span onclick="jQuery('#id').val('{{=item.id}}'); ajax('votar', ['id'], 'item{{=item.id}}');">agregar voto</span>] </p> {{pass}} ``` Cuando el usuario hace clic en "[agregar voto]", el código JavaScript almacena el item.id en el INPUT oculto y envía ese valor al servidor a través de una solicitud Ajax. El servidor aumenta el conteo de los votos para el registro correspondiente y devuelve el nuevo conteo como cadena. Entonces, ese valor se inserta en SPAN de destino `item{{=item.id}}` . Los callback de Ajax se pueden usar para realizar operaciones en segundo plano, pero como alternativa, recomendamos el uso de cron o un proceso en segundo plano (tratados en el capítulo 4), ya que el servidor web impone tiempos máximos de ejecución a los hilos. Si la operación tomase demasiado tiempo, el servidor web la cancelaría. Consulta los parámetros de tu servidor web para la configuración de tiempos máximos para procesos. # Componentes y Agregados (plugin)¶ # Chapter 12: Componentes y agregados * Componentes y Agregados (plugin) ## Componentes y Agregados (plugin)¶ Los componentes y los agregados o plugin son características relativamente novedosas en web2py, y existen diferencias entre los desarrolladores sobre qué son o qué deberían ser. La confusión deriva mayormente de los distintos usos de esos términos en otros proyectos de software y del hecho de que los desarrolladores todavía se encuentran en la tarea de definir sus especificaciones. Sin embargo, el soporte de plugin es una característica importante y debemos establecer ciertas definiciones. Estas definiciones no tienen la intención de cerrar la discusión. Sólo deben mantener cierta coherencia con los patrones de programación que vamos a detallar en este capítulo. Necesitamos resolver dos problemas: * Cómo construir aplicaciones modulares que minimicen la carga del servidor y maximicen la reutilización del código? * Cómo podemos distribuir piezas de código siguiendo de alguna forma el estilo plugin-and-play? Componentes es la solución para el primer problema; plugin es la solución del segundo. ### Componentes¶ Un componente es una parte funcionalmente autónoma de una página web. Un componente puede estar compuesto de módulos, controladores y vistas, pero no hay requisitos estrictos salvo que, cuando se incrustan en una página web, deben localizarse por medio de una etiqueta html (por ejemplo un DIV, un SPAN o un IFRAME) y debe realizar sus tareas en forma independiente del resto de la página. Tenemos especial interés en aquellos componentes que se carguen en la página y que se comuniquen con el controlador a través de Ajax. Un ejemplo de componente es un "componente para comentarios" que se incluye en un DIV y muestra los comentarios de usuarios y un formulario para publicar un comentario. Cuando el formulario se envía, se transmite al servidor por medio de Ajax, la lista se actualiza, y el comentario se almacena del lado del servidor en la base de datos. El contenido del DIV se refresca sin la actualización del resto de la página. La función LOAD de web2py hace fácil la tarea sin conocimiento específico de JavaScript/Ajax o programación. Nuestra meta es ser capaces de desarrollar aplicaciones web por ensamblado de componentes en los diseños de página. 
Consider a simple web2py app, "prueba", that extends the default scaffolding app with a custom model in the file "models/db_comentario.py":

```
db.define_table('comentario',
    Field('cuerpo', 'text', label='Tu comentario'),
    Field('publicado_en', 'datetime', default=request.now),
    Field('publicado_por', db.auth_user, default=auth.user_id))
db.comentario.publicado_en.writable = db.comentario.publicado_en.readable = False
db.comentario.publicado_por.writable = db.comentario.publicado_por.readable = False
```

one action in "controllers/comentarios.py":

```
@auth.requires_login()
def publicar():
    return dict(formulario=crud.create(db.comentario),
                comentarios=db(db.comentario).select())
```

and the corresponding view "views/comentarios/publicar.html":

```
{{extend 'layout.html'}}
{{for comentario in comentarios:}}
<div class="comentario">
  El {{=comentario.publicado_en}} {{=comentario.publicado_por.first_name}}
  dice <span class="comentario_cuerpo">{{=comentario.cuerpo}}</span>
</div>
{{pass}}
{{=formulario}}
```

You can access it as usual at:

```
http://127.0.0.1:8000/prueba/comentarios/publicar
```

So far there is nothing special in this action, but we can turn it into a component by defining a new view with the extension ".load" that does not extend the layout.

So we create the view "views/comentarios/publicar.load":

```
{{#extend 'layout.html' <- note, this is commented out!}}
{{for comentario in comentarios:}}
<div class="comentario">
  El {{=comentario.publicado_en}} {{=comentario.publicado_por.first_name}}
  dice <span class="comentario_cuerpo">{{=comentario.cuerpo}}</span>
</div>
{{pass}}
{{=formulario}}
```

We can access it at the URL:

```
http://127.0.0.1:8000/prueba/comentarios/publicar.load
```

This is a component that we can embed into any other page by simply doing:

```
{{=LOAD('comentarios','publicar.load',ajax=True)}}
```

For example, in "controllers/default.py" we can edit the `index` action (e.g. `def index(): return dict()`) and add the component in the corresponding view:

```
{{extend 'layout.html'}}
<p>{{='bla '*100}}</p>
{{=LOAD('comentarios','publicar.load',ajax=True)}}
```

Visiting the page

```
http://127.0.0.1:8000/prueba/default/index
```

will show the normal content and the comments component.

The `{{=LOAD(...)}}` component is rendered as follows:

```
<script type="text/javascript"><!--
web2py_component("/prueba/comentarios/publicar.load","c282718984176")
//--></script><div id="c282718984176">loading...</div>
```

(the actual generated code depends on the options passed to the LOAD function).

The `web2py_component(url, id)` function is defined in "web2py_ajax.html" and it performs all the magic: it calls the `url` via Ajax and embeds the response into the DIV with the corresponding `id`; it traps every form submission inside the DIV and submits those forms via Ajax. The Ajax target is always the DIV itself.

The full signature of the LOAD helper is the following:

```
LOAD(c=None, f='index', args=[], vars={},
     extension=None, target=None,
     ajax=False, ajax_trap=False,
     url=None, user_signature=False,
     content='loading...', **attr):
```

Here:

* the first two arguments, `c` and `f`, are respectively the controller and the function that we want to call;
* `args` and `vars` are the arguments and variables that we want to pass to the function; the former is a list, the latter a dictionary;
* `extension` is an optional extension.
Note that the extension can also be passed as part of the function name, as in `f='index.load'`.

* `target` is the `id` of the target DIV (where the component will be embedded); if it is not specified, a random `id` is generated.
* `ajax` should be set to `True` if the DIV is to be filled via Ajax, and to `False` if the DIV is to be filled before the current page is returned (thus avoiding the Ajax call).
* `ajax_trap=True` means that any form submitted inside the DIV must be captured and submitted via Ajax, with the response rendered inside the DIV. `ajax_trap=False` indicates that forms must be submitted normally, thus reloading the entire page. `ajax_trap` is ignored and assumed to be `True` if `ajax=True`.
* `url`, if specified, overrides the values of `c`, `f`, `args`, `vars` and `extension`, and loads the component located at `url`. It is used to load as components pages served by other applications (which may or may not be web2py applications).
* `user_signature` defaults to False but, if you are logged in, it should be set to True. This makes sure the Ajax callback is digitally signed. This functionality is documented in chapter 4.
* `content` is the content to be displayed while the Ajax call is being performed. It can be a helper, as in `content=IMG(...)`.
* Additional `**attr` attributes can be passed to the containing `DIV`.

If no `.load` view is specified, there is a `generic.load` that renders the dictionary returned by the action without a layout. This works best when the dictionary contains a single item.

If you LOAD a component with the `.load` extension and the corresponding controller redirects to another action (for example a login form), the `.load` extension propagates, and the new url (the one to redirect to) is also loaded with a `.load` extension.

If you call a function via Ajax and you want the action to force a redirect of the containing page, you can do so with:

```
redirect(url, type='auto')
```

Since Ajax POST requests do not support multipart forms, i.e. file uploads, upload fields will not work with the LOAD component. You could be fooled into thinking they would work, because upload fields do function normally if the POST is done from the component's own `.load` view. Instead, uploads must be done with ajax-compatible third-party widgets and web2py's manual commands for storing uploaded files.

# Client-Server component communication¶

When the action of a component is called via Ajax, web2py passes two HTTP headers with the request:

```
web2py-component-location
web2py-component-element
```

which the action can access via the variables:

```
request.env.http_web2py_component_location
request.env.http_web2py_component_element
```

The latter is also accessible via:

`request.cid`

The former contains the URL of the page that called the component action. The latter contains the `id` of the DIV that will contain the response.

The component action can also store data in two special HTTP response headers that will be interpreted by the full page upon response.
The component action can also store data in two special HTTP response headers that will be interpreted by the full page upon response. They are:

```
web2py-component-flash
web2py-component-command
```

and they can be set with:

```
response.headers['web2py-component-flash']='....'
response.headers['web2py-component-command']='...'
```

or (if the action is called by a component) automatically via:

```
response.flash='...'
response.js='...'
```

The former contains text that you want flashed upon response. The latter contains JavaScript code that you want executed upon response. It cannot contain newlines.

As an example, let's define a contact form component in "controllers/contacto.py" that allows the user to ask a question. The component will email the question to the system administrator, flash a "thank you" message, and remove the component from the page:

```
def preguntar():
    formulario = SQLFORM.factory(
        Field('tu_correo', requires=IS_EMAIL()),
        Field('pregunta', requires=IS_NOT_EMPTY()))
    if formulario.process().accepted:
        if mail.send(to='<EMAIL>',
                     subject='de %s' % formulario.vars.tu_correo,
                     message=formulario.vars.pregunta):
            response.flash = 'Gracias'
            response.js = "jQuery('#%s').hide()" % request.cid
        else:
            formulario.errors.tu_correo = "No se pudo enviar el mail"
    return dict(formulario=formulario)
```

The first four lines define and process the form. The `mail` object used for sending is defined in the default scaffolding application. The last four lines implement all the component-specific logic by reading data from the HTTP request headers and setting the HTTP response headers.

Now you can embed this contact form in any page via

```
{{=LOAD('contacto','preguntar.load',ajax=True)}}
```

Note that we did not define a `.load` view for our `preguntar` component. We do not need one because it returns a single object (formulario), so "generic.load" will handle it just fine. Remember that generic views are a development tool. In production you should copy "views/generic.load" into "views/contacto/preguntar.load".

We can restrict access to the callback by requiring `user_signature`:

```
{{=LOAD('contacto', 'preguntar.load', ajax=True, user_signature=True)}}
```

which adds a digital signature to the URL. The digital signature must then be validated using the following decorator on the callback function:

```
@auth.requires_signature()
def preguntar():
    ...
```

# Trapping Ajax links

Normally a link is not trapped, and by clicking a link inside a component, the entire linked page is loaded. Sometimes you want the linked page to be loaded inside the component itself. This can be achieved using the `A` helper:

```
{{=A('link a página', _href='http://example.com', cid=request.cid)}}
```

If `cid` is specified, the linked page is loaded via Ajax. The `cid` is the `id` of the HTML element into which the loaded page content will be placed. In this case we set it to `request.cid`, i.e. the `id` of the component that generated the link. The linked page can be, and usually is, an internal URL generated with the URL command.
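For example, assuming the comments component defined earlier, the following hypothetical link reloads the component in place instead of navigating away from the page:

```
{{=A('reload comments', _href=URL('comentarios', 'publicar.load'), cid=request.cid)}}
```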
### Plugins

A plugin is any subset of the files of an application. And we really mean *any*:

* A plugin is not a module, it is not a model, it is not a controller, and it is not a view; yet it may contain modules, models, controllers and/or views.
* A plugin does not need to be functionally autonomous: it may depend on other plugins or on specific user code.
* A plugin is not a plugin system, and therefore has no concept of registration or isolation, although we will give rules to try to achieve some isolation.
* We are talking about a plugin for your app, not a plugin for web2py.

So why call it a plugin? Because it provides a mechanism for packing a subset of an app and unpacking it into another app, i.e. plugging it into a new app. Under this definition, any file in your app can be treated as a plugin.

When an app is distributed, its plugins are packed and distributed with it. In practice, the admin app provides an interface for packing and unpacking plugins individually. Files and folders of your application whose names have the prefix `plugin_`*name* can be packed together into a file called:

`web2py.plugin.`*name*`.w2p`

and distributed together.

The files that make up a plugin are not treated by web2py any differently than other files, except that admin knows from their names that they are meant to be distributed together, and it displays them on a separate page.

In fact, by the definition above, these plugins are more general than the ones recognized as such by admin.

In practice, we will only be concerned with two types of plugins:

* Component Plugins. These are plugins that contain components, as defined in the previous section. A component plugin can contain one or more components. We can think, for example, of a `plugin_comentarios` that contains the comments component proposed above. Another example could be a `plugin_etiquetado` that contains a tagging component and a tag-cloud component sharing some database tables also defined by the plugin.
* Layout Plugins. These are plugins that contain a layout view and the static files required by that layout. When such a plugin is applied, it gives the app a new look and feel.

By the above definitions, the components created in the previous section, for example "controllers/contacto.py", are already plugins. We can move them from one app to another and use the components they define. Yet they are not recognized as such by admin, because there is nothing that labels them as plugins. So there are two problems we need to solve:

* Name the plugin files using a convention, so that admin can recognize them as belonging to the same plugin.
* If the plugin has model files, establish a convention so that the objects it defines do not pollute the namespace and do not conflict with the definitions in the rest of the app.

Let's assume we have a plugin called *name*.
Here are the rules that should be followed:

Rule 1: Plugin models and controllers should be called, respectively

* `models/plugin_`*name*`.py`
* `controllers/plugin_`*name*`.py`

and plugin views, modules, static and private files should be placed, respectively, in:

* `views/plugin_`*name*`/`
* `modules/plugin_`*name*`/`
* `static/plugin_`*name*`/`
* `private/plugin_`*name*`/`

Rule 2: Plugin models can only define objects with names that start with

* `plugin_`*name*
* `Plugin`*Name*
* `_`

Rule 3: Plugin models can only define session variables with names that start with

* `session.plugin_`*name*
* `session.Plugin`*Name*

Rule 4: Plugins should include license and documentation. These should be placed at:

* `static/plugin_`*name*`/license.html`
* `static/plugin_`*name*`/about.html`

Rule 5: A plugin can only rely on the existence of the global objects defined in the scaffolding "db.py", i.e.

* a database connection called `db`
* an `Auth` instance called `auth`
* a `Crud` instance called `crud`
* a `Service` instance called `service`

Some plugins may be more sophisticated and have a configuration parameter in case more than one database connection exists.

Rule 6: If a plugin needs configuration parameters, these should be set via the PluginManager, as described below.

By following the above rules we can make sure that:

* admin recognizes all the `plugin_`*name* files and folders as part of a single entity.
* plugins do not interfere with each other.

The rules above do not solve the problem of plugin versions and dependencies. That is beyond the scope of this section.
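As an illustration, a hypothetical `plugin_etiquetado` that follows these rules would consist of files laid out like this (the actual contents are up to the plugin author):

```
models/plugin_etiquetado.py        # defines e.g. db.plugin_etiquetado_etiqueta
controllers/plugin_etiquetado.py
views/plugin_etiquetado/
static/plugin_etiquetado/about.html
static/plugin_etiquetado/license.html
```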
# Component plugins

Component plugins are plugins that define components. Components usually access the database and define their own models.

Here we turn the previous `comentarios` component into a comments plugin, using the same code we wrote before but following all of the previous rules.

First, we create a model called "models/plugin_comentarios.py":

```
db.define_table('plugin_comentarios_comentario',
    Field('cuerpo', 'text', label='Tu comentario'),
    Field('publicado_en', 'datetime', default=request.now),
    Field('publicado_por', db.auth_user, default=auth.user_id))
db.plugin_comentarios_comentario.publicado_en.writable = False
db.plugin_comentarios_comentario.publicado_en.readable = False
db.plugin_comentarios_comentario.publicado_por.writable = False
db.plugin_comentarios_comentario.publicado_por.readable = False

def plugin_comentarios():
    return LOAD('plugin_comentarios', 'publicar.load', ajax=True)
```

(notice that the last two lines define a function that will make it easier to embed the plugin)

Second, we define a "controllers/plugin_comentarios.py"

```
@auth.requires_login()
def publicar():
    comentario = db.plugin_comentarios_comentario
    return dict(formulario=crud.create(comentario),
                comentarios=db(comentario).select())
```

Third, we create a view called "views/plugin_comentarios/publicar.load":

```
{{for comentario in comentarios:}}
<div class="comentario">
  El {{=comentario.publicado_en}} {{=comentario.publicado_por.first_name}} dice
  <span class="comentario_cuerpo">{{=comentario.cuerpo}}</span>
</div>
{{pass}}
{{=formulario}}
```

Now we can use the admin app to pack the plugin for distribution. Admin will save this plugin as:

```
web2py.plugin.comentarios.w2p
```

We can use the plugin in any view by simply installing the plugin via the edit page in admin and adding the following to our views:

```
{{=plugin_comentarios()}}
```

Of course we can make the plugin more sophisticated by having components that take parameters and configuration options. The more complex the components, the harder it becomes to avoid name collisions. The Plugin Manager described below is designed to avoid this problem.

# Plugin Manager

The `PluginManager` class is defined in `gluon.tools`. Before we explain how it works internally, let's explain how to use it.

We take the `plugin_comentarios` described earlier and improve it. We now want to be able to customize:

```
db.plugin_comentarios_comentario.cuerpo.label
```

without having to edit the plugin code itself. Here is how we can do it:

First, rewrite the plugin file "models/plugin_comentarios.py" in this way:

```
db.define_table('plugin_comentarios_comentario',
    Field('cuerpo', 'text', label='Tu comentario'),
    Field('publicado_en', 'datetime', default=request.now),
    Field('publicado_por', db.auth_user, default=auth.user_id))

def plugin_comentarios():
    from gluon.tools import PluginManager
    plugins = PluginManager('comentarios', cuerpo_label='Tu comentario')

    comentario = db.plugin_comentarios_comentario
    comentario.cuerpo.label = plugins.comentarios.cuerpo_label
    comentario.publicado_en.writable = False
    comentario.publicado_en.readable = False
    comentario.publicado_por.writable = False
    comentario.publicado_por.readable = False
    return LOAD('plugin_comentarios', 'publicar.load', ajax=True)
```

Notice how all the code except the table definition is encapsulated in a single function. Also notice how the function creates an instance of the `PluginManager`.

Now, in any other model in your app, for example "models/db.py", you can configure this plugin as follows:

```
from gluon.tools import PluginManager
plugins = PluginManager()
plugins.comentarios.cuerpo_label = T('Publica un comentario')
```

The `plugins` instance is already created by default in the scaffolding app in "models/db.py".

The PluginManager object is a thread-level singleton Storage object of Storage objects. That means you can instantiate as many of them as you like within the same application, but (whether they have the same name or not) they act as if there were a single PluginManager instance.

In particular, each plugin file can make its own PluginManager object and register itself and its default parameters with it:

```
plugins = PluginManager('nombre', param1='valor', param2='valor')
```

You can override these parameters elsewhere (for example in "models/db.py") with the code:

```
plugins = PluginManager()
plugins.nombre.param1 = 'otro valor'
```

You can configure multiple plugins in one place.

```
plugins = PluginManager()
plugins.nombre.param1 = '...'
plugins.nombre.param2 = '...'
plugins.nombre.param3 = '...'
plugins.nombre.param4 = '...'
plugins.nombre.param5 = '...'
```

When the plugin is defined, the PluginManager takes arguments: the plugin name and optional name-value pairs that act as default parameters.
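Here is a minimal sketch of these semantics (the plugin name `demo` and its parameters are invented for illustration); notice that registration defaults never override values configured earlier:

```
from gluon.tools import PluginManager

# in a model file that runs first (e.g. "models/db.py"):
plugins = PluginManager()            # no arguments: resets the storage
plugins.demo.color = 'blue'          # site-level configuration

# later, in "models/plugin_demo.py", the plugin registers its defaults:
plugins = PluginManager('demo', color='red', size='10px')

assert plugins.demo.color == 'blue'  # the configured value wins
assert plugins.demo.size == '10px'   # the default fills the missing parameter
```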
The configuration must precede the definition of the plugin (i.e. it must be in a model file that comes alphabetically first).

# Layout plugins

Layout plugins are simpler than component plugins because usually they contain no code, only views and static files. Still, you should follow good practice:

First, create a folder called "static/plugin_layout_nombre/" (where nombre is the name of your layout) and place all your static files there.

Second, create a layout file called "views/plugin_layout_nombre/layout.html" that contains your layout and links the images, CSS and JavaScript files in "static/plugin_layout_nombre/".

Third, modify "views/layout.html" so that it simply reads:

```
{{extend 'plugin_layout_nombre/layout.html'}}
{{include}}
```

The benefit of this design is that users of this plugin can install multiple layouts and choose which one to apply simply by editing "views/layout.html". Moreover, "views/layout.html" is not packed by admin together with the plugin, so there is no risk that the plugin will override the user's code in a previously installed layout.

# `plugin_wiki`

DISCLAIMER: plugin_wiki is still under heavy development, and therefore we do not promise backward compatibility to the same level as for web2py core functions.

plugin_wiki is a plugin on steroids. By that we mean that it defines multiple components, and it may change the way you develop your applications.

You can download it from:

```
http://web2py.com/examples/static/web2py.plugin.wiki.w2p
```

The idea behind plugin_wiki is that most applications include semi-static pages. These are pages that do not involve complex custom logic. They contain structured text (for example a help page), images, audio, video, crud forms, or a set of standard components (comments, tags, charts, maps), etc. These pages may be public, require login, or have other access restrictions. They may be linked by a menu, or be reachable only via a wizard form. plugin_wiki provides an easy way to add pages in these categories to any ordinary web2py application.

In particular, plugin_wiki includes:

* A wiki-like interface that allows you to add pages to your app and reference them by a slug. These pages (which we will call wiki pages) are versioned and stored in the database.
* Public and private pages (which require login). If a page requires login, it may also require that the user be a member of a certain group.
* Three levels: 1, 2 and 3. At level 1, pages can only include text, images, audio and video. At level 2, pages can also include widgets (these are components, as defined in the previous section, that can be embedded in wiki pages). At level 3, pages can also include web2py template code.
* The option to edit pages with the markmin syntax or in HTML using a WYSIWYG editor (editing over the preview).
* A collection of widgets, implemented as components. They include their own documentation, and they can be embedded as regular components in any view of the app or, using a simplified syntax, into wiki pages.
* A set of special pages (`meta-code`, `meta-menu`, etc.) that can be used to customize the plugin (for example, to define code the plugin should run, customize the menu, etc.)

The welcome app plus plugin_wiki can be thought of as a development environment in its own right, suitable for building simple applications such as a blog.

From here on, we will assume plugin_wiki has been applied to a copy of the welcome scaffolding app.

The first thing you notice after installing the plugin is that it adds a new menu item called pages. Click on the pages menu item and you will be redirected to the plugin action:

```
http://127.0.0.1:8000/miapp/plugin_wiki/index
```

The index page lists the pages created using the plugin itself and lets you create new ones by choosing a slug. Try creating a `home` page. You will be redirected to

```
http://127.0.0.1:8000/miapp/plugin_wiki/page/home
```

Click on create page to edit its content.

By default the plugin is at level 3, which means you can insert widgets as well as code in pages. By default it uses the markmin syntax to describe the page content.

# `MARKMIN` syntax

Here is a primer of the markmin syntax:

| markmin | html |
| --- | --- |
| `# título` | `<h1>título</h1>` |
| `## subtítulo` | `<h2>subtítulo</h2>` |
| `### subsubtítulo` | `<h3>subsubtítulo</h3>` |
| `**negrita**` | `<strong>negrita</strong>` |
| `''itálica''` | `<i>itálica</i>` |
| `http://...` | `<a href="http://...com">http:...</a>` |
| `http://...png` | `<img src="http://...png" />` |
| `http://...mp3` | `<audio src="http://...mp3"></audio>` |
| `http://...mp4` | `<video src="http://...mp4"></video>` |
| `qr:http://...` | `<a href="http://..."><img src="qr code"/></a>` |
| `embed:http://...` | `<iframe src="http://..."></iframe>` |

Notice that links, image, audio and video files are embedded automatically. For more information about the MARKMIN syntax, see Chapter 5.

If the page does not exist, the app asks you to create one.

The edit page allows you to add attachments to pages (i.e. static files), and you can link those attachments with

```
[[milink nombre attachment:3.png]]
```

or embed them with

```
[[miimagen attachment:3.png center 200px]]
```

The size (`200px`) is optional. `center` is not optional, but it can be replaced by `left` or `right`.

You can embed blockquoted text with

```
-----
Este es un cuadro con una cita
-----
```

as well as tables

```
-----
0 | 0 | X
0 | X | 0
X | 0 | 0
-----
```

and verbatim (unparsed) text

```
`` texto sin conversión ``
```

You can also add a `:class` at the end of `-----` or ` `` `. For blockquoted text and tables it is translated into the class of the tag, for example:

```
-----
Prueba
-----:abc
```

is rendered as

```
<blockquote class="abc">Prueba</blockquote>
```

For verbatim text, the class can be used to embed content of different types.
For example, you can embed code with syntax highlighting by specifying the language with `:code_`*language*:

```
``
def index(): return 'hola mundo'
``:code_python
```

You can embed widgets:

```
``
name: nombre_del_widget
atributo1: valor1
atributo2: valor2
``:widget
```

From the edit page you can also click on the widget builder to insert widgets from a list, interactively (for a list of widgets, see the next section).

You can also embed web2py template code:

```
``
{{for i in range(10):}}<h1>{{=i}}</h1>{{pass}}
``:template
```

# Page permissions

When editing a page you will find the following fields:

* active (defaults to `True`). If a page is not active, visitors cannot access it (even if it is public).
* public (defaults to `True`). If a page is public, it can be accessed by visitors who are not logged in.
* role (defaults to None). If a page has a role, it can only be accessed by visitors who are logged in and are members of the group with the corresponding role.

# Special pages

meta-menu contains the menu. If this page does not exist, web2py uses `response.menu`, defined in "models/menu.py". The content of the meta-menu page overrides the menu. The syntax is the following:

```
Ítem 1 Nombre http://link1.com
   Submenú Ítem 11 Nombre http://link11.com
   Submenú Ítem 12 Nombre http://link12.com
   Submenú Ítem 13 Nombre http://link13.com
Ítem 2 Nombre http://link1.com
   Submenú Ítem 21 Nombre http://link21.com
      Submenú Ítem 211 Nombre http://link211.com
      Submenú Ítem 212 Nombre http://link212.com
   Submenú Ítem 22 Nombre http://link22.com
   Submenú Ítem 23 Nombre http://link23.com
```

where indentation determines the submenu structure. Each item is composed of the text of the menu item followed by a link. A link can be `page:`*slug*. A link equal to `None` does not link any page. Extra spaces are ignored.

Here is another example:

```
Home page:home
Motores de búsqueda None
   Yahoo http://yahoo.com
   Google http://google.com
   Bing http://bing.com
Ayuda page:help
```

This renders as a menu with the corresponding submenus.

You can define tables in `meta-code`. For example, you can create a simple friends table by adding the following to `meta-code`:

```
db.define_table('amigo', Field('nombre', requires=IS_NOT_EMPTY()))
```

and you can create a friend-management interface by embedding the following code in the page of your choice:

```
## Lista de amigos

``
name: jqgrid
table: amigo
``:widget

## Nuevo amigo

``
name: create
table: amigo
``:widget
```

The page has two headers (lines starting with #): "Lista de amigos" and "Nuevo amigo". The page contains two widgets (one under each header): a jqgrid widget that lists friends, and a create widget to add a new friend.

`meta-header`, `meta-footer` and `meta-sidebar` are not used by the default layout in "welcome/views/layout.html". If you want to use them, edit "layout.html" using admin (or the shell) and place the following tags in the appropriate places:

```
{{=plugin_wiki.embed_page('meta-header') or ''}}
{{=plugin_wiki.embed_page('meta-sidebar') or ''}}
{{=plugin_wiki.embed_page('meta-footer') or ''}}
```

In this way, the content of those pages will show up in the header, sidebar and footer of the layout.
# Configuring plugin_wiki

As with any other plugin, in "models/db.py" you can do

```
from gluon.tools import PluginManager
plugins = PluginManager()
plugins.wiki.editor = auth.user.email == mail.settings.sender
plugins.wiki.level = 3
plugins.wiki.mode = 'markmin' or 'html'
plugins.wiki.theme = 'ui-darkness'
```

where

* editor is True if the logged-in user is authorized to edit plugin_wiki pages
* level is the permission: 1 to edit regular pages, 2 to embed widgets in pages, 3 to embed code
* mode determines whether to use a "markmin" editor or a WYSIWYG "html" editor
* theme is the name of the jQuery UI theme. By default only the color-neutral "ui-darkness" theme is included.

You can add themes here:

```
static/plugin_wiki/ui/%(estilo)s/jquery-ui-1.8.1.custom.css
```

# Current widgets

Each widget can be embedded both in plugin_wiki pages and in any other web2py app template. For example, to embed a YouTube video in a plugin_wiki page, you can do

```
``
name: youtube
code: l7AWnfFRc7g
``:widget
```

or, to embed the same widget in a web2py view, you can do:

```
{{=plugin_wiki.widget('youtube', code='l7AWnfFRc7g')}}
```

In both cases the video is embedded in the page.

Widget arguments that do not have a default value are required.

Here is a list of all currently available widgets:

read

```
read(tabla, record_id=None)
```

Reads and displays a record

* `tabla` is the name of a table
* `record_id` is a record number

create

```
create(tabla, message='', next='', readonly_fields='', hidden_fields='', default_fields='')
```

Displays a record create form

* `tabla` is the name of a table
* `message` is the message to be displayed after the record is created
* `next` is where to redirect after the form is accepted, for example "pagina/inicio/[id]"
* `readonly_fields` is a comma-separated list of fields to be shown read-only
* `hidden_fields` is a comma-separated list of hidden fields
* `default_fields` is a comma-separated list of `campo=valor` default field values

update

```
update(tabla, record_id='', message='', next='', readonly_fields='', hidden_fields='', default_fields='')
```

Displays a record update form

* `tabla` is the name of a table
* `record_id` is the record to be updated, or `{{=request.args(-1)}}`
* `message` is the message to be displayed after the record is updated
* `next` is where to redirect after the update, for example "pagina/inicio/[id]"
* `readonly_fields` is a comma-separated list of fields to be shown read-only
* `hidden_fields` is a comma-separated list of hidden fields
* `default_fields` is a comma-separated list of `campo=valor` default field values

select

```
select(tabla, query_field='', query_value='', fields='')
```

Lists all the records in a table

* `tabla` is the name of a table
* `query_field` and `query_value`, if specified, filter the records according to the query `query_field == query_value`
* `fields` is a comma-separated list of fields to be displayed

search

```
search(tabla, fields='')
```

A widget to search records

* `tabla` is the name of a table
* `fields` is a comma-separated list of fields to be displayed

jqgrid

```
jqgrid(tabla, fieldname=None, fieldvalue=None, col_widths='',
       colnames=None, _id=None, fields='', col_width=80,
       width=700, height=300)
```
Embeds a jqGrid plugin

* `tabla` is the name of a table
* `fieldname` and `fieldvalue` form an optional filter: `fieldname==fieldvalue`
* `col_widths` are the widths of each column
* `colnames` is a list of column names to be displayed
* `_id` is the "id" of the TABLE element containing the jqGrid
* `fields` is a list of columns to be displayed
* `col_width` is the default column width
* `height` is the height of the jqGrid
* `width` is the width of the jqGrid

Once you have plugin_wiki installed, you can easily use the jqGrid in any other view too. Usage example (displays tutabla filtered by fk_id==47):

```
{{=plugin_wiki.widget('jqgrid', 'tutabla', 'fk_id', 47, '70,150',
      'Id, comentarios', None, 'id, notes', 80, 300, 200)}}
```

latex

```
latex(expression)
```

Uses the Google charting API to embed LaTeX

pie_chart

```
pie_chart(data='1,2,3', names='a,b,c', width=300, height=150, align='center')
```

Embeds a pie chart

* `data` is a comma-separated list of values
* `names` is a comma-separated list of labels (one per data item)
* `width` is the width of the image
* `height` is the height of the image
* `align` determines the alignment of the image

bar_chart

```
bar_chart(data='1,2,3', names='a,b,c', width=300, height=150, align='center')
```

Uses the Google charting API to embed a bar chart

* `data` is a comma-separated list of values
* `names` is a comma-separated list of labels (one per data item)
* `width` is the width of the image
* `height` is the height of the image
* `align` determines the alignment of the image

slideshow

```
slideshow(tabla, field='image', transition='fade', width=200, height=200)
```

Embeds a slideshow. It takes the images from a table.

* `tabla` is the name of a table
* `field` is the upload field in the table that contains the images
* `transition` determines the type of transition, e.g. fade, etc.
* `width` is the width of the image
* `height` is the height of the image

youtube

```
youtube(code, width=400, height=250)
```

Embeds a YouTube video (by code)

* `code` is the code of the video
* `width` is the width of the image
* `height` is the height of the image

vimeo

```
vimeo(code, width=400, height=250)
```

Embeds a Vimeo video (by code)

* `code` is the code of the video
* `width` is the width of the image
* `height` is the height of the image

mediaplayer

```
mediaplayer(src, width=400, height=250)
```

Embeds a media file (such as a Flash video or an mp3 file)

* `src` is the location of the video
* `width` is the width of the image
* `height` is the height of the image

comments

```
comments(table='None', record_id=None)
```

Embeds comments in a page; the comments can be linked to a table and/or a record

* `table` is the name of a table
* `record_id` is the id of a record

tags

```
tags(table='None', record_id=None)
```

Embeds tags in a page; the tags can be linked to a table and/or a record

* `table` is the name of a table
* `record_id` is the id of a record

tag_cloud

```
tag_cloud()
```

Embeds a tag cloud

map

```
map(key='....', table='auth_user', width=400, height=200)
```

Embeds a Google map. It can take the points on the map from a table.

* `key` is the Google Maps API key (the default key works for 127.0.0.1)
* `table` is the name of the table
* `width` is the width of the map
* `height` is the height of the map

The table must have the columns `latitude`, `longitude` and `map_popup`. When one clicks on a dot, the `map_popup` message appears.

iframe

```
iframe(src, width=400, height=300)
```

Embeds a page in an `<iframe></iframe>`

load_url

```
load_url(src)
```

Loads the content of a URL using the LOAD function

load_action

```
load_action(accion, controller='', ajax=True)
```

Loads the content of URL(request.application, controller, accion) using the LOAD function

# Extending widgets

Widgets can be added to plugin_wiki by creating a new model file called "models/plugin_wiki_"*name*, where *name* is arbitrary and the file contains something like:

```
class PluginWikiWidgets(PluginWikiWidgets):
    @staticmethod
    def mi_nuevo_widget(arg1, arg2='valor', arg3='valor'):
        """
        documentation for the widget
        """
        return "widget body"
```

The first line states that you are extending the list of widgets. Inside the class you can define as many functions as needed. Each static function is a new widget, except for functions whose names start with an underscore. A function can take an arbitrary number of arguments, which may or may not have default values. The docstring of the function must document the function using the markmin syntax.

When widgets are embedded in plugin_wiki pages, arguments are passed to the widget as strings. This means the widget function must be able to accept strings for every argument and, if necessary, convert them into the required representation. You can decide what the string representations should be; just make sure they are documented in the docstring.

The widget can return a string or web2py helpers. In the latter case they will be serialized with `.xml()`. Notice that the widget can access any variable in the global namespace.
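Here is a minimal sketch of such a widget (the name and the arguments are invented) showing the string conversion at work:

```
class PluginWikiWidgets(PluginWikiWidgets):
    @staticmethod
    def repeat(text='hello', times='3'):
        """
        displays ``text`` repeated ``times`` times
        """
        # arguments arrive as strings from wiki pages, so convert explicitly
        return SPAN(' '.join([text] * int(times))).xml()
```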
As an example, let's create a new widget that displays the "contacto/preguntar" form created at the beginning of this chapter. This can be done by creating a file "models/plugin_wiki_contact" containing:

```
class PluginWikiWidgets(PluginWikiWidgets):
    @staticmethod
    def ask(email_label='Your email', question_label='question'):
        """
        This widget displays a contact form so that the
        visitor can ask a question.
        The question is emailed to you, and the widget
        disappears from the page.
        The parameters are:
        - email_label: the label of the visitor's email field
        - question_label: the label of the question field
        """
        formulario = SQLFORM.factory(
            Field('tu_email', requires=IS_EMAIL(), label=email_label),
            Field('pregunta', requires=IS_NOT_EMPTY(), label=question_label))
        if formulario.process().accepted:
            if mail.send(to='<EMAIL>',
                         subject='from %s' % formulario.vars.tu_email,
                         message=formulario.vars.pregunta):
                response.flash = 'Gracias'
                response.js = "jQuery('#%s').hide()" % request.cid
            else:
                formulario.errors.tu_email = "No se pudo enviar el correo"
        return formulario.xml()
```

plugin_wiki widgets are not rendered by a view, unless the widget explicitly calls the `response.render(...)` function.

# Chapter 14: Other recipes

* Other recipes
* Upgrading
* How to distribute your applications as binaries
* WingIDE, Rad2Py and Eclipse
* SQLDesigner
* Publishing a folder
* Functional testing
* Building a minimalist web2py
* Fetching an external URL
* Prettydate
* Geocoding
* Pagination
* httpserver.log and the log file format
* Populating the database with dummy data
* Accepting credit card payments
* Dropbox API
* Twitter API
* Streaming virtual files

## Other recipes

### Upgrading

In the "site" page of the administrative interface there is an "upgrade now" button. In case this is not available or does not work (for example because of a file-locking issue), upgrading web2py manually is very easy.

Simply unzip the latest version of web2py over the old installation.

This upgrades all the libraries as well as the applications admin, examples and welcome. It also creates a new empty file "NEWINSTALL". Upon restart, web2py deletes the empty file and packs the "welcome" application into "welcome.w2p", which is used as the starting point for new applications.

web2py does not upgrade any file in any of your other existing applications.
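On a source installation, the manual upgrade amounts to something like this (the paths are hypothetical; adjust them to where web2py lives on your machine):

```
cd /home/usuario                # hypothetical parent folder of the installation
wget http://web2py.com/examples/static/web2py_src.zip
unzip -o web2py_src.zip         # overwrites the libraries and admin/examples/welcome
```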
### How to distribute your applications as binaries

It is possible to pack your applications with the web2py binary distribution and distribute them together. The license allows this, as long as you make clear in the license of your application that you are bundling web2py, and you add a link to `web2py.com`.

Here we explain the steps for the Windows distribution:

* Build your app as you usually do.
* In the admin app, bytecode-compile your application (one click).
* Still in admin, pack the compiled application (another click).
* Create a folder "miapp".
* Download a web2py binary distribution for Windows.
* Unzip the distribution into the "miapp" folder and start it (two clicks).
* Using admin, upload the previously packed compiled application under the name "init" (one click).
* Create a file "miapp/start.bat" that contains "cd web2py; web2py.exe".
* Create a file "miapp/license" that contains the license of your app, and make sure it states that it is "distributed with an unmodified copy of web2py, obtained from web2py.com".
* Zip the miapp folder into a file "miapp.zip".
* Distribute and/or sell "miapp.zip".

When users unzip "miapp.zip" and click "run", they will see your application instead of the "welcome" app. There is no requirement on the user's side; no pre-installed Python is necessary.

For the Mac binary distribution the process is the same, but there is no need for the "bat" file.

### WingIDE, Rad2Py and Eclipse

You can use web2py with third-party IDEs such as WingIDE, Rad2Py and Eclipse.

In general, the problem with these IDEs (except Rad2Py, which was designed specifically to work with web2py) is that they do not understand the context in which models and controllers are executed, so autocompletion does not work out of the box.

To get autocompletion to work, the trick usually consists of editing your models and controllers and adding the following code:

```
if False:
    from gluon import *
    request = current.request
    response = current.response
    session = current.session
    cache = current.cache
    T = current.T
```

This does not change the execution, because the block never runs, but it makes the IDE parse it and understand where the objects in the global namespace come from (the `gluon` module), which in turn makes autocompletion work.

### SQLDesigner

There is a piece of software called SQLDesigner that lets you build web2py models visually and then generates the corresponding code automatically.

The version of SQLDesigner that works with web2py can be found at:

```
https://github.com/elcio/visualdal
```

### Publishing a folder

Consider the problem of sharing a folder (and its subfolders) on the web. web2py makes this really easy. You just need a controller like the following:

```
from gluon.tools import Expose
def micarpeta():
    return dict(archivos=Expose('/ruta/a/micarpeta'))
```

which you render in a view with `{{=archivos}}`. This creates an interface for viewing files and folders and for navigating the tree structure. Previews of images are generated automatically.

The "/ruta/a/micarpeta" path prefix is hidden from visitors. For example, a file called "/ruta/a/micarpeta/a/b.txt" is shown as "base/a/b.txt". The "base" prefix can be changed using the `basename` argument of the Expose function. You can specify a list of extensions to be listed with the `extensions` argument; files with other extensions are hidden.
For example:

```
def micarpeta():
    return dict(archivos=Expose('/ruta/a/micarpeta',
                                basename='.',
                                extensions=['.py', '.jpg']))
```

Files and folders that contain the word "private" in their path, or whose filename starts with "." or ends with "~", are always hidden.

### Functional testing

web2py ships with a module

```
gluon.contrib.webclient
```

that allows functional testing of local and remote web2py applications. Actually, this module is not web2py-specific, and it can be used to test and interact programmatically with any web application, although it is designed to understand web2py sessions and postbacks.

Here is an example of usage. The program below creates a client, connects to the "index" action to establish a session, registers a new user, then logs out and logs back in using the new credentials:

```
from gluon.contrib.webclient import WebClient

cliente = WebClient('http://127.0.0.1:8000/welcome/default/',
                    postbacks=True)

cliente.get('index')

# register
datos = dict(first_name='Homero',
             last_name='Simpson',
             email='<EMAIL>',
             password='prueba',
             password_two='prueba',
             _formname='register')
cliente.post('user/register', data=datos)

# log out
cliente.get('user/logout')

# log back in
datos = dict(email='<EMAIL>',
             password='prueba',
             _formname='login')
cliente.post('user/login', data=datos)

# check that registration and login worked
cliente.get('index')
assert('Bienvenido Homero' in cliente.text)
```

The WebClient constructor takes a URL prefix as its argument. In the example that is "http://127.0.0.1:8000/welcome/default/". It performs no network IO. The `postbacks` argument defaults to `True` and tells the client how to handle web2py postbacks.

The WebClient object, `cliente`, has two methods: `get` and `post`. The first argument is always a URL postfix. The full URL for a POST or GET request is built simply by concatenating the prefix and the postfix. The purpose of this is just to make the syntax less verbose for intense client-server interactions.

`data` is a POST-specific parameter containing a dictionary of the data to be submitted. web2py forms have a hidden `_formname` field, and its value must be provided when there is more than one form on the page. web2py forms also contain a hidden `_formkey` field, designed to prevent CSRF attacks. This field is handled automatically by the WebClient.

Both `cliente.get` and `cliente.post` accept the following additional arguments:

* `headers`: a dictionary of optional HTTP headers.
* `cookies`: a dictionary of optional HTTP cookies.
* `auth`: a dictionary of parameters to be passed to `urllib2.HTTPBasicAuthHandler().add_password(**auth)` in order to perform basic authentication. For more information about this, consult the Python documentation for the urllib2 module.

The `cliente` object in the example maintains a conversation with the server specified in the constructor, via GET and POST requests. It automatically handles cookies and sends them back to maintain sessions. If it detects that a new session cookie has been issued while another one is already in use, it interprets this as a broken session and raises an exception. If the server returns an HTTP error, it also raises an exception. If instead the server returns an HTTP error and the response contains a web2py ticket, the exception contains the ticket code.
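For instance (a sketch; the failing URL is made up), a test can surface the ticket code of a server-side failure like this:

```
try:
    cliente.get('pagina/que/falla')   # hypothetical action that raises an error
except Exception as e:
    # for web2py apps, the exception message includes the ticket code, if any
    print 'request failed:', e
```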
The client object keeps a log of requests in `cliente.history`, along with the state associated with the last successful request. The state consists of:

* `cliente.status`: the returned status code
* `cliente.text`: the content of the page
* `cliente.headers`: a dictionary of the returned headers
* `cliente.cookies`: a dictionary of the returned cookies
* `cliente.sessions`: a dictionary of sessions in the form `{nombredeapp: id_sesión}`
* `cliente.forms`: a dictionary of web2py forms detected in `cliente.text`. The dictionary has the form `{_formname: _formkey}`.

The WebClient object does not parse or otherwise process the `cliente.text` returned by the server, but that can easily be done with third-party modules such as BeautifulSoup. For example, here is a program that finds all the links in a page downloaded by the client and checks them all:

```
from BeautifulSoup import BeautifulSoup

dom = BeautifulSoup(cliente.text)
for link in dom.findAll('a'):
    nuevo_cliente = WebClient()
    nuevo_cliente.get(link.get('href'))
    print nuevo_cliente.status
```

### Building a minimalist web2py

Sometimes we need to deploy web2py on a server with very little RAM. In this case we want to strip web2py down to its bare minimum. An easy way to do it is the following:

* On a production machine, install the web2py source distribution.
* From the main web2py folder, run

```
python scripts/make_min_web2py.py /path/to/minweb2py
```

* Now copy the application you want to deploy into "/path/to/minweb2py/applications".
* Deploy "/path/to/minweb2py" to the small server.

The "make_min_web2py.py" script builds a minimalist web2py distribution that does not include:

* admin
* examples
* welcome
* scripts
* rarely used contrib modules

The resulting distribution does contain a "welcome" app, consisting of a single file, for deployment testing. Look into the script. At the top it contains a detailed list of what is included and what is ignored. You can easily modify it and tailor it to your needs.

### Fetching an external URL

Python includes the `urllib` library for fetching resources from a URL:

```
import urllib
page = urllib.urlopen('http://www.web2py.com').read()
```

This is often fine, but the `urllib` module does not work on Google App Engine. Google provides a different API for downloading URLs that works only on GAE. In order to make your code portable, web2py includes a `fetch` function that works on GAE as well as on other Python installations:

```
from gluon.tools import fetch
page = fetch('http://www.web2py.com')
```

### Prettydate

It is often preferable to represent a datetime object as "one year ago" instead of "2009-07-25 14:34:56".
web2py provides a function for this:

```
import datetime
d = datetime.datetime(2009,7,25,14,34,56)
from gluon.tools import prettydate
pretty_d = prettydate(d,T)
```

The second argument (T) must be passed if you want to enable internationalization of the output.

### Geocoding

If you need to convert an address (for example: "243 S Wabash Ave, Chicago, IL, USA") into geographic coordinates (latitude and longitude), web2py provides a function to do so:

```
from gluon.tools import geocode
address = '243 S Wabash Ave, Chicago, IL, USA'
(latitude, longitude) = geocode(address)
```

The `geocode` function requires a network connection, and it connects to the Google geocoding service. The function returns `(0,0)` in case of failure. Notice that the Google geocoding service caps the number of requests, so you should check its terms of service. The `geocode` function is built on top of the `fetch` function, so it works on GAE too.

### Pagination

This recipe is a useful trick to minimize database access in case of pagination, i.e. when you need to display a list of records from the database but want to distribute the records over multiple pages.

Start by creating a primos application that stores the first 1000 prime numbers in a database. Here is the model `db.py`:

```
db = DAL('sqlite://primos.db')
db.define_table('primo', Field('valor', 'integer'))

def esprimo(p):
    for i in range(2, p):
        if p % i == 0:
            return False
    return True

if len(db().select(db.primo.id)) == 0:
    p = 2
    for i in range(1000):
        while not esprimo(p):
            p += 1
        db.primo.insert(valor=p)
        p += 1
```

Now create an action `listar_elementos` in the "default.py" controller that reads like this:

```
def listar_elementos():
    if len(request.args):
        pagina = int(request.args[0])
    else:
        pagina = 0
    elementos_por_pagina = 20
    limitby = (pagina * elementos_por_pagina,
               (pagina + 1) * elementos_por_pagina + 1)
    registros = db().select(db.primo.ALL, limitby=limitby)
    return dict(registros=registros, pagina=pagina,
                elementos_por_pagina=elementos_por_pagina)
```

Notice that this code selects one more item than needed, 20+1. The extra element tells the view whether there is a next page.

Here is the "default/listar_elementos.html" view:

```
{{for i, registro in enumerate(registros):}}
{{if i==elementos_por_pagina: break}}
{{=registro.valor}}<br />
{{pass}}
{{if pagina:}}
<a href="{{=URL(args=[pagina-1])}}">previa</a>
{{pass}}
{{if len(registros)>elementos_por_pagina:}}
<a href="{{=URL(args=[pagina+1])}}">próxima</a>
{{pass}}
```

In this way we obtain pagination with a single "select" per action, and that select only fetches one row more than we need.

### httpserver.log and the log file format

The web2py web server logs all requests to a file called:

`httpserver.log`

in the root web2py directory. An alternative filename and location can be specified via the web2py command-line options.

New entries are appended to the end of the file each time a request is made. Each line looks like this:

```
127.0.0.1, 2008-01-12 10:41:20, GET, /admin/default/site, HTTP/1.1, 200, 0.270000
```

The format is:

```
ip, timestamp, method, path, protocol, status, time_taken
```

where

* ip is the IP address of the client who made the request
* timestamp is the date and time of the request in ISO 8601 format, YYYY-MM-DDT HH:MM:SS
* method is either GET or POST
* path is the path requested by the client
* protocol is the HTTP protocol version used to send to the client, usually HTTP/1.1
* status is the HTTP status code [status]
* time_taken is the amount of time the server took to process the request, in seconds, not including upload/download time.

In the appliances repository [appliances], an appliance for log analysis is available for download. This logging is disabled by default when using mod_wsgi, because it would duplicate (i.e. log the same things as) the Apache log.
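Given this format, log entries are easy to parse; here is a minimal sketch (the filtering criterion is just an example):

```
campos = ['ip', 'timestamp', 'method', 'path', 'protocol', 'status', 'time_taken']
for linea in open('httpserver.log'):
    entrada = dict(zip(campos, [x.strip() for x in linea.split(',')]))
    if entrada['status'] != '200':   # e.g. report the non-200 responses
        print entrada['timestamp'], entrada['path'], entrada['status']
```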
### Populating the database with dummy data

For testing and quality-assurance purposes, we may need to insert records with random data into the database. web2py includes a Bayesian classifier capable of generating readable text for this purpose.

Here is the simplest way to use it:

```
from gluon.contrib.populate import populate
populate(db.mitabla, 100)
```

This inserts 100 random records into db.mitabla. It tries to be smart about it, generating short readable text for string fields, and sensible values for integer, double, date, datetime, time, boolean, etc. fields, according to the field type. It also tries to respect the requirements imposed by validators. For fields containing the word "name", it tries to generate fake names. For reference fields, it generates valid references.

If you have two tables (A and B) where B references A, make sure you populate A first and B second.

Because the inserts are performed within a single transaction, do not try to insert too many records at once, particularly if reference fields are involved. Instead, loop, inserting 100 at a time, followed by a `commit()`:

```
for i in range(10):
    populate(db.mitabla, 100)
    db.commit()
```

You can also use the Bayesian classifier to learn the characteristics of some sample text and generate random text in a similar style, although the result may not make much sense:

```
from gluon.contrib.populate import Learner, IUP
ell = Learner()
ell.learn('una entrada de texto realmente larga ...')
print ell.generate(1000, prefix=None)
```
### Accepting credit card payments

There are multiple ways to accept credit card payments online. web2py provides a specific API for some of the most popular and practical ones:

* Google Wallet [googlewallet]
* PayPal [paypal]
* Stripe.com [stripe]
* Authorize.net [authorizenet]
* DowCommerce [dowcommerce]

The first two mechanisms delegate the payment authentication process to a third-party service. While this is the best security solution (your application does not handle any credit card information at all), it makes the process awkward (the user must log in twice; for example, once with your application and once with Google), and it does not allow your application to handle recurring payments in an automated way.

There are times when you need more control and you want to generate the credit card entry form yourself and programmatically ask the processor to transfer money from the credit card to your account.

For this reason web2py provides out-of-the-box integration with Stripe, Authorize.net (the module was developed by <NAME> and slightly modified) and DowCommerce. Stripe is the simplest to use, and also the cheapest for a low volume of transactions (it charges no fixed fee, although the per-transaction cost is a bit above 3%). Authorize.net is better for high volumes (it has a fixed annual fee plus a lower per-transaction cost).

Keep in mind that, in the case of Stripe and Authorize.net, your program will be accepting credit card information. You do not have to store this information, and we advise you not to, because of the legal requirements involved (check with Visa or Mastercard), but there are times when you may want to store it for recurring payments or to reproduce Amazon's one-click payment button.

# Google Wallet

The simplest way to use Google Wallet (Level 1) consists of embedding a button in your page that, when clicked, redirects your visitor to a payment page provided by Google.

The first step is to register a Google Merchant account at the following URL:

```
https://checkout.google.com/sell
```

You will need to provide Google with your bank information. Google will assign you a `merchant_id` and a `merchant_key` (do not confuse them, and keep them secret).

Then you simply need to create the following code in your view:

```
{{from gluon.contrib.google_wallet import button}}
{{=button(merchant_id="123456789012345",
          products=[dict(name="calzado",
                         quantity=1,
                         price=23.5,
                         currency='USD',
                         description="zapatillas para correr negras")])}}
```

When a visitor clicks on the button, he or she is redirected to the Google page where payment can be made for the items. Here products is a list of products, and each product is a dictionary of parameters describing your item (name, quantity, price, currency, description, and other optional fields that you can find described in the Google Wallet documentation).

If you opt for this mechanism, you may want to generate the values passed to the button programmatically, based on your inventory and the visitor's shopping-cart data. All tax and shipping information is handled on the Google side. The same is true for accounting information. By default, your application is not notified when a transaction is completed; therefore you may have to visit your Google Merchant account to see which products have been sold and paid for, and which products need to be shipped to their buyers. Google also sends you an email with the information.

If you want tighter integration, you have to use the Level 2 notification API. In that case you can pass more information to Google, and Google will call your API to notify you of sales. This allows you to keep accounting information inside your application, but it requires that you expose web services able to talk to Google Wallet. The latter can be complicated to program for a given app, but a plugin implementing such an API already exists.
```
http://web2py.com/plugins/static/web2py.plugin.google_checkout.w2p
```

You can find the plugin documentation inside the plugin itself.

# PayPal

PayPal integration is not described here, but you can find more information in this slice:

```
http://www.web2pyslices.com/main/slices/take_slice/9
```

# Stripe.com

This is probably one of the easiest and most flexible ways to accept credit card payments. You need to register an account with Stripe.com, which is a very simple task, and Stripe will assign you a test API key even before you provide any credentials.

Once you have your API key, you can accept credit card payments with the following code:

```
from gluon.contrib.stripe import Stripe
stripe = Stripe(api_key)
d = stripe.charge(amount=100,
                  currency='usd',
                  card_number='4242424242424242',
                  card_exp_month='5',
                  card_exp_year='2012',
                  card_cvc_check='123',
                  description='zapatos negros comunes')
if d.get('paid', False):
    # the payment was accepted
    pass
else:
    # the error message is in d.get('error', 'unknown')
    pass
```

The response, `d`, is a dictionary that you can explore yourself. The card number used in the example is a sandbox number, and transactions with it always succeed. Each transaction is associated with an id stored in `d['id']`.

Stripe also allows you to verify a transaction at a later time:

```
d = Stripe(key).check(d['id'])
```

and to refund a transaction:

```
r = Stripe(key).refund(d['id'])
if r.get('refunded', False):
    # the refund succeeded
    pass
else:
    # the error message is in r.get('error', 'unknown')
    pass
```

Stripe makes it easy to keep the accounting inside your application. All communication between your application and Stripe happens over RESTful web services. Stripe actually exposes even more services and provides a larger set of features in its Python API. You can read more about them on its web site.

# Authorize.Net

Another simple way to accept credit cards is to use Authorize.Net. As usual, you need to register and obtain a `login` and a transaction key (`transkey`). Once you have them, it works very much like Stripe:

```
from gluon.contrib.AuthorizeNet import process
if process(creditcard='4427802641004797',
           expiration="122012",
           total=100.0,
           cvv='123',
           tax=None,
           invoice=None,
           login='cnpdev4289',
           transkey='<KEY>',
           testmode=True):
    # the payment was processed
    pass
else:
    # the payment was declined
    pass
```

If you have a valid Authorize.Net account, you should replace the sandbox `login` and `transkey` parameters with those of your account, set `testmode=False` to run on the real platform instead of the sandbox, and use the credit card information provided by the visitor. If `process` returns `True`, the money has been transferred from the visitor's credit card to your Authorize.Net account. `invoice` is just a string that you can customize; it will be stored by Authorize.Net with the rest of the transaction values, so that you can reconcile the data with the information in your application.
Here is a more complex example of the workflow, exposing more of the variables:

```
import socket
from gluon.contrib.AuthorizeNet import AIM
payment = AIM(login='cnpdev4289',
              transkey='<KEY>',
              testmode=True)
payment.setTransaction(creditcard, expiration, total, cvv, tax, invoice)
payment.setParameter('x_duplicate_window', 180)  # three-minute duplicate window
payment.setParameter('x_cust_id', '1324')        # customer ID
payment.setParameter('x_first_name', 'Agent')    # first name
payment.setParameter('x_last_name', 'Smith')     # last name
payment.setParameter('x_company', 'Test Company')      # company
payment.setParameter('x_address', '1234 Main Street')  # address
payment.setParameter('x_city', 'Townsville')           # city
payment.setParameter('x_state', 'NJ')                  # state
payment.setParameter('x_zip', '12345')                 # zip code
payment.setParameter('x_country', 'US')                # country
payment.setParameter('x_phone', '800-555-1234')        # phone
payment.setParameter('x_description', 'Test Transaction')  # description
payment.setParameter('x_customer_ip', socket.gethostbyname(socket.gethostname()))  # customer IP
payment.setParameter('x_email', '<EMAIL>')             # email
payment.setParameter('x_email_customer', False)        # email the customer
payment.process()
if payment.isApproved():
    print 'Response Code: ', payment.response.ResponseCode
    print 'Response Text: ', payment.response.ResponseText
    print 'Response: ', payment.getResultResponseFull()
    print 'Transaction ID: ', payment.response.TransactionID
    print 'CVV Result: ', payment.response.CVVResponse
    print 'Approval Code: ', payment.response.AuthCode
    print 'AVS Result: ', payment.response.AVSResponse
elif payment.isDeclined():
    print 'Your credit card was declined by the bank'
elif payment.isError():
    print 'An error occurred'

print 'approved', payment.isApproved()
print 'declined', payment.isDeclined()
print 'error', payment.isError()
```

Note that the code above uses a test account. You need to register with Authorize.Net (it is not a free service) and provide your own login, transkey, and testmode (True or False) to the `AIM` constructor.

# Dropbox API

Dropbox is a very popular storage service. It not only stores your files online, it also keeps the cloud storage synchronized with all of your machines. It lets you create groups and give individual users or groups read or write permission on each folder. It also keeps a version history of all your files. It includes a folder called "Public", and every file you place there gets its own public URL. Dropbox is an excellent collaboration tool.

You can get access to Dropbox easily by registering at

```
https://www.dropbox.com/developers
```

You will be given an `APP_KEY` and an `APP_SECRET`. Once you have them, you can use Dropbox to authenticate your users. Create a file called "yourapp/private/dropbox.key" and in it write

```
<APP_KEY>:<APP_SECRET>:app_folder
```

where `<APP_KEY>` and `<APP_SECRET>` are your app key and app secret.
Luego en "models/db.py" agrega el siguiente código: ``` from gluon.contrib.login_methods.dropbox_account import use_dropbox use_janrain(auth,filename='private/dropbox.key') midropbox = auth.settings.login_form ``` Esto permite a los usuarios autenticarse en tu aplicación usando sus credenciales de dropbox, y tu programa será capaz de subir archivos en sus cuentas dropbox: ``` stream = open('archivolocal.txt','rb') midropbox.put('archivodestino.txt', stream) ``` para descargar archivos: ``` stream = midropbox.get('archivodestino.txt') open('archivolocal.txt','wb').write(read) ``` y para obtener el listado del directorio: ``` contenidos = mydropbox.dir(path = '/')['contents'] ``` ### Twitter API¶ Aquí mostramos algunos ejemplos rápidos de cómo enviar y obtener mensajes tweet. No se requieren librerías de terceros, ya que Twitter ha creado una APIs RESTful. Aquí se puede ver un ejemplo de cómo enviar un tweet (mensaje): ``` def publicar_tweet(usuario, clave, mensaje): import urllib, urllib2, base64 import gluon.contrib.simplejson as sj argumentos = urllib.urlencode([('status', mensaje)]) encabezados={} encabezados['Authorization'] = 'Basic '+base64.b64encode( usuario + ':' + clave) solicitud = urllib2.Request( 'http://twitter.com/statuses/update.json', argumentos, encabezados) return sj.loads(urllib2.urlopen(solicitud).read()) ``` Y este es un ejemplo para recibir mensajes de Twitter (es decir, varios tweet): ``` def recuperar_tweet(): """ Recupera conjunto de mensajes de Twitter """ usuario = 'web2py' import urllib import gluon.contrib.simplejson as sj pagina = urllib.urlopen('http://twitter.com/%s?format=json' % usuario).read() mensajes = XML(sj.loads(pagina)['#timeline']) return dict(mensajes=mensajes) ``` Para operaciones más complejas, consulta la documentación de la API de Twitter. ### Stream de archivos virtuales¶ Es común en programadores maliciosos el escaneado (scan) de sitios web buscando vulnerabilidades. Estos programadores usan escáneres de seguridad como Nessus para explorar el sitio web de interés, en búsqueda de script conocidos por sus vulnerabilidades. Un análisis del historial del servidor web examinado o directamente en la base de datos Nessus revela que la mayoría de las vulnerabilidades conocidas están en script de PHP y ASP. Si utilizamos web2py para nuestro servicio, no tendremos estas vulnerabilidades, pero igualmente somos escaneados por ellos. Esto es molesto, y por lo tanto, deberíamos responder a estos escáneres de vulnerabilidades y hacer al atacante entender que está desperdiciando su tiempo. Una posibilidad es redirigir todas las solicitudes para .php, .asp, y toda solicitud sospechosa a una acción ficticia o dummy que responderá al atacante manteniéndolo ocupado por una gran cantidad de tiempo. Eventualmente el atacante se rendirá y no nos escaneará de nuevo. Esta receta requiere de dos partes. Una aplicación dedicada llamada atascador con un controlador "default.py" como sigue: ``` class Atascador(): def read(self,n): return 'x'*n def atascar(): return response.stream(Atascador(), 40000) ``` Cuando esta acción sea invocada, responderá con un infinito flujo de datos de "x"-es. 40000 caracteres al mismo tiempo. El segundo ingrediente es un archivo "route.py" que redirija cualquier solicitud terminada en .php, .asp, etc. (ambos en minúsculas y mayúsculas) al controlador. 
```
routes_in = (
    ('.*\.(php|PHP|asp|ASP|jsp|JSP)', 'atascador/default/atascar'),
)
```

The first time we are attacked there may be a small resource overhead, but our experience is that the same attacker will not try twice.

# Chapter 15: Translation notes

### Translating specialized terminology

* It is very common in technical literature, and especially in internet programming, to use specific terminology and neologisms that either have no direct translation, or whose usual translation in other kinds of text would be ambiguous. Among other cases are the term serialize and its derivatives (serialization, serialized, ...), tutorial, indentation, and typed. In these cases we frequently opted for the analogous Spanish neologism (for example, serializar for serialize), since this avoids building ambiguous sentences that would hinder reading, and also follows the widespread custom in specialized computing texts.
* For terms whose translation is vague, or for which we could not find an adequate Spanish expression, the original English term was generally added in parentheses.
* Escaping, escape, and other related special words have been translated as escapar. http://en.wikipedia.org/wiki/Escape_character
* For phrases such as response body we chose the direct translation (in this case el cuerpo de la respuesta) rather than referring to the API objects, which are easily identifiable. Otherwise we would have had to write el body de la response or el response.body.
* Render is frequently translated as convertir or conversión and, as the sentence requires, as procesar or procesamiento.
* Named arguments (in the Python language) are translated as pares nombre-valor or pares de argumentos de nombre-valor.
* The Python identifier is translated as nombre.
* Words like callback are kept in the original, because no conventional translation was found; the exception, for this example, is llamada de retorno, which can be confusing in some situations.
* The verb to link <something>, for embedding a script or style in a web document (which probably has no translation to date), was replaced by the unorthodox but much more familiar linkear.
* For HTML elements we preferred the literal translation hijo (child) rather than anidado (nested). The reason is that any element contained in another can be classified as nested, whereas children, taking the structure of a web document as a reference, are those that descend from one specific element.
* Layout is translated as Diseño, or the original is kept when speaking of the structure of views, to preserve the file naming used in the welcome scaffolding application (layout.html).
* For factory (for example helper factory) the term creador is used instead of the literal factoría, which is confusing.
* Agregado was used for plugin in the title, following the translation of the third edition, but note how ambiguous this usage is: ...Debes instalar el agregado.... An agregado can be a simple widget, validator or static file, or a plugin. In the text the original English term is preferred for its precision and brevity (even though it does not exist as a Spanish word).
Perhaps the most suitable option would be to hispanicize it as plugín, but in that case it could be hard to understand.
* Slug is translated as titular because of its similarity to the term used for newspaper headlines.
* Verbatim text is translated following the meaning used in trac (texto sin conversión).
* Deployment was translated as implementación to stay consistent with the previous edition, although the translation is not precise. In some sentences the literal desplegar was used. In chapter 13, despliegue was preferred when referring to installation in production on the various hosting systems.
* Master-slave is not translated, because no widely used translation was found.
* For self-submitted forms, lacking a commonly adopted translation, autoenviados was used. Another, less literal, translation would be autoreferentes, but it can also cause confusion.
* Lacking a translation for subsection, it is simply replaced by sección.
* Label has been translated as etiqueta instead of the (perhaps more appropriate) rótulo, because the latter is an unusual term.
* Deprecated was translated as obsoleto/a.
* Self-identifying, for algorithms, was translated as diferenciable, since it refers to an algorithm that can be separated from the set and substituted (see chapter 7, in the description of the CRYPT validator).
* Back end (of a database) is translated as servicio, although a database back end may not be a service. The English book uses database back-end both for database engines and for a concrete database installation; for the first case motor was used, and for the second servicio.
* Join: the translation junta found at ArPug (PostgreSQL users) was not adopted in other texts, so the original name is kept.
* Multi-tenant and multi-tenancy have no conventional translation and were replaced by aplicaciones compartidas, since this definition matches the meaning of those terms as used in web2py.
* Request was translated as solicitud instead of the more common petición. This convention could be revised if solicitud turns out to be confusing or inappropriate.
* The Dispatcher is translated as administrador de direcciones, to illustrate its functionality (it could also be translated as despachador or despachante).
* Logging is translated as historial, following an anonymous Wikipedia article with that translation, for lack of an authoritative one.
* To translate framework, marco de desarrollo was used, as in the 3rd edition (Latinux).
* Translation of source code examples: where the typical variables of the scaffolding application are used, the English terms were kept so that the code remains functional when executed (that is, so that no errors occur when interacting with features of the scaffolding application, or even with core functions that expect specific name definitions). A typical case is index, the default start function of the default.py controller. For conventional terms that do not require changing variables specific to the welcome app, their translation was preferred for better readability (except for some names whose English abbreviation coincides with the Spanish one, such as concat). For example, the objects row and form have been translated as registro and formulario.
### About argument names

In the examples of built-in functions that accept positional arguments, the translations can create conflicts in certain implementations of the examples. This is because the signature (the list of accepted arguments) of a function specifies positional-or-keyword arguments. In every example to be implemented in production, the translated parameter names should be replaced by their English originals. For example, in the case of the word query:

```
salida = nombre_de_funcion(consulta, ...)
```

should be replaced by

```
salida = nombre_de_funcion(query, ...)
```

This will avoid, for example, that when working in collaboration with other developers, someone tries to use the analogous command, which is incompatible with the function's definition:

```
salida = nombre_de_funcion(consulta=db.tabla.id > 0)
```
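To make the failure mode concrete, here is a short, self-contained Python sketch (not part of the book; `nombre_de_funcion` and its `query` parameter are illustrative stand-ins) showing that a translated name is harmless as a local variable or positional argument, but breaks as a keyword:

```
def nombre_de_funcion(query, limite=10):
    # Illustrative only: 'query' is a positional-or-keyword parameter,
    # so its *name* is part of the function's public interface.
    return (query, limite)

consulta = 'db.tabla.id > 0'                # a translated local name is fine
print(nombre_de_funcion(consulta))          # OK: passed positionally
print(nombre_de_funcion(query=consulta))    # OK: the original keyword name
try:
    nombre_de_funcion(consulta=consulta)    # translated keyword does not exist
except TypeError as error:
    print(error)  # ... got an unexpected keyword argument 'consulta'
```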
github.com/openshift/origin/pkg/route/graph/analysis (Go)
Documentation
---

### Overview

Package analysis provides functions that analyse routes and set up markers that will be reported by oc status.

### Index

* Constants
* func FindMissingRouter(g osgraph.Graph, f osgraph.Namer) []osgraph.Marker
* func FindMissingTLSTerminationType(g osgraph.Graph, f osgraph.Namer) []osgraph.Marker
* func FindPathBasedPassthroughRoutes(g osgraph.Graph, f osgraph.Namer) []osgraph.Marker
* func FindPortMappingIssues(g osgraph.Graph, f osgraph.Namer) []osgraph.Marker
* func FindRouteAdmissionFailures(g osgraph.Graph, f osgraph.Namer) []osgraph.Marker

### Constants

```
const (
	// MissingRoutePortWarning is returned when a route has no route port specified
	// and the service it routes to has multiple ports.
	MissingRoutePortWarning = "MissingRoutePort"
	// WrongRoutePortWarning is returned when a route has a route port specified
	// but the service it points to has no such port (either as a named port or as
	// a target port).
	WrongRoutePortWarning = "WrongRoutePort"
	// MissingServiceWarning is returned when there is no service for the specific route.
	MissingServiceWarning = "MissingService"
	// MissingTLSTerminationTypeErr is returned when a route with a tls config doesn't
	// specify a tls termination type.
	MissingTLSTerminationTypeErr = "MissingTLSTermination"
	// PathBasedPassthroughErr is returned when a path based route is passthrough
	// terminated.
	PathBasedPassthroughErr = "PathBasedPassthrough"
	// RouteNotAdmittedTypeErr is returned when a route was not admitted by a router.
	RouteNotAdmittedTypeErr = "RouteNotAdmitted"
	// MissingRequiredRouterErr is returned when no router has been set up.
	MissingRequiredRouterErr = "MissingRequiredRouter"
)
```

### Variables

This section is empty.

### Functions

#### func [FindMissingRouter](https://github.com/openshift/origin/blob/v3.8.0/pkg/route/graph/analysis/analysis.go#L188) (added in v1.1.5)

```
func FindMissingRouter(g osgraph.Graph, f osgraph.Namer) []osgraph.Marker
```

FindMissingRouter creates markers for all routes in case there is no running router.
#### func [FindMissingTLSTerminationType](https://github.com/openshift/origin/blob/v3.8.0/pkg/route/graph/analysis/analysis.go#L142) (added in v1.1.1)

```
func FindMissingTLSTerminationType(g osgraph.Graph, f osgraph.Namer) []osgraph.Marker
```

#### func [FindPathBasedPassthroughRoutes](https://github.com/openshift/origin/blob/v3.8.0/pkg/route/graph/analysis/analysis.go#L209) (added in v1.1.2)

```
func FindPathBasedPassthroughRoutes(g osgraph.Graph, f osgraph.Namer) []osgraph.Marker
```

#### func [FindPortMappingIssues](https://github.com/openshift/origin/blob/v3.8.0/pkg/route/graph/analysis/analysis.go#L43) (added in v1.1.4)

```
func FindPortMappingIssues(g osgraph.Graph, f osgraph.Namer) []osgraph.Marker
```

FindPortMappingIssues checks all routes and reports any issues related to their ports. Non-existent services for routes are also reported here.

#### func [FindRouteAdmissionFailures](https://github.com/openshift/origin/blob/v3.8.0/pkg/route/graph/analysis/analysis.go#L163) (added in v1.1.4)

```
func FindRouteAdmissionFailures(g osgraph.Graph, f osgraph.Namer) []osgraph.Marker
```

FindRouteAdmissionFailures creates markers for any routes that were rejected by their routers.

### Types

This section is empty.
coro (CRAN, R)
Package ‘coro’ (October 12, 2022)

Title: 'Coroutines' for R
Version: 1.0.3
Description: Provides 'coroutines' for R, a family of functions that can be suspended and resumed later on. This includes 'async' functions (which await) and generators (which yield). 'Async' functions are based on the concurrency framework of the 'promises' package. Generators are based on a dependency-free iteration protocol defined in 'coro' and are compatible with iterators from the 'reticulate' package.
License: MIT + file LICENSE
URL: https://github.com/r-lib/coro
BugReports: https://github.com/r-lib/coro/issues
Depends: R (>= 3.5.0)
Imports: rlang (>= 0.4.12)
Suggests: knitr, later (>= 1.1.0), magrittr (>= 2.0.0), promises, reticulate, rmarkdown, testthat (>= 3.0.0)
VignetteBuilder: knitr
Config/testthat/edition: 3
Config/Needs/website: tidyverse/tidytemplate
Encoding: UTF-8
RoxygenNote: 7.2.0
NeedsCompilation: no
Author: <NAME> [aut, cre], RStudio [cph, fnd]
Maintainer: <NAME> <<EMAIL>>
Repository: CRAN
Date/Publication: 2022-07-19 15:30:02 UTC

R topics documented: async, async_collect, async_generator, async_sleep, as_iterator, collect, coro_debug, generator, iterator, yield

### async: Make an async function

Description

async() functions are building blocks for cooperative concurrency.

* They are concurrent because they are jointly managed by a scheduler in charge of running them.
* They are cooperative because they decide on their own when they can no longer make quick progress and need to await some result. This is done with the await() keyword, which suspends the async function and gives control back to the scheduler. The scheduler waits until the next async operation is ready to make progress.

The async framework used by async() functions is implemented in the later and promises packages:

* You can chain async functions created with coro to promises.
* You can await promises. You can also await futures created with the future package because they are coercible to promises.

Usage

```
async(fn)

await(x)
```

Arguments

* fn: An anonymous function within which await() calls are allowed.
* x: An awaitable value, i.e. a promise.

Value

A function that returns a promises::promise().

See Also

async_generator() and await_each(); coro_debug() for step-debugging.

Examples

```
# This async function counts down from `n`, sleeping for 2 seconds
# at each iteration:
async_count_down <- async(function(n) {
  while (n > 0) {
    cat("Down", n, "\n")
    await(async_sleep(2))
    n <- n - 1
  }
})

# This async function counts up until `stop`, sleeping for 0.5
# seconds at each iteration:
async_count_up <- async(function(stop) {
  n <- 1
  while (n <= stop) {
    cat("Up", n, "\n")
    await(async_sleep(0.5))
    n <- n + 1
  }
})

# You can run these functions concurrently using `promise_all()`
if (interactive()) {
  promises::promise_all(async_count_down(5), async_count_up(5))
}
```

### async_collect: Collect elements of an asynchronous iterator

Description

async_collect() takes an asynchronous iterator, i.e. an iterable function that is also awaitable. async_collect() returns an awaitable that eventually resolves to a list containing the values returned by the iterator. The values are collected until exhaustion, unless n is supplied. The collection is grown geometrically for performance.

Usage

```
async_collect(x, n = NULL)
```

Arguments

* x: An iterator function.
* n: The number of elements to collect. If x is an infinite sequence, n must be supplied to prevent an infinite loop.
Examples

```
# Emulate an async stream by yielding promises that resolve to the
# elements of the input vector
generate_stream <- async_generator(function(x) for (elt in x) yield(elt))

# You can await `async_collect()` in an async function. Once the
# list of values is resolved, the async function resumes.
async(function() {
  stream <- generate_stream(1:3)
  values <- await(async_collect(stream))
  values
})
```

### async_generator: Construct an async generator

Description

An async generator constructs iterable functions that are also awaitables. They support both the yield() and await() syntax. An async iterator can be looped over within async functions and iterators by using await_each() on the input of a for loop.

The iteration protocol is derived from the one described in iterator. An async iterator always returns a promise. When the iterator is exhausted, it returns a resolved promise to the exhaustion sentinel.

Usage

```
async_generator(fn)

await_each(x)
```

Arguments

* fn: An anonymous function describing an async generator within which await() calls are allowed.
* x: An awaitable value, i.e. a promise.

Value

A generator factory. Generators constructed with this factory always return promises::promise().

See Also

async() for creating awaitable functions; async_collect() for collecting the values of an async iterator; coro_debug() for step-debugging.

Examples

```
# Creates awaitable functions that transform their inputs into a stream
generate_stream <- async_generator(function(x) for (elt in x) yield(elt))

# Maps a function to a stream
async_map <- async_generator(function(.i, .fn, ...) {
  for (elt in await_each(.i)) {
    yield(.fn(elt, ...))
  }
})

# Example usage:
if (interactive()) {
  library(magrittr)
  generate_stream(1:3) %>% async_map(`*`, 2) %>% async_collect()
}
```

### async_sleep: Sleep asynchronously

Description

Sleep asynchronously.

Usage

```
async_sleep(seconds)
```

Arguments

* seconds: The number of seconds to sleep.

Value

A chainable promise.

### as_iterator: Transform an object to an iterator

Description

as_iterator() is a generic function that transforms its input to an iterator function. The default implementation is as follows:

* Functions are returned as is.
* Other objects are assumed to be vectors with length() and [[ methods.

Methods must return functions that implement coro's iterator protocol. as_iterator() is called by coro on the RHS of in in for loops. This applies within generators, async functions, and loop().

Usage

```
as_iterator(x)

## Default S3 method:
as_iterator(x)
```

Arguments

* x: An object.

Value

An iterable function.

Examples

```
as_iterator(1:3)

i <- as_iterator(1:3)
loop(for (x in i) print(x))
```

### collect: Iterate over iterator functions

Description

loop() and collect() are helpers for iterating over iterator functions such as generators.

* loop() takes a for loop expression in which the collection can be an iterator function.
* collect() loops over the iterator and collects the values in a list.

Usage

```
collect(x, n = NULL)

loop(loop)
```

Arguments

* x: An iterator function.
* n: The number of elements to collect. If x is an infinite sequence, n must be supplied to prevent an infinite loop.
* loop: A for loop expression.

Value

collect() returns a list of values; loop() returns the exhausted() sentinel, invisibly.

See Also

async_collect() for async generators.
Examples

```
generate_abc <- generator(function() for (x in letters[1:3]) yield(x))
abc <- generate_abc()

# Collect 1 element:
collect(abc, n = 1)

# Collect all remaining elements:
collect(abc)

# With exhausted iterators collect() returns an empty list:
collect(abc)

# With loop() you can use `for` loops with iterators:
abc <- generate_abc()
loop(for (x in abc) print(x))
```

### coro_debug: Debug a generator or async function

Description

* Call coro_debug() on a generator(), async(), or async_generator() function to enable step-debugging.
* Alternatively, set options(coro_debug = TRUE) for step-debugging through all functions created with coro.

Usage

```
coro_debug(fn, value = TRUE)
```

Arguments

* fn: A generator factory or an async function.
* value: Whether to debug the function.

### generator: Create a generator function

Description

generator() creates a generator factory. A generator is an iterator function that can pause its execution with yield() and resume from where it left off. Because they manage state for you, generators are the easiest way to create iterators. See vignette("generator").

The following rules apply:

* Yielded values do not terminate the generator. If you call the generator again, the execution resumes right after the yielding point. All local variables are preserved.
* Returned values terminate the generator. If called again after a return(), the generator keeps returning the exhausted() sentinel.

Generators are compatible with all features based on the iterator protocol, such as loop() and collect().

Usage

```
generator(fn)

gen(expr)
```

Arguments

* fn: A function template for generators. The function can yield() values. Within a generator, for loops have iterator support.
* expr: A yielding expression.

See Also

yield(); coro_debug() for step-debugging.

Examples

```
# A generator statement creates a generator factory. The
# following generator yields two times and then returns `"c"`:
generate_abc <- generator(function() {
  yield("a")
  yield("b")
  "c"
})

# Or equivalently:
generate_abc <- generator(function() {
  for (x in letters[1:3]) {
    yield(x)
  }
})

# The factory creates generator instances. They are iterators
# that you can call successively to obtain new values:
abc <- generate_abc()
abc()
abc()

# Once a generator has returned it keeps returning `exhausted()`.
# This signals to its caller that new values can no longer be
# produced. The generator is exhausted:
abc()
abc()

# You can only exhaust a generator once but you can always create
# new ones from a factory:
abc <- generate_abc()
abc()

# As generators implement the coro iteration protocol, you can use
# coro tools like `loop()`. It makes it possible to loop over
# iterators with `for` expressions:
loop(for (x in abc) print(x))

# To gather values of an iterator in a list, use `collect()`. Pass
# the `n` argument to collect that number of elements from a
# generator:
abc <- generate_abc()
collect(abc, 1)

# Or drain all remaining elements:
collect(abc)
```
```
# coro provides a short syntax `gen()` for creating one-off
# generator _instances_. It is handy to adapt existing iterators:
numbers <- 1:10
odds <- gen(for (x in numbers) if (x %% 2 != 0) yield(x))
squares <- gen(for (x in odds) yield(x^2))
greetings <- gen(for (x in squares) yield(paste("Hey", x)))
collect(greetings)

# Arguments passed to generator instances are returned from the
# `yield()` statement on reentry:
new_tally <- generator(function() {
  count <- 0
  while (TRUE) {
    i <- yield(count)
    count <- count + i
  }
})
tally <- new_tally()
tally(1)
tally(2)
tally(10)
```

### iterator: Iterator protocol

Description

An iterator is a function that implements the following protocol:

* Calling the function advances the iterator. The new value is returned.
* When the iterator is exhausted and there are no more elements to return, the symbol quote(exhausted) is returned. This signals exhaustion to the caller.
* Once an iterator has signalled exhaustion, all subsequent invocations must consistently return coro::exhausted() or as.symbol(".__exhausted__.").

```
iterator <- as_iterator(1:3)

# Calling the iterator advances it
iterator()
## [1] 1
iterator()
## [1] 2

# This is the last value
iterator()
## [1] 3

# Subsequent invocations return the exhaustion sentinel
iterator()
## .__exhausted__.
```

Because iteration is defined by a protocol, creating iterators is free of dependency. However, it is often simpler to create iterators with generators; see vignette("generator"). To loop over an iterator, it is simpler to use the loop() and collect() helpers provided in this package.

Usage

```
exhausted()

is_exhausted(x)
```

Arguments

* x: An object.

Properties

Iterators are stateful. Advancing the iterator creates a persistent effect in the R session. Iterators are also one-way: once you have advanced an iterator there is no going back, and once it is exhausted it stays exhausted.

Iterators are not necessarily finite. They can also represent infinite sequences, in which case trying to exhaust them is a programming error that causes an infinite loop.

The exhausted sentinel

Termination of iteration is signalled via a sentinel value, as.symbol(".__exhausted__."). Alternative designs include:

* A condition, as in Python.
* A rich value containing a termination flag, as in JavaScript.

The sentinel design is a simple and efficient solution, but it has a downside. If you are iterating over a collection of elements that inadvertently contains the sentinel value, the iteration will be terminated early. To avoid such mix-ups, the sentinel should only be used as a temporary value. It should be created from scratch by a function like coro::exhausted() and never stored in a container or namespace.

### yield: Yield a value from a generator

Description

The yield() statement suspends generator() functions. It works like return() except that the function continues execution at the yielding point when it is called again.

yield() can be called within loops and if-else branches, but for technical reasons it can't be used just anywhere in R code:

* yield() cannot be called as part of a function argument. Code such as list(yield()) is illegal.
* yield() does not cross function boundaries. You can't use it in a lambda function passed to lapply(), for instance.

Usage

```
yield(x)
```

Arguments

* x: A value to yield.

See Also

generator() for examples.
gba (Rust)
Crate gba
===

A crate for GBA development.

### How To Make Your Own GBA Project Using This Crate

This will require the use of Nightly Rust. Any recent-ish version of Nightly should be fine.

* **Get The ARM Binutils:** You'll need the ARM version of the GNU binutils in your path, specifically the linker (`arm-none-eabi-ld`). Linux folks can use the package manager. Mac and Windows folks can use the ARM website.
* **Run `rustup component add rust-src`:** This makes rustup keep the standard library source code on hand, which is necessary for `build-std` to work.
* **Create A `.cargo/config.toml`:** You'll want to set up a file to provide all the right default settings so that a basic `cargo build` and `cargo run` will "just work". Something like the following is what you probably want.

```
[build]
target = "thumbv4t-none-eabi"

[unstable]
build-std = ["core"]

[target.thumbv4t-none-eabi]
runner = "mgba-qt"
rustflags = ["-Clink-arg=-Tlinker_scripts/mono_boot.ld"]
```

* **Make Your Executables:** At this point you can make a `bin` or an `example` file. Every executable will need to be `#![no_std]` and `#![no_main]`. They will also need a `#[panic_handler]` defined, as well as a `#[no_mangle] extern "C" fn main() -> ! {}` function, which is what the assembly runtime will call to start your Rust program after it fully initializes the system. The C ABI must be used because Rust's own ABI is not stable.

```
#![no_std]
#![no_main]

#[panic_handler]
fn panic_handler(_: &core::panic::PanicInfo) -> ! {
  loop {}
}

#[no_mangle]
extern "C" fn main() -> ! {
  loop {}
}
```

* **Optional: Use `objcopy` and `gbafix`:** The `cargo build` will produce ELF files, which mGBA can run directly. If you want to run your program on real hardware you'll need to first `objcopy` the raw binary out of the ELF into its own file, then use `gbafix` to give an appropriate header to the file. `objcopy` is part of the ARM binutils you already installed; it should be named `arm-none-eabi-objcopy`. You can get `gbafix` through cargo: `cargo install gbafix`.

### Other GBA-related Crates

This crate provides an API to interact with the GBA that is safe, but with minimal restrictions on what components can be changed when. If you'd like an API where the borrow checker provides stronger control over component access then the agb crate might be what you want.

### Safety

All safety considerations for the crate assume that you're building for the `thumbv4t-none-eabi` or `armv4t-none-eabi` targets, using the provided linker script, and then running the code on a GBA. While it's possible to break any of these assumptions, if you do that some or all of the code provided by this crate may become unsound.

Modules
---

* asm_runtime: This module holds the assembly runtime that supports your Rust program.
* bios: The GBA's BIOS provides limited built-in utility functions.
* builtin_art: This module provides some basic art assets.
* dma: Module for interfacing with the GBA's Direct Memory Access units.
* fixed
* gba_cell: A GBA-specific "cell" type that allows safe global mutable data.
* interrupts
* keys: Module for interfacing with the device's button inputs.
* mem_fns: Module for direct memory operations.
* mgba: Lets you interact with the mGBA debug output buffer.
* mmio: Contains all the MMIO address definitions for the GBA's components.
* prelude: A module that just re-exports all the other modules of the crate.
* random
* sound
* timers: Module to interface with the GBA's four timer units.
* video: Module to control the GBA's screen.
Macros
---

* include_aligned_bytes: Works like [`include_bytes!`], but the value is wrapped in `Align4`.

Structs
---

* Align4: Wraps a value to be aligned to a minimum of 4.
Module gba::asm_runtime
===

This module holds the assembly runtime that supports your Rust program.

Most importantly, you can set the `RUST_IRQ_HANDLER` variable to assign which function should be run during a hardware interrupt.

* When a hardware interrupt occurs, control first goes to the BIOS, which will then call the assembly runtime's handler.
* The assembly runtime handler will properly acknowledge the interrupt within the system on its own without you having to do anything.
* If a function is set in the `RUST_IRQ_HANDLER` variable then that function will be called and passed the bits for which interrupt(s) occurred.

Statics
---

* RUST_IRQ_HANDLER: The function pointer that the assembly runtime calls when an interrupt occurs.

Module gba::bios
===

The GBA's BIOS provides limited built-in utility functions.

BIOS functions are accessed with an `swi` instruction to perform a software interrupt. This means that there's a *significant* overhead for a BIOS call (tens of cycles) compared to a normal function call (3 cycles, or even none if the function ends up inlined). Despite this higher cost, some BIOS functions are useful enough to justify the overhead.

Structs
---

* BitUnpackInfo: Used to provide info to a call of the `BitUnPack` function.

Functions
---

* ArcTan: `0x09`: Arc tangent.
* ArcTan2: `0x0A`: The "2-argument arctangent" (atan2).
* BitUnPack ⚠: `0x10`: Copy data from `src` to `dest` while increasing the bit depth of the elements copied.
* HuffUnCompReadNormal ⚠: `0x13`: Decompress huffman encoded data.
* IntrWait: `0x04`: Waits for a specific interrupt type(s) to happen.
* LZ77UnCompReadNormalWrite8bit ⚠: `0x11`: Decompress LZ77 data from `src` to `dest` using 8-bit writes.
* LZ77UnCompReadNormalWrite16bit ⚠: `0x12`: Decompress LZ77 data from `src` to `dest` using 16-bit writes.
* RLUnCompReadNormalWrite8bit ⚠: `0x14`: Decompress run-length encoded data (8-bit writes).
* RLUnCompReadNormalWrite16bit ⚠: `0x15`: Decompress run-length encoded data (16-bit writes).
* SoftReset: `0x00`: Software Reset.
* VBlankIntrWait: `0x05`: Builtin shorthand for `IntrWait(true, IrqBits::VBLANK)`.

Module gba::builtin_art
===

This module provides some basic art assets. It's mostly intended for the crate's examples, but you can use it yourself too if you find it pleasing.

Structs
---

* Cga8x8Thick

Statics
---

* CGA_8X8_THICK: The CGA Code Page 437 type face, with thick lines.

Module gba::dma
===

Module for interfacing with the GBA's Direct Memory Access units.

The GBA has four DMA units, numbered from 0 to 3. They can be used for extremely efficient memory transfers, and they can also be set to automatically transfer in response to select events.

Whenever a DMA unit is active, the CPU does not operate at all. Not even hardware interrupts will occur while a DMA is running. The interrupt will instead happen after the DMA transfer is done. When it's critical that an interrupt be handled exactly on time (such as when using serial interrupts) then you should avoid any large DMA transfers.

In any situation when more than one DMA unit would be active at the same time, the lower-numbered DMA unit runs first.
Each DMA unit is controlled by 4 different MMIO addresses, as follows (replace `x` with the DMA unit's number):

* `DMAx_SRC` and `DMAx_DEST`: source and destination address. DMA 0 can only use internal memory, DMA 1 and 2 can read from the gamepak but not write to it, and DMA 3 can even write to the gamepak (when the gamepak itself supports that). In all cases, SRAM cannot be accessed. The addresses of a transfer should always be aligned to the element size.
* `DMAx_COUNT`: Number of elements to transfer. The number of elements is either a 14-bit (DMA 0/1/2) or 16-bit (DMA 3) number. If the count is set to 0 then the transfer will instead copy one more than the normal maximum of that number's range (DMA 0/1/2: 16_384, DMA 3: 65_536).
* `DMAx_CONTROL`: Configuration bits for the transfer, see `DmaControl`.

### Safety

The DMA units are the least safe part of the GBA and should be used with caution. Because Rust doesn't have a fully precise memory model, and because LLVM is a little fuzzy about the limits of what a volatile address access can do, you are advised to **not** use DMA to alter any memory that is part of Rust's compilation (stack variables, static variables, etc). You are advised to only use the DMA units to transfer data into VRAM, PALRAM, OAM, and MMIO controls (eg: the FIFO sound buffers). In the future the situation may improve.

Structs
---

* DmaControl: DMA control configuration.

Enums
---

* DestAddrControl: Sets the change in destination address after each transfer.
* DmaStartTime: When the DMA unit should start doing work.
* SrcAddrControl: Sets the change in source address after each transfer.

Module gba::gba_cell
===

A GBA-specific "cell" type that allows safe global mutable data.

Most importantly, data stored in a `GbaCell` can be safely shared between the main program and the interrupt handler.

All you have to do is declare a static `GbaCell`:

```
static THE_COLOR: GbaCell<Color> = GbaCell::new(Color::new());
```

And then you can use the `read` and `write` methods to interact with the data:

```
let old_color = THE_COLOR.read();

THE_COLOR.write(Color::default());
```

Structs
---

* GbaCell: A GBA-specific wrapper around Rust's `UnsafeCell` type.

Traits
---

* GbaCellSafe: Marker trait bound for the methods of `GbaCell`.

Module gba::keys
===

Module for interfacing with the device's button inputs.

The GBA has two primary face buttons (A and B), two secondary face buttons (Select and Start), a 4-way directional pad ("D-pad"), and two shoulder buttons (L and R).

To get the state of all the buttons just read from `KEYINPUT`. For consistency, you should usually read the buttons only once per frame. Then use that same data for all user input considerations across that entire frame. Otherwise, small fluctuations in pressure can cause inconsistencies in the reading during a frame.

In addition to simply providing inputs, the buttons can also trigger a hardware interrupt. Set the desired set of buttons that will trigger a key interrupt with `KEYCNT`, and when that button combination is pressed the key interrupt will be fired. Key interrupts aren't a good fit for standard inputs, but as a way to provide a single extra special input it works okay. For example, this is generally how games with a "soft reset" button combination do that. The key interrupt handler sets a "reset requested" flag when the key interrupt occurs, and then the main game loop checks the flag each frame and performs a soft reset instead of the normal game simulation when the flag is set.
Structs
---

* KeyControl: `KEYCNT`: Determines when a key interrupt will be sent.
* KeyInput: `KEYINPUT`: Key input data.

Module gba::mem_fns
===

Module for direct memory operations.

Generally you don't need to call these yourself. Instead, the compiler will insert calls to the functions defined here as necessary.

Functions
---

* __aeabi_memclr ⚠: Just call `__aeabi_memset` with 0 as the `byte` instead.
* __aeabi_memclr4 ⚠: Just call `__aeabi_memset` with 0 as the `byte` instead.
* __aeabi_memclr8 ⚠: Just call `__aeabi_memset` with 0 as the `byte` instead.
* __aeabi_memcpy ⚠: Arbitrary width copy between exclusive regions.
* __aeabi_memcpy1 ⚠: Byte copy between exclusive regions.
* __aeabi_memcpy2 ⚠: Halfword copy between exclusive regions.
* __aeabi_memcpy4 ⚠: Word copy between exclusive regions.
* __aeabi_memcpy8 ⚠: Just call `__aeabi_memcpy4` instead.
* __aeabi_memmove ⚠: Copy between non-exclusive regions.
* __aeabi_memmove4 ⚠: Copy between non-exclusive regions, prefer `__aeabi_memmove` if possible.
* __aeabi_memmove8 ⚠: Copy between non-exclusive regions, prefer `__aeabi_memmove` if possible.
* __aeabi_memset ⚠: Sets all bytes in the region to the `byte` given.
* __aeabi_memset4 ⚠: Copy between non-exclusive regions, prefer `__aeabi_memset` if possible.
* __aeabi_memset8 ⚠: Copy between non-exclusive regions, prefer `__aeabi_memset` if possible.
* memcpy ⚠: Copy between exclusive regions, prefer `__aeabi_memcpy` if possible.
* memmove ⚠: Copy between non-exclusive regions, prefer `__aeabi_memmove` if possible.
* memset ⚠: Write a value to all bytes in the region, prefer `__aeabi_memset` if possible.

Module gba::mgba
===

Lets you interact with the mGBA debug output buffer.

This buffer works as a "standard output" sort of interface:

* First `use core::fmt::Write;` so that the `Write` trait is in scope.
* Try to make a logger with `MgbaBufferedLogger::try_new(log_level)`.
* Use the `write!` macro to write data into the logger.
* The logger will automatically flush itself (using the log level you set) when the buffer is full, on a newline, and when it's dropped.

Logging is not always available. Obviously the mGBA output buffer can't be used if the game isn't running within the mGBA emulator. `MgbaBufferedLogger::try_new` will fail to make a logger when logging isn't available. You can also call `mgba_logging_available` directly to check if mGBA logging is possible.

```
use core::fmt::Write;
let log_level = MgbaMessageLevel::Debug;
if let Ok(mut logger) = MgbaBufferedLogger::try_new(log_level) {
  writeln!(logger, "hello").ok();
}
```

### Fine Details

Even when the program is running within mGBA, the `MGBA_LOG_ENABLE` address needs to be written with the `MGBA_LOGGING_ENABLE_REQUEST` value to allow logging. This is automatically done for you by the assembly runtime. If the `MGBA_LOG_ENABLE` address reads back `MGBA_LOGGING_ENABLE_RESPONSE` then mGBA logging is possible. If you're running outside of mGBA then the `MGBA_LOG_ENABLE` address maps to nothing: writes will do no harm, and reads won't read the correct value.

Once you know that logging is possible, write your message to `MGBA_LOG_BUFFER`. This works similar to a C-style string: the first 0 byte in the buffer will be considered the end of the message. When the message is ready to go out, write a message level to `MGBA_LOG_SEND`. This makes the message available within the emulator's logs at that message level, and also implicitly zeroes the message buffer so that it's ready for the next message.
Structs
---

* MgbaBufferedLogger

Enums
---

* MgbaMessageLevel

Constants
---

* MGBA_LOGGING_ENABLE_REQUEST
* MGBA_LOGGING_ENABLE_RESPONSE

Functions
---

* mgba_logging_available: Returns if mGBA logging is possible.

Module gba::mmio
===

Contains all the MMIO address definitions for the GBA's components.

This module contains *only* the MMIO addresses. The data type definitions for each MMIO control value are stored in the appropriate other modules such as `video`, `interrupts`, etc.

In general, the docs for each address are quite short. If you want to understand how a subsystem of the GBA works, you should read the docs for that system's module, and the data type used by the address.

The GBATEK names (and thus mGBA names) are used for the MMIO addresses by default. However, in some cases (eg: sound) the GBATEK naming is excessively cryptic, and so new names have been created. Whenever a new name is used, the GBATEK name is still listed as a doc alias for that address. If necessary you can just search the GBATEK name in the rustdoc search bar and the search results will show you the new name.

### Safety

The MMIO declarations and wrapper types in this module **must not** be used outside of a GBA. The read and write safety of each address are declared assuming that code is running on a GBA. On any other platform, the declarations are simply incorrect.

Constants
---

* AFFINE0_SCREENBLOCKS: Affine screenblocks (size 0).
* AFFINE1_SCREENBLOCKS: Affine screenblocks (size 1).
* AFFINE2_SCREENBLOCKS: Affine screenblocks (size 2).
* AFFINE3_SCREENBLOCKS: Affine screenblocks (size 3).
* AFFINE_PARAM_A: Affine parameters A.
* AFFINE_PARAM_B: Affine parameters B.
* AFFINE_PARAM_C: Affine parameters C.
* AFFINE_PARAM_D: Affine parameters D.
* BACKDROP_COLOR: Color that's shown when no BG or OBJ draws to a pixel
* BG0CNT: Background 0 Control
* BG0HOFS: Background 0 Horizontal Offset (9-bit, text mode)
* BG0VOFS: Background 0 Vertical Offset (9-bit, text mode)
* BG1CNT: Background 1 Control
* BG1HOFS: Background 1 Horizontal Offset (9-bit, text mode)
* BG1VOFS: Background 1 Vertical Offset (9-bit, text mode)
* BG2CNT: Background 2 Control
* BG2HOFS: Background 2 Horizontal Offset (9-bit, text mode)
* BG2PA: Background 2 Param A (affine mode)
* BG2PB: Background 2 Param B (affine mode)
* BG2PC: Background 2 Param C (affine mode)
* BG2PD: Background 2 Param D (affine mode)
* BG2VOFS: Background 2 Vertical Offset (9-bit, text mode)
* BG2X: Background 2 X Reference Point (affine/bitmap modes)
* BG2Y: Background 2 Y Reference Point (affine/bitmap modes)
* BG3CNT: Background 3 Control
* BG3HOFS: Background 3 Horizontal Offset (9-bit, text mode)
* BG3PA: Background 3 Param A (affine mode)
* BG3PB: Background 3 Param B (affine mode)
* BG3PC: Background 3 Param C (affine mode)
* BG3PD: Background 3 Param D (affine mode)
* BG3VOFS: Background 3 Vertical Offset (9-bit, text mode)
* BG3X: Background 3 X Reference Point (affine/bitmap modes)
* BG3Y: Background 3 Y Reference Point (affine/bitmap modes)
* BG_PALETTE: Background tile palette entries.
* BG_TILE_REGION_SIZE: The size of the background tile region of VRAM.
* BLDALPHA: Sets EVA (low) and EVB (high) alpha blend coefficients, allows `0..=16`, in 1/16th units
* BLDCNT: Sets color blend effects
* BLDY: Sets EVY brightness blend coefficient, allows `0..=16`, in 1/16th units
* CHARBLOCK0_4BPP: Charblock 0, 4bpp view (512 tiles).
* CHARBLOCK0_8BPP: Charblock 0, 8bpp view (256 tiles).
* CHARBLOCK1_4BPP: Charblock 1, 4bpp view (512 tiles).
* CHARBLOCK1_8BPP: Charblock 1, 8bpp view (256 tiles).
* CHARBLOCK2_4BPP: Charblock 2, 4bpp view (512 tiles).
* CHARBLOCK2_8BPP: Charblock 2, 8bpp view (256 tiles).
* CHARBLOCK3_4BPP: Charblock 3, 4bpp view (512 tiles).
* CHARBLOCK3_8BPP: Charblock 3, 8bpp view (256 tiles).
* DISPCNT: Display Control
* DISPSTAT: Display Status
* DMA0_CONTROL: DMA0 Control Bits
* DMA0_COUNT: DMA0 Transfer Count (14-bit, 0=max)
* DMA0_DEST: DMA0 Destination Address (internal memory only)
* DMA0_SRC: DMA0 Source Address (internal memory only)
* DMA1_CONTROL: DMA1 Control Bits
* DMA1_COUNT: DMA1 Transfer Count (14-bit, 0=max)
* DMA1_DEST: DMA1 Destination Address (internal memory only)
* DMA1_SRC: DMA1 Source Address (non-SRAM memory)
* DMA2_CONTROL: DMA2 Control Bits
* DMA2_COUNT: DMA2 Transfer Count (14-bit, 0=max)
* DMA2_DEST: DMA2 Destination Address (internal memory only)
* DMA2_SRC: DMA2 Source Address (non-SRAM memory)
* DMA3_CONTROL: DMA3 Control Bits
* DMA3_COUNT: DMA3 Transfer Count (16-bit, 0=max)
* DMA3_DEST: DMA3 Destination Address (non-SRAM memory)
* DMA3_SRC: DMA3 Source Address (non-SRAM memory)
* FIFO_A: Pushes 4 `i8` samples into the Sound A buffer.
* FIFO_B: Pushes 4 `i8` samples into the Sound B buffer.
* IE: Interrupts Enabled: sets which interrupts will be accepted when a subsystem fires an interrupt
* IF: Interrupts Flagged: reads which interrupts are pending, writing bit(s) will clear a pending interrupt.
* IME: Interrupt Master Enable: Allows turning on/off all interrupts with a single access.
* IO_PORT_CONTROL: I/O port control
* IO_PORT_DATA: I/O port data
* IO_PORT_DIRECTION: I/O port direction
* JOYCNT
* JOYSTAT
* JOY_RECV
* JOY_TRANS
* KEYCNT: Key control to configure the key interrupt.
* KEYINPUT: Key state data.
* LEFT_RIGHT_VOLUME: Left/Right sound control (but GBAs only have one speaker each).
* MGBA_LOG_BUFFER: The buffer to put logging messages into.
* MGBA_LOG_ENABLE: Allows you to attempt to activate mGBA logging.
* MGBA_LOG_SEND: Write to this each time you want to reset a message (it also resets the buffer).
* MOSAIC: Sets the intensity of all mosaic effects
* NOISE_FREQ: Noise Frequency/Control
* NOISE_LEN_ENV: Noise Length/Envelope
* OBJ_ATTR0: Object attributes 0.
* OBJ_ATTR1: Object attributes 1.
* OBJ_ATTR2: Object attributes 2.
* OBJ_ATTR_ALL: Object attributes (all in one).
* OBJ_PALETTE: Object tile palette entries.
* OBJ_TILES: Object tiles. In video modes 3, 4, and 5 only indices 512..=1023 are available.
* RCNT
* SCREENBLOCK_INDEX_OFFSET: The VRAM byte offset per screenblock index.
* SIOCNT
* SIODATA8
* SIODATA32
* SIOMLT_SEND
* SIOMULTI0
* SIOMULTI1
* SIOMULTI2
* SIOMULTI3
* SOUNDBIAS: Provides a bias to set the 'middle point' of sound output.
* SOUND_ENABLED: Sound active flags (r), as well as the sound primary enable (rw).
* SOUND_MIX: Mixes sound sources out to the left and right
* TEXT_SCREENBLOCKS: Text mode screenblocks.
* TIMER0_CONTROL: Timer 0 control
* TIMER0_COUNT: Timer 0 Count read
* TIMER0_RELOAD: Timer 0 Reload write
* TIMER1_CONTROL: Timer 1 control
* TIMER1_COUNT: Timer 1 Count read
* TIMER1_RELOAD: Timer 1 Reload write
* TIMER2_CONTROL: Timer 2 control
* TIMER2_COUNT: Timer 2 Count read
* TIMER2_RELOAD: Timer 2 Reload write
* TIMER3_CONTROL: Timer 3 control
* TIMER3_COUNT: Timer 3 Count read
* TIMER3_RELOAD: Timer 3 Reload write
* TONE1_FREQUENCY: Tone 1 Frequency/Control
* TONE1_PATTERN: Tone 1 Duty/Len/Envelope
* TONE1_SWEEP: Tone 1 Sweep
* TONE2_FREQUENCY: Tone 2 Frequency/Control
* TONE2_PATTERN: Tone 2 Duty/Len/Envelope
* VCOUNT: Vertical Counter
* VIDEO3_VRAM: Video mode 3 bitmap
* VIDEO4_VRAM: Video mode 4 palette maps (frames 0 and 1). Each entry is two palette indexes.
* VIDEO5_VRAM: Video mode 5 bitmaps (frames 0 and 1).
* WAITCNT: Wait state control for interfacing with the ROM.
* WAVE_BANK: Wave banking controls
* WAVE_FREQ: Wave Frequency/Control
* WAVE_LEN_VOLUME: Wave Length/Volume
* WAVE_RAM: Wave memory, `u4`, plays MSB/LSB per byte.
* WIN0H: Window 0 Horizontal: high=left, low=(right+1)
* WIN0V: Window 0 Vertical: high=top, low=(bottom+1)
* WIN1H: Window 1 Horizontal: high=left, low=(right+1)
* WIN1V: Window 1 Vertical: high=top, low=(bottom+1)
* WININ: Controls the inside of Windows 0 and 1
* WINOUT: Controls inside the object window and outside of windows
Functions
---
* bg_palbank
* obj_palbank
Module gba::prelude
===
A module that just re-exports all the other modules of the crate.
Re-exports
---
* `pub use crate::Align4;`
* `pub use crate::asm_runtime::*;`
* `pub use crate::bios::*;`
* `pub use crate::builtin_art::*;`
* `pub use crate::dma::*;`
* `pub use crate::fixed::*;`
* `pub use crate::gba_cell::*;`
* `pub use crate::interrupts::*;`
* `pub use crate::keys::*;`
* `pub use crate::mgba::*;`
* `pub use crate::mmio::*;`
* `pub use crate::sound::*;`
* `pub use crate::timers::*;`
* `pub use crate::video::obj::*;`
* `pub use crate::video::*;`
Macros
---
* include_aligned_bytes: Works like [`include_bytes!`], but the value is wrapped in `Align4`.
Module gba::timers
===
Module to interface with the GBA’s four timer units.
Similar to the background layers and DMA units, there are four timer units and they’re numbered 0 through 3. There are two hardware addresses that control each timer.
* The timer’s high address is the `TimerControl` bits.
* The timer’s low address is a `u16` which *reads* the timer’s “count” value, but *writes* the timer’s “reload” value. In this crate we actually represent that as two separate MMIO controls for improved code clarity. Just be aware that in mGBA’s debugger and in other documentation you’ll see it as a single address.
When a timer is disabled, it will continue to read the count value that it stopped at.
### Reloading
When the timer goes from disabled to enabled, or when the timer overflows, the last set reload value is copied to the counter value.
### Ticking
When a timer is enabled, the timer will tick every so often. Each tick increases the counter value by 1. The rate at which the timer ticks depends on the timer’s configuration:
* If the `cascade` bit is set the timer will tick once per overflow of the next lower timer. For example, if timer 3 is set to cascade, it will tick once per overflow of timer 2. Note that timer 0 ignores the cascade bit, since it doesn’t have a “next lower” timer.
* Otherwise, the timer ticks every one or more CPU cycles, according to the `TimerScale` set in the `scale` field.
### Overflows
When a timer would tick *above* `u16::MAX` then an overflow occurs. This can trigger an interrupt, and will also cause the timer to copy its reload value into its counter. If you want a timer to overflow every `x` ticks (where `x` is non-zero), then use the `wrapping_neg` method to easily get the right reload value to set:
```
let x = 7_u16;
TIMER0_RELOAD.write(x.wrapping_neg());
```
### Using Cascade To Pause A Timer
When a timer goes from disabled to enabled it will reset the counter value to the reload value. If you want to temporarily pause a timer *without* having the counter value get reset when you resume the timer, you can instead set the `cascade` bit of the timer while the next lower timer is **disabled**. This keeps the timer “active” but prevents it from ticking. When you turn off cascade mode the timer will resume ticking from the current counter value. Note that this doesn’t work for timer 0, because that timer ignores the cascade bit.
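A rough sketch of the pause trick follows. The builder method names (`with_enabled`, `with_cascade`) are assumptions based on the field names mentioned above, not confirmed API; check `TimerControl`’s docs for the real names.
```
use gba::prelude::*;

// Sketch under assumptions: `TimerControl` is assumed to expose builder-style
// setters named after its `enabled` and `cascade` fields.
fn pause_timer1() {
  // With timer 0 disabled, cascade mode keeps timer 1 enabled (preserving
  // its counter) while preventing it from ever receiving a tick.
  TIMER1_CONTROL.write(TimerControl::new().with_enabled(true).with_cascade(true));
}

fn resume_timer1() {
  // Clearing the cascade bit resumes ticking from the preserved counter value.
  TIMER1_CONTROL.write(TimerControl::new().with_enabled(true));
}
```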
Structs
---
* TimerControl: Timer configuration bits.
Enums
---
* TimerScale: A number of CPU cycles per timer tick.
Module gba::video
===
Module to control the GBA’s screen.
Video Basics
---
To configure the screen’s display, you should first decide on the `DisplayControl` value that you want to set to the `DISPCNT` register. This configures several things, but most importantly it determines the `VideoMode` for the display to use.
The GBA has four Background layers. Depending on the current video mode, different background layers will be available for use in either “text”, “affine”, or “bitmap” mode.
In addition to the background layers, there’s also an “OBJ” layer. This allows the display of a number of “objects”, which can move independently of any background. Generally, one or more objects will be used to display the “sprites” within a game. Because there isn’t an exact 1:1 mapping between sprites and objects, these docs will attempt to only talk about objects.
### Color, Bit Depth, and Palettes
Color values on the GBA are 5-bits-per-channel RGB values. They’re always bit-packed and aligned to 2, so think of them as being like a `u16`.
Because of the GBA’s limited memory, most images don’t use direct color (one color per pixel). Instead they use indexed color (one *palette index* per pixel). Indexed image data can be 4-bits-per-pixel (4bpp) or 8-bits-per-pixel (8bpp). In either case, the color values themselves are stored in the PALRAM region. The PALRAM contains the `BG_PALETTE` and `OBJ_PALETTE`, which hold the color values for backgrounds and objects respectively. Both palettes have 256 slots.
The palettes are always indexed with 8 bits total, but *how* those bits are determined depends on the bit depth of the image:
* Things drawing with 8bpp image data index into the full range of the palette directly.
* Things drawing with 4bpp image data will also have a “palbank” setting. The palbank acts as the upper 4 bits of the index, selecting which block of 16 palette entries that thing will be able to use. Then each 4-bit pixel within the image indexes within the palbank, as sketched below.
In both 8bpp and 4bpp modes, if a particular pixel’s index value is 0 then that pixel is instead considered transparent. So 8bpp images can use 255 colors (+ transparent), and 4bpp images can use 15 colors (+ transparent). Each background layer and each object can individually be set to display with either 4bpp or 8bpp mode.
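For illustration, the full 8-bit palette index for a 4bpp pixel can be computed like this (a hypothetical helper for illustration, not an item of this crate):
```
/// Hypothetical helper: forms the full 8-bit palette index for a 4bpp pixel.
/// `palbank` is the 4-bit bank setting; `pixel` is the 4-bit value from the
/// tile data. A `pixel` of 0 still means "transparent", regardless of bank.
const fn full_palette_index(palbank: u8, pixel: u8) -> u8 {
  ((palbank & 0b1111) << 4) | (pixel & 0b1111)
}
```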
### Tiles, Screenblocks, and Charblocks
The basic unit of the GBA’s hardware graphics support is a “tile”. Regardless of their bit depth, a tile is always an 8x8 area. This means that they’re either 32 bytes (4bpp) or 64 bytes (8bpp).
Since VRAM starts aligned to 4, and since both size tiles are a multiple of 4 bytes in size, we model tile data as being arrays of `u32` rather than arrays of `u8`. Having the data stay aligned to 4 within the ROM gives a significant speed gain when copying tiles from ROM into VRAM.
The layout of tiles within a background is defined by a “screenblock”.
* Text backgrounds use a fixed 32x32 size screenblock, with larger backgrounds using more than one screenblock. Each TextEntry value in the screenblock has a tile index (10-bit), bits for horizontal flip and vertical flip, and a palbank value. If the background is not in 4bpp mode the palbank value is simply ignored.
* Affine backgrounds always have a single screenblock each, and the size of the screenblock itself changes with the background’s size (from 16x16 to 128x128, in powers of 2). Each entry in an affine screenblock is just a `u8` tile index, with no special options. Affine backgrounds can’t use 4bpp color, and they also can’t flip tiles on a per-tile basis.
A background’s screenblock is selected by an index (5-bit). The indexes go in 2,048 byte (2k) jumps. This is exactly the size of a text screenblock, but doesn’t precisely match the size of any of the affine screenblocks.
Because tile indexes can only be so large, there are also “charblocks”. This offsets all of the tile index values that the background uses, allowing you to make better use of all of the VRAM. The charblock value provides a 16,384 byte (16k) offset, and can be in the range `0..=3`.
### Priority
When more than one thing would be drawn to the same pixel, there’s a priority system that determines which pixel is actually drawn.
* Priority values are always 2-bit, the range `0..=3`. The priority acts like a sorting index, or you could also think of it as the distance from the viewer. Things with a *lower* priority number are *closer* to the viewer, and so they’ll be what’s drawn.
* Objects always draw on top of a same-priority background.
* Lower indexed objects get drawn when two objects have the same priority.
* Lower numbered backgrounds get drawn when two backgrounds have the same priority.
There’s also one hardware bug that can occur: when there are two objects whose priority and index wouldn’t sort them the same way (eg: a lower index number object has a higher priority number), and a background is *also* between the two objects, then the object that’s supposed to be behind the background will instead appear through the background where the two objects overlap. This might never happen to you, but if it does, the “fix” is to sort your object entries so that any lower priority objects are also the lower index objects.
Modules
---
* obj: Module for object (OBJ) entry data.
Structs
---
* BackgroundControl
* BlendControl
* Color: An RGB555 color value (packed into `u16`).
* DisplayControl
* DisplayStatus
* Mosaic
* TextEntry: An entry within a tile mode tilemap.
* WindowInside
* WindowOutside
Enums
---
* ColorEffectMode
* VideoMode: The video mode controls how each background layer will operate.
Type Definitions
---
* Tile4: Data for a 4-bit-per-pixel tile.
* Tile8: Data for an 8-bit-per-pixel tile.
Macro gba::include_aligned_bytes
===
```
macro_rules! include_aligned_bytes {
    ($file:expr $(,)?) => { ... };
}
```
Works like [`include_bytes!`], but the value is wrapped in `Align4`.
Struct gba::Align4
===
```
#[repr(C, align(4))]
pub struct Align4<T>(pub T);
```
Wraps a value to be aligned to a minimum of 4.
If the size of the value held is already a multiple of 4 then this will be the same size as the wrapped value. Otherwise the compiler will add sufficient padding bytes on the end to make the size a multiple of 4.
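A small usage sketch tying the macro and the wrapper together (the asset path is hypothetical, and the array length must match the file’s byte size; 64 bytes here, i.e. one 8bpp tile):
```
use gba::{include_aligned_bytes, Align4};

// Hypothetical 64-byte asset file (one 8bpp tile). The macro wraps the
// bytes in `Align4`, so the data starts on a 4-byte boundary in ROM.
static ONE_TILE: Align4<[u8; 64]> = include_aligned_bytes!("assets/one_tile.bin");

fn tile_as_words() -> &'static [u32] {
  // View the aligned bytes as `u32` words, the faster unit for copying
  // tile data from ROM into VRAM (panics if the length isn't a multiple of 4).
  ONE_TILE.as_u32_slice()
}
```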
Tuple Fields
---
`0: T`
Implementations
---
### impl<const N: usize> Align4<[u8; N]>
#### pub fn as_u32_slice(&self) -> &[u32]
Views these bytes as a slice of `u32`
###### Panics
* If the number of bytes isn’t a multiple of 4
#### pub fn as_u16_slice(&self) -> &[u16]
Views these bytes as a slice of `u16`
###### Panics
* If the number of bytes isn’t a multiple of 2
Auto Trait Implementations
---
### impl<T> RefUnwindSafe for Align4<T> where T: RefUnwindSafe,
### impl<T> Send for Align4<T> where T: Send,
### impl<T> Sync for Align4<T> where T: Sync,
### impl<T> Unpin for Align4<T> where T: Unpin,
### impl<T> UnwindSafe for Align4<T> where T: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
spinners-jdxcode
rust
Rust
Struct spinners_jdxcode::Spinner
===
```
pub struct Spinner { /* private fields */ }
```
Implementations
---
### impl Spinner
#### pub fn new(spinner: Spinners, message: String) -> Self
Create a new spinner along with a message
##### Examples
Basic Usage:
```
use spinners::{Spinner, Spinners};

let sp = Spinner::new(Spinners::Dots, "Loading things into memory...".into());
```
No Message:
```
use spinners::{Spinner, Spinners};

let sp = Spinner::new(Spinners::Dots, String::new());
```
#### pub fn with_timer(spinner: Spinners, message: String) -> Self
Create a new spinner that logs the time since it was created
#### pub fn with_stream(spinner: Spinners, message: String, stream: Stream) -> Self
Creates a new spinner along with a message, with a specified output stream
##### Examples
Basic Usage:
```
use spinners::{Spinner, Spinners, Stream};

let sp = Spinner::with_stream(Spinners::Dots, String::new(), Stream::Stderr);
```
#### pub fn with_timer_and_stream(spinner: Spinners, message: String, stream: Stream) -> Self
Creates a new spinner that logs the time since it was created, with a specified output stream
##### Examples
Basic Usage:
```
use spinners::{Spinner, Spinners, Stream};

let sp = Spinner::with_timer_and_stream(Spinners::Dots, String::new(), Stream::Stderr);
```
#### pub fn stop(&mut self)
Stops the spinner
Stops the spinner that was created with the `Spinner::new` function. Optionally call `stop_with_newline` to print a newline after the spinner is stopped, or the `stop_with_message` function to print a message after the spinner is stopped.
##### Examples
Basic Usage:
```
use spinners::{Spinner, Spinners};

let mut sp = Spinner::new(Spinners::Dots, "Loading things into memory...".into());
sp.stop();
```
#### pub fn stop_with_symbol(&mut self, symbol: &str)
Stops the spinner with a symbol that replaces it
The symbol is a `String` rather than a `char` to allow for more flexibility, such as using ANSI color codes.
##### Examples
Basic Usage:
```
use spinners::{Spinner, Spinners};

let mut sp = Spinner::new(Spinners::Dots, "Loading things into memory...".into());
sp.stop_with_symbol("🗸");
```
ANSI colors (green checkmark):
```
use spinners::{Spinner, Spinners};

let mut sp = Spinner::new(Spinners::Dots, "Loading things into memory...".into());
sp.stop_with_symbol("\x1b[32m🗸\x1b[0m");
```
#### pub fn stop_with_newline(&mut self)
Stops the spinner and prints a new line
##### Examples
Basic Usage:
```
use spinners::{Spinner, Spinners};

let mut sp = Spinner::new(Spinners::Dots, "Loading things into memory...".into());
sp.stop_with_newline();
```
#### pub fn stop_with_message(&mut self, msg: String)
Stops the spinner and prints the provided message
##### Examples
Basic Usage:
```
use spinners::{Spinner, Spinners};

let mut sp = Spinner::new(Spinners::Dots, "Loading things into memory...".into());
sp.stop_with_message("Finished loading things into memory!".into());
```
#### pub fn stop_and_persist(&mut self, symbol: &str, msg: String)
Stops the spinner with a provided symbol and message
##### Examples
Basic Usage:
```
use spinners::{Spinner, Spinners};

let mut sp = Spinner::new(Spinners::Dots, "Loading things into memory...".into());
sp.stop_and_persist("✔", "Finished loading things into memory!".into());
```
Trait Implementations
---
### impl Drop for Spinner
#### fn drop(&mut self)
Executes the destructor for this type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Spinner
### impl Send for Spinner
### impl !Sync for Spinner
### impl Unpin for Spinner
### impl !UnwindSafe for Spinner
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Enum spinners_jdxcode::Spinners
===
```
pub enum Spinners { Dots, Dots2, Dots3, Dots4, Dots5, Dots6, Dots7, Dots8, Dots9, Dots10, Dots11, Dots12, Dots8Bit, Line, Line2, Pipe, SimpleDots, SimpleDotsScrolling, Star, Star2, Flip, Hamburger, GrowVertical, GrowHorizontal, Balloon, Balloon2, Noise, Bounce, BoxBounce, BoxBounce2, Triangle, Arc, Circle, SquareCorners, CircleQuarters, CircleHalves, Squish, Toggle, Toggle2, Toggle3, Toggle4, Toggle5, Toggle6, Toggle7, Toggle8, Toggle9, Toggle10, Toggle11, Toggle12, Toggle13, Arrow, Arrow2, Arrow3, BouncingBar, BouncingBall, Smiley, Monkey, Hearts, Clock, Earth, Material, Moon, Runner, Pong, Shark, Dqpb, Weather, Christmas, Grenade, Point, Layer, BetaWave, FingerDance, FistBump, SoccerHeader, Mindblown, Speaker, OrangePulse, BluePulse, OrangeBluePulse, TimeTravel, Aesthetic, }
```
Variants
---
### Dots
### Dots2
### Dots3
### Dots4
### Dots5
### Dots6
### Dots7
### Dots8
### Dots9
### Dots10
### Dots11
### Dots12
### Dots8Bit
### Line
### Line2
### Pipe
### SimpleDots
### SimpleDotsScrolling
### Star
### Star2
### Flip
### Hamburger
### GrowVertical
### GrowHorizontal
### Balloon
### Balloon2
### Noise
### Bounce
### BoxBounce
### BoxBounce2
### Triangle
### Arc
### Circle
### SquareCorners
### CircleQuarters
### CircleHalves
### Squish
### Toggle
### Toggle2
### Toggle3
### Toggle4
### Toggle5
### Toggle6
### Toggle7
### Toggle8
### Toggle9
### Toggle10
### Toggle11
### Toggle12
### Toggle13
### Arrow
### Arrow2
### Arrow3
### BouncingBar
### BouncingBall
### Smiley
### Monkey
### Hearts
### Clock
### Earth
### Material
### Moon
### Runner
### Pong
### Shark
### Dqpb
### Weather
### Christmas
### Grenade
### Point
### Layer
### BetaWave
### FingerDance
### FistBump
### SoccerHeader
### Mindblown
### Speaker
### OrangePulse
### BluePulse
### OrangeBluePulse
### TimeTravel
### Aesthetic
Trait Implementations
---
### impl Clone for SpinnerNames
#### fn clone(&self) -> SpinnerNames
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for SpinnerNames
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for SpinnerNames
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl FromStr for SpinnerNames
#### type Err = ParseError
The associated error which can be returned from parsing.
#### fn from_str(s: &str) -> Result<SpinnerNames, <Self as FromStr>::Err>
Parses a string `s` to return a value of this type.
### impl IntoEnumIterator for SpinnerNames
#### type Iterator = SpinnerNamesIter
#### fn iter() -> SpinnerNamesIter
### impl TryFrom<&str> for SpinnerNames
#### type Error = ParseError
The type returned in the event of a conversion error.
#### fn try_from(s: &str) -> Result<SpinnerNames, <Self as TryFrom<&str>>::Error>
Performs the conversion.
Auto Trait Implementations
---
### impl RefUnwindSafe for SpinnerNames
### impl Send for SpinnerNames
### impl Sync for SpinnerNames
### impl Unpin for SpinnerNames
### impl UnwindSafe for SpinnerNames
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Enum spinners_jdxcode::Stream
===
```
pub enum Stream {
    Stderr,
    Stdout,
}
```
Handles the printing logic for the Spinner
Variants
---
### Stderr
### Stdout
Implementations
---
### impl Stream
#### pub fn write(&self, frame: &str, message: &str, start_time: Option<Instant>, stop_time: Option<Instant>) -> Result<()>
Writes the current message and optionally prints the durations
#### pub fn stop(&self, message: Option<&str>, symbol: Option<&str>) -> Result<()>
Handles the stopping logic given an optional message and symbol
Trait Implementations
---
### impl Clone for Stream
#### fn clone(&self) -> Stream
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Default for Stream
#### fn default() -> Stream
Returns the “default value” for a type.
Auto Trait Implementations
---
### impl RefUnwindSafe for Stream
### impl Send for Stream
### impl Sync for Stream
### impl Unpin for Stream
### impl UnwindSafe for Stream
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
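Putting the crate’s pieces together, a minimal end-to-end sketch (the sleep merely stands in for real work):
```
use std::{thread, time::Duration};

use spinners::{Spinner, Spinners, Stream};

fn main() {
    // Spin on stderr so stdout stays clean for real program output.
    let mut sp = Spinner::with_stream(
        Spinners::Dots,
        "Loading things into memory...".into(),
        Stream::Stderr,
    );
    thread::sleep(Duration::from_secs(2)); // stand-in for real work
    // Replace the spinner with a symbol and a final message.
    sp.stop_and_persist("✔", "Finished loading things into memory!".into());
}
```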
react-native-applinks
npm
JavaScript
React Native App Links
===
React Native App Links is a JavaScript library for [React Native](https://facebook.github.io/react-native/) that implements the [App Links protocol](http://www.applinks.org), helping you link to content in other apps and handle incoming deep links.
App Links protocol documentation is available at applinks.org: [app links navigation protocol](http://applinks.org/documentation/#applinknavigationprotocol)
Examples
---
### Handle incoming deep link
```
var React = require('react-native');
var { LinkingIOS } = React;
var AppLinkURL = require('react-native-applinks').AppLinkURL;

var MyApp = React.createClass({
  componentDidMount: function() {
    LinkingIOS.addEventListener('url', this._handleOpenURL);
    var url = LinkingIOS.popInitialURL();
    if (url) { this._handleOpenURL({url: url}); }
  },
  componentWillUnmount: function() {
    LinkingIOS.removeEventListener('url', this._handleOpenURL);
  },
  _handleOpenURL: function(event) {
    var alUrl = new AppLinkURL(event.url);
    // Work with alUrl.appLinkData. For example, render a back link to the referer app.
    var backLink = null;
    var refererAL = alUrl.appLink.referer_app_link;
    // If a referer app link was provided we can construct a back button with text.
    // Note the parentheses: without them, `+` binds tighter than `?:`.
    if (refererAL) {
      backLink = {
        url: refererAL.url,
        text: 'Back' + (refererAL.app_name ? ' to ' + refererAL.app_name : '')
      };
    }
  }
});
```
### Handle outgoing link
To fetch app link data for a web URL you use the AppLinkNavigation and AppLinkResolver classes. The React Native App Links library provides two implementations of AppLinkResolver:
* **IndexAPIAppLinkResolver** - gets app link data by querying Facebook's Index API. Read more at [Finding App Link Data with the Index API](https://developers.facebook.com/docs/applinks/index-api).
* **NativeAppLinkResolver** - downloads and parses the HTML content of a given web URL, scanning for 'al:' meta tags and creating an app links object.
```
var React = require('react-native');
var { LinkingIOS } = React;
var AppLinks = require('react-native-applinks');
var { AppLinkNavigation, IndexAPIAppLinkResolver, NativeAppLinkResolver } = AppLinks;

var MyApp = React.createClass({
  _openOutgoingWebUrl(weburl) {
    var fbResolver = new IndexAPIAppLinkResolver('your_app_facebook_token');
    // var nativeResolver = new NativeAppLinkResolver();
    var alNavigation = new AppLinkNavigation(
      fbResolver, // nativeResolver,
      { target_url: 'http://myapp.com', url: 'myapp://home', app_name: 'My App' },
      'iphone'
    );
    // Fetch the best possible deep link from the web URL's app link data and open it using LinkingIOS.
    alNavigation.fetchUrlFromWebUrl(weburl, LinkingIOS.openURL, (err) => { /* handle error */ });
  }
});
```
Requirements
---
React Native App Links requires or works with:
* React Native
Installing React Native App Links
---
**npm install react-native-applinks**
License
---
React Native App Links is BSD-licensed. We also provide an additional patent grant.
Readme
---
### Keywords
* react
* applinks
* react native
* react native applinks
github.com/openshift/origin/pkg/image/api
go
Go
Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) Package api is the internal version of the API. ### Index [¶](#pkg-index) * [Constants](#pkg-constants) * [Variables](#pkg-variables) * [func AddTagEventToImageStream(stream *ImageStream, tag string, next TagEvent) bool](#AddTagEventToImageStream) * [func DeepCopy_api_Descriptor(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_Descriptor) * [func DeepCopy_api_DockerConfig(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_DockerConfig) * [func DeepCopy_api_DockerConfigHistory(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_DockerConfigHistory) * [func DeepCopy_api_DockerConfigRootFS(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_DockerConfigRootFS) * [func DeepCopy_api_DockerFSLayer(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_DockerFSLayer) * [func DeepCopy_api_DockerHistory(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_DockerHistory) * [func DeepCopy_api_DockerImage(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_DockerImage) * [func DeepCopy_api_DockerImageConfig(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_DockerImageConfig) * [func DeepCopy_api_DockerImageManifest(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_DockerImageManifest) * [func DeepCopy_api_DockerImageReference(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_DockerImageReference) * [func DeepCopy_api_DockerV1CompatibilityImage(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_DockerV1CompatibilityImage) * [func DeepCopy_api_DockerV1CompatibilityImageSize(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_DockerV1CompatibilityImageSize) * [func DeepCopy_api_Image(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_Image) * [func DeepCopy_api_ImageImportSpec(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageImportSpec) * [func DeepCopy_api_ImageImportStatus(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageImportStatus) * [func DeepCopy_api_ImageLayer(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageLayer) * [func DeepCopy_api_ImageList(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageList) * [func DeepCopy_api_ImageSignature(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageSignature) * [func DeepCopy_api_ImageStream(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageStream) * [func DeepCopy_api_ImageStreamImage(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageStreamImage) * [func DeepCopy_api_ImageStreamImport(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageStreamImport) * [func DeepCopy_api_ImageStreamImportSpec(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageStreamImportSpec) * [func DeepCopy_api_ImageStreamImportStatus(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageStreamImportStatus) * [func DeepCopy_api_ImageStreamList(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageStreamList) * [func DeepCopy_api_ImageStreamMapping(in interface{}, out
interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageStreamMapping) * [func DeepCopy_api_ImageStreamSpec(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageStreamSpec) * [func DeepCopy_api_ImageStreamStatus(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageStreamStatus) * [func DeepCopy_api_ImageStreamTag(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageStreamTag) * [func DeepCopy_api_ImageStreamTagList(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_ImageStreamTagList) * [func DeepCopy_api_RepositoryImportSpec(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_RepositoryImportSpec) * [func DeepCopy_api_RepositoryImportStatus(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_RepositoryImportStatus) * [func DeepCopy_api_SignatureCondition(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_SignatureCondition) * [func DeepCopy_api_SignatureGenericEntity(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_SignatureGenericEntity) * [func DeepCopy_api_SignatureIssuer(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_SignatureIssuer) * [func DeepCopy_api_SignatureSubject(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_SignatureSubject) * [func DeepCopy_api_TagEvent(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_TagEvent) * [func DeepCopy_api_TagEventCondition(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_TagEventCondition) * [func DeepCopy_api_TagEventList(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_TagEventList) * [func DeepCopy_api_TagImportPolicy(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_TagImportPolicy) * [func DeepCopy_api_TagReference(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_TagReference) * [func DeepCopy_api_TagReferencePolicy(in interface{}, out interface{}, c *conversion.Cloner) error](#DeepCopy_api_TagReferencePolicy) * [func DifferentTagEvent(stream *ImageStream, tag string, next TagEvent) bool](#DifferentTagEvent) * [func DifferentTagGeneration(stream *ImageStream, tag string) bool](#DifferentTagGeneration) * [func DockerImageReferenceForImage(stream *ImageStream, imageID string) (string, bool)](#DockerImageReferenceForImage) * [func HasTagCondition(stream *ImageStream, tag string, condition TagEventCondition) bool](#HasTagCondition) * [func ImageConfigMatchesImage(image *Image, imageConfig []byte) (bool, error)](#ImageConfigMatchesImage) * [func ImageStreamToSelectableFields(ir *ImageStream) fields.Set](#ImageStreamToSelectableFields) * [func ImageToSelectableFields(image *Image) fields.Set](#ImageToSelectableFields) * [func ImageWithMetadata(image *Image) error](#ImageWithMetadata) * [func IndexOfImageSignature(signatures []ImageSignature, sType string, sContent []byte) int](#IndexOfImageSignature) * [func IndexOfImageSignatureByName(signatures []ImageSignature, name string) int](#IndexOfImageSignatureByName) * [func IsRegistryDockerHub(registry string) bool](#IsRegistryDockerHub) * [func JoinImageSignatureName(imageName, signatureName string) (string, error)](#JoinImageSignatureName) * [func JoinImageStreamTag(name, tag string) string](#JoinImageStreamTag) * [func Kind(kind string) unversioned.GroupKind](#Kind) * [func LabelForStream(stream 
*ImageStream) string](#LabelForStream) * [func LatestObservedTagGeneration(stream *ImageStream, tag string) int64](#LatestObservedTagGeneration) * [func MakeImageStreamImageName(name, id string) string](#MakeImageStreamImageName) * [func ManifestMatchesImage(image *Image, newManifest []byte) (bool, error)](#ManifestMatchesImage) * [func MostAccuratePullSpec(pullSpec string, id, tag string) (string, bool)](#MostAccuratePullSpec) * [func NormalizeImageStreamTag(name string) string](#NormalizeImageStreamTag) * [func ParseImageStreamImageName(input string) (name string, id string, err error)](#ParseImageStreamImageName) * [func ParseImageStreamTagName(istag string) (name string, tag string, err error)](#ParseImageStreamTagName) * [func PrioritizeTags(tags []string)](#PrioritizeTags) * [func RegisterDeepCopies(scheme *runtime.Scheme) error](#RegisterDeepCopies) * [func ResolveLatestTaggedImage(stream *ImageStream, tag string) (string, bool)](#ResolveLatestTaggedImage) * [func Resource(resource string) unversioned.GroupResource](#Resource) * [func SetTagConditions(stream *ImageStream, tag string, conditions ...TagEventCondition)](#SetTagConditions) * [func ShortDockerImageID(image *DockerImage, length int) string](#ShortDockerImageID) * [func SortStatusTags(tags map[string]TagEventList) []string](#SortStatusTags) * [func SplitImageSignatureName(imageSignatureName string) (imageName, signatureName string, err error)](#SplitImageSignatureName) * [func SplitImageStreamImage(nameAndID string) (name string, id string, ok bool)](#SplitImageStreamImage) * [func SplitImageStreamTag(nameAndTag string) (name string, tag string, ok bool)](#SplitImageStreamTag) * [func UpdateChangedTrackingTags(new, old *ImageStream) int](#UpdateChangedTrackingTags) * [func UpdateTrackingTags(stream *ImageStream, updatedTag string, updatedImage TagEvent) int](#UpdateTrackingTags) * [type DefaultRegistry](#DefaultRegistry) * [type DefaultRegistryFunc](#DefaultRegistryFunc) * + [func (fn DefaultRegistryFunc) DefaultRegistry() (string, bool)](#DefaultRegistryFunc.DefaultRegistry) * [type Descriptor](#Descriptor) * [type DockerConfig](#DockerConfig) * [type DockerConfigHistory](#DockerConfigHistory) * [type DockerConfigRootFS](#DockerConfigRootFS) * [type DockerFSLayer](#DockerFSLayer) * [type DockerHistory](#DockerHistory) * [type DockerImage](#DockerImage) * + [func (obj *DockerImage) GetObjectKind() unversioned.ObjectKind](#DockerImage.GetObjectKind) * [type DockerImageConfig](#DockerImageConfig) * [type DockerImageManifest](#DockerImageManifest) * [type DockerImageReference](#DockerImageReference) * + [func DockerImageReferenceForStream(stream *ImageStream) (DockerImageReference, error)](#DockerImageReferenceForStream) + [func ParseDockerImageReference(spec string) (DockerImageReference, error)](#ParseDockerImageReference) * + [func (r DockerImageReference) AsRepository() DockerImageReference](#DockerImageReference.AsRepository) + [func (r DockerImageReference) AsV2() DockerImageReference](#DockerImageReference.AsV2) + [func (r DockerImageReference) DaemonMinimal() DockerImageReference](#DockerImageReference.DaemonMinimal) + [func (r DockerImageReference) DockerClientDefaults() DockerImageReference](#DockerImageReference.DockerClientDefaults) + [func (r DockerImageReference) Equal(other DockerImageReference) bool](#DockerImageReference.Equal) + [func (r DockerImageReference) Exact() string](#DockerImageReference.Exact) + [func (r DockerImageReference) Minimal() DockerImageReference](#DockerImageReference.Minimal) + [func (r 
DockerImageReference) MostSpecific() DockerImageReference](#DockerImageReference.MostSpecific) + [func (r DockerImageReference) NameString() string](#DockerImageReference.NameString) + [func (r DockerImageReference) RegistryURL() *url.URL](#DockerImageReference.RegistryURL) + [func (r DockerImageReference) RepositoryName() string](#DockerImageReference.RepositoryName) + [func (r DockerImageReference) String() string](#DockerImageReference.String) * [type DockerV1CompatibilityImage](#DockerV1CompatibilityImage) * [type DockerV1CompatibilityImageSize](#DockerV1CompatibilityImageSize) * [type Image](#Image) * + [func (obj *Image) GetObjectKind() unversioned.ObjectKind](#Image.GetObjectKind) * [type ImageImportSpec](#ImageImportSpec) * [type ImageImportStatus](#ImageImportStatus) * [type ImageLayer](#ImageLayer) * [type ImageList](#ImageList) * + [func (obj *ImageList) GetObjectKind() unversioned.ObjectKind](#ImageList.GetObjectKind) * [type ImageSignature](#ImageSignature) * + [func (obj *ImageSignature) GetObjectKind() unversioned.ObjectKind](#ImageSignature.GetObjectKind) * [type ImageStream](#ImageStream) * + [func (obj *ImageStream) GetObjectKind() unversioned.ObjectKind](#ImageStream.GetObjectKind) * [type ImageStreamImage](#ImageStreamImage) * + [func (obj *ImageStreamImage) GetObjectKind() unversioned.ObjectKind](#ImageStreamImage.GetObjectKind) * [type ImageStreamImport](#ImageStreamImport) * + [func (obj *ImageStreamImport) GetObjectKind() unversioned.ObjectKind](#ImageStreamImport.GetObjectKind) * [type ImageStreamImportSpec](#ImageStreamImportSpec) * [type ImageStreamImportStatus](#ImageStreamImportStatus) * [type ImageStreamList](#ImageStreamList) * + [func (obj *ImageStreamList) GetObjectKind() unversioned.ObjectKind](#ImageStreamList.GetObjectKind) * [type ImageStreamMapping](#ImageStreamMapping) * + [func (obj *ImageStreamMapping) GetObjectKind() unversioned.ObjectKind](#ImageStreamMapping.GetObjectKind) * [type ImageStreamSpec](#ImageStreamSpec) * [type ImageStreamStatus](#ImageStreamStatus) * [type ImageStreamTag](#ImageStreamTag) * + [func (obj *ImageStreamTag) GetObjectKind() unversioned.ObjectKind](#ImageStreamTag.GetObjectKind) * [type ImageStreamTagList](#ImageStreamTagList) * + [func (obj *ImageStreamTagList) GetObjectKind() unversioned.ObjectKind](#ImageStreamTagList.GetObjectKind) * [type RepositoryImportSpec](#RepositoryImportSpec) * [type RepositoryImportStatus](#RepositoryImportStatus) * [type SignatureCondition](#SignatureCondition) * [type SignatureConditionType](#SignatureConditionType) * [type SignatureGenericEntity](#SignatureGenericEntity) * [type SignatureIssuer](#SignatureIssuer) * [type SignatureSubject](#SignatureSubject) * [type TagEvent](#TagEvent) * + [func LatestImageTagEvent(stream *ImageStream, imageID string) (string, *TagEvent)](#LatestImageTagEvent) + [func LatestTaggedImage(stream *ImageStream, tag string) *TagEvent](#LatestTaggedImage) + [func ResolveImageID(stream *ImageStream, imageID string) (*TagEvent, error)](#ResolveImageID) * [type TagEventCondition](#TagEventCondition) * [type TagEventConditionType](#TagEventConditionType) * [type TagEventList](#TagEventList) * [type TagImportPolicy](#TagImportPolicy) * [type TagReference](#TagReference) * + [func FollowTagReference(stream *ImageStream, tag string) (finalTag string, ref *TagReference, ok bool, multiple bool)](#FollowTagReference) * + [func (tagref TagReference) HasAnnotationTag(searchTag string) bool](#TagReference.HasAnnotationTag) * [type TagReferencePolicy](#TagReferencePolicy) * 
[type TagReferencePolicyType](#TagReferencePolicyType) ### Constants [¶](#pkg-constants) ``` const ( // DockerDefaultNamespace is the value for namespace when a single segment name is provided. DockerDefaultNamespace = "library" // DockerDefaultRegistry is the value for the registry when none was provided. DockerDefaultRegistry = "docker.io" // DockerDefaultV1Registry is the host name of the default v1 registry DockerDefaultV1Registry = "index." + [DockerDefaultRegistry](#DockerDefaultRegistry) // DockerDefaultV2Registry is the host name of the default v2 registry DockerDefaultV2Registry = "registry-1." + [DockerDefaultRegistry](#DockerDefaultRegistry) // TagReferenceAnnotationTagHidden indicates that a given TagReference is hidden from search results TagReferenceAnnotationTagHidden = "hidden" ) ``` ``` const ( // ManagedByOpenShiftAnnotation indicates that an image is managed by OpenShift's registry. ManagedByOpenShiftAnnotation = "openshift.io/image.managed" // DockerImageRepositoryCheckAnnotation indicates that OpenShift has // attempted to import tag and image information from an external Docker // image repository. DockerImageRepositoryCheckAnnotation = "openshift.io/image.dockerRepositoryCheck" // InsecureRepositoryAnnotation may be set true on an image stream to allow insecure access to pull content. InsecureRepositoryAnnotation = "openshift.io/image.insecureRepository" // ExcludeImageSecretAnnotation indicates that a secret should not be returned by imagestream/secrets. ExcludeImageSecretAnnotation = "openshift.io/image.excludeSecret" // DefaultImageTag is used when an image tag is needed and the configuration does not specify a tag to use. DefaultImageTag = "latest" // ResourceImageStreams represents a number of image streams in a project. ResourceImageStreams [kapi](/k8s.io/kubernetes/pkg/api).[ResourceName](/k8s.io/kubernetes/pkg/api#ResourceName) = "openshift.io/imagestreams" // ResourceImageStreamImages represents a number of unique references to images in all image stream // statuses of a project. ResourceImageStreamImages [kapi](/k8s.io/kubernetes/pkg/api).[ResourceName](/k8s.io/kubernetes/pkg/api#ResourceName) = "openshift.io/images" // ResourceImageStreamTags represents a number of unique references to images in all image stream specs // of a project. ResourceImageStreamTags [kapi](/k8s.io/kubernetes/pkg/api).[ResourceName](/k8s.io/kubernetes/pkg/api#ResourceName) = "openshift.io/image-tags" // Limit that applies to images. Used with a max["storage"] LimitRangeItem to set // the maximum size of an image. LimitTypeImage [kapi](/k8s.io/kubernetes/pkg/api).[LimitType](/k8s.io/kubernetes/pkg/api#LimitType) = "openshift.io/Image" // Limit that applies to image streams. Used with a max[resource] LimitRangeItem to set the maximum number // of resource. Where the resource is one of "openshift.io/images" and "openshift.io/image-tags". LimitTypeImageStream [kapi](/k8s.io/kubernetes/pkg/api).[LimitType](/k8s.io/kubernetes/pkg/api#LimitType) = "openshift.io/ImageStream" ) ``` ``` const ( // SignatureTrusted means the signing key or certificate was valid and the signature matched the image at // the probe time. SignatureTrusted = "Trusted" // SignatureForImage means the signature matches image object containing it. SignatureForImage = "ForImage" // SignatureExpired means the signature or its signing key or certificate had been expired at the probe // time. SignatureExpired = "Expired" // SignatureRevoked means the signature or its signing key or certificate has been revoked. 
SignatureRevoked = "Revoked" ) ``` These are valid conditions of an image signature. ``` const FutureGroupName = "image.openshift.io" ``` ``` const GroupName = "" ``` ``` const ( // The supported type of image signature. ImageSignatureTypeAtomicImageV1 [string](/builtin#string) = "AtomicImageV1" ) ``` ### Variables [¶](#pkg-variables) ``` var ( SchemeBuilder = [runtime](/k8s.io/kubernetes/pkg/runtime).[NewSchemeBuilder](/k8s.io/kubernetes/pkg/runtime#NewSchemeBuilder)(addKnownTypes) AddToScheme = [SchemeBuilder](#SchemeBuilder).AddToScheme ) ``` ``` var SchemeGroupVersion = [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[GroupVersion](/k8s.io/kubernetes/pkg/api/unversioned#GroupVersion){Group: [GroupName](#GroupName), Version: [runtime](/k8s.io/kubernetes/pkg/runtime).[APIVersionInternal](/k8s.io/kubernetes/pkg/runtime#APIVersionInternal)} ``` SchemeGroupVersion is group version used to register these objects ### Functions [¶](#pkg-functions) #### func [AddTagEventToImageStream](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L696) [¶](#AddTagEventToImageStream) ``` func AddTagEventToImageStream(stream *[ImageStream](#ImageStream), tag [string](/builtin#string), next [TagEvent](#TagEvent)) [bool](/builtin#bool) ``` AddTagEventToImageStream attempts to update the given image stream with a tag event. It will collapse duplicate entries - returning true if a change was made or false if no change occurred. Any successful tag resets the status field. #### func [DeepCopy_api_Descriptor](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L67) [¶](#DeepCopy_api_Descriptor) added in v1.3.0 ``` func DeepCopy_api_Descriptor(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_DockerConfig](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L78) [¶](#DeepCopy_api_DockerConfig) added in v1.3.0 ``` func DeepCopy_api_DockerConfig(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_DockerConfigHistory](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L179) [¶](#DeepCopy_api_DockerConfigHistory) added in v1.3.0 ``` func DeepCopy_api_DockerConfigHistory(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_DockerConfigRootFS](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L192) [¶](#DeepCopy_api_DockerConfigRootFS) added in v1.3.0 ``` func DeepCopy_api_DockerConfigRootFS(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_DockerFSLayer](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L208) [¶](#DeepCopy_api_DockerFSLayer) added in v1.3.0 ``` func DeepCopy_api_DockerFSLayer(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func 
[DeepCopy_api_DockerHistory](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L217) [¶](#DeepCopy_api_DockerHistory) added in v1.3.0 ``` func DeepCopy_api_DockerHistory(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_DockerImage](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L226) [¶](#DeepCopy_api_DockerImage) added in v1.3.0 ``` func DeepCopy_api_DockerImage(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_DockerImageConfig](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L256) [¶](#DeepCopy_api_DockerImageConfig) added in v1.3.0 ``` func DeepCopy_api_DockerImageConfig(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_DockerImageManifest](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L313) [¶](#DeepCopy_api_DockerImageManifest) added in v1.3.0 ``` func DeepCopy_api_DockerImageManifest(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_DockerImageReference](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L354) [¶](#DeepCopy_api_DockerImageReference) added in v1.3.0 ``` func DeepCopy_api_DockerImageReference(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_DockerV1CompatibilityImage](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L367) [¶](#DeepCopy_api_DockerV1CompatibilityImage) added in v1.3.0 ``` func DeepCopy_api_DockerV1CompatibilityImage(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_DockerV1CompatibilityImageSize](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L396) [¶](#DeepCopy_api_DockerV1CompatibilityImageSize) added in v1.3.0 ``` func DeepCopy_api_DockerV1CompatibilityImageSize(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_Image](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L405) [¶](#DeepCopy_api_Image) added in v1.3.0 ``` func DeepCopy_api_Image(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageImportSpec](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L458) [¶](#DeepCopy_api_ImageImportSpec) added in v1.3.0 ``` func DeepCopy_api_ImageImportSpec(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func 
[DeepCopy_api_ImageImportStatus](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L476) [¶](#DeepCopy_api_ImageImportStatus) added in v1.3.0 ``` func DeepCopy_api_ImageImportStatus(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageLayer](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L497) [¶](#DeepCopy_api_ImageLayer) added in v1.3.0 ``` func DeepCopy_api_ImageLayer(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageList](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L508) [¶](#DeepCopy_api_ImageList) added in v1.3.0 ``` func DeepCopy_api_ImageList(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageSignature](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L529) [¶](#DeepCopy_api_ImageSignature) added in v1.3.0 ``` func DeepCopy_api_ImageSignature(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageStream](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L591) [¶](#DeepCopy_api_ImageStream) added in v1.3.0 ``` func DeepCopy_api_ImageStream(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageStreamImage](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L609) [¶](#DeepCopy_api_ImageStreamImage) added in v1.3.0 ``` func DeepCopy_api_ImageStreamImage(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageStreamImport](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L624) [¶](#DeepCopy_api_ImageStreamImport) added in v1.3.0 ``` func DeepCopy_api_ImageStreamImport(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageStreamImportSpec](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L642) [¶](#DeepCopy_api_ImageStreamImportSpec) added in v1.3.0 ``` func DeepCopy_api_ImageStreamImportSpec(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageStreamImportStatus](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L669) [¶](#DeepCopy_api_ImageStreamImportStatus) added in v1.3.0 ``` func DeepCopy_api_ImageStreamImportStatus(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func 
[DeepCopy_api_ImageStreamList](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L706) [¶](#DeepCopy_api_ImageStreamList) added in v1.3.0 ``` func DeepCopy_api_ImageStreamList(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageStreamMapping](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L727) [¶](#DeepCopy_api_ImageStreamMapping) added in v1.3.0 ``` func DeepCopy_api_ImageStreamMapping(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageStreamSpec](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L744) [¶](#DeepCopy_api_ImageStreamSpec) added in v1.3.0 ``` func DeepCopy_api_ImageStreamSpec(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageStreamStatus](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L766) [¶](#DeepCopy_api_ImageStreamStatus) added in v1.3.0 ``` func DeepCopy_api_ImageStreamStatus(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageStreamTag](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L788) [¶](#DeepCopy_api_ImageStreamTag) added in v1.3.0 ``` func DeepCopy_api_ImageStreamTag(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_ImageStreamTagList](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L824) [¶](#DeepCopy_api_ImageStreamTagList) added in v1.3.0 ``` func DeepCopy_api_ImageStreamTagList(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_RepositoryImportSpec](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L845) [¶](#DeepCopy_api_RepositoryImportSpec) added in v1.3.0 ``` func DeepCopy_api_RepositoryImportSpec(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_RepositoryImportStatus](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L856) [¶](#DeepCopy_api_RepositoryImportStatus) added in v1.3.0 ``` func DeepCopy_api_RepositoryImportStatus(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_SignatureCondition](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L885) [¶](#DeepCopy_api_SignatureCondition) added in v1.3.0 ``` func DeepCopy_api_SignatureCondition(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func 
[DeepCopy_api_SignatureGenericEntity](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L899) [¶](#DeepCopy_api_SignatureGenericEntity) added in v1.3.0 ``` func DeepCopy_api_SignatureGenericEntity(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_SignatureIssuer](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L909) [¶](#DeepCopy_api_SignatureIssuer) added in v1.3.0 ``` func DeepCopy_api_SignatureIssuer(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_SignatureSubject](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L918) [¶](#DeepCopy_api_SignatureSubject) added in v1.3.0 ``` func DeepCopy_api_SignatureSubject(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_TagEvent](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L928) [¶](#DeepCopy_api_TagEvent) added in v1.3.0 ``` func DeepCopy_api_TagEvent(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_TagEventCondition](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L940) [¶](#DeepCopy_api_TagEventCondition) added in v1.3.0 ``` func DeepCopy_api_TagEventCondition(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_TagEventList](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L954) [¶](#DeepCopy_api_TagEventList) added in v1.3.0 ``` func DeepCopy_api_TagEventList(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_TagImportPolicy](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L984) [¶](#DeepCopy_api_TagImportPolicy) added in v1.3.0 ``` func DeepCopy_api_TagImportPolicy(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_TagReference](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L994) [¶](#DeepCopy_api_TagReference) added in v1.3.0 ``` func DeepCopy_api_TagReference(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func [DeepCopy_api_TagReferencePolicy](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L1029) [¶](#DeepCopy_api_TagReferencePolicy) added in v1.5.1 ``` func DeepCopy_api_TagReferencePolicy(in interface{}, out interface{}, c *[conversion](/k8s.io/kubernetes/pkg/conversion).[Cloner](/k8s.io/kubernetes/pkg/conversion#Cloner)) [error](/builtin#error) ``` #### func 
[DifferentTagEvent](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L668) [¶](#DifferentTagEvent) added in v1.1.2 ``` func DifferentTagEvent(stream *[ImageStream](#ImageStream), tag [string](/builtin#string), next [TagEvent](#TagEvent)) [bool](/builtin#bool) ``` DifferentTagEvent returns true if the supplied tag event differs from the current stream tag event. Generation is not compared. #### func [DifferentTagGeneration](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L681) [¶](#DifferentTagGeneration) added in v1.3.0 ``` func DifferentTagGeneration(stream *[ImageStream](#ImageStream), tag [string](/builtin#string)) [bool](/builtin#bool) ``` DifferentTagGeneration compares the generation on the tag's spec vs. its status. It returns true if the spec generation is newer than the status one. #### func [DockerImageReferenceForImage](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L643) [¶](#DockerImageReferenceForImage) added in v1.5.1 ``` func DockerImageReferenceForImage(stream *[ImageStream](#ImageStream), imageID [string](/builtin#string)) ([string](/builtin#string), [bool](/builtin#bool)) ``` DockerImageReferenceForImage returns the Docker reference for the specified image. Assuming the image stream contains the image and the image has a corresponding tag, this function will find that tag and take its reference policy into account. If the image stream does not reference the image, or the image does not have a corresponding tag event, this function returns false. #### func [HasTagCondition](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L914) [¶](#HasTagCondition) added in v1.1.2 ``` func HasTagCondition(stream *[ImageStream](#ImageStream), tag [string](/builtin#string), condition [TagEventCondition](#TagEventCondition)) [bool](/builtin#bool) ``` HasTagCondition returns true if the specified image stream tag has a condition with the same type, status, and reason (does not check generation, date, or message). #### func [ImageConfigMatchesImage](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L348) [¶](#ImageConfigMatchesImage) added in v1.3.0 ``` func ImageConfigMatchesImage(image *[Image](#Image), imageConfig [][byte](/builtin#byte)) ([bool](/builtin#bool), [error](/builtin#error)) ``` ImageConfigMatchesImage returns true if the provided image config matches a digest stored in the manifest of the image. #### func [ImageStreamToSelectableFields](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/fields.go#L14) [¶](#ImageStreamToSelectableFields) added in v1.0.8 ``` func ImageStreamToSelectableFields(ir *[ImageStream](#ImageStream)) [fields](/k8s.io/kubernetes/pkg/fields).[Set](/k8s.io/kubernetes/pkg/fields#Set) ``` ImageStreamToSelectableFields returns a label set that represents the object. #### func [ImageToSelectableFields](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/fields.go#L6) [¶](#ImageToSelectableFields) added in v1.0.8 ``` func ImageToSelectableFields(image *[Image](#Image)) [fields](/k8s.io/kubernetes/pkg/fields).[Set](/k8s.io/kubernetes/pkg/fields#Set) ``` ImageToSelectableFields returns a label set that represents the object. #### func [ImageWithMetadata](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L372) [¶](#ImageWithMetadata) ``` func ImageWithMetadata(image *[Image](#Image)) [error](/builtin#error) ``` ImageWithMetadata mutates the given image. 
It parses raw DockerImageManifest data stored in the image and fills its DockerImageMetadata and other fields. #### func [IndexOfImageSignature](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L1076) [¶](#IndexOfImageSignature) added in v1.3.0 ``` func IndexOfImageSignature(signatures [][ImageSignature](#ImageSignature), sType [string](/builtin#string), sContent [][byte](/builtin#byte)) [int](/builtin#int) ``` IndexOfImageSignature returns the index of the signature identified by type and blob in the image, if present. It returns -1 otherwise. #### func [IndexOfImageSignatureByName](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L1065) [¶](#IndexOfImageSignatureByName) added in v1.3.0 ``` func IndexOfImageSignatureByName(signatures [][ImageSignature](#ImageSignature), name [string](/builtin#string)) [int](/builtin#int) ``` IndexOfImageSignatureByName returns the index of the signature identified by name in the image, if present. It returns -1 otherwise. #### func [IsRegistryDockerHub](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L97) [¶](#IsRegistryDockerHub) added in v1.1.2 ``` func IsRegistryDockerHub(registry [string](/builtin#string)) [bool](/builtin#bool) ``` IsRegistryDockerHub returns true if the given registry name belongs to Docker Hub. #### func [JoinImageSignatureName](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L1034) [¶](#JoinImageSignatureName) added in v1.3.0 ``` func JoinImageSignatureName(imageName, signatureName [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` JoinImageSignatureName joins an image name and a custom signature name into one string with an @ separator. #### func [JoinImageStreamTag](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L291) [¶](#JoinImageStreamTag) ``` func JoinImageStreamTag(name, tag [string](/builtin#string)) [string](/builtin#string) ``` JoinImageStreamTag turns a name and tag into the name of an ImageStreamTag. #### func [Kind](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/register.go#L15) [¶](#Kind) added in v1.1.2 ``` func Kind(kind [string](/builtin#string)) [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[GroupKind](/k8s.io/kubernetes/pkg/api/unversioned#GroupKind) ``` Kind takes an unqualified kind and returns a Group qualified GroupKind. #### func [LabelForStream](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L1029) [¶](#LabelForStream) added in v1.3.0 ``` func LabelForStream(stream *[ImageStream](#ImageStream)) [string](/builtin#string) ``` #### func [LatestObservedTagGeneration](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L935) [¶](#LatestObservedTagGeneration) added in v1.1.2 ``` func LatestObservedTagGeneration(stream *[ImageStream](#ImageStream), tag [string](/builtin#string)) [int64](/builtin#int64) ``` LatestObservedTagGeneration returns the generation value for the given tag that has been observed by the controller monitoring the image stream. If the tag has not been observed, the generation is zero. #### func [MakeImageStreamImageName](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L91) [¶](#MakeImageStreamImageName) added in v1.3.0 ``` func MakeImageStreamImageName(name, id [string](/builtin#string)) [string](/builtin#string) ``` MakeImageStreamImageName creates a name for an image stream image object from an image stream name and an ID. 
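The naming helpers above compose and decompose the object names used throughout this package. The following is a minimal sketch (not part of the upstream docs), assuming the package is vendored at its v1.5.1 import path and aliased as `imageapi`; the `ruby` stream name and digest are invented:

```
package main

import (
	"fmt"

	imageapi "github.com/openshift/origin/pkg/image/api"
)

func main() {
	// JoinImageStreamTag composes the <stream name>:<tag> form used by ImageStreamTag names.
	istag := imageapi.JoinImageStreamTag("ruby", "2.3")
	fmt.Println(istag) // ruby:2.3

	// MakeImageStreamImageName composes the <stream name>@<id> form used by ImageStreamImage names.
	fmt.Println(imageapi.MakeImageStreamImageName("ruby", "sha256:0123456789abcdef"))

	// ParseImageStreamTagName (documented below) reverses the join and validates the form.
	name, tag, err := imageapi.ParseImageStreamTagName(istag)
	if err != nil {
		panic(err)
	}
	fmt.Println(name, tag) // ruby 2.3
}
```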
#### func [ManifestMatchesImage](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L310) [¶](#ManifestMatchesImage) added in v1.1.2 ``` func ManifestMatchesImage(image *[Image](#Image), newManifest [][byte](/builtin#byte)) ([bool](/builtin#bool), [error](/builtin#error)) ``` ManifestMatchesImage returns true if the provided manifest matches the name of the image. #### func [MostAccuratePullSpec](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L886) [¶](#MostAccuratePullSpec) added in v1.1.2 ``` func MostAccuratePullSpec(pullSpec [string](/builtin#string), id, tag [string](/builtin#string)) ([string](/builtin#string), [bool](/builtin#bool)) ``` MostAccuratePullSpec returns a docker image reference that uses the current ID if possible, the current tag otherwise, and returns false if the spec could not be parsed. The returned spec has all client defaults applied. #### func [NormalizeImageStreamTag](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L300) [¶](#NormalizeImageStreamTag) added in v1.0.8 ``` func NormalizeImageStreamTag(name [string](/builtin#string)) [string](/builtin#string) ``` NormalizeImageStreamTag normalizes an image stream tag by defaulting to 'latest' if no tag has been specified. #### func [ParseImageStreamImageName](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L54) [¶](#ParseImageStreamImageName) added in v1.3.0 ``` func ParseImageStreamImageName(input [string](/builtin#string)) (name [string](/builtin#string), id [string](/builtin#string), err [error](/builtin#error)) ``` ParseImageStreamImageName splits a string into its name component and ID component, and returns an error if the string is not in the right form. #### func [ParseImageStreamTagName](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L71) [¶](#ParseImageStreamTagName) added in v1.3.0 ``` func ParseImageStreamTagName(istag [string](/builtin#string)) (name [string](/builtin#string), tag [string](/builtin#string), err [error](/builtin#error)) ``` ParseImageStreamTagName splits a string into its name component and tag component, and returns an error if the string is not in the right form. #### func [PrioritizeTags](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L975) [¶](#PrioritizeTags) added in v1.1.2 ``` func PrioritizeTags(tags [][string](/builtin#string)) ``` PrioritizeTags orders a set of image tags with a few conventions:

1. the "latest" tag, if present, should be first
2. any tags that represent a semantic minor version ("5.1", "v5.1", "v5.1-rc1") should be next, in descending order
3. any tags that represent a full semantic version ("5.1.3-other", "v5.1.3-other") should be next, in descending order
4. any remaining tags should be sorted in lexicographic order

The method updates the tags in place. #### func [RegisterDeepCopies](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/zz_generated.deepcopy.go#L21) [¶](#RegisterDeepCopies) added in v1.4.0 ``` func RegisterDeepCopies(scheme *[runtime](/k8s.io/kubernetes/pkg/runtime).[Scheme](/k8s.io/kubernetes/pkg/runtime#Scheme)) [error](/builtin#error) ``` RegisterDeepCopies adds deep-copy functions to the given scheme. Public to allow building arbitrary schemes. 
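As an illustration of those ordering conventions, here is a small hypothetical sketch (same `imageapi` alias and imports as the first sketch); the order in the comment is what the four rules imply, not verified output:

```
tags := []string{"other", "5.1", "v5.1.3-other", "latest", "v5.2"}
imageapi.PrioritizeTags(tags)
// Expected by the documented rules: "latest" first, then minor versions in
// descending order ("v5.2", "5.1"), then full semantic versions
// ("v5.1.3-other"), then the remainder lexicographically ("other").
fmt.Println(tags)
```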
#### func [ResolveLatestTaggedImage](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L593) [¶](#ResolveLatestTaggedImage) added in v1.5.1 ``` func ResolveLatestTaggedImage(stream *[ImageStream](#ImageStream), tag [string](/builtin#string)) ([string](/builtin#string), [bool](/builtin#bool)) ``` ResolveLatestTaggedImage returns the appropriate pull spec for a given tag in the image stream, handling the tag's reference policy if necessary to return a resolved image. Callers that transform an ImageStreamTag into a pull spec should use this method instead of LatestTaggedImage. #### func [Resource](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/register.go#L20) [¶](#Resource) added in v1.1.2 ``` func Resource(resource [string](/builtin#string)) [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[GroupResource](/k8s.io/kubernetes/pkg/api/unversioned#GroupResource) ``` Resource takes an unqualified resource and returns a Group qualified GroupResource. #### func [SetTagConditions](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L924) [¶](#SetTagConditions) added in v1.1.2 ``` func SetTagConditions(stream *[ImageStream](#ImageStream), tag [string](/builtin#string), conditions ...[TagEventCondition](#TagEventCondition)) ``` SetTagConditions applies the specified conditions to the status of the given tag. #### func [ShortDockerImageID](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L901) [¶](#ShortDockerImageID) added in v1.0.7 ``` func ShortDockerImageID(image *[DockerImage](#DockerImage), length [int](/builtin#int)) [string](/builtin#string) ``` ShortDockerImageID returns a short form of the provided DockerImage ID for display #### func [SortStatusTags](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/sort.go#L30) [¶](#SortStatusTags) added in v1.0.6 ``` func SortStatusTags(tags map[[string](/builtin#string)][TagEventList](#TagEventList)) [][string](/builtin#string) ``` SortStatusTags sorts the status tags of an image stream based on the most recently created tag event. #### func [SplitImageSignatureName](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L1048) [¶](#SplitImageSignatureName) added in v1.3.0 ``` func SplitImageSignatureName(imageSignatureName [string](/builtin#string)) (imageName, signatureName [string](/builtin#string), err [error](/builtin#error)) ``` SplitImageSignatureName splits given signature name into image name and signature name. #### func [SplitImageStreamImage](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L281) [¶](#SplitImageStreamImage) added in v1.3.0 ``` func SplitImageStreamImage(nameAndID [string](/builtin#string)) (name [string](/builtin#string), id [string](/builtin#string), ok [bool](/builtin#bool)) ``` SplitImageStreamImage turns the name of an ImageStreamImage into Name and ID. It returns false if the ID was not properly specified in the name. #### func [SplitImageStreamTag](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L267) [¶](#SplitImageStreamTag) ``` func SplitImageStreamTag(nameAndTag [string](/builtin#string)) (name [string](/builtin#string), tag [string](/builtin#string), ok [bool](/builtin#bool)) ``` SplitImageStreamTag turns the name of an ImageStreamTag into Name and Tag. It returns false if the tag was not properly specified in the name. 
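Unlike the Parse helpers, the Split helpers report ok instead of an error, which makes them convenient in defaulting paths. A short sketch reusing the `imageapi` alias from the first example (`ruby` is an invented stream name):

```
if name, tag, ok := imageapi.SplitImageStreamTag("ruby:2.3"); ok {
	fmt.Println(name, tag) // ruby 2.3
}
// NormalizeImageStreamTag defaults the tag to "latest" when none is given.
fmt.Println(imageapi.NormalizeImageStreamTag("ruby")) // ruby:latest
```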
#### func [UpdateChangedTrackingTags](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L740) [¶](#UpdateChangedTrackingTags) added in v1.0.7 ``` func UpdateChangedTrackingTags(new, old *[ImageStream](#ImageStream)) [int](/builtin#int) ``` UpdateChangedTrackingTags identifies any tags in the status that have changed and ensures any referenced tracking tags are also updated. It returns the number of updates applied. #### func [UpdateTrackingTags](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L783) [¶](#UpdateTrackingTags) ``` func UpdateTrackingTags(stream *[ImageStream](#ImageStream), updatedTag [string](/builtin#string), updatedImage [TagEvent](#TagEvent)) [int](/builtin#int) ``` UpdateTrackingTags sets updatedImage as the most recent TagEvent for all tags in stream.spec.tags that have from.kind = "ImageStreamTag" and the tag in from.name = updatedTag. from.name may be either <tag> or <stream name>:<tag>. For now, only references to tags in the current stream are supported. For example, if stream.spec.tags[latest].from.name = 2.0, whenever an image is pushed to this stream with the tag 2.0, status.tags[latest].items[0] will also be updated to point at the same image that was just pushed for 2.0. Returns the number of tags changed. ### Types [¶](#pkg-types) #### type [DefaultRegistry](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L40) [¶](#DefaultRegistry) added in v1.3.0 ``` type DefaultRegistry interface { DefaultRegistry() ([string](/builtin#string), [bool](/builtin#bool)) } ``` DefaultRegistry returns the default Docker registry (host or host:port), or false if it is not available. #### type [DefaultRegistryFunc](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L45) [¶](#DefaultRegistryFunc) added in v1.3.0 ``` type DefaultRegistryFunc func() ([string](/builtin#string), [bool](/builtin#bool)) ``` DefaultRegistryFunc implements DefaultRegistry for a simple function. #### func (DefaultRegistryFunc) [DefaultRegistry](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L48) [¶](#DefaultRegistryFunc.DefaultRegistry) added in v1.3.0 ``` func (fn [DefaultRegistryFunc](#DefaultRegistryFunc)) DefaultRegistry() ([string](/builtin#string), [bool](/builtin#bool)) ``` DefaultRegistry implements the DefaultRegistry interface for a function. #### type [Descriptor](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/dockertypes.go#L60) [¶](#Descriptor) added in v1.3.0 ``` type Descriptor struct { // MediaType describe the type of the content. All text based formats are // encoded as utf-8. MediaType [string](/builtin#string) `json:"mediaType,omitempty"` // Size in bytes of content. Size [int64](/builtin#int64) `json:"size,omitempty"` // Digest uniquely identifies the content. A byte stream can be verified // against this digest. Digest [string](/builtin#string) `json:"digest,omitempty"` } ``` Descriptor describes targeted content. Used in conjunction with a blob store, a descriptor can be used to fetch, store and target any kind of blob. The struct also describes the wire protocol format. Fields should only be added but never changed. 
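DefaultRegistryFunc follows the familiar Go adapter pattern (compare http.HandlerFunc): a bare function satisfies the DefaultRegistry interface. A minimal sketch with an invented registry address, reusing the imports from the first example:

```
var reg imageapi.DefaultRegistry = imageapi.DefaultRegistryFunc(func() (string, bool) {
	// A real implementation would consult cluster configuration; this value is made up.
	return "172.30.0.1:5000", true
})
if host, ok := reg.DefaultRegistry(); ok {
	fmt.Println("default registry:", host)
}
```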
#### type [DockerConfig](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/dockertypes.go#L26) [¶](#DockerConfig) ``` type DockerConfig struct { Hostname [string](/builtin#string) `json:"Hostname,omitempty"` Domainname [string](/builtin#string) `json:"Domainname,omitempty"` User [string](/builtin#string) `json:"User,omitempty"` Memory [int64](/builtin#int64) `json:"Memory,omitempty"` MemorySwap [int64](/builtin#int64) `json:"MemorySwap,omitempty"` CPUShares [int64](/builtin#int64) `json:"CpuShares,omitempty"` CPUSet [string](/builtin#string) `json:"Cpuset,omitempty"` AttachStdin [bool](/builtin#bool) `json:"AttachStdin,omitempty"` AttachStdout [bool](/builtin#bool) `json:"AttachStdout,omitempty"` AttachStderr [bool](/builtin#bool) `json:"AttachStderr,omitempty"` PortSpecs [][string](/builtin#string) `json:"PortSpecs,omitempty"` ExposedPorts map[[string](/builtin#string)]struct{} `json:"ExposedPorts,omitempty"` Tty [bool](/builtin#bool) `json:"Tty,omitempty"` OpenStdin [bool](/builtin#bool) `json:"OpenStdin,omitempty"` StdinOnce [bool](/builtin#bool) `json:"StdinOnce,omitempty"` Env [][string](/builtin#string) `json:"Env,omitempty"` Cmd [][string](/builtin#string) `json:"Cmd,omitempty"` DNS [][string](/builtin#string) `json:"Dns,omitempty"` // For Docker API v1.9 and below only Image [string](/builtin#string) `json:"Image,omitempty"` Volumes map[[string](/builtin#string)]struct{} `json:"Volumes,omitempty"` VolumesFrom [string](/builtin#string) `json:"VolumesFrom,omitempty"` WorkingDir [string](/builtin#string) `json:"WorkingDir,omitempty"` Entrypoint [][string](/builtin#string) `json:"Entrypoint,omitempty"` NetworkDisabled [bool](/builtin#bool) `json:"NetworkDisabled,omitempty"` SecurityOpts [][string](/builtin#string) `json:"SecurityOpts,omitempty"` OnBuild [][string](/builtin#string) `json:"OnBuild,omitempty"` Labels map[[string](/builtin#string)][string](/builtin#string) `json:"Labels,omitempty"` } ``` DockerConfig is the list of configuration options used when creating a container. 
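Every field of DockerConfig carries an omitempty JSON tag, so marshaling a sparsely populated value produces a compact document containing only what was set. A sketch with invented field values; it assumes the `imageapi` alias from the first example plus an `encoding/json` import:

```
cfg := imageapi.DockerConfig{
	Env:          []string{"PATH=/usr/bin"},
	Cmd:          []string{"/bin/sh", "-c", "echo hello"},
	ExposedPorts: map[string]struct{}{"8080/tcp": {}},
}
out, err := json.Marshal(cfg)
if err != nil {
	panic(err)
}
fmt.Println(string(out)) // only Env, Cmd, and ExposedPorts appear
```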
#### type [DockerConfigHistory](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/dockertypes.go#L145) [¶](#DockerConfigHistory) added in v1.3.0 ``` type DockerConfigHistory struct { Created [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[Time](/k8s.io/kubernetes/pkg/api/unversioned#Time) `json:"created"` Author [string](/builtin#string) `json:"author,omitempty"` CreatedBy [string](/builtin#string) `json:"created_by,omitempty"` Comment [string](/builtin#string) `json:"comment,omitempty"` EmptyLayer [bool](/builtin#bool) `json:"empty_layer,omitempty"` } ``` DockerConfigHistory stores build commands that were used to create an image #### type [DockerConfigRootFS](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/dockertypes.go#L154) [¶](#DockerConfigRootFS) added in v1.3.0 ``` type DockerConfigRootFS struct { Type [string](/builtin#string) `json:"type"` DiffIDs [][string](/builtin#string) `json:"diff_ids,omitempty"` } ``` DockerConfigRootFS describes an image's root filesystem #### type [DockerFSLayer](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/dockertypes.go#L91) [¶](#DockerFSLayer) ``` type DockerFSLayer struct { // DockerBlobSum is the tarsum of the referenced filesystem image layer // TODO make this digest.Digest once docker/distribution is in Godeps DockerBlobSum [string](/builtin#string) `json:"blobSum"` } ``` DockerFSLayer is a container struct for BlobSums defined in an image manifest #### type [DockerHistory](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/dockertypes.go#L98) [¶](#DockerHistory) ``` type DockerHistory struct { // DockerV1Compatibility is the raw v1 compatibility information DockerV1Compatibility [string](/builtin#string) `json:"v1Compatibility"` } ``` DockerHistory stores unstructured v1 compatibility information #### type [DockerImage](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/dockertypes.go#L9) [¶](#DockerImage) ``` type DockerImage struct { [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[TypeMeta](/k8s.io/kubernetes/pkg/api/unversioned#TypeMeta) `json:",inline"` ID [string](/builtin#string) `json:"Id"` Parent [string](/builtin#string) `json:"Parent,omitempty"` Comment [string](/builtin#string) `json:"Comment,omitempty"` Created [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[Time](/k8s.io/kubernetes/pkg/api/unversioned#Time) `json:"Created,omitempty"` Container [string](/builtin#string) `json:"Container,omitempty"` ContainerConfig [DockerConfig](#DockerConfig) `json:"ContainerConfig,omitempty"` DockerVersion [string](/builtin#string) `json:"DockerVersion,omitempty"` Author [string](/builtin#string) `json:"Author,omitempty"` Config *[DockerConfig](#DockerConfig) `json:"Config,omitempty"` Architecture [string](/builtin#string) `json:"Architecture,omitempty"` Size [int64](/builtin#int64) `json:"Size,omitempty"` } ``` DockerImage is the type representing a docker image and its various properties when retrieved from the Docker client API. 
#### func (*DockerImage) [GetObjectKind](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/register.go#L49) [¶](#DockerImage.GetObjectKind) added in v1.1.3 ``` func (obj *[DockerImage](#DockerImage)) GetObjectKind() [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ObjectKind](/k8s.io/kubernetes/pkg/api/unversioned#ObjectKind) ``` #### type [DockerImageConfig](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/dockertypes.go#L126) [¶](#DockerImageConfig) added in v1.3.0 ``` type DockerImageConfig struct { ID [string](/builtin#string) `json:"id"` Parent [string](/builtin#string) `json:"parent,omitempty"` Comment [string](/builtin#string) `json:"comment,omitempty"` Created [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[Time](/k8s.io/kubernetes/pkg/api/unversioned#Time) `json:"created"` Container [string](/builtin#string) `json:"container,omitempty"` ContainerConfig [DockerConfig](#DockerConfig) `json:"container_config,omitempty"` DockerVersion [string](/builtin#string) `json:"docker_version,omitempty"` Author [string](/builtin#string) `json:"author,omitempty"` Config *[DockerConfig](#DockerConfig) `json:"config,omitempty"` Architecture [string](/builtin#string) `json:"architecture,omitempty"` Size [int64](/builtin#int64) `json:"size,omitempty"` RootFS *[DockerConfigRootFS](#DockerConfigRootFS) `json:"rootfs,omitempty"` History [][DockerConfigHistory](#DockerConfigHistory) `json:"history,omitempty"` OSVersion [string](/builtin#string) `json:"os.version,omitempty"` OSFeatures [][string](/builtin#string) `json:"os.features,omitempty"` } ``` DockerImageConfig stores the image configuration #### type [DockerImageManifest](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/dockertypes.go#L74) [¶](#DockerImageManifest) ``` type DockerImageManifest struct { SchemaVersion [int](/builtin#int) `json:"schemaVersion"` MediaType [string](/builtin#string) `json:"mediaType,omitempty"` // schema1 Name [string](/builtin#string) `json:"name"` Tag [string](/builtin#string) `json:"tag"` Architecture [string](/builtin#string) `json:"architecture"` FSLayers [][DockerFSLayer](#DockerFSLayer) `json:"fsLayers"` History [][DockerHistory](#DockerHistory) `json:"history"` // schema2 Layers [][Descriptor](#Descriptor) `json:"layers"` Config [Descriptor](#Descriptor) `json:"config"` } ``` DockerImageManifest represents the Docker v2 image format. #### type [DockerImageReference](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L390) [¶](#DockerImageReference) ``` type DockerImageReference struct { Registry [string](/builtin#string) Namespace [string](/builtin#string) Name [string](/builtin#string) Tag [string](/builtin#string) ID [string](/builtin#string) } ``` DockerImageReference points to a Docker image. #### func [DockerImageReferenceForStream](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L508) [¶](#DockerImageReferenceForStream) ``` func DockerImageReferenceForStream(stream *[ImageStream](#ImageStream)) ([DockerImageReference](#DockerImageReference), [error](/builtin#error)) ``` DockerImageReferenceForStream returns a DockerImageReference that represents the ImageStream, or an error if no valid reference exists. 
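DockerImageManifest carries both schema 1 fields (fsLayers, history) and schema 2 fields (layers, config); SchemaVersion and MediaType indicate which set is populated. A sketch decoding a hand-written schema 2 fragment, with invented digests, reusing the `imageapi` alias and an `encoding/json` import:

```
raw := `{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {"mediaType": "application/vnd.docker.container.image.v1+json", "size": 7023, "digest": "sha256:deadbeef"},
  "layers": [{"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", "size": 32654, "digest": "sha256:cafebabe"}]
}`
var m imageapi.DockerImageManifest
if err := json.Unmarshal([]byte(raw), &m); err != nil {
	panic(err)
}
fmt.Println(m.SchemaVersion, m.Config.Digest, len(m.Layers)) // 2 sha256:deadbeef 1
```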
#### func [ParseDockerImageReference](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L108) [¶](#ParseDockerImageReference) ``` func ParseDockerImageReference(spec [string](/builtin#string)) ([DockerImageReference](#DockerImageReference), [error](/builtin#error)) ``` ParseDockerImageReference parses a Docker pull spec string into a DockerImageReference. #### func (DockerImageReference) [AsRepository](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L157) [¶](#DockerImageReference.AsRepository) added in v1.0.7 ``` func (r [DockerImageReference](#DockerImageReference)) AsRepository() [DockerImageReference](#DockerImageReference) ``` AsRepository returns the reference without tags or IDs. #### func (DockerImageReference) [AsV2](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L191) [¶](#DockerImageReference.AsV2) added in v1.1.2 ``` func (r [DockerImageReference](#DockerImageReference)) AsV2() [DockerImageReference](#DockerImageReference) ``` #### func (DockerImageReference) [DaemonMinimal](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L180) [¶](#DockerImageReference.DaemonMinimal) added in v1.0.7 ``` func (r [DockerImageReference](#DockerImageReference)) DaemonMinimal() [DockerImageReference](#DockerImageReference) ``` DaemonMinimal clears defaults that Docker assumes. #### func (DockerImageReference) [DockerClientDefaults](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L135) [¶](#DockerImageReference.DockerClientDefaults) ``` func (r [DockerImageReference](#DockerImageReference)) DockerClientDefaults() [DockerImageReference](#DockerImageReference) ``` DockerClientDefaults sets the default values used by the Docker client. #### func (DockerImageReference) [Equal](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L128) [¶](#DockerImageReference.Equal) added in v1.1.1 ``` func (r [DockerImageReference](#DockerImageReference)) Equal(other [DockerImageReference](#DockerImageReference)) [bool](/builtin#bool) ``` Equal returns true if the other DockerImageReference is equivalent to the reference r. The comparison applies defaults to the Docker image reference, so that e.g., "foobar" equals "docker.io/library/foobar:latest". #### func (DockerImageReference) [Exact](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L240) [¶](#DockerImageReference.Exact) added in v1.0.7 ``` func (r [DockerImageReference](#DockerImageReference)) Exact() [string](/builtin#string) ``` Exact returns a string representation of the set fields on the DockerImageReference #### func (DockerImageReference) [Minimal](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L149) [¶](#DockerImageReference.Minimal) ``` func (r [DockerImageReference](#DockerImageReference)) Minimal() [DockerImageReference](#DockerImageReference) ``` Minimal reduces a DockerImageReference to its minimalist form. #### func (DockerImageReference) [MostSpecific](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L202) [¶](#DockerImageReference.MostSpecific) added in v1.1.2 ``` func (r [DockerImageReference](#DockerImageReference)) MostSpecific() [DockerImageReference](#DockerImageReference) ``` MostSpecific returns the most specific image reference that can be constructed from the current ref, preferring an ID over a Tag. Allows client code dealing with both tags and IDs to get the most specific reference easily. 
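Putting the reference helpers together: parse a short pull spec, apply client defaults, and compare. A sketch reusing the `imageapi` alias from the first example; the defaulted form in the comment is taken from the Equal doc's own example:

```
ref, err := imageapi.ParseDockerImageReference("foobar")
if err != nil {
	panic(err)
}
other, _ := imageapi.ParseDockerImageReference("docker.io/library/foobar:latest")
fmt.Println(ref.Equal(other)) // true, per the Equal doc
// Exact shows only what is actually set; DockerClientDefaults fills in the rest.
fmt.Println(ref.Exact(), ref.DockerClientDefaults().Exact())
```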
#### func (DockerImageReference) [NameString](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L218) [¶](#DockerImageReference.NameString) added in v1.0.5 ``` func (r [DockerImageReference](#DockerImageReference)) NameString() [string](/builtin#string) ``` NameString returns the name of the reference with its tag or ID. #### func (DockerImageReference) [RegistryURL](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L172) [¶](#DockerImageReference.RegistryURL) added in v1.1.2 ``` func (r [DockerImageReference](#DockerImageReference)) RegistryURL() *[url](/net/url).[URL](/net/url#URL) ``` RegistryURL returns the reference's registry as a URL #### func (DockerImageReference) [RepositoryName](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L164) [¶](#DockerImageReference.RepositoryName) added in v1.1.2 ``` func (r [DockerImageReference](#DockerImageReference)) RepositoryName() [string](/builtin#string) ``` RepositoryName returns the registry relative name #### func (DockerImageReference) [String](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L258) [¶](#DockerImageReference.String) ``` func (r [DockerImageReference](#DockerImageReference)) String() [string](/builtin#string) ``` String converts a DockerImageReference to a Docker pull spec (which implies a default namespace according to V1 Docker registry rules). Use Exact() if you want no defaulting. #### type [DockerV1CompatibilityImage](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/dockertypes.go#L105) [¶](#DockerV1CompatibilityImage) ``` type DockerV1CompatibilityImage struct { ID [string](/builtin#string) `json:"id"` Parent [string](/builtin#string) `json:"parent,omitempty"` Comment [string](/builtin#string) `json:"comment,omitempty"` Created [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[Time](/k8s.io/kubernetes/pkg/api/unversioned#Time) `json:"created"` Container [string](/builtin#string) `json:"container,omitempty"` ContainerConfig [DockerConfig](#DockerConfig) `json:"container_config,omitempty"` DockerVersion [string](/builtin#string) `json:"docker_version,omitempty"` Author [string](/builtin#string) `json:"author,omitempty"` Config *[DockerConfig](#DockerConfig) `json:"config,omitempty"` Architecture [string](/builtin#string) `json:"architecture,omitempty"` Size [int64](/builtin#int64) `json:"size,omitempty"` } ``` DockerV1CompatibilityImage represents the structured v1 compatibility information. #### type [DockerV1CompatibilityImageSize](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/dockertypes.go#L121) [¶](#DockerV1CompatibilityImageSize) added in v1.1.2 ``` type DockerV1CompatibilityImageSize struct { Size [int64](/builtin#int64) `json:"size,omitempty"` } ``` DockerV1CompatibilityImageSize represents the structured v1 compatibility information for size #### type [Image](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L57) [¶](#Image) ``` type Image struct { [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[TypeMeta](/k8s.io/kubernetes/pkg/api/unversioned#TypeMeta) [kapi](/k8s.io/kubernetes/pkg/api).[ObjectMeta](/k8s.io/kubernetes/pkg/api#ObjectMeta) // The string that can be used to pull this image. 
DockerImageReference [string](/builtin#string) // Metadata about this image DockerImageMetadata [DockerImage](#DockerImage) // This attribute conveys the version of docker metadata the JSON should be stored in, which if empty defaults to "1.0" DockerImageMetadataVersion [string](/builtin#string) // The raw JSON of the manifest DockerImageManifest [string](/builtin#string) // DockerImageLayers represents the layers in the image. May not be set if the image does not define that data. DockerImageLayers [][ImageLayer](#ImageLayer) // Signatures holds all signatures of the image. Signatures [][ImageSignature](#ImageSignature) // DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. DockerImageSignatures [][][byte](/builtin#byte) // DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. DockerImageManifestMediaType [string](/builtin#string) // DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. DockerImageConfig [string](/builtin#string) } ``` Image is an immutable representation of a Docker image and metadata at a point in time. #### func (*Image) [GetObjectKind](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/register.go#L47) [¶](#Image.GetObjectKind) added in v1.1.3 ``` func (obj *[Image](#Image)) GetObjectKind() [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ObjectKind](/k8s.io/kubernetes/pkg/api/unversioned#ObjectKind) ``` #### type [ImageImportSpec](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L455) [¶](#ImageImportSpec) added in v1.1.2 ``` type ImageImportSpec struct { From [kapi](/k8s.io/kubernetes/pkg/api).[ObjectReference](/k8s.io/kubernetes/pkg/api#ObjectReference) To *[kapi](/k8s.io/kubernetes/pkg/api).[LocalObjectReference](/k8s.io/kubernetes/pkg/api#LocalObjectReference) ImportPolicy [TagImportPolicy](#TagImportPolicy) IncludeManifest [bool](/builtin#bool) } ``` ImageImportSpec defines how an image is imported. #### type [ImageImportStatus](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L464) [¶](#ImageImportStatus) added in v1.1.2 ``` type ImageImportStatus struct { Tag [string](/builtin#string) Status [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[Status](/k8s.io/kubernetes/pkg/api/unversioned#Status) Image *[Image](#Image) } ``` ImageImportStatus describes the result of an image import. #### type [ImageLayer](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L82) [¶](#ImageLayer) added in v1.1.2 ``` type ImageLayer struct { // Name of the layer as defined by the underlying store. Name [string](/builtin#string) // LayerSize of the layer as defined by the underlying store. LayerSize [int64](/builtin#int64) // MediaType of the referenced object. MediaType [string](/builtin#string) } ``` ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. #### type [ImageList](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L9) [¶](#ImageList) ``` type ImageList struct { [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[TypeMeta](/k8s.io/kubernetes/pkg/api/unversioned#TypeMeta) [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ListMeta](/k8s.io/kubernetes/pkg/api/unversioned#ListMeta) Items [][Image](#Image) } ``` ImageList is a list of Image objects. 
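Since DockerImageLayers "may not be set" per the Image docs, a consumer summing layer sizes should treat an empty slice as unknown rather than zero. A small sketch with invented layer data, reusing the `imageapi` alias:

```
img := imageapi.Image{
	DockerImageLayers: []imageapi.ImageLayer{
		{Name: "sha256:aaa", LayerSize: 32654, MediaType: "application/vnd.docker.image.rootfs.diff.tar.gzip"},
		{Name: "sha256:bbb", LayerSize: 16724, MediaType: "application/vnd.docker.image.rootfs.diff.tar.gzip"},
	},
}
var total int64
for _, layer := range img.DockerImageLayers {
	total += layer.LayerSize
}
if len(img.DockerImageLayers) == 0 {
	fmt.Println("layer data not set for this image")
} else {
	fmt.Printf("%d layers, %d bytes total\n", len(img.DockerImageLayers), total)
}
```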
#### func (*ImageList) [GetObjectKind](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/register.go#L48) [¶](#ImageList.GetObjectKind) added in v1.1.3 ``` func (obj *[ImageList](#ImageList)) GetObjectKind() [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ObjectKind](/k8s.io/kubernetes/pkg/api/unversioned#ObjectKind) ``` #### type [ImageSignature](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L101) [¶](#ImageSignature) added in v1.3.0 ``` type ImageSignature struct { [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[TypeMeta](/k8s.io/kubernetes/pkg/api/unversioned#TypeMeta) [kapi](/k8s.io/kubernetes/pkg/api).[ObjectMeta](/k8s.io/kubernetes/pkg/api#ObjectMeta) // Required: Describes a type of stored blob. Type [string](/builtin#string) // Required: An opaque binary string which is an image's signature. Content [][byte](/builtin#byte) // Conditions represent the latest available observations of a signature's current state. Conditions [][SignatureCondition](#SignatureCondition) // A human readable string representing image's identity. It could be a product name and version, or an // image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). ImageIdentity [string](/builtin#string) // Contains claims from the signature. SignedClaims map[[string](/builtin#string)][string](/builtin#string) // If specified, it is the time of signature's creation. Created *[unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[Time](/k8s.io/kubernetes/pkg/api/unversioned#Time) // If specified, it holds information about an issuer of signing certificate or key (a person or entity // who signed the signing certificate or key). IssuedBy *[SignatureIssuer](#SignatureIssuer) // If specified, it holds information about a subject of signing certificate or key (a person or entity // who signed the image). IssuedTo *[SignatureSubject](#SignatureSubject) } ``` ImageSignature holds a signature of an image. It allows verifying image identity and possibly other claims, as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from the signature's content by the server. They serve an informative purpose only. #### func (*ImageSignature) [GetObjectKind](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/register.go#L50) [¶](#ImageSignature.GetObjectKind) added in v1.3.0 ``` func (obj *[ImageSignature](#ImageSignature)) GetObjectKind() [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ObjectKind](/k8s.io/kubernetes/pkg/api/unversioned#ObjectKind) ``` #### type [ImageStream](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L197) [¶](#ImageStream) ``` type ImageStream struct { [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[TypeMeta](/k8s.io/kubernetes/pkg/api/unversioned#TypeMeta) [kapi](/k8s.io/kubernetes/pkg/api).[ObjectMeta](/k8s.io/kubernetes/pkg/api#ObjectMeta) // Spec describes the desired state of this stream Spec [ImageStreamSpec](#ImageStreamSpec) // Status describes the current state of this stream Status [ImageStreamStatus](#ImageStreamStatus) } ``` ImageStream stores a mapping of tags to images, metadata overrides that are applied when images are tagged in a stream, and an optional reference to a Docker image repository on a registry. 
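Signatures attached to an Image are looked up with the index helpers documented earlier. A hypothetical sketch, assuming `kapi` aliases the vendored k8s.io/kubernetes/pkg/api package; the signature name follows the <image name>@<signature name> convention described for JoinImageSignatureName, and all values are invented:

```
img := imageapi.Image{
	Signatures: []imageapi.ImageSignature{{
		ObjectMeta: kapi.ObjectMeta{Name: "sha256:0123@gpgsig1"},
		Type:       "atomic",
		Content:    []byte("opaque-signature-blob"),
	}},
}
if i := imageapi.IndexOfImageSignatureByName(img.Signatures, "sha256:0123@gpgsig1"); i >= 0 {
	fmt.Println("found signature of type", img.Signatures[i].Type)
}
```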
#### func (*ImageStream) [GetObjectKind](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/register.go#L51) [¶](#ImageStream.GetObjectKind) added in v1.1.3 ``` func (obj *[ImageStream](#ImageStream)) GetObjectKind() [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ObjectKind](/k8s.io/kubernetes/pkg/api/unversioned#ObjectKind) ``` #### type [ImageStreamImage](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L381) [¶](#ImageStreamImage) ``` type ImageStreamImage struct { [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[TypeMeta](/k8s.io/kubernetes/pkg/api/unversioned#TypeMeta) [kapi](/k8s.io/kubernetes/pkg/api).[ObjectMeta](/k8s.io/kubernetes/pkg/api#ObjectMeta) // The Image associated with the ImageStream and image name. Image [Image](#Image) } ``` ImageStreamImage represents an Image that is retrieved by image name from an ImageStream. #### func (*ImageStreamImage) [GetObjectKind](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/register.go#L56) [¶](#ImageStreamImage.GetObjectKind) added in v1.1.3 ``` func (obj *[ImageStreamImage](#ImageStreamImage)) GetObjectKind() [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ObjectKind](/k8s.io/kubernetes/pkg/api/unversioned#ObjectKind) ``` #### type [ImageStreamImport](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L400) [¶](#ImageStreamImport) added in v1.1.2 ``` type ImageStreamImport struct { [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[TypeMeta](/k8s.io/kubernetes/pkg/api/unversioned#TypeMeta) // ObjectMeta must identify the name of the image stream to create or update. If resourceVersion // or UID are set, they must match the image stream that will be loaded from the server. [kapi](/k8s.io/kubernetes/pkg/api).[ObjectMeta](/k8s.io/kubernetes/pkg/api#ObjectMeta) // Spec is the set of items desired to be imported Spec [ImageStreamImportSpec](#ImageStreamImportSpec) // Status is the result of the import Status [ImageStreamImportStatus](#ImageStreamImportStatus) } ``` ImageStreamImport allows a caller to request information about a set of images for possible import into an image stream, or actually tag the images into the image stream. #### func (*ImageStreamImport) [GetObjectKind](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/register.go#L57) [¶](#ImageStreamImport.GetObjectKind) added in v1.1.3 ``` func (obj *[ImageStreamImport](#ImageStreamImport)) GetObjectKind() [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ObjectKind](/k8s.io/kubernetes/pkg/api/unversioned#ObjectKind) ``` #### type [ImageStreamImportSpec](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L413) [¶](#ImageStreamImportSpec) added in v1.1.2 ``` type ImageStreamImportSpec struct { // Import indicates whether to perform an import - if so, the specified tags are set on the spec // and status of the image stream defined by the type meta. Import [bool](/builtin#bool) // Repository is an optional import of an entire Docker image repository. A maximum limit on the // number of tags imported this way is imposed by the server. Repository *[RepositoryImportSpec](#RepositoryImportSpec) // Images are a list of individual images to import. Images [][ImageImportSpec](#ImageImportSpec) } ``` ImageStreamImportSpec defines what images should be imported. 
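A sketch of assembling an import request from these types (names and pull spec invented; `kapi` as above). Setting Import to true asks the server to actually tag the results into the stream rather than merely report them:

```
isi := imageapi.ImageStreamImport{
	ObjectMeta: kapi.ObjectMeta{Name: "mysql", Namespace: "demo"},
	Spec: imageapi.ImageStreamImportSpec{
		Import: true,
		Images: []imageapi.ImageImportSpec{{
			// The RepositoryImportSpec docs note that only kind DockerImage is supported as a source.
			From:            kapi.ObjectReference{Kind: "DockerImage", Name: "docker.io/library/mysql:5.7"},
			To:              &kapi.LocalObjectReference{Name: "5.7"},
			ImportPolicy:    imageapi.TagImportPolicy{Scheduled: true},
			IncludeManifest: true,
		}},
	},
}
fmt.Println(isi.Spec.Images[0].From.Name)
```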
#### type [ImageStreamImportStatus](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L425) [¶](#ImageStreamImportStatus) added in v1.1.2 ``` type ImageStreamImportStatus struct { // Import is the image stream that was successfully updated or created when 'to' was set. Import *[ImageStream](#ImageStream) // Repository is set if spec.repository was set to the outcome of the import Repository *[RepositoryImportStatus](#RepositoryImportStatus) // Images is set with the result of importing spec.images Images [][ImageImportStatus](#ImageImportStatus) } ``` ImageStreamImportStatus contains information about the status of an image stream import. #### type [ImageStreamList](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L187) [¶](#ImageStreamList) ``` type ImageStreamList struct { [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[TypeMeta](/k8s.io/kubernetes/pkg/api/unversioned#TypeMeta) [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ListMeta](/k8s.io/kubernetes/pkg/api/unversioned#ListMeta) Items [][ImageStream](#ImageStream) } ``` ImageStreamList is a list of ImageStream objects. #### func (*ImageStreamList) [GetObjectKind](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/register.go#L52) [¶](#ImageStreamList.GetObjectKind) added in v1.1.3 ``` func (obj *[ImageStreamList](#ImageStreamList)) GetObjectKind() [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ObjectKind](/k8s.io/kubernetes/pkg/api/unversioned#ObjectKind) ``` #### type [ImageStreamMapping](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L336) [¶](#ImageStreamMapping) ``` type ImageStreamMapping struct { [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[TypeMeta](/k8s.io/kubernetes/pkg/api/unversioned#TypeMeta) [kapi](/k8s.io/kubernetes/pkg/api).[ObjectMeta](/k8s.io/kubernetes/pkg/api#ObjectMeta) // The Docker image repository the specified image is located in // DEPRECATED: remove once v1beta1 support is dropped // +k8s:conversion-gen=false DockerImageRepository [string](/builtin#string) // A Docker image. Image [Image](#Image) // A string value this image can be located with inside the repository. Tag [string](/builtin#string) } ``` ImageStreamMapping represents a mapping from a single tag to a Docker image as well as the reference to the Docker image repository the image came from. #### func (*ImageStreamMapping) [GetObjectKind](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/register.go#L53) [¶](#ImageStreamMapping.GetObjectKind) added in v1.1.3 ``` func (obj *[ImageStreamMapping](#ImageStreamMapping)) GetObjectKind() [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ObjectKind](/k8s.io/kubernetes/pkg/api/unversioned#ObjectKind) ``` #### type [ImageStreamSpec](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L208) [¶](#ImageStreamSpec) ``` type ImageStreamSpec struct { // Optional, if specified this stream is backed by a Docker repository on this server DockerImageRepository [string](/builtin#string) // Tags map arbitrary string values to specific image locators Tags map[[string](/builtin#string)][TagReference](#TagReference) } ``` ImageStreamSpec represents options for ImageStreams. #### type [ImageStreamStatus](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L279) [¶](#ImageStreamStatus) ``` type ImageStreamStatus struct { // DockerImageRepository represents the effective location this stream may be accessed at. 
May be empty until the server // determines where the repository is located DockerImageRepository [string](/builtin#string) // A historical record of images associated with each tag. The first entry in the TagEvent array is // the currently tagged image. Tags map[[string](/builtin#string)][TagEventList](#TagEventList) } ``` ImageStreamStatus contains information about the state of this image stream. #### type [ImageStreamTag](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L351) [¶](#ImageStreamTag) ``` type ImageStreamTag struct { [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[TypeMeta](/k8s.io/kubernetes/pkg/api/unversioned#TypeMeta) [kapi](/k8s.io/kubernetes/pkg/api).[ObjectMeta](/k8s.io/kubernetes/pkg/api#ObjectMeta) // Tag is the spec tag associated with this image stream tag, and it may be null // if only pushes have occurred to this image stream. Tag *[TagReference](#TagReference) // Generation is the current generation of the tagged image - if tag is provided // and this value is not equal to the tag generation, a user has requested an // import that has not completed, or Conditions will be filled out indicating any // error. Generation [int64](/builtin#int64) // Conditions is an array of conditions that apply to the image stream tag. Conditions [][TagEventCondition](#TagEventCondition) // The Image associated with the ImageStream and tag. Image [Image](#Image) } ``` ImageStreamTag has a .Name in the format <stream name>:<tag>. #### func (*ImageStreamTag) [GetObjectKind](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/register.go#L54) [¶](#ImageStreamTag.GetObjectKind) added in v1.1.3 ``` func (obj *[ImageStreamTag](#ImageStreamTag)) GetObjectKind() [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ObjectKind](/k8s.io/kubernetes/pkg/api/unversioned#ObjectKind) ``` #### type [ImageStreamTagList](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L373) [¶](#ImageStreamTagList) added in v1.0.8 ``` type ImageStreamTagList struct { [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[TypeMeta](/k8s.io/kubernetes/pkg/api/unversioned#TypeMeta) [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ListMeta](/k8s.io/kubernetes/pkg/api/unversioned#ListMeta) Items [][ImageStreamTag](#ImageStreamTag) } ``` ImageStreamTagList is a list of ImageStreamTag objects. 
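The status half of a stream is a map from tag name to TagEventList (documented below), which SortStatusTags orders by the most recent tag event. A sketch with invented pull specs, assuming `unversioned` aliases the vendored k8s.io/kubernetes/pkg/api/unversioned package:

```
status := imageapi.ImageStreamStatus{
	Tags: map[string]imageapi.TagEventList{
		"2.3":    {Items: []imageapi.TagEvent{{DockerImageReference: "registry.example.com/demo/ruby@sha256:aaa", Created: unversioned.Now()}}},
		"latest": {Items: []imageapi.TagEvent{{DockerImageReference: "registry.example.com/demo/ruby@sha256:bbb", Created: unversioned.Now()}}},
	},
}
// Tag names come back ordered by the creation time of their newest event.
fmt.Println(imageapi.SortStatusTags(status.Tags))
```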
#### func (*ImageStreamTagList) [GetObjectKind](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/register.go#L55) [¶](#ImageStreamTagList.GetObjectKind) added in v1.1.3 ``` func (obj *[ImageStreamTagList](#ImageStreamTagList)) GetObjectKind() [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[ObjectKind](/k8s.io/kubernetes/pkg/api/unversioned#ObjectKind) ``` #### type [RepositoryImportSpec](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L435) [¶](#RepositoryImportSpec) added in v1.1.2 ``` type RepositoryImportSpec struct { // The source of the import, only kind DockerImage is supported From [kapi](/k8s.io/kubernetes/pkg/api).[ObjectReference](/k8s.io/kubernetes/pkg/api#ObjectReference) ImportPolicy [TagImportPolicy](#TagImportPolicy) IncludeManifest [bool](/builtin#bool) } ``` RepositoryImportSpec describes a request to import a set of tags from a given Docker image repository #### type [RepositoryImportStatus](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L444) [¶](#RepositoryImportStatus) added in v1.1.2 ``` type RepositoryImportStatus struct { // Status reflects whether any failure occurred during import Status [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[Status](/k8s.io/kubernetes/pkg/api/unversioned#Status) // Images is the list of imported images Images [][ImageImportStatus](#ImageImportStatus) // AdditionalTags are tags that exist in the repository but were not imported because // a maximum limit of automatic imports was applied. AdditionalTags [][string](/builtin#string) } ``` RepositoryImportStatus describes the outcome of the repository import #### type [SignatureCondition](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L148) [¶](#SignatureCondition) added in v1.3.0 ``` type SignatureCondition struct { // Type of signature condition, Complete or Failed. Type [SignatureConditionType](#SignatureConditionType) // Status of the condition, one of True, False, Unknown. Status [kapi](/k8s.io/kubernetes/pkg/api).[ConditionStatus](/k8s.io/kubernetes/pkg/api#ConditionStatus) // Last time the condition was checked. LastProbeTime [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[Time](/k8s.io/kubernetes/pkg/api/unversioned#Time) // Last time the condition transitioned from one status to another. LastTransitionTime [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[Time](/k8s.io/kubernetes/pkg/api/unversioned#Time) // (brief) reason for the condition's last transition. Reason [string](/builtin#string) // Human readable message indicating details about last transition. Message [string](/builtin#string) } ``` SignatureCondition describes an image signature condition of particular kind at particular probe time. #### type [SignatureConditionType](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L145) [¶](#SignatureConditionType) added in v1.3.0 ``` type SignatureConditionType [string](/builtin#string) ``` SignatureConditionType is a type of image signature condition. #### type [SignatureGenericEntity](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L165) [¶](#SignatureGenericEntity) added in v1.3.0 ``` type SignatureGenericEntity struct { // Organization name. Organization [string](/builtin#string) // Common name (e.g. openshift-signing-service). CommonName [string](/builtin#string) } ``` SignatureGenericEntity holds generic information about a person or entity who is an issuer or a subject of a signing certificate or key. 
#### type [SignatureIssuer](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L173) [¶](#SignatureIssuer) added in v1.3.0 ``` type SignatureIssuer struct { [SignatureGenericEntity](#SignatureGenericEntity) } ``` SignatureIssuer holds information about an issuer of a signing certificate or key. #### type [SignatureSubject](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L178) [¶](#SignatureSubject) added in v1.3.0 ``` type SignatureSubject struct { [SignatureGenericEntity](#SignatureGenericEntity) // If present, it is a human readable key ID of the public key belonging to the subject, used to verify the // image signature. It should contain at least the 64 lowest bits of the public key's fingerprint (e.g. // 0x6<KEY>). PublicKeyID [string](/builtin#string) } ``` SignatureSubject holds information about a person or entity who created the signature. #### type [TagEvent](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L296) [¶](#TagEvent) ``` type TagEvent struct { // When the TagEvent was created Created [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[Time](/k8s.io/kubernetes/pkg/api/unversioned#Time) // The string that can be used to pull this image DockerImageReference [string](/builtin#string) // The image Image [string](/builtin#string) // Generation is the spec tag generation that resulted in this tag being updated Generation [int64](/builtin#int64) } ``` TagEvent is used by ImageStreamStatus to keep a historical record of images associated with a tag. #### func [LatestImageTagEvent](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L549) [¶](#LatestImageTagEvent) added in v1.5.1 ``` func LatestImageTagEvent(stream *[ImageStream](#ImageStream), imageID [string](/builtin#string)) ([string](/builtin#string), *[TagEvent](#TagEvent)) ``` LatestImageTagEvent returns the most recent TagEvent and the tag for the specified image. #### func [LatestTaggedImage](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L572) [¶](#LatestTaggedImage) ``` func LatestTaggedImage(stream *[ImageStream](#ImageStream), tag [string](/builtin#string)) *[TagEvent](#TagEvent) ``` LatestTaggedImage returns the most recent TagEvent for the specified image repository and tag. Will resolve lookups for the empty tag. Returns nil if tag isn't present in stream.status.tags. #### func [ResolveImageID](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L858) [¶](#ResolveImageID) ``` func ResolveImageID(stream *[ImageStream](#ImageStream), imageID [string](/builtin#string)) (*[TagEvent](#TagEvent), [error](/builtin#error)) ``` ResolveImageID returns the latest TagEvent for the specified imageID, and an error if more than one image matches the ID or no matching image exists. #### type [TagEventCondition](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L316) [¶](#TagEventCondition) added in v1.1.2 ``` type TagEventCondition struct { // Type of tag event condition, currently only ImportSuccess Type [TagEventConditionType](#TagEventConditionType) // Status of the condition, one of True, False, Unknown. Status [kapi](/k8s.io/kubernetes/pkg/api).[ConditionStatus](/k8s.io/kubernetes/pkg/api#ConditionStatus) // LastTransitionTime is the time the condition transitioned from one status to another. 
LastTransitionTime [unversioned](/k8s.io/kubernetes/pkg/api/unversioned).[Time](/k8s.io/kubernetes/pkg/api/unversioned#Time) // Reason is a brief machine readable explanation for the condition's last transition. Reason [string](/builtin#string) // Message is a human readable description of the details about last transition, complementing reason. Message [string](/builtin#string) // Generation is the spec tag generation that this status corresponds to. If this value is // older than the spec tag generation, the user has requested this status tag be updated. // This value is set to zero for older versions of streams, which means that no generation // was recorded. Generation [int64](/builtin#int64) } ``` TagEventCondition contains condition information for a tag event. #### type [TagEventConditionType](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L307) [¶](#TagEventConditionType) added in v1.1.2 ``` type TagEventConditionType [string](/builtin#string) ``` ``` const ( // ImportSuccess with status False means the import of the specific tag failed ImportSuccess [TagEventConditionType](#TagEventConditionType) = "ImportSuccess" ) ``` These are valid conditions of TagEvents. #### type [TagEventList](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L289) [¶](#TagEventList) ``` type TagEventList struct { Items [][TagEvent](#TagEvent) // Conditions is an array of conditions that apply to the tag event list. Conditions [][TagEventCondition](#TagEventCondition) } ``` TagEventList contains a historical record of images associated with a tag. #### type [TagImportPolicy](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L242) [¶](#TagImportPolicy) added in v1.1.2 ``` type TagImportPolicy struct { // Insecure is true if the server may bypass certificate verification or connect directly over HTTP during image import. Insecure [bool](/builtin#bool) // Scheduled indicates to the server that this tag should be periodically checked to ensure it is up to date, and imported Scheduled [bool](/builtin#bool) } ``` #### type [TagReference](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L217) [¶](#TagReference) ``` type TagReference struct { // Name of the tag Name [string](/builtin#string) // Optional; if specified, annotations that are applied to images retrieved via ImageStreamTags. Annotations map[[string](/builtin#string)][string](/builtin#string) // Optional; if specified, a reference to another image that this tag should point to. Valid values // are ImageStreamTag, ImageStreamImage, and DockerImage. From *[kapi](/k8s.io/kubernetes/pkg/api).[ObjectReference](/k8s.io/kubernetes/pkg/api#ObjectReference) // Reference states if the tag will be imported. Default value is false, which means the tag will // be imported. Reference [bool](/builtin#bool) // Generation is a counter that tracks mutations to the spec tag (user intent). When a tag reference // is changed the generation is set to match the current stream generation (which is incremented every // time spec is changed). Other processes in the system like the image importer observe that the // generation of spec tag is newer than the generation recorded in the status and use that as a trigger // to import the newest remote tag. To trigger a new import, clients may set this value to zero which // will reset the generation to the latest stream generation. Legacy clients will send this value as // nil which will be merged with the current tag generation. 
Generation *[int64](/builtin#int64) // ImportPolicy is information that controls how images may be imported by the server. ImportPolicy [TagImportPolicy](#TagImportPolicy) // ReferencePolicy defines how other components should consume the image ReferencePolicy [TagReferencePolicy](#TagReferencePolicy) } ``` TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. #### func [FollowTagReference](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L522) [¶](#FollowTagReference) added in v1.1.2 ``` func FollowTagReference(stream *[ImageStream](#ImageStream), tag [string](/builtin#string)) (finalTag [string](/builtin#string), ref *[TagReference](#TagReference), ok [bool](/builtin#bool), multiple [bool](/builtin#bool)) ``` FollowTagReference walks through the defined tags on a stream, following any referential tags in the stream. Will return ok if the tag is valid, multiple if the tag had at least reference, and ref and finalTag will be the last tag seen. If a circular loop is found ok will be false. #### func (TagReference) [HasAnnotationTag](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/helper.go#L1085) [¶](#TagReference.HasAnnotationTag) added in v1.5.1 ``` func (tagref [TagReference](#TagReference)) HasAnnotationTag(searchTag [string](/builtin#string)) [bool](/builtin#bool) ``` #### type [TagReferencePolicy](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L266) [¶](#TagReferencePolicy) added in v1.5.1 ``` type TagReferencePolicy struct { // Type determines how the image pull spec should be transformed when the image stream tag is used in // deployment config triggers or new builds. The default value is `Source`, indicating the original // location of the image should be used (if imported). The user may also specify `Local`, indicating // that the pull spec should point to the integrated Docker registry and leverage the registry's // ability to proxy the pull to an upstream registry. `Local` allows the credentials used to pull this // image to be managed from the image stream's namespace, so others on the platform can access a remote // image but have no access to the remote secret. It also allows the image layers to be mirrored into // the local registry which the images can still be pulled even if the upstream registry is unavailable. Type [TagReferencePolicyType](#TagReferencePolicyType) } ``` TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. #### type [TagReferencePolicyType](https://github.com/openshift/origin/blob/v1.5.1/pkg/image/api/types.go#L251) [¶](#TagReferencePolicyType) added in v1.5.1 ``` type TagReferencePolicyType [string](/builtin#string) ``` TagReferencePolicyType describes how pull-specs for images in an image stream tag are generated when image change triggers are fired. ``` const ( // SourceTagReferencePolicy indicates the image's original location should be used when the image stream tag // is resolved into other resources (builds and deployment configurations). 
SourceTagReferencePolicy [TagReferencePolicyType](#TagReferencePolicyType) = "Source" // LocalTagReferencePolicy indicates the image should prefer to pull via the local integrated registry, // falling back to the remote location if the integrated registry has not been configured. The reference will // use the internal DNS name or registry service IP. LocalTagReferencePolicy [TagReferencePolicyType](#TagReferencePolicyType) = "Local" ) ```
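To close the section, here is a short hedged Go sketch (the `imageapi` alias, package name, and helper name are assumptions) of how the tag helpers documented above combine: resolve referential tags first, then look up the newest TagEvent:

```
package imageutil

import (
	"fmt"

	imageapi "github.com/openshift/origin/pkg/image/api"
)

// latestPullSpec resolves a (possibly referential) tag on a stream and
// returns the pull spec of its most recent TagEvent, if any.
func latestPullSpec(stream *imageapi.ImageStream, tag string) (string, bool) {
	// FollowTagReference reports ok=false for unknown tags or circular loops.
	finalTag, _, ok, _ := imageapi.FollowTagReference(stream, tag)
	if !ok {
		return "", false
	}
	// LatestTaggedImage returns nil if the tag is absent from status.tags.
	event := imageapi.LatestTaggedImage(stream, finalTag)
	if event == nil {
		return "", false
	}
	fmt.Printf("tag %q resolves to %s\n", tag, event.DockerImageReference)
	return event.DockerImageReference, true
}
```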
phoenix_storybook
hex
Erlang
PhoenixStorybook
===

PhoenixStorybook provides a [*storybook-like*](https://storybook.js.org) UI interface for your Phoenix components.

* Explore all your components, and showcase them with different variations.
* Browse your component's documentation, with their supported attributes.
* Learn how components behave by using an interactive playground.

![screenshot](https://github.com/phenixdigital/phoenix_storybook/raw/main/screenshots/screenshot-01.jpg)
![screenshot](https://github.com/phenixdigital/phoenix_storybook/raw/main/screenshots/screenshot-02.jpg)

[How does it work?](#module-how-does-it-work)
---

PhoenixStorybook is mounted in your application router and serves its UI at the mounting point of your choice. It performs automatic discovery of your storybook content under a specified folder (`:content_path`) and then automatically generates a storybook navigation sidebar. Every module detected in your content folder will be loaded and identified as a storybook entry.

Three kinds of stories are supported:

* `component` to describe your stateless function components or your live_components.
* `page` to write & document UI guidelines, or whatever content you want.
* `example` to show how your components can be used and mixed in real UI pages.

[Installation](#module-installation)
---

To start using [`PhoenixStorybook`](PhoenixStorybook.html#content) in your phoenix application you will need to follow these steps:

1. Add the `phoenix_storybook` dependency
2. Run the generator

### [1. Add the `phoenix_storybook` dependency](#module-1-add-the-phoenix_storybook-dependency)

Add the following to your mix.exs and run mix deps.get:

```
def deps do
  [
    {:phoenix_storybook, "~> 0.5.0"}
  ]
end
```

> **Note**
> When picking a github version of the library (instead of an official hex.pm release) you
> need to get the storybook's assets compiled.
> To do so, please run [`mix dev.storybook`](Mix.Tasks.Dev.Storybook.html).

### [2. Run the generator](#module-2-run-the-generator)

Run from the root of your application:

```
$> mix deps.get
$> mix phx.gen.storybook
```

And you are ready to go!

ℹ️ If you prefer manual setup, please read the [setup guide](setup.html).

### [Configuration](#module-configuration)

Of all config settings, only the `:otp_app` and `:content_path` keys are mandatory.

```
# lib/my_app_web/storybook.ex
defmodule MyAppWeb.Storybook do
  use PhoenixStorybook,
    # OTP name of your application.
    otp_app: :my_app,

    # Path to your storybook stories (required).
    content_path: Path.expand("../storybook", __DIR__),

    # Path to your JS asset, which will be loaded just before PhoenixStorybook's own
    # JS. It's mainly intended to define your LiveView Hooks in `window.storybook.Hooks`.
    # Remote path (not local file-system path) which means this file should be served
    # by your own application endpoint.
    js_path: "/assets/storybook.js",

    # Path to your components stylesheet.
    # Remote path (not local file-system path) which means this file should be served
    # by your own application endpoint.
    css_path: "/assets/storybook.css",

    # This CSS class will be put on storybook container elements where your own styles should
    # prevail. See the `guides/sandboxing.md` guide for more details.
    sandbox_class: "my-app-sandbox",

    # Custom storybook title. Default is "Live Storybook".
    title: "My Live Storybook",

    # Theme settings.
    # Each theme must have a name, and an optional dropdown_class.
    # When set, a dropdown is displayed in the storybook header to let the user pick a theme.
    # The dropdown_class is used to render the theme in the dropdown and identify which current
    # theme is active.
    #
    # The chosen theme key will be passed as an assign to all components.
    # ex: <.component theme={:colorful}/>
    #
    # The chosen theme class will also be added to the `.lsb-sandbox` container.
    # ex: <div class="lsb-sandbox theme-colorful">...</div>
    #
    # If no theme has been selected or if no theme is present in the URL, the first one is enabled.
    themes: [
      default: [name: "Default"],
      colorful: [name: "Colorful", dropdown_class: "text-pink-400"]
    ],

    # If you want to use custom FontAwesome icons.
    font_awesome_plan: :pro, # default value is :free
    font_awesome_kit_id: "foo8b41bar4625",
    font_awesome_rendering: :webfont, # default value is :svg

    # Story compilation mode, can be either `:eager` or `:lazy`.
    # It defaults to `:lazy` in the dev environment, `:eager` in other environments.
    # - When eager: all .story.exs & .index.exs files are compiled upfront.
    # - When lazy: only .index.exs files are compiled upfront and .story.exs files are compiled
    #   when the matching story is loaded in the UI.
    compilation_mode: :eager
end
```

All settings can be overridden from your config files.

```
# config/config.exs
config :my_app, MyAppWeb.Storybook,
  content_path: "overridden/content/path"
```

ℹ️ Learn more on theming components in the [theming guide](theming.html), and on icons in the [icons](icons.html) guide.

PhoenixStorybook.BackendBehaviour behaviour
===

Behaviour implemented by your backend module.

[Summary](#summary)
===

[Callbacks](#callbacks)
---

[config(key, default)](#c:config/2)
Returns a configuration value from your config.exs storybook settings.

[content_tree()](#c:content_tree/0)
Returns a precompiled tree of your storybook stories.

[find_entry_by_path(t)](#c:find_entry_by_path/1)
Returns an entry from its absolute storybook path (not filesystem).

[flat_list()](#c:flat_list/0)
Returns all the nodes (stories & folders) of the storybook content tree as a flat list.

[leaves()](#c:leaves/0)
Returns all the leaves (only stories) of the storybook content tree.

[storybook_path(atom)](#c:storybook_path/1)
Returns a storybook path from a story module.

[Callbacks](#callbacks)
===

PhoenixStorybook.Guides
===

This module is meant to be used from the generated `welcome.story.exs` page. It renders HTML markup from markdown guides located in the guides/ folder.

Markup is precompiled because:

* we don't want to force the user application to embed Earmark
* we don't want to put markdown guides in priv

[Examples](#module-examples)
---

```
Guides.markup("components.md")
Guides.markup("icons.md")
```

[Summary](#summary)
===

[Functions](#functions)
---

[markup(binary)](#markup/1)

[Functions](#functions)
===

PhoenixStorybook.Index
===

An index is an optional file you can create in every folder of your storybook content tree to improve the rendering of the storybook sidebar.

Index files can be used:

* to customize the folder itself: name, icon and opening status.
* to customize the folder's direct children (only stories): name and icon.

Indexes must be created as `index.exs` files. Read the [icons](icons.html) guide for more information on custom icon usage.
[Usage](#module-usage)
---

```
# storybook/_components.index.exs
defmodule MyAppWeb.Storybook.Components do
  use PhoenixStorybook.Index

  def folder_name, do: "My Components"
  def folder_icon, do: {:fa, "icon"}
  def folder_open?, do: true

  def entry("a_component"), do: [name: "My Component"]
  def entry("other_component"), do: [name: "Another Component", icon: {:fa, "icon", :thin}]
end
```

[Summary](#summary)
===

[Functions](#functions)
---

[__using__(_)](#__using__/1)
Convenience helper for using the functions above.

[Functions](#functions)
===

PhoenixStorybook.Rendering.CodeRenderer
===

Responsible for rendering your components' code snippets, for a given `PhoenixStorybook.Variation`. Uses the [`Makeup`](https://hexdocs.pm/makeup/1.1.0/Makeup.html) library for syntax highlighting.

[Summary](#summary)
===

[Functions](#functions)
---

[render(context)](#render/1)
Renders a component code snippet from a `RenderingContext`. Returns a [`Phoenix.LiveView.Rendered`](https://hexdocs.pm/phoenix_live_view/0.20.0/Phoenix.LiveView.Rendered.html).

[render(fun_or_mod, context)](#render/2)

[render_component_source(story)](#render_component_source/1)
Renders the source of a component story. Returns a rendered HEEx template.

[render_source(source, assigns \\ %{__changed__: %{}})](#render_source/2)

[Functions](#functions)
===

PhoenixStorybook.Rendering.ComponentRenderer
===

Responsible for rendering your function & live components.

[Summary](#summary)
===

[Functions](#functions)
---

[render(context)](#render/1)
Renders a component from a `RenderingContext`. Returns a [`Phoenix.LiveView.Rendered`](https://hexdocs.pm/phoenix_live_view/0.20.0/Phoenix.LiveView.Rendered.html).

[render(fun_or_mod, context)](#render/2)

[Functions](#functions)
===

PhoenixStorybook.Rendering.RenderingContext
===

A struct holding all data needed by `ComponentRenderer` and `CodeRenderer` to render story variations.

[Summary](#summary)
===

[Functions](#functions)
---

[build(backend_module, story, variation_or_group, extra_attributes, options \\ [])](#build/5)

[Functions](#functions)
===

PhoenixStorybook.Router
===

Provides LiveView routing for storybook.

[Summary](#summary)
===

[Functions](#functions)
---

[live_storybook(path, opts)](#live_storybook/2)
Defines a PhoenixStorybook route.

[storybook_assets(path \\ "/storybook/assets")](#storybook_assets/1)
Defines routes for PhoenixStorybook static assets.

[Functions](#functions)
===

PhoenixStorybook.Stories.Attr
===

An attr is one of your component attributes. Its structure mimics the LiveView 0.18.0 declarative assigns. Attribute declarations will populate the Playground tab of your storybook, for each of your components.

[Summary](#summary)
===

[Types](#types)
---

[t()](#t:t/0)

* `id`: the attribute id (required). Should match your component assign.
* `type`: the attribute type (required). Must be one of:
  + `:any` - any term
  + `:string` - any binary string
  + `:atom` - any atom
  + `:boolean` - any boolean
  + `:integer` - any integer
  + `:float` - any float
  + `:map` - any map
  + `:list` - a list of any arbitrary types
  + `:global` - any common HTML attributes
  + any struct module
* `required`: `true` if the attribute is mandatory.
* `default`: attribute default value.
* `examples`: the list or range of examples suggested for the attribute.
* `values`: the list or range of all possible values for the attribute. Unlike examples, this option enforces validation of the default value against the given list.
* `doc`: a text documentation for this attribute.
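For illustration, here is a minimal sketch of a story declaring such attributes (the `greeting` component and its assigns are hypothetical, not part of the library):

```
# storybook/greeting.story.exs
defmodule MyAppWeb.Storybook.Greeting do
  use PhoenixStorybook.Story, :component

  alias PhoenixStorybook.Stories.Attr

  def function, do: &MyAppWeb.Greeting.greeting/1

  # Declared attributes drive the Playground tab for this component.
  def attributes do
    [
      %Attr{id: :name, type: :string, required: true, doc: "Name to greet"},
      %Attr{id: :color, type: :atom, default: :blue, examples: [:blue, :red]}
    ]
  end
end
```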
[Types](#types)
===

PhoenixStorybook.Stories.Doc
===

Functions to fetch component documentation and render it as HTML.

[Summary](#summary)
===

[Functions](#functions)
---

[fetch_doc_as_html(story)](#fetch_doc_as_html/1)
Fetches component documentation from the component source and formats it as HTML.

[Functions](#functions)
===

PhoenixStorybook.Stories.Slot
===

A slot is one of your component slots. Its structure mimics the LiveView 0.18.0 declarative assigns. Slot declarations will populate the Playground tab of your storybook, for each of your components.

Supported keys:

* `id`: the slot id (required). Should match your component slot name. Use the id `:inner_block` for your component default slot.
* `doc`: a text documentation for this slot.
* `required`: `true` if the slot is mandatory.

[Summary](#summary)
===

[Types](#types)
---

[t()](#t:t/0)

[Types](#types)
===

PhoenixStorybook.Stories.Variation
===

A variation captures the rendered state of a UI component. Developers write multiple variations per component that describe all the "interesting" states a component can support. Each variation will be displayed in the storybook as a code snippet alongside the component preview.

Variation attribute types are checked against their matching attribute (if any), and a compilation error will be raised in case of mismatch.

Advanced component & variation documentation is available in the [components guide](components.html).

[Usage](#module-usage)
---

```
def variations do
  [
    %Variation{
      id: :default,
      description: "Default dropdown",
      attributes: %{
        label: "A dropdown"
      },
      slots: [
        ~s|<:entry path="#" label="Account settings"/>|,
        ~s|<:entry path="#" label="Support"/>|,
        ~s|<:entry path="#" label="License"/>|
      ]
    }
  ]
end
```

[Summary](#summary)
===

[Types](#types)
---

[t()](#t:t/0)

[Types](#types)
===

PhoenixStorybook.Stories.VariationGroup
===

A variation group is a set of similar variations that will be rendered together in a single preview `<pre>` block.

[Usage](#module-usage)
---

```
def variations do
  [
    %VariationGroup{
      id: :colors,
      description: "Different color buttons",
      variations: [
        %Variation{
          id: :blue_button,
          attributes: %{label: "A button", color: :blue}
        },
        %Variation{
          id: :red_button,
          attributes: %{label: "A button", color: :red}
        },
        %Variation{
          id: :green_button,
          attributes: %{label: "A button", color: :green}
        }
      ]
    }
  ]
end
```

[Summary](#summary)
===

[Types](#types)
---

[t()](#t:t/0)

[Types](#types)
===

PhoenixStorybook.Story
===

A story designates any kind of content in your storybook. For now only the following kinds of stories are supported: `:component`, `:live_component`, and `:page`.

In order to populate your storybook, just create *story* scripts under your content path, and implement their required behaviour. Stories must be created as `story.exs` files. In the dev environment, stories are lazily compiled when reached from the UI.

[Usage](#module-usage)
---

### [Component](#module-component)

Implement your component as such. Refer to:

* `PhoenixStorybook.Variation` documentation for variations.
* `PhoenixStorybook.Attr` documentation for attributes.

```
# storybook/my_component.exs
defmodule MyAppWeb.Storybook.MyComponent do
  use PhoenixStorybook.Story, :component

  # required
  def function, do: &MyAppWeb.MyComponent.my_component/1

  def attributes, do: []
  def slots, do: []
  def variations, do: []
end
```

### [Live Component](#module-live-component)

Very similar to components, except that you need to define a `component/0` function instead of `function/0`.
```
# storybook/my_live_component.exs
defmodule MyAppWeb.Storybook.MyLiveComponent do
  use PhoenixStorybook.Story, :live_component

  # required
  def component, do: MyAppWeb.MyLiveComponent

  def attributes, do: []
  def slots, do: []
  def variations, do: []
end
```

ℹ️ Learn more on components in the [components guide](components.html).

### [Page](#module-page)

A page is a fairly simple story that can be used to write whatever content you want. We use it to provide some UI guidelines.

You should implement the render function and an optional navigation function, if you want a tab-based sub-navigation. The current tab is passed as `:tab` in `render/1` assigns.

```
# storybook/my_page.exs
defmodule MyAppWeb.Storybook.MyPage do
  use PhoenixStorybook.Story, :page

  def doc, do: "My page description"

  def navigation do
    [
      {:tab_one, "Tab One", {:fa, "book"}},
      {:tab_two, "Tab Two", {:fa, "cake", :solid}}
    ]
  end

  def render(assigns) do
    ~H"<div>Your HEEX template</div>"
  end
end
```

### [Example](#module-example)

An example is a real-world UI showcasing how your components can be used and mixed in complex UI interfaces. Examples are rendered as a child LiveView, so you can implement `mount/3`, `render/1` or any `handle_event/3` callback. Unfortunately `handle_params/3` cannot be defined in a child LiveView.

By default, your example story's source code will be shown in a dedicated tab. But you can show the source code of additional files by implementing the `extra_sources/0` function, which should return a list of relative paths to your example's related files.

```
# storybook/my_example.story.exs
defmodule MyAppWeb.Storybook.MyPage do
  use PhoenixStorybook.Story, :example

  def doc, do: "My page description"

  def extra_sources do
    [
      "./template.html.heex",
      "./my_page_html.ex"
    ]
  end

  def mount(_, _, socket), do: {:ok, socket}

  def render(assigns) do
    ~H"<div>Your HEEX template</div>"
  end
end
```

[Summary](#summary)
===

[Functions](#functions)
---

[__using__(which)](#__using__/1)
Convenience helper for using the functions above.

[Functions](#functions)
===

mix dev.storybook
===

Makes sure your storybook local dependency has all its assets packaged in priv.

```
$> mix dev.storybook
```

mix phx.gen.storybook
===

Generates a storybook and provides setup instructions.

```
$> mix phx.gen.storybook
```

The generated files will contain:

* a storybook backend in `lib/my_app_web/storybook.ex`
* a custom js file in `assets/js/storybook.js`
* a custom css file in `assets/css/storybook.css`
* scaffolding including example stories for your own storybook in `storybook/`

The generator supports the `--no-tailwind` flag if you want to skip the TailwindCSS-specific bits.

Component stories
===

Basic component documentation is in [`PhoenixStorybook.Story`](PhoenixStorybook.Story.html).

[Documentation](#documentation)
---

Component documentation is fetched from your component doc tags:

* For a live_component, it fetches `@moduledoc` content.
* For a function component, it fetches `@doc` content from the matching function.

If you are deploying `phoenix_storybook` in production with an Elixir release, make sure your doc chunks are not [stripped out from the release](https://hexdocs.pm/mix/Mix.Tasks.Release.html#module-customization).

```
releases: [
  my_app_web: [
    strip_beams: [
      keep: ["Docs"]
    ]
  ]
]
```

[Variation groups](#variation-groups)
---

You may want to present different variations of a component in a single variation block. It is possible using `PhoenixStorybook.VariationGroup`.
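As a quick sketch (the `size` attribute is an assumption about the component under test), a group rendering three button sizes in one preview block could look like this:

```
def variations do
  [
    %VariationGroup{
      id: :sizes,
      description: "Button sizes",
      # One variation per size, all rendered in the same preview block.
      variations:
        for size <- [:sm, :md, :lg] do
          %Variation{id: size, attributes: %{label: "A button", size: size}}
        end
    }
  ]
end
```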
[Container](#container)
---

By default, each variation is rendered within a `div` in the storybook DOM. You can pass additional HTML attributes or extend the class attribute.

```
# storybook/my_component.story.exs
defmodule Storybook.MyComponent do
  use PhoenixStorybook.Story, :component

  def container, do: {:div, class: "block"}
end
```

If you need further *sandboxing* you can opt in for `iframe` rendering.

```
# storybook/my_component.story.exs
defmodule Storybook.MyComponent do
  use PhoenixStorybook.Story, :component

  def container, do: :iframe
end
```

ℹ️ Learn more on this topic in the [sandboxing guide](sandboxing.html).

[Aliases & Imports](#aliases-imports)
---

When using nested components or JS commands, you might need to reference other functions or components. Whilst it is possible to use fully qualified module names, you might want to provide custom *aliases* and *imports*.

Here is an example defining both:

```
defmodule NestedComponent do
  use PhoenixStorybook.Story, :component

  def function, do: &NestedComponent.nested_component/1

  def aliases, do: [MyStorybook.Helpers.JSHelpers]
  def imports, do: [{NestedComponent, nested: 1}]

  def variations do
    [
      %Variation{
        id: :default,
        slots: [
          """
          <.nested phx-click={JSHelpers.toggle()}>hello</.nested>
          <.nested phx-click={JSHelpers.toggle()}>world</.nested>
          """
        ]
      }
    ]
  end
end
```

[Templates](#templates)
---

You may want to render your components within some wrapping markup, for instance when your component can only be used as a slot of another enclosing component.

Some components, such as *modals*, *slideovers*, and *notifications*, are not visible from the start: they first need user interaction. Such components can be accompanied by an outer template that will, for instance, render a button next to the component to toggle its visibility state.

### [Variation templates](#variation-templates)

You can define a template in your component story by defining a `template/0` function. Every variation will be rendered within the defined template; the variation itself is injected in place of `<.lsb-variation/>`.

```
def template do
  """
  <div class="my-custom-wrapper">
    <.lsb-variation/>
  </div>
  """
end
```

You can also override the template, per variation or variation group, by setting the `:template` key on your variation. Setting it to a falsy value will disable templating for this variation.

### [Variation group templates](#variation-group-templates)

Variation groups can also leverage templating:

* either by wrapping every variation in its own template:

```
"""
<div class="one-wrapper-for-each-variation">
  <.lsb-variation/>
</div>
"""
```

* or by wrapping all variations as a whole, in a single template:

```
"""
<div class="a-single-wrapper-for-all">
  <.lsb-variation-group/>
</div>
"""
```

If you need a unique id, you can use `:variation_id`, which will be replaced at rendering time by the current variation (or variation group) id.

### [Placeholder attributes](#placeholder-attributes)

In a template, you can pass some extra attributes to your variation. Just add them to the `.lsb-variation` or `.lsb-variation-group` placeholder.

```
"""
<.form :let={f} for={%{}} as={:user}>
  <.lsb-variation form={f}/>
</.form>
"""
```

### [JS-controlled visibility](#js-controlled-visibility)

Here is an example of a templated component managing its visibility client-side, by toggling CSS classes through [JS commands](https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.JS.html).
```
defmodule Storybook.Components.Modal do
  use PhoenixStorybook.Story, :component

  def function, do: &Components.Modal.modal/1

  def template do
    """
    <div>
      <button phx-click={Modal.show_modal()}>Open modal</button>
      <.lsb-variation/>
    </div>
    """
  end

  def variations do
    [
      %Variation{
        id: :default_modal,
        slots: ["<:body>hello world</:body>"]
      }
    ]
  end
end
```

### [Elixir-controlled visibility](#elixir-controlled-visibility)

Some components don't rely on JS commands but need external assigns, like a modal that takes a `show={true}` or `show={false}` assign to manage its visibility state. [`PhoenixStorybook`](PhoenixStorybook.html) handles special `assign` and `toggle` events that you can leverage to update properties that will be passed to your components as *extra assigns*.

```
defmodule Storybook.Components.Slideover do
  use PhoenixStorybook.Story, :component

  def function, do: &Components.Slideover.slideover/1

  def template do
    """
    <div>
      <button phx-click={JS.push("assign", value: %{show: true})}>
        Open slideover
      </button>
      <.lsb-variation/>
    </div>
    """
  end

  def variations do
    [
      %Variation{
        id: :default_slideover,
        attributes: %{
          close_event: JS.push("assign", value: %{variation_id: :default_slideover, show: false})
        },
        slots: ["<:body>Hello world</:body>"]
      }
    ]
  end
end
```

### [Template code preview](#template-code-preview)

By default, the code preview renders the variation together with its template markup. You can choose to render only the variation markup, without its surrounding template, by using the `lsb-code-hidden` HTML attribute.

```
"""
<div lsb-code-hidden>
  <button phx-click={Modal.show_modal()}>Open modal</button>
  <.lsb-variation/>
</div>
"""
```

[Block, slots & let](#block-slots-let)
---

LiveView lets you define blocks of HEEx content in your components, referred to as slots. They can be passed in your variations with the `:slots` key:

```
%Variation{
  id: :modal,
  slots: [
    """
    <:button>
      <button type="button">Cancel</button>
    </:button>
    """,
    """
    <:button>
      <button type="button">OK</button>
    </:button>
    """
  ]
}
```

You can also use the [LiveView let mechanism](https://hexdocs.pm/phoenix_live_view/Phoenix.Component.html#module-default-slots) to pass data to your default block. You just need to **declare the let attribute** you are using in your variation.

```
%Variation{
  id: :list,
  attributes: %{stories: ~w(apple banana cherry)},
  let: :entry,
  slots: [
    "I like <%= entry %>"
  ]
}
```

The `let` syntax can also be used with named slots, but requires no specific storybook setup.

```
%Variation{
  id: :table,
  attributes: %{
    rows: [
      %{first_name: "Jean", last_name: "Dupont"},
      %{first_name: "Sam", last_name: "Smith"}
    ]
  },
  slots: [
    """
    <:col :let={user} label="First name">
      <%= user.first_name %>
    </:col>
    """,
    """
    <:col :let={user} label="Last name">
      <%= user.last_name %>
    </:col>
    """
  ]
}
```

[Late evaluation](#late-evaluation)
---

In some cases, you may want to pass your variation attributes a complex value that should be evaluated at runtime, but not in the code preview (where you would rather see the original expression). Take, for instance, the following variation of a `Modal` component.

```
%Variation{
  attributes: %{
    :"on-open": JS.push("open"),
    :"on-close": {:eval, ~s|JS.push("close")|}
  }
}
```

Both open & close events would work, but the code would be rendered like this:

```
<.modal
  on-open="%Phoenix.LiveView.JS{ops: [["push", %{event: "open"}]]}"
  on-close={JS.push("close")}
/>
```

[API Reference](api-reference.html)
[Next Page → Custom Icons](icons.html)
github.com/kyma-project/kyma/components/apiserver-proxy
go
Go
README [¶](#section-readme)
---

### API Server Proxy

#### Overview

The Kyma API Server Proxy is a core component that uses JWT authentication to secure access to the Kubernetes API server. It is based on the [kube-rbac-proxy](https://github.com/brancz/kube-rbac-proxy) project. This [Helm chart](https://github.com/kyma-project/kyma/blob/974b44efe253/resources/apiserver-proxy/Chart.yaml) outlines the component's installation.

#### Prerequisites

Use these tools to work with the API Server Proxy:

* [Go](https://golang.org)
* [Docker](https://www.docker.com/)

#### Details

This section describes:

* How to run the component locally
* How to build the Docker image for the production environment
* How to use the environment variables
* How to test the Kyma API Server Proxy

##### Run the component locally

Run Minikube to use the API Server Proxy locally. Run this command to start the application without building the binary:

```
go run cmd/proxy/main.go
```

##### API Server Proxy configuration

You can use command-line flags to configure the API Server Proxy. Use these flags to secure the Kubernetes API server with JWT authentication:

```
--upstream="https://kubernetes.default"
    The upstream URL to proxy to once requests have successfully been authenticated and authorized.
--oidc-issuer="https://dex.{{ DOMAIN }}"
    The URL of the OpenID issuer. Only the HTTPS scheme is accepted. If set, it is used to verify the OIDC JSON Web Token (JWT).
--oidc-clientID=""
    The client ID for the OpenID Connect client. Must be set if oidc-issuer-url is set.
--oidc-ca-file="path/to/cert/file"
    If set, the OpenID server's certificate is verified by one of the authorities in the oidc-ca-file; otherwise, the host's root CA set is used.
```

Find more details about the available flags [here](https://github.com/brancz/kube-rbac-proxy/blob/master/README.md).

##### Test

Run all tests:

```
go test -v ./...
```

Run all tests with logging enabled:

```
go test -v ./... -args --logtostderr=true --v=10
```
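For reference, a plausible local invocation combining the configuration flags above might look as follows; the issuer host and client ID are placeholders, not values from this repository:

```
go run cmd/proxy/main.go \
  --upstream="https://kubernetes.default" \
  --oidc-issuer="https://dex.kyma.example.com" \
  --oidc-clientID="kyma-client" \
  --oidc-ca-file="path/to/cert/file"
```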
packagist_oway_trip_group_laravel_bmgv2.jsonl
personal_doc
Unknown
# Reach millions of new online customers through our global network of top travel brands

Digital solutions for operators of attractions, tours, & activities: a Distribution Network and a Booking System that enable your business with technology to manage all your sales channels.

What we can do for your business:

* **BeMyGuest Distribution Network**: reach millions of new online customers through our global network of top travel brands.
* **BeMyGuest Agents Marketplace**: expose your products to thousands of travel agents looking for activities for their offline customers.
* **Xplore Ecommerce**: improve your website's checkout flow and increase online customer sales 24/7.
* **Xplore Web POS**: speed up your front-of-house operations and digitise your ticket counter sales.
* **Xplore B2B Portal**: automate your corporate clients, groups and agents bookings.

We design solutions with your business in mind: Attractions & Theme Parks, Travel & Local Activities, Museums & Wildlife Parks, Events & Unique Experiences, Private & Shared Day Tours, Adventure & Water Activities.

Solutions for all your customers. Our varied system modules can process bookings from all types of customers and sales channels:

* B2C customers (travelers, online customers, locals, walk-ins): Ecommerce module, Web Point-of-Sale module.
* B2B customers (online travel agents, offline travel agents, groups, corporates): B2B Portal module, Agents Marketplace, Content & Booking API.

Want to find out more? Our team will get in touch to understand your business goals and suggest how we can best help you.

# making it work, together

## Reservation Technology

Besides offering our own reservation system 'Xplore', BeMyGuest is also fully integrated with the top reservation systems out there. Our vast connectivity ensures that we have access to the largest number of travel activity products in real time and on an instant confirmation basis. Our promise to our Network partners is that we will resolve their supply requirements. We are constantly integrating new reservation systems; this is only a selection of partners.

## Payment Gateways

Over the years we have integrated the most popular local Payment Gateways in APAC, to ensure users of our Booking System are able to collect payments directly from their customers in as many local payment modes as possible. We are constantly integrating new Payment Gateways; this is only a selection of partners.

## Hardware Options

BeMyGuest works with selected hardware vendors and will be able to find a suitable solution for your business needs. Our WebPOS software is suitable for most hardware; we will try to retain existing hardware you may already have invested in, but we are also able to advise on new models if you are looking for an upgrade.

## System Integrations

BeMyGuest is fully integrated with some of the most popular CRM and Accounting systems in the region. But don't worry: if you are not using any of these, we are happy to help you with a custom integration to the system of your choice.

# BeMyGuest Pte. Ltd

contact us: We look forward to hearing from you.

BeMyGuest Pte. Ltd
137 Cecil St, #10-01/05 Cecil Building
Singapore 069537

* Operators & Suppliers: <EMAIL>
* Distributors & Partners: <EMAIL>
* Customer Service: <EMAIL>

# We will contact you soon

Our team will get in touch with you. Please be patient, we receive a lot of inquiries every day. Thank you; our team will get in touch with you within 72 hours.

# join our growing family

Join some of the most experienced talent in the travel activities space and contribute to continuous innovation and change. Got skills? Apply below.

News: BeConnected Industry Insights & News

* Press: Hong Kong Disneyland Resort will launch the world's first and largest "Frozen" themed land, World of Frozen, on November 20 (September 12, 2023, by BeMyGuest)
* Media Coverage: What Do Asia Pacific Travelers Want? (March 29, 2023, Arival)
* Media Coverage: Southeast Asia's Fragmented Tours Sector Gets Long-Awaited Tech Upgrade (March 29, 2023, Skift)
* Media Coverage: A "Pretty Much Recovered" BeMyGuest Waits For Next Wave of Travel (August 29, 2022, WIT)
* Media Coverage: GX Phocuswright Exclusive Podcast Blanca Menchaca (January 10, 2022, GuestX Podcast)
* Media Coverage: Video: BeMyGuest on the Tech Opportunity in Tours and Activities (December 6, 2021, PhocusWire)
* Media Coverage: Video: Phocuswright's New Look of the Tours and Activities Marketplace (December 6, 2021, Arival)
* Insights: Singapore & Malaysia - VTL Opens! (November 30, 2021, by <NAME>)
* Insights: [WATCH] Phocuswright Conference 2021 - New Look of Tours and Activities Marketplace (November 23, 2021, by <NAME>)
* Media Coverage: Living the Life of Riley in Cornwall (October 14, 2021, WIT)
* Media Coverage: Business Travel Must Go On: How a Singapore-based Entrepreneur Did It Amid COVID Restrictions (October 5, 2021, WIT)
* Media Coverage: Who's connected to Google Things To Do (September 24, 2021, Arival)
* Media Coverage: Google launches sustainability initiatives, confirms "Things to do" changes (September 23, 2021, PhocusWire)
* Insights: Dubai Expo 2020 : OFFICIAL RESELLER! (September 22, 2021, by <NAME>)
* Insights: BeMyGuest Integrates with Google for Tour and Activity Booking Links (August 27, 2021, by <NAME>)
* Media Coverage: BeMyGuest integrates with Google (August 25, 2021, Travel Daily News)
* Media Coverage: BeMyGuest integrates with Google (August 25, 2021, TTG Asia)
* Media Coverage: BeMyGuest integrates with Google (August 24, 2021, WIT)
get in touch # Hong Kong Disneyland Resort will launch the world’s first and largest "Frozen" themed land, World of Frozen, on November 20 Hong Kong, 12 September, 2023 — Hong Kong Disneyland Resort is getting ready for the opening of World of Frozen, the world’s first and largest “Frozen” themed land, on November 20. Inspired by the Walt Disney Animation Studios’ films, “Frozen” and “Frozen 2,” which are among the biggest animated films of all time, World of Frozen will transport guests to the cinematic and living land - Arendelle. Guests will delve into this immersive travel destination, with its captivating stories, beloved characters, culture, stunning landscapes, enchanting music, and cutting-edge technology for the first time in forever by celebrating Summer Snow Day and enjoying the fun-filled attractions such as Frozen Ever After and Wandering Oaken’s Sliding Sleighs. <NAME>, managing director of Hong Kong Disneyland Resort, said: "Today, on the 18th anniversary of Hong Kong Disneyland Resort, we’re thrilled to be sharing more details on World of Frozen. World of Frozen is an integral part of the park’s latest expansion and growth, and with the launch on November 20, we look forward to welcoming even more guests from around the world. We are committed to continually delivering new and innovative offerings so that our guests can enjoy a magical experience with each and every visit to the resort — we can’t wait from them to explore the kingdom of Arendelle.”Explore the living land of Arendelle and celebrate Summer Snow Day Summer Snow Day commemorates the day that Anna saved Elsa and the kingdom with an act of true love. Guests are invited to visit this whimsical realm and rejoice in the lively festivities! World of Frozen puts a number of iconic scenes in the “Frozen” movies to life. Some of its signature spots include the North Mountain, with its peak as the highest point in Arendelle; the Ice Palace where Elsa unleashes her icy powers freely for the first time; Arendelle Castle, home of the royal family; the Bay of Arendelle, featuring the small fishing boat that Anna fell on when she met <NAME>; Clock Tower where Anna danced with Prince Hans; and Friendship Fountain where Elsa uses her magical powers to freeze its water into beautiful snowflake ornamentations. Guests can also immerse themselves in the joy of the Summer Snow Day celebration by taking a Frozen Ever After journey to meet Elsa at her Ice Palace that is open to all for the first time, embarking on a wondrous adventure on Wandering Oaken’s Sliding Sleighs, enjoying a unique interactive play experience with Anna and Elsa at Playhouse in the Woods. To prepare for the Summer Snow Day celebration, guests can immerse in the enchanting festivities with a special outfit. Wear rosemaling-patterned clothes, add glittering ice-inspired face paint, and style the hair like Elsa or wear a cape like Anna to show the love for the royal sisters. No visit to Arendelle would be complete without indulging in its exquisite cuisine and exploring its charming shops, which are deeply rooted in Nordic history and culture. Do not miss Golden Crocus Inn and Bayside Wharf for a taste of local delicacies inspired by traditional Nordic dishes, grab a bite at Forest Fare or sweet treats at Northern Delights. Visit Tick Tock Toys & Collectibles and Traveling Traders for a delightful shopping experience, where guests can find the perfect souvenirs to bring home cherished memories. 
Complete the magical Arendelle journey by staying at one of the enchanting resort hotels and participate in themed recreation activities that will make the Arendelle experience truly unforgettable.Pushing the boundaries of creativity to bring “Frozen” to life World of Frozen is the next level of immersive storytelling like never before. The world-class expertise of Walt Disney Imagineering pushed the boundaries of creativity to bring Arendelle, the home of Anna and Elsa, to life through extensive research with inspiration from Norway. <NAME>, executive creative director at Walt Disney Imagineering, stated: “I am excited to see this project come to life. This idea started as a sketch seven years ago and has since transformed into a beautiful land that will enchant guests with magical experiences! The collaborative efforts across disciplines have made World of Frozen a reality, creating unforgettable memories that will last for a lifetime.” * Perfectly blends the unique natural landscape of Hong Kong - World of Frozen seamlessly integrates with Lantau Island's natural landscape and the design of North Mountain, providing guests with expansive views of both. The seamless transition between artificial landforms and natural surroundings embraces guests entirely in World of Frozen. * Integrates inspiration from Norway to create a must-visit travel destination - Inspired by Norway, the themed land features a town that allows guests to discover Arendelle. The scenery, architectural design, clothing, and cuisine are all infused with rich Norwegian elements, including “dragestil” (dragon-style) architectural style, Balestrand's asymmetrical architecture, and rosemaling floral decorative paintings. * Dances between reality and magic through enchanting music - The team worked with award-winning composer, <NAME>, to rearrange the iconic music from the movies for World of Frozen, with various tunes played in different locations throughout the land, to tell stories with strong emotional connections. The cinematic music builds the perfect ambience for guests to feel the story of “Frozen” with different senses. * Re-creates cinematic features with meticulous craftsmanship and extensive research - Disney Imagineers spent over three years in concept and design, working in partnership with artists and storytellers from Walt Disney Animation Studios, to accurately depict cinematic features in World of Frozen. The team analyzed ice textures, colors, and how these relate to Elsa's emotions to recreate her Ice Magic. Attention was paid to every detail, including various forms of ice and snow, and different materials were studied and experimented to truly represent the Ice Palace and icy ornaments. * Brings “Frozen” characters to life through advanced technology - Frozen Ever After at Hong Kong Disneyland features Walt Disney Imagineering’s most advanced, fully-electric Audio-Animatronics® figures, which bring to life characters like Anna, Elsa, and Kristoff with amazing realism. <NAME>, senior producer at Walt Disney Imagineering Asia, stated: “Bringing our characters to life, featuring our unique, state-of-the-art Audio-Animatronics® figures, is a thrilling aspect of this project. It is a great testament to the amazing talents at Walt Disney Imagineering who made the impossible possible. 
We share the same goal with Walt Disney Animation Studios to create incredible entertainment for people of all ages, using innovative technology to tell a compelling, heartfelt story like never before.” To promote the enchanting beauty of Arendelle, Hong Kong Disneyland Resort is delighted to announce today that renowned local travel guru <NAME> has been designated as the Arendelle travel ambassador for Hong Kong! With a wealth of travel experiences, Dodo will be able to share her enchanting journey in Arendelle and travel tips to the guests. Meanwhile, Olaf is appointed as the global ambassador by Anna and Elsa to extend a heartfelt invitation to all corners of the globe for the Summer Snow Day celebration in Arendelle!-ENDS- About Hong Kong Disneyland Resort Hong Kong Disneyland Resort offers unforgettable, culturally distinctive Disney experiences for guests of all ages and nationalities. Filled with your favourite Disney stories and characters, Hong Kong Disneyland offers guests the opportunity to explore seven diverse lands that are home to award-winning, unique attractions and entertainment. Complete your adventure with stays at the resort’s luxurious Disney hotels. The magic doesn’t end at our doorstep; as a dedicated member of the local community that cares deeply about societal wellbeing, Hong Kong Disneyland Resort spreads its magic through community service programs that help families in need, boost creativity among children and families, encourage the protection of the environment, and inspire healthier living. # Singapore & Malaysia - VTL Opens! Date: 2021-11-30 Categories: Tags: Finally! Very welcome and relieved news for both the travel industry and of course family and friends, with Malaysia and Singapore announcing VTL (Vaccinated Travel Lane) arrivals between both countries. Taking effect from today (Nov 29), both VTL Land and VTL Air arrivals are now available. The quarantine free entry by land applies to Malaysian and Singaporean citizens and permanent residents, and long-term pass holders who are employed or working in Singapore or Johor Baru, with 32 bus trips each day. The VTL by air launches with six designated services between Changi Airport and KLIA each day. This is the first VTL initiative for Malaysia, a very welcome announcement considering the close ties between the two countries. In lieu of quarantine, fully vaccinated arriving passengers will be required to undertake an On-arrival rapid antigen test (ART). Family reunions will no doubt abound, which will hopefully in turn impact the Travel Activity sector as celebrations in both destinations boost visits to local attractions and activities.According to the Straits Times, prior to the pandemic, the Singapore-Kuala Lumpur air route was the world’s busiest international air link, operated by eight carriers with an average of 82 flights a day. Welcome back neighbours!! # [WATCH] Phocuswright Conference 2021 - New Look of Tours and Activities Marketplace Date: 2021-11-23 Categories: Tags: In mid November 2021, The Phocuswright Conference returned with one of the largest global, in-person travel events held in Fort Lauderdale, Florida. The travel industry's most significant annual event attracted the cream of global tourism, and APAC's attraction, tour and activity sector was well represented by BeMyGuest CEO and Co-Founder, <NAME>.The subject: Digitizing and attracting investment, tours and activities has gained big momentum. 
Even in the past year, some specialists added to war chests while other companies recalibrated focus. How will the segment ride the pandemic-induced wave of tech acceleration into the next chapter? Will further alliances and consolidation be necessary to not only survive but thrive? Phocuswright Conference 2021 - The New Look of the Tours and Activities Marketplace Executive Round Table Panelists: <NAME>, Founder & President, <NAME>, CEO & Co-Founder, BeMyGuest <NAME>, CEO, Magpie Moderator <NAME>, Research Director, Phocuswright Inc. # Dubai Expo 2020 : OFFICIAL RESELLER! Date: 2021-09-22 Categories: Tags: No, it’s not a typo. Don’t let the ‘2020’ distract you. The much anticipated Dubai Expo 2020 was delayed like many other events due to the pandemic, but one of the greatest shows on earth is ready to roll, from 1 October, 2021, until 30 March, 2022. Now is the time to get your Dubai Expo 2020 tickets, and with BeMyGuest an Official Reseller, you can book Same Day, Multi Day, and unique Combo tickets featuring other popular attractions and experiences in Dubai. Contact your account manager, or log in to your Agent’s account on the Agents Marketplace. The global event held every five years celebrates innovation, humanitarian progression and achievements, and cooperative endeavours between businesses, science, technology and countries. Held for the first time in the Middle East, 192 nations will participate, with the event expected to attract up to 25 million visitors. ‘Connecting Minds, Creating the Future’ is the theme. Get tickets for your customers now! # BeMyGuest Integrates with Google for Tour and Activity Booking Links Date: 2021-08-27 Categories: Tags: We are excited to officially announce BeMyGuest’s integration with Google, enabling operators with the very real potential to grow attraction, tour and activity leads by increasing your visibility on Google - all without paying for advertisements and campaigns. Some of the media coverage of the announcement so far: TTG Asia - BeMyGuest integrates with Google for tours and activities booking links WebInTravel - BeMyGuest integrates with Google Travel Daily News Asia/Pacific - BeMyGuest integrates with Google for tours and activities booking links BeMyGuest’s role in the integration is to provide Google with the complex technology required of connecting Google Search users directly to thousands of tour and activity products, while ensuring that all technical requirements and connectivity standards are maintained at the highest level. The new ‘Google Things to do’ will feature ticket booking links on Google Search so users can compare ticket options for attractions, tours or activities. These are free ticket booking links, not paid ads, and will come from a mix of OTA feeds, and displayed for selection by users. BeMyGuest will power your ‘Official Site' button, completely whitelabelled, which has prominence at the top of this display, and conduct the entire integration to activate your ability to receive these bookings and regain control of setting your pricing online. You receive more direct traffic for no extra work or setup fees. BeMyGuest’s technology will provide all that’s required to ensure your Official Site remains up to date, and all in real time. We can work with you to ensure you visualise the best products, prices, and content, and facilitate the entire sales process from search to sale. 
We are currently selecting the operators to include in this initiative that we believe have the most promising opportunity for success. If you're interested in having your attraction, tour or activity integrated with Google, reach out to us now at <EMAIL>. We are more than happy to discuss how we can collaborate!
# api documentation

BeMyGuest API v2.0: the industry benchmark preferred by OTAs, designed for scalable distribution of attractions, tours & activities in APAC and globally.
# darkbird (Rust)
Struct darkbird::Database
===

```
pub struct Database { /* private fields */ }
```

Implementations
---

Unless noted otherwise, every generic method below shares the same bounds: `Doc: Serialize + DeserializeOwned + Clone + Send + 'static + Document` and `K: Serialize + DeserializeOwned + PartialOrd + Ord + PartialEq + Eq + Hash + Clone + Send + Sync + 'static`.

```
pub fn open(datastores: AnyMap) -> Database

pub async fn subscribe<K, Doc>(&self, sender: Sender<Event<K, Doc>>) -> Result<(), SessionResult>

pub async fn insert<K, Doc>(&self, key: K, doc: Doc) -> Result<(), SessionResult>

pub async fn remove<K, Doc>(&self, key: K) -> Result<(), SessionResult>

pub fn gets<'a, K, Doc>(&self, list: Vec<&K>) -> Result<Vec<Ref<'_, K, Doc>>, SessionResult>

pub fn range<K, Doc>(&self, field_name: &str, from: String, to: String) -> Result<Vec<Ref<'_, K, Doc>>, SessionResult>

pub fn lookup<K, Doc>(&self, key: &K) -> Result<Option<Ref<'_, K, Doc>>, SessionResult>

pub fn lookup_by_index<K, Doc>(&self, index_key: &str) -> Result<Option<Ref<'_, K, Doc>>, SessionResult>

pub fn lookup_by_tag<K, Doc>(&self, tag: &str) -> Result<Vec<Ref<'_, K, Doc>>, SessionResult>

pub fn fetch_view<K, Doc>(&self, view_name: &str) -> Result<Vec<Ref<'_, K, Doc>>, SessionResult>

pub fn search<K, Doc>(&self, text: String) -> Result<Vec<Ref<'_, K, Doc>>, SessionResult>

pub fn iter<K, Doc>(&self) -> Result<Iter<'_, K, Doc>, SessionResult>

pub fn iter_index<K, Doc>(&self) -> Result<Iter<'_, String, K>, SessionResult>

pub fn iter_tags<K, Doc>(&self) -> Result<Iter<'_, String, DashSet<K>>, SessionResult>
```

The following methods apply only to the redisstore engine; they relax the bounds to `Doc: Clone + Send + Sync + 'static` and `K: Clone + PartialOrd + Ord + PartialEq + Eq + Hash + Send + 'static`:

```
pub fn set<K, Doc>(&self, key: K, value: Doc, expire: Option<Duration>) -> Result<(), SessionResult>

pub fn get<K, Doc>(&self, key: &K) -> Result<Option<Arc<Doc>>, SessionResult>

pub fn del<K, Doc>(&self, key: &K) -> Result<(), SessionResult>

pub fn set_nx<K, Doc>(&self, key: K, value: Doc, expire: Option<Duration>) -> Result<bool, SessionResult>
```

Auto Trait Implementations
---

`!RefUnwindSafe`, `!Send`, `!Sync`, `Unpin`, `!UnwindSafe`

Struct darkbird::Options
===

```
pub struct Options<'a> { /* private fields */ }
```

Implementations
---

```
pub fn new(path: &'a str, storage_name: &'a str, total_page_size: usize, stype: StorageType, off_reporter: bool) -> Self
```

Trait Implementations
---

`Clone`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
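`Options` exposes no public fields, so a datastore's location, name, page size, storage type and reporter flag are all fixed through `Options::new`. A minimal sketch, assuming a local `./data` directory and a datastore named `users` (both values are made up):

```rust
use darkbird::{Options, StorageType};

fn main() {
    // All five parameters are required; the struct's fields are private.
    let opts = Options::new(
        "./data",                // path: directory for the persisted pages
        "users",                 // storage_name: identifies this datastore
        1000,                    // total_page_size (units not stated in this doc)
        StorageType::DiskCopies, // keep documents on disk as well as in RAM
        false,                   // off_reporter: presumably `true` disables the event reporter
    );
    let _ = opts; // typically handed to Storage::open or Schema::with_datastore
}
```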
Struct darkbird::PageProcessor
===

```
pub struct PageProcessor<'a, OldKey, OldDoc, NewKey, NewDoc> { /* private fields */ }
```

Implementations
---

Bounds: `OldKey: Serialize + DeserializeOwned + Hash + Eq + PartialEq`, `OldDoc: Serialize + DeserializeOwned`, `NewKey: Serialize + DeserializeOwned + Hash + Eq`, `NewDoc: Serialize + DeserializeOwned`.

```
pub fn new(
    root: &'a str,
    source_name: &'a str,
    source_total_page_size: usize,
    sync_name: Sync<'a>,
    vacuum: bool,
    handler: fn(RQuery<OldKey, OldDoc>) -> RQuery<NewKey, NewDoc>,
) -> Self

pub fn start(self) -> Result<(), String>
```

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`, each conditional on the corresponding trait holding for all four type parameters
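The handler is a plain `fn` that rewrites each replayed page entry, which makes `PageProcessor` usable as an offline migration tool. A sketch, assuming an existing store `users` under `./data` whose keys should change from `u64` to `String`; the `vacuum` flag's exact semantics are not stated in this extract, so its comment below is a guess:

```rust
use darkbird::{PageProcessor, RQuery, Sync};

// Rewrite every logged operation from the old key type to the new one.
fn migrate(q: RQuery<u64, String>) -> RQuery<String, String> {
    match q {
        RQuery::Insert(key, doc) => RQuery::Insert(key.to_string(), doc),
        RQuery::Remove(key) => RQuery::Remove(key.to_string()),
    }
}

fn main() {
    let processor = PageProcessor::new(
        "./data",              // root: where the source pages live (assumed layout)
        "users",               // source_name
        1000,                  // source_total_page_size: must match the source store
        Sync::New("users_v2"), // write the rewritten log into a new store
        true,                  // vacuum: presumably compacts redundant entries
        migrate,
    );
    processor.start().expect("page migration failed");
}
```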
Struct darkbird::Persistent
===

```
pub struct Persistent {
    pub db_session: DatabaseSession,
}
```

Fields
---

`db_session: DatabaseSession`

Implementations
---

```
pub async fn connect(db_name: DatabaseName, cfg_string: String) -> Result<Persistent, ()>

pub async fn copy_memtable_to_database<K, Doc, THandler>(&self, storage: Arc<Storage<K, Doc>>, handler: &THandler)
where
    THandler: Setter<K, Doc>,

pub async fn load_memtable_from_database<K, Doc, THandler>(&self, storage: Arc<Storage<K, Doc>>, handler: &THandler) -> Result<(), SessionResult>
where
    THandler: Getter<K, Doc>,
```

`K` and `Doc` carry the usual datastore bounds (`Doc: Serialize + DeserializeOwned + Clone + Send + 'static + Document`, `K: Serialize + DeserializeOwned + PartialOrd + Ord + PartialEq + Eq + Hash + Clone + Send + Sync + 'static`).

Auto Trait Implementations
---

`!RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `!UnwindSafe`
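`Persistent` bridges the in-memory store and an external ScyllaDB or Postgres backend: `copy_memtable_to_database` drains the memtable out through a `Setter`, and `load_memtable_from_database` rehydrates it through a `Getter`. The sketch below stays generic because the `Setter`/`Getter` trait methods, their import paths, and the `Document` trait's path are not reproduced in this extract; the connection string is also just an assumed example:

```rust
use std::hash::Hash;
use std::sync::Arc;

use darkbird::{DatabaseName, Persistent, Storage};
// NOTE: these paths are assumptions; the extract names the traits but not
// where they live in the crate.
use darkbird::document::Document;
use darkbird::{Getter, Setter};
use serde::{de::DeserializeOwned, Serialize};

async fn sync_with_postgres<K, Doc, H>(storage: Arc<Storage<K, Doc>>, handler: &H)
where
    H: Setter<K, Doc> + Getter<K, Doc>,
    Doc: Serialize + DeserializeOwned + Clone + Send + 'static + Document,
    K: Serialize + DeserializeOwned + Ord + Eq + Hash + Clone + Send + Sync + 'static,
{
    let persistent = Persistent::connect(
        DatabaseName::Postgres,
        "host=localhost user=darkbird dbname=darkbird".to_string(), // assumed format
    )
    .await
    .expect("could not connect"); // connect's error type is just ()

    // Push the current memtable into Postgres...
    persistent.copy_memtable_to_database(storage.clone(), handler).await;
    // ...or, on startup, pull it back in.
    persistent.load_memtable_from_database(storage, handler).await.unwrap();
}
```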
Struct darkbird::Schema
===

```
pub struct Schema { /* private fields */ }
```

Implementations
---

```
pub fn new() -> Schema

// Doc: Serialize + DeserializeOwned + Clone + Sync + Send + 'static + Document
// K: the usual key bounds
pub async fn with_datastore<'a, K, Doc>(self, opts: Options<'a>) -> Result<Schema, SchemaError>

// Doc: Clone + Sync + Send + 'static
// K: PartialOrd + Ord + PartialEq + Eq + Hash + Clone + Send + Sync + 'static
pub async fn with_redisstore<K, Doc>(self, storage_name: &str) -> Result<Schema, SchemaError>

pub fn build(self) -> Database
```

Auto Trait Implementations
---

`!RefUnwindSafe`, `!Send`, `!Sync`, `Unpin`, `!UnwindSafe`

Struct darkbird::Stop
===

```
pub struct Stop;
```

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
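`Schema` is the builder that assembles one `Database` out of several typed datastores; `build` finalizes it. A sketch, assuming a hypothetical `User` document type (its mandatory `Document` impl is omitted because the trait's methods are not shown in this extract) and a plain-`Clone` value for the redis-style store:

```rust
use darkbird::{Options, Schema, StorageType};
use serde::{Deserialize, Serialize};

#[derive(Clone, Serialize, Deserialize)]
struct User { name: String }
// NOTE: darkbird additionally requires `User: Document`; that impl is omitted
// here because the trait's required methods are not part of this extract.

#[derive(Clone)]
struct CacheEntry(String);

#[tokio::main] // assuming a tokio runtime, since the builder methods are async
async fn main() {
    let db = Schema::new()
        .with_datastore::<String, User>(Options::new(
            "./data", "users", 1000, StorageType::RamCopies, false,
        ))
        .await
        .map_err(|_| "with_datastore failed").unwrap()
        .with_redisstore::<String, CacheEntry>("sessions")
        .await
        .map_err(|_| "with_redisstore failed").unwrap()
        .build();

    // The Database front-end is addressed per datastore via its (K, Doc) types:
    db.insert("u-1".to_string(), User { name: "Alice".into() }).await.ok();
    db.set("s-1".to_string(), CacheEntry("token".into()), None).ok();
}
```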
Struct darkbird::Storage
===

```
pub struct Storage<K, Doc: Document> { /* private fields */ }
```

Implementations
---

Bounds: `Doc: Serialize + DeserializeOwned + Clone + Send + 'static + Document`, `K: Serialize + DeserializeOwned + PartialOrd + Ord + PartialEq + Eq + Hash + Clone + Send + Sync + 'static`.

```
pub async fn open<'a>(ops: Options<'a>) -> Result<Self, String>

// subscribe to the Reporter
pub async fn subscribe(&self, sender: Sender<Event<K, Doc>>) -> Result<(), SessionResult>

// insert into storage and persist to disk
pub async fn insert(&self, key: K, doc: Doc) -> Result<(), SessionResult>

// remove from storage and persist to disk
pub async fn remove(&self, key: K) -> Result<(), SessionResult>

// get several documents at once
pub fn gets(&self, list: Vec<&K>) -> Vec<Ref<'_, K, Doc>>

// fetch documents by range over a hash_index field
pub fn range(&self, field_name: &str, from: String, to: String) -> Vec<Ref<'_, K, Doc>>

// lookup by key
pub fn lookup(&self, key: &K) -> Option<Ref<'_, K, Doc>>

// lookup by hash_index
pub fn lookup_by_index(&self, index_key: &str) -> Option<Ref<'_, K, Doc>>

// lookup by tag
pub fn lookup_by_tag(&self, tag: &str) -> Vec<Ref<'_, K, Doc>>

// fetch a view
pub fn fetch_view(&self, view_name: &str) -> Vec<Ref<'_, K, Doc>>

// full-text search
pub fn search(&self, text: String) -> Vec<Ref<'_, K, Doc>>

// iterators (safe for concurrent mutation)
pub fn iter(&self) -> Iter<'_, K, Doc>
pub fn iter_index(&self) -> Iter<'_, String, K>
pub fn iter_tags(&self) -> Iter<'_, String, DashSet<K>>

pub fn collection_len(self) -> usize
```

Auto Trait Implementations
---

`!RefUnwindSafe`, `!UnwindSafe`, `Unpin`; `Send` when `Doc: Send` and `K: Send + Sync`; `Sync` when `Doc: Send + Sync` and `K: Send + Sync`
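`Storage` is the typed engine underneath `Database`: writes are async (they also hit the page log on disk), while reads are synchronous and return guard-style `Ref` handles, which look like dashmap references given the `DashSet` in `iter_tags`. A sketch, reusing a hypothetical `User` type whose required `Document` impl is again omitted:

```rust
use std::sync::Arc;

use darkbird::{Options, Storage, StorageType};
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, Serialize, Deserialize)]
struct User { name: String }
// NOTE: `User` must also implement darkbird's `Document` trait (omitted here;
// its methods are not reproduced in this extract).

#[tokio::main]
async fn main() {
    let storage: Arc<Storage<String, User>> = Arc::new(
        Storage::open(Options::new("./data", "users", 1000, StorageType::RamCopies, false))
            .await
            .expect("open failed"), // open() reports its error as a String
    );

    storage
        .insert("u-1".to_string(), User { name: "Alice".into() })
        .await
        .expect("insert failed");

    if let Some(user) = storage.lookup(&"u-1".to_string()) {
        // Assuming Ref behaves like dashmap's: value() borrows the stored doc.
        println!("{:?}", user.value());
    }

    for hit in storage.search("Alice".to_string()) {
        println!("full-text hit: {:?}", hit.value());
    }
}
```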
Enum darkbird::DatabaseName
===

```
pub enum DatabaseName {
    Scylla,
    Postgres,
}
```

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`

Enum darkbird::DatabaseSession
===

```
pub enum DatabaseSession {
    Scylla(Session),
    Postgres(Client),
}
```

Auto Trait Implementations
---

`!RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `!UnwindSafe`
Enum darkbird::Event
===

```
pub enum Event<K, Doc> {
    Query(RQuery<K, Doc>),
    Subscribed(Sender<Event<K, Doc>>),
}
```

Trait Implementations
---

`Clone` when `K: Clone, Doc: Clone`

Auto Trait Implementations
---

`!RefUnwindSafe`, `!UnwindSafe`; `Send` when `K, Doc: Send`; `Sync` when `K, Doc: Send + Sync`; `Unpin` when `K, Doc: Unpin`
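Every subscriber receives the mutation stream as `Event::Query` values, plus an `Event::Subscribed` notification carrying the sender of any later subscriber. A sketch of a consumer loop; it assumes the `Sender` in these signatures is tokio's mpsc sender (the extract does not say which channel type the crate uses) and reuses the hypothetical `User` type from the `Storage` sketch above:

```rust
use std::sync::Arc;

use darkbird::{Event, RQuery, Storage};

async fn watch(storage: Arc<Storage<String, User>>) {
    // Assumption: darkbird's Sender/Receiver are tokio mpsc channel halves.
    let (tx, mut rx) = tokio::sync::mpsc::channel::<Event<String, User>>(64);
    storage.subscribe(tx).await.expect("subscribe failed");

    while let Some(event) = rx.recv().await {
        match event {
            Event::Query(RQuery::Insert(key, _doc)) => println!("inserted {key}"),
            Event::Query(RQuery::Remove(key)) => println!("removed {key}"),
            Event::Subscribed(_peer) => println!("another subscriber joined"),
        }
    }
}
```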
Enum darkbird::Format
===

```
pub enum Format {
    Bson,
    Bincode,
}
```

Trait Implementations
---

`Clone`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
Enum darkbird::RQuery
===

```
pub enum RQuery<K, Doc> {
    Insert(K, Doc),
    Remove(K),
}
```

Implementations
---

```
pub fn from_raw(type_id: &'static str, key: K, doc: Option<Doc>) -> RQuery<K, Doc>

pub fn into_raw(self) -> (&'static str, K, Option<Doc>)
```

Trait Implementations
---

`Clone` when `K: Clone, Doc: Clone`; `Serialize` when `K: Serialize, Doc: Serialize`; `Deserialize` when `K: Deserialize<'de>, Doc: Deserialize<'de>`

Auto Trait Implementations
---

All auto traits (`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`) hold whenever they hold for both `K` and `Doc`
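`RQuery` is the unit that both the event stream and the page log speak; `into_raw`/`from_raw` flatten it to a `(type_id, key, optional doc)` triple and back. A small round-trip sketch using only the documented signatures (the concrete `type_id` strings are internal and not listed in this extract):

```rust
use darkbird::RQuery;

fn main() {
    let q: RQuery<String, String> = RQuery::Insert("k1".into(), "v1".into());

    // Flatten to the raw triple used by the page log...
    let (type_id, key, doc) = q.into_raw();

    // ...and reconstruct the query from it.
    let rebuilt: RQuery<String, String> = RQuery::from_raw(type_id, key, doc);
    match rebuilt {
        RQuery::Insert(k, v) => println!("insert {k} = {v}"),
        RQuery::Remove(k) => println!("remove {k}"),
    }
}
```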
Enum darkbird::SessionResult
===

```
pub enum SessionResult {
    Closed,
    Timeout,
    Full,
    NoResponse,
    DataStoreNotFound,
    UnImplement,
    Err(StatusResult),
}
```

Trait Implementations
---

`Debug`, `ToString` (via `to_string`)

Auto Trait Implementations
---

`!RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `!UnwindSafe`

Enum darkbird::StorageType
===

```
pub enum StorageType {
    RamCopies,
    DiskCopies,
}
```

Trait Implementations
---

`Clone`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
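`SessionResult` is the error half of nearly every fallible call in the crate, and it is not a std `Error`: this extract only shows `Debug` and `to_string`. Callers therefore match on the variants directly; a sketch:

```rust
use darkbird::SessionResult;

fn report(result: Result<(), SessionResult>) {
    match result {
        Ok(()) => {}
        Err(SessionResult::Timeout) => eprintln!("session timed out"),
        Err(SessionResult::DataStoreNotFound) => {
            eprintln!("no datastore registered for this (K, Doc) pair")
        }
        Err(other) => eprintln!("session error: {}", other.to_string()),
    }
}
```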
Enum darkbird::Sync
===

```
pub enum Sync<'a> {
    Overwrite,
    New(&'a str),
}
```

Used as the `sync_name` argument of `PageProcessor::new`.

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`